Afford Anything - Why Nice People Struggle with Money, with Dr. Sandra Matz, Professor at Columbia Business School
Episode Date: July 25, 2025#628: You follow all the right personal finance advice. You know you should save more, invest regularly, and build an emergency fund. So why does it feel so much harder for some people than others? ... The answer lies in your personality. Dr. Sandra Matz, a professor at Columbia Business School, studies the intersection of psychology and money management. She joins us to explain why one-size-fits-all financial advice often fails. Her research found that agreeable people — those who are caring, empathetic, and put others first — have a harder time saving money. The solution isn't better budgeting apps or stricter rules. It's reframing financial goals to match your personality type. For example, agreeable people save more effectively when they view their emergency fund as protection for loved ones or a way to help others during tough times. By contrast, competitive personalities respond better to framing savings as getting ahead in life. This personalized approach extends beyond personality assessments. Algorithms can now predict your financial behavior using digital footprints — social media activity, spending patterns, even smartphone usage. With just 300 Facebook likes, artificial intelligence understands your money habits better than your spouse does. The conversation also covers the darker implications. Companies exploit these same psychological insights to manipulate spending decisions. Dr. Matz discusses data cooperatives as a solution — member-owned entities where people collectively benefit from their shared information. We dive into negotiation strategies for salary increases, breaking out of financial echo chambers, and using AI to optimize your money management without losing your decision-making autonomy. Resources Mentioned: Dr. Matz's book "Mind Masters" sandramatz.com Timestamps: Note: Timestamps will vary on individual listening devices based on dynamic advertising run times. 
The provided timestamps are approximate and may be several minutes off due to changing ad lengths. (0:00) Big data meets financial psychology (3:34) Psychology and computer science intersection (6:26) Algorithms vs spouses at predicting personality (7:21) Curly fries predict intelligence (9:01) Self-talk reveals emotional distress (11:04) Nice people struggle with money (14:03) Personality-based savings strategies (22:21) Privacy versus convenience tradeoffs (24:36) Data privacy management burden (26:28) Organ donation defaults (30:40) Data cooperatives concept (36:01) ChatGPT for financial advice (40:04) AI as unlimited intern (44:06) Breaking financial echo chambers (53:14) AI negotiation training For more information, visit the show notes at https://affordanything.com/episode628 Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
We live in a world in which algorithms know more about us than our own families, and while that
can be terrifying, it also gives us the opportunity to learn how to harness big data to make
better financial choices. Today we're joined by Dr. Sandra Matz, a business professor at Columbia
Business School who serves as the director for the Center of Advanced Technology and Human Performance.
Her research blends psychology with computer science. She studies both human behavior and big data
in terms of how we can bridge that gap between intention and action.
Welcome to the Afford Anything Podcast, the show that understands you can afford anything, but not everything.
Every choice carries a trade-off.
This show covers five pillars, financial psychology, increasing your income, investing, real estate and entrepreneurship.
That's double-I FIRE.
Today's episode covers the letter F, financial psychology.
Welcome, Dr. Matz.
Thank you so much for having me.
Thank you for being here.
You know, we hear a lot in the news about big data, and it sounds like a crazy concept, but a little bit removed from our day-to-day lives if we're not in a relevant industry.
Big data and some of the ideas that come from it can actually be applicable to each of us as individuals when we're trying to save more money, work out more, sleep better, have healthier and wealthier lives.
How is that?
Yeah, it all comes back to this idea that there's someone who is generating all of this
data that we can look at. It's a person who might post something about their lives on social media,
who swipes their credit card to buy a certain product, who in a way just carries around their
smartphone with them 24-7, leaving all of these traces about where they go, who they meet.
The beauty of big data is really trying to understand who's the person behind them. What are some
of their preferences, their routines, their needs, their motivations? And you can imagine that
once you understand someone at this level, I kind of see what is going on in their lives. I maybe have a sense of where they want to be in the future.
Maybe I get a sense of what they're struggling with, right? You mentioned savings. You mentioned
eating healthier. Can I use these insights into their psychology to just help them
accomplish the goals that they set for themselves? And I think of it almost as like your friend
that really knows you well, that understands, here's what you want to try to accomplish,
but also here are the things that are difficult for you and then gives you the most personalized,
helpful, meaningful advice because they know you really well.
But it's really trying to understand a person behind all of the data that is generated.
You mentioned your friend who gives you personalized advice.
What that hints at is that our smartphones will soon be able to give us hyper-personalized, hyper-individualized advice that can help us save more or reach any other goal that we want.
So we're going to dig more into that in a moment.
But first, can you establish the field?
because often when we hear about big data, we hear one of two narratives.
We either hear this very pessimistic black mirror type of warning or we hear this hyper-optimistic,
this is our salvation, utopia.
Can you establish the field of big data?
What is it?
And where are we right now and where are we going?
Yeah, no, absolutely.
And I think it probably depends a little bit on where in this space of big data you play.
There's people who are just interested in optimizing algorithms to make better predictions of what someone is going to buy next.
The domain that I'm predominantly interested in is this intersection of data and psychology.
So I always think of myself as like this person playing in this messy world of human desires, preferences, behaviors, needs,
and then merging it and marrying it with this somewhat more cold and structured world of data and algorithms and AI.
And so what I'm really trying to do is to see, can we use some
of these tools that computer science provides. And can we use it in combination with all of the data
that we generate to get insights into people's psychology? And those can be really intimate insights.
So if I can get access to, say, your social media posts, your credit card spending, the sensor
data that gets captured by your smartphone, I can make predictions about, say, your political
ideology, your sexual orientation, your personality traits, your political values. And it's very obvious that
insights into some of these really intimate characteristics can give others a lot of power
over your behavior. Think of children. Children are amazing. Kids figure out really quickly
how to talk to their mom to get candy as opposed to their dad. And they do that by understanding,
okay, where is mom coming from? How can I push her buttons and where is dad coming from? And I think
of algorithms in somewhat similar ways. The moment that I can tap into your psychology, I can also
potentially change your behavior. And now back to this black and white narrative that we oftentimes
see in the media, it's obvious how this could be abused. If I can understand your desires,
your preferences, maybe some of your vulnerabilities, some of the weaknesses that you have,
I could play into these weaknesses to get you to do something that you don't want to do.
But on the other hand, there's so many parts in life or so many aspects of life that we struggle
with, right, where we have the best intentions, whether that's saving more, that's
exercising more, eating more healthily, we have the best intentions. But life is hard. And
oftentimes, even with the best intentions, we don't always manage to live up to our own
expectations. So understanding, again, where we're coming from, what we might be motivated by,
how I can help you get to the point where you want to be, could also be hugely helpful.
You mentioned the analogy of a child who understands their parents, and one of the
interesting findings is that big data can actually understand us better than our own families.
In fact, with 300 Facebook likes, it can understand us better than our own spouse.
Yeah.
This was a research study that was done almost 10 years ago now.
Just looking at the Facebook pages that people follow,
we call our spouses, our other halves, right?
And they, similar to family members, go through life with us
in many different situations, different moments.
They see a lot of us, both in terms of how we want to be seen in public,
but also sometimes these more intimate scenes where we might not be our ideal selves.
And yet still an algorithm with just access to 300 of your Facebook likes
can make more accurate predictions of how you think of yourself in terms of personality
than those people who know us intimately.
And for me, what I'm coming back to is Google searches.
The question is like, how could an algorithm be so good?
Like, what does it know that people in our environment don't know?
And Google searches is like this one example that I think most people can relate to.
We ask Google questions that we don't feel comfortable asking our best friends,
sometimes even our spouses, right?
So all of the traces that are out there, they just tell so much about who you are because an algorithm can map it against everybody else.
So it's not just that I sample maybe my circle of friends, the people that I know at work, but the algorithm sees the data from millions of people and can really quickly figure out, well, if you have these patterns and you like Lady Gaga, maybe you're more extroverted.
If you like science, maybe you're open-minded and intellectually curious.
So it's very good at making these associations.
Right. And sometimes the associations are not even counterintuitive; they just seem to come out of thin air. So if you like curly fries, you tend to be more intelligent. Yeah. So this is one of the findings that was published in one of the first papers. That's actually the beauty and the fascinating part of this research, that sometimes the findings are quite obvious. Right. So if you talk about having an amazing time on weekends, socializing with your friends, yeah, you're probably more extroverted. And sometimes these findings are really surprising,
and we have no idea where they're coming from. Like liking curly fries,
why would that be associated with high IQ? And we don't know. So there's this distinction between
understanding and predicting. So even though we don't fully understand why curly fries might be
predictive of intelligence, at least in the moment when we observe these relationships,
they're still predictive. So if I met a person on the street in this very moment and I know
nothing about them other than them liking curly fries, at least based on the data, my assumption is
they might be more intelligent. Now, the interesting part is with these associations that are somewhat
not intuitive or sometimes even counterintuitive, they might kind of vanish really quickly over time,
right? It could have just been that there's a group of maybe Columbia students, Harvard students,
that all decided to like curly fries for whatever reason. But then the moment that it gets publicized,
and this was publicized in the media, it kind of came up in all of the shows, it probably very
quickly lost its predictive power. But for me, that's the interesting part.
Sometimes it's obvious. Sometimes it's surprising and we can't make sense of it. Sometimes it's also surprising and we can actually make sense of it in retrospect. My favorite example for this one is that there was this conference full of psychologists, right? So people who think about this topic all the time. And the speaker was talking about a finding that he saw in the data. So he was just posing a question to the audience: I kind of found something interesting about the use of first-person pronouns, so references to the self like I,
me, myself, what do you think in terms of psychological traits this could be related to? And everybody
in this room, and all of the psychologists were like, it's probably going to be narcissism. Like,
if you talk about yourself all the time, you're probably somewhat more narcissistic. And it turns out
that that's actually not the case. It's a sign of emotional distress. Everybody was surprised,
like, why does that make sense? But then when you take a step back and you think about it,
it's like, yeah, the last time that you felt really down and blue and maybe somewhat depressed,
you were probably not thinking about solving the world's biggest problems, right?
What you were thinking about is yourself.
Like, why am I feeling so bad?
Like, am I ever going to get better?
How do I get better?
And this inner monologue that we have with ourselves just creeps into the language that we use.
So there's also things that we now know about human behavior that were
just born out of the data, that we didn't know before.
Yeah.
And that was a particularly interesting finding because when I read it, I realized that there have been times in the past where I have masked emotional distress by virtue of talking about the outside world.
So if I'm going through a tough period, then I'll be like, hey, the stock market this week, blah, blah, blah, right?
You know, I'll talk about the economy.
I'll talk about the markets.
I'll talk about everything other than myself.
Myself, yeah.
Yeah.
So in this case, you're actually really good at masking it.
You figured out that if I talk a lot about myself, maybe others are also going to pick up on that.
But in most cases, we just don't have the capacity.
So if you're really feeling completely depressed, you probably don't have the cognitive bandwidth to now fully monitor yourself and say, okay, I should be talking about the outside world because I don't want people poking around in my own emotional life.
Right. That's a great example of something that is not necessarily intuitive, but it makes sense in retrospect.
Another example of that was your finding that a high degree of agreeableness actually correlates
with being bad with money.
Yeah, that was one of the somewhat surprising findings of some of my work,
where we were interested in why it is that people mismanage their money.
So if you think about low levels of savings, high levels of debt, high default rates,
we were just interested in what are some of the psychological dispositions that might
make it more or less likely that you actually struggle with saving and managing your finances properly.
And what we found somewhat counterintuitively to me at first was that people who score high on
agreeableness. So those are the nice guys. And people who are caring, they're trusting, they're
empathetic. Those are usually the people that you want to have as friends and that whole society
together in a way. But they also seem to have a harder time managing their finances. And when we
tried to probe the mechanism and see why this might be the case, we actually looked at a couple of
different things. And my hope was that maybe it's just that they're not as good at negotiating,
right? Maybe it's because they're so nice. They want to make sure that the other side leaves the
table happy. And maybe that's what's going on. And that would have been very easy to train.
So I teach in a business school and I teach negotiations. I'm like, if that's the mechanism,
I can help those nice guys to do better. But it turned out what was actually driving this effect
was agreeable people saying that they simply didn't care as much about money. And to me, that was
initially a little bit of a downer,
because it almost felt like, well, I don't want
to make it so that all they care about now is money, right?
I don't want to run an intervention that says
you should be caring a lot more about money,
because it felt somewhat icky.
But then the more I thought about it,
the more I felt like, no, this is actually
a complete fallacy.
I think the way that we think about money
and social relationships in society,
and also how we talk about it, is, well,
either you care about people
or you care about money.
And in a way, that's not true. You can care about money and you can still care about people. And especially
if you're responsible for other people in your life that you love, right, if you're a parent,
if you're the sole breadwinner in a family, well, you mismanaging your money also can have
dire consequences for the people you love. So it almost felt like this false dichotomy of,
yeah, no, people can care about both, and maybe they should care about money if they care about
their loved ones. But that kind of prompted this entire idea to really think about, well, if
agreeable people are struggling with savings, how do I actually help them?
Instead of saying, well, you should be putting money in your bank account,
what is it about their psychology and their motivation that I can tap into
so that they actually find it easier?
Right.
Where that would lead me would be the thought that there's probably some cohort of people,
maybe those who score lower on agreeableness,
encouraging them to save money so that they themselves will have a more secure future
would be a resonant argument.
And there's a separate cohort of people,
probably those who score higher on agreeableness,
for whom save money so that you can have a positive impact
on not just your immediate family,
but your friends, your community, animals,
like all of the people and animals around you.
You can make that positive difference in your community.
And that would be more resonant.
You mentioned animals because we know that agreeable people
also spend more money on pets.
So you're spot on.
But that's exactly the idea that we had.
It's like, how do we frame savings in a way that resonates with people?
And for the people low in agreeableness, that's probably just getting ahead in life.
So those are the people who are more critical and competitive.
So putting an extra dollar to the side just means that they're getting ahead of the game.
Whereas for agreeable people, what you can highlight is, well, you putting some money to the side right now is making sure that your loved ones are being protected now and in the future.
and that's what we tested in some of our studies,
where we actually showed that playing into the psychology of people,
tapping into these motivations, works,
again, the same way that a benevolent advisor,
like a financial advisor, would do when they get to know you more intimately.
So I grew up in a very small town, a tiny village,
and my mom used to work at a bank, right?
And what she describes is that they knew all of the customers.
They knew their families.
They knew exactly what people were motivated by.
They knew exactly what people were
trying to accomplish in their lives. And to some extent, in an offline context, in any type
of interpersonal conversation, we do some of this very instinctively and intuitively. So you don't
give the same savings advice to everybody who walks into the bank. You don't talk to them in the same
way. You don't tap into the same motivations. And that's what we try to replicate at scale with people
that we don't get to see come through face to face, but we can actually interact with through technology.
Right. Yeah. So it's very much of a know your audience, right? In real life, you are going to talk to different people in different ways, depending on where they're coming from. You meet them where they are. And so big data from what I'm hearing is essentially the algorithmic version of doing that. Yes. For better or worse.
understanding where someone is coming from and playing into these motivations can help them go in the direction that they wanted to go in anyway and just had a hard time doing in the context of saving.
It could also certainly push them off the beaten track and get them to do something that they don't want to do.
So that's the darker side.
Right.
So then what if, let's say somebody wants to use this for ill, somebody wants to use this to convince people to put their money into a big pump and dump scheme or to put their money into some multi-level marketing program?
I mean, how do we as individuals deal with these constant opportunities and threats?
That's the question that I've been grappling with really for the last 10, 15 years,
because there really are these opportunities and these risks.
And it's oftentimes really difficult for individuals to manage that all by themselves.
So, first of all, in many cases, it's completely opaque and not transparent at all.
So in many cases, you don't even know what someone might be targeting you based on
because it's all hidden, and it's kind of hidden between algorithms that are optimizing for attention.
It's hidden behind the fact that advertisers can just pop in some of these dimensions about who you are to target you,
but they're not visible to us necessarily.
I teach this class on the ethics of data in the business school,
and one of the big questions that we talk about in the context of big data is,
well, should you just be giving up on your privacy?
Are we just at the point where you can't get it back and it's too hard?
And many of my students then kind of raise their hand and say, like, but I get all of these perks and benefits.
So I don't really care about my data.
I have nothing to hide.
That's a pretty risky gamble.
First of all, you don't know who might be wanting to abuse your data tomorrow.
So it could be that you're just currently in a really good spot and you're somewhat privileged that you don't have to worry about your data being out there.
That might change entirely tomorrow.
I think that by our data being out there, and other people being able to use it to understand our preferences and needs,
we're just giving up some of the agency that we have. The ability to make our own choices
in life, for me, that's a really, really big question. And then the follow-up question is like,
how do we manage that?
What is the question that we're not asking?
Because the obvious follow-up question is how do we manage that?
But what are the questions, what are the unknown unknowns,
what are the questions that we're not asking that we should be?
One of the big questions that comes up,
and it gets us a little bit already into this world of how we should manage it.
I think currently the question that we're asking is,
how do we give more control to individuals, right?
Because it's almost a logical next step.
If you say, well, we don't have control, we don't understand it,
the answer seems to be, well, let's just make the process more transparent.
So let's just help people understand what's happening with their data, and let's give them more control.
So they can say, well, this is a use case that I appreciate because you're helping me save more,
and here's a use case that I don't appreciate because you're just trying to reach deeper into my pocket
and get me to spend more on something that I don't like.
And this is maybe even the best case scenario.
There are certainly even more nefarious use cases.
But that seems to be the current conversation, both in terms of the public discourse,
but also in terms of regulation.
Regulation, oftentimes, if you look to Europe, if you look to California,
they're very heavily focused on transparency and control.
And I think the question here that we're not really asking,
but should be asking, is, are people equipped to take on these responsibilities?
If I say, well, I'm just going to explain to you what's going to happen with your data,
and I put you in charge of managing it, can you actually take on that responsibility?
And I think my take on this is probably not, and not because of you
personally, or because I think people can't understand how the world works,
but because the world of technology moves so fast.
So I do this for a living and I think about this pretty much 24-7,
and I have a really hard time keeping up with technology and the way the data can be used,
and the way the data is being collected in different parts of the world,
in different places by different companies.
And even if you fully understood the potential of data,
it would be a full-time job to effectively manage it.
If I wanted to fully understand how all of the products and services I'm using, use my data and then go through the process of saying, okay, here's what I like about this, here's what I don't like about it, here's what I want you to do with the data, here's what I don't want you to do.
That would be a full-time job.
That would be the only thing that I'm doing.
I could say goodbye to all of my research.
I could say goodbye to all of my friends to family time.
That would be the only thing that I'm doing.
And I don't think we can realistically expect people to take on that burden.
And we currently frame it as a right.
You have the right to understand your data and you have a right to manage it.
But the real question that I think we're not asking is, isn't it just much more of a responsibility?
And people are not really equipped to take it on.
So that's just a conversation that I would love to see a bit more.
But would the substitute be paternalism?
To some extent, there's different ways in which you can think about this.
I would probably say libertarian paternalism, in the sense that I would try to not exclude access to certain types of information,
products and so on, but protect people's privacy and their data by default, right?
So one of the big topics when it comes to paternalism, when it comes to nudging and behavior
change, is essentially the fact that we, as humans, are somewhat lazy, right?
And we have a limited amount of time.
We have a limited amount of cognitive resources.
So whatever is the default, whatever is the baseline, is what most people are going to
stick with.
And there's fantastic research in different domains.
The one that resonates with people and that sticks is organ donations.
So we know that there's countries, for example, where you're automatically opted in to an organ donation registry.
So you're opted in.
Nobody's forcing you.
You can always opt out.
But you're kind of enrolled automatically.
And then there's other countries like the US, for example, where you have to go, and it's hard.
You have to go to the DMV.
You have to actively say you want to be part of that.
And what's interesting is that most people across the board, across different countries, agree that being part of
an organ donation registry is actually the right thing to do. It's a good thing to do. So the
intention and motivation are there. Now, if you had to make an educated guess of the percentage
of people enrolled in these different countries, you can imagine there's a huge gap. So for the
countries where you actively have to enroll to be an organ donor, it hovers around 10% of the
population that actually turns their intention into action and signs up. For the countries where you have
to opt out, that number is hovering around 97, 98% of the population. So almost everybody
is enrolled. And the only difference there is what was the default? You were enrolled by default or
you were not enrolled by default. And you can translate the same principle to the world of data.
Right. So in the world of data, again, most people would probably say, yeah, in a world where I could
have it all, I would probably want my data to be protected somewhat. But now the current default is,
you have to opt out of all of the tracking that is happening.
And nobody has the time to do that.
Nobody has the time to really go through all of the specific terms and conditions,
understand the legalese, and then actively manage it.
So there is a world where, kind of coming back to this idea of paternalism,
we just try to change the default such that if you don't do anything,
your data is protected.
And now companies actually have to convince you that by using your data,
they're able to offer much better services and much better products.
Right now, it's lip service most of the time.
So it's like this comment like, well, we need your data to make the product better.
But there's no way for you to actually know if that's true or not,
because most of the time you can't even use the product
without consenting to all of your data being sampled and being extracted.
The idea is that changing the default benefits consumers directly
because it protects their privacy,
but it also in a way should lead to more innovation and better products
because now companies will only get the data if they actually live up to their promises and make their products better.
Otherwise, you're not going to say yes to granting them access.
Right. So create an incentive for why I should opt in.
We see the same opt in and opt out with 401k enrollment.
Previously, 401(k) enrollment was opt-in, and far fewer people were enrolled in 401(k)s.
And then there was a big cultural shift.
And now in many major workplaces, the default is that you're enrolled. You
have to actively opt out, and that has dramatically increased 401(k) participation.
And it's such a great example for helping you accomplish what you want to do anyway, right?
I think most people would say, no, this actually makes sense.
This is something that is helping me.
I just didn't have the bandwidth with all like kind of the push and pulls of my daily life,
whether it's kids running around at home, or whether it's like stress at work.
That was just not top of mind, and I never made the time to do it.
There's a really nice analogy that a friend of mine mentioned at some point that I like
in this context. So like if you want to change behavior, it's like launching a rocket to space.
One thing that you need is thrust. So essentially that's the motivation part. So you need to be
motivated. You need to have an incentive to change. Like in the 401(k) case, most people
would say, yeah, it's a good thing. But you also need to reduce friction. And I think that's where
the defaults come in. Right. So even with the strongest thrust and the strongest motivation,
if the process is really, really difficult, you're still never going to accomplish it.
And the same is true in the data world, right? Even when most people say, no, I would love to protect my data somehow, the process is currently set up in a way that makes it so, so hard that you never get that spaceship off the ground, because there's so much friction in the process.
So let's go back then to the previous question, the one that you asked a few minutes ago, which is how do we manage this as individuals? How do we manage all of this?
I think this is actually now building on your previous question.
Like if we assume that we can't do it alone,
if we assume that transparency and control are somewhat necessary,
but they're not sufficient to make sure that you manage your data properly.
For me, the one thing that I've been thinking a lot about is
how do we actually support people with a competent crew and community
that in a way has similar interests in how they use data,
but is also now a collective that can make much better decisions
and maybe hire experts.
To give you just one example, expecting moms,
when you're expecting a baby and you're pregnant,
it's a terrifying time because there's so many unknowns
and you kind of see your doctor maybe every two weeks, every four weeks,
and you get like a quick thumbs up, thumbs down,
like the baby is doing well or not based on a quick ultrasound.
But what you really want is almost like a day-to-day assessment:
based on my genetics, based on my medical history,
based on my lifestyle, based on everything that you know about me, right?
Take biometrics, take whatever you want.
I want advice that's personalized to here's what you should be doing in terms of nutrition,
in terms of exercise, whatever it is, and to make sure that you're healthy and your baby is healthy.
Maybe track data along the way to see if you're on track or not.
Now, to get to this point, we need to pool data, right?
Even if I get access to all of my genetics and my medical history, it doesn't tell me anything
because I can't map it against, like, healthy pregnancies or unhealthy pregnancies.
So if we wanted to get these insights, what we would need is to have a large pool of data
where women come together and share that data.
Now, I don't want a pharma company to have all of that data.
I don't want any central entity, like a third party, to have access to that data.
Because, again, it's like super intimate.
If you think about genetic data, if you think about medical histories.
So it really needs to be someone that you trust.
And there is this idea of data trusts or data co-ops, which are member-owned entities
that actually come together and they say, well, we're kind of all expecting moms.
And I want to be part of that member-owned entity, which means I actually have a say over how that
company is run. And it's similar to how we do this in the financial world, right? So banks have
these fiduciary responsibilities to their customers. The co-op now has fiduciary
responsibility. So they're legally obligated to act in the best interest of their members. And now,
because you have all of these moms coming together, first of all, the insights that you can generate
are far superior to anything you could do by yourself,
because now you can, again, see, here's how this plays out
for maybe different social demographics,
different lifestyles, different genetic histories,
but you can also benefit immediately, right?
So it's not that you hand your data to a company,
and maybe you can then purchase their product in 10 years' time
to see what you should be doing or should have done 10 years ago.
Instead, you immediately benefit from these data.
And for me, it's this idea that, yeah, I by myself,
first of all, my data itself is not as valuable,
but I also don't know exactly how to manage it.
I don't know how I should be storing and securing my genetic data in a way that doesn't leak
or isn't the victim of a security breach.
But if we come together, now we actually have the resources to say, let's hire experts.
Let's hire experts whose sole responsibility is to try and figure out how do we both protect your data,
but also maximize the value that you can get from that.
And for me, that's a totally different way of thinking about data, right?
Because it's not saying, I'm just going to put you in charge.
It's like, no, there's so many people who have the same incentives, the same ideas of what could be done with their data,
and we just need to get them together so that they can benefit from the strength of the collective,
whether that's, again, just the insights or the fact that you can now have someone manage and read through all of the terms and conditions
and think through the legalese on your behalf.
It's quite onerous to put that responsibility on any one given individual.
It is nice to have the right to do that, but with rights come responsibilities.
And so both are simultaneously true:
nice to have the right, burdensome to have the responsibility.
But if you have support in doing that,
then it really becomes a right.
So you now have someone who's been trained to do that,
who has the time, that's their full-time job,
and they're doing it with your best interest in mind.
So they kind of figure out what the collective wants.
And there are already examples of these data co-ops.
My favorite one is in Switzerland.
It's called MIDATA.
And they're in the medical space.
So they are thinking about these rare diseases that we still don't understand really well.
So multiple sclerosis is one of those diseases that is just so complicated, because it's determined by your genetics, by the environment, by your medical history.
And what they do is they say, well, the way it currently works is, if you suffer from MS, the best outcome you can hope for is that a pharma company takes your data, does some R&D, and tries to understand the disease better.
And now, in the best-case scenario, maybe in 15 years' time, you can buy the drugs that they develop based on your data for millions of dollars, right?
Worst case, you never benefit from that data being out there at all.
And what MIDATA does is, again, similar to the expecting-mom example, it says, let's get people together who suffer from MS.
Also, let's get people who don't so that we can have a comparison.
And we, as an entity that's, again, member-owned, we analyze the data.
So we try to extract the most insights, and it's not just that we try to understand the disease at like the meta level.
We also understand you specifically.
So we can get insights and send them to your doctor and say, here's something that we learned based on the tracking of symptoms, maybe mapping against what the symptoms of other people look like.
And we make recommendations to the doctor of like, here's how they should be treating you.
And now you have almost this full feedback loop, where the doctor can then go back to the data co-op and say, hey, here's something that worked for that patient,
here's something that didn't work.
And it's a completely different setup.
So the patient benefits immediately without having to spend those millions on the drugs.
If anything, oftentimes these data co-ops allow you to monetize some of the data.
So if there is a pharma company that the data co-op trusts and they think could be helpful
in understanding and developing better treatments, well, then they can have deals with that company,
because now you suddenly have 300 patients, 500 patients, and your bargaining power
with a pharma company for the data grows exponentially, right?
Because the data of 500 people is just so much more valuable than the data of one individual.
What I'm hearing from you is how people collectively can come together to use data in a way that better advantages groups of individuals.
But what about any one given individual who's listening to this right now who wants to save more money?
Say you're extroverted, you're agreeable, you're male, you're between
the ages of 40 and 50. So you occupy all of these different cohorts. You work in manufacturing.
You live in Michigan. You have all of these disparate cohorts that you belong to. And all of them have
some certain attributes, some of which, broadly speaking, tend to often be overlapping and some of
which are not, you know? Maybe you're also super into ballet in addition to everything I've just named,
right? So given the mosaic
of who a person is, how can any individual who's listening to this figure out a way to make it all
work for them? I mean, that's really where these large language models, the likes of ChatGPT,
Claude, Gemini, really shine, and they've entirely transformed this space, because it used to be
that these psychological insights required, first of all, a lot of data to build these models. Then they
needed someone to create a model, right? A model that says, I can predict something
about your psychology based on the data that I find about you.
Large language models make these insights accessible for anyone.
So you can ask ChatGPT and say, look, here's what I want you to know about me.
And in a way, that also gives you control because you control what data you want to use as an input.
You can say, here's my social demographics, here's my preferences.
Here's something else that you should know.
I'm a little bit neurotic.
So in your advice, please take that into account and factor it in somewhat heavily.
Or: I really care about my dependents, so as you're giving me financial advice,
or you try to help me change my life in whatever way you're hoping to, make sure that you
keep my loved ones in mind. And you can just ask it and say, well, what would you recommend?
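The prompting pattern she describes, front-loading self-described traits so the model can tailor its advice, can be sketched in a few lines of Python. The function and field names here are illustrative, not from any real tool or API.

```python
def build_persona_prompt(traits, request):
    """Assemble a prompt that front-loads self-described traits so a
    large language model can tailor its advice to them."""
    # Join the traits into a single context sentence, e.g.
    # "personality: a little bit neurotic; priorities: ..."
    context = "; ".join(f"{key}: {value}" for key, value in traits.items())
    return (
        f"Here's what I want you to know about me: {context}. "
        f"Taking all of that into account, {request}"
    )

prompt = build_persona_prompt(
    {"personality": "a little bit neurotic, very agreeable",
     "priorities": "protecting my dependents and loved ones"},
    "what would you recommend for building an emergency fund?",
)
print(prompt)
```

The point, as she notes, is that you control the input: only the traits you choose to state end up in the prompt.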
And what's remarkable about these large language models is that they're incredibly good at
simulating different personas, right? They've read the entire internet. They've essentially digested
all of the human narratives that we've created over the years,
in terms of stories, in terms of news articles, in terms of blogs and really anything that's
out there. And they're really good at putting themselves in the shoes of someone of specific
traits. I, for example, use it all the time when I travel. It used to be so time-consuming
to say, okay, I'm going to go to Barcelona and I have certain preferences. I want to kind of
stay in a hip neighborhood that has lots of good coffee shops. I also want a lot of playgrounds,
so I can take my son there. And it was really hard to find.
And now I can just tell Open AI and say, okay, here's all of the factors.
Think of me as like mid-30s, having a kid, husband is traveling with me, I'm into food,
I like these hip neighborhoods, but make sure that it's safe for the kid.
And it comes up with remarkable recommendations.
It's really just playing around with these language models that can just pretend to be a certain type of person
and think through the world in this way.
It's just a remarkable opportunity that we now have at our fingertips that we'd never had before. And the analogy that I always use is it's a really,
really competent intern with unlimited time, resources, and knowledge. You need to give them a good
task. So you need to be very specific about what you are asking for. So whatever context you give it,
whatever the prompt, whatever information you give it about yourself, that's what it's going to
run with. And you also probably want to check the output. Most of the time, it's doing a great job.
But once in a while, it's just off by a little bit. Same way with an intern: you probably want to supervise
the work. But then it's just the most efficient intern that you can imagine, because it's seen so
much. And it's just really good at simulating these different personas. You know, earlier this year I went to
Panama, I had Claude act as essentially my travel agent, a travel planner. You know,
I said, here's what I'm interested in. Here's what I'm not interested in. Here are some of my
parameters. Here are some of the responsibilities that I'm going to have to tend to while I'm
traveling that are going to be kind of limitations on what I can do on a day-to-day level.
Based on all of this hyper-personalized information about me, can you design an itinerary?
It gave me an output.
And I said, all right, what I dislike about this output is X and Y and Z.
And then we went through five or six iterations.
And by the final iteration, I was like, all right, we've nailed it.
And then once the trip began, of course, things came up.
I had a pet that was sick back home.
And so I spent a day FaceTiming with the veterinarian.
So then I just went back to Claude and said, hey, you know what?
Our original Tuesday plans got totally waylaid because I spent all of Tuesday just
face-timing with a variety of vets.
How do we remake this?
And so then we responded in real time once again.
And it probably provided emotional support as well.
Claude is really good at that.
But you're totally right.
What is so remarkable about these large language models as opposed to traditional advertising and persuasion is that it's a conversation, right?
So it's far closer to this like you have a good friend that you can also push back on and say, well, here's something that I don't like.
Just kind of try again.
And it's almost like you're co-creating like the perfect outcome as opposed to me just giving it to you.
And the one thing that I've been thinking a lot about in this context is, on some level, in most cases, it's just incredibly
useful, right? Because it's very good at giving this advice. The one thing that I'm
slightly concerned about is essentially if it's going to make us far less complex and interesting
over time. So the idea that if we just outsource all of our decisions to these large language
models, what they're kind of trained to do is predict the most likely outcome, right? Even the way
that a large language model operates at a very basic level is it predicts the next
word in a sentence: what's the most likely next word? So if you ask it, well,
what color should I paint my wall? It would probably say something like beige or white or something
that's very common. It's probably not going to say turquoise and pink in stripes with blue dots.
Because it's never seen that answer in all of the internet, right? And it's very unlikely
that most people are going to like that. So even when you train a model based on your preferences,
you can imagine that it's going to go for the most likely preference that you have.
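The "most likely next word" behavior she describes can be shown with a toy model: counting which word follows which in a tiny, made-up corpus. Trained this way, the model always picks the common continuation ("beige") and never the rare one ("turquoise"), which is the regression-to-the-typical effect being discussed.

```python
from collections import Counter

# A deliberately tiny, invented corpus for illustration.
corpus = [
    "paint the wall beige",
    "paint the wall white",
    "paint the wall beige",
    "paint the wall turquoise",
]

def most_likely_next_word(history):
    """Count which words follow `history` in the corpus and return the
    single most frequent one -- the toy equivalent of greedy decoding."""
    followers = Counter()
    for sentence in corpus:
        words = sentence.split()
        for i in range(len(words) - 1):
            if words[i] == history:
                followers[words[i + 1]] += 1
    return followers.most_common(1)[0][0]

print(most_likely_next_word("wall"))  # "beige": 2 counts vs 1 each for the alternatives
```

Real language models predict over distributions of tokens with learned weights rather than raw counts, but the bias toward the typical answer is the same.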
In a way, if you think of yourself as a distribution of preferences,
most of them are somewhere in the middle,
but then you sometimes have these obscure preferences on the edges
that come from serendipity and they come from discovery
and these fun moments where you suddenly explore something new
that you've never seen before and you realize, no, actually I like that too.
If we're not careful and we just overly outsource all of our decisions
to these large language models,
we kind of lose out on the discovery part
that requires still a lot of human intentionality
and saying, like, for one afternoon,
I want to do something that's completely different,
maybe even ask it, right, and say,
hey, what would you recommend to someone who's totally different?
Maybe it's like a different age, doesn't have kids,
what would that world look like?
Because otherwise, I think we're just losing this ability to explore,
which is really how humans and the human species learn.
We kind of learn by exploring new things.
And sometimes it's a mess, right?
Sometimes we try a new restaurant and it's like a total failure.
And we would have been better going to the one that we already know.
But sometimes it's also the biggest hit.
And sometimes we find something new that we didn't know was coming.
And we like even more than our typical go-to restaurant.
I like the idea of prompting experimentation, asking Claude,
what would you recommend to somebody who had the opposite characteristics of what I just named?
You can prompt a large language model
to give you something else, almost as an opportunity to say, well, I don't know the reality
of someone who looks completely different from me, who has a completely different experience
in their life, has grown up in a totally different way, lives in a completely different part of
the world. I don't know what their experience is like and what they would want to do when they're
in Barcelona or which movie they would want to watch on a Saturday night. But the large language
model does, right, because it knows all of the personas. So it's like this almost unique opportunity
that we have with technology to say, hey, help me sample more of the world, help me understand
what the world looks like from different perspectives. But you have to be intentional about it,
because that's not the way that it typically works. The echo chamber swap. That's an interesting
concept. You know, you also wrote about that in terms of, give me the news that this other person,
this other very different profile would be getting on this day. Right. And to just have a window into
what are the headlines that they're seeing?
What are the sources that they're reading or listening to or watching?
That echo chamber swap can sometimes pull you out of your bubble.
I've talked to many friends who were like,
I didn't realize how much of an echo chamber I was living in
until I just grabbed somebody else's device, you know,
and looked at it and was like, whoa.
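Mechanically, the "opposite characteristics" idea discussed here is just a profile inversion before the prompt is sent. A minimal sketch, with attribute pairs that are entirely invented for illustration:

```python
# Hypothetical opposite pairs; a real swap would need a much richer profile.
OPPOSITES = {
    "extroverted": "introverted",
    "urban": "rural",
    "mid-30s": "retired",
    "has kids": "no kids",
}

def swap_persona(traits):
    """Return the profile with each trait flipped where an opposite is
    defined; unmatched traits pass through unchanged."""
    return [OPPOSITES.get(trait, trait) for trait in traits]

flipped = swap_persona(["extroverted", "urban", "has kids", "likes ballet"])
print(flipped)
```

You could then feed the flipped profile into the same persona prompt you use for yourself and compare what news or recommendations come back.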
And it's so hard because right now you almost have no way of breaking out of that echo chamber.
There's no way that you can easily have, like, a different Facebook news feed,
or your Google page one that's customized for someone else.
But there's all of these opportunities to actually do that.
The really interesting part is could you even take it a step further and say,
instead of just showing you the news that other people see,
could I also help you relate to that news to some extent by making it more personal?
Because you can imagine, actually the Wall Street Journal had this blue feed, red feed website at some point
where for the same news items, they just said like, here's what this looks like for Democrats,
here's what it looks like for Republicans.
I think the potential danger that you have there is you see what it looks like from the other side
and you find it so appalling that you just dig your heels in even deeper, right?
So there's always a risk of reactance, of saying this is so far removed from anything that I believe in
that I don't even want to look at it.
And if anything, you're kind of even more alien to me now than you were before.
But I think if we thought about this echo chamber swap really as a personalized experience of saying like,
look, here's the entire experience of that person.
Here's not just the news that they see, but here's what their day looks like.
Here's why they think about immigration in a different way: the way that they've grown up, the way
that they've been educated, the way that their family is set up, whatever it is, probably
makes it much more likely that they think about it in that way.
And for me, that would be really like a perspective exchange.
It's not just taking a glimpse into what your reality looks like, but kind of immersing yourself
a little bit more in a way that makes it more human and
somewhat more relatable. It's a taller order, though. Yeah. You know, another variation of the echo chamber
swap is when you go on Bloomberg, the headlines that you get if it's Bloomberg US versus Bloomberg
Europe versus Bloomberg Asia. The economic headlines that you get if you're looking at
Bloomberg Asia or Bloomberg Europe are totally different from what you read when you're looking at
Bloomberg US. It's just a way to get a much more global perspective on the world economy.
And that is even still visible to you.
Right.
So you can still go to Bloomberg Europe or Bloomberg Asia and see what they're talking about.
In most instances on the internet, it's not even visible because I can't easily see what you see.
There's no way for me to publicly access that.
So we all live in these kind of small realities that I don't even have an opportunity to hop into yours,
the same way that I could at least manage my news if I were to look at all of these different Bloomberg outlets.
You mentioned you teach negotiation at Columbia Business School.
How has your background in big data fed into your practice and teaching of the field of negotiation?
It's an interesting question because in most cases, it's actually relatively far removed.
So I would think of the negotiations world
still as the messy human world, not necessarily including any of these algorithms, data, and AI.
At least that was the case when I started.
So when I started eight years ago, it was very much focused on just like face-to-face,
giving you the tools that you need to become better.
Now, I would say over the last two years or so, that's changed a bit.
Because the same way that we think about AI assistants in different parts of life,
helping you pick the next vacation, helping you maybe come up with an exercise plan,
We can also think about them in the context of negotiations.
And sometimes that could just take the form of,
can we build a tool that allows you to practice, right? So much of negotiation is can you go through a couple of practice rounds where you interact with different counterparts? Like you play through different scenarios. And if this happens and I'm up against the counterpart who's just trying to eat my lunch and is trying to screw me over, how would that conversation unfold? How should I be acting? How should I be preparing? If I know that I'm interacting with a partner that is going to be a long-term relationship, how does that play out? So,
What we now can do is we can develop these tools where an AI not only plays the part of your counterpart, right?
So it's not only that it's the person who negotiates on the other side, but we can also build these coaches that are based on AI and say, look, here's everything that we know from science, from behavioral science, from years of experience that are helpful strategies in how you get more at the table, right?
Here's the questions that maybe you should be asking.
here's how you can make offers to the other side.
Here's how ambitious you should be when making a first offer.
All of these pieces of advice that we typically teach the students in class,
we can teach an AI to act as a supervisor and as a coach.
So as you go through your conversation with your AI counterpart,
the coach can say, hey, well, actually just take a step back, take a pause here.
Here's something that you did, which might actually not have been super beneficial to you. Do you
want to try this again by changing the way that you approach this negotiation? So AI in a way
acts both as a sounding board and as a tool to practice, but it also, on some level, gives the
students me as the instructor on a very personalized basis. Ideally, in my negotiations
class, I would listen into all of the negotiations the students go through and say, hey,
you know what, Tim, here's something that I think you could have done better based on the
conversations that I've seen; here's something that I would want you to try differently
next time. Now, I can't do this for the 50 students in the classroom, but an AI can do it,
and it can do it pretty much as well as most of us instructors. Wow. It's interesting that you
mentioned that. So I teach an online course on negotiation, and we put a huge emphasis on peer-to-peer
practice because, sure, I can teach concepts and tactics, but really the practice, the peer-to-peer
practice is where the true learning happens. The notion of getting
AI involved, I guess that is the next inevitable iteration of this.
Yeah, if you don't have a peer, it's certainly a good fallback option, right?
For us, I would never replace the peer-to-peer, because there's also something from
peer-to-peer that you get that is still somewhat more difficult with AI.
And those are the perceptions, right?
One of the big, I think one of the valuable parts of the class is you go through negotiations
with your classmates and you can stop after the negotiation and say, first of all,
let's turn over the cards, right? Let's see how much you got from the negotiation as opposed to your
counterpart. Yeah, how much of the pie did you get? Yeah. You never get that in the real world,
right? You kind of negotiate and maybe you thought you got a great deal, but actually the other side
totally screwed you over. You're never going to find out. Or you felt like, man, this is like really
tough and I was totally screwed over, but you actually really got a good deal. First of all,
the students see that, but they also get feedback from their classmates, saying, well,
I actually think you were not assertive enough. I think you could have been much more assertive,
and I didn't even mind.
Or the opposite is like, well, you came in way too strong here.
AI can, to some extent, replicate that and kind of give you the same perceptions and the same
impressions, but I still think there's something unique about hearing it from another
person as opposed to an AI.
Right.
Yeah.
There is a very innate need that we have at a very visceral level to be around other humans
rather than robots.
And I don't think that will ever go away.
Human interaction can never be fully replaced by automated interaction.
The way that I think about these AI assistants is that they're really a complement, right?
You can imagine this in many different contexts.
Like negotiations is one.
Think about it in the context of coaching.
Think about it in the context of therapy.
There's always the fear that, well, is AI going to replace human therapists?
Well, probably not.
For anyone who can afford a therapist in flesh and blood, they're probably still going to go to that therapist,
but there's still so many other people who either can't afford a therapist at all
or who might actually benefit from also having an AI that they can talk to in between sessions, right?
If you think about like a therapy session, you probably see a therapist maybe once a week,
maybe twice a week.
And there's so much that happens in between.
And you can only retrospectively talk about it in the session.
Whereas if you had an AI, you could say, well, I'm in the middle of a fight with my sister.
I just came out of it and it's fresh in my mind.
Emotions are still kind of running high,
and I can capture that moment using an interaction with a chatbot.
And now I can take that interaction to my therapist next week.
Now that I'm cooled off again, we can discuss it in a more rational way.
That would be hugely beneficial.
And I think the same is true in context like negotiations.
Yeah, you kind of practice with your counterparts, your human counterparts.
But to make the most of that, you might actually have like some practice in the middle
so you can iron out some of the things that are difficult for you.
You can try different things and say, well, I really feel terrible being too assertive, but I want to
give it a shot one time. Or I want to try and see how it feels to lie to my counterpart when they
asked me a question. You probably don't want to do this with another human being.
They're your classmates. They're going to find out once you turn over those cards. And even though
it's just role plays, people take it very personally. But doing some of that with an AI and just seeing
how does it feel, how do they react, I think can be hugely beneficial as a complement.
Right, right. I could see also, when you're using AI as a negotiating counterpart, you could upload a PDF into your AI, into Claude or ChatGPT, that gives it the role of the other person. If you yourself have not read that PDF, you can create that information asymmetry, and the AI can reflect that asymmetry in that it
has information that you don't, and then you can have that negotiation.
I think where it becomes challenging is, even though it would be counterproductive,
it would be so tempting to read that PDF, right?
That's true to the classroom too.
Yeah.
So the one thing that we tell them in the beginning is if you read the materials of the other side,
that negotiation is going to be useless for you.
And the same is true if you have an assistant, like an AI assistant.
If you read the materials of your AI assistant, it's not going to be useful for you in any way.
Yeah, exactly.
Preserving the information asymmetry between the two parties is fundamental to having good peer-to-peer
practice. How do you manage it, though? So this is one thing that we do in our class on negotiation.
Towards the end of the class, we start with two-party, single-issue. Then we move to two-party,
multi-issue, so you've got two individuals who are negotiating multiple issues. But then we go to multi-party,
multi-issue. And that's where coalitions come in, people form factions.
Those are the fun ones.
Yeah, exactly.
They're the hardest to manage because you've got to collect like nine people into the same Zoom room at the same time.
So logistically, it's kind of the hardest to schedule.
But they're the most fun and I think most beneficial practice sessions that we do inside of our class.
But how do you do that or do you do that with an AI or with data?
Yeah, that's much harder to do.
So you could still technically set it up that way because you have different AI agents interacting with you,
interacting with one another.
So you could technically still set it up.
I think in terms of the benefits for students, it would probably be lower.
I 100% agree.
So I think AI is oftentimes really amazing for teaching people the basic skills,
so that you can then spend more time in the classroom dealing with the big negotiations.
Right.
So I can save time on the single-issue negotiations that I go through, and the two-party multi-issue negotiations, and dive more quickly into the more complex ones, because I can just outsource some of the easy
ones to AI practice outside of the classroom. Right, right. Well, this is a fun conversation.
I didn't expect to be talking so much about negotiation, but it's one of my favorite topics.
Me neither. And I think it's like this one skill that I would love everybody to learn more about.
When I joined Columbia Business School, I did negotiate my contract. I absolutely hated negotiations.
I always felt like I'm just going to ruin all of my relationships. I don't feel comfortable asking
for more. Why would anyone want to go through this? And then two weeks later, I think I was told that
I was going to teach negotiations. And I was like, you've got to be kidding me? I'm the worst person to teach
negotiations. It was the best thing that happened to me because I learned so much. If you think about
it from a psychological point of view from like reading other people, figuring out like interpersonal
dynamics and so on, I think I've learned so much and I think about it completely differently now.
I don't think of it as like this tug of war anymore, right, where I'm just kind of trying to screw you and you're trying to screw me, but much more of like this creative problem solving.
Like there's all of these puzzle pieces on the table and how do we make them work such that you get a good deal and I get a good deal.
And for me, that's really changed entirely how I think about negotiations.
And I wish there were more people learning the art and science of negotiations.
Yeah.
Actually, you know what?
I do have a follow-up question.
And I'm curious what your thoughts are when it comes to that creative problem solving and puzzle
solving when there's a power imbalance.
When it's a negotiation between an employee and a boss, the employee wants to get a raise.
The boss has certain budgetary constraints.
But there's this power dynamic, right, which makes it so inherently different from,
let's say, selling a car on Facebook Marketplace where you're lateral to one another.
No, totally.
And we could now dive into the topic.
I mean, the one thing, because you mentioned salary negotiations, I think the one thing that we talk about with students a lot is just how do you put yourself in a position where you level out some of these dynamics just by you having alternatives, right?
If your life depends on that job, you probably have a harder time asking for more.
If you know that, worst case, your boss says, I'm not going to give you a raise and now I also don't like you and you're not going to get promoted, but you have something else lined up,
even if you don't mention that in the negotiation itself,
we know that just psychologically it makes you a lot more confident.
So you might be going in more confident in asking for more.
So that is one thing.
And the other one is, if you think about this
less like a tug of war and more as creative problem solving,
trying to figure out interests rather than positions.
So like why is it that you care about this stuff?
What really matters to you as opposed to what are you asking for?
In a way, if you're lower in power, that becomes even more important.
Because in a way, that's the only thing that you can do, right?
You don't have the formal power to impose a solution on your boss, because they're still your boss.
But you can try and figure out, okay, how do I actually suggest something that gets me what I want,
but also make sure that the boss is happy and gets what they want, right?
Because they oftentimes have to save face and justify what they give to you to other employees,
to maybe their superiors.
So how can you be creative in terms of finding a solution that still works for you, but also for your superior?
So the creative problem solving, I think, almost becomes more important if you don't hold a formal power.
Right. How do you manage that when so much communication now happens online?
You know, when you're having real-time face-to-face synchronous communication, these creative problem-solving conversations become a lot easier.
But when this is done over email, it becomes a lot tougher, you know, because this asynchronous
communication on a slack thread or in email or text message is a very challenging way to hammer out
any issue. And yet, so much of the time, the counterparty will insist on having asynchronous
communication. I was going to say, if you are someone who thrives in face-to-face, try to insist, right?
You can't be overly pushy, totally understood. And sometimes your counterpart
insists on doing it over email, but to the extent that you can, you might want to change
that context and say, hey, I would really love to meet face-to-face any chance that we could catch up,
even if it's like half an hour, because I just want to see what this looks like from your side
and tell you a little bit more about why I'm thinking about it the way that I'm thinking.
Again, if that counterpart completely stonewalls and says, like, no, we have to do it over email,
then you're stuck with that. And I think there, it's oftentimes still helpful to both signal
flexibility, but also ask for the things that you want, right? So you want to be very clear,
and this is true for face-to-face, but it's even more necessary for these formalized email
conversations of here's the things that really matter to you. A lot of people are afraid of communicating
that because they think, well, if I tell them what really matters to me, aren't they just going
to exploit me? And, you know, like in the grand scheme of things, you're probably going to be
exploited once in a while if you follow that strategy. But more often than not, just by
signaling, hey, here's what's important to me. Now, let me hear a little bit about what's important
to you in terms of where do you see my career trajectory? What is it? I could actually contribute to the
company that I'm not currently doing that I could be doing in my next role. So the more you make it about
this again, like two-sided, I'm going to tell you a little bit about what I care about. And here's why.
So kind of providing this rationale and saying, here's why I care about getting a title that
reflects my level of seniority, because I think it's going to allow me to better interact both
internally, with the people that I'm supervising, and also maybe with external suppliers and so on.
And so I think the more that you justify and engage in this, okay, but also what can I do for you,
that I think is always helpful, but you just need to make it very explicit when you communicate online.
Well, thank you for spending this time with us. Where can people find you if they would like to
learn more? I recently published a book called Mind Masters, which covers a lot of the topics that
we discussed. And then I also have a
website, sandramatz.com.
Thank you to Dr. Sandra
Matz, a professor at Columbia Business School and the
author of Mind Masters.
What are three key takeaways that we got from today's
conversation? Key takeaway number one.
Nice people struggle with money.
But there is a fix.
People who are agreeable, those who
are trusting, caring,
high in empathy, consistently
have worse financial outcomes
than their less agreeable
counterparts. That's what the data shows.
There is a solution, and the solution is not to become meaner, but rather to reframe your financial goals around what you actually value, like protecting others or helping others.
So the research shows that if you have a more competitive personality, then you respond better to framing saving and investing as a way of getting ahead in life.
But by contrast, if you have a more agreeable personality, then
you respond better to framing savings and investing as a way to support your loved ones,
to make a positive impact in the world. So if you want the motivation to continue saving,
to continue investing, and motivation, it's like showering. It's something that we need every day,
or at least every other day, right? It's like motivation doesn't last. That's why we need to
continually re-up it. That's why we need to continually engage with it. That motivation comes from
reframing your financial goals in a way that fits you and your personality.
And that makes a much bigger difference than trying out some new budgeting app or trying out
stricter rules.
What we found somewhat counterintuitively was that people who score high on agreeableness,
so those are the nice guys.
And people who are caring, they're trusting, they're empathetic.
Those are usually the people that you want to have as friends and that hold society together
in a way.
But they also seem to have a harder time managing their finances.
That's the first key takeaway. Key takeaway number two.
If you want to predict your financial future, you can find evidence of that in your digital footprint because there are companies that are already using your social media activity, your spending patterns, your smartphone data.
They're using all of that to predict whether or not you're going to default on loans, to predict how much you spend or to predict what kind of products you're most likely to buy.
When you understand this, you have some power because you can use the same AI tools as your personal financial advisor by feeding those AI tools your personality traits and your money goals and seeing how they reflect that back to you.
That doesn't mean that they're a replacement for a human financial advisor, but they're a supplement.
There was a research study done almost 10 years ago, just looking at the Facebook pages that people follow. We call our spouses our other half.
And they, similar to family members, go through life with us in many different situations,
different moments.
They see a lot of us, both in terms of how we want to be seen in public, but also sometimes
these more intimate scenes where we might not be our ideal selves.
And yet still an algorithm with access to just 300 of your Facebook likes can make more
accurate predictions of how you think of yourself in terms of personality than those
people who know us intimately.
This technology is already being used to influence your spending decisions, but you can flip the script.
You can use it to improve your own money management.
So that's that second key takeaway.
Finally, key takeaway number three, you can use AI to break out of your financial echo chamber.
So the thing is, when we talk about digital echo chambers, I think we all know that social media, the algorithm, feeds us the same echo chamber political content, for example, over and over and over again.
In that same way, your financial habits, your financial mindset, that can also get
echo-chambery, and it can get stuck in a rut. You can prompt AI to show you how somebody with
completely different characteristics would approach their money. For example, ask AI what a super
risk-taking entrepreneur would do, or ask AI how somebody from a totally different culture
might handle retirement planning. Ask AI about
different phases of life, different life circumstances, and then prompt it, like, what would a person in this
circumstance with this mindset, with this approach, and then with this type of portfolio, what would they do?
Use it to sort of try on different hats, right, to see life through different perspectives,
to break out of your echo chamber and to widen those horizons, because it's a way that you can
discover financial strategies that you would never consider on your own.
You can prompt a large language model to give you something else,
almost as an opportunity to say, well, I don't know what the reality of,
like someone who looks completely different to me,
has like a completely different experience in their life,
has grown up in a totally different way,
lives in a completely different part of the world.
I don't know what the experience is like and what they would want to do
when they're in Barcelona or which movie they would want to watch on a Saturday night.
But the large language model does.
Those are three key takeaways from this conversation with Dr. Sandra Matz.
Thank you so much for being part of the Afford Anything community.
If you enjoyed today's episode, please do three things.
First, share this with all of the people in your life.
Share this with that agreeable person who puts everyone first and share this with that competitive person who's super motivated about getting ahead.
Share this with the person who's got 300 Facebook likes and share it with the person who asks Google all kinds of questions that they would never actually ask their friends.
and share it with the person who likes curly fries,
because you know that they're smart,
because smart people like curly fries.
Share it with the person who talks about themselves
using first-person pronouns when they're stressed,
and share it with the person who insists
on doing salary negotiations by email.
Share this with all of those people and more,
because that's the single most important way
that you can spread the message of F-I-R-E.
Number two, open up your favorite podcast playing app,
Apple Podcast, Spotify, Pandora,
whatever it is you use to listen to this, open that app, hit the follow button to make sure you don't miss any of our amazing episodes, and while you're there, leave up to a five-star review.
We absolutely appreciate it, and the more positive reviews we get, the better chance we have of bringing on big, awesome, thoughtful, insightful guests who can share knowledge with you.
Also, head to our YouTube page, YouTube.com slash affordanything. Hit the follow button there, because the more subscribers
we have, the better guests we can bring on. We have a course on how to negotiate. It's called
Your Next Raise. It's been in beta for the past many, many months. And in August, we are going
to be releasing it in its full non-beta version for the first time. If you would like more
information about it, make sure you're subscribed to our newsletter, affordanything.com slash
newsletter, because that's where we'll send out all of the info when we launch in August. Again,
that's affordanything.com slash newsletter, where we will send out more info. Also, if you remember from
the last First Friday episode, I promised that I would write a piece about student loans, about how to
graduate from college debt free. I'm going to publish that on the last day of July.
Make sure that you're subscribed to our newsletter so that you can read that piece at the end of
July. Again, that's affordanything.com slash newsletter. Totally free. Thank you again for being here.
This is the Afford Anything podcast. I'm Paula Pant and I'll meet you in the next episode.
