Chief Change Officer - From Chicago Booth to Berkeley Haas, Juliana Schroeder: Raw Insights on How AI will Change Our Decision Making and Communication
Episode Date: August 2, 2024

How would you like a free, unfiltered Berkeley Haas MBA course by Dr. Juliana Schroeder? Currently, Juliana is the Harold Furst Chair in Management Philosophy and Values and Associate Professor in the Management of Organizations at Berkeley Haas. Juliana's research examines how people make social judgments and decisions. She studies the psychological processes underlying how people think about the minds of those around them, and how their judgments then influence their decisions and interactions. Juliana has taught in all of the MBA degree programs at Haas, including the Full-time MBA program, the Evening and Weekend MBA program, and the Executive MBA program, as well as the PhD program. She teaches classes such as Leading People, Negotiations and Conflict Resolution, and Experimental Methods in Behavioral Science. She also directs multiple executive education programs, such as Communications Excellence and the CEO program. She is also a co-founder of the Psychology of Technology Institute, which supports and advances scientific research studying the psychological consequences and antecedents of technological advancements. Personal website

Episode Breakdown:
3:39 — Road to Behavioral Scientist: How she was drawn to psychology as a profession.
10:42 — AI and the Perception of Power in Decision-Making: Discussing a 2020 paper published before ChatGPT—what foresights did she explore, and what are the implications for now and the future? The paper link
22:28 — AI as a Strategy Tool for Executives: Exploring whether AI is the best strategist for executives in high-stake and low-stake decisions.
28:50 — AI's Role in Synchronous and Asynchronous Human Communications: How AI influences different modes of communication.

Connect with Us: Host: Vince Chan | Guest: Juliana Schroeder
Chief Change Officer: Make Change Ambitiously.
A Modernist Community for Growth Progressives
World's Number One Career Podcast
Top 1: US, CA, MX, IE, HU, AT, CH, FI
Top 10: GB, FR, SE, DE, TR, IT, ES
Top 10: IN, JP, SG, AU
1.3 Million+ Streams, 50+ Countries
Transcript
Welcome to our show. I'm your host, Vince Chan.
As we close the curtain on our Women Change Leaders series, I've saved a special treat for last. Coming to us from the halls of UC Berkeley
is associate professor and psychologist Juliana Schroeder. You might have noticed most of our
guests have taken quite the scenic route through their careers. Juliana, on the other hand, has kept her eyes on one prize,
digging deep into the human mind.
She's now leading the charge
in teaching negotiation and management
to both MBA students and seasoned executives.
Take a quick look at her website or UC Berkeley's
and you'll be blown away by her achievements.
We're talking a laundry list of titles, a mountain of papers, and a substantial collection of awards.
And get this, she's bagged not one but two master's degrees and two PhDs,
at an age when many are still figuring things out.
I could easily spend a good 10 minutes here just running through her credentials, bio, and
all the incredible things she's achieved.
But let's be honest, I know you're here for the insights.
So, while I'm skipping the long intro to save us some time, I can't
recommend enough diving into her profile yourself. Trust me, if you're even a bit of a nerd like me,
Juliana's work is a gold mine. Juliana and I met at Chicago Booth.
She was my TA for two courses taught by two amazing professors and social psychologists, Nick Epley and Linda Ginzel.
I still remember the first day we met.
I was sitting next to her in the front row when the whole classroom was packed. I didn't know she was actually my TA.
I raised my hand and answered a question. I got the question wrong. Then she whispered to me,
trying to explain the reason why. Then we met again in Singapore. This time, I pulled her aside, asking her about reciprocity, a very important concept in psychology and negotiation.
In my eyes, she is very sharp.
Those who know me well understand that I use this word very selectively as a compliment.
Over time, I've observed the growth of her academic career.
I told myself, I must invite her to my podcast. So, wish granted, here we are. Let's get started.
Good afternoon, Juliana.
Thank you so much for having me, Vince. Good afternoon.
Let's start with a brief introduction of your background.
For the benefit of the audience, how I met Juliana, that was when I was at Chicago Booth.
Yeah, I am an associate professor in the management of organizations at the UC Berkeley Haas School of Business. And by the way, I'm incredibly impressed that you have kept in
touch for more than 10 years since I was a teaching assistant back when I was doing my
PhD at Chicago Booth. I thought that I was going to be a hard scientist in my high school days.
And then when I got to college, I took some social science classes. I took psychology and
economics and I just completely fell in love with them. I just think it's fascinating to be able to
better understand how people think and feel. They kind of say that research is me-search.
And so I think I like to study the things that I find to be like fascinating and challenging and
that are kind of hard for me. So I study things like decision making and negotiations and persuasion.
And I'm an experimentalist, which means that I run experiments on people
to better understand counterfactual worlds.
Like what would happen if people live their life in this condition versus this condition.
I checked out your personal website.
You have published a lot of papers over time. Like you
said, you study power, negotiation, decision making. I was wondering, when you were in the
master's and PhD programs, when you were thinking of choosing specific areas of research, why did you choose language and mind perception? What's fascinating
about those areas that you decided, well, yeah, I really, really want to go deep to become a deep
thinker, researcher, and teacher in those areas? That's a great question, because psychology is
so broad. There are so many different aspects of human bias and decision making and behavior that you could study.
But to me, I kept coming back to the fact that we live in a social world and, you know, man is a social animal.
And so all of our society kind of rests on having this cooperative function with those that are around us.
And that involves having
to engage with other people effectively and productively. And so I see the umbrella of all
of my research as being around mind perception, which is how we come to perceive and understand
the minds of those around us. And this is a really fascinating topic because, of course,
we can't just directly read other people's minds.
And if we could, the world might be kind of a mess.
You can imagine that that could end up leading to all sorts of problems and issues.
And it's good that we are allowed to keep secrets from each other.
But the fact that we don't have very much insight can lead to challenges as well,
because sometimes we have to make these guesses at what other people are
thinking and feeling. And there are systematic ways in which we can go astray in that. And I
basically study all the different building blocks and how people come to make inferences about
others' minds. I think about both the top-down and bottom-up influences on people's mind reading and
mind perception. The top-down is like I bring to bear beliefs about the world
and stereotypes about certain people.
So the very first time I might have met you and talked to you, Vince,
I'll have like certain beliefs that I have in my mind
and I immediately start forming these inferences about you.
They happen in this kind of split second.
Might be based on the way you look or your accent, where you're from.
And then at the same time, the longer I engage with you,
like say we're having an actual back and forth conversation,
might be synchronous or might be asynchronous.
I'm starting to modify kind of those overall beliefs and stereotypes
based on like this bottom up feedback I'm getting
regarding your specific characteristics.
So what you're actually saying to me, how you're
saying it, kind of your non-verbals and your verbals together. And I'm integrating all that
information in my mind in this really fluid, amazing way to come up with an overall belief
about you or belief system about you. One thing before we deep dive
into your research areas,
while you're talking about
trying to understand the minds
of other people,
I've always wondered how psychologists
try to understand
their own psychology.
You, as a living human,
how do you perceive
or figure out your own psychology? Does it make you
smarter or more complicated, in a sense, to figure out your own psychological state of mind when
something exciting happens or something bad happens? Yeah, that's such a great question,
Vince. But I would say that I hope after having studied this for so long that I do have more insight, not just into how we engage with other minds, but also how we engage with our own minds.
Sometimes we focus on the differential processes that are involved in trying to read other people's minds as compared to trying to recognize and understand our own minds. Of
course, when you're thinking about your own mind, the primary way in which you engage is just through
introspection. You're going to introspect like, what am I feeling and what am I thinking right now?
But there is some really interesting research in psychology that has pointed to the limits
of people's own introspection and their overconfidence when it comes to their own
introspection. So they might get a sense that, oh, I know exactly why I made that decision.
But sometimes they don't know the factor that actually influenced them. It might even be
something in the environment that was outside of their explicit consciousness that was swaying them.
And the experimenters know this because they manipulated that factor. But people still have
the sense that they know why they made the decision because they can come up with some
sort of post hoc rationalization for why they did it. So introspection sometimes
fails. You know, we have the sense that we know ourselves, we know our own minds,
but it doesn't necessarily mean that we truly do. And so I think it's very interesting to think about
the ways in which we sometimes fail
when we're trying to read other people,
but also the ways in which we sometimes fail
when we're trying to understand ourselves.
And I think there are some parallels
and some ways in which the processes are different
that I've studied.
Well, if there's one thing that I learned
from the Chicago Booth MBA program, it's not all the formulas, equations, all that.
It's, seriously, about psychology and social psychology.
And one of those is biases.
Like you mentioned, you obviously know a lot about that area.
Yeah, and you are also human.
You have your biases,
although you're very aware of that.
So anyway, we'll talk about that later.
That's a very interesting topic
when it comes to biases.
Now, but that's one area
or in particular one paper
that interests me
when I did my research
for this interview.
This paper was published in 2020.
It's called Power and Decision Making,
New Directions for Research in the Age of Artificial Intelligence.
Now that's 2020.
That's before we had ChatGPT and many of the other AI tools we have today.
So can you tell us a bit more about your argument
for that paper back then? Thank you for reading that paper. And you're right, it's a bit dated now,
it's four years old. So funny, I wrote that with my co-author, Nate Fast. Together we direct an
institute called the Psychology of Technology Institute.
And so we have been very interested in better understanding the psychology
behind how people come to adapt and engage with and even design different forms of new technology
with a particular focus on AI, as well as bidirectionally how technology changes our
psychology and how technology
has been changing our minds, both at like the micro level, the individual level, as
well as how that aggregates to societal change, which a lot of people have been studying these
days, thinking about things like polarization and misinformation and just how new tech is
influencing our society broadly and democracy and other huge societal shifts that we're
seeing in the world.
And at the time, Nate and I were very interested in thinking about the proliferation of all these virtual assistants.
So we were looking at Siri and Alexa, and we thought, oh, there could be something here. In fact, in the marketing literature,
there was a set of papers that came out around the same time. And they were all kind
of concerned about the fact that a lot of people had these personal virtual assistants
that they could take with them anywhere. They were on their phones and they could tell them to do
anything they wanted. And they would yell these orders to their virtual assistants and their
virtual assistants would immediately do anything they wanted. And the virtual assistants were usually female voices.
And so we thought there might be some interesting psychology going on in this. And some of the
papers that came out, in fact, were concerned about children growing up with virtual assistants
and learning to be rude to their virtual assistants and what that would do to politeness and society.
And we were more interested in the feeling of power that it might give you, that if you carry
these virtual assistants around in your pocket, that might lead people to have this sense that
they have maybe almost like an inflated sense of power, though part of it could be real. So we differentiate between the subjective
and the objective sources of power. And we're really just more looking at people's subjective
sense of do they feel like they have power. And there's a long line of research that finds that
when people feel like they have power, that puts them into more of a goal orientation. So they're more likely to act rapidly. They make
quick decisions. They tend to be more instrumental and less relationship focused. They may be more
overconfident in their decision making. So power can lead to this like inflated sense of self
and changes the ways in which people behave in these systematic ways. And most of that research had looked at real instantiations of power, like people
having resources and people having other humans that were doing things for them. And we thought,
well, maybe just the feeling of being powerful with virtual assistants might lead to some of
these consequences. But we actually theorized that not just any interaction with a virtual assistant would make people necessarily feel powerful. We thought particularly if the virtual
assistant was humanized. So if it was the case that people engage with a virtual assistant and
see it as being somewhat human-like, then perhaps they would show some of these consequences of
power that they would become
higher in their goal orientation and instrumentality. And so we did find that,
and it's interesting to think even how we were considering humanization back then, because now,
of course, as you mentioned, there are so many more types of virtual agents that are out in the
world. And they're not necessarily just assistants
anymore either. So we haven't tested this with ChatGPT, for example. I don't know
if people, when they engage with ChatGPT, see it as being an assistant for them or not.
I know a lot of people who, just anecdotally, will say that when they engage with
ChatGPT, they try to be very respectful and very kind
because you never know when the machine overlords
are going to take over, you know?
So they probably are seeing themselves
as being more low power, right?
I don't know like subjectively how that would work
with certain virtual agents that are out in the world now.
But I do know that if people see the virtual agent
as an assistant, like they're there to serve you and they humanize it, then I think we would expect to see these results of goal orientation.
Now, the humanization piece I mentioned is interesting, too, because at the time we were thinking about humanization as being more about, for example, whether you interact with it as if it's a human. Does it talk to
you? Can you talk back to it, as opposed to writing? Does it have an avatar,
some sort of face that you can see? And now I think there's a lot more
sophistication in terms of humanization. Research now suggests that for most LLMs, like ChatGPT
and others, most people cannot differentiate them from a human; they pass
what we call the Turing test.
So in isolation, if you just give people the responses,
they can't tell whether it is a human or not with any sense of accuracy. So they're
essentially at the level where they are using language to the degree that a human would.
And I do think that still the voice to voice interaction is fundamentally humanizing. And
I have some other research on this. So I think that voice to voice will make people see agents as being more
human-like. I think language, yes, we already know that the LLMs are at the level of human.
And then we've been studying just other random cues to humanness that exist, especially when
you're engaging in like text-based online communication with an ambiguous agent. So for
example, we found another cue that people attend to: whether or not it makes typos and corrects those typos.
It's interesting, typos in general are kind of dehumanizing.
When you see a typo, you're like, oh, you know, it's not very competent.
If you imagine it's possible that the agent could be some sort of chatbot or some sort of LLM and it's making a lot of typos,
perhaps you just think it's a poorly programmed chatbot. But what we found is that
when you're having a synchronous back and forth conversation, like for example, with customer
service agents, like on Amazon or something, and they make a typo and then they correct that typo,
then people are really likely to think it must be a human.
And that's because I think people have expectations that they're bringing to bear
regarding the humanness of the agents that they're interacting with and the programming
of different chatbots and what they expect to be in the programming or not. And so they're not
expecting that a typo that's corrected will
be something that most companies would program into their chatbots. It also signals something
about like having an active mind, that there's like a mind, a human-like mind on the other end
that is monitoring the conversation and the errors and correcting their own errors. So that just
really signals humanness. We're also playing with other things; you can imagine
there are other cues that people might take
to signal humanness. Perhaps if you have a really overly effusive customer service agent
that uses a lot of exclamation marks and things like that, you're like, okay,
that seems like it's probably a human, because why would a chatbot do that?
But so those are like new things
that are happening in the world right now. So are you carrying on with your original research back
in 2020 and today with all the new development and still studying this? If you are, what's your
status? What's your observation? Yeah, we really were mostly just theorizing, even in the 2020 article. And I
think that the theory would still hold: when people feel like
they have more power because they're engaging with a virtual agent that's humanized, that's
when they're going to engage in more goal-oriented type behavior that
we generally see from like higher power people. But when they are not perceiving the virtual
agent to be their assistant, and then they don't feel like they have power, or if they see it as
their assistant, but it's not humanized, then I don't think that we would see the same results. So I would predict that the theory would still hold, but we have not tested it with some
of the newer technology that exists.
So I would love for anyone out there who wants to study this to reach out to me so I can
test it further.
Now, let me share a bit about my user viewpoint.
Yes, I use ChatGPT sometimes. I don't have that
conscious feelings of power when I use it.
Do I see it as an assistant? I see it as, honestly,
as a colleague, so to speak. Although I found
this colleague, a lot of times, provides me
with a huge degree of confirmation bias. Whatever I say,
oh yeah, that's right, you can think of it this way, and all that. I'm very conscious about
confirmation bias when I use ChatGPT. When I ask it questions, I try to get it
to help me figure things out, or maybe write something more for me, giving me more inspiration and creativity.
And it keeps coming back with the same idea.
Eventually, I said, that's not working.
I would imagine that if I were talking to a human colleague, I might be more careful in terms of the language I use.
Am I saying any things that may upset you?
But I still see it as a machine.
And as of now, the emotional aspect of it is not so human yet.
So that's why I don't see it just as an assistant.
I would take it more as an advisor, you know, depending on the situation.
Yeah, I share your intuition that it might be a bit more nuanced with ChatGPT. I think when we wrote this article in 2020, we were envisioning a future in which people would just have armies
of virtual assistants, like maybe they're humanized like these robots. Your house is just filled with
robots that are just there to serve you. And they're very humanized.
And so we're like, what is this going to do to people's psychology and to their minds?
And that vision of the future hasn't really played out yet.
I guess it's still possible.
Who knows?
But I think you're right that I don't think people probably see ChatGPT as necessarily just being their servant per se.
If anything, you know, there's maybe more of a sense of uncertainty about like where the power dynamic really lies in that relationship.
Well, if I structure the questions, I must say they give me some ideas as if I'm talking to a fairly intelligent person.
And then we keep communicating. Then this kind of interaction or conversation, sometimes, honestly, is more interesting
than talking to a human
who may not have any sense of independent thinking.
I do see the value in terms of using the machine,
a highly intelligent machine,
and me as the human also being aware
of what kind of biases that I may face if I use this tool.
Just be aware of that, be mindful not to be distracted or get so carried away by that.
So far, this conversation, this interaction for me is still manageable.
But then I watched a video posted by an adjunct professor of entrepreneurship from Chicago Booth.
The topic is why AI may be your best strategist.
I know you work with and teach a lot of MBAs and executives.
Do you see, for people like us, in executive decision-making,
perhaps AI could be one of our best strategists? Yeah, that's a great question. By the way,
while you were talking, I was just thinking about how it would be so interesting if one of the
concerns potentially of having people feel like they're high power and
they have all these virtual assistants that are working for them is that people that are sometimes
in really high power positions can get this very inflated sense of self and they become
overconfident and they make their decisions too quickly. And so you could imagine that perhaps
people, perhaps companies, might even want to design their virtual assistants
in such a way that keeps people in check, for example, by pushing back against them and
making the power dynamic a little bit less clear. So maybe people might actually
appreciate it if their virtual agents give them a little bit of sass, a little bit of pushback.
So that'd be just something really fun to play around with in terms of design.
But to get to your bigger question about to what extent people are using AI wisely,
particularly to what extent leaders are using AI in their decision making. I think the principle to keep in mind is that AI needs to complement
and improve our decision making. It shouldn't really substitute for it. And we've seen some pushback
against this already. People have a strong sense of when it's more or less appropriate
for AI to be making decisions on their behalf. And there's a long line of literature on
what we call like algorithm aversion versus algorithm appreciation. And it is changing
over time as well. One example is hiring decisions. This is one in which people very
strongly believe that there should be a human decision maker involved at the high level, even if some parts of the decision-making process are driven by AI.
Google famously got into trouble by pretty much automating their entire hiring and promotion process, and there was a lot of rebellion among the employees and job candidates regarding the algorithms, things that, ostensibly, the algorithm's inputs or calculations or weighting functions weren't taking into account properly.
And there's just this sense that humans need to be involved in the process.
And so they changed their process. There's a famous case study on this. They moved to an annual retreat
in which algorithms were still involved.
There was still a lot of AI decision-making happening,
but humans were involved as well.
And so there were humans there at the retreat,
and they were going through the data
and going through the algorithms' recommendations,
and making decisions as a function of those.
So they were using the algorithms' output as an input into their decision-making process,
but it wasn't the final outcome.
And so that made people feel a lot better.
And hiring,
like I said, is a domain
in which people do think there should be a human involved.
There are other domains where people are okay
with just taking the algorithms.
For example, it didn't used to be the case,
but now think about musical selection, music preferences.
So people are pretty much happy using Spotify's algorithm
to select most of their musical preferences,
even though, historically,
music has been seen as something associated with human sentiment,
kind of emotional and artistic.
But that's one where people are much more likely
to just be willing to take an algorithm.
I think the big concern that people tend to have
is when there's the potential for something to be unfair
or there's a high stakes decision.
In those cases, you know, a lot of times
the algorithms are operating in this kind of black-box fashion, and people don't understand exactly all the machine learning that's going on
within them. And so there's this concern that the outcome may not be fair or may not be
warranted. Or, if it's really high stakes, then we should know exactly every single input and
every single calculation that's being performed, which I think is reasonable.
But at the same time,
I've heard some really compelling arguments that people's psychology is going to hold them back
from being able to reap all the benefits of the algorithm.
So you can imagine, for instance,
that this is a really high stakes decision.
And so I think it's incredibly important
to know exactly all the
calculations and inputs that go into the decision, and therefore I don't want to use
an algorithm, because I don't know exactly how it works. But the algorithm may still be way
more sophisticated and able to do a lot more than a human could. And so by choosing not to use the
algorithm there, I'm limiting our ability to make a good decision.
And so those are like tricky tradeoffs that we're having to navigate now.
Last question, I'd like to get your insights.
Now, you study human-to-human interaction and conversation.
And we just talked about me as a human talking to and working with a machine.
This human and machine interaction will become more and more common.
Younger kids are going to grow up in this era,
so they will just be more immersed in this space.
Adults were trained and grew up in an era
where it was just human to human.
And now we are in this human machine era.
So what advice would you give to MBA students,
executives, managers, how we could make better use
of our human communication skills?
Or if you have to highlight a couple
of premium human qualities, human skills
that we should hold on to? That's a great question. I think that people need to learn how to use
technology to their advantage in communication settings. They shouldn't just be thinking about what the uniquely
human elements of communication are, because those are always going to be changing. I think that our world
is constantly changing, so it's more about how you engage with new technology in order to improve
your abilities to communicate. And let me give you a bunch of examples here that come from my research. So one is that we now have all these different platforms at our fingertips
that we can use to more effectively communicate with those around us. It's amazing that you and
I are getting to have this conversation. You're all the way across the world from me, and we're
still having this great conversation, and we're doing it through
an audio-only platform. We could also be seeing each other via video. And so I think
there are technologies that have more or less synchronicity, that's the speed between
when I say something and you respond, and there are also platforms that have more or less of what we call paralinguistic cues, which are the cues beyond the words, which include the nonverbals, like being
able to see facial expressions, being able to hear the tone of my voice. Those are all paralinguistics.
And so what we typically find in our research is that the more of these paralinguistic cues,
particularly voice, that are present in an
interaction, and the more synchronous it is, the more humanizing a conversation is and the more
human-like a communicator will appear. And so if you care about trying to reduce misunderstandings
and have clearer mind reading and being seen as more human-like and making the best possible
first impression, we would suggest that you start with the medium, the modality, the communication platform, whether it's in
person or video chat, that is going to maximize those things, as opposed to starting with
text. A lot of people actually do think that they should start with text, like a cover
letter, when they're trying to make a good impression on recruiters, for instance. But we actually find
that the elevator pitch is much more effective,
even controlling for the words that people use.
I would suggest that people should be quick
to switch modalities and platforms
as it is more or less effective for them.
So sometimes people get caught in this meeting culture
where they're stuck in these video conversations
and they don't need to work out a lot of detail
and they don't need the synchronous conversation. lot of detail and they don't need the
synchronous conversation. They can do some of it asynchronously. So it's like time to get off the
meeting and just go into email instead and work on your to-do list independently. And then you can
meet again later. All right. And so you can jump off once you've kind of worked through some of
the detail. Or if you start out in email and you start realizing that things are more complicated
than you expect and there's some conflict, then you should jump into Zoom.
And so I think people should just be quicker to move into a different modality or platform that serves their purposes in terms of communication.
All right. So that's just one thing, the medium by which we're engaging. And then another thing that we can think about is how we use communication tools to serve our interactions.
Like there are all these cool tools now, like we can transcribe automatically as we're speaking so that we have a searchable log of everything that we're saying.
Or there are certain new startups that are developing tools that will give you sentiment analysis. So after I send an email, I can get information like: compared to most of the users
in your organization, that was on the angrier side; your anger sentiment was high in that email.
Oh, I should have toned that down, or maybe I should tone it down in the next iteration.
And so I can take that feedback and use it. Now, I do think there's a potential downside.
You might just be tempted to say, oh, all of these tools sound great.
Why not just employ all of them?
Let's transcribe everything we're saying and let's use the sentiment analyses that exist.
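The email-tone tools described here would use trained models and a baseline learned from an organization's own mail, but the core idea of scoring a draft against a threshold can be sketched with a toy word-count lexicon (every word list, threshold, and name below is illustrative, not any real product's API):

```python
# Toy lexicon-based sentiment check, loosely in the spirit of the
# email-tone feedback described above. Lexicon and threshold are made up.
ANGRY_WORDS = {"unacceptable", "ridiculous", "immediately", "failure", "incompetent"}

def anger_score(text: str) -> float:
    """Fraction of words in `text` that appear in the anger lexicon."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in ANGRY_WORDS)
    return hits / len(words)

def tone_warning(text: str, threshold: float = 0.05) -> bool:
    """Flag a draft whose anger score exceeds a (hypothetical) org baseline."""
    return anger_score(text) > threshold

draft = "This delay is unacceptable. Fix it immediately."
print(anger_score(draft))   # 2 of 7 words hit the lexicon, ~0.286
print(tone_warning(draft))  # True: worth toning down before sending
```

A production tool would replace the hand-written word list with a trained classifier, but the feedback loop, score the draft, compare to a baseline, revise, is the same shape.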
But there might be a cost on the back end to distraction because humans are only capable of engaging in so much at once. And I've talked to a couple of startups now
that are building these new communication platforms that are basically everything. So
there's words that are scrolling because it's everything being transcribed as we speak.
We can see each other. We can hear each other. It's like all the modalities are happening at once.
And like, again, on the one hand, that sounds kind of great. But on the other hand,
I think there might be a cost in terms of distraction.
As a teacher and an educator, I'm very, very aware of this.
It's very salient to me, that trade-off.
And so I do think people need to be wise in thinking about which communication tools they want to utilize
and really pay attention to the new research coming out on this.
And so that's probably what I would like to leave people with.
Sure. That opens up another opportunity for me to invite you back, so that you can talk more about
this human interaction, not just communication, but the way we think, we analyze, we speak,
all those things. I'll save this part for our next conversation. It's really nice to have you on the call,
Juliana. It's always very, very fascinating. This is just the beginning of many conversations that
we'll have with you on this podcast. Thank you so much. Thank you. I really enjoyed it.
That brings our Women Change Leaders series to a close.
Next week, we're switching gears to spotlight five incredible gentlemen leading change across the globe.
They are shaking things up with digital banking and insurance in Singapore, pushing DEI initiatives to the next level
at one of the world's oldest wine and spirits enterprises,
transforming the venture landscape in Portugal,
and driving innovations at Yale University.
Don't forget to follow us to stay updated with all the latest.
Thanks so much for listening. Catch you on the next one.