Chief Change Officer - #351 Juliana Schroeder: AI, Power, and the Psychology of Human Connection

Episode Date: May 6, 2025

The way we communicate is changing, but what does that mean for the humans doing the talking? Dr. Juliana Schroeder, associate professor at UC Berkeley Haas, has spent her career unpacking how we perceive other minds, both human and machine. In this episode, she breaks down how AI isn't just reshaping tech: it's reshaping the psychology of communication itself. From virtual assistants to algorithmic bias, and from voice cues to power dynamics, Juliana offers a grounded look at what we gain (and risk losing) as AI enters our social and professional lives.

For executives, educators, and anyone raising kids in a world of voice bots and Zoom calls, Juliana's insights are both sobering and empowering: technology may evolve, but the need for mindful, human-centered interaction is here to stay.

Key Highlights of Our Interview:

- The Mind Behind the Mind: Why She Studies Perception. "Humans can't read minds, but we act like we can. I study how we form beliefs about others' thoughts and feelings, and where those beliefs go wrong."
- From Hard Science to Human Behavior. A former physics student, Juliana fell in love with psychology's messier questions: persuasion, power, and decision-making.
- Alexa, Am I in Charge? "When we treat virtual assistants like humans, we start to feel powerful. That shift can change how we act, sometimes for the worse."
- The Confirmation Bias Trap of AI. "LLMs like ChatGPT often reflect what we say, not what we need. They're agreeable by design, and that creates a unique kind of echo chamber."
- Medium Matters: Why Voice Beats Text. "Text strips out nuance. Voice restores it. If you want to be seen as warm, competent, or persuasive, don't rely on email."
- Hiring, Algorithms, and the Need for Transparency. "When high-stakes decisions get outsourced to black-box AI, people rebel. We still want a human in the loop."
- Designing Better Conversations, With Humans and Machines. What if your AI pushed back? Juliana imagines future assistants with "sass" to counteract human overconfidence.
- What Leaders Should (Still) Master. "Great communicators don't just speak; they switch modalities when needed. They know when to email, when to Zoom, and when to step away."
- More Tech ≠ Better Talk. "Too many tools can backfire. The best leaders know how to reduce distraction and amplify meaning, whether talking to humans or machines."

Connect with us:
Host: Vince Chan | Guest: Juliana Schroeder, PhD

Chief Change Officer: Change Ambitiously. Outgrow Yourself.
Open a World of Expansive Human Intelligence for Transformation Gurus, Black Sheep, Unsung Visionaries & Bold Hearts.

EdTech Leadership Awards 2025 Finalist.
15 Million+ All-Time Downloads.
80+ Countries Reached Daily.
Global Top 3% Podcast.
Top 10 US Business.
Top 1 US Careers.

>>> 150,000+ are outgrowing. Act Today. <<<

Transcript
Starting point is 00:00:00 Hi, everyone. Welcome to our show, Chief Change Officer. I'm Vince Chan, your ambitious human host. Our show is a modernist community for change progressives in organizational and human transformation from around the world. Coming to us from the halls of UC Berkeley is associate professor and psychologist, Juliana Schroeder. You might have noticed,
Starting point is 00:00:48 most of our guests have taken quite the scenic route through their careers. Juliana, on the other hand, has kept her eyes on one prize, digging deep into the human mind. She's now leading the charge in teaching negotiation and management to both MBA students and seasoned executives.
Starting point is 00:01:13 Take a quick look at her website or UC Berkeley's and you'll be blown away by her achievements. We are talking a laundry list of titles, a mountain of papers, and a substantial collection of awards. And get this, she's bagged not one, but two master's degrees and two PhDs, at an age where many are still figuring things out. I could easily spend a good 10 minutes here just running through her credentials
Starting point is 00:01:47 while and all the incredible things she's achieved. But let's be honest, I know you are here for the insights. So while I'm skipping the long intro to save us some time, I can't recommend enough diving into her profile yourself. Trust me, if you're even a bit of a nerd like me, Juliana's work is a gold mine. Juliana and I met at Chicago Booth. She was my TA for two courses taught by two amazing professors and social psychologists,
Starting point is 00:02:25 Nick Epley and Linda Ginzel. I still remember the first day we met. I was sitting next to her in the front row when the whole classroom was packed. I didn't know she was actually my TA. I raised my hand, answered a question, and got it wrong. Then she whispered to me, trying to explain why.
Starting point is 00:02:51 Then we met again in Singapore. This time I pulled her aside, asking her about reciprocity, a very important concept in psychology and negotiation. In my eyes, she is very sharp. Those who know me well understand that I use this word very selectively as a compliment. Over time, I've observed the growth of her academic career. I told myself I must invite her to my podcast. So, wish granted, here we are.
Starting point is 00:03:31 Let's get started. Good afternoon, Juliana. Thank you so much for having me, Vince. Good afternoon. Let's start with a brief introduction of your background. For the benefit of the audience, how I met Juliana, that was when I was at Chicago Booth. Yeah, I am an associate professor in the management of organizations at the UC Berkeley Haas
Starting point is 00:04:05 School of Business. And by the way, I'm incredibly impressed that you have kept in touch for more than 10 years since I was a teaching assistant back when I was doing my PhD at Chicago Booth. I thought that I was going to be a hard scientist in my high school days. And then when I got to college, I took some social science classes, I took psychology and economics and I just completely fell in love with them. I just think it's fascinating to be able to better understand how people think and feel. They kind of say that research is me-search and so I think I like to study the things that I find to be like fascinating
Starting point is 00:04:40 and challenging and that are kind of hard for me. So I study things like decision-making and negotiations and persuasion. And I'm an experimentalist, which means that I run experiments on people to better understand counterfactual worlds. Like what would happen if people lived their life in this condition versus that condition. I checked out your personal website. You have published a lot of papers over time. Like you said, you study power, negotiation, decision-making.
Starting point is 00:05:12 I was wondering, when you were in the master's and PhD programs and thinking of choosing specific areas of research, why did you choose language and mind perception? What was so fascinating about those areas that you decided you really wanted to go deep, to become a deep thinker, researcher, and teacher in them? That's a great question, because psychology is so broad. There are so many different aspects of human bias and decision making and behavior that
Starting point is 00:05:48 you could study. But to me, I kept coming back to the fact that we live in a social world and man is a social animal. And so all of our society kind of rests on having this cooperative function with those that are around us. And that involves having to engage with other people effectively and productively. And so I see the umbrella of all of my research as being around mind perception, which is how we come to perceive and understand the minds of those around us. And this is a really fascinating topic because, of course, we can't just directly read other
Starting point is 00:06:25 people's minds. And if we could, the world might be kind of a mess. You can imagine that that could end up leading to all sorts of problems and issues. And it's good that we are allowed to keep secrets from each other. But the fact that we don't have very much insight can lead to challenges as well, because sometimes we have to make these guesses at what other people are thinking and feeling. And there are systematic ways in which we can go astray in that. And I basically study all the different building blocks and
Starting point is 00:06:54 how people come to make inferences about others' minds. I think about both the top-down and bottom-up influences on people's mind reading and mind perception. The top-down is that I bring to bear beliefs about the world and stereotypes about certain people. So the very first time I might have met you and talked to you, Vince, I'll have certain beliefs in my mind and I immediately start forming these inferences about you. They happen in a kind of split second. It might be based on the way you look or your accent, where you're from. And then at the same time, the longer I engage with you, like say we're having an actual
Starting point is 00:07:29 back and forth conversation, might be synchronous or might be asynchronous. I'm starting to modify kind of those overall beliefs and stereotypes based on like this bottom up feedback I'm getting regarding your specific characteristics. So what you're actually saying to me, how you're saying it, kind of your non-verbals and your verbals together. And I'm integrating all of that information in my mind in this really fluid, amazing way to come up with an overall belief about you
Starting point is 00:08:00 or belief system about you. One thing before we deep dive into your research areas. While you're talking about trying to understand the minds of other people, I've always wondered how psychologists try to understand their own psychology. As a living human, how do you perceive or figure out your own psychology? Does it make you smarter, or more complicated in a sense, to figure out your own psychological state of mind when something exciting or something bad happens? Yeah, that's such a great question, Vince.
Starting point is 00:08:43 But I would say that I hope after having studied this for so long that I do have more insight not just into how we engage with other minds, but also how we engage with our own minds. Sometimes we focus on the differential processes that are involved in trying to read other people's minds as compared to trying to recognize and understand our own minds. Of course, when you're thinking about your own mind, the primary way in which you engage is just through introspection. You kind of introspect like, what am I feeling and what am I thinking right now? But there is some really interesting research in psychology that has pointed to the limits of people's own introspection and their overconfidence when it comes to their own
Starting point is 00:09:23 introspection. So they might get a sense that, oh, I know exactly why I made that decision. But sometimes they don't know the factor that actually influenced them. It might even be something in the environment that was outside of their explicit consciousness that was swaying them. And the experimenters know this because they manipulated that factor. But people still have the sense that they know why they made the decision, because they can come up with some sort of post-hoc rationalization for why they did it. So introspection sometimes fails. We have the sense that we know ourselves, that we know our own minds, but it doesn't
Starting point is 00:10:00 necessarily mean that we truly do. And so I think it's very interesting to think about the ways in which we sometimes fail when we're trying to read other people, but also the ways in which we sometimes fail when we're trying to understand ourselves. And I think there are some parallels and some ways in which the processes are different that I've studied. Now, there's one area, or in particular one paper, that interested me when I did my research for this interview. This paper was published in 2020. It's called Power and Decision Making: New Directions for Research in the Age of Artificial Intelligence. Now that's 2020, before we had ChatGPT
Starting point is 00:10:49 and many other AI tools as of today. So can you tell us a bit more about your argument for that paper back then? Thank you for reading that paper. And you're right, it's a bit dated now. It's four years old, so funny. I wrote that with my co-author, Nate Fast.
Starting point is 00:11:09 Together we direct an institute called the Psychology of Technology Institute. And so we have been very interested in better understanding the psychology behind how people come to adapt and engage with and even design different forms of new technology with a particular focus on AI, as well as bidirectionally how technology changes our psychology and how technology has been changing our minds, both at like the micro level, the individual level, as well as how that aggregates to societal change, which a lot of people have been studying these days thinking about things like polarization and misinformation and just how new tech is influencing our society broadly and democracy and other huge societal shifts that we're seeing in the world.
Starting point is 00:11:54 And at the time, Nate and I were very interested in thinking about the proliferation of all these virtual assistants. So we were looking at Siri and Alexa. In fact, in the marketing literature there were a set of papers that came out around the same time, and they were all kind of concerned about the fact that a lot of people had these personal virtual assistants that they could take with them anywhere. They were on their phones and they could tell them to do anything they wanted. And they would yell these orders to their virtual assistants, and their virtual assistants would immediately do it.
Starting point is 00:12:34 And the virtual assistants were usually female voices. And so we thought there might be some interesting psychology going on in this. Some of the papers that came out, in fact, were concerned about children growing up with virtual assistants, learning to be rude to them, and what that would do to politeness in society. We were more interested in the feeling of power that it might give you: if you carry these virtual assistants around in your pocket, that might lead people to have maybe an almost inflated sense of power, though part of it could be
Starting point is 00:13:10 real. So we differentiate between the subjective and the objective sources of power. And we're really just more looking at people's subjective sense of do they feel like they have power. And there's a long line of research that finds that when people feel like they have power, that puts them into more of a goal orientation. So they're more likely to act rapidly. They make quick decisions.
Starting point is 00:13:37 They tend to be more instrumental and less relationship focused. They may be more overconfident in their decision making. So power can lead to this like inflated sense of self and changes the ways in which people behave in these systematic ways. And most of that research had looked at real instantiations of power, like people having resources and people having other humans that were doing things for them. And we thought, well, maybe just like the feeling of being powerful with virtual assistants might lead to some of these consequences. But we actually theorized that not just any interaction
Starting point is 00:14:12 with the virtual assistant would make people necessarily feel powerful. We thought particularly if the virtual assistant was humanized. So if it was the case that people engage with a virtual assistant and see it as being somewhat human-like, then perhaps they would show some of these consequences of power that they would become higher in their goal orientation and instrumentality. And so we did find that, and it's interesting to think even how we were considering humanization back then, because now, of course, as you
Starting point is 00:14:45 mentioned, there are so many more types of virtual agents that are out in the world. And they're not necessarily just assistants anymore either. We haven't tested this in ChatGPT, for example. I don't know if people, when they engage with ChatGPT, see it as being an assistant for them or see it differently. I know a lot of people who, just anecdotally, will say that when they engage with ChatGPT, they try to be very respectful and very kind, because you never know when the machine overlords
Starting point is 00:15:14 are going to take over. You know, so they probably are seeing themselves as being more low power, right? I don't know, subjectively, how that would work with certain virtual agents that are out in the world now. But I do know that if people see the virtual agent as an assistant, like it's there to serve you, and they humanize it, then I think we would expect to see these results of goal orientation. Now the humanization piece I mentioned is interesting too, because at the time we were thinking about humanization as being more about, for example, whether you interact with it as if it's a human. Does it talk to you? Can you talk back to it, as opposed to writing? Does it have an avatar, some sort of face that you can see? And now I think there's a lot more sophistication in terms of humanization.
Starting point is 00:16:08 Research now suggests that for most LLMs, like ChatGPT and other ones, most people cannot differentiate them from a human; they pass what we call the Turing test. So in the abstract, in isolation, if you just give people the responses, they can't tell whether it is a human or not with any accuracy. They're essentially at the level where they are using language to the degree that a human would.
Starting point is 00:16:40 And I do think that voice-to-voice interaction is still fundamentally humanizing, and I have some other research on this. So I think that voice-to-voice will make people see agents as being more human-like. Language, yes, we already know that the LLMs are at the level of a human. And then we've been studying other random cues to humanness that exist, especially when you're engaging in text-based online communication with an ambiguous agent. For example, another cue that you might not expect is whether or not it makes typos and corrects those typos.
Starting point is 00:17:17 so it's interesting like typos in general are kind of dehumanizing. When you see a type, you're like, oh that, you know, it's not very competent. Whenever the agent is, if you imagine it's possible that it could be some sort of chat bot or some sort of LLM, and it's making a lot of typos, perhaps you just think it's like a poorly programmed chat bot. But what we found is that when you're having a synchronous back and forth conversation, like for example, with customer service agents, like on Amazon or something, and they make a typo and then they correct that typo, then people are really likely to think it must be a human.
Starting point is 00:17:55 And that's because I think people have expectations that they're bringing to bear regarding the humanness of the agents that they're interacting with and the programming of different chatbots and what they expect to be in the programming or not. And so they're not expecting that a typo that's corrected will be something that most companies would program into their chatbots. It also signals something about like having an active mind, that there's like a mind, a human-like mind on the other end that is monitoring the conversation and the errors and correcting their own errors. So that just really signals
Starting point is 00:18:30 humanness. We're also kind of playing with other things; there are other cues that people might take to signal humanness. Like perhaps if you have a really overly effusive customer service agent that uses a lot of exclamation marks and things, you're like, okay, that seems like it's probably a human, because why would a chatbot do that? So those are new things that are happening in the world right now. So are you carrying on with your original research from 2020? Today, with all the new developments, are you still studying this?
Starting point is 00:19:05 If you are, what's your status? What's your observation? Yeah, we really were mostly just theorizing, even in the 2020 article. And I think the theory would still hold: when people feel like they have more power because they're engaging with a virtual agent that's humanized, that's when they're going to engage in the more goal-oriented type of behavior that we generally see from higher-power people. But when they don't perceive the virtual agent to be their assistant, and so they don't feel like they have power, or if they see it as their assistant but it's not humanized, then I don't think we would see the same
Starting point is 00:19:51 results. So I would predict that the theory would still hold, but we have not tested it with some of the newer technology that exists. I would love for anyone out there who wants to study this to reach out to me so I can test it further. Now, let me share a bit of my viewpoint as a user. Yes, I use ChatGPT sometimes. I don't have that conscious feeling of power when I use it. Do I see it as an assistant? Honestly, I see it as a colleague, so to speak.
Starting point is 00:20:26 Although I find this colleague a lot of times provides me with a huge degree of confirmation bias. Whatever I say: oh yeah, that's right, you can think of it this way, and all that. I'm very conscious about confirmation bias when I use ChatGPT. When I ask it questions, I try to get it to help me figure things out, or maybe write something more for me to give more inspiration and creativity, and it keeps coming back with the same idea.
Starting point is 00:20:59 Eventually I say, that's not working. I would imagine if I were talking to a human colleague, I might be more careful in terms of the language I use: am I saying anything that may upset you? But I still see it as a machine, and as of now, the emotional aspect of it is not so human yet. So that's why I don't see it just as an assistant.
Starting point is 00:21:28 I would take it more as an advisor, depending on the situation. Yeah, I share your intuition that it might be a bit more nuanced with ChatGPT. I think when we wrote this article in 2020, we were envisioning a future in which people would just have armies of virtual assistants.
Starting point is 00:21:44 Like maybe your house is just filled with robots that are there to serve you, and they're very humanized. And so we were like, what is this going to do to people's psychology and their minds? That vision of the future hasn't really played out yet. I guess it's still possible, who knows? But I think you're right; I don't think people probably see ChatGPT as necessarily just being their servant per se.
Starting point is 00:22:08 If anything, there's maybe more of a sense of uncertainty about where the power dynamic really lies in that relationship. Well, if I structure the questions well, I must say it gives me some ideas, as if I'm talking to a fairly intelligent person, and then we keep communicating. This kind of interaction or conversation, honestly, is sometimes more interesting than talking to a human who may not have any sense of independent thinking.
Starting point is 00:22:40 I do see the value in using the machine, a highly intelligent machine, with me as the human also being aware of what kind of biases I may face if I use this tool. Just be aware of that, and be mindful not to be distracted or get too carried away by it. So far, this conversation, this interaction, for me, is still manageable. But then I watched a video posted by an adjunct professor of entrepreneurship from Chicago Booth. The topic is why AI may be your best astrologist.
Starting point is 00:23:24 I know you work with and teach a lot of MBAs and executives. For people like us, in executive decision-making, could AI perhaps be one of our best astrologists? Yeah, that's a great question. By the way, while you were talking, I was just thinking about how it
Starting point is 00:23:46 would be so interesting: one of the potential concerns of having people feel like they're high power because they have all these virtual assistants working for them is that people in really high power positions can get this very inflated sense of self, become overconfident, and make their decisions too quickly. And so you could imagine that perhaps companies might even want to design their virtual assistants in a way that keeps people in check, for example by pushing back against them and making the power dynamic a little bit less clear.
Starting point is 00:24:25 So maybe people might actually appreciate it if their virtual agents gave them a little bit of sass, a little bit of pushback. That would be something really fun to play around with in terms of design. But to get to your bigger question about the extent to which people, particularly leaders, are using AI wisely in their decision-making: I think the principle to keep in mind is that AI needs to complement and improve our decision-making. It shouldn't really substitute for it.
Starting point is 00:24:59 And we've seen some pushback against this already. People have a strong sense of when it's more or less appropriate for AI to be making decisions on their behalf. And there's a long line of literature on what we call algorithm aversion versus algorithm appreciation, and it is changing over time as well. For one example, hiring decisions. This is one in which people very strongly believe that there should be a human decision maker involved at the high level, even if some parts of the decision-making process are driven by AI. Google famously got into trouble by pretty much automating their entire hiring and promotion process, and there was a lot of rebellion among the employees
Starting point is 00:25:53 and job candidates regarding the algorithms: things that, ostensibly, the algorithm inputs or calculations or weighting functions weren't taking into account properly, and just a sense that humans need to be involved in the process. And so they changed their process. There's a famous case study on this. They held an annual retreat in which algorithms were still involved. There was still a lot of AI decision-making happening, but humans were involved as well. There were humans at the retreat going through the data and the algorithms' recommendations and making decisions as a function of those.
Starting point is 00:26:31 So they were using that as an input into their decision-making process, but the algorithms weren't determining the final outcomes. And that made people feel a lot better. Hiring, like I said, is a domain in which people do think there should be a human involved. There are other domains where people are okay with just taking the algorithm's output. For example, it didn't used to be the case, but now think about musical selection, music preferences. People are pretty much happy using Spotify's algorithm to select most
Starting point is 00:27:02 of their music, even though historically music has been seen as something associated with human sentiment, kind of emotional and artistic. But that's one where people are much more willing to just take an algorithm. I think the big concern people tend to have is when there's the potential for something to be unfair, or there's a high-stakes decision, in which case a lot of times the algorithms are operating in this kind of black-box fashion, and people don't understand exactly all the machine learning that's going on within them. So there's this concern that the outcome may not be fair or may not be warranted, or that if it's really high stakes,
Starting point is 00:27:45 then we should know exactly every single input and every single calculation that's being performed, which I think is reasonable. But at the same time, I've heard some really compelling arguments that people's psychology is going to hold them back from being able to reap all the benefits of the algorithm. So you can imagine, for instance, that this is a really high-stakes decision.
Starting point is 00:28:09 And so I think it's incredibly important to know exactly all of the calculations, the inputs that go into the decision, and therefore I don't want to use an algorithm because I don't know exactly how it works. But the algorithm still may be way more sophisticated and able to do a lot more than a human could. So by choosing not to use the algorithm there, I'm limiting our ability to make a good decision. Those are the tricky trade-offs that we're having to navigate now.
Starting point is 00:28:39 One last question I'd like to get your insights on. You study human-to-human interaction and conversation, and we just talked about me as a human talking to and working with a machine. This human-machine interaction will become more and more common. Younger kids are going to grow up in this era, so they will be more immersed in this space. Adults were trained and grew up in an era where it was just human to human.
Starting point is 00:29:14 And now we are in this human-machine era. So what advice would you give to MBA students, executives, and managers on how we could make better use of our human communication skills? Or, if you had to, could you highlight a couple of premium human qualities, human skills, that we should hold on to? That's a great question. I think that people need to learn how to use technology to their advantage in communication settings, and that they shouldn't just be thinking about what the
Starting point is 00:29:56 uniquely human elements of communication are, because those are always going to be changing. Our world is constantly changing, so it's more about how you engage with new technology in order to improve your ability to communicate. And let me give you a bunch of examples here that come from my research. One is that we now have all these different platforms at our fingertips that we can use to communicate more effectively with those around us.
Starting point is 00:30:24 It's amazing that you and I are getting to have this: you're all the way across the world from me, and we're still having this great conversation, and we're doing it through an audio-only platform here; we could also be seeing each other via video. There are technologies that have more or less synchronicity, that's the speed between when I say something and you respond. And then there are also platforms that have more or less
Starting point is 00:30:51 what we call like paralinguistic cues, which are the cues beyond the words, which include the nonverbals, like being able to see facial expression, being able to hear the tone of my voice, those are all paralinguistics. And so what we typically find in our research is that the more of these paralinguistic cues, but
Starting point is 00:31:10 particularly voice that are present in an interaction, and the more synchronous it is, the more humanizing a conversation is, and the more human like a communicator will appear. And so if you care about trying to reduce misunderstandings and have clearer mind reading and being seen as more human-like and making the best possible first impression, we would suggest that you start with the media and the modality of communication platform, whether it's like in person or video chat, that is going to be able to maximize those things as opposed to
Starting point is 00:31:42 starting with text. A lot of people actually do think they should start with text, like their cover letter, when they're trying to make a good impression on recruiters, for instance. And we actually find that the elevator pitch is much more effective, even controlling for the words that people use. I would also suggest that people be quick to switch modalities and platforms as each becomes more or less effective for them. Sometimes people get caught in this meeting culture where they're stuck in these video conversations when they don't need to work out a lot of detail and they
Starting point is 00:32:12 don't need the synchronous conversation. They can do some of it asynchronously. So it's time to get off the meeting, go into email instead, work on your to-do list independently, and then meet again later, once you've worked through some of the detail. Or, if you start out in email and you start realizing that things are more complicated than you expected, and there are some conflicts, then you should jump into Zoom.
Starting point is 00:32:35 And so I think people should just be quicker to move into a different modality or platform that serves their purposes in terms of communication. So that's one thing: the medium by which we're engaging. Another thing that we can think about is how we use communication tools to serve our interactions.
Starting point is 00:32:58 There are all these cool tools now. We can transcribe automatically as we're speaking, so that we have a searchable log of everything that we're saying. Or there are certain new startups that are developing tools that will give you sentiment analysis. So after I send an email, I can get information like: compared to most of the users in your organization, that was on the angrier side; your anger sentiment was high in that email.
Starting point is 00:33:23 Oh, I should have toned that down, or maybe I should tone it down in the next iteration. So I can take that feedback and use it. Now, I do think there's a potential downside. You might be tempted to say, oh, all of these tools sound great, why not just employ all of them? Let's transcribe everything we're saying and use the sentiment analysis that exists. But there might be a cost on the back end in terms of distraction,
Starting point is 00:33:46 because humans are only capable of engaging with so much at once. I've talked to a couple of startups now that are building these new communication platforms that are basically everything: there are words scrolling because everything is being transcribed as we speak, we can see each other, we can hear each other. All the modalities are happening at once. Again, on the one hand that sounds kind of great, but on the other hand, I think there might be a cost in terms of distraction.
Starting point is 00:34:13 As a teacher and an educator, I'm very, very aware of this; that trade-off is very salient to me. And so I do think people need to be wise in thinking about which communication tools they want to utilize, and really pay attention to the new research coming out on this. That's probably what I would like to leave people with.
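To make the email sentiment feedback Juliana describes concrete: here is a minimal sketch, in Python, of how such a tool might score a draft before sending. It assumes NLTK's off-the-shelf VADER lexicon; the function name, threshold, and warning wording are illustrative, not any specific startup's product.

```python
# Minimal sketch of the email sentiment feedback described above,
# using NLTK's VADER sentiment analyzer. The threshold and the
# wording of the feedback are illustrative assumptions.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

def review_draft(email_text: str, negativity_threshold: float = 0.3) -> str:
    """Score a draft email and flag it if it reads angrier than the threshold."""
    # polarity_scores returns 'neg', 'neu', 'pos', and 'compound' components
    scores = SentimentIntensityAnalyzer().polarity_scores(email_text)
    if scores["neg"] > negativity_threshold:
        return f"Warning: negativity {scores['neg']:.2f} is high. Consider toning it down."
    return f"Looks fine: negativity {scores['neg']:.2f}, overall tone {scores['compound']:+.2f}."

if __name__ == "__main__":
    draft = "This is completely unacceptable. Fix it now or we escalate."
    print(review_draft(draft))
```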
Starting point is 00:34:50 Thank you so much for joining us today. If you like what you heard, don't forget to subscribe to our show, leave us top-rated reviews, check out our website, and follow me on social media. I'm Vince Chan, your ambitious human host. Until next time, take care.
