Chief Change Officer - Juliana Schroeder, PhD: How to Stay Human in a World of AI Machines
Episode Date: December 11, 2024

Juliana Schroeder, Associate Professor at UC Berkeley Haas School of Business, unpacks the evolving dynamics of communication in the human-machine era. Juliana dives into the intersection of human and machine interaction, highlighting how technological advancements like generative AI are reshaping how we connect, collaborate, and convey ideas. From leveraging paralinguistic cues to mastering the art of switching between communication modalities, Juliana emphasizes the timeless value of empathy, adaptability, and emotional intelligence in navigating a world increasingly mediated by technology.

Key Highlights of Our Interview:

The Mind Behind the Machine: "AI isn't just changing technology; it's reshaping how we think, act, and perceive power. When virtual assistants act human-like, they give users a psychological boost that can even alter decision-making."

Confirmation Bias Central: "When users consult ChatGPT, it often mirrors their ideas, reinforcing their thoughts. It's a colleague that nods a lot but rarely challenges, creating a unique kind of echo chamber."

Medium Matters: "From text to video to voice, the platform you choose shapes how your message lands. Want to make a strong first impression? Skip the text and go for face-to-face, or at least a well-delivered elevator pitch."

Humanize the Experience: "Paralinguistic cues, like tone of voice and facial expressions, are what make conversations truly human. To connect, think beyond words and embrace the richness of full-spectrum communication."

High Stakes, High Scrutiny: "In critical domains like hiring, people demand transparency. The idea of an algorithm handling everything creates unease, sparking backlash when decisions feel like they emerge from a 'black box.'"

Connect with us:
Host: Vince Chan | Guest: Juliana Schroeder, PhD

Chief Change Officer: Make Change Ambitiously. Experiential Human Intelligence for Growth Progressives. Global Top 3% Podcast on Listen Notes. World's #1 Career Podcast on Apple. Top 1 in US, CA, MX, IE, HU, AT, CH, FI, JP. 2.5+ Million Downloads. 80+ Countries.
Transcript
Hi, everyone. Welcome to our show, Chief Change Officer, for change progressives in organizational and human transformation
from around the world.
Coming to us from the halls of UC Berkeley is Associate Professor and Psychologist Juliana
Schroeder. You might have noticed most of our guests have taken
quite the scenic route through their careers. Juliana, on the other hand, has kept her eyes
on one prize, digging deep into the human mind. She's now leading the charge in teaching negotiation and management to both MBA students
and seasoned executives.
Take a quick look at her website or UC Berkeley's and you will be blown away by her achievements.
We are talking a laundry list of titles, a mountain of papers and a substantial collection
of awards. And get this: she's bagged not one, but two master's degrees and two PhDs, at an age where many are still figuring
things out. I could easily spend a good 10 minutes here just running through her credentials and
all the incredible things she's achieved.
But let's be honest, I know you're here for the insights.
So while I'm skipping the long intro to save us some time, I can't recommend enough diving
into her profile yourself. Trust me, if you're even a bit of a nerd like me,
Juliana's work is a gold mine.
Juliana and I met at Chicago Booth.
She was my TA for two courses
taught by two amazing professors and social psychologists,
Nick Epley and Linda Ginzel.
I still remember the first day we met.
I was sitting next to her in the front row; the whole classroom was packed.
I didn't know she was actually my TA.
I raised my hand and answered a question.
I got the question wrong.
Then she whispered to me trying to explain the reason why.
Then we met again in Singapore.
This time I pulled her aside, asking her about reciprocity,
a very important concept in psychology and negotiation.
In my eyes, she is very sharp.
Those who know me well understand that I use this word very selectively as a compliment.
Over time, I've observed the growth of her academic career.
I told myself, I must invite her to my podcast.
So, wish granted: here we are. Let's get started.
Good afternoon, Juliana.
Thank you so much for having me, Vince.
Good afternoon.
Let's start with a brief introduction of your background.
For the benefit of the audience, I met Juliana when I was at Chicago Booth.
Yeah, I am an associate professor
in the management of organizations
at the UC Berkeley Haas School of Business.
And by the way, I'm incredibly impressed
that you have kept in touch for more than 10 years
since I was a teaching assistant
back when I was doing my PhD at Chicago Booth. In my high school days, I thought that I was going to be a hard scientist.
And then when I got to college, I took some social science classes.
I took psychology and economics and I just completely fell in love with them.
I just think it's fascinating to be able to better understand
how people think and feel.
They kind of say that research is "me-search." And so I like to study the things that I find fascinating and challenging
and that are kind of hard for me. So I study things like decision making and negotiations
and persuasion. And I'm an experimentalist, which means that I run experiments on people
to better understand counterfactual
worlds: what would happen if people lived their lives in this condition versus that condition.
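To make the logic concrete, here is a minimal sketch, in Python, of the kind of two-condition comparison an experimentalist runs. It is an invented illustration, not one of Juliana's actual studies; all numbers, labels, and effect sizes are made up.

```python
# Hypothetical two-condition experiment: participants are randomly assigned
# to one of two "counterfactual worlds" and their outcomes are compared.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated outcomes, e.g., 1-7 ratings of how human-like a communicator seemed.
control = rng.normal(loc=4.0, scale=1.2, size=100)    # e.g., text-only condition
treatment = rng.normal(loc=4.6, scale=1.2, size=100)  # e.g., voice condition

# Compare the two conditions with an independent-samples t-test.
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"mean difference = {treatment.mean() - control.mean():.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Random assignment is what licenses the counterfactual reading: the only systematic difference between the two groups is the condition itself.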
I checked out your personal website; you have published a lot of papers over time. Like you said, you study power, negotiation, decision making. I was wondering, when you were in the master's and PhD programs and thinking of choosing specific areas of research, why you chose language and mind perception. What was so fascinating about those areas that you decided, well, I really, really want to go deep, to become a deep thinker, researcher, and teacher in those areas?
That's a great question because psychology is so broad.
There's so many different aspects of human bias and decision making and behavior that
you could study.
But to me, I kept coming back to the fact that we live in a social world and, you know,
man is a social animal.
And so all of our society kind of rests on having this cooperative function with those that are around us.
And that involves having to engage with other people effectively and productively.
And so I see the umbrella of all of my research as being around mind perception, which is how we come to perceive and understand the minds of those around us. And this is a really fascinating
topic because, of course, we can't just directly read other people's minds. And if we could,
the world might be kind of a mess. You can imagine that that could end up leading to
all sorts of problems and issues. And it's good that we are allowed to keep secrets from each other.
But the fact that we don't have very much insight can lead to challenges as well.
Because sometimes we have to make these guesses at what other people are thinking
and feeling and there are systematic ways in which we can go astray in that.
And I basically study all the different building blocks and how people come to make inferences about others' minds.
I think about both the top-down and bottom-up influences
on people's mind reading and mind perception.
The top-down is like I bring to bear beliefs
about the world and stereotypes about certain people.
So the very first time I might've met you
and talked to you, Vince,
I'll have certain beliefs in my mind, and I immediately start forming these inferences about you.
They happen in this kind of split second.
It might be based on the way you look or your accent, where you're from. And then at the same time, the longer I engage with you, say we're having an actual back-and-forth conversation, which might be synchronous or asynchronous,
I'm starting to modify kind of those overall beliefs and stereotypes based
on like this bottom up feedback I'm getting regarding your specific
characteristics.
So what you're actually saying to me, how you're saying it, kind of your non-verbals and your verbals together. And I'm integrating all of that information in my mind in this really fluid, amazing way to come up with an overall belief about you, or belief system about you.
One thing before we deep dive into your research areas. While you're talking about trying to understand the minds of other people, I've always wondered how psychologists try to understand their own psychology. You, as a living human, how do you perceive and figure out your own psychology? Does studying this make you smarter, or in a sense more complicated, in figuring out your own psychological state of mind when something exciting or something bad happens?
Yeah, that's such a great question, Vince.
But I would say that I hope after having studied this for so long, that I do have
more insight, not just into how we engage with other minds, but also how we
engage with our own minds.
Sometimes we focus on the differential processes that are involved in trying to
read other people's minds as compared to trying to recognize and understand our
own minds.
Of course, when you're thinking about your own mind, the primary way in which
you engage is just through introspection.
You kind of introspect like what am I feeling and what am I thinking right now?
But there is some really interesting research in psychology that has pointed to
the limits of people's own introspection and their overconfidence when it
comes to their own introspection.
So they might get a sense that oh, I know exactly why I made that decision.
But sometimes they don't know the factor that actually influenced them.
It might even be something in the environment that was outside of their explicit consciousness
that was swaying them.
And the experimenters know this because they manipulated that factor.
But people still have the sense that they know why they made the decision because they
can come up with some sort of post-hoc rationalization for why they did it.
So introspection sometimes fails. We have the sense that we know ourselves, that we know our own minds, but that doesn't necessarily mean that we truly do.
And so I think it's very interesting to think about the ways in which we sometimes fail
when we're trying to read other people, but also the ways in which we sometimes fail when
we're trying to understand ourselves.
And I think there are some parallels and some ways in which the processes are different
that I've studied.
Now, there's one area, or in particular one paper, that interested me when I did my research for this interview. This paper was published in 2020. It's called "Power and Decision Making: New Directions for Research in the Age of Artificial Intelligence." Now, that's 2020, before we had ChatGPT and many of the other AI tools we have today.
So can you tell us a bit more about your argument
for that paper back then?
Thank you for reading that paper.
And you're right, it's a bit dated now.
It's four years old, so funny.
I wrote that with my co-author, Nate Fast.
Together we direct an institute
called the Psychology of Technology Institute.
And so we have been very interested in better understanding the psychology behind how people
come to adapt and engage with and even design different forms of new technology with a particular
focus on AI, as well as bidirectionally how technology changes our psychology and how
technology has been changing our minds, both at the micro level, the individual level, as well as how that aggregates to societal change,
which a lot of people have been studying these days thinking about things like polarization and misinformation
and just how new tech is influencing our society broadly,
and democracy and other huge societal shifts that we're seeing in the world. And at the time, Nate and I were very interested
in thinking about the proliferation
of all these virtual assistants.
So we were looking at Siri and Alexa. In fact, in the marketing literature, there was a set of papers that came out around the same time, and they were all kind of concerned about the fact that a lot of people had these personal virtual assistants that they could take with them anywhere.
They were on their phones and they could tell them to do anything they wanted.
And they would yell these orders to their virtual assistants and their virtual assistants
would immediately do anything they wanted.
And the virtual assistants were usually female voices.
And so we thought there might be some interesting psychology going on in this.
And some of the papers that came out in fact were concerned about children growing up with
virtual assistants and learning to be rude to their virtual assistants and what that
would do to politeness and society. And we were more interested in the feeling of power that it might give you: if you carry these virtual assistants around in your pocket, that might lead you to a sense of power, maybe almost an inflated sense of it, though part of it could be real.
So we differentiate between the subjective and the objective sources of power,
and we're really just more looking at people's subjective sense. So do they feel like they have
power? And there's a long line of research that finds that when people feel like they have power,
that puts them into more of a goal orientation. So they're more likely to act rapidly.
They make quick decisions.
They tend to be more instrumental and less relationship focused.
They may be more overconfident in their decision making.
So power can lead to this like inflated sense of self and changes the ways in which people
behave in these systematic ways.
And most of that research had looked at real instantiations of power,
like people having resources and people having other humans that were doing things for them.
And we thought, well, maybe just like the feeling of being powerful with virtual assistants
might lead to some of these consequences.
But we actually theorized that not just any interaction with the virtual assistant
would make people necessarily feel powerful. We thought particularly if the virtual assistant
was humanized. So if it was the case that people engage with a virtual assistant
and see it as being somewhat human-like, then perhaps they would show some of
these consequences of power that they would become higher in their goal
orientation and instrumentality.
And so we did find that. And it's interesting to think about how we were considering humanization back then, because now, of course, as you mentioned, there are so many more types of virtual agents out in the world. And they're not necessarily just assistants anymore either. We haven't tested this with ChatGPT, for example; I don't know whether people who engage with ChatGPT see it as an assistant for them or see it as something else.
I know a lot of people who, just anecdotally, will say that when they engage with ChatGPT, they try to be very respectful and very kind, because you
never know when the machine overlords
are going to take over.
You know, so they probably are seeing themselves as being more low power, right?
I don't know like subjectively how that would work with certain virtual agents that are
out in the world now, but I do know that if people see the virtual agent as an assistant,
like they're there to serve you and they humanize it, then I
think we would expect to see these results of goal orientation. Now the humanization piece I mentioned
is interesting too because at the time we were thinking about humanization as being more about,
for example, whether you interact with it as if it's like a human, like does it talk to you? Can you talk back to it?
As opposed to, you know, writing, does it have an avatar with it?
Like so there'd be some sort of face that you can see.
And now I think there's a lot more sophistication in terms of humanization.
Even so, research now suggests that for most LLMs, like ChatGPT and other ones, most people cannot differentiate them from a human; they now pass what we call the Turing test. So in the abstract, in isolation, if you just give people the responses, they can't tell whether it is a human or not with any accuracy.
So they're essentially at the level where they are using
language to the degree that a human would.
And I do think that voice-to-voice interaction is still fundamentally humanizing, and I have some other research on this.
So I think that voice to voice will make people
see agents as being more human-like.
With language, yes, we already know that the LLMs are at the level of a human. And then we've been studying other cues to humanness that exist, especially when you're engaging in text-based online communication with an ambiguous agent.
So for example, another cue we found is whether or not it makes typos and corrects those typos. It's interesting: typos in general are kind of dehumanizing. When you see a typo, you think, oh, whoever this agent is, it's not very competent. If you imagine it's possible that it could be some sort of chatbot or some sort of LLM, and it's making a lot of typos, perhaps you just think it's a poorly programmed chatbot.
But what we found is that when you're having a synchronous
back and forth conversation, like for example,
with customer service agents, like on Amazon or something,
and they make a typo and then they correct that typo,
then people are really likely to think it must be a human.
And that's because I think people have expectations that they're bringing to bear regarding the humanness of the agents that they're interacting with and
the programming of different chatbots and what they expect to be in the
programming or not.
And so they're not expecting that a typo that's corrected will be something that most companies would program into their chatbots.
It also signals something about like having an active mind, that there's like a mind, a human-like mind on the other end that is monitoring the conversation and the errors and correcting their own errors.
So that just really signals humanness. We're also playing with other things; there are other cues that people might take to signal humanness. Perhaps if you have a really overly effusive customer service agent that uses a lot of exclamation marks and things, you think, okay, that seems like it's probably a human, because why would a chatbot do that? So those are new things that are happening in the world right now.
So are you carrying on with your original research from back in 2020? Today, with all the new developments, are you still studying this?
If you are, what's your status?
What's your observation?
Yeah, we really were mostly just theorizing, even in the 2020 article. And I think the theory would still hold: when people feel like they have more power because they're engaging with a virtual agent that's humanized, that's when they're going to engage in the more goal-oriented type of behavior that we generally see from higher-power people. But when they are not perceiving the virtual agent to be their assistant,
and then they don't feel like they have power, or if they see it as their assistant, but
it's not humanized, then I don't think that we would see the same results. So I would
predict that the theory would still hold,
but we have not tested it with some of the newer technology that exists.
So I would love for anyone out there who wants to study this
to reach out to me so I can test it further.
Now, let me share a bit of my user viewpoint. Yes, I use ChatGPT sometimes. I don't have that conscious feeling of power when I use it. Do I see it as an assistant? Honestly, I see it as a colleague, so to speak. Although I have found this
colleague a lot of times provides me with a huge degree of confirmation bias.
Whatever I say, oh yeah, that's right.
You can think of it this way and all that.
I'm very conscious about confirmation bias when I use ChatGPT.
When I ask it questions, I try to get it to help me figure things out, or maybe write something for me, to give me more inspiration and creativity, and it keeps coming back with the same idea. Eventually I say, that's not working.
I would imagine that if I were talking to a human colleague, I might be more careful in terms of the language I use: am I saying anything that may upset you?
But I still see it as a machine.
And as of now, the emotional aspect of it is not so human yet.
So that's why I don't see it just as an assistant.
I would take it more like an advisor, you know, depending on the situation.
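(As an aside for listeners who want to counteract this mirroring in practice: if you use a model through an API, you can instruct it up front to push back. Here is a minimal sketch assuming the OpenAI Python client; the model name and prompts are illustrative choices, not something recommended in the interview.)

```python
# Hypothetical sketch: asking an LLM to challenge rather than mirror an idea.
# Assumes the OpenAI Python client (openai >= 1.0) and OPENAI_API_KEY set.
from openai import OpenAI

client = OpenAI()

DEVILS_ADVOCATE = (
    "You are a skeptical colleague. Do not simply agree with the user. "
    "List the strongest objections to their idea before offering any encouragement."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": DEVILS_ADVOCATE},
        {"role": "user", "content": "I think we should launch the product next month."},
    ],
)
print(response.choices[0].message.content)
```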
Yeah, I share your intuition that it might be a bit more nuanced with ChatGPT.
I think when we wrote this article in 2020, we were envisioning a future in which people would just have armies of virtual assistants.
Like maybe they're humanized like these robots.
Your house is just filled with robots that are just there
to serve you and they're very humanized.
And so we're like, what is this going to do to people's psychology and to their minds?
And that vision of the future hasn't really played out yet.
I guess it's still possible.
Who knows?
But I think you're right that I don't think people probably see ChatGPT as necessarily just being their servant per se.
If anything, you know, there's maybe more of a sense of
uncertainty about like where the power dynamic really lies in that relationship.
Well, if I structure the questions well, I must say it gives me some ideas, as if I'm talking to a fairly intelligent person, and then we keep communicating.
Then this kind of interaction or conversation sometimes honestly is more interesting than
talking to a human who may not have any sense of independent thinking.
I do see the value in using the machine, a highly intelligent machine, with me as the human also being aware of what kind of biases I may face if I use this tool.
But just be aware of that, be mindful not to be distracted
or get so carried away by that.
So far, this conversation, this interaction,
for me is still manageable.
But then I watched a video posted by an adjunct professor of entrepreneurship from Chicago Booth. The topic is why AI may be your best astrologist. I know you work with and teach a lot of MBAs and executives. For people like us, in executive decision-making, do you see that perhaps AI could be one of our best astrologists?
Yeah, that's a great question.
By the way, while you were talking, I was thinking about something interesting. One potential concern with having people feel high-power, with all these virtual assistants working for them, is that people in really high-power positions can get this very inflated sense of self; they become overconfident and make their decisions too quickly.
And so you could imagine that perhaps companies might even want to design their virtual assistants in a way that keeps people in check, for example by pushing back against them and making the power dynamic a little less clear. So maybe people might actually appreciate it if their virtual agents gave them a little bit of sass, a little bit of pushback.
So that'd be just something really fun to play around with in terms of design.
But to get to your bigger question about the extent to which people, particularly leaders, are using AI wisely in their decision making, I think the principle to keep in mind is
that AI needs to complement and improve our decision-making.
It shouldn't really substitute for it.
And we've seen some pushback against this already.
People have a strong sense of when it's more or less appropriate for AI to be making decisions on their behalf. And there's a long line of literature
on what we call algorithm aversion
versus algorithm appreciation.
And it is changing over time as well.
So take one example: hiring decisions.
So this is one in which people very strongly believe
that there should be a human decision maker
that is involved at the high level,
even if there's some parts of the decision-making process that are driven by AI.
Google famously got into trouble by pretty much automating their entire hiring and promotion process. There was a lot of rebellion among the employees and job candidates regarding the algorithms: a sense that, ostensibly, the algorithm inputs or calculations or weighting functions weren't taking things into account properly, and that humans needed to be involved in the process. So they changed their process, and there's a famous case study on this, to have an annual retreat in which algorithms were still involved.
There was still a lot of AI decision making happening, but humans were involved as well.
And so there were humans there at the retreat and they were going through the data and going
through the algorithms recommendations and making decisions as a function of those.
So they were using those as an input into their decision-making process, but they weren't determining the final outcomes.
And so that made people feel a lot better.
And like I said, hiring is a domain in which people do think there should be a human involved.
There are other domains where people are okay with just taking the algorithms.
For example, it didn't used to be the case, but now think about musical selection, music preferences. People are pretty much happy using Spotify's algorithm to select most of their music, even though historically music has been seen as something associated with human sentiment, kind of emotional and artistic.
But that's one where people are much more likely to just be willing to take an algorithm.
I think the big concern people tend to have is when there's the potential for something to be unfair, or it's a high-stakes decision. In those cases, a lot of times the algorithms are operating in this kind of black-box fashion, and people don't understand exactly all the machine learning that's going on within them. So there's this concern that the outcome may not be fair or may not be warranted; or, if it's really high stakes, that we should know exactly every single input and every single calculation that's being performed, which I think is reasonable. But at the same time, I've heard some really compelling arguments that
people's psychology is going to hold them back from being able to reap
all the benefits of the algorithm.
So you can imagine, for instance: this is a really high-stakes decision, so I think it's incredibly important to know exactly all of the calculations and inputs that go into it, and therefore I don't want to use an algorithm, because I don't know exactly how it works. But the algorithm may still be way more sophisticated and able to do a lot more than the human could, and so by choosing not to use the algorithm, I'm limiting our ability to make a good decision. Those are the tricky trade-offs that we're having to navigate now.
Last question; I'd like to get your insights. You study human-to-human interaction and conversation, and we just talked about me, as a human, talking to and working with a machine. This human-machine interaction will become more and more common. Younger kids are going to grow up in this era, so they will be more immersed in this space. Adults were trained and grew up in an era of purely human-to-human interaction, and now we are in this human-machine era. So what advice would you give to MBA students, executives, and managers on how we could make better use of our human communication skills? Or, if you had to, what couple of premium human qualities, human skills, should we hold on to?
That's a great question. I think that people need to learn how to use technology to their advantage in communication settings. They shouldn't just be thinking about what the uniquely human elements of communication are, because those are always going to be changing.
I think that our world is constantly changing.
So it's more about how do you engage with new technology
in order to improve your abilities to communicate.
And let me give you a bunch of examples here
that come from my research.
So one is that we now have all these different platforms at our fingertips
that we can use to more effectively communicate with those around us.
It's amazing that you and I are getting to have this: you're all the way across the world from me, and we're still having this great conversation, and we're doing it through an audio-only platform here. We could also
be seeing each other via video. And so there are technologies that have more or less synchronicity, that is, the speed between when I say something and you respond. And then there are also platforms that have more or fewer of what we call paralinguistic cues, which are the cues beyond the words,
which include the nonverbals, like being able to see
facial expression, being able to hear the tone of my voice,
those are all paralinguistics.
And so what we typically find in our research
is that the more of these paralinguistic cues,
but particularly voice, that are present in an interaction
and the more synchronous it is, the more humanizing a conversation is,
and the more human-like a communicator will appear.
And so if you care about trying to reduce misunderstandings and have
clearer mind reading and being seen as more human-like and making the best
possible first impression, we would suggest that you start with a medium, a modality, a communication platform, whether it's in person or video chat, that is going to maximize those things, as opposed to starting with text. A lot of people actually think they should start with text, like a cover letter, when they're trying to make a good impression on recruiters, for instance; we actually find that the elevator pitch is much more effective, even controlling for the words that people use. I would also suggest that people should be quick to
switch modalities and platforms as they become more or less effective for them. So sometimes people get caught in this meeting culture where they're stuck in these video conversations when they don't need to work out a lot of detail and don't need the synchronous conversation; they could do some of it asynchronously. So it's time to get off the meeting and just go into
email instead and work on your to-do list independently and then you can meet again later.
All right, and so you can jump off once you've kind of worked through some of the detail.
Or if you start out in email and you start realizing that things are more complicated
than you expect and there's some conflict, then you should jump into Zoom.
And so I think people should just be quicker to move into a different modality
or platform that serves their purposes in terms of communication.
All right.
So that's just one thing, the medium by which we're engaging. And then another thing that we can think about is how we use communication tools to serve our interactions.
Like there are all these cool tools now, like we can transcribe automatically as
we're speaking so that we have a searchable log of everything that we're saying.
Or there are certain new startups that are developing tools that will
give you sentiment analysis.
So after I send an email, I can get information like: compared to most of the users in your organization, that was on the angrier side; your anger sentiment was high in that email. Oh, I should have toned that down, or maybe I should tone it down in the next iteration. And so I can take that feedback and use it.
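To illustrate what such a tool might do under the hood, here is a minimal sketch, not any specific startup's product, that scores a draft email with NLTK's off-the-shelf VADER sentiment analyzer and compares it to an organizational baseline; the baseline value is invented for illustration.

```python
# Hypothetical sketch of email sentiment feedback before sending.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

ORG_BASELINE_NEG = 0.05  # invented org-wide average negativity score

draft = "This is the third time I've asked. Fix the report immediately."
scores = analyzer.polarity_scores(draft)  # returns neg/neu/pos/compound scores

if scores["neg"] > ORG_BASELINE_NEG:
    print(f"Heads up: negativity {scores['neg']:.2f} is above the org "
          f"baseline of {ORG_BASELINE_NEG:.2f}. Consider toning it down.")
```

A real product would presumably compute the baseline from the organization's actual email history rather than hard-coding a constant.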
Now, I do think there's a potential downside.
So you might just be tempted to like say, oh, all of these tools sound great. Why not just employ all of them? Like let's transcribe
everything we're saying and let's like use the sentiment analysis that exists. But there might
be a cost on the back end in terms of distraction, because humans are only capable of engaging in so much
at once. And I've talked to a couple of startups now that are building
these new communication platforms that are basically everything.
So there are words scrolling, because everything is being transcribed as we speak.
We can see each other.
We can hear each other.
It's like all the modalities are happening at once.
And like again, on the one hand, that sounds kind of great.
But on the other hand, I think there might be a cost in terms of distraction. As a teacher and an educator, I'm very, very aware of this, and that trade-off is very salient to me.
And so I do think people need to be wise in thinking about which communication tools they
want to utilize and really pay attention to the new research coming out on this.
And so that's probably what I would like to leave people with.
Thank you so much for joining us today. If you like what you heard, don't forget to subscribe to our show, leave us top-rated reviews, check out our website, and follow me on social media.
I'm Vince Chan, your ambitious human host.
Until next time, take care.