ACM ByteCast - Rosalind Picard - Episode 51
Episode Date: April 3, 2024
In this episode of ACM ByteCast, our special guest host Scott Hanselman (of The Hanselminutes Podcast) welcomes ACM Fellow Rosalind Picard, a scientist, inventor, engineer, and faculty member of MIT's Media Lab, where she is also Founder and Director of the Affective Computing Research Group. She is the author of the book Affective Computing, and has founded several companies in the space of affective computing, including the startups Affectiva and Empatica, Inc. A named inventor on more than 100 patents, Rosalind is a member of the National Academy of Engineering and a Fellow of the National Academy of Inventors. Her contributions include wearable and non-contact sensors, algorithms, and systems for sensing, recognizing, and responding respectfully to human affective information. Her inventions have applications in autism, epilepsy, depression, PTSD, sleep, stress, dementia, autonomic nervous system disorders, human and machine learning, health behavior change, market research, customer service, and human-computer interaction, and are in use by thousands of research teams worldwide as well as in many products and services. In the episode, Rosalind talks about her work with the Affective Computing Research Group, and clarifies the meaning of "affective" in the context of her research. Scott and Rosalind discuss how her training as an electrical engineer with a background in computer architecture and signal processing drew her to studying emotions and health indicators. They also talk about the importance of data accuracy, the implications of machine learning and language models for her field, and privacy and consent when it comes to reading into people's emotional states.
Transcript
This is ACM ByteCast, a podcast series from the Association for Computing Machinery,
the world's largest education and scientific computing society.
We talk to researchers, practitioners, and innovators who are at the intersection of
computing research and practice.
They share their experiences, the lessons they've learned, and their own visions for
the future of computing.
I'm your host today, Scott Hanselman.
Hi, I'm Scott Hanselman.
This is another episode of Hanselminutes on behalf of the ACM ByteCast.
This joint podcast is a cooperation between ByteCast, the ACM, and my podcast, Hanselminutes.
And today, I'm chatting with Dr. Rosalind Picard.
She's a scientist, inventor, and engineer. She's a member of the faculty of MIT's Media Lab and
the founder and director of the Affective Computing Research Group at the MIT Media Lab,
and the founder of several companies in the space of affective computing. How are you?
I'm well, thank you. How are you doing?
I'm doing okay. I'm trying to be present, and when people say, like, how are you,
that's a question where you have to kind of go, oh, I'm fine.
That could just be the nice social thing that you say.
Or you could go and say, you know, the darkness persists, but so do I.
Or I'm feeling this way, I'm feeling that way.
Being in touch with your feelings, it requires a lot of introspection.
I wonder if I want to sit down at my computer one day and have the computer ask me how I am and then maybe disagree with me.
Is that something that I want? I could say, well, I don't know. Are you okay?
Some want it, some don't, and it may depend very much on how you're feeling that day.
It seems like how we interact with computers can very quickly enter what I call the uncanny valley
of AI, where everything's amazing and it's getting better and better. And then it's like,
oh, that's creepy. How do you, when deciding what a computer should know, what an algorithm should
understand, when does it reach like, that's wonderful and joyful and delightful and that's
not okay. You shouldn't have known that about me. Do you think about those things in your research
and work? Oh, yes. Yeah. We've backed off from some research where it felt too creepy or too worrisome or where talking to people about the misuses of the technology made us pause. There's a feeling of like, I don't know if
I want to interact with a piece of hardware with an algorithm in that way yet until we kind of try
it. But once you've proven that it can be done, that means that unscrupulous people or others
could choose to do that. And what can we do as consumers? I guess the only thing I can do is
vote with my feet. Voting with feet is really powerful. Voting also with dangling
something, it's not really voting, it's distracting, with something that's better to do,
I think is sometimes even more powerful, right? Like, yeah, you can work on that, which we're
all worried might not be such a good thing to do. Or here's a better thing to work on, right? This
is even more challenging, it's even more interesting. And it has incredibly good ways it could be improving people's lives.
So distracting from an iffy topic to one that we also find hard and challenging and fun
and interesting, but really important for people's lives is, I think, a good strategy
also.
So the group that you founded and work in is the affective, A-F-F, affective computing group. And people may be hearing effective, and we want to call that out. Can you talk about what an
affect is? Because this is not the effective computing group, it is the affective computing
group. Well, as the founder of our lab said in the early days when I proposed it, he said,
affective computing, that's nicely confused with effective computing. And
hopefully it is effective also. But the original naming of it was me trying to avoid the word
emotion. I thought emotion would ruin my career. And I really wasn't interested in emotion
initially. I thought that was something that made us irrational, that it was undesirable, the last thing we wanted to get near machines.
One of the great things about machines was they weren't emotional.
But as I studied the human brain more and more and realized that the intelligence, the
flexibility, the ability to adapt to complex, unpredictable inputs in the human brain involved these emotion systems, and that they were actually helping us
be more rational, more intelligent. And I thought, oh dear, I need to figure out how to combine this
with machines, but not call it emotion. So I named it affective computing with an A. And initially,
affective meant that it included all kinds of things related to emotion, but it turns out it's a bigger umbrella term that has
emotions under it, but also has other things that theorists argue whether or not they're emotion.
But I include them in affect. Things like feeling interested, things like feeling motivated,
things like feeling bored, things like feeling frustrated. None of those were on the emotion
theorist's emotion list when I started.
I really, one of the things that I've always enjoyed about computer science is the naming part.
And I have many great memories of working at places like Intel or Nike 20, 30 years ago,
sitting around with a thesaurus, trying to find the right word. And when you find the word,
you're like, yeah, that's it. That's the noun that we're going to use for this object. And now the system just falls out. And while affect may not seem initially intuitive, the more you dig into it, the more the mouthfeel of affective computing, it just works. It's an expression of emotion,
it's gestures, it's postures, it's voice, it's the vibes, as the young people might say today.
Yeah, you got it.
Very good.
Every now and then we hit upon a name that actually works.
I was working at the time in a group that we had just named Perceptual Computing.
And so that two-word kind of like try to come up with something short that covers a lot
was influential.
But actually, it turns out perceptual computing needed affect and cognition.
It needed to understand the brain in a more complete way than the cognitive scientists
had been describing it. And actually, in fairness, some of them had said affect was a part of cognition. But I realized affect was more embodied than a lot of the cognitive theories allowed, that sometimes you can just kind of
wake up with a bodily state that makes you feel a little irritable or anxious or something.
And we don't fully understand why that biochemistry and physiology affect our feelings like that. And
then the cognition of it seems to follow the feeling in some cases. And then in other cases,
we just think about emotions,
and the emotions really are kind of cognitive. Like Marvin Minsky would say to me in the days
when I first started doing affective computing, and he was writing a book called The Emotion
Machine, he'd say, well, aren't emotions just another kind of thought? And I said, well,
they can be a kind of thought, but they're not just another kind of thought.
And Marvin Minsky, of course, was the co-founder of the AI Laboratory at MIT. He has since passed away, but did a lot of work in the space of cognition.
Now, you make an interesting point when you call out there's the expression of one's
affect, and then there's also the what's happening inside.
Like if you wake up on the wrong side of the bed and you're like, I don't know why I
feel weird today.
Maybe I'll feel better tomorrow. There's the emotional part. Like when you asked me at the beginning, how are you? I kind of did an internal inventory. But then there's my temperature, my blood sugar, all of the things that I may or may not have measurements of that might have a direct or indirect effect on my affect
that I can't necessarily measure. How much is this about emotions and things like that? And
how much of this is about measuring these other parts of our bodies, these health indicators,
heart rate, temperature? Yeah, it's a great question. When I first started working on
affective computing, I'm trained as an electrical engineer and my background's
really computer architectures and signal processing. So I was thinking about emotion
as a kind of signal and I wasn't quite sure how to get the signal initially. It wasn't just
wirelessly coming out of my feelings right into the computer. So I started exploring like, you
know, what is this anyhow? You know, do I have to draw blood? Do I have to plug something into my gut? Do I have to go invasively in the brain? And I started with what I could get. I started with multi-channel physiological measurements because there had been a little bit of work suggesting that emotions influence your skin conductance, your heart rate, your muscle tension.
So we started measuring everything we could get a sensor to stick on our bodies with.
This was in the early days at the Media Lab in the early 90s when Steve Mann and Thad Starner were there wearing amazing computers that they had been building. And so it was easy to wear
a computer and attach all of these sensors to our bodies. So we'd have 50 pounds of wearable computing on us, walking around with antennas on our heads and collecting this
physiology while getting the context around you, which is really important too, so that you could
tell the difference between if your heart rate was going up because you're walking faster,
or if your heart rate was going up because you were holding still and somebody's approaching
you that looks very threatening. As a personal point of note that you might find interesting,
it might be of use for this conversation. I am a 30-year type 1 diabetic. And as such,
I've been with my open source community on the forefront of quote-unquote wearable computing,
but only in the context of I have an open source artificial pancreas. So I have an insulin pump embedded in my arm. I have a Dexcom, but I've been putting
that data along with my open-source cohorts in databases. So I have a Mongo database with the
last 15 years of my blood sugar. And over the last 10 years with continuous glucose meters,
I have every five minutes, 24 hours a day for the last decade or so of my blood sugar. So one time I took information from
Microsoft Outlook, which has an API, and my blood sugar, and I correlated which meetings were raising my blood sugar. And I have this demo on YouTube where you can see that my most stressful meetings, if blood sugar is a leading indicator of stress in some way, are with this particular vice president at the company, that every time I go to a meeting with this person, this guy.
That is so cool.
Wow.
I would love to add some other sensors to what you're doing.
I've been wanting to do exactly that.
Actually, I had borrowed a CGM for a very stressful series of events just to look at
exactly what it sounds like you already
know. Well, so, was I doing a layperson's or poor man's affective computing without realizing it? And
I was trying to, like, figure out the correlations between these two possibly uncorrelated variables.
Yeah, absolutely. And I want to hear more about what you learned from the stress and the glucose, because my understanding is, you know, now with the technology, we can see what works for each individual, right? And to what extent this is a group thing, or there's great individual variation.
Yeah.
Definitely stress raises the blood sugar. And sometimes, you know, eating more sugar with that is actually the worst thing to do.
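For readers who want to try something like the Outlook-and-CGM correlation Scott describes above, here is a minimal sketch of the idea. The file names, column names, and the per-organizer summary are assumptions for illustration; his actual pipeline used the Outlook API and a Mongo database rather than CSV exports.

```python
# A toy version of the meetings-vs-blood-sugar correlation described above.
# Assumes two hypothetical CSV exports: calendar events (organizer, start, end)
# and CGM readings taken every five minutes (time, mg_dl).
import pandas as pd

meetings = pd.read_csv("meetings.csv", parse_dates=["start", "end"])
glucose = pd.read_csv("cgm.csv", parse_dates=["time"])

# Compare each reading against that calendar day's median ("baseline") glucose.
glucose["date"] = glucose["time"].dt.date
baseline = glucose.groupby("date")["mg_dl"].median().rename("baseline")
glucose = glucose.join(baseline, on="date")
glucose["delta"] = glucose["mg_dl"] - glucose["baseline"]

# Average the above-baseline delta during each meeting, grouped by organizer.
rows = []
for m in meetings.itertuples():
    during = glucose[(glucose["time"] >= m.start) & (glucose["time"] <= m.end)]
    if len(during):
        rows.append({"organizer": m.organizer, "mean_delta": during["delta"].mean()})

summary = (pd.DataFrame(rows).groupby("organizer")["mean_delta"]
           .mean().sort_values(ascending=False))
print(summary.head())  # organizers whose meetings coincide with the biggest rises
```

Correlation is not causation, of course: a stressful meeting, a skipped lunch, or a badly timed bolus could all show up the same way, which is exactly the individual-variation point Rosalind raises here.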
Yeah, indeed. So in doing that, I was starting to think about the quantified self and then I
learned that there's a movement called the quantified self movement, just like folks like
Steve Mann were running around wearing all kinds of sensors. There are conventions. I've had folks from the quantified self movement on the show. By observing, though, are we not maybe adding to the cognitive load? Like there is talk right now that maybe the Apple Watch may not be a great idea because it's causing people to be paranoid and overly aware of what's going on. Like by observing it, we've changed it.
There certainly are cases where people may, for example,
they want to improve their sleep and then looking at the device and looking at their sleep, they can
get more anxious about not sleeping well. And next thing you know, they're lying in bed going,
oh no, I'm not going to be able to sleep, which is the number one cause of insomnia,
happens to be fear of not being able to fall asleep. So it can be that without sort of proper
coaching around these, the devices can exacerbate some of the problems. They also, with proper
coaching, like learning that you can handle those fearful thoughts with, hey, what's the worst thing
that happens if I can't sleep? Okay, so I just lay here and rest all night with my mind awake.
Usually once you let go of that anxiety, you're asleep. So
there's just more to be learned around it, right? How to use them. They are not a silver bullet,
right? You just put this on and suddenly your health is better.
Yeah. The reason I ask is that in the space of the type one diabetics, like I'm a person with
a non-working pancreas, I feel strongly that having a continuous glucose device has a lot
of value to me. But now, because we are
a small market, there's only a small percentage of us, the people who sell these sensors are now
selling them to fitness influencers and selling them to whomever. But the regular folks, regular
Joes and Janes out there, they may not be familiar with how easily the Y-axis can be used to make someone feel like something is happening that it's not, like the great statistical lie of the Y-axis.
So I'll see people with normal blood sugar being told, oh, your blood sugar is spiking because you ate a grape.
And it's like, you know, and then again, it changes their emotion,
and it becomes this cycle of like, you don't need this.
And to the point of the Apple Watch, it is a one-lead EKG.
If you actually get a heart EKG, they're going to be a seven-lead, or, you know, does it have value? So the question I'm going to ask you is, how accurate does this stuff have to be, whether it be a camera or a sensor? Because you're building these both from the electrical hardware perspective and in the algorithms on the other side, and I'm curious, how do you think about accuracy?
It's a great question, and it varies a lot depending on the use case. When we're, say, working with a kid on the
autism spectrum and just trying to help them understand that this feeling they've been having
their whole life that they maybe didn't have a name for might be related to their skin conductance going up. And if they could learn to sense that going up before
they explode, they might get an early warning indicator and be able to do something to
self-regulate. There, a few microsiemens of error in the signal is not a big deal, right? It's just
kind of learning where they are relative to their daily baseline. However, when we are
taking a multitude of signals from the wrist, several autonomic signals, motion, temperature,
and we are analyzing patterns with the AI on the wrist in real time to alert people to a possibly
life-threatening seizure, like the most dangerous kind of seizure, what used to be called the
grand mal, but technically it's called a generalized tonic-clonic seizure, convulsive
seizure, where you lose consciousness. And usually people recover from them fine afterwards. They
don't need an ambulance. They don't need to go to the hospital. But they do need somebody there
making sure that they're in a safe position, they're on their side, there's nothing obstructing their airway, and that they don't progress into a state of apnea, which is the number one cause of death
among people who have a seizure disorder. So that case, we need a lot more accuracy
on the wrist because you don't want it just going off every time they move their wrist.
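To make that contrast concrete, here is an illustrative sketch of the low-stakes version of the idea: flagging when skin conductance drifts well above a recent baseline, as in the self-regulation example. It is emphatically not a seizure detector and bears no relation to the FDA-cleared, multi-signal algorithm discussed next; the sampling rate and thresholds are invented for illustration.

```python
# Illustrative only: a naive electrodermal activity (EDA) "check in with yourself"
# alert relative to a rolling baseline. Not a medical device and not a seizure
# detector; the 4 Hz rate and 2 microsiemens margin are arbitrary assumptions.
from collections import deque

class BaselineAlert:
    def __init__(self, window_samples=4 * 60 * 30, rise_microsiemens=2.0):
        self.history = deque(maxlen=window_samples)  # roughly 30 minutes at 4 Hz
        self.rise = rise_microsiemens

    def update(self, eda_us: float) -> bool:
        """Return True when EDA exceeds the recent median baseline by the margin."""
        self.history.append(eda_us)
        if len(self.history) < 10:  # wait until there is some baseline to compare to
            return False
        baseline = sorted(self.history)[len(self.history) // 2]  # running median
        return eda_us - baseline > self.rise

detector = BaselineAlert()
for sample in [1.1] * 20 + [5.0]:  # microsiemens, toy values
    if detector.update(sample):
        print("EDA well above baseline - maybe a moment to self-regulate")
```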
And now the company Empatica has commercialized it and has the only FDA-cleared wearable on the market that does seizure monitoring.
There's a lot of consumer devices that claim to run apps that do like little shake detectors,
but they don't pass the bar of accuracy, sensitivity, specificity, safety, cybersecurity,
a giant list, thousands of pages of tests that a device, in this case made by Empatica, has gone through to provide that kind of carefulness. So I contrast these use
cases. There's ones where you're just kind of trying to learn a correlate of something going
on in your body. And then there are other cases where you're going to take an action,
you're going to call somebody, and you're going to log this for medical purposes. And there the bar is a lot higher. We do serious amounts of testing.
I want to dig into that a little bit, because the fact that you take action based on a signal is so important versus simply noting
it. And a lot of folks who may not be trained in statistics might look at a data point and go,
look, my sugar's high. Something must be done. But what happened in the previous hour? What does
the trend look like? There's so much more to look at. In my system, in my
open source artificial pancreas, I have an FDA-cleared, prescribed sensor that has specifically
been cleared to take action on so that the insulin pump in the closed loop system doses me
automatically based on the signal, which would be different than maybe an Instagram ad for a glucose system
for an influencer who just wants a general sense of how food affects their body.
Right. I just want to know if this, I want to show this person if when they run the meeting,
they're making our blood sugar spike, right? We don't need treats brought to this meeting. We need
some fiber and veggies.
Exactly.
And exercise, lower that blood sugar, add treadmills to this meeting room when they're in charge.
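As an aside on Scott's earlier point about trends versus single readings, a rough sketch of the difference: instead of reacting to one number, fit a slope to the last hour of readings and require both a high value and a sustained rise before flagging anything. The readings and thresholds below are invented, and this is an illustration of the statistics, not dosing logic or any vendor's algorithm.

```python
# Sketch of "look at the trend, not one point." Readings are (minutes_ago, mg_dl)
# pairs at the usual 5-minute CGM cadence; values and thresholds are made up.
def hourly_trend(readings):
    """Least-squares slope in mg/dL per minute over the samples given."""
    n = len(readings)
    mean_t = sum(t for t, _ in readings) / n
    mean_g = sum(g for _, g in readings) / n
    num = sum((t - mean_t) * (g - mean_g) for t, g in readings)
    den = sum((t - mean_t) ** 2 for t, _ in readings)
    return num / den

last_hour = [(-55, 110), (-50, 112), (-45, 118), (-40, 121), (-35, 127),
             (-30, 131), (-25, 138), (-20, 142), (-15, 149), (-10, 153),
             (-5, 158), (0, 162)]
slope = hourly_trend(last_hour)
latest = last_hour[-1][1]
if latest > 150 and slope > 0.5:  # flag only if high AND rising fast
    print(f"Glucose {latest} mg/dL and rising about {slope:.1f} mg/dL per minute")
else:
    print("One high reading alone is not the whole story")
```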
So then we have to ask ourselves when we are interacting with computers and sensors and systems that are going to give us a sense of the affect and, of course, its effect on us, are we going to then try to close the loop in some way and do something with that feedback?
And if so, is it safe to do that?
Right.
And, you know, it's interesting. When I first
started teaching machine learning at MIT, I always started off with the costs of the different kinds
of decisions you'd make, right? The probability of being right for this, the cost of being wrong, and the cost actually of being right. There's a cost to being right as well as a cost to being wrong.
And over the years, a lot of the machine learning people seem to skip that decision function where you put the costs in.
They just put the error in.
But there are costs to the different kinds of errors.
And you do have to take into account these annoyance costs or these learning costs or these inconvenience costs or the convenience costs, right?
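A small worked example of the cost-sensitive decision function Rosalind describes, with invented probabilities and costs: once the costs are written down, the action with the lowest expected cost can differ from simply betting on the most probable outcome.

```python
# A minimal cost-sensitive decision rule, in the spirit of putting the costs of
# each kind of decision back into the picture rather than only the error rate.
# Probabilities and costs below are invented for illustration.
def best_action(p_event: float, costs: dict) -> str:
    """Pick the action with the lowest expected cost.

    costs[(action, truth)] is the cost of taking `action` when `truth` says
    whether the event (e.g. a dangerous episode) really happened.
    """
    expected = {}
    for action in {a for a, _ in costs}:
        expected[action] = (p_event * costs[(action, True)]
                            + (1 - p_event) * costs[(action, False)])
    return min(expected, key=expected.get)

costs = {
    ("alert", True): 0.0,   ("alert", False): 1.0,   # a false alarm is a small annoyance
    ("ignore", True): 50.0, ("ignore", False): 0.0,  # missing a real event is very costly
}
print(best_action(0.2, costs))  # -> "alert" (expected cost 0.8 vs 10.0 for ignoring)
```

Here even a 20% chance of a dangerous event is enough to justify alerting, because missing the event is fifty times costlier than an unnecessary alarm.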
Everything, when you convert it to action, has a cost. Now, right now, we're in a very weird AI moment. It feels like capitalism has gotten its teeth into AI.
And ironically, it's, I guess, when we talked about products and mouth feel and whether
the thing feels right, AI seems to feel good in people's mouths right now, when machine
learning apparently didn't capture the imagination of the people over the last several decades
of machine learning being a thing.
I'm sure that you are applying AI to these
systems. Has anything changed because of these large language models or the moment, the hockey
stick of AI, or have you already been at the forefront of this for a very long time?
Just the term AI, just what it means has changed dramatically lately. It's funny when you say that
the machine learning didn't capture
the imagination. I remember having conversations with the founding parents in the field of AI,
you know, like John McCarthy and Marvin Minsky, saying that the machine learning I was doing was
not AI. So the first day when I had proposed a course on machine learning at MIT and was teaching
it, the first day I had to contrast and tell my students, look, I'm teaching you machine learning.
This is not AI. This is pattern recognition. The AI people say that this is too mathematical to be
AI. It's not what the brain does. And so you might ask why I'm teaching it because it's not AI.
And I said, well, I'm teaching it because I think it works. And so I started showing them how this
pattern recognition stuff could be used for a lot of different things we were doing in the lab. Obviously, I wasn't the only person doing pattern recognition machine learning in the world, but it wasn't big at MIT. The LLMs, the large language models, foundation models, have made amazing, amazing progress. It's really impressive.
At the same time, they're built on a kind of a shaky foundation, right? They're not built on
truth and integrity. For all the impressive conversations you can have with them, they're not built on the kind of foundation that the entities who usually have these conversations with us are built on.
So there are these zingers regularly, right?
Where, as we're seeing, as people are fond of pointing out in the media, where the statistics drive you into some direction where it just says something that's statistically beautiful
and completely wrong. And it says it with equal authority because, of course, it doesn't know
anything. It doesn't actually know right from wrong or truth from fiction. And some will quip,
well, neither do all people. Well, that's true. But we do at least have a sense of right and wrong and so forth. So I bring that up because if there's a decision, again, back to something that might be life
threatening, a really serious situation, then I need to trust things in the system that I can't
with today's super impressive LLMs. There's that shaky foundation underneath. So, I think it calls
for some different approaches. It's not enough to just quote unquote fix those models as people are
doing. For example, people are making them explainable. How did you get to the decision?
But the explanations themselves can be hallucinated. So, you know, it's kind of turtles all the way
down as we joke, right? We're going to have to do something different there, I think, for some
use cases. Yeah, I realize that it's not appropriate to anthropomorphize these models and
assume that they are humans. But there seems to be this Dunning-Kruger effect of talking to an AI where
it's like it's very much overestimating its competence. And just like talking to a confidently
wrong person at a dinner party, I'm just kind of starting to treat all of these different co-pilots
as random people on the street that I've hired as an assistant. And I just need you to double
check your numbers on that and research.
And like, I love that you're excited, and I'm excited about your enthusiasm, you know, LLM, but I really need you to go and back that up with data and stay grounded in real numbers.
They even give the illusion of doing that, like Copilot, you know, giving you these references.
And I was talking to it about something that I know a lot about and asking for its reference. And it was giving me the reference. I go to that page to double check
and the page is not saying what it's saying. It's still giving me the wrong thing with this
reference. So, I was giving it a better reference with like accurate information and it was still
anchoring on the previous one. So, that was a little frustrating. I think the referencing is
a step in the right direction.
But again, it can give the illusion of being knowledgeable and solid when it's not. So that
kind of pretense, that kind of false pretense, that kind of false impression, I think is actually
really dangerous for a lot of people using them. And again, there are use cases where this doesn't
really matter. It's fine. But there are other use cases where it's outright dangerous.
Yeah. Well, I appreciate you calling it out because in my initial question, I may have
implied that ML and AI are like synonyms or they're two sides of the same coin. And while
they may be related, the Venn diagram of the two is perhaps farther apart than the
media would have us think.
So that's an important thing to point out.
So then back to the idea of this feedback loop, if I'm using not AI, but rather machine
learning and proper data, you believe that we can create models that will allow computers
to really respond intelligently to feedback, whether it be emotional feedback or any kind
of signal that we can get out of the
person. And it's not AI that's going to make that loop. It's going to be proper data and proper
science that is more deterministic, let's call it. Yeah, I don't know, deterministic. I mean,
we use probabilities. I've used Bayesian methods from day one. I think we need the probabilities in there for a lot of good reasons, you know, variability and representing uncertainty and so forth. But I guess ultimately, even those random number generators we build in the machines are, or at least the ones I used to build were built with linear congruential multipliers, you know, they had underlying deterministic systems in them. So that's a whole other
interesting question. What is truly random? Is determinism possible, though, in the context of
affective computing? Let me give an example. This recording that we're doing right now was
actually rescheduled. We had a meeting last week. We met. I've never met you before. We're talking on a webcam. And the vibes were off. And I said, are you in a good headspace? Do you still want to do this right now? And I don't know what sub-millimeter facial expressions or sense, again, you're not even in the same room as me, caused me to do that. But it sounds like I was correct and we rescheduled. That's not deterministic though. Maybe I had a 70% probability. Yeah, I was kind of astonished when you picked up on that actually, because
I'll explain to the listeners as I explain to you. When you asked that, I said, well,
actually, I just got the news that a friend's 25-year-old daughter died. And that really put
me in a funk, right? I have kids around the same age. And I was definitely in a
weird headspace, a real grieving headspace. I don't think I looked sad, but I was really
not in my normal headspace. And you picked up on something. So kudos to you, because usually
people who don't know somebody well don't pick up on these things, and also
over video conferencing media, right? So I don't quite know what you picked up on
because we didn't talk very long, but you nailed it.
But then my question then is, should the computer do that? Should the webcam? If I sit down to my
Microsoft Teams or my Zoom, should the computer go, I don't know if you're in the right headspace to be deleting email today, Scott?
You know?
If you desired, you know, pattern recognition of your text, it would detect if maybe this email was a
little too hot. And maybe it wasn't gonna say you can't send it, but it was gonna flag it with
four chili peppers, which would hopefully give you a moment to say, Oh, did I really want to
send a four-chili-pepper email? That's what Twitter needs: don't send that tweet that's too
spicy. And then you basically have a cool down moment where you don't let them tweet for three minutes and then they have to rethink that they're going to be mean on the
internet. I think, yeah, there are a lot of experiments going on now with, before you share
that, have you actually read the article? Do you realize that some people think this is not factual?
And I think things like that are really helpful. Also, if you can turn them on or off, depending
on, I think people should still be in charge and have autonomy over these things. Although I recognize that private companies running these things are also in charge and have some liability. So there's a balance there. But I think we need to be very respectful of people when we design all these things and let people know what is happening and give people the say as to what is being done with their data.
In our work with affective computing, we have done everything with fully informed consent, IRB approval.
I have been opposed from day one to reading information from people without their consent about their emotions.
And I know that there are companies out there that do that, very well-known companies, some that have even been mentioned already in this conversation,
and not Empatica. Empatica does fully informed consent. But I really think that should be a part
of the experience. And if people are not comfortable with their camera, their microphone
sensing affective information from them,
then that should not be done.
Yeah. Yeah, I really love that. As in my space, in the diabetes space, the idea that
I made the data, it literally came from me, but I have to send the data to a third-party healthcare
place. And then I, in order to get it back, I have to sign forms. And I like, I made it,
this was me that made this.
Right.
This is my data.
Yeah.
My emotions are my data.
And then also I love this, this concept of fully informed consent is just so, so fundamental
to how all of this data, all this telemetry, for lack of a better word, the effect, the affect of what I'm feeling, what I'm doing, it's all telemetry.
It's human telemetry. I want complete control of it.
And I want to know exactly what I can and can't do with it and whether or not it's a
good idea to make assumptions or close a loop and take an action.
So confidence numbers.
Our user interfaces should include all of this context so that one can make the right
decision.
I appreciate so much that your
platforms are doing that. Yeah, I personally have found it incredibly helpful to get feedback on
these data. The systems are still not as good as the best people. Like you pointing out,
asking me about the headspace I was in the other day, it was really interesting. You were picking
up on something, and it's not clear what. I thought I was hiding it.
I wasn't. So there's so many times when people are maybe a little late to the party of figuring out
what's on their face or what's inside them. For example, we're going to be presenting a paper at
the American Psychiatric Association coming up soon, where some doctors came to me and said, you know, we are told that when we
sit down with patients who have substance abuse disorders, that often we appear judgmental,
that we don't appear as compassionate as we think we appear. We want to be compassionate,
we want to help the patient. But you know, sometimes at the end of a long, tough day,
you know, you sit down to listen
to them and maybe your brow's a little furrowed because you're concentrating, but you don't really
look like you're concentrating. You look like you're angry or you look like you're annoyed or
you look like you're getting a headache. And the last thing you want to do is hear about their
problems. So they asked us if we could build a tool where they could practice looking as compassionate as they wanted to feel. Now, one might argue, is this authentic? Is this a good idea? You're kind of learning a poker face that's not a blank poker face. You're learning a compassionate face. If your job involves not just feeling compassionate and exchanging information to try to be helpful, but also looking compassionate, and these are different things, then maybe this system
could be helpful. So we've built that. They're using it. They're excited. It's
making some helpful ripples for people who want it, again, with fully informed consent.
Yeah. I was literally told in an executive coaching session on Monday that I have no poker face. And I mentioned this to a
friend at work and they said, oh yeah, like everyone knows this about you. Like you could
make animated GIF memes of like Hanselman faces. Just like, what? And like, apparently I've been
doing this for years and everyone knows it. And I'm like, does this mean I should turn my camera
off on Teams because I'm broadcasting exactly what's on my face. But it's so funny the
disconnect between how we think we are perceived and what is the reality. And like you said,
it could be millimeters. And it's so fascinating that we could come to a place where our machines
could respond intelligently and tell us things that we would ourselves never perceive. I think
that's really cool. Yeah. Yeah. It's funny you have that too. I was once called leaky by a great emotion
theorist because of showing everything on my face, but I've chosen to let it be shown, actually inspired by an artist, just a brilliant philosopher-artist, Bill Watterson,
who did the Calvin and Hobbes cartoons
for so many years, which I miss. I love, love, love his work. And his characters were so
expressive, so wondrously, enjoyably expressive that I thought, why not? Why not show a little
bit more expression in daily life? And actually, others now have done work in learning experiences, Cynthia Breazeal's group at MIT and others with robots, showing that when the robots were more expressive when engaging with people in learning interactions, the people learned more.
They not only enjoyed it more and were more engaged, they learned more.
Yeah.
Well, this has just been an absolute joy. Thank you so much, Dr. Picard, for hanging out with us today and for sharing your knowledge
and for getting folks excited about this. My pleasure. Thank you for giving us this time
together. This has been another episode of Hanselminutes in association with the ACM
ByteCast. We hope you've enjoyed this episode. If you have, I encourage you to take a look at
the back catalog and explore some of the other episodes of both Hanselminutes
and the ACM ByteCast podcast.
We'll see you again next week.
ACM ByteCast is a production of the Association for Computing Machinery's Practitioner Board.
To learn more about ACM and its activities, visit acm.org. For more information about this and other episodes, please do visit our website
at learning.acm.org slash ByteCast. That's B-Y-T-E-C-A-S-T, learning.acm.org slash ByteCast.