Embedded - 367: Data of Our Lives
Episode Date: March 25, 2021. Dr. Ayanna Howard (@robotsmarts, wiki) spoke with us about sex, race, and robots. Ayanna’s Audible book is Sex, Race, and Robots: How to Be Human in the Age of AI. You can see more of her research... from her Google Scholar page. Find some best practices and tools for reducing bias in AI: Partnership on AI, AI Fairness 360 (IBM), and Model Cards (Google). Ayanna has recently moved from being Professor and Department Chair at Georgia Tech to be Dean of Engineering at The Ohio State University. Her current favorite robot is Pepper. Ayanna spoke more about her robotics and trust research on Embedded 207: I Love My Robot Monkey Head (transcript).
Transcript
Welcome to Embedded. I am Elecia White here with Christopher White. We are very pleased to welcome
Dr. Ayanna Howard back to the show. Hello, Ayanna. Thanks for coming back. Thank you. This will be
fun. Could you tell us about yourself as if it had been two or three years since we last talked to
you? So I am a roboticist. My official title is Dean of Engineering at The Ohio State University.
But really, when I think about who I am, I'm defined by the things that I build,
which is robots. I program, design, and build robots for the
good of humanity. And your position at Ohio State University is new. You were at Georgia Tech?
Correct. Prior to Ohio State, I was professor and chair of the School of Interactive Computing at Georgia Tech.
I remember we talked to you last time you were at Georgia Tech.
I was.
And it was before you were even a chair, I believe.
Yes, because I've been at Georgia Tech for 16 years.
And so I was, you know, associate professor, full professor,
endowed chair professor, had a bunch of different associate chair positions,
you know, director positions, and then chair.
Well, I'm excited that you're a dean now. That's a position of power. I hope you use it wisely.
I will. I will. It also allows me to set a gold standard for engineering nationwide,
which I'm excited about. Okay, so before we ask you
more questions about that
and about your book,
we want to do lightning round.
Are you ready?
Yes.
Favorite fictional human?
Bionic woman.
Favorite building
on the Georgia Tech campus?
Klaus.
Best food in Atlanta?
Sushi.
Three words you'd like other people to use
to describe you?
Humble, compassionate, and intelligent.
Do you think the singularity is closer
than it was five years ago?
Yes.
What is the best animal shape for a robot
that is supposed to engender trust in its users?
Dog.
Yes, definitely.
Well, those little seals are really cute.
Yeah.
They are.
Pretty sharp teeth.
Complete one project or start a dozen? Oh, complete one to 90 percent.
Do you have a tip everyone should know? Believe in yourself. It's a good one.
Okay, so you wrote a book called Sex, Race, and Robots: How to Be Human in the Age of AI.
I mean, I want to say what's it about,
but that kind of explained it, didn't it?
Does it?
Oh, I don't know.
I think there's some details.
Sex, race, and robots.
Yeah, you're like, wait, sex, race, robots?
Where are we going with this?
Yeah, so one, it makes you interested in what's going on.
It was very enjoyable, but it's only available as an Audible audiobook. How does that work?
So, and it will be in print, but probably not for, I think it's a year and a half from now. And so the Audible is basically the spoken word version of
the written book. And so one of the reasons why it was an Audible and an audio was really to drive
in the accessibility so that it felt much more human, much more connected for people to
really delve into some of the concepts of the book. When I was listening to it, it was very
warm. It felt like I was hearing from a friend about their life and the technology all at one time.
Is it for a general audience or for a technical one?
No, it's for a general audience.
But there's a little bit of a trick.
So it's for a general, I would just say,
general audience that just wants to know
a little bit about artificial intelligence,
but is also maybe I would say, you know,
intellectually curious.
But if you're a tech head like me, I dribble all these kind of things throughout. So, you know, even if it's your general
reader, if you're a tech person, you go, oh, oh my gosh, that is so funny. Oh, I got that. I remember
that. And so it's kind of a nice, you know, the Easter eggs are in there. They definitely, definitely are.
You grew up in Pasadena, California, and I grew up inland from there, but still in California.
There were so many things you talked about, like how the internet didn't really exist and how it started at the beginning of your career and how it changed everything.
Yes, I mean, like modems, like people, like students are like, modems, what is that?
Like telephone line, landline, what is that?
Yes.
Did you have someone in mind when you wrote it? So when I wrote it, I really thought
about kind of the young up and coming students that had just graduated, really thinking about
them being the next generation that is really going to live in this world that we're defining with the use of
artificial intelligence. So your subtitle promises how to be a human in the age of AI. So I feel like
I really should ask you this. How should we be humans in the age of AI? I mean, we should embrace
our individuality, for one, and our differences. And I think, so that's the big thing. The other is,
is that I think we need to lean into those things that make us human. Those things such as
relationships between people, those things such as teaching and learning and creativity and
understanding social justice. You know, those things that really define us as human are the things that we need to lean into,
given that we are living in this technology-infused world
that is becoming much more technology-based.
But in part of the book, you advocated against emotions.
So I advocated against people getting emotional, right? So emotions are actually part of our DNA. And I talk about emotions from very early on. It's a survival mechanism, right? And the way that we react to people. And some of it's a learned behavior, some of it's reactive, some of it's in our genetics DNA. But when I was saying we shouldn't be emotional, it's that when we're interacting with robots, when we're interacting with AI, a lot of times we interact based on an emotional connection.
It's very much like back in the day when email came out and people would send emails, you know, in all capital letters because they were angry.
Right. Like that's an emotional rendition of you getting angry and everyone's like, oh, you're angry. Stop yelling.
Right. And when I say stop being emotional, stop being emotional with these robots.
You're being reactive. You're going to the first search term
because, oh, yes, yes, yes, that's it, that's it, that's it. I'm excited about it. We need to just
pause and stop and think a little bit. Otherwise, we will get in trouble.
By in trouble, do you mean horrifically manipulated by the powers of big business?
Yes, horrifically manipulated and not even really think about it until it's too late.
But I yell at my computer all the time.
Nothing bad has happened yet.
Well, okay, so you yell at your computer, but then do you go on a spending spree so that you feel better?
I don't think those are connected.
Oh, no, no. So look at this. Imagine I'm a big company and I want you to go on a spending spree,
right? So I can intentionally do things because I know how to manipulate your emotions to make
you angry. And if I know that your trigger for when you're angry is to go shopping,
right? It's like, oh, you're the perfect model. I'm just going to do a couple of little things.
I'm going to put things on your feed just to get you riled up because I know then I'm going to make a lot of money.
Because we're usually thinking about, well, this company knows where I surf and what kinds of things I like. Facebook is seeing I've gone to these sites
and therefore I'm interested in analog synthesizers or something like that. And so I end up with
endless analog synthesizers in my Instagram feed. But what you're saying is taking that a step
further and trying to figure out this person takes certain actions based on certain, let's call them activations.
And therefore we can go further than just knowing what they like and know when they're most likely to take action.
Correct.
That's not very nice.
Well, and a lot of people, when they're emotional, their barriers are lowered for all kinds of things.
Yes, and I think of con men doing that.
I haven't really considered codifying an AI con man.
Yeah, and I don't think that people think of it as codifying an AI con man.
I think it's just that, you know, people are driving,
at the end of the day, people want to make money, businesses want to make money. So what is the best
way to do that? You want to, you know, maximize your touch points, you want to maximize, you know,
the output. And so one of the, I want to say one of the nice things, but one of the nice things
about AI is that with all the data that's out there, you can do these two or three steps removed and figure out someone and figure out what their triggers are and figure out what's the next step. Say you're searching for colleges, right? And I know that you're a college-educated person, a mother, and if you're
searching for colleges, that probably means you might have someone in your household that's about
to go to college. You know what? That means that in about two or three years, you might be thinking
about graduation, maybe a nice trip, maybe a car for the child, maybe a year or two,
right? Like these are things that we know happen in life. Data can show it. You just have to think
two or three steps ahead is all. Some human basically has to say, okay, take all the data,
think two or three steps ahead. What's the next thing two years from now or three years from now?
How much of this is codified, intentional,
and how much of it just comes out of the data of our lives?
Most of it is coming out of the data of our lives right now, but we know that there is
experimentation going on to try to codify it a little bit better. Yeah, because we hear a lot about bias in training of machine learning.
Certainly the racial bias of a lot of image sets
is one that's a famous example.
But I haven't heard a lot about
kind of weaponizing bias intentionally yet.
And I feel like that's just not being talked about,
even though it's probably being done.
It is. I mean, there's been, you know, some of it is rumors, some of it is, you know,
media that's, you know, found leaked messages. But I remember there was one leaked study
where one of the advertising companies was selling profiles of teenage girls. And if anyone has a teenage girl, you know it can get very emotional. But they're selling it as an advertising mechanism, like as, you know, a profile. So what does that tell you? It tells us that there's some folks that know that teenage girls can be emotional at certain times of their journey as a teenager.
And one of the triggers that they had mentioned in this report was depression.
Of course. I mean, of course we get profiled.
I mean, they ask us some of these questions.
What is your age range? 40 to 45, 46 to 50?
They ask us our incomes, our genders, our identities, our likes, whether or not we're
managers or engineers, whether or not we like vanilla ice cream or chocolate. And we fill out these forms, and they aren't all the same form, but we're not as anonymous as we'd like to be.
Not anonymous at all.
Yes, exactly.
How do we, how do we stop feeding into this machine? Can we? So this is, the problem is, is I don't think we can at this point.
And it's only because we have so many services that are provided to us that is based on the
data now, right? So what that means is that if we suddenly decide that, you know, I individually
am not going to give any of my data out there.
I'm going to scrub myself from the web like that's even possible.
It also means that you won't have access to, you know, all the things that you might want to in terms of, you know, better loan rates.
If you're on that side of the spectrum or better health care options.
Right. And so right now there's these profiles that are created
that are beneficial, right? It's just that they're also detrimental to certain groups
and certain populations. But what I do think is that we can control it with a little bit more transparency on how it's used, consenting and knowing,
okay, if I'm giving this data, really, what are you doing with it?
Like, I just want to know and, you know, don't lie.
It's like, I'm giving you a free coupon.
What else am I giving you?
No, no, no, no.
You're giving me a free coupon, but then you're also using it because you're selling it to advertisers, right? Like, give us the option to decide how it's being used, when it's being used against us. That's really, I think, where we are now, because I don't think we can change the clock to say, okay, now let's just stop collecting data.
I do often, whenever I'm asked about cookies, want to thank the entire European Union for GDPR, for making everyone actually be a little more transparent.
I agree.
And as you know, it's also seeped into, you know, California and San Francisco area.
I think Washington has also looked at some of these GDPR related kind of regulations as well.
I think it's moving in the right direction.
Okay, so that's how all of us are being manipulated by AI. And it's easy to understand that, because of the data we give them. For folks like us, we're mostly being advertised to or manipulated about things that relate to advertisement. And that's, I mean, that's not awful. I'm not going to say that's terrible until it's used against me, until my medical history is used against me.
It's not a big deal, but that is distinctly not true for a lot of people.
There's a darker side to this manipulation.
Your book covers some of that.
There's a darker side.
Yeah.
And actually, I would push back a little bit.
It's not just advertising.
It's also, you know, the middle class. If you have kids that you want to go into college, you might actually be in the positive or the negative, right? Like the data that are, you know, being used for college applications and to determine who gets in are much more being run through AI. If you're going for your next job, a lot of the recruitment tools
and a lot of the filtering tools,
irrespective of your economics,
are being used based on past data.
You know, as you get older,
I'm sure, you know,
as we get into, at least in the US,
you know, social security
and, you know, what should, you know,
that age of 65 or 67,
I don't know what it is, right?
Like that may move because the AI has figured out
that, you know, people are living too long, so we're going to make it like 78 at some point. And here's all the data that supports that. So I don't think it's just advertisement today,
but the negatives are, it's also being used in surveillance. It's also being used in
predictive policing. It's also being used somewhat in the healthcare system. It's being used in applications related to facial recognition.
It's being used in language and language models and the biases and natural language processing
that is in our Siri and Alexa. It's being used in these applications that are in some cases harmful because they're not
trained with all of the representation of what makes us people and human and, you know,
non-Western versus Western and all these aspects. Some people say it's just the data sets, that we
just need broader data sets. We need ones that show more people, that have more voices. Is that enough?
No. So that's just one piece of the puzzle. And I will say 10 years ago, maybe seven years ago,
we were all talking about, we just need more data. We just need more data. But the fact is,
it's not just the data. We do need more representative data, but it's also the way that we code up the algorithms, the parameters that we select. They
have developer biases. It's about how we choose the outcomes. What are we measuring? What are
we comparing? Are we, you know, trying to figure out loan rate? Are we trying to figure out the
amount of the loan? Are we trying to figure out neighborhoods? Someone has to select what we're learning from the data. There's human
biases in that. And even the data itself, how it's coded, how, because typically right now,
there are human coders that label the data. Like if you think about facial recognition, you know,
happy, sad, you know, here's a face, here's not a face. There's biases in the human coders. So
the data as well as the labels, and really we're now discovering, and again, 10 years ago,
I don't think all of us were talking about this, but it's throughout the entire pipeline. It's not
just one place, which is the data. Although, you know, having better data would be nice. And so
it's one of the easier things, quote unquote, to address because you're like, oh, data, data, data.
We could fix that.
But it's throughout the pipeline.
I think that's a very important thing to consider.
I mean, when you said that in the book, I was like, I'm working on a DNN, just a standard neural network.
I'm not even doing anything really creative with it.
How can I possibly have bias in what is just a standard off-the-shelf network?
But the bias can come in on what I decide is good enough.
If I don't test it under the right conditions, I test it on my desk.
I test it with me.
And the testing is part of it.
And not accepting the errors. You know, okay, if it can't read my pulse, then it's broken.
But if I don't have it check other people's and not just the four co-workers who look like me, I won't know it's broken.
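As a rough illustration of the kind of testing being described here, a minimal sketch of a per-group evaluation might look like the following; the model, test arrays, and group labels are hypothetical placeholders rather than anything from the conversation.

```python
# Minimal sketch: report accuracy per demographic group instead of one
# aggregate number, so a model that only works for "the four co-workers
# who look like me" shows up as a visible gap. All names are placeholders.
import numpy as np

def accuracy_by_group(model, X_test, y_test, group_labels):
    """Return {group: accuracy} so disparities aren't averaged away."""
    results = {}
    for group in np.unique(group_labels):
        mask = group_labels == group
        preds = model.predict(X_test[mask])
        results[group] = float(np.mean(preds == y_test[mask]))
    return results

# Hypothetical usage: fail loudly if any group lags the best group by >5 points.
# scores = accuracy_by_group(model, X_test, y_test, group_labels)
# gap = max(scores.values()) - min(scores.values())
# assert gap < 0.05, f"Per-group accuracy gap too large: {scores}"
```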
So it's the algorithm, it's the data, and it's the testing.
Right. And think about cameras back in the day: if you imaged anyone who had darker pigment, everyone who had darker pigment, they all looked the same. So if you were, say, a dark-skinned woman versus a lighter-skinned Black woman, you all looked the same, because the way they chose the range of colors and the optics,
it excluded basically the range of darker skin pigments.
So that was, someone selected that, right?
And it wasn't changed until basically the candy companies,
Hershey's and the furniture companies,
like we can't see our furniture.
Like you can't see the beauty of the browns.
And so they started fixing it, not because of people,
because it was a commercial application that was like, oh yeah, we need to do
this. Right. But that was a selection choice. And yet you're like, well, how can a camera possibly
be biased? Again, it's just hardware, right? Like it just takes pictures. But a human had to make a decision about how it operated.
And with neural networks, some of these are past biases being fed to the future.
I think there was a resume reader who would, an AI resume reader, that would reject women's
resumes because they never hired women.
So, you know.
Right.
They're not good employees.
Right?
It's like, they're not good employees. They weren't hired. They didn't have great recommendation letters. They didn't, right? They didn't exist. So, of course, we're going to exclude them.
I mean, is that, those are the things that are supposed to get better when we make them mechanized.
Why? Making things mechanized is...
They're supposed to be fairer. It's just to be easy. I don't think fairness... I've never heard of fairness be... I've never assumed something is fair because it's automated.
Good point.
I'm curious.
I think it should be because if it was...
It's an emotionless computer. It should be judging based on the merits, not on the things it shouldn't be able to tell, like gender, because that's not on my resume. But you can tell my gender based on my name.
But it can't do something that, you can't make something know what merit means without bias if you're teaching it that.
If it doesn't understand anything.
No, I mean, it's a human making this. I mean, humans are making a copy of a human, right? Basically, that's what's going on. A bad copy.
Well, maybe, maybe not. Maybe just a really good copy of a, you know, normally flawed human being.
Yeah. So I think, you know, one is these machines, just because they're automated, aren't fair.
But I do think that we can make them fairer. I do think that we can, and I won't say remove bias.
I truly don't believe we can ever remove bias as long as people exist.
But I think we can mitigate it.
I think we can reduce it so that the machine is as good as the best unbiased, quote unquote, human, right? I think we can do
that. But to remove the bias, step one is identifying it, isn't it? Or do you have a
different path to get there? No, you have to identify the bias. And that's why I said I don't
think we'll ever remove the bias because we all have biases, right? We all have biases. And it
might be a bias with respect to religion. It might be a bias with respect to socioeconomics, with respect to gender, race, ethnicity, age, right? It doesn't matter. We all have some bias. And therefore, if any of us are involved, that bias is going to creep in, because we don't even know that it exists sometimes.
It's like being a fish in water.
Yes.
You don't realize you're in the water.
How do we, I mean, this is very much a human problem and not as much a technology problem.
But how do we fix this?
So the way that I think about fixing it is twofold.
One is, as a developer, because I'm a technologist and I'm a developer, I think we need to rethink the concept of what we call development.
Right?
So typically, you know, it's coders, it's technologists that are working on the technology.
And then, you know, it's thrown out in terms of, okay, here's the application.
You folks take it.
I think when we think about development, just like if you think about movies
and films, right? Like the movie crew is everyone is there, like thinking about it, scripting it out,
creative. You also have the filmmakers. You also have the technologists that are part of the team.
And so I think we need to rethink how we design these systems such that the team is not just
developers and technologists. The team
is the social scientists. It's the ethicists. It's also the coders and the developers. It's also the
human factors people. And that is the team. And the team does not produce a product unless everyone
is represented that can understand these things. So that's one. The other thing is, I think, and I
want to, and this is why I did the book, is I want people to feel empowered to push back on the technology that they're using and demand stuff. And basically, you know, I've seen, and I give an example, you know, everyone wants to go green now, right? Everyone, you know, sustainability, go green, these companies, zero emissions, zero emissions. This is actually very, very expensive for a company, right? Like this is not like a cheap
thing. You just turn on a switch and like, oh yeah, we're zero. No, they actually had to put
in a lot of funding, a lot of effort, a lot of strategy. And at the end of the day, it's not
like they're selling more products, right? But there was enough of people that said, pushed back,
like we need to go green. We're not going to sell. We're going to go to these other companies.
And companies finally said, oh, you know what? Maybe this is something we should focus on.
And once you have a few do it, then everyone else, you know, comes on board. And I think that we as
a community have that power to say, you know, look, we need to make sure that we have unbiased
technology. We have unbiased algorithms.
You know, we're going to do it,
else we're going to go to these startup companies
and we're not going to use company X, company Y, company Z search engines.
And we're going to have a movement.
I think we can change the world.
I think we can change these companies.
I love the analogy because it does give hope that things can change
and that consumers can push some of that change.
There are companies that do green certification
and there are standards now.
Do you think we'll be seeing racial bias and gender bias
and religious bias and all other biases
standardized? I mean, I guess we don't standardize the biases, we standardize the non-biases.
What we need is an AI to detect bias.
I believe there are some.
There are some. You know, I actually think that we are going to start seeing a lot more third-party certification companies that will come in and certify.
Right. Like I have a new product. I have a new algorithm.
You know, I'm going to petition, contract this company.
That's a third party to come in and do my audit.
We do this for our finances all the time. You know, third party come audit my financial files
because, you know, SEC might get on me
if we don't have a third party auditor.
Like, this is a fact.
And so I see that happening more and more.
You said SEC, the Securities Commission,
but I immediately went to FCC with the radios because that is more what I think about when I certify things.
But I could see the same, exactly what you're saying.
I would go to a company and get audited and they would check for all of the things that I don't know how to do for accessibility and fairness.
Exactly.
Is fairness the right word?
So, you know, no one has converged on the actual word.
So there's fairness, accountability, ethics.
Equity.
I mean, equity. I mean, there's a hodgepodge of words that the community is still
kind of working through, figuring out, you know, what does this look, transparency,
explainability. There is no convergence yet of what it is that we're talking about,
because even fairness, fairness with respect to whose criteria.
Exactly.
Yeah, that's what I was about to ask,
because it makes me slightly uncomfortable to think about certifying this. Not just,
not in terms of the goal, but it could be a tremendous amount of power to that organization,
right? And who watches the watchers at that point? I know it's kind of a glib response to that, but there is some discomfort in my mind.
Okay, if we standardize this, how are we to know that the standard isn't biased or could be misused?
Oh, it will be.
Yeah, it will be.
And it's not even, I don't think it's, I don't think it's the standard of, you know, quote unquote, what is fairness or what is bias.
I think it's the standard process of how do you assess and audit.
Yeah.
Which is two different things.
A very open sort of process.
Right.
Right.
And so everybody knows the rules.
Yeah.
Everyone knows the rules, right?
They know what they have to do.
Like for FCC, right? Like, you know, kind of the accessibility requirements, you have to think about it. Now, how you make, say, your website or your language or, you know, your recording accessible is going to be different depending on the medium that you're using, right? If it's the image space or it's the sound space, right? Like, accessibility is going to have a different process, a different tool,
depending on your medium of expression of communication.
And there are likely to be different levels. I mean, when we started the podcast, we were audio
only. And then about a year ago, we started doing transcripts because I realized how important that was for accessibility. And that would have given us, you know, another dash three or dash five on some standards
board.
It's not like we'd have to check everything off on the first pass.
Correct.
Correct.
Christopher really doesn't like this.
He's told me to make faces.
No, I don't dislike it.
I've never really thought about it before, so I'm wrestling with it a bit.
I think it's great if it's something that people put out there and we're certified this way,
and that leads to becoming more accepted than another company that doesn't.
Like Fairtrade Chocolate?
Sure.
But I don't know that anybody's suggesting this, but I don't want people coming in and saying,
you can't exist because you don't conform to...
Well, let's talk about this with respect to the FDA.
Okay?
Think about it.
I want to create a drug or a medicine.
I can't just go into my garage and put some chemicals together and be like,
yeah, here's something. I'm trying it on my dog. Look at that. It makes him healthier.
I'm not going to sell it to people. I mean, you totally can for a little while.
And then you can go to jail. If anyone dies, you definitely go to jail.
So what is the FDA? The FDA is a third party agency, right? There are some processes. You have to collect data. You have to talk about the processes when you actually start your trials or your pilots.
Like who did you who did who was part of your your pilot study? How did you find them?
Right. Were they were they coerced in any way? What was the lab like?
And there is all of this paperwork. There's all of these things that the FDA requires.
And then there's levels of harm. Right. so the class of, you know, device one
or two or three, right? And depending on the level of harm is associated with how you have to
document your processes and the things that you need. But then there are things that aren't FDA
approved. You can buy a whole grocery store's worth of stuff that the FDA didn't approve
because they don't cause harm. And so maybe, maybe surveillance systems
get anti-biased approved, but I don't know, our cold medicine doesn't.
Right. Or the app that selects your music, right? Because, you know, there is some bias in the
music because, you know, they recognize you based on your past preferences and associate you with others.
Right. But maybe that is like, yeah, that's so not harmful.
That's OK. That's a bias that I accept as opposed to police profiling would be a bias I don't accept.
No, it's tricky. No, it's... I wouldn't want to put, like, a music company through...
If we're using the FDA as an analogy,
I wouldn't want every software company
to have to go through an FDA-like process,
even at low level of concern,
because, one, they can't scale.
There's way more software companies
than there are drug companies.
Right, but even the FDA, for example,
with AI, exercise apps are actually not regulated by the FDA unless, you know, the exercise information is being fed to your doctor who's then using it, right? Like, there's some criteria.
Yeah, I would say it's the same thing. If you're creating, you know, a music app, it's fine. But if your music app is then feeding into, you know...
Okay, so this data exists.
Is it being used in such a way?
Is it being used in an innocuous way, like you're saying,
like for exercise or, you know, over-the-counter supplement,
or is it being used in a directed way toward surveillance
or controlling population behavior or something?
It's so easy to cross the line, though.
It is, yeah. That's why I'm struggling with this.
Yeah, it's a really thorny problem, and I think that's, it's thorny and new, right?
I mean... Oh, I don't think it's new for some people. Well, actuaries, you know. Before we had AIs doing these things, we had actuaries who were saying, well, you know, people tend to die later, so let's move this over, or behaviors look like this.
So the biases were always there.
They were just in human AIs.
Or natural intelligence, I guess.
Yeah, yeah, wow.
When do you think we'll get there?
Where's there?
Well, yeah, where's there?
I think right now we are at the crossroads where a lot of things are happening because
these systems are being used now in scenarios that could cause harm, right?
And so I think, right, the conversation is
happening. Government is being involved. If you look at, you know, the number of, you know,
potential bills that might be coming through, you know, it's happening. And the question is,
what is going to come out of it? And are there going to be, you know, best practices or not?
Who's going to control it?
Will it be government control?
Will it be civil liberty?
We don't know.
But it's happening now.
Yeah.
Are you excited or horrified?
A little bit of both.
I'm excited because this is my area.
And I've been doing research in this field for, you know, quite a number of years. I'm horrified because there are no answers.
And I feel like there's not enough people that are concerned about it that should be.
Do you think, what's your preferred approach to go after kind of the, I don't want to say low-hanging fruit because that makes them sound less important, to go after like the big things like police surveillance and predictive policing and that kind of thing first?
Or is it better to kind of work toward a generic framework that can be applied in all sorts of levels or to do both
at the same time? I don't know. So I would do both only because the big things like surveillance,
predictive policing are now, right? Like these are systems that are being used now.
And there's no real box around their use, how they can be used,
you know, the evaluation or auditing of the biases that might be present, right? That doesn't exist,
and yet they're being used. And so we need to make sure that those issues need to be addressed.
Now, because they're being used now. You know, it's basically like, oh, we sold all the nuclear weapons. Oh man, do we have any rules on how they're supposed to be used? No? Okay.
Right, right. I think we might, right? Like, that's where we are. Maybe we should write the manual.
Right. Yeah, I think we need to write the manual. Because, you know, there wasn't a big red button. Oh my gosh, we forgot the big red button.
Right. Like that's where we are.
At least with nuclear weapons, you can't just copy them with a disk drive and then ship more of them out on the Internet.
Yeah, that's the good thing.
Yeah. But in the meantime, we also need to think about how to do this more strategically so that we aren't in this scenario where we're like, oh my gosh, what's going on?
So besides academia, or maybe, maybe academia is the best place, but where, what are the organizations that are really kind of driving, driving this conversation?
Um, so like some of the nonprofits are like Partnership on AI, AI Now Institute. So these are, you know, nonprofit organizations that are, you know, coming up with reports, coming up with best practices, looking at what companies are doing, trying to convene groups together to have discussions.
But it's just a drip in the bucket.
We've talked a little bit about privacy, so maybe this is related to that.
But is this going to be like curb cuts where they were fought for so long because our curbs were perfect
and then somebody in Berkeley comes along and says, no, we're going to make
ramps because people need ramps who are in wheelchairs. And then everybody realizes they
always wanted them. Is it going to be like that where once we have the rules, everybody's going
to be like, oh yeah, this was always better. Why did we fight it? I think so. Because one of the
things is when you make, just like with
accessibility, you know, as we know, when you make something accessible, it actually makes it
accessible to a wider range of individuals than you thought you originally were targeting.
I think it's going to be the same thing. Like when we think about, you know, quote unquote,
mitigating bias, it also means that we are mitigating bias for a
larger group of individuals that we didn't even think about. So it just makes it good. It makes
a good practice. I mean, I always think about like Siri and Alexa. Like I don't use, you know,
voice recognition just because I like control. But, you know, voice recognition was not just to, you know, help us, you know, interact with our devices, right? That was an accessibility feature that was created and started. And then people were like, oh, this is actually kind of useful. Like, maybe we should expand and put a little bit more money in. Like, maybe people will use this. And now it's, you know, kind of a standard.
For a lot of people, the biases work for them.
For a lot of people, the biases are biased in their favor.
How do we convince people that's not okay?
It's not okay to hoard that, to defend it.
Because the bias will come after every single person at some point, right?
And so it might be that today you're on the side of being on the advantage,
but the fact is, is we all have something that is different than the majority.
Something.
And whatever it is, it could be that, you know,
you like fries and mayonnaise, right?
Like maybe it's something minor like that.
And then all of a sudden it's like,
there's no more mayonnaise in the world
because no one likes it with fries.
And so the fact is, is we all have one thing,
at least one, some of us multiple things
that is different than the majority
because that's actually what makes us unique
and human and different. And what that means then is that
at some point, you are going to be the target of the bias, guaranteed 100%, no doubt. And therefore,
wouldn't it be nice if you kind of fixed it now? Age bias is one of those things.
If we're lucky, we experience it, but when we experience it, it's not pleasant.
There's one.
Does this come out of your robotics research? And if so, how?
Yeah, so because I interact with my robotics, I do human-robot interaction. I interact with a lot of people,
a lot of differences of people. And early on, I did notice that the types of learning that my
robots were doing tended to shift toward different populations. So I worked with kids. When I took
my algorithms and interacted with older adults, it didn't work as well. I interacted a lot with, you know, children with autism. And it just so happens, boys have a larger incidence rate. And so I started noticing that my algorithms were skewing toward the populations they'd learned from. And I realized that this was a bigger problem than just, you know, my own lab research.
That as these systems were being deployed to billions of people outside in the world,
that these biases that I was recognizing were really going to perhaps derail our entire society if we didn't address it.
Your research has also been largely about researching trust, how people trust robots.
Do you think people will trust unbiased robots more?
Or do you think that humans aren't that smart?
People will trust robots whether they're biased or not.
They will trust the intelligent nature of these devices and the decision-making processes and the fact that these AI systems
basically mitigate our need to have to work.
It's actually an energy conservation function for us.
Yeah.
I think you need to unpack that for me.
Yeah.
So what happens is when we are in a system,
when we're working with an automated system, robot or AI system, what happens is that we can go into reactive mode.
Being reactive, like, you know, yeah, sure, yes, yes, yes, yes, is actually easier for us in terms of the energy we expend in terms of thinking than us having to process.
And so if we are in a scenario where we're interacting with a machine and it seems to
get it right most of the time, we actually go into this like energy conservation mode, i.e.
I'm not going to necessarily think about this task because the AI knows what
it's doing, and therefore I can exert my energy on other things, like breathing, right? And so
what happens is that we go into that mode very quickly when it comes to AI, when it comes to
robots, very, very quickly, because we have this perception that they know what they're doing.
And it's actually very hard to break that as well.
I mean, this is the AI cars and people seeing they work well on freeways and then stop and
then start playing with their phones and forget that they are in charge.
Right.
And watching movies and reading books. Yes.
All these things they tell us not to do, but the car works almost all the time.
Why should I pay attention?
Right. I'm logging my hours. It's good. It's really good.
That's not something we're going to fix.
Humans are always going to trust things that work most of the time. But we shouldn't. I just, I don't know what the fix is for that. I know I just said it's not something we can fix, but I feel like it is something we need to, do we need them to give us their confidence number?
I mean, I guess that would be a probability and I would like it, but I suspect most people wouldn't.
Yeah.
So I've been kind of thinking about this in my own research.
And some of the solutions we're exploring have some ethical consequences, right?
Like, should an AI have a denial of service?
Because again, remember, we can model people, we can model their behaviors,
which also means we can model when they're in an overtrust mode. Should, for example, the car,
next time you put it in, you know, autonomous mode, it goes like, nope, sorry, last time you
totally overtrusted me. So this time, no autonomy for you, right?
Well, it's literally what Tesla's doing manually right now. I've read they've got this full self-driving beta going that people are using, and they're monitoring people, and if they detect that you've been nodding off or not looking at the camera enough times, they delete you from the program.
Denial of service, right? Yeah, so I could totally see that being a thing.
But that's going to make people very angry.
Right, right, right. So there's this thing of how would you, you know, how do you balance that? How do you do it so that it's... now you're losing the human autonomy, the human's, you know, feeling that they are in control. But then you wake them up, right? Like, you're like, oh man, yeah, you're right, I did just kind of, you know, not pay attention. So this time I'm gonna do right. And then, of course, a week later, I'll overtrust it again, because, you know, as you said, we are human.
How much are we trending toward kind of the Asimov world here? The three laws or something?
Well, I mean, he wrote a lot of things that kind of touched on morality and robots and stuff, but I do feel like we're starting to kind of engage with that stuff in reality.
Yeah, we're getting closer, because the systems are, I won't say becoming, they are integrated now quite well
into our lives. And that's just going to continue and accelerate.
When I went to school, when I went to college, there was no ethics course. I even went to a college that is very humanities heavy for a tech school.
But I think there are more ethics courses happening now.
Do most undergrad CS or engineers now have an ethics course?
Most places do. I would say majority of computer science, majority of engineering programs have some form
of an ethics requirement. Sometimes it's within, you know, it's a full course. Sometimes it's a
thread integrated through courses. There's different ways of how it's done. So, yeah, for the next generation... I mean,
at the institution, university level,
there is movements about how to do this so that it's much more integrated into the curriculum.
So that is just not a course, right?
Where it becomes part of the student's DNA.
But it's, the conversations are happening.
We're going to start seeing movement in the
education space over the next three to five years so that the next generation of, you know,
computer scientists, engineers, I think will have the tools to think about these problems and have
the tools to identify that they don't know everything, right? And so they need to build and have teams that, you know,
represent the community in all these aspects.
You're teaching college students?
You're teaching 20-somethings that they don't know everything?
Some of them are under 20.
Yeah, yeah.
But more seriously, what does an ethics course consist of?
I mean, teaching them they don't know everything and they need to be part of a team.
Is it about don't do things that are going to make the world worse?
It's okay to say no?
What else is in an ethics course?
Yeah, so at Georgia Tech, I taught, and it's actually still being taught, but constructed and taught an ethical AI course. And so the way that I approached it is, you know, we would go over, I would use example, word embeddings, which is a methodology in natural language processing, which has some biases, you know, women are to, or, you know, men are to doctors as blank is to nurses,
right? Like all women are nurses and all men are doctors kind of thing. And so we go through this
and what they have to do is, is that one, they have to assess some of the algorithms that are
out there and come up with these, you know, these, these firefighter, you know, what's a firefighter,
men or women? Well, most word embeddings will say it's a man, right? Police officer. So they go through this entire exercise. So that's the awareness. And then what they have to do is they actually have to develop solutions to remove some of these biases so that there's not this type of association. And then what they have to do is then, and this is just the word embeddings, and then they have to find a data set that's out there and do a full like kind of analysis that's separate from what I teach them to actually basically do an audit. That's an example.
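For listeners who want to try the word-embedding exercise described above, a minimal sketch using the gensim library and one of its downloadable pretrained embeddings might look like this; the specific embedding name and the probe words are illustrative choices, and the exact results vary by model.

```python
# Sketch: probe analogy-style gender associations in pretrained word embeddings.
# Requires gensim; downloads a small pretrained GloVe model on first run.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

# "man is to doctor as woman is to ___?" -- the top completions expose learned associations.
print(vectors.most_similar(positive=["doctor", "woman"], negative=["man"], topn=5))

# Compare how close occupation words sit to "man" versus "woman" in the embedding space.
for job in ["doctor", "nurse", "firefighter", "engineer"]:
    print(job,
          "man:", round(float(vectors.similarity(job, "man")), 3),
          "woman:", round(float(vectors.similarity(job, "woman")), 3))
```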
Teaching the auditing skills, that's pretty cool.
Yeah, well, there's different tools that are out there that I pull from. There's actually tools out there where, you know, you can look at disparate impacts, you can look at different outcome measures based on data sets, based on an algorithm. So there's some. Now, they're fairly techy tools. They're not like anybody can just be like, oh yeah, sure, I'm going to compile this Python script and run it.
But they do exist.
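As one example of the outcome measures she mentions, the disparate impact ratio compares favorable-outcome rates between groups; the sketch below uses made-up column names and a tiny invented dataset, and the four-fifths threshold is a common rule of thumb rather than a universal standard.

```python
# Sketch: compute a disparate impact ratio on a decision outcome.
# The dataframe, column names, and group labels are invented for illustration.
import pandas as pd

def disparate_impact(df, outcome="hired", group="group",
                     privileged="A", unprivileged="B"):
    """Rate of favorable outcomes for the unprivileged group / privileged group."""
    rate_priv = df.loc[df[group] == privileged, outcome].mean()
    rate_unpriv = df.loc[df[group] == unprivileged, outcome].mean()
    return rate_unpriv / rate_priv

df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1,   1,   0,   1,   1,   0,   0,   0],
})
print(f"disparate impact ratio: {disparate_impact(df):.2f}")  # 0.33 for this toy data
# The "four-fifths rule" treats ratios below 0.8 as a warning sign worth investigating.
```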
You spoke about these students having a project
in the ethics class
where they could
monitor the data they found
for biases.
And then we were also talking about making some sort of agency or auditing system to look for biases.
Are these connected? Are there tools coming?
So there are some tools that are out there. A couple of those: there's a tool called Model Cards, for example, that Google has that people can use. There's a tool called ABOUT ML that comes out of the Partnership on AI. IBM has a set of tools that you can use to do basically auditing assessment of bias around different measures and metrics.
The problem is that they are designed for technologists.
So there is not like a simple, oh, I could just pop in and double click an app and voila, everything's like magic.
It actually requires a little bit of intellect to figure out how to use them. But they're there. And I teach my students how
to use them for a bunch of different types of applications, AI applications.
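The IBM toolset she refers to is presumably AI Fairness 360, which is listed in the show notes; a minimal sketch of an audit with it might look like the following, where the toy dataframe and the choice of privileged group are assumptions for illustration, not her course material.

```python
# Sketch: audit a labeled dataset with IBM's AI Fairness 360 (aif360).
# The toy data, column names, and privileged/unprivileged split are illustrative.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# aif360 expects numeric columns: here group 1 is treated as privileged.
df = pd.DataFrame({
    "group": [1, 1, 1, 1, 0, 0, 0, 0],
    "hired": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["group"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"group": 1}],
    unprivileged_groups=[{"group": 0}],
)

print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())
```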
That's kind of reassuring. Okay, so I want to change subject. You left Georgia Tech after
16 years. You're going to Ohio State. Why? Yes. Well, one, I'm, you know, bittersweet. I really do
love Georgia Tech. It was my home. I still think of it as my academic home.
But going to the Ohio State University as Dean of Engineering was an opportunity I could not,
I could not refuse. One is that the institution is really leaning into kind of three things that
I'm passionate about. One is ensuring that the student and faculty population look like the city
of Columbus, look like the state of Ohio, look like the U.S., and providing resources to do that.
Because in engineering, we make things for everyone, and therefore engineers should look
like the community that they're making things from, which is, again, something I'm very passionate about.
The other is around innovation, basically looking at emerging technologies and how do you design for the future in a responsible way.
So thinking about responsible engineering around medicine, around 6G, around artificial intelligence, which again, I'm like, oh, wait,
hold it. I work in all those areas. I'm like super, super psyched about that. And then the third,
which is about making education and engineering accessible. In fact, President Kristina Johnson had
announced, you know, she would like to have debt-free, like all students would
graduate from college as undergrad debt-free, right? That makes engineering accessible to
basically anyone who wants to be an engineer, like come here. And as engineering, as an engineering
college, though, we have to ensure that we have the scaffolds to ensure that every student that
wants to do engineering can be successful because their high school may not have had computer science or may not have had
calculus or may not have had these things that we require to kind of start off. And so what can we
do as a university to make sure every single engineering student is successful? And so I'm
excited about thinking about that. And all these three things means that it can be a national gold standard that other universities can look at to do it the right way.
But what about the robots?
Are you going to miss those?
Oh, I still have a research lab.
I'm always a roboticist.
Oh, good.
Right?
Like, roboticist never changes.
All of my jobs, I've always said, you know, Dr. Ayanna Howard, roboticist, and then there's the title. So I will always be a roboticist. I will always build, design, and program.
Are there new sensors or systems that make you excited about the future of robotics? What
should we be looking for? I'm, you know, I still kind of enjoy humanoid robots just because they're really fun to work with.
There aren't, there hasn't been a real difference.
Let me say, you know, there's Pepper, which is, I have it at Georgia Tech.
I'll buy one at Ohio State.
That's kind of the funnest one I still have.
The difference is, is that the algorithms are much more powerful, right? Like we can design much more powerful algorithms that can take advantage of the hardware
to do more interaction with people, you know, language,
recognition in terms of the image space and behaviors.
I had a couple of listener questions I want to get to.
The first is one I suspect you get a lot.
What's a piece of advice you'd give to aspiring robotics engineers who are just starting out in the field?
I would, my one big piece of advice is be curious and explore.
So one of the things about robotics and engineering, computer science, it really is this exploration process to find solutions to, you know, problems
that people haven't yet solved. I mean, that's the ultimate goal. And in order to do that,
you have to be okay with being curious and making, you know, things aren't going to always work
exactly the way you thought, but that is part of the exploration process. So really lean into that
and be okay with it. You have a distinguished career.
Paul Kay wanted to know if there was a time you came across a fork in the road and can look back and think that that was important.
And why choose one fork over the other?
I did. So I would say about five years ago, I had the opportunity to go into corporate because, you know, AI robotics is like even now is like really, really, really popular.
And a lot of companies were starting to poach academics into their fields and, you know, becoming CTO and things.
So like pretty nice, lucrative kind of positions. And I really
had to think about what it was that I wanted to do. And I felt that my ability to basically have
the freedom with respect to my research, the freedom to basically talk about change was more important at this stage than, you know, the other opportunity
just in general. And I chose right because, you know, I do have a national stage. I can talk
about bias. I could talk about fairness. I could talk about AI and ethics and robotics. And, you
know, I can point fingers even to companies and even, you know, my own institution. And that's my job. That is my
function. That is expected of me. And I wouldn't, I wouldn't give that up for the world. I didn't
know that back then, but now I realize it was the right choice. It's a tough choice, I bet. I mean,
lucrative versus academic. Oh, very like times 10 salary level lucrative. But I also realized that what brings me joy is really that impact in terms of society and having that direct impact with people that I know I have now.
It's about making a difference.
Exactly.
You are an inspiration to many.
Your career path was actually pretty linear, given, I don't know, you're Black, you're a woman, you're in technology, and sometimes it doesn't go linearly.
Was there a time when you thought, you know, no, I don't want to do technology. I'm done.
You know, there was never a time I didn't want to do robotics, but there was definitely times
when I didn't know if, you know, the path that I was supposed to follow. For example, I have a PhD. I, a couple of times,
like, why am I doing this PhD thing, right? Like, I can do robotics without a PhD, so why should I
continue this path? So I had a bunch of those kind of moments, but robotics, I always wanted to do.
I didn't know what that was, right? Like, I think I would have been perfectly happy
if I was in the garage building robots and, you know, going to the grocery store and working
there, right? And I'd be like, I'm a roboticist, right? Because I'm building robots. I think that
was the one thing I never could see myself giving up. It was the jobs that sometimes it's like,
maybe I shouldn't be here. Maybe I should be someplace else. And mostly it was because of culture,
environment, microaggressions, the feeling of not belonging many times from others,
some intentional, some unintentional. What do you want people to take away from your book?
So the big thing I want people to take away from the Sex, Race, and Robots book is that we all have a responsibility in this world of artificial intelligence.
Whether you are a developer, a technologist, a consumer, or just a person that lives in a house somewhere in the middle of the Sahara desert. We all have a responsibility because if we don't,
then the decisions are going to be left up to a very small number of individuals
that might not necessarily reflect your individual interests
and the things that you want for your home, your family, your community.
Do you think it was important to put so much of yourself, of your past, your memories and memoirishness into the book?
So when we talk about, you know, AI and robotics and bias, a lot of times people feel removed from it.
And so I felt that by weaving in my own personal story, when people can relate to my story
themselves, then they can also relate to the AI and the issues, right? So it's more of a,
I'm putting you in my place because all the stories, there's at least one that, you know,
people are like, oh yeah,
I remember going through that. And then as soon as I grab that kind of sentiment and feelings,
then it's like, oh, then this other stuff that surrounds it has got to be important because now
I can relate. It is incredibly effective. I like that you did that. When we talk about ethics and bias and AI and race relations and all of that, it can be dry and hard and difficult, even if it's good.
But the personableness of your book, I really appreciate it.
So thank you.
Thank you.
Thank you.
Do you have any thoughts you'd like to leave us with?
Of course, read the book.
Listen to the book.
But no, I think it's that this last is,
you had asked the question, you know,
are we closer to the singularity, you know,
than we were five years ago?
You know, I think we are, but of course, you know,
that could be a million minus five years.
But what I want people to really think about is they don't need to be fearful or afraid
or think AI is this, you know, all-knowing thing that we have no control over.
We have direct control.
AI is learning from us, is learning from our behaviors and our biases.
And if we can control ourselves and our biases, then we can definitely control AI.
Our guest has been Dr. Ayanna Howard, Dean of Engineering at Ohio State University and author
of Sex, Race, and Robots on Audible. There will be links in the show notes for that book,
as well as a link to the transcript for her previous episode and a link to that show.
I just wanted to add that there's a lot of pessimism out there,
and it's great to hear your viewpoint and your optimism come through.
And I think it's helpful for a lot of people to think about these issues in a more optimistic frame.
So, thank you.
Thank you.
Thank you to Christopher for producing and co-hosting. Thank you for listening. Now, a quote to leave you with from Ayanna Howard's book, Sex, Race, and Robots: How to Be Human in
the Age of AI. We've all become anomalies in the world of AI, but we have the power to triumph.
If we open our minds and embrace the differences that make us human,
we have a chance of preserving our humanity in the age of AI.