Fresh Air - Unmasking AI's Racism And Sexism
Episode Date: November 28, 2023
Computer scientist and AI expert Joy Buolamwini warns that facial recognition technology is riddled with the biases of its creators. She is the author of Unmasking AI and founder of the Algorithmic Justice League. She coined the term "coded gaze," a cousin to the "white gaze" or "male gaze." She says, "This is ... about who has the power to shape technology and whose preferences and priorities are baked in — as well as also, sometimes, whose prejudices are baked in."
Also, we remember former First Lady Rosalynn Carter, who died at age 96 last week. She spoke with Terry Gross in 1984.
Learn more about sponsor message choices: podcastchoices.com/adchoices
NPR Privacy Policy
Transcript
This is Fresh Air. I'm Tanya Mosley. You've probably heard of the male gaze or the white gaze,
but what about the coded gaze? Computer scientist Joy Buolamwini coined the term while in grad
school at MIT. As a brown-skinned Black woman, she found that the facial recognition software program she was
working on couldn't detect her face until she put on a white mask. This experience set Buolamwini
on a path to look at
the social implications of artificial intelligence, including bias in facial analysis technology,
and the potential harm it could cause millions of people like her. Everything from dating app
glitches to being mistaken as someone else by police. She's written a new book about her life
and work in this space called Unmasking AI, My Mission to Protect What is Human in a World of Machines.
Last month, after meeting with Buolamwini and other AI experts, President Biden issued an executive order aimed at making AI safer and more secure.
This landmark executive order is a testament to what we stand for, safety, security, trust, openness.
American leadership and the undeniable rights endowed by a creator that no creator, no creation can take away.
Proving once again that America's strength is not just the power of its example, but the example of its power.
Joy Buolamwini is the founder of the Algorithmic Justice League, an organization that raises awareness about the implications of AI.
She is also a Rhodes Scholar and has a Ph.D. from MIT.
Her thesis uncovered large racial and gender bias in AI services from companies like Microsoft, IBM, and Amazon.
Buolamwini's research was also featured in the Netflix documentary Coded Bias.
And Dr. Joy Buolamwini, welcome to Fresh Air.
Thank you so much for having me.
The coded gaze is this term that you coined a few years ago after an experience you had
with a program that you were building called Aspire Mirror. Can you explain what the tech was supposed to do and why it couldn't detect your face?
Sure. So at the time, I was a graduate student at the MIT Media Lab, and I took a class called
Science Fabrication. The idea was to make something fanciful. So I made a kind of art installation that used face tracking technology
to detect the location of a person's face in a mirror and then add a digital mask. And so this
is what I was working on when things went a little sideways. So in that experience of working on the
class project, which was an art installation, I found that the software
I was using didn't detect my face that consistently until I put on a white mask. I was working on the
system around Halloween, so I happen to just have a white mask in my office. So when I was debugging
and trying to figure it out, that's how I came to see that
my dark skin wasn't detected, but the white mask was. And that led to all kinds of questions. Is it
just my face? Is it the lighting conditions? Is it the angle? Or is there something more at play?
And so for me, this was really my first encounter with what I now term the coded gaze.
And so you've likely heard of the male gaze or the white gaze.
This is a cousin concept really about who has the power to shape technology and whose preferences and priorities are baked in, as well as also sometimes whose prejudices are baked in.
Well, when you first started to speak about this, people said stuff to you like, well,
I mean, it could be the camera because there wasn't a lot of light.
There is no bias in math algorithms.
You break apart this idea through your research on cameras.
Can you briefly describe what you found?
Yes. I, too, wanted to believe that tech was completely neutral. That's why I got into it.
I thought, okay, I can escape the isms and messiness of people. But when it came to the
type of tech I was exploring, computer vision technology, detecting the pattern of a face, I really had to ask myself, okay, let's go back
and think not just computer vision right now, but camera technology in general. And when you look at
the ways in which camera technology and particularly film photography was developed, it was optimized for light skin. And in particular, the chemical
compositions used to expose film used to be calibrated by something called a Shirley card.
Now, a Shirley card was an image of a white woman named Shirley. Later on, there were others, but it became known as the Shirley card. And that was
literally the standard by which the chemical composition would be calibrated. And the issue
is that people who didn't look like Shirley weren't as well accounted for. And some people might argue, oh, it's just the limitations of the
technology. But actually, when furniture companies complained and chocolate companies complained,
I can't see the difference between the milk chocolate and the dark chocolate,
right? Or the fine grain in my mahogany. They updated the chemical composition. The darker skinned among us got
a little bit of a windfall, but it showed that it wasn't necessarily just the limitation of the
technology, but a limitation in who was thought worthy of being seen. Right. Going back to this
mask, I mean, the discovery, Joy, it just floors me because of what the mask represents
in our day-to-day lives. I mean, the figurative mask has been used to describe what Black and
brown people wear in order to fit the norms or expectations of the dominant culture. And from
the very start, this was not lost on you, although you wanted to find a reason that wasn't social.
I really did. I was hoping that it was just down to technical issues. And as I was having that
experience of coding in the white mask, I really thought about the book, Black Skin, White Mask,
which is talking about the ways in which people change themselves
to fit a dominant group. And I just didn't think it would be so literal where I was
changing my dark skin face to be made visible by a machine. And I thought the last place I
would be coding in whiteface would be MIT.
You talk quite a bit about the different spaces that you work in and you've worked in in technology.
Language is very important to you when talking about all of this, especially when we talk about facial recognition technologies. There are two types, right? So there's facial verification and facial
identification. Can you break down the differences? Absolutely. So when we're thinking about the ways
in which computers read faces, I'm thinking of a set of questions a computer might be asking.
And so first, there's actually face detection. Is there a face at all?
And so the experience I had of coding in a white mask to have my face detected was an example of
face detection failure. So that's one kind of way a computer can analyze a face. Another kind of way
a computer might analyze a face is guessing an attribute of the face. So
let me guess the age, let me guess the gender. Some might try to guess ethnicity and others
might try to guess your emotion, but like we know you can put on a fake smile. The guess doesn't
mean what is being displayed on the face actually is true to how somebody feels or identifies internally. And then when we
get to what's more technically known as facial recognition, to your point, there are two flavors.
So facial verification is also known as one-to-one matching. So this is the type of facial recognition
you encounter if, say, you're trying to unlock a phone. So there's a
face that's being expected, then there's a face that's attempting to have access, and there's
that one-to-one match. Now, when we get to facial identification, also known as one-to-many matching,
this is when you might think of, you know, Mission Impossible, Tom Cruise being detected in an airport among a ton of people.
So that's the surveillance kind of use case. Part of what I wanted to do was to walk people through different ways in which AI systems can be integrated into various
types of technology. So there's a deeper understanding when people are hearing about
news headlines or new breakthroughs in AI. So I really appreciate you asking about the nuances
between these things. Right, because once you started speaking about
this, you had a TED Talk a few years ago, you started getting a slew of letters from people
whose lives were really impacted, in some cases almost ruined. One person wrote you from jail
believing that they were locked up because of false facial recognition. There was a false facial
recognition match. Can you go into more detail on
why something like this could happen? Yes. So beyond the people who would actually send me
letters, there were also news stories. One recent one that sticks with me is the arrest of Porcha Woodruff due to facial recognition misidentification. Porcha was eight
months pregnant when she was falsely arrested for committing a carjacking. And I don't know anyone
who's eight months pregnant jacking cars. So there's also this question of this over-reliance on machines, even when common sense might indicate there could be other alternative suspects.
And to your question, how does it happen?
How is it that someone can be misidentified by a machine?
So we have to look at the ways in which we teach machines to recognize the pattern of a face.
And so the approach to this type of pattern recognition is often machine learning.
And when we talk about machine learning, we're talking about training AI systems that learn from a set of data.
So you have a data set that would contain many examples of a human face. And from
that data set, using various techniques, the model would be trained to detect the pattern of a face.
And then you can go further and say, okay, let's train the model to find a specific face.
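To make those distinctions concrete, here is a minimal, hypothetical sketch in Python. It assumes an already-trained model of the kind described above has converted each detected face into a numeric embedding vector; the function names, threshold, and toy data are illustrative only and do not come from any real vendor's system.

```python
# Illustrative sketch only: random vectors stand in for real face embeddings,
# and the 0.6 threshold is made up for the example.
import numpy as np

def verify(enrolled: np.ndarray, probe: np.ndarray, threshold: float = 0.6) -> bool:
    """One-to-one matching (facial verification), e.g. unlocking a phone:
    compare the probe face against the single enrolled face."""
    distance = np.linalg.norm(enrolled - probe)
    return distance < threshold

def identify(gallery: dict[str, np.ndarray], probe: np.ndarray, threshold: float = 0.6):
    """One-to-many matching (facial identification), e.g. scanning a crowd:
    compare the probe face against every face in a watchlist and return the
    closest match, if any is close enough."""
    best_name, best_distance = None, float("inf")
    for name, embedding in gallery.items():
        distance = np.linalg.norm(embedding - probe)
        if distance < best_distance:
            best_name, best_distance = name, distance
    return best_name if best_distance < threshold else None

# Toy usage.
rng = np.random.default_rng(0)
enrolled_face = rng.normal(size=128)
watchlist = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe_face = enrolled_face + rng.normal(scale=0.01, size=128)  # same person, slight variation

print(verify(enrolled_face, probe_face))  # True: one-to-one match succeeds
print(identify(watchlist, probe_face))    # None: probe is not on the watchlist
```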
What my research showed and what others have shown as well is many of these data sets
were not representative of the world at all. I started calling them pale male data sets
because I would look into the data sets and I would go through and count, right? How many
light-skinned people, how many dark-skinned people, how many women, how many men, and so forth.
And some of the really important data sets in our field, they could be 70% men, over 80% lighter-skinned individuals.
And these sorts of data sets could be considered gold standards, the ones we look to, to judge progress in our field. So it became clear to me
that, oh, the data that we're training these systems on and also the data that we're using
to test how well they work don't include a lot of people. And so it's not then so surprising
that you would have higher misidentification rates for people who were
less represented when these types of systems were being developed in the first place. And so when
you look at people like Porcha Woodruff, who was falsely arrested due to facial recognition
misidentification, when you look at Robert Williams, who was falsely arrested due to facial misidentification in front of his two young daughters.
When you look at Nijeer Parks, when you look at Randal Reid, Randal was arrested for a crime that occurred in a state he had never even set foot in.
And all of these people I've mentioned, they're all dark skin individuals. There's something else that your
research also found. And I want to get a clear understanding of why this happens too. Why is it
that in some cases, this technology misgenders people with dark skin? This has actually happened
to me, I admit, while playing some of those TikTok facial recognition games. It always thinks that I'm a guy.
So this was what I ran into after my TED Talk. So I did my TED Talk you mentioned a bit earlier,
and I had my TED profile image. And I was showing the example of coding in a white mask,
face detection failure. And so I decided to use my TED profile image and just upload it to
the online demos of a number of companies, some well-known companies. And I noticed that some
didn't detect my face, but the ones that did detect my face were labeling me male.
And that's when I started actually looking at gender classification. And as I went and I looked at the
research on gender classification, I saw with some prior studies, actually older women tended to be
misgendered more often than younger women. And I also started looking at the composition of the various gender
classification testing data sets, the benchmarks, and so forth. And it's a similar kind of story
to the dark skin. Here, it's not just the proportion of representation, but what type of woman is represented. So for example,
many of these face data sets are face data sets of celebrities. And if you look at women who tend
to be celebrated, women who tend to be- Yeah, they're lighter skinned women.
Lighter skinned women, but also fit very specific gender presentation norms and stereotypes as well. And so if you have
systems that are trained on some type of ideal form of woman that doesn't actually fit many ways
of being a woman, this learned gender presentation does not reflect the world.
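To illustrate the kind of dataset audit being described, here is a minimal, hypothetical sketch. The records, field names, and numbers are invented for illustration; they are not from Gender Shades or any real benchmark, and real audits involve far more care in how skin type and gender labels are assigned.

```python
# Illustrative sketch only: mimics counting who is in a benchmark and how
# error rates differ across groups when predictions are available.
from collections import Counter

benchmark = [
    {"skin": "lighter", "gender": "male",   "predicted_gender": "male"},
    {"skin": "lighter", "gender": "male",   "predicted_gender": "male"},
    {"skin": "lighter", "gender": "female", "predicted_gender": "female"},
    {"skin": "darker",  "gender": "female", "predicted_gender": "male"},   # misgendered
    {"skin": "darker",  "gender": "male",   "predicted_gender": "male"},
]

def composition(records, field):
    """Fraction of the data set belonging to each group for a given field."""
    counts = Counter(r[field] for r in records)
    total = len(records)
    return {value: count / total for value, count in counts.items()}

def error_rate(records, field, group):
    """Share of records in one group whose predicted gender is wrong."""
    subset = [r for r in records if r[field] == group]
    errors = sum(r["gender"] != r["predicted_gender"] for r in subset)
    return errors / len(subset) if subset else 0.0

print(composition(benchmark, "skin"))             # {'lighter': 0.6, 'darker': 0.4}
print(composition(benchmark, "gender"))           # {'male': 0.6, 'female': 0.4}
print(error_rate(benchmark, "skin", "darker"))    # higher error on the underrepresented group
print(error_rate(benchmark, "skin", "lighter"))
```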
Well, in 2019, you spoke before the House on facial recognition technology.
And I want to play a clip of House Rep. Michael Cloud, a Republican of Texas, asking you about
the implications of private companies having access and using facial recognition technology.
Let's listen.
You mentioned Facebook in your remarks, and I find that interesting because I'm extremely
concerned about the government having this kind of unchecked ability. I would be curious to get
your thoughts of corporations having the same sort of ability, and also, Ms. Giuliani, if you want to speak to that. Absolutely. So you're looking at
a platform that has over 2.6 billion users. And over time, Facebook has been able to amass
enormous facial recognition capabilities using all of those photos that we tagged without our
permission. What we're seeing is that we don't necessarily have to
accept this as the default. So in the EU where GDPR was passed, because there's a provision for
biometric data consent, they actually have an option where you have to opt in. Right now,
we don't have that in the US, and that's something we could immediately require today.
That was our guest, Joy Buolamwini, answering a question from House member Michael Cloud about
private company access to facial recognition technology. You've brought this up time and
time again about permissions and opting in, opting out. We've seen lots of talk from the White House and really from Congress
more broadly about taking steps, but there haven't been steps yet to regulate or at the very least
push for an option for people to opt whether they want their images to be used in these larger
data sets. What are some of the biggest challenges for you as you talk about this with lawmakers?
One of the challenges is awareness. Oftentimes, so many of the ways in which AI systems are adopted
or deployed are not known until there are issues. I remember the Algorithmic Justice League, we actually did an op-ed about the IRS adopting facial recognition technology for access to basic tax services.
And one of the biggest challenges is the narrative of consent versus the reality of coercive consent.
And this is what I mean by that.
So you go to the IRS website
and you're told that, okay, this is optional. Unless you're creating a new account. Okay,
so then when you're creating the new account, they're saying, okay, your option is to use this
third party to sign up. And then if you sign up for the third party and you actually go to their website and you read their conditions, they'll say, you don't have to use us.
But if you use us, you're going to waive away your right to a class action lawsuit.
We can't guarantee you that this technology is going to work.
And so, like, technically on paper, you don't actually have to use this.
But the reality is a different scenario.
And then we get the flip when we're looking at facial recognition use within airports.
On their websites for the TSA, it will say this is an opt-in program. That's what it says.
TSA officers are here to treat you with dignity and respect. That's what it says on the website.
So I go to airports. I travel often. And what I see are agents just telling people to step up
and look at the camera. But you can actually say you have to opt into it. You can say no?
Well, opting in, if we are saying this is opt-in, you should be asked if you want to do it. Instead,
what you're being told is to step up to the camera. So what's meant to be an opt-in process
in the way that their policy is written is actually executed
in an opt-out fashion. And many people don't even know that they can opt out. And in fact,
it was supposed to be opt-in. Our guest today is Joy Buolamwini. Her new book is titled Unmasking
AI, My Mission to Protect What is Human in a World of Machines. We'll continue our conversation
after a short break. I'm Tanya Mosley, and this is Fresh Air.
This message comes from WISE, the app for doing things in other currencies. Send, spend,
or receive money internationally, and always get the real-time mid-market exchange rate with no
hidden fees. Download the WISE app today or visit WISE.com. T's and C's apply.
Across the NPR network, even though our show is available to you for free,
it's not free to produce.
If you already donate to public media, thank you.
But if you're listening and are thinking about becoming a supporter,
now is a great time to start.
Today is Giving Tuesday.
It's an international day of giving.
You have options.
With Fresh Air Plus, your donation gets you sponsor-free shows and exclusive bonus episodes.
You can also make a tax-deductible donation to your local NPR station, to the NPR network,
or all of the above. I hope you'll consider joining the community of listeners who make
this work possible. We can't do it without you, and your support makes sure everyone can listen.
You can give today at donate.npr.org slash fresh air, or subscribe to NPR Plus at plus.npr.org.
Thank you. Today, my guest is computer scientist Joy Buolamwini, founder of the Algorithmic Justice League, an organization that raises awareness about the
impacts of AI. She's written a new book titled Unmasking AI, My Mission to Protect What is Human
in a World of Machines. Her TED Talk on algorithmic bias has over a million views, and her MIT thesis
methodology uncovered large racial and gender biases in AI services from companies like Microsoft, IBM, and Amazon.
She serves on the global tech panel convened by the vice president of the European Commission to
advise world leaders and technology executives on the ways to reduce the harms of AI.
You know, Dr. Joy, I was really struck by how honest in the book you were about, at first,
you were hesitant about this idea of being an activist and taking on issues of race within AI
and computer science more generally, because you were at MIT to create groundbreaking technology.
You did not want to be labeled as someone who was taking on issues of race and
racism. That or sexism or any of the isms, as the work is intersectional. And so when I got into
computer science, I wanted to build cool future tech. And that's what took me to the Media Lab. I was not trying to deal with various types of isms. And I also understood it would make my life harder, you know, if I took on the burden of being that person, that person pointing out the flaws, critiquing everything when we're out here just trying to have fun. But then I saw how harmful these systems could be and who would be harmed, particularly people like me, people from
communities of color, women, marginalized identities of many forms. And I realized that
I had a platform, I had the skills and technical know-how to do deep investigations of these systems. And that maybe, in fact, I did have
some kind of duty. And certainly I had the opportunity to say something and have it be heard.
Even in saying something, though, you were very aware of the perception of you as a Black woman.
I was struck by a story you tell in the book.
When you started to speak about the coded gaze, you would practice before speaking to an audience.
Lots of people do that, but you practiced not just what to say. Of course, that was very important, but just as important was how to say it. You didn't want to come off like an angry Black woman.
Oh, yes. I actually remember when I was recording this video for an art installation
called Hi Camera. And in that video, I'm having a playful conversation with a computer vision
system. And I'm saying, Hi, camera, can you see my face? You can see my friend's face.
What about my face? That's not how I initially said it. I was like, hi, can you see my... What about my friends? You can't see my face.
I certainly felt a certain kind of way about the situation. And so I was wearing my own mask
to be heard because I understood that if I were perceived as being angry or bitter,
that might block certain people from understanding what I was saying. And understanding what I was
saying actually had implications for everybody, because no one is immune from being harmed by AI systems. Also, by that time, I had
quite a bit of experience navigating as a black face in very white places. So I also had an
understanding from those experiences at how easily concerns can be dismissed because of tone policing and so many other factors.
And so part of the strategy was to speak in a way that would allow people to hear me.
I've mentioned a few times that you're a poet.
You call yourself actually a poet of code. When did it become clear to you that you could use your poetry to bring meaning to this larger body of work?
It wasn't clear.
At first, it was a little bit of a risk for me because I had spent so much time wanting to have my research be taken seriously.
You were concerned your poetry wouldn't seem objective.
I was concerned people might also think it's a gimmick.
It's like all manner of concerns, let alone whether the poetry is any good, right?
So there's that part too. And so after I published the Gender Shades
paper, and it was really well received in the academic world and also industry,
in some ways I felt that gave me a little bit of a shield to experiment with more of the poetic side. And so
shortly after that research came out, I did a poem called AI, Ain't I a Woman, which is both a poem
and an AI audit where I'm testing different AI systems out. And so the AI audit results are what drive the lyrics of the poem.
And as I was working on that, it allowed me to connect with the work in a different way. This
is where the humanizing piece comes in. So it's one thing to say, okay, the system is more accurate
than that system, or this system performs better on darker
skin or performs better on lighter skin, and you can see the numbers. But I wanted to go from the
performance metrics to the performance arts. So you could feel what it's like if somebody is
misclassified, not just read the various metrics around it. And so that's what the whole experimentation around AI, Ain't I a Woman was.
And that work traveled in places I didn't expect.
Probably the most unexpected place was with the EU Global Tech Panel.
It was shown to defense ministers of every EU country ahead of a
conversation on lethal autonomous weapons to, again, humanize the stakes.
This is very powerful. And I was pretty moved when I watched a video of you reciting the poem
along with those images, which you say work in conjunction with each other, because AI, Ain't I a Woman is a modern-day version of Sojourner Truth's 1851 speech at the Women's Rights Convention in Akron, Ohio.
What has been the reaction in these tech spaces when you read these poems?
What's some of the feedback that you receive?
The mood shifts.
I mean, sometimes I'll hear a gasp. It goes beyond
what a research paper could do, or even just what I could do simply by describing it. Because what
the evocative audit allows you to do and invites you to do is to be a witness to what AI harm
can look like. And so companies have changed even the ways
in which they develop their AI systems.
And some have pointed to that poem
and other work from the Algorithmic Justice League
as influencing their choices.
So again, for me, it was dipping my toe
into the creative space a bit,
not quite knowing how far it would travel.
If you're just joining us, we're talking with computer scientist, researcher, and poet Joy Buolamwini about her new book, Unmasking AI, My Mission to Protect What is Human in a World of Machines.
We'll continue our conversation after a short break.
This is Fresh Air.
I want to talk a little bit about solutions because I want to know where you think we are
in this push for regulation. I'm thinking about back when cars first made their way on the roads
and essentially you didn't need a license, you didn't wear seat belts, there were no rules of the road. And it sounds crazy right now, but it was this new
technology that was out there and we didn't have any of that. And so sometimes I think about that
when we're talking about AI and talking about technical advances, because where are we in this,
in the stage of getting to the point of regulation? It's a car with no safety checks, a driver with no license, roads that haven't even been paved.
So I don't think we even have the roads yet, let alone traffic signs.
So we are in very early days when it comes to legislation and regulation around AI, but I will say we are in a very different atmosphere when it comes to the
conversation compared to when I started this work. 2015 is when I started building the Aspire
Mirror, and 2016 is when I really started doing more of the deeper-dive research. And as I write in
the book, I would mention AI bias, algorithmic discrimination.
And I was sometimes just flat out dismissed, sometimes met with ridicule.
And so to have the White House release an executive order on AI, to have an EU AI Act in its final stages, to have some of the world's leading companies also
acknowledging AI bias, AI discrimination, and AI harm. This seems like the norm now,
but it wasn't so long ago when that was not even mentioned when people were talking about AI.
And if it was, it was definitely done in a way that was marginalized.
So I think there has been significant progress in terms of prioritizing the need to do something.
Now there's that something part.
What is the something, right? As I mentioned
earlier, you met with President Biden this past summer as part of this roundtable with several
other experts in this space. What was the most urgent message you were able to impart to him?
For me, it was that we have an opportunity to lead on preventing AI harms. And the subtitle of the book is protecting what is human in a world
of machines. And when I think of what is human, I think about our right to express ourselves,
the essence of who we are and our expectations of dignity. So I challenge President Biden for the
U.S. to lead on what I call biometric rights.
So when I'm talking about our essence, our actual likeness.
So right now, and I do various examples throughout the book, as you know, someone can take the voice of your loved one, clone it and use it in a hoax. So you might hear someone screaming for your name saying someone has
taken something. And you have fraudsters who are using these voice clones to extort people.
Celebrity won't save you. You had Tom Hanks, his likeness was being used with synthetic media with
a deepfake to promote a product, you know, a product he had never even heard of.
And so we see these algorithms of exploitation that are taking our actual essence. And then we
also see the need for civil rights and human rights continue. And so it was very encouraging
to see that the principles from the Blueprint for an AI Bill of Rights, such as protections from algorithmic discrimination, assurances that the AI systems being used are effective, and the availability of human fallbacks, were actually included in the executive order, because that's going to be necessary to safeguard our civil rights and our human rights. You know, with everything that you talk about, I just keep going back to
thinking about this most salient point. You use the term, these are socio-technical problems.
What we are really seeing in AI is a reflection of who we are. So your book is really also asking us to look at ourselves too.
Absolutely. And lately with the adoption of AI systems, at first I thought we were looking
at a mirror, but now I believe we're looking into a kaleidoscope of distortion. And this is what I mean by that. I was looking at a
recent test done by Bloomberg News. They decided to test out a text-to-image generation system, a
generative AI system. And so they put in prompts for high-paying jobs, low-paying jobs. So CEO, judge, architect. They didn't look like me. I'll put it that way, right? And then when you had low-paying jobs, social worker, terrorist, inmate, that's where you saw an overrepresentation of men of color.
And I was particularly struck by the example of the prompt for judges. And so in the US, women make up around 30% of judges. And this particular
text-to-image model depicted women as judges no more than 3% of the time.
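As a quick worked comparison using only the two figures just cited (a back-of-the-envelope illustration, not part of the Bloomberg analysis):

```python
# Figures as quoted in the conversation: women are roughly 30% of U.S. judges,
# but the text-to-image model depicted women as judges no more than 3% of the time.
real_world_share = 0.30   # share of U.S. judges who are women
generated_share = 0.03    # share of generated "judge" images depicting women

distortion = real_world_share / generated_share
print(f"Generated images understate women judges by a factor of about {distortion:.0f}x")  # ~10x
```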
And so this is when I started thinking of this notion of a kaleidoscope of distortion,
because the technologies we believe to be bringing us into the future are actually taking us back from the progress already made, progress that in and of itself isn't yet enough.
There is this real fear that robots are going to take over the world at some point. You don't need to have super intelligent AI systems
or advanced robotics to have a real harm. A self-driving car that doesn't see you on the road
can be fatal and harmful. I think of this notion of structural violence, where we think of acute violence, there's the gun, the bullet, the bomb, we see that type of violence. But what's the violence of not having access to adequate health care? What's the violence of not having housing in an environment free of pollution? And so when I think about the ways in which AI systems are used to determine
who has access to healthcare and insurance, who gets a particular organ, you know, in my mind,
there are already, and in also the world we see, there are already many ways in which the integration of AI systems lead to real and
immediate harms. We don't have to have super intelligent beings for that. What power do
everyday citizens have in all of this? Because it feels like, okay, the power is in these big
companies and in the government taking steps to push for regulation?
I truly believe if you have a face, you have a place in the conversation about AI.
My own story started with sharing an experience,
an experience that felt a bit uncomfortable and was definitely embarrassing,
and I wasn't even sure if I wanted to share it.
But in sharing that experience, I realized I was not alone. And it encouraged others to share their stories of being excoded,
experiencing AI harm. So I would never doubt the power of your lived experience and sharing your personal story. So as you encounter AI systems, whether it's in your workplace, maybe it's in the hospital, maybe it's at school, you know, asking questions, right?
Does this system, why have we adopted this system?
Does it actually do what we think it's going to do?
Dr. Joy Buolamwini, I really appreciate this conversation.
I appreciate your knowledge and I appreciate this book.
Thank you so much.
Thank you so much for having me.
Joy Buolamwini, talking about her new book, Unmasking AI,
My Mission to Protect What is Human in a World of Machines.
Coming up, we listen back to Terry's interview with former First Lady Rosalynn Carter,
who will be laid to rest this week. She died last week at the age of 96. This is Fresh Air.
Former First Lady Rosalynn Carter will be laid to rest tomorrow in Georgia. She died last week at
the age of 96, two days after it was announced that she had entered hospice care. As First Lady,
Rosalynn Carter became known for her
humanitarian work and as a dedicated advocate for mental health care. Terry Gross interviewed
Carter in 1984 about her memoir, First Lady from Plains. Here's an excerpt of their conversation.
Is there any unofficial job description that exists for First Ladies?
Any guide that exists on paper anywhere?
No.
How did you know what was expected of you?
Well, I really didn't know what I could do when I got to the White House. Jimmy had been
governor, and I had worked on projects that I was interested in and had had to learn a
lot of things the hard way at home because there were no precedents.
We had a new mansion and no staff, and I had to start from scratch and learn how to do everything,
entertain the state legislature. In Washington,
I entertained members of Congress, entertained ambassadors from foreign countries.
I entertained heads of state.
So I thought that it would possibly be a little bit like the Governor's Mansion,
and it was.
And when I got there, I learned that a First Lady can do just about what she wishes to.
She can be a hostess at the White House
and not be involved in issues if she doesn't want to be
and not be involved in projects.
But there's so many opportunities when you get there
and so many things that you can do
that I think it would be a real waste
not to take advantage of those opportunities.
How did you choose your priorities in the White House?
Well, I had always, for many years, not always,
but for many years worked on mental health issues
ever since Jimmy was governor,
and I became interested in that,
in traveling around the state,
in the campaign and talking to people
about their problems, and that came up so often.
What will your husband do for my mentally retarded child
or my brother or sister or my relative who is mentally retarded
or emotionally disturbed, mentally ill?
And so I asked Jimmy about it one day,
and he said that if that was what I wanted to do and what I wanted to work on, it would
be good, because he'd been in the state senate, and he knew the need. And when we got to the
governor's mansion, I knew that's what I wanted to do. I'd always worked on women's issues
and been supportive of the Equal Rights Amendment. There were so many things I wanted to do when
I got to the White House, and I was just eager
to get started on my projects when I got there.
You were sometimes represented as being the power behind him who is very influential.
What did you think of that press coverage?
Well, some of it was ridiculous.
People who said that I made the decisions and stuff just didn't know Jimmy Carter.
He is a very strong person.
But I did go to cabinet meetings.
The reason I went to cabinet meetings was because Jimmy got so tired of me jumping on him every day
when he came home to see, why did you do this or why did you do something that I had seen on television
or read in the newspaper or heard on the radio?
And finally he said, why don't you come to the cabinet meetings
and then you'll know why we make these decisions or why we do these things. So I went in. I was
very comfortable. I sat in the row around the back of the room with the secretaries who were there
and learned just generally what the administration was doing, which was really important to me
because I went out into the country. People asked me about what they were doing.
I never knew details, but I could tell them generally
what the administration was trying to do.
And also I could see negative news stories
and not worry about them if I knew the true facts about something.
It was good for my own peace of mind.
And if you live there for four years with all of the criticisms,
you have to be very confident that you're doing what's right.
And I needed to know that. I needed, for my own benefit,
to know that Jimmy was doing what he was doing and why he was doing it
and that it was good for the country and best for the country.
Were you criticized at all by cabinet members who felt uncomfortable
that the president's wife was sitting in on the meetings?
Never.
What about from the public or the press? Again, the we didn't elect you syndrome.
What are you doing there? Well, as I said, I'd learned very early that I was going to be criticized. Even if I had stayed in the White House and poured tea for four years, I would
have been criticized. So why not do? I don't know anybody. I really don't think I know anybody
who wouldn't go to the cabinet meetings and sit in and listen if they had an opportunity.
You've described yourself as a political partner with your husband, Jimmy Carter.
Did you have to work hard to achieve that partnership?
It just kind of developed from being in the Navy and learning to take care of things while he was gone,
and then coming home and working and building the peanut business together and then going to Governor's Mansion and doing my thing.
It just developed a mutual respect for what the other could do.
Rosalynn Carter, I want to thank you very much for talking with us.
Good. I enjoyed it.
Rosalynn Carter from an interview recorded in 1984.
She'll be laid to rest tomorrow at her home in Plains, Georgia. On the next Fresh Air, how did evangelicals become Donald Trump's most
unflinching advocates? That question plagued Tim Alberta as a journalist and as the son of an
evangelical pastor. We'll talk about searching for an answer by traveling to evangelical churches
around the country. His new book is about American evangelicals in the age of extremism.
I hope you can join us. To keep up with what's on the show and to get highlights of our interviews,
follow us on Instagram at NPR Fresh Air.
Fresh Air's executive producer is Danny Miller.
Our technical director and engineer is Audrey Bentham.
Our interviews and reviews are produced and edited by Amy Salit, Phyllis Myers,
Sam Brigger, Lauren Krenzel,
Heidi Saman, Anne-Marie Baldonado,
Therese Madden, Thea Chaloner,
Seth Kelly, and Susan Nyakundi.
Our digital media producer is Molly Seavy-Nesper.
Roberta Shorrock directs the show.
I'm Tanya Mosley.