Unlocking Us with Brené Brown - Dr. S. Craig Watkins on Why AI’s Potential to Combat or Scale Systemic Injustice Still Comes Down to Humans
Episode Date: April 3, 2024. In this episode, Brené and Craig discuss what is known in the AI community as the “alignment problem” — who needs to be at the table in order to build systems that are aligned with our values as a democratic society? And, when we start unleashing these systems in high-stakes environments like education, healthcare, and criminal justice, what guardrails, policies, and ethical principles do we need to make sure that we’re not scaling injustice? This is the third episode in our series on the possibilities and costs of living beyond human scale, and it is a must-listen! Please note: In this podcast, Dr. Watkins and Brené talk about how AI is being used across healthcare. One topic discussed is how AI is being used to identify suicidal ideation. If you or a loved one is in immediate danger, please call or text the National Suicide & Crisis Lifeline at 988 (24/7 in the US). If calling 911 or the police in your area, it is important to notify the operator that it is a psychiatric emergency and ask for police officers trained in crisis intervention or trained to assist people experiencing a psychiatric emergency.
Transcript
Hi, everyone. I'm Brene Brown, and this is Unlocking Us. No, I have not started smoking
two packs of cigs a day. I've just got like a raspy cough. Let me just tell you,
this conversation, not only was my mind blown as I was having it, I loved watching people on my team
listen to the podcast and look at me and just make the little mind-blown emoji thing.
This is such a cool and scary and hopeful conversation. It's just,
I don't know that I've ever learned so much in 60 minutes in my career.
So I'm so excited that you're here. This is the third episode in a series that we're doing
that I call Living Beyond Human Scale. What are the possibilities? What are the costs? What's the
role of community and IRL relationships in a world that is like AI, social media, 24-hour news. We're being bombarded
by information at the same time people are fighting for our attention and will use any
means necessary to get it. I don't know about you, but I'm tired. This series is going to be
kind of unique in that it's going to cross over between Unlocking Us and Dare to Lead. I'm so glad you're here. So I'm talking to Professor S. Craig Watkins,
and he just has a gift of walking us into complicated things and making them so straightforward
and then asking really unanswerable, important questions. Maybe they're answerable,
not by me, but it's just incredible.
Let me tell you a little bit about Craig before we get started.
So S. Craig Watkins is the Ernest A. Sharp Centennial Professor and the Executive Director
of the IC Squared Institute at UT Austin, the University of Texas at Austin. Hook 'em.
He is an internationally recognized scholar who studies the impact of computer-mediated
technologies in society.
He has been a visiting professor at MIT's Institute for Data Systems and Society.
He's written six books.
He lectures all over the world around social and behavioral impacts of digital media, with
particular focus on matters related to race and systemic inequality.
He leads several research teams
that are global, and he's funded by the NIH, the National Institutes of Health.
He's leading a team that's using one of the largest publicly available health data sets from the National Institutes of Health to develop models to understand the interactions between demographics, social factors, and the distribution of chronic disease and mental health disorders.
His full bio will be on the podcast page on brenebrown.com.
Comments will be open.
Maybe if you've got some really good questions, I could pry some more answers out of him.
He was really generous with his time, but I'm really glad you're here.
On a very serious note, I want to say that Craig and I talk about
how AI is being used across healthcare. And one topic that we are discussing is how AI is being
used to identify suicidal ideation. So if this is a topic for you that has got a big hook,
I would suggest not listening. And always, if you or a loved one is in immediate
danger, please call or text the National Suicide and Crisis Lifeline. In the U.S., it's just 988.
It's open 24-7. And if you need to call the police in your area, which would be 911 in the U.S.,
but if you're calling the police in any area, it's always important to notify the operator that it's
a psychiatric emergency and ask for an officer trained in crisis intervention or trained to
assist people experiencing a psychiatric emergency. One of the kind of quotes from
this before we jump in that really stood out to me that I just cannot stop thinking about
is Craig saying, when we started unleashing AI systems in high-stake environments
like education, healthcare, criminal justice, without any proper understanding, without any
guardrails, without any policies, without any ethical principles to guide how we were going
to go about incorporating these decisions into these high-stake environments, you have to ask,
what were we thinking? Let's jump in.
Craig, welcome to the podcast. I'm so happy you're here.
Thank you for having me. So we'd love to start with you sharing your story with us. And normally, especially researchers are like, my research
story? And I'm like, no, like you were born, little baby story, like you were born where?
How did you get into this? Tell us about yourself. Yeah, absolutely. So I'm a native Texan,
which is rare among the faculty at the University of Texas at Austin, actually.
Grew up in Dallas, the youngest of three. My mother and father neither went to school beyond high school, but had high aspirations for their children, myself included. And from a very early age, my mom essentially socialized me to think big and to put importance on education. And so that was something that was always a significant part of my life. As a very young child, I can remember always just writing,
literally. I learned in second or third grade how to physically make books with the spine and
the front cover, back cover, and would literally physically make books and then just populate those
books with stories, with characters, with images. And so I've always had an interest in writing.
I've always had an interest in just human behavior
and just observing human behavior.
And over the course of my sort of academic training,
that became a lot more formal.
Undergraduate University of Texas,
where I currently live in Austin and teach at UT Austin,
graduate school, PhD, University of Michigan,
lived on the East Coast, Philadelphia,
and New York for a while,
but developed a deep
interest in wanting to understand human behavior, and human behavior in particular in relationship
to technology. So it's something that I've always just had a natural interest in. And as you can
imagine, over the last 20 years or so of my academic career, just the significant and tremendous
evolutions in technology and what that means for human behavior has been just a nice
alignment for me from a natural sort of, you know, curiosity perspective, formal academic training
perspective. And so it's worked out pretty well for me in terms of how the world has evolved
and how my interest in the world has continued to grow as well.
When I think of AI and I think of research and I think of human behavior, which is my interest in the intersection, I think of you.
And what I love is that your mom said, think big.
And you study artificial intelligence and the biggest stuff in the whole world.
Like, you went there.
Who could have known, right?
You know, obviously, no idea back in the 80s or so, 90s or so, what the world was evolving towards.
But certainly, it's been a really interesting journey and excited along the way, definitely.
I am so glad you're with us.
And we got a chance to talk a little bit before the podcast.
And you know that I have a slight obsession with your work.
And I've tried to divide it up into two super related things.
One thing, and I will put links on the show page
to all of Craig's work and where you can find him, his LinkedIn, his books.
What I really want y'all to watch that has blown my mind that I want to talk about and dig into
is your TEDxMIT talk. So the TED talk that you gave at MIT. Before we get started, can you tell us a little bit about
the project at UT that you're on and also kind of what you're doing also with MIT and the
collaborative projects that you're working on as context for our conversation? Yeah, absolutely. So
at UT, I wear a few different hats. I'm the director of sort of a think and do tank called the IC Squared
Institute. Under my leadership over the last year and a half or so, we've dug pretty deeply into the
intersection between innovation and health and AI in particular. And so this idea of bringing
innovative thinking, innovative partnerships to how we understand health and well-being through
the lens or through the perspective, the possibilities, and also the perils of artificial intelligence.
UT also has three grand challenges, and these are initiatives that are funded by the Office
of Vice President for Research, endorsed by the president of the university.
And one of those three grand challenges is good systems.
And good systems is a reference to what we oftentimes refer to as ethical and responsible
artificial intelligence. And I'm one of the co-principal investigators for the Good Systems project,
focusing primarily on thinking about the ways in which we can design and deploy AI systems in ways
that mitigate rather than exacerbate inequities like race and gender inequity, for example.
And my work in AI kind of works on two
interesting ends of the spectrum. We have teams that are doing a lot of the computational work
that is working with large data sets, for example, from the National Institutes of Health,
trying to understand, right, how do we design models and use data to understand with a little
bit more empirical nuance what's happening from a health disparities perspective. So we can get
into a little bit more of that in detail if you'd like. But at the other end of that spectrum is the human dimension.
And that is sort of thinking not only about AI from a computational sort of mathematical
perspective, we're thinking about it strictly as a technical or computational problem,
but understanding that there are significant human and ethical questions at stake. And so
we spend a lot of time talking to just people in the
front line, particularly people in healthcare, getting their sense of AI, their concerns about
AI, their aspirations for AI. And so I feel like the way in which we approach the work gives us
a kind of multidimensional perspective that helps us to understand, right, all of these sort of
intersecting questions that are driving conversations today around AI across many domains, and certainly as it relates to AI.
And I had a fortunate opportunity about a couple of years ago to spend a year at MIT as a visiting professor,
working with teams there at the Institute for Data Systems and Society, where they have launched similar initiatives, right,
where they're trying to build teams that are trying to understand the more complex and nuanced ways in which race and racial discrimination, systemic racism,
influence society, and trying to sort of build up models and teams and collaborations that allow us
to understand these dynamics in much more sophisticated ways. And so we continue to
work with some of the teams there, particularly in the healthcare space and health and well-being space. So let's start with, I want to ask some questions about what I think I learned from you
and dig in. So one of the things that's been scary in my experience, I spend probably the vast,
vast majority of my time inside organizations. There is a mad scramble right now within organizations to understand how to use AI.
I'm using ChatGPT a lot right now. Everyone's kind of experimenting and thinking about it.
And I've got one daughter in graduate school. I've got a child graduating from a young person,
not a child, but a young person graduating from high school. We talk about the future all the time. One of the things that was very interesting to me about your work is when I
go into organizations, I get really nervous because what I hear people say is, oh, we're
going to use AI. That will control for all the bias in everything from hiring to how we offer services to how we evaluate customers.
And my sense is that it's not going to work that way. My fear is, my fear just from like
using my own ChatGPT prompts, my fear is we may not eliminate bias. We may scale it. We may just bring it to full scale. And things I'm
thinking about are like algorithms for hiring that people have been using for five years and
show that there's tremendous gender bias or race bias in those hiring algorithms. So tell me about the intersection of AI, scale, and fairness.
Yeah, so we have a day or so, you said, for this conversation?
I was going to say, I'm going to start with a small.
Let me tell you something.
There's a couple of things in that MIT TED Talk that freaked me out and blew my mind.
And the first one was completely reconceptualizing how I think about fairness.
So yeah, it's just crazy to me.
Yeah, what I was trying to get at in that talk and in my work is that it's only in academics where I could take a notion or a concept like fairness and render it almost impossible to understand. Fairness just seems really easy, right?
Simply fairness.
But what's happened, Brene, is that in the machine learning and AI developer community, they have heard the critiques that these systems that they're building inadvertently, right?
And I don't think any of this is intentional, but the system that they're building, if it's a hiring algorithm, if it's an algorithm to diagnose, right, a chronic health condition, an algorithm designed to
determine whether or not someone will be a repeat offender or default on a loan, that what we've
come to understand as we have deployed these systems in higher and higher stakes environments,
education, healthcare, criminal justice, et cetera, that they are inadvertently sort of
replicating and sort of automating the kinds of historical and legacy
biases that have been a significant part of the story of these institutions. And so what the
developers have come to understand is that this is something obviously that we don't want to
replicate. We don't want to scale. We don't want to automate these biases. And so they've come up
with different notions of fairness, right? So how can you build an algorithmic model that functions in a way that performs this task, whatever that task is, in a way that's reasonably fair?
And if we take race and ethnicity, for example, this might be one way of thinking about this.
And so think about this, for example.
So take race and what we've come to understand about racial bias and racial discrimination, you know, trying to build a model.
And so what some developers argue is that the way in which you mitigate or reduce the likelihood of this model performing in a way that's disproportionate, right, to how it might perform its task across race is just to try to erase any representation or any indication of race or ethnicity in the model.
So you strip it of any data, any notes, anything that might reflect race.
And so this idea of creating a model that's racially unaware,
and this idea by being racially unaware,
it then reduces, at least from this perspective,
the likelihood of reproducing historical forms of racial bias.
But on the other end of that, on the flip side of that argument, right,
is saying, is that possible?
And should we build models that are, in fact, race aware? In other words, that you build into the model some consideration of race, some consideration of how race maybe develops certain kinds of properties or certain kinds of features, respective of the domain of interest that you're looking at. So it's just one example of, you know, how developers are taking different approaches to trying to cultivate models that operate from a perspective that's more fair.
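To make the two approaches concrete, here is a minimal, hypothetical Python sketch; it is not from the episode or from any of Dr. Watkins's projects, and the data and column names are invented. It contrasts a "race-unaware" model, which drops the protected attribute before training, with keeping that attribute available so outcomes can still be audited by group.

```python
# Hypothetical illustration of "fairness through unawareness" vs. a race-aware audit.
# The data and column names are invented for this sketch.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy applicant data: a protected attribute, two ordinary features, and a historical label.
df = pd.DataFrame({
    "race":       ["A", "A", "B", "B", "A", "B", "A", "B"],
    "years_exp":  [1, 5, 2, 6, 3, 1, 7, 4],
    "test_score": [60, 85, 62, 80, 70, 55, 90, 75],
    "hired":      [0, 1, 0, 1, 1, 0, 1, 1],
})

# "Race-unaware" approach: strip the protected attribute before training.
X_blind = df.drop(columns=["race", "hired"])
model = LogisticRegression(max_iter=1000).fit(X_blind, df["hired"])
df["predicted"] = model.predict(X_blind)

# Race-aware audit: keep the attribute around so predictions can still be
# compared across groups (impossible if race was never recorded at all).
print(df.groupby("race")["predicted"].mean())  # selection rate per group

# Caveat, and Craig's point: even with the column dropped, features like
# test_score can act as proxies for race, so "unawareness" is no guarantee.
```

The caveat in the last comment is exactly where the imaging study he describes next comes in: the signal can survive even aggressive attempts to strip it out.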
So here's what's really interesting.
So there was a study that some colleagues of mine and others did at MIT where they were
looking at medical images.
And they essentially trained a model to see if it could identify the race and ethnicity of the persons the medical images belonged to if you strip the image of any explicit markers of race or ethnicity. And the model was still able to predict, with a high degree of accuracy, the race of the patient who was imaged in this particular
screening. So then they did some other things, right?
They said, well, how about if we account for body mass index?
How about if we account for chronic diseases?
Because we know that certain chronic diseases distribute along racial and ethnic lines.
How about if we account for organ size, bone density?
You know, all of these things that may be subtle or proxy indicators of race.
And so they stripped the images, they stripped the database
of all of those kinds of indicators, and the model was still able to predict the race and ethnicity
of persons represented in these images. And then they even went so far as to degrade the quality
of the image, and the model could still, with a high degree of accuracy, identify the race and
ethnicity of the patient represented in that image. The story here, right, is that
these models are picking up on something, some feature, some signal that even humans can't detect,
human experts in this space can't detect. And what it suggests, right, is that our models are
likely understanding and identifying race in ways that we aren't even aware, which suggests, right,
that how they're performing their task, if that's predicting a repeat offender, if that's predicting who's likely to pay back a loan or not, if that's
predicting who's likely to graduate from college in four years or not, that is picking up on
racial features and racial signals that we're not aware of, suggesting that it may be virtually
impossible to build a quote-unquote race-neutral or race-unaware algorithm. Well, let me ask you this. My reflexive response to that is
I'm not sure that racial neutrality, like gender neutrality, is a good idea because
it gives a little bit of the colorblind stuff and gives a little bit of the assumption that everyone starts from the same place in terms of opportunity and exposure.
Where am I wrong here?
You're not.
And it's precisely part of the problem with how these fairness models and fairness frameworks have been developed.
So the intentions are good, but I think the sort of deep awareness
and understanding of the complexities
of systemic inequality, systemic racism, for example,
aren't very well understood.
And so as a result,
what I was trying to suggest in that talk
is good intentions aside,
that these models in the long run
are likely going to be largely ineffective
precisely because they
operate from the assumption that racial bias and discrimination are interpersonal, that
they're individual as opposed to structural and systemic.
And so in that sense, they're good initial efforts.
But I think if we really want to get at this problem in a way that's going to be enduring
and impactful, it's going to require a little bit more sophistication in thought.
And it sort of speaks to what I see as another evolving development in the whole AI machine
learning space is that 10, 15 years ago, certainly 20 plus years ago, these were problems that were
largely being addressed by a very limited range of expertise, engineers, computer
scientists, maybe data scientists. But what we've come to understand now is that it requires much
more multidisciplinary expertise. You need social scientists in the room. You need humanists in the
room. You need designers in the room. You need ethicists in the room. And sort of understanding
now that the complexity of these systems requires complexity of thought, multidisciplinary kinds of expertise in order to address them. And hopefully what that talk will encourage, and I
think, you know, talks like that and research like that is encouraging AI and machine learning
developers to bring more people into the conversation in order to say, hey, if this
model is attempting to address injustice in the criminal justice system, injustice in how we deliver healthcare and health services, we've got to have domain expertise in the room, right?
We just can't have computational expertise in the room.
We need domain expertise helping us sort of think about all of these different variables
that are likely going to impact any model that we develop, data sets that we compose,
procedures that we articulate as we begin to develop this system.
And I think that's something that we're beginning to see more and more of.
And it's certainly something that we're doing here at UT Austin with the Good Systems Grand
Challenge, which is really saying, hey, bring together multidisciplinary expertise to solve
these problems.
This is no longer a problem that's adequately solved by just computer scientists or engineers.
Doing that has gotten us to the point now where
we see that that's insufficient, it's inadequate, and increasingly indefensible.
Ooh, okay. Let me ask you a question. So I want to just define for everyone listening,
when you say domain expertise, you mean not just people who can code and understand algorithms, but a behavioral scientist,
someone who has got a domain expertise in structural or institutional inequality, so an ethicist.
So just for fun, what would your perfect table look like?
Can I give an example of a table we're trying to put together literally as we speak?
Yes, I love it.
So I'm part of a team that's just been funded by the National Institutes of Health to explore the development of AI and machine learning models to help us better understand what's driving these rapidly increasing rates of young African-Americans committing suicide. So we've all heard about the youth sort of mental health, behavioral health crisis in this country,
and it gets expressed in a number of ways, higher rates of depression, general anxiety disorder, loneliness,
and significantly and tragically, suicidal ideation and increasing rates of suicide.
We know that suicide is the second leading cause of death for young people between the ages of like 16 to 24 or so.
And so we received this funding from NIH to try to say, could we develop models that may be able to identify, right, what are some of the social risk factors?
What are some of the behavioral factors that might contribute to or even be predictive
of someone contemplating or trying to commit suicide?
So this is a partnership with the med school from Cornell.
So we've got engineers and researchers, computational experts from Cornell,
and here at UT Austin.
Now, we could have approached that project and said,
okay, let's just put our computational expertise,
our computational talent together, and sort of develop these models
and see what we could generate, write a paper,
you know, hand that over to the NIH and say, you know, here it is, good job done, we're finished. But we said, you know, that isn't really satisfactory. That isn't sufficient.
We've got to bring other people to the table, other people into the conversation.
And so part of what we also proposed to the National Institutes of Health is that we would
compose an advisory board that consisted of behavioral health specialists, that consisted
of community stakeholders,
people who are on the ground, who are talking to, live with, and understand what young African
Americans are going through, and could give our computational team expertise, guidance,
and frameworks, ideas that we might not ever consider simply because we don't have the deep
domain expertise that they have in that space. And the idea, right, is they may be able to give us feedback on, is our data set
appropriate? Is it proper? Is it going to give us sufficient insight? Is it going to be able to
perform a task, predictive tasks that are relevant to the problem as they see them? And so it's this
idea of building AI with rather than for people, with rather than for distressed populations.
And when we began to do that, when you began to bring that outside external expertise into the room at the table, you get a very different perspective of what's happening on the ground.
You get a very different perspective of what the possibilities and limitations of your model are.
And so that's how we've approached the work and what we think is unique about the work.
But I also think this is becoming kind of a new standard, right?
It's certainly a new standard in terms of how the National Science Foundation, the National
Institutes of Health are funding future AI research.
They're beginning to require researchers to say, hey, who are you talking to?
Are you talking to people beyond your laboratory?
Are you talking to people beyond your sort of academic expertise? Because if we're going to build AI that matters,
AI that impacts the world in a significant way, we've got to expand who's contributing
to that conversation and who's driving how we design and deploy these systems in real-world
contexts. As a social worker, you're speaking my normal language, which is make sure the people at the table making
decisions for clients, make sure those include people with lived experience, community on the
ground experience. It's got a very kind of old-school organizing feel to it. We want the quantitative,
but we also need the qualitative. It reminds me of Karen Stout, who was my mentor
when I was getting my PhD. Her term was femicide, the killing of women by intimate partners.
And what's coming up for me in this conversation right now, Craig, is I was the first person with
a qualitative dissertation at my college, and it was like a shit show. They really fought it.
But one time she told me, because she was a domestic violence researcher, she said,
it would be so great if all that mattered were the stories.
But we have to have the quantitative information.
We have to have scope and scale and the numbers.
And it would be so great if all that mattered were the numbers.
We can't move anything for funding or really understanding
without the stories. So is it fair to say that there's a new awareness that lived experience,
on-the-ground experience, behavioral science, humanism, as you mentioned,
these are relevant for building AI that serves people?
You know, absolutely.
I mean, obviously, right,
the critical currency in all of this
when we talk about AI and machine learning is data.
Right.
And I think we are approaching techniques
and developing practices
that are beginning to understand
different kinds of data
and how we can leverage these systems to probe different kinds of data to
yield outputs, outcomes, results, findings that could be extraordinarily revealing. And so, for
instance, in this suicide project that I mentioned to you, actually some of the kind of critical
piece of data that we're using is what's oftentimes referred to in the field as unstructured data.
Unstructured data is another way of thinking about stories, right?
What stories are healthcare professionals on the ground saying about young African-Americans?
What stories are law enforcement officials saying about young African-Americans?
So we're looking at a data set, right?
A registry of violent deaths that are beginning to try to capture some of that unstructured data,
more qualitative data.
And so we now have computational techniques.
We now have systems that can actually begin to analyze that language.
What words are they using?
What environmental triggers are they identifying?
What behavioral triggers or what social support triggers are they identifying as they describe
a condition, as they describe a person and their journey, and how it might help us to
understand how that led to suicide.
And so being able to mine unstructured data, and we're seeing this right via social media,
we're seeing this via diaries that people keep.
There's an interesting researcher who studies suicide, and he studies suicide by studying
suicide notes, right? What notes do people leave behind as they contemplate taking their own lives? And this idea
of studying those notes, what he refers to as sort of the language of suicide in terms of what are
people communicating? How are they communicating that? What sentiments are they expressing?
What keywords are they using? What are the linguistic markers that might help us to
understand in a very different and unique way? You know, what are those signals that may help
us to see and identify, before someone reaches crisis, how to intervene and how to prevent some of these things from happening?
But I'm increasingly fascinated by how you might be able to mine stories as a form of data to understand human complexity, to understand the human condition in extraordinary ways.
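As a rough illustration of what mining unstructured text can mean at its simplest, here is a hypothetical Python sketch. The notes and the marker lexicon are invented, and this is not the method used in the NIH-funded project or by the researcher Craig mentions; real systems use clinically validated markers and far richer language models.

```python
# Hypothetical sketch: scanning free-text notes for risk-related linguistic markers.
# The notes and the marker list are invented; real lexicons come from domain experts.
from collections import Counter

notes = [
    "Patient reports feeling hopeless and isolated since losing housing.",
    "Describes strong family support; sleeping better this week.",
    "States he is a burden to everyone and sees no way out.",
]

risk_markers = {"hopeless", "isolated", "burden", "no way out"}

def marker_counts(text: str) -> Counter:
    """Count which risk markers appear in one free-text note (case-insensitive)."""
    lowered = text.lower()
    return Counter(m for m in risk_markers if m in lowered)

for note in notes:
    print(marker_counts(note) or "no markers found", "->", note)
```

A real pipeline would go well beyond keyword matching, but the shape is the same: language in, structured signals out, with domain experts deciding what counts as a signal.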
Now you're speaking my language.
I'm not a zero and ones kind of girl,
but man, when you talk about unstructured qualitative data,
now we're singing from the same hymnal here.
And I know you need both,
but I do think there's so much language and story as the portal that we share, I think, into each other.
And so I think it's exciting.
One of the things that blew me away, again, about the TEDxMIT talk, first of all, can I just say thank you for speaking to a room of academics in a non-academic way?
Like, I've had my family listen to it, and they were like, well, is this a professor talking to professors?
Because I need a heads up.
Like, we need a closed caption that translates.
And I said, it is a professor talking to professors, but the reason why I love your work
is I always think a mark of a genius is someone that can explain conceptually complex things
in a way that's understandable to everyone. One of the things that you explained in this talk that was both vitally important,
I believe, but also really scary is when you talk about, you talked about the different types of
racism. So interpersonal racism, pretty easy to recognize between people, language,
it's fairly obvious. Then institutional racism and then structural racism.
And you built a map of these things that look like the London Underground. It was so complicated.
Tell me about, this is one of the things that keeps me up at night, the increasing use of AI in policing
and trying to build computational and mathematical models that account for
not just interpersonal racism, but institutional and structural racism and the complexity of those
things. Again, another easy question for you.
Yeah, yeah, absolutely.
I should have ate my Wheaties this morning.
No, this is a great question.
I think part of what I was trying to get at in that talk is,
so I'm a social scientist by training.
And so I've had the, I guess I could say,
fortunate opportunity just to have been
immersed in the academic literature for 20, 25 plus years or so throughout my academic training
and career, even longer actually. And you kind of understand that systemic discrimination
is very powerful insofar as we begin to understand how racial discrimination in one domain, let's say
neighborhood and housing, has implications for how racial disparities get expressed in other domains.
Education, for example, access to schools, access to high quality schools, what that then means in
terms of access to college, access to employment, and that these things are sort of woven and interconnected in ways
that are so intertwined, right?
The web of degree of how these things are intertwined with each other is so complex
and so dynamic.
And this is, you know, part of what I was, you know, my time at MIT and beyond, just,
you know, talking to people in these spaces from a computational perspective, how is it even possible to develop?
What kind of data set would you need to really begin to understand the interconnectedness of all of this?
How, just where someone lives, what the implications are for health disparities, what the implications are for access to employment, access to education,
access to other kinds of resources. And it's just extraordinary how these different things are sort
of interlocked, let's say how they're connected, and how one disparity in another is sort of
reciprocal and connected to another disparity in a whole different domain. And I don't know.
And so I consider myself more so than anything, a learner in this space.
And so I feel like I'm still learning, but I haven't landed on a solution or even the notion
that it's possible from a computational perspective to deal with the utter complexity of all of this.
What we started doing at MIT was just isolating very specific domains, healthcare, discrimination in housing, and
trying to see if we could come up right with predictive models, if we could come up with
computational models that allowed us to understand with a little bit of more empirical rigor
and specificity what was happening in a specific domain.
But when you start trying to understand how what's happening in that domain is sort of
connected to what's happening in another domain, building those kinds of models, building those kinds of data sets.
I'm not even quite sure if we understand or have the right questions to ask to be able to really
help us begin to start cultivating or developing the right kinds of problem definitions, the right
kinds of data preparation, the right kinds of modeling that might allow us to understand with even more complexity what's happening here. And so the idea of building models for what's called
predictive policing and understanding, right, that oftentimes what's happening in predictive policing
is simply sort of scaling the kinds of historical biases that have typically defined
the ways in which police go about their jobs, right? Public safety, how they manage the streets,
how they manage certain communities
vis-a-vis other communities.
And there's this kind of self-fulfilling prophecy, right?
That if you understood crime as existing
in these zip codes and these zip codes
and how you've arrested people
and how you've deployed resources
to sort of police those communities,
the idea, right, is that if you are allocating resources,
surveilling communities, policing communities in ways that you believe are required, that it's going to lead to higher
arrest rates. It's going to lead to a whole lot of other outcomes. And so you end up just sort
of repeating that cycle as you begin to start relying on these historical trends through your
data signals to say, hey, this is where we should be allocating resources. This is where we should
be policing. This is where we should be addressing concerns about this or concerns about that in terms of criminal activity,
criminal behavior. And so what you end up creating, right, are oftentimes systems that are not
necessarily predictive of outcomes that we think are important, but are in some cases, right,
instead predictions of ways in which we have tended to behave in the past. And so a crime prediction algorithm becomes not an algorithm that predicts who's most likely to be a repeat offender, but instead it becomes a model that predicts who's most likely to be arrested.
And that's a very different prediction, a very different outcome that then requires a very different kind of institutional response.
Who's more likely to commit a crime versus who's more likely to be arrested?
Two different questions. And that's increasingly referred to today in the AI community as the alignment problem. And while I don't agree fully with how the alignment problem is currently being defined, it's this notion, right, that we are
building systems that are leading to unintended consequences, consequences that are not aligned
with our values as a democratic society. And so this idea, right, that when you build a crime
risk prediction model, you want that model, right, to perform at a very high level, a very
reliable level, and you want it not to discriminate or be biased.
And what we've seen instead is that these kinds of models, in fact, are biased, that they do discriminate. And so, therefore, they're sort of misaligned with our values as a society in terms
of what we want them to do based on what they really do when they get applied in real-world
contexts. I'm kind of stuck here in a way. Who's the we?
Because I think of that quote that I thought of a lot after George Floyd's murder that the systems aren't broken.
They're working exactly as they were designed.
So are the folks that are defining fair and working on alignment, have they gone through
some rigorous testing to make sure they're the ones who should be defining what aligned and fair is? Do you know what I'm saying? the depth expertise to sort of understand how and what we need to be aligning society to do,
right? And so you can make a case, right? And this is part of the problem that I have with the ways
in which the alignment problem has been defined. You mentioned, for example, hiring algorithms.
And so there was a study, this report a few years ago about the Amazon hiring model that was biased
against women, right? When they were looking to hire engineers and they ended up basically
discontinuing any development and certainly any deployment
of that model because all of the experiments suggested, right, that it was biased against women, that it was picking up on certain linguistic signals and markers in resumes that oftentimes
favored men, male candidates, as opposed to female candidates.
And some could say, oh, well, certainly we can't have that, and that's not what we intended
the model to do. Well, the real question, right, is that model was aligned quite neatly with the ways in which
Amazon historically had hired. And so there was nothing exceptional. There was nothing
unique about that model. In fact, it was quite aligned. It was quite consistent with how
Amazon had hired, which is why it was developing, right,
the sort of predictive outcomes that it was developing, because clearly Amazon had a history
of hiring male engineers vis-a-vis female engineers. And so in that sense, is it an
alignment problem or is it another problem, right, that we need to understand in terms of what's
happening here? And I think your articulation, right, is really where we need to be going as
we sort of understand and try to develop these concepts and frameworks that might help give us guide rails, give us principles, give us concepts that we can then begin to integrate into our practices to help us better understand and address these issues as they sort of play out and materialize in the ways in which these systems get deployed. And so oftentimes, even in the risk
assessment models where it turns out, right, the important study that was published by ProPublica
a few years ago about the COMPAS model and how it was discriminating and biased against Black
defendants vis-a-vis white defendants. And again, the question is, this model was in some
ways performing in ways that were quite aligned with, historically, how judges had made decisions about who was most likely to be a repeat offender, and therefore the
kinds of outcomes that they were making based on those sort of human conceptual models.
And what we've done is to sort of create a system where we are unintentionally designing
machine models, machine learning models, that are behaving in similar ways, making similar decisions, but simply in ways that are scaled, simply in ways that are automated, simply in ways
that are accelerated beyond anything that we could have ever imagined.
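Here is an invented simulation, not Amazon's model or COMPAS, of the dynamic Craig is describing: fit a model to historical decisions that held one group to a higher bar, and the model reproduces that bar, now at scale.

```python
# Invented simulation: a model trained on historically biased decisions learns to
# repeat them. This is not Amazon's hiring model or COMPAS, just an illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)          # stand-in for a protected attribute (0 or 1)
skill = rng.normal(0, 1, n)            # equally distributed across both groups

# Historical decisions: same skill, but group 1 was effectively held to a higher bar.
hired = (skill - 0.8 * group + rng.normal(0, 0.3, n)) > 0

X = np.column_stack([skill, group])    # the historical record carries the group signal
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: historical hire rate {hired[group == g].mean():.2f}, "
          f"model's predicted rate {pred[group == g].mean():.2f}")
```

The two rates come out very close for each group: the model is "aligned," just with the historical practice rather than with the outcome we actually care about.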
Yeah, I mean, there are two things that come to my mind when you say that.
One is, the person with the most power defines alignment. And if it's really about aligning to just causes, then it reminds me of
like the Jack Nicholson line, like, you can't handle alignment. You know, you can't handle
the truth. Like, if you really want to align to the highest ideals of democracy,
that is going to shift power in ways that the people who hold it will not be
consistently comfortable is the nicest way I can put that. I think you're absolutely right. And
not only are they not comfortable with it, I don't even think they can conceptualize what
that would look like even. In other words, it would require recognition of a kind of expertise and tapping into communities and experiences that they typically might have little, if any, connection to.
And so how their notion of alignment gets defined and how they proceed with trying to address the quote-unquote alignment problem in AI, I think will continue to suffer from these kinds of constraints for the very reasons that we've identified here.
Wow.
So now we're not going to say, you know, instead of saying the systems are not broken, they're working just like they were built.
I don't want to in 10 years say the algorithms are not broken.
They're working exactly as designed.
Absolutely.
Yeah, absolutely.
And this is the interesting thing, right, about systemic sort of structural inequality.
I don't think any of this is necessarily intentional. I'm not aware
of any process where they intentionally design these systems to perform in these ways that lead
to these sort of disproportionate impacts. But nevertheless, right, by virtue of how they
define problems, by virtue of how they build and prepare data sets, how they build and develop the architecture for
models, that these kinds of biases, implicit though they may be, end up obviously significantly
impacting the ways in which these systems perform, therefore undermining their ability to address
these disparities in any meaningful or significant way. And I think that that's important to note is
that just by virtue
of doing what they tend to do, what they've been trained to do, how they function and operate as
systems, as practices, as customs, oftentimes end up reproducing these kinds of social,
economic disparities in ways that are even unbeknownst to them.
Okay. Can I not push back, but push in on something?
Absolutely.
I totally agree with your hypothesis of generosity, that there's nothing like the Machiavellian, ha, ha, ha, we're going to build this, and we're going to really exploit these folks.
But I also wonder, you know, weirdly, last week I was in Chicago interviewing Kara Swisher for her new book on technology, and she had an interesting hypothesis.
In the interview, she said, what is the opposite of diversity, equity, and inclusion? Homogeneity,
inequality, and exclusion. Right. And so she said, one of the things that she has witnessed as being a tech reporter on the ground for 20 years in early days is that the folks that are kind of running Meta and Amazon and some of these companies have not had the lived experience of bias or poverty or pain, and therefore, maybe they didn't build something evil to start with, but certainly the
lack of exposure to different lived experiences, maybe it's not intentional, but I don't think
the impact is any less hurtful. Does that make sense? I think you're right, and I think that
was a point that I was trying to make, that even though it isn't necessarily
intentional or by design, it doesn't reduce the nature of the impact and the significance.
Yeah.
So take, for example, just about a year, year and a half ago, I was having a conversation
with a CEO from one of the big AI hiring companies.
And they had come to realize, the company had come to realize that how they had built
their system and how employers were finding potential employees, right, through these big platforms that are now doing a lot of the hiring for companies and for different organizations.
And what they've come to realize are the ways in which racial bias are baked into the hiring process. But they had no way internally, right, in terms of their system
to really understand empirically, right, how this is playing out because they never thought to ask
when people upload their resume to this company. They weren't required to declare, for example,
their racial or ethnic identity. So then it becomes hard to know, right, you know,
who's getting employed, who's not getting employed, who's getting interviewed, who's not getting interviewed. If we want to do some type of cross-tabulation,
how are men and women comparing vis-a-vis these different dimensions? How are Blacks,
Whites, Asians, Latinx comparing vis-a-vis these different dimensions that we want to understand?
And the conversation that we were having was trying to help them essentially develop ways to
maybe infer race via resumes that
were being submitted. And they weren't really comfortable with that. And what they recognized
is that at some point, they were going to have to sort of change their model and if not required,
at least ask or invite people to declare a racial or ethnic identity for purposes of being able to
understand and to run tests underneath their engine to sort of see, right, you know, how these things distribute along racial and
ethnic lines.
Again, who's getting interviewed, who's getting hired, who's getting denied.
And what he said to me, Brene, which is really powerful, he wasn't a part of the founding
team, but he's the current CEO.
And what he said to me is that those issues that we were discussing, some of which I've just shared with you, never came up when the original founders built the platform.
They never even thought to ask the question, how does racial bias, how does gender bias
impact or influence the hiring process? There's decades of literature, research literature that
have documented the ways in which hiring social networks, the ways in which customs and norms within organizations, affinity bias, likeness bias, similarity bias.
There's tons of literature that has established these as facts of life in the ways in which people get hired and how those processes develop. And so my point is, I don't think that they necessarily built a system
that they wanted to discriminate or be biased against women or against people of color,
but because it never even occurred to them to ask the question. And kind of to your point,
vis-a-vis lived experience, vis-a-vis their own sort of position in the world, it never occurred
to them that these are issues, right, that are likely going to diminish the quality and performance
of this company, this model that we're building. Therefore, how do we get ahead of this before
it becomes a problem later on, which is what's happened now. And so, the point is not even
understanding or having the awareness to ask these questions can pose significant problems,
create significant deficits or significant challenges to developing systems that are
high-performing, systems that mitigate these kinds of historical and legacy biases.
And that's what I was getting at when I said I don't necessarily think that it's always,
if ever, intentional.
But it doesn't mean that it's not just as problematic, just as impactful, just
as profound as if it were someone sitting in a secret room saying, ha-ha, how can we
discriminate against this or that group?
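For what the fix can look like once applicants are invited to self-declare demographics, here is a minimal, hypothetical audit sketch. The data is invented, and the four-fifths comparison is one common screening heuristic, not necessarily what any particular company uses.

```python
# Hypothetical audit: the cross-tabulation Craig describes, which is only possible
# once applicants can (optionally) self-declare a group. All data here is invented.
import pandas as pd

applicants = pd.DataFrame({
    "self_declared_group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "interviewed":         [1,   1,   0,   1,   0,   1,   0,   0],
})

rates = applicants.groupby("self_declared_group")["interviewed"].mean()
print(rates)  # interview rate per group

# Four-fifths rule of thumb: flag for review if any group's rate falls below
# 80% of the highest group's rate.
ratio = rates.min() / rates.max()
print(f"selection-rate ratio: {ratio:.2f}",
      "-> flag for review" if ratio < 0.8 else "-> within heuristic")
```

Without the self-declared column there is nothing to group by, which is exactly the blind spot the CEO described.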
It's so interesting because that example that you shared about the hiring, first of all,
this is how, like, newly read in I am on this stuff.
Like I didn't realize that all those big hiring online companies were AI machine learning
driven.
And of course it makes sense, but this is such a working example in my
mind of what you talked about in the beginning about fairness. So without the right people at
the table, the assumption is, hey, no, I don't want to ask about race or gender because we're
going to be a super fair company. And then not asking about race and gender means that you have
no evaluation data to see if you're doing the right thing.
Exactly, yeah.
And I think what the company has realized, right, is that clearly these kinds of historical biases are impacting who gets hired, who gets interviewed.
And, you know, suggesting to me, right, that for some populations, their likelihood of, that is to say, being recognized, being identified as a viable candidate is significantly diminished, right, because of the ways in which algorithmic procedures are sort of filtering out certain resumes, privileging others. And they're
now beginning to understand, hey, wait a minute, right? And I'm sure they're getting internal
feedback maybe from African-Americans who upload their resumes or women who upload their resumes.
And so they're getting both formal and informal feedback or signals suggesting that something's happening here. And if we want to
understand what's happening here, we don't even have a mechanism or a way of studying this in a
formal way because we never even thought to ask.
I cannot think, to be honest with you,
about a more important conversation we need to be having right now.
It's just the idea in my mind of scaling what's wrong and hurtful is so scary.
And I think I see it happening in the organizations I'm in because one of the things that you talked about in the TEDxMIT talk that
was so profound is about these two Black men who were falsely arrested, wrong people, AI was used to identify them. And the answer from law enforcement was
the computer got it wrong. And I'm actually seeing that right now in organizations that I'm in,
like, wow, the computer really screwed this up, or wow, the algorithm really didn't do what we
wanted it to do, as if it's an entity that walks into work with a briefcase
and a satchel. And so I want to talk about your belief that instead of calling it artificial
intelligence, we should call it what? Yeah, I suspect that in five, certainly 10 years,
we may no longer even use the term artificial intelligence, for all of the baggage
that it's likely to continue developing. And instead, and it may not be this precisely, but something more along the lines of augmented
intelligence, right? In other words, where we need to get to really quickly in society
is understanding how do we design and deploy these systems in ways that augment, enhance,
expand human intelligence and capacity rather than
substitute, replace, or render obsolete human intelligence and capacity.
We are squarely in the battle right now, right, in terms of what AI will mean for society.
And I think more importantly, who will help drive and contribute to whatever that conversation,
however that conversation unfolds and materializes.
And what we're arguing about is that we need to bring more voices, more diverse perspectives,
more diverse expertise into this conversation to help make sure that we move along a path
that's going to lead us towards augmentation rather than automation or artificial intelligence.
What was striking about that example that I gave is we've come to
understand now that facial recognition systems, as they were originally being developed, were just
simply flawed and faulty. That the rate of error in terms of identifying people with darker skin
color, for instance, women compared to men, just consistently higher rates of error when it came
to that. And a lot of that had to do with the ways in which these models are trained, the sort of training data sets used to develop these
models in terms of who they perform well for versus who they don't perform very well for.
And when one of the African-American men who was falsely accused of a crime via facial recognition
system, when they brought him in and they showed him the photo, he looked at the photo and he
looked at him and he said, this obviously isn't me.
So why am I even here?
And the response was, oh, well, I guess the computer got it wrong.
And my response is that the computer didn't get it wrong necessarily.
The humans who enforced that decision or output that the computer made got it wrong. And so what we're really sort of trying to push back against
is what's oftentimes referred to in the research literature as automated bias. We could have just
a conversation just about the many different ways in which bias runs through and is distributed
across the entire development cycle of artificial intelligence, even from conceptualization, right?
How a problem is defined or conceptualized and how bias gets baked into
that. We hear a lot about the data and how the data is, the sort of training data sets lead to
bias, but there are other elements and manifestations of bias across the development cycle,
including, right, how, where, and under what context these systems get deployed. And in this
case, automation bias is a reference to humans increasingly surrendering more and more decision
making authority to machines.
So even when human expertise, even when human judgment, human experience, just looking at
this photo when you showed up at this man's house and saying, hey, there's a mismatch
here.
This isn't the person that's in this photo.
And instead, they allowed the machine, they allowed the system to determine their action. And that's what we really have to guard against, right, deferring complete authority to these systems in ways, right, that undermine our own capacity, as we build more and more faith in these systems even when they may run counter to what human expertise and experience might suggest.
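One pattern teams use to guard against automation bias is to treat a model's output as a lead that requires human confirmation rather than as a decision. The sketch below is a hypothetical illustration of that pattern only; the threshold and fields are invented and it is not drawn from any real system discussed here.

```python
# Hypothetical guardrail against automation bias: a match score is a lead to be
# verified by a trained person, never an automatic decision. Values are invented.
from dataclasses import dataclass

@dataclass
class MatchResult:
    score: float            # similarity score from some recognition system (0 to 1)
    human_confirmed: bool   # did a trained reviewer confirm the match?

def next_step(result: MatchResult) -> str:
    if result.score < 0.90:
        return "discard: score too low to act on"
    if not result.human_confirmed:
        return "hold: require human review before any action"
    return "proceed: machine lead confirmed by human judgment"

print(next_step(MatchResult(score=0.97, human_confirmed=False)))  # hold for review
print(next_step(MatchResult(score=0.97, human_confirmed=True)))   # proceed
```

The point of the structure is the middle branch: a high score alone never triggers an action.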
Dang.
This is so, boom, mind-blowing for me.
I saw this interview with Jeff Bezos a couple of weeks ago, and he said, what people don't understand is that if the data say one thing and my gut says another, I'm going with my gut.
We're giving over too much.
Do you think we're relinquishing not just power, but also responsibility?
Like, ultimately, I don't give a shit what the computer says about this guy.
I hold that officer responsible. Absolutely. And I think you've used the word a few times
in the conversation today, scary. And it's interesting, Brene, when we talk to different
stakeholders across this space, one of the words that often comes up is scary.
Really?
Yeah, and I've been thinking more and more about why is that?
So when we're talking to healthcare professionals and talking to them about their concerns or
their aspirations for AI, if it's clinicians, if it's social workers, if it's community
health workers, if it's nurses, or if it's just others kind of outside of the healthcare
domain, for example, students even, right?
This word scary comes up over and over and over again.
And as we're seeing this in the data that we're collecting, I've been trying to sort of understand what's going on there, right?
What are people really saying, right?
The thing that we oftentimes hear when people talk about ChatGPT, when it was sort of unlocked and unleashed into the world,
you know, people's immediate response was scary.
People in education, the immediate response was scary.
Therefore, we shut it down and deny students access to it for fear that they'll cheat,
for fear that it'll lead to academic dishonesty.
It'll diminish their ability or even interest in writing.
And so I think the word scary is in some ways a reference to uncertainty, a reference to
confusion, a reference to not feeling comfortable in
terms of understanding just what's at stake as we see these systems being pushed onto
society in a much more accelerated fashion, in higher and higher stakes, without the proper
guardrails, without the proper systems, regulations, laws, and policies that enable us to sort
of manage these systems in a way that's
appropriate, in a way that's proper. I really do believe that in the next five to 10 years or so,
we'll look back, because I like to think we're kind of in the stone ages of all of this, right?
And think about generative AI. We're literally at the early days of this, right? And at some point,
we'll look back and say, what in the hell were we thinking when we started unleashing these systems in high-stakes environments like education, healthcare, and criminal justice without any proper understanding, without any guardrails, without any policies, without any ethical principles to guide us? How did we even come to make those kinds of
decisions, recognizing all of the ensuing problems that are associated with moving at this pace,
this velocity, this scale, in a way that we simply aren't currently prepared,
or as you like to say, wired to even accommodate? I don't like that. I agree. I agree. You know,
one thing I'll share with you from my research
that I think, well, for me, you just nailed the scary for me. I'm fearful because the distribution
of power is so off right now that I don't want this in the wrong hands without complete understanding.
Maybe in 10 years, I think we will look back and say exactly what you
said. I just worry about the casualties over the next 10 years as we try to figure it out,
because tech is moving faster than policy and protections are moving.
And Craig, I think one of the things from my work that I've seen is the biggest shame trigger,
the biggest professional shame trigger that exists for people is the fear of being irrelevant.
And you are one of a handful of people, and I've talked to, I don't know, a shit ton is the word I'd use. I've talked to a lot of people.
You are one of the few people that I've talked to that talk about domain expertise. We need the
people with the lived experiences. We need the people on the ground. We need the letters. We
need the unstructured data.
Unfortunately, many of the people that I've talked to who I would not have on the podcast,
because I would not personally want to scale their ideas, would say, we don't have to mess around with all that messy stuff anymore.
That, to me, is the scary part.
Yeah, because in some ways, and I understand it, right, there's a need to make human complexity, you know, less complex and more within our grasp.
Yes.
And that's why we keep coming back to expertise, expanding who's at the table, who's in the room, you know, when conversations are
happening, when decisions are being made about what's being designed and what's being deployed
and what the sort of downstream impacts of that might be. And hopefully we are building a culture,
we're building the conversation, we're building and documenting research and literature via
experimentation and a variety of other techniques to sort of make the case for why this is absolutely necessary as we push further and further ahead into this world.
Because let's face it, right? This is the world that's coming, right? A world that's
increasingly shaped by AI-based solutions, machine learning-based
solutions, algorithmic design kinds of solutions.
So the question isn't if this will happen; the question is how it will happen, for whom it will happen, and under what circumstances. And that's where we've really got to do the work
in terms of just bringing in different kinds of voices, different kinds of expertise
to sort of bring to the table, bring to the conversation, issues, ideas, concepts, lived experiences
that otherwise get utterly erased from the process.
This is why I'm really grateful for your work.
Now, we have answered five out of my 21 questions,
so you just have to promise to come back again
sometime on the podcast.
Would you be open to that?
Yeah, absolutely.
Look forward to it. Enjoyed the conversation.
And you're right, there's a lot to think about here.
Yeah, because I think you have some really interesting takes on the upside of social media.
I want to talk to you about that next.
But for time's sake, we're going to stop here.
But this does not give you a pass from the rapid fire.
Are you ready?
S. Craig Watkins.
Let's see.
You're not going to have time to plug it into anything.
Okay.
Fill in the blank for me.
Vulnerability, not systemic vulnerability, but relational personal vulnerability is what in your mind?
Not communicating one's needs.
Okay.
You, Craig, are called to be really brave.
You've got to do something that's really hard and really brave, and you can feel the fear in your body.
What's the very first thing you do?
Me, probably, in that moment, pray.
Yes. And ask to summon up strength that I probably can't imagine being able to summon up, but somehow being able to tap into some well of strength and capacity that may be beyond my ability to even conceive.
I love that.
Some strength that surpasses all human understanding.
That's the prayer part, right?
Absolutely.
Oh, love it.
Okay.
Last TV show you binged and loved.
Oh, so this is a challenge for me, because there's so much good TV out there and I just struggle to find time to watch it.
But the last thing that I was able to watch recently was Amazon's Mr. and Mrs. Smith.
Oh, yeah.
Yeah, Mr. and Mrs. Smith. And I watched it in part because Donald Glover is just an intriguing creative to me. I first discovered him through his TV show Atlanta, which
I thought the first couple of seasons were just really brilliant in terms of how it began to
probe the lived experience of young African-Americans in a way that I'd never really seen in entertainment media before.
And so he taps into a kind of lived experience; the New York Times, for
instance, called Atlanta the blackest show ever made. But then you look at a
show like Mr. and Mrs. Smith, and he shows the ability, right, to expand in ways that are just,
I think, extraordinary, and yet still bring that kind of Black ethos,
Black experience in ways that I think are still powerful,
unique, and creative.
And so I was just curious to see what he would do
in a show that was a completely different world
from what I'd seen him in with Atlanta.
Yeah, he may be a polymath.
I mean, his musical abilities, his writing abilities,
like, he's incredible.
Yeah.
Favorite film of all time?
Oh man, that's a really interesting question.
Favorite film of all time.
I should have one.
I don't know if I really have one,
but if I had to, I'm trying to think off the top of my mind.
Hmm.
It can be just one that you really love,
that if you come across it, you're going to watch it.
I would say, what was the movie?
I should know this, right?
The movie with Matt Damon, where he was at MIT?
Good Will Hunting.
Good Will Hunting, yeah.
I've written about movies in the past.
I've written about Black cinema in the past.
And so, you know, I certainly studied movies in a prior iteration or version of myself. But yeah, for whatever reason, Good Will Hunting just struck a chord with me because it's about the underdog and how there's so much potential
in the underdog that's rarely if ever tapped. And what would it mean if we could unleash that
power into the world? And yet there's probably hidden talent in this country and elsewhere that never, ever gets discovered or realized.
Yeah. And it's just such a, it's such a story about class and trauma and violence. And
when he solves that proof on the board in the hall, like it's just like, yeah, it's,
oh, you picked it. That's one of my top five. Okay. A concert that you'll never forget. Are you a music person?
I am.
I went to, so I'm a big hip hop guy.
And years ago, I went to a concert with the group Outkast from Atlanta.
And they put on a remarkable performance.
The way in which they presented hip hop in that live, in-the-moment experience
was really incredible for me.
And yeah, I remember that from time to time. I had such a great experience there and feel like it was, for me, a really incredible opportunity.
I love it.
Favorite meal?
Ooh, favorite meal.
So my wife, who you may not know, you actually know, but a year ago, she introduced me to
homemade Brazilian fish stew.
And when I ate it, I felt like I was in heaven.
I've tried to replicate it, but I haven't been able to replicate it.
But the mixture of the tomato base with coconut milk, with fresh fish, red peppers, onions, paprika, cumin. I mean,
it was just a delight and I really enjoyed that meal. And one day I'll find a way to replicate it
in my own attempts. I think you're going to end up using AI to try to do that,
but I don't think it's ever, I don't think it's going to have what your wife puts in it.
Okay. Two last questions. What's a snapshot of an ordinary moment in your life that gives you
real joy? Something that I started doing during the pandemic and have continued doing and find joy in is just going on walks by myself, listening to either great music or really interesting podcasts, and just getting away from the grind of meetings, the grind of work, the grind of writing, etc.
And just enjoying that opportunity to walk, to decompress, to enjoy it, literally to enjoy the environment around me. I happen to live in a neighborhood that has great sunsets, and I take pictures of those
sunsets, and it's just a nice reminder of what's important in the world and what's
important in life and how important it is just to take care of ourselves.
Beautiful.
Last one.
Tell me one thing that you're grateful for right now.
I would say my mother. My mother passed away in 2013.
She passed away on the same day as the Boston Marathon bombing. And my mother is the reason
why I'm where I am today, why I do what I do today. I mentioned earlier, right, I was born and raised in Dallas.
I grew up in South Dallas, all-Black neighborhood,
went to all-Black public schools,
but always understood for some reason
that the world had something to offer me
and I had something to offer the world.
And a lot of that, I think, is a tribute to my mother
and the values that she instilled in
me, the confidence that she instilled in me. And so there rarely is a day that goes by that at some
point I don't say thank you to myself and to heaven for her. We are very grateful that your
mom told you to think big because you are thinking very big and I think we'll be better off for it.
So, Craig, thank you so much for being on the podcast. It was just such an important conversation,
and I truly am really grateful for your remarkable work.
And I know we'll talk again,
because I have 16 more questions that we didn't get to.
All right, I look forward to that follow-up conversation someday.
Thank you so much.
Thank you so much.
Okay, I don't want to close this thing with the word scary again, but holy shit.
Let me tell you what helps me sleep better at night, knowing that Dr. Watkins is on the case, along with some other really talented computational people that are
deeply, deeply tethered to ethics and social justice. It's the whole series. What's the
potential? The potential is so great. And what are the costs? High, if we don't get ahead of
this with intentional policies and having the right people, like he mentioned, at the table.
You can learn more about this episode along with all the show notes, all of his information about
where you can find his work, on brenebrown.com. We'll have comments open on the page. Love to
know what you think. We're going to link to all of Craig's work. I think I mentioned the MIT
TED Talk. You got to watch that TED Talk.
This would be the most amazing lunch and learn at work just to take 20 minutes and watch a TED Talk and have a 40-minute discussion.
That's one hour about what you learned, what you thought was important.
It's essential.
We always have transcripts up within three to five days of the episode going live.
The more I learn about social media,
the more shit showy I think it is. So we're not doing comments on Instagram and Facebook right now. We're opening comments on the website and we're trying to build these really neat
communication and community building tools in it. And it's still in process. So we'd love for you to
leave comments and ask questions there. We're also doing a lot of community building with
newsletters. So we've got like an occasional newsletter, but we're also doing shorter weekly newsletters that kind of have
key takeaways, weekly challenges to think about work and life and love. So you can sign up for
newsletters on the episode page as well. I'm really glad you're here. These can be heady
conversations, but I think they're also heart conversations.
We got to figure out if we're building the world we want to live in.
And for me, that's an awkward, brave, and kind world. So I'll see you next week.
Unlocking Us is produced by Brene Brown Education and Research Group.
The music is by Keri Rodriguez and Gina Chavez.
Get new episodes as soon as they're published by following Unlocking Us on your favorite podcast app.
We are part of the Vox Media Podcast Network.
Discover more award-winning shows at podcast.voxmedia.com.