Microsoft Research Podcast - 135 - Just Tech: Centering Community-Driven Innovation at the Margins Episode 3 with Dr. Sasha Costanza-Chock
Episode Date: April 13, 2022
In “Just Tech: Centering Community-Driven Innovation at the Margins,” Senior Principal Researcher Mary L. Gray explores how technology and community intertwine and the role technology can play in supporting community-driven innovation and community-based organizations. Dr. Gray and her team are working to bring computer science, engineering, social science, and communities together to boost societal resilience in ongoing work with Project Resolve. She'll talk with organizers, academics, technology leaders, and activists to understand how to develop tools and frameworks of support alongside members of these communities. In this episode of the series, Dr. Gray and Dr. Sasha Costanza-Chock, scholar, designer, and activist, explore design justice, a framework for analyzing design's power to perpetuate—or take down—structural inequality and a community of practice dedicated to creating a more equitable and sustainable world through inclusive, thoughtful, and respectful design processes. They also discuss how critical thinkers and makers from social movements have influenced technology design and science and technology studies (STS), how challenging the assumptions that drive who tech is built for will create better experiences for most of the planet, and how a deck of tarot-inspired cards is encouraging radically wonderful sociotechnical futures.
https://www.microsoft.com/research
Transcript
Welcome to the Microsoft Research Podcast Series, Just Tech,
centering community-driven innovation at the margins.
I'm Mary Gray, a Senior Principal Researcher at our New England Lab in Cambridge, Massachusetts.
I use my training as an anthropologist and communication media scholar
to study people's everyday uses of technology.
In March 2020, I took all that I'd learned about app-driven services
that deliver everything from groceries to telehealth
to study how a coalition of community-based organizations in North Carolina
might develop better tech to deliver the basic needs and health support
to those hit hardest by the pandemic.
Our research together, called Project Resolve,
aims to create a new approach to community-driven innovation, one that brings computer science, engineering,
the social sciences, and community expertise together to accelerate the roles that communities
and technologies could play in boosting societal resilience. For this podcast, I'll be talking with researchers, activists, and
nonprofit leaders about the promises and challenges of what it means to build technology with
rather than for society. My guest for this episode is Dr. Sasha Costanza-Chock, a researcher,
activist, and designer who works to support community-led processes that build shared power,
dismantle the matrix of domination, and advance ecological survival.
They are the Director of Research and Design at Algorithmic Justice League,
a faculty associate with the Berkman Klein Center for Internet and Society at Harvard University,
and a member of the steering committee of the Design Justice Network.
Sasha's most recent book, Design Justice: Community-Led Practices to Build
the Worlds We Need, was a 2021 PROSE Award finalist in Engineering and Technology. It has been
cited widely across disciplines. Welcome, Sasha. Thanks, Mary. I'm excited to be here.
Can you tell us a little bit about how you define design justice? Design justice is a term, you know, I didn't
create this term. It comes out of a community of practice called the Design Justice Network.
But I have kind of chronicled the emergence of this community of practice and some of the ways of
thinking about design and power and technology that have sort of come out of that
community. And I've also done some work sort of tracing the history of different ways that people
have thought about design and social justice, really. So in the book, I did offer a tentative
definition, kind of a two-part definition. So on the one hand, design justice is a framework for
analysis about how design distributes benefits and burdens between various groups of people.
And in particular, design justice is a way to focus explicitly on the ways that design
can reproduce or challenge the matrix of domination, which is Patricia Hill Collins' term for white supremacy,
heteropatriarchy, capitalism, ableism, settler colonialism, and other forms of structural
inequality. And also, Design Justice is a growing community of practice of people who are focused
on ensuring more equitable distribution of designs, benefits, and burdens, more meaningful
participation in design decisions and processes, and also recognition of already existing community-based
indigenous and diasporic design traditions and knowledge and practices.
Yeah. What are those disciplines we're missing when we think about building and building for?
You know, I came up learning the basics of how to code, building websites, working with the Indymedia network.
Indymedia was a kind of global network
of hackers and activists and social movement networks
who leveraged the power of what was then
the nascent internet
to try and create a globalized news network
for social movements. I became a project manager for various open source projects for a while.
I had a lot of side gigs along my educational pathway. So that was sort of more
practice. That's where I learned, you know, how do you run a software project,
how do you motivate and organize people. I came later to reading about and learning more about
that long history of design theory and history, and then sort of technology design work.
I was always looking at it along the way, but started diving deeper more recently.
So my first job after my doctorate was, you know, I received a position at MIT.
And so I came to MIT to the comparative media studies department,
set up my collaborative design studio.
And I would say, yeah, at MIT, I became more exposed to the HCI literature, spent more time reading STS work, and in particular was drawn to feminist science and technology studies.
You know, MIT is a very alienating place in a lot of ways.
And there's a small but excellent community of scholars there who take, you know, various types
of critical approaches to thinking about technology design and development, and sort of the histories
of technology and socio-technical systems. And so kind of through that period from 2011 up until now,
I spent more time engaging with that work. And yeah, I got really inspired by feminist STS. I also, parallel to my academic
formation and training, was always reading theory and various types of writing from within social
movement circles, stuff that sometimes is published in academic presses or in peer-reviewed journals, and sometimes totally isn't, but to me is often
equally or even more valuable if you're interested in theorizing social movement activity than the
stuff that comes primarily from the academy or from social movement studies as a subfield of
sociology. So I was always reading all kinds of stuff that I thought
was really exciting that came out of movements. So reading everything that AK Press publishes,
reading stuff from Autonomia and sort of the Italian sort of autonomous Marxist tradition.
But also in terms of pedagogy, I'm a big fan of Freire, and I didn't encounter Freire through the academy. It
was through, you know, community organizing work. So community organizers that I was connected to
were all reading Freire and reading other sort of critical and radical thinkers and scholars.
So wait, hold the phone. You didn't actually, I mean, there wasn't a class where Pedagogy of the Oppressed
was taught in your training? I'm just, now I'm like, really?
That's, I don't think so, yeah.
Wow.
Yeah, because I didn't have formal training in education. It was certainly referenced, but the
place where I did, you know, a study group on it was in movement spaces, not in the academy.
Same with bell hooks. I mean, bell hooks, there would be like the occasional essay,
like, I did undergraduate cultural studies stuff.
I think Marjorie Garber, you know, had like an essay or two
of bell hooks on her syllabus. So I remember
encountering bell hooks early on, but reading more of her work came later, through
movement spaces. And so then what I didn't see was a lot of people, although increasingly now I think
this is happening, you know, putting that work into dialogue with design studies and with science and
technology studies. And so that's what I get really excited by, the evolution of that. And maybe to that point, I feel like you have, dare I say, mainstreamed Patricia Hill Collins
in computer science and engineering circles that I travel.
Like, to hear colleagues say the matrix of domination, they're reading it through you,
which is wonderful. They're reading what that means.
And design justice really puts front and center this critical approach. Can you tell us about how
you came to that framework and put it at the center of your work for design justice?
Patricia Hill Collins developed the term in the 90s. The matrix of domination is her phrase.
She elaborates on it in her text, Black Feminist Thought.
And of course, she's the past president of the American Sociological Association,
towering figure in some fields, but maybe not as much in computer science, in HCI,
and in other related fields, but I think unjustly so.
And so part of what I'm really trying to do at the core of the Design Justice book
was put insights from her and other Black feminist thinkers and other critical scholars
in dialogue with some core, for me in particular, HCI concepts,
although I think it does go broader than that. The matrix of domination was really useful to me
when I was learning to think about power and resistance. How does power and privilege operate?
This is a concept that says you can't only think about one axis of inequality at a time.
You can't just talk about race or just talk about gender.
You can't just talk about class because they operate together.
Of course, another key term that connects with the matrix of domination is intersectionality, from Kimberlé Crenshaw. She talks about it in the context of legal theory,
where she's looking at how the legal system
is not set up to actually protect people
who bear the brunt of oppression.
And she talks about these classic cases
where black women can't claim discrimination under the law
at a company which defends itself by saying,
well, we've hired black people. And
what they mean is they've hired some black men and they say, and we've also hired women,
but they mean white women. And so it's not legally actionable. The black women have no
standing or claim to discrimination because black women aren't protected under anti-discrimination
law in the United States of America. And so that is sort of like a grounding that leads to this, you know, the conversation. Matrix of domination
is an allied concept. And to me, it's just incredibly useful because I thought that it
could translate well in some ways into technical fields because there's a geometry and there's a
mental picture. There's an image that's relatively easy to generate for engineers, I think, of saying, okay, well, your x-axis is class, your y-axis is gender, your z-axis is race; this is a field, and somewhere within that, you're located.
And also everyone is located somewhere in there.
And where you're located has an influence on how difficult the
climb is. And so when we're designing technologies, and whether it's interface design, or it's an
automated decision system, you have to think about if this matrix is set up to unequally distribute
through its topography, burdens and benefits to different
types of people, depending on how they are located in this matrix at this intersection.
Is that correct? Do you want to keep doing that or do you want to change it up so that it's more
equitable? And I think that that's been a very useful and powerful concept. And I think for me,
part of it maybe did come through pedagogy.
I was teaching MIT undergraduates. Most of them are majoring in computer science these days.
And so I had to find ways to get them to think about power using conceptual language that they could connect with. And I found that this resonated. Yeah. And since the book has come out and it's been received by many different scholarly communities and activist communities,
has your own definition of design justice changed at all or even the ways you think about that matrix?
That's a great question. I think that one of the things that happened for me in
the process of writing the book is I went a lot deeper into reading and listening and thinking
more about disability and how crucial disability and ableism are, how important they are as sort of axes of power and resistance, and also as sources of
knowledge. So like disability justice and disabled communities of various kinds being key places for
innovation, both of devices and tools and also of processes of care. And just there's so much
phenomenal sort of work that's coming, you know,
through the disability justice lens that I really was influenced by in the writing of the book.
So another term that seems central in the book is co-design. And I think for
many folks listening, they might already have an idea of what that is, but can you say a bit more about what you mean by co-design?
I originally used that term to frame a studio course that I wanted to set up that felt really friendly and inclusive and was a broad enough
umbrella to enable the types of partnerships with community-based organizations and social
movement groups that I wanted to provide scaffolding for in that class. It's not that I
think co-design is bad. There's a whole rich history of writing and thinking and practice
in co-design. I think I just worry that like so many things,
I don't know if it's that the term is loose enough
that it allows for certain types of design practices
that I don't really believe in or support
or that I'm critical of,
or if it's just that it started meaning more of one thing
and then over time it became adopted,
as many things do become adopted,
by the broader logics of multinational capitalist design firms
and their clients.
But I don't necessarily use the term that much in my own practice anymore.
I want to understand what you felt was useful
about that term when you first started applying it to your own work and why you've moved away from
it. What are good examples of, for you, a practice of co-design that stays committed to design
justice? And what are some examples of what worries you about the ambiguity of what's expected of somebody doing co-design?
So, I mean, there are lots of terms in like a related conceptual space, right?
So there's co-design, participatory design, human-centered design, design justice.
I think if we really get into it, each has its own history and sort of there are conferences
associated with each, there are institutions connected to each, and there are internal
debates within those communities about what counts and what doesn't. I think for me,
co-design remains broad enough to include both what I would consider to be sort of design justice
practice, where a community is actually leading
the process and people with different types of design and engineering skills might be supporting
or responding to that community leadership. But it's also broad enough to include what I call in
the book, you know, more extractive design processes, where what happens is, you know, typically,
a design shop or consultant working for a multinational brand parachutes into a place, a community, a group of people, runs some design workshops, maybe does some observation,
maybe does some focus groups, generates a whole bunch of ideas about the types of
products or product changes that people would like to see, and then gathers that information
and extracts it from that community, brings it back to headquarters. And then maybe there are
some product changes or some new features or a rollout of something new that gets marketed back
to people. And so in that modality,
some people might call an extractive process where you're just doing one or a few workshops
with people co-design
because you have community collaborators,
you have community input of some kind.
You're not only sitting in the lab making something,
but the community participation is what I would call thin.
It's potentially extractive.
The benefit may be minimal to the people who have been involved in that process, and most
of the benefits accrue back either to the design shop that's getting paid really well
to do this, or ultimately back to headquarters, to the brand that decided to sort of initiate
the process.
And I'm interested in critiquing
extractive processes, but I'm most interested in trying to learn from people who are trying to do
something different. People who are already in practice saying, I don't want to just be
doing knowledge extraction. I want to think about how my practice can contribute to
a more just and equitable and sustainable world.
And in some ways, people are figuring it out as we go along, right? But I'm trying to be attentive
to people trying to create other types of processes that mirror in the process the kinds
of worlds that we want to create. So it seems like one of the challenges that you bring up in the
book is precisely design at some point is thinking about particular people, in particular,
often referred to as user journeys. And I wanted to step back and ask you, you know, you note
in the book that there's a
default in design that tends to think about the unmarked user.
And I'm quoting you here.
That's a cis male, white, heterosexual, able-bodied, literate, college educated, not a young child,
not elderly.
Definitely they have broadband access.
They've got a smartphone.
Maybe they have a personal jet. I don't know. That part was
not a quote of you. But you're really clear that there's this default, this presumed user,
ubiquitous user. What are the limits, for you, of designing for an unmarked user? But then, how do you contend with the fact that thinking so specifically
about people can also be, to your earlier point about intersectionality, quite flattening?
Well, I think that the unmarked user is a really well-known and well-documented problem.
Unfortunately, it often applies: you don't have to be a member of all those categories of the unmarked user to design for the unmarked user when you're in sort of a professional design context. And that's for a lot of different reasons that we don't have to get into here. One is that it means that we're organizing so much time and energy and effort in all of our processes to kind of, like, design and build everything, from, you know, industrial design and new sort of objects to interface design to service design. And if we build everything for the already most privileged group of people in the
world, then the matrix of domination just kind of continues to perpetuate itself. Then we don't move
the world towards a more equitable place. And we create bad experiences, frankly, for the majority
of people on the planet. Because the majority of people on planet Earth don't belong to that sort of default unmarked user
that's hegemonic.
Most people on planet Earth aren't white.
They're actually not cis men.
At some point, most people on planet Earth
will be disabled or will have an impairment.
They may not identify as disabled, capital D.
Most people on planet Earth aren't college educated and so on and so forth.
So we're really excluding the majority of people
if we don't actively and regularly challenge the assumption
of who we should be building things for.
So what do you say to the argument that,
well, tech companies, those folks who are
building, they just need to hire more diverse engineers, diverse designers, they need a
different set of people at the table, and then they'll absolutely be able to anticipate what
a broader range of humanity needs, what more people on earth might need? I think this is a yes and answer.
So absolutely, tech companies need to hire more diverse engineers, designers, CEOs,
investors need to be more diverse, et cetera, et cetera, et cetera.
You know, the tech industry still has pretty terrible statistics.
And the further you go up the corporate hierarchy,
the worse it gets. So that absolutely needs to change. And unfortunately, right now, it's just,
you know, every few years, everyone puts out their diversity numbers. There's a slow crawl,
sometimes towards improvement, sometimes it backslides. But we're not seeing the shifts that
we need to see. So it's like hiring, retention,
promotion, everything. I'm a huge fan of all those things. They do need to happen. And a much more
diverse and inclusive tech industry will create more diverse and inclusive products. I wouldn't
say that's not true. I just don't think that employment diversity is enough to get us towards
an equitable, just, and ecologically sustainable planet. And the reason why is because the entire
tech industry right now is organized around the capitalist system. And unfortunately,
the capitalist system is a resource extractive system, which is acting as if we have infinite resources on a finite planet.
And so we're just continually producing more stuff and more things and building more server farms and creating more energy intensive products and software tools and machine learning models and so on and so on and so on.
So at some point, we're going to have to figure out a way to organize our economic system in a
way that's not going to destroy the planet and result in the end of Homo sapiens sapiens along
with most of the other species on the planet. And so unfortunately, employment diversity within
multicultural neoliberal capitalism will not address that problem.
I could not agree more. And I don't want this conversation to end. I really hope you'll come
back and join me for another conversation, Sasha. It's been unbelievable to be able to spend even a
little bit of time with you.
So thank you for sharing your thoughts with us today.
Well, thank you so much for having me.
I always enjoy talking with you, Mary.
And I hope that, yeah, we'll continue this either in a podcast or just over a cup of tea.
Looking forward to it.
And as always, thanks to our listeners for tuning in.
If you'd like to learn more, wait, wait, wait, wait, there's just so much to talk about.
Not long after our initial conversation, Sasha said she was willing to have more discussion.
Sasha, thanks for rejoining us.
Of course.
It's always a pleasure to talk with you, Mary.
In our first conversation, we had a chance to explore design justice as a framework and a practice. In your book of the same name, which has inspired many, I'd love to know how
your experience in design justice informs your current role with the Algorithmic Justice League.
So I am currently the Director of Research and Design at the Algorithmic Justice League.
The Algorithmic Justice League, or AJL for short, is an organization that was founded by Dr. Joy
Buolamwini, and our mission is to raise awareness about the impacts of AI, equip advocates with
empirical research, build the voice and choice of the most impacted communities, and galvanize researchers,
policymakers, and industry practitioners to mitigate AI harms and biases. And so we like to talk about how we're building a movement to shift the AI ecosystem towards more equitable
and accountable AI. And my role in AJL is to lead up our research efforts and also, at the moment, product design.
We're a small team. We're sort of in startup mode. We're hiring various director-level roles
and building out the teams that are responsible for different functions. And so it's a very
exciting time to be part of the organization. I'm very proud of the work that we're doing. So you have both product design and research happening under the same roof in what sounds
like a superhero setting. That's what we should take away and that you're hiring. I think listeners
need to hear that. How do you keep research and product design happening in a setting that
usually you have to pick one or the other
in a nonprofit? How are you making those come together?
Well, to be honest, most nonprofits don't really have a product design arm. I mean,
there are some that do, but it's not necessarily a standard practice. I think what we are trying
to do, though, as an organization, we're very uniquely positioned because we play a storytelling role.
And so we're influencing the public conversation about bias and harms in algorithmic decision systems.
And probably the most visible place that that has happened is in the film Coded Bias.
It premiered at Sundance, then it aired on PBS, and it's now available on Netflix.
And that film follows Dr. Buolamwini's journey from a grad student at the MIT Media Lab,
who has an experience of facial recognition technology, basically failing on her dark skin.
And it follows her journey as she learns more about how the technology works,
how it was trained, why it's failing, and ultimately is then sort of, you know, testifying
in US Congress about the way that these tools are systematically biased against women and
people with darker skin tones, skin types, and also against trans and gender nonconforming people, and
that these tools should not be deployed in production environments, especially where
it's going to cause significant impacts to people's lives.
Over the past couple of years, we've seen a lot of real-world examples of the harms
that facial recognition technologies or FRTs can create.
These types of bias and harm are happening constantly,
not only in facial recognition technologies, but in automated decision systems of many different
kinds. And there are so many scholars and advocacy organizations and community groups that are now kind of emerging to make that more visible and organized to try and block the
deployment of systems when they're really harmful, or at the very least, try and ensure that there's
more community oversight of these tools. And also to set some standards in place, best practices,
external auditing and impact assessment, so that especially as public agencies start to purchase these systems and roll them out, you know, we have oversight and accountability.
So April 15th is around the corner, tax day.
And there was a recent bit of news around what seems like a harmless use of technology and use of identification for taxes that you very much, along with other activists and organizations, brought public attention to the concerns over sharing IDs as a part of our tax process.
Can you just tell the audience a little bit about what happened and what did you stop?
Absolutely.
So ID.me is a private company that sells identity verification services.
And they have a number of different ways that they do identity verification including facial
recognition technology where they compare basically a live video or selfie
to a picture ID that's previously been uploaded and stored in a system. They managed to secure contracts with
many government agencies, including a number of federal agencies and about 30 state agencies as
well. And a few weeks ago, it came out that the IRS had given a contract to ID.me and that people were going to have to scan our faces
to access our tax records. Now the problem with this, or there are a lot of problems with this,
but one of the problems is that we know that facial recognition technology is systematically
biased against some groups of people who are protected by the Civil
Rights Act. So against Black people and people with darker skin tones in general, against women,
and the systems perform least well on darker skin type women. And so what this means is that if you're, say, a Black woman,
or if you're a trans person, it would be more likely that the verification process would fail
for you in a way that is very systematic and has, you know, we have pretty good documentation about the failure rates both in false positives and false negatives,
the best science shows that these tools are systematically biased against some people.
And so for it to be deployed in contracts by a public agency for something that's going to affect everybody in the United States of
America and is going to affect Black people and Black women specifically most is really,
really problematic and opens the ground to civil rights lawsuits, to Federal Trade Commission
action, among a number of other possible problems. So when we at the Algorithmic Justice
League learned that ID.me had this partnership with the IRS and that this was all going to roll out
in advance of this year's tax season, we thought this is really a problem and maybe this is
something that we could move the needle on. And so we got together with a whole bunch of other organizations like Fight for the Future and the Electronic Privacy Information Center.
And basically all of these organizations started working with all cylinders firing,
including public campaigns, op-eds, social media, and back channeling to various people
who work inside different agencies in the federal government,
like the White House Office of Science and Technology Policy, the Federal Trade Commission,
other contacts that we have in different agencies, kind of saying, did you know that this
system, this multi-million dollar contract for verification that the IRS is about to unleash on all taxpayers is known
to have outcomes that disproportionately disadvantage Black people and women and transgender and
nonconforming people.
And in a nutshell, it worked to a degree.
So the IRS announced that they would not be using the facial recognition verification option that ID.me offers. And a number of other federal agencies announced that they would be looking more closely at the contracts and exploring whether they wanted to actually roll this out. What's happening now is that at the state level,
through public records requests and other actions, different organizations are now looking state by state and finding and turning up all these examples of how this same tool was used
to basically deny access to unemployment benefits for people, to deny access to services for
veterans. There are now, I think, around 700 documented examples that came from public
records requests of people saying that they tried to verify their access, especially to
unemployment benefits, using the ID.me service, and they could not verify. And when they were told
to take the backup option, which is to talk with a live agent, the company, you know, was rolling
out the system with contracts so quickly that they hadn't built up their human workforce. So when
people's automated verification was failing,
there were these extremely long wait times, like weeks, or in some cases, months for people to try
and get verified. Well, and I mean, this is, I feel like the past always comes back to haunt us,
right? Because we have so many cases where it's, in hindsight, seems really obvious that we're going to have a system that will fail
because of the training data that might have created the model.
We are seeing so many cases where training data sets that have been the tried and true standard
are now being taken off the shelf because we can tell that there are too many errors
and too few theories to understand the models, yet we keep using the same models the same way that we have used them in the past.
And I'm wondering what you make of this continued desire to keep reaching for the training data and pouring more data in
or seeing some way to offset the bias.
What's the value of looking for the bias versus setting up guardrails for where we apply a
decision-making system in the first place?
Sure. I mean, I think, let me start by saying that I do think it's useful and valuable
for people to do research to try and better understand the ways in which automated decision
systems are biased, the different points in the life cycle where bias creeps in. And I do think it's useful and valuable for people to look
at bias and try and reduce it. And also, that's not the be all and end all. And at the Algorithmic
Justice League, we are really trying to get people to shift the conversation from bias to harm, because bias is one but not the only way that algorithmic systems can be harmful
to people. So a good example of that would be, we could talk about recidivism risk prediction,
which there's been a lot of attention to, you know, ever since the ProPublica articles and the analysis
that came out about COMPAS, which is the scoring system that's used when people are being detained
pre-trial and a court is making a decision about whether the person should be allowed out on bail
or whether they should be detained until their trial. And these risk scoring tools,
it turns out that they're systematically biased against Black people and they tend to over
predict the rate at which Black people will recidivate or will re-offend during the
period that they're out and under-predict the rate at which white
people would do so. So there's one strand of researchers and advocates who would say, well,
we need to make this better. We need to fix that system and it should be less biased. And we want
a system that more perfectly does prediction and also more
equitably distributes both false positives and false negatives. You can't actually maximize both
of those things. You kind of have to make difficult decisions about do you want it to have more false
positives or more false negatives. You have to sort of make decisions about that. But then there's a
whole other strand of people like, you know, the Carceral Technology Resistance Network, who would just say,
hold on a minute. Why are we talking about reducing bias in a pre-trial detention risk
scoring tool? We should be talking about why are we locking people up at all and especially why are we
locking people up before they've been sentenced for anything. So rather than saying let's build
a better tool that can help us, you know, manage pre-trial detention, we should just be saying
we should absolutely minimize pre-trial detention to only the most extreme cases where there's
clear evidence and a clear present danger that the person will immediately be harming themselves or
someone else. And that should be something that a judge can decide without the need of a risk score.
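To make the trade-off described above concrete, here is a minimal, hypothetical Python sketch (not part of the conversation; the function name, group labels, and all numbers are invented for illustration). It uses the standard confusion-matrix identity relating base rate, precision, false-negative rate, and false-positive rate to show that when two groups have different base rates, holding precision and the false-negative rate equal forces their false-positive rates apart.

# Illustrative only: why equal false-positive and false-negative rates can't both
# be achieved by a calibrated "high risk" flag when two groups have different base rates.
# Confusion-matrix identity: FPR = p / (1 - p) * (1 - PPV) / PPV * (1 - FNR),
# where p is the group's base rate and PPV is the precision of the flag.

def implied_fpr(base_rate: float, ppv: float, fnr: float) -> float:
    """False-positive rate forced by holding precision (PPV) and FNR fixed."""
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

# Two hypothetical groups with different (invented) base rates.
groups = {"group_a": 0.50, "group_b": 0.30}

# Hold precision and the false-negative rate equal across both groups...
ppv, fnr = 0.60, 0.35

for name, p in groups.items():
    print(f"{name}: base rate {p:.2f} -> implied FPR {implied_fpr(p, ppv, fnr):.3f}")

# ...and the implied false-positive rates diverge (roughly 0.43 vs. 0.19), which is
# the difficult decision about distributing errors that Sasha describes.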
When you're describing the consequences of a false positive or a false
negative, I'm struck by how cold the calculation can sound. And then when I think about the
implications, you're saying we have to decide: do we let more people we might suspect could create harms
leave a courtroom, or do we put people in jail
when we could not possibly know how many more of them would not,
versus would, commit some kind of act
between now and when they're sentenced. And so I'm just really struck
by the weightiness of that. If I was trying to think about developing a technology that was
going to try and reduce that harm and deliberate which is more harmful, I'm just saying that out loud because I feel like those are the
moments where I see the two strands of work you're
pointing out that sometimes do seem in fundamental tension, right? That we would not want to build systems that perpetuate an approach that tries to take a better guess at
whether to detain someone before they've been convicted of anything.
Yeah. So I think like in certain cases, like in criminal, you know, in the criminal legal system, you know, we want to sort of step
out from the question that's posed to us where people are saying, well, what approach should we
use to make this tool less biased or even less harmful if they're using that frame? And we want
to step back and say, well, what are the other things that we need to invest in to ensure that we can minimize the number of people who are being locked up in cages? Because that's clearly a horrible thing
to do to people, and it's not making us safer or happier or better. And it's systematically
and disproportionately deployed against people of color. In other domains, it's very different.
And this is why I think, you know, it can be very tricky.
We don't want to collapse the conversation about AI
and algorithmic decision systems.
And there are some things that we can say,
you know, at a very high level about these tools.
But at the end of the day, a lot of the times,
I think that it comes down to the specific domain and context and tool that we're talking about.
So then we could say, well, let's look at another field like, you know, dermatology,
right? And you would say, well, there's a whole bunch of researchers working hard to try and develop better diagnostic tools for skin conditions, early detection of cancer.
And so it turns out that the existing data sets of skin conditions heavily undersample
the wide diversity of human skin types that are out there in the world and over-represent white skin.
And so these tools perform way better, you know, for people who are raced as
white under the current logic of the construction of racial identities. And so
there's a case where we could say, well, yeah, here, inclusion makes sense. Not everybody would say this, but a lot of us would say this is a case where it is a good idea to say, well, what we need to do is go out and create much better, far more inclusive data sets of various skin conditions across many different skin types. That should be people from all across the world in different climates and locations and
skin types and conditions. And we should better train these diagnostic tools, which potentially
could really both democratize access to dermatology diagnostics and could also help
with earlier detection of, you know, skin conditions that
people could take action on, you know. Now, we could step out of that logic for a moment and say,
well, no, what we should really do is make sure that there's enough resources so that there are
dermatologists in every community that people can easily see for free because they're always going
to do, you know, a better job than these apps could ever do.
And I wouldn't disagree with that statement. And also, to me, this is a case where that's a both
and proposition. If we have apps that people can use to do self-diagnostic, and if they reach a
certain threshold of accuracy and they're equitable
across different skin types, then that could really save a lot of people's lives. And then
in the longer run, yes, we need to dramatically overhaul our medical system and so on and so forth.
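As a concrete illustration of what "equitable across different skin types" could mean in practice, here is a minimal, hypothetical Python sketch (not part of the conversation; the test records, skin-type groupings, and threshold are all invented): rather than reporting one overall accuracy, the evaluation is disaggregated by skin type, and any group falling below the bar is flagged.

from collections import defaultdict

# Hypothetical held-out test results: (skin type group, true label, predicted label),
# where 1 = condition present and 0 = absent. These records are invented.
test_records = [
    ("I-II", 1, 1), ("I-II", 1, 1), ("I-II", 0, 0), ("I-II", 1, 1),
    ("III-IV", 1, 1), ("III-IV", 1, 0), ("III-IV", 0, 0), ("III-IV", 1, 1),
    ("V-VI", 1, 0), ("V-VI", 1, 0), ("V-VI", 0, 0), ("V-VI", 1, 1),
]

MIN_SENSITIVITY = 0.85  # hypothetical bar that every skin type group must clear

# Count true positives and actual positives per skin type group.
counts = defaultdict(lambda: [0, 0])  # group -> [true positives, actual positives]
for group, truth, pred in test_records:
    if truth == 1:
        counts[group][1] += 1
        if pred == 1:
            counts[group][0] += 1

for group, (tp, actual) in counts.items():
    sensitivity = tp / actual
    verdict = "ok" if sensitivity >= MIN_SENSITIVITY else "below bar, not equitable"
    print(f"skin types {group}: sensitivity {sensitivity:.2f} ({verdict})")

# A single aggregate accuracy number would hide that the model misses most cases in one group.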
But I don't think that those goals are incompatible. Whereas in another domain, like the criminal legal system,
I think that investing heavily in the development of so-called predictive crime technologies of
various kinds, I don't think that that's compatible with decarceration and the long-term
project of abolition. I love that you've reframed it as a matter of compatibility, because what I really
appreciate about your work is that you keep the tension. I mean, that you really insist on us
being willing to grapple with and stay vigilant about what could go wrong without saying,
don't do it at all. And I've found
that really inspiring. Well, can I say one more thing about that, though? I mean, I do.
Yes. And also, there's a whole nother question here, right? So, you know, is this tool harmful?
And then there's also, there's a democracy question, which is, were people consulted? Do against people's consent or against people's idea
about what they think should be happening in a just interaction with the decision maker,
then that's a type of harm that's also being done. And so we really need to think about
not only how can we make AI systems less harmful and less biased among the various types of harm
that can happen, but also more accountable. And how can we ensure that there is democratic and
community oversight over whether systems are deployed at all, whether these contracts are
entered into by public agencies, and whether people can opt out
if they want to from the automated decision system or whether it's something that's being
forced on us. Could you talk a little bit about the work you're doing around
bounties as a way of thinking about harms in algorithmic systems?
So at the Algorithmic Justice League, one of the projects I've been working on over
the last year culminated in a recently released report, which is called Bug Bounties for
Algorithmic Harms, Lessons from Cybersecurity Vulnerability Disclosure for Algorithmic Harms
Discovery, Disclosure, and Redress. And it's a co-authored paper by AJL researchers,
Josh Kenway, Camille Francois, myself,
Deb Raji, and Dr. Joy Buolamwini.
And so basically we got some resources
from the Sloan and Rockefeller Foundations
to explore this question of,
could we apply bug bounty programs
to areas beyond cybersecurity, including
algorithmic harm discovery and disclosure? In the early days of cybersecurity,
hackers were often in this position of finding bugs in software, and they would then tell the
companies about it, and then the companies would sue them
or deny that it was happening
or try and shut them down in various ways.
And over time, that kind of evolved
into what we have now,
which is a system where, you know,
it was once considered a radical new thing
to pay hackers to find and tell you about bugs in your
systems. And now it's a quite common thing. And most major tech companies do this. And so
very recently, a few companies have started adopting that model to look beyond security bugs. So for example, you know, we found an early
example where Rockstar Games offered a bounty for anyone who could demonstrate how their
cheat detection algorithms might be flawed. So they didn't want to mistakenly flag people as
cheating in their game if they weren't. And then there was an example where
Twitter basically observed that Twitter users were conducting a sort of open participatory audit on
Twitter's image saliency and cropping algorithm, which was sort of when you uploaded an image to
Twitter, it would crop the image in a way that it thought
would generate the most engagement.
And so people noticed that there were some problems with that.
It seemed to be cropping out Black people to favor white people
and a number of other things.
So Twitter users kind of demonstrated this,
and then Twitter engineers replicated those findings
and published
a paper about it. And then a few months later, they ran a bounty program in partnership with
the platform HackerOne. And they sort of launched it at DEF CON and said, we will offer prizes to people who can demonstrate the ways that our image crop system might be biased.
So this is a bias bounty.
So we explored the whole history of bug bounty programs.
We explored these more recent attempts to apply bug bounties to algorithmic bias and
harms.
And we interviewed key people in the field and we developed a design framework for
better vulnerability disclosure mechanisms. We developed a case study of Twitter's bias bounty
pilot. We developed a set of 25 design lessons for people to create improved bug bounty programs in the future. And you can read all about that stuff at ajl.org/bugs.
I feel like you've revived a certain 90s sentiment of this is our internet,
let's pick up the trash. It just has a certain kind of collaborative feel to it that I really appreciate.
So with the time we have left, I would love to hear about oracles and transfeminism.
What's exciting you about oracles and transfeminist technologies these days?
So it can be really overwhelming to constantly be working to expose the harms of these systems that are being deployed everywhere
in every domain of life all the time, to uncover the harms, to get people to talk about what's
happened, to try and push back against contracts that have already been signed, and to try and get
lawmakers that are concerned with a thousand other things to pass bills that will rein in the worst
of these tools. So I think for me personally, it's really important to also find spaces for
play and for visioning and for speculative design and for radical imagination. And so one of the projects that I'm really enjoying
lately is called the Oracle for Transfeminist Technologies. And it's a partnership between
Coding Rights, which is a Brazil-based hacker feminist organization, and the Design Justice
Network. And the Oracle is a hands-on card deck that we designed as a tool to help us
collectively envision and share ideas for trans feminist technologies from the far future.
And this idea kind of bubbled up from conversations between Joana Varon, who's the
directress of Coding Rights, and myself and a number of other people
who are in kind of transnational hacker feminist networks. And we were kind of thinking about how
throughout history, human beings have always used a number of different divination techniques like
tarot decks to understand the present and to reshape our destiny. And so we
created a card deck called the Oracle for Transfeminist Technologies that has values cards,
objects cards, bodies and territories cards, and situations cards. And the values are various trans-feminist values, like autonomy and solidarity and non-binary thought and decoloniality and a number of other trans-feminist values. The objects cards are everyday objects, like backpacks or bread or belts or lipstick. And the bodies and territories cards,
well, that's a spoiler, so I can't tell you what's in them. Um, and the situations cards are kind of
scenarios that you might have to confront. And so what happens is basically people take this card deck, and there's both a physical
version of the card deck and there's also a virtual version of this that we developed
using a Miro board, a virtual whiteboard, but we created the cards inside the whiteboard.
And people get dealt a hand, and either individually or in small groups, you get one or several values, an object, a people, places, or bodies and
territories card, and a situation. And then what you have to do is create a technology rooted in
your values and that somehow engages with the object that you're dealt that will help people
deal with this situation from the future.
And so people come up with all kinds of really wonderful things, and they
illustrate these. So they create kind of hand-drawn blueprints or mock-ups for
what these technologies are like and then short descriptions of them and how
they work. And so people have created things like community compassion probiotics
that connect communities through a mycelial network. And the bacteria develop horizontal
governance in large groups where each bacteria is linked to a person to maintain accountability
to the whole. And it measures emotional and affective temperature and supports equitable distribution
of care by flattening hierarchies. Or people created, um,
Right now every listener is, like, Googling, looking feverishly online for these, for the Oracle. Where do we find this deck?
Where can we, where, please tell us.
So you can, you can just Google the Oracle for
Transfeminist Technologies, or you can go to transfeministtech.codingrights.org.
So people create these fantastic technologies. And what's really fun, right, is that a lot of them,
of course, you know, we could create something like that now. And so our dream with the Oracle in its next stage
would be to move from the completely speculative design
on paper piece to a prototyping lab
where we would start prototyping
some of the trans feminist technologies from the future
and see how soon we can bring them into the present.
I remember being so delighted by a very, very, very early version of this.
And the tactileness of it was just amazing, like being able to play with the
cards and dream together.
So that's, I'm so excited to hear that you're doing that work. That's,
that is inspiring. I'm just smiling. I don't know if you can hear it through the radio, but
wow, I just said radio. It is a radio, a radio in another name.
That's true, a radio by another name. Oh, Sasha, I could, I could really spend all day
talking with you.
Thank you for wandering back into the studio.
Thank you.
It's really a pleasure.
And next time, it'll be in person with tea.
Thanks to our listeners for tuning in.
If you'd like to learn more about community-driven innovation,
check out the other episodes in our Just Tech series.
Also, be sure to subscribe
for new episodes of the Microsoft Research Podcast wherever you listen to your favorite shows.