Tech Won't Save Us - Digital Redlining in the Frictionless Society w/ Chris Gilliard
Episode Date: April 1, 2021

Paris Marx is joined by Chris Gilliard to discuss how decisions by powerful institutions over how to implement new technologies in cities, education, health, and more have the effect of creating a form of digital redlining that hides existing social problems.

Chris Gilliard is a Visiting Research Fellow at the Harvard Kennedy School Shorenstein Center and teaches at Macomb Community College. You can follow Chris on Twitter as @hypervisible.

🎉 In April 2021, Tech Won't Save Us celebrates its first birthday. If we get 30 new supporters at $5+ per month, we'll start a weekly newsletter in addition to the weekly podcast to provide a new way for people to access critical perspectives on technology. If you like the show, become a supporter and help us reach our goal!

Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, and support the show on Patreon. Find out more about Harbinger Media Network at harbingermedianetwork.com.

Also mentioned in this episode:

Chris wrote about how technology can hide racism in the "frictionless" society. He also wrote about digital redlining in education.

Despite redlining being outlawed, the effects can still be seen in many outcomes, including health. See the redlining maps at Mapping Inequality.

Amazon originally excluded predominantly Black communities when it rolled out same-day delivery in Boston.

In 2019, Facebook was sued by the Department of Housing and Urban Development for allowing discrimination in its housing ads. In 2020, it was found to still be doing it.

Bots are getting US vaccine appointments, and programmers are having to help relatives get appointments.

Support the show
Transcript
The more we create and allow systems to embed themselves that treat every interaction as an
opportunity for like a computational thing to get in between people,
it encourages and brings on a lot of problems.
Hello and welcome to Tech Won't Save Us. I'm your host, Paris Marx. And this month, April 2021,
is the first birthday of the podcast. I started Tech Won't Save Us in April of 2020 as myself
and many other people around the world were in lockdown. I'd wanted to start a podcast for a
long time, delving into these very issues that I've been
able to talk to people about for the past year. And while obviously I hoped that it would do
really well, I've been really humbled and just happy to see how the podcast has resonated with
so many people around the world who tune in every week or, you know, just occasionally to catch an
episode that seems interesting to them and have told me that they've learned things from the podcast and that they've
started to look at technology and the tech industry in a different way. And obviously,
that is the goal. And that's something that I want to continue doing into the future.
In the past year, Tech Won't Save Us has had listeners from more than 130 countries around
the world. This will be episode 55, and we've had 56 guests in total.
The podcast currently has more than 170 reviews
across at least 28 countries on Apple Podcasts,
and more than 125 of the listeners
have chosen to become monthly supporters
so I can keep doing this work
and expand the kind of work that I'm doing
to bring attention
to criticism and critical analysis of technology. I can't tell you how much I appreciate that.
It's really fantastic. Last month, we hit an important milestone. We passed $500 in monthly
support. And as a result, I'm going to use those funds to put together a new website for the
podcast and to start working on getting
transcripts done so people can access the content of these episodes and these interviews in another
way and hopefully make it easier for people to reference back to, to quote from, and just to
read and learn from if they don't like to or can't listen to podcasts. But I don't want to stop there.
The podcast has been highly successful
as far as I'm concerned, and it keeps growing every single month. But obviously, there are
people who don't care to listen to podcasts and get this kind of information through audio.
And so what I want to do later this spring is to start a newsletter for Tech Won't Save Us,
that will give a weekly roundup of some of the critical stories I've been following and paying
attention to, and that I think are important for other people to be paying attention to. And
hopefully, you know, I can add some context and some opinions and things like that to make it
even more interesting than just a list of links or anything like that. So this month, April of 2021,
I want to ask you if you have been enjoying this podcast, whether from the start, from last year,
or whether you joined in along the way and have been liking this content, have been learning from
Tech Won't Save Us, I would ask that you consider becoming a monthly supporter. If we get 30 new
supporters in the month of April at $5 a month or above, then in addition to the weekly podcast,
I will also start sending a weekly newsletter. And like the podcast,
it will be open to anybody because, as you know, I don't like paywalls. And let me just put that
into perspective. If less than 1% of all of the listeners of the show decided to become a supporter
at $5 a month or above today, our fundraiser would be over on day one. So I'm really excited
by how the show has been going.
I want it to keep growing and to keep producing more ways for people to engage with critical
views on technology to challenge the kind of Silicon Valley consensus that has existed for
many years, and that is increasingly and hopefully crumbling. But to keep putting in the time and
keep doing this work, I do need your support.
So if you like the show and you believe in its mission, please consider going to patreon.com slash tech won't save us, where you can join more than 125 other people who have done the same,
including supporters like David, Kevin McVeigh, and Luke Savo. Now with that said, this week's
guest is Chris Gilliard. Chris is a visiting
research fellow at the Harvard Kennedy School Shorenstein Center, and he also teaches at Macomb
Community College in Michigan. We had a fantastic conversation about his work on digital redlining
and how so many of these tech tools kind of build in discrimination and difference and try to make us ignore and forget how it is actually there. And so these kind of frictionless experiences that are increasingly
being sold to us actually hide a lot of problems that exist in our societies behind kind of a
technological screen or a technological curtain. And if we really want to deal with them, we need
to be able to look beyond that and see the issues that actually exist and not get distracted by these technologies.
I had such a great time chatting with Chris, and I think you are really going to like this
episode.
I'm really excited that the podcast will be turning one year old this month, and I hope
that you will consider becoming a monthly supporter so we can hit that goal and start
doing a weekly newsletter in addition to the weekly podcast. And obviously, you can do that by going to
patreon.com slash tech won't save us. And now let's get to this week's conversation.
Chris, welcome to Tech Won't Save Us.
Thank you very much. It's my pleasure to be here.
No, I'm excited to chat with you today and to discuss your work on digital redlining in particular and how that plays off the practice of redlining that occurred a number of decades ago, in housing in particular,
but obviously that filtered out into other areas of the economy. So can you give us a bit of a
background on what redlining was to set us up to understanding these kind of modern concepts and
how these things extend into the digital realm? Yeah, so I'd like to tell a little story that kind of illustrates it before I get into it.
I live in Michigan, and there's a town in Michigan called Grosse Pointe.
People may or may not be familiar with it from the John Cusack movie, Grosse Pointe
Blank, you know, but Grosse Pointe butts up against Detroit.
And if you're not familiar with that, there's also a road in Detroit that's called Eight Mile. That's like the delineator between Detroit and not Detroit.
So if you drive down these streets even now, or down Eight Mile or the border of Detroit and
Grosse Pointe, you can see a real distinction between how the roads are taken care of,
what the houses look like,
the amount of wealth on one side of the street or the other. And these are the vestiges of
redlining. So redlining was a policy put forth by the federal government that dictated where
loans would be allocated and insured and where they wouldn't be. And so at the behest of the
federal government, the Homeowners Loan Corporation drew maps of cities and color-coded them. And there's a really
great site called Mapping Inequality, where you can actually still see these maps and download
them in high res. So they would color-code these maps of cities ranging from desirable to hazardous.
Hazardous would be red. And that is where no loans would be guaranteed.
And so what typically happened, typically meaning kind of all the time, is that that
would be where in a city where not only where Black folks would be kind of forced to live,
but where city services would be less or non-existent, where incinerators and trash dumps would be,
and things like this.
And you can still see the vestiges of that.
And it's not even probably accurate to call it vestiges, because in some cases, it's still
in very stark relief.
So if you're familiar with a city and you look at a redlining map, you'll still see the disparities in health outcomes,
in education, and many times in the quality of schools. Again, where airports tend to be,
or incinerators or trash dumps, who has better water. All of these things are the leftovers of
a very racist and government-installed program. And so a lot of people kind of don't recognize
the ways that government-mandated policy
filters down into how people live
and what their health outcomes are,
their wealth outcomes.
Again, because in America for a really long time,
one of the primary ways that wealth was built
was through homeownership.
And so if you think
about the ways in which certain people would have been locked out of that cycle in the 20s and 30s
and 40s, you know what that means for their children and their children's children as being
locked out of that way of sort of building wealth, generational wealth that they never got to
accumulate. And so a lot of people don't recognize that for what that
means for how people exist today and for kind of like the wealth disparity outcomes that even still
exist. So that is a short, very dirty and not fully fleshed out definition of redlining.
Yeah, I think that gives us the key points, right? And I think the important thing to recognize there is obviously this was a policy put into place, I believe it was in the 30s.
And then, you know, was carried out through a number of decades. And then officially it ended.
But, you know, we can still see the effects of those policies decades later, even though they
are officially not in place anymore, with, as you say, these unequal outcomes that are still drawn
along these same lines that were drawn all the way back in the 30s and 40s. And so it's these kind of
invisible factors, I guess, in the city that a lot of people might not realize are there. They might
not realize the history of it and just think that this is kind of like a fact of life, right?
Yeah.
And I think that extends really well
into what you're also describing with digital redlining,
these things that become invisible
that we can't even see anymore.
So I guess, how do you see digital redlining
and how do you see that as an extension of
or what similarities it holds to the original redlining
that was taking place in the 20th century?
So the way I think about it and the way, the reason I think it's important to cast it as
digital redlining is because, so people now are less likely to deride an assertion that I think
has been true for a long time, which is that internet access is essential. So if you had said
that even a year ago, there were still lots of people who would claim it was ridiculous. But I think the pandemic has put that into, again, graphic relief.
But the reason I call it digital redlining is to show how these are active decisions that are being
made by tech companies, by the government in some cases, by municipalities. They're decisions being
made about access, about who gets what
kind of internet, about how much people pay for internet, about what equity means or doesn't mean.
And these decisions that are made have, again, very serious consequences for people's education,
for their health, just for, in a lot of cases, quality of life. So, you know, an example I use,
and this has changed, right? But an example I use a lot is Amazon when they're, as they're rolling
out kind of same day delivery and next day delivery and things like that, when they're
rolling it out in Boston, for a long time they had all the areas around Roxbury set up for same-day or next-day delivery. And Roxbury, which, as I
understand, has a large contingent of black folks living there, was excluded from that. And so that
was just like, you know, and like Amazon would basically just say like, oh, the algorithm did
that. Like we didn't, you know, it's not like we said no black people are going to get next day
delivery. But I mean, as you know, like as someone who does this work,
just saying that the algo did it,
you know, or it wasn't intentional,
doesn't erase the negative outcomes.
You know, it doesn't absolve responsibility either,
but it doesn't erase the negative outcomes
that result from those decisions.
And again, those are decisions, right?
They're not accidents or happenstance or anything like that. And so I call it digital redlining to point to the ways that it's very similar in that they're active decisions made by powerful individuals and institutions that have real-world effects on people in ways that often are beyond their control.
I think that's a fantastic point.
And in one of the articles that you wrote,
kind of outlining the ideas that you had
about digital redlining and kind of how it played out
in education and in a number of different areas,
you wrote that it is the creation and maintenance
of technological policies, practices, pedagogy,
and investment decisions that enforce class boundaries and discriminate against specific groups. And I think one, you know, example of
that that really connects the two concepts, the original redlining and the digital redlining that
you're talking about is, you know, how Facebook used to allow discrimination based on race in
its housing ads. And this was something that was illegal, but it was being carried out on Facebook because they had these like really intricate kind of targeting mechanisms
and didn't disable this aspect of it. So people could go in and exclude people who were black or
any other race from their housing ads. And that was illegal. And I believe they were eventually
charged with it and had to stop. But, you know, it's just one example of how this plays out, right?
Mm hmm. And I mean, we can go down the list of companies that have had some kind of similar
scandal, you know, whether that be Uber or Amazon or Facebook again, you know, and again and again,
you know, in things like age discrimination, Instagram, Google, you know, there are so many
well-documented cases, and I'm just naming some of the biggest offenders. We could also talk about some of the smaller ones and lesser-known companies, but it's a thing that we see again and again. And one of the things I want to highlight is that, much like traditional redlining, most people who would use Facebook would never know
this. It required a digging into the system by journalists and researchers to suss this out
in ways that we would never know without that. And so people don't know what they're not seeing
or what opportunities are being denied to them often with these systems. And I think that that's one of the most harmful and most important things to remember about these instances.
Yeah, I think that is so interesting how you describe that, right? And it brings to mind
an example that you gave in a piece that you wrote for Real Life a few years ago,
where you described, you know, an experience of, I guess, kind of experiencing this racism
at one of the schools that you taught at.
And then, you know, how those same kind of experiences and social cues and that kind of
understanding of the environment that you're around cannot be seen when those things are
built into the technologies and I guess kind of the interactions that come of them. So could you
talk a little bit about that and how that is an example of this kind of digital redlining and how it plays out?
Yeah.
So basically, I used to teach at an institution in the summer, and it was a program, it was an advanced program for young people on their way to college, but still in high school.
And so it was hosted in the engineering building, and I would often go to the engineering building
to make copies or things like that.
Now, this is a place where I not only went to undergrad, but I was faculty there and had been faculty there for a long time, approaching double digits in years.
Almost without fail, when I would go to the engineering building to make copies, someone, whether it's a faculty or staff, would chastise me as if I were someone who didn't belong there,
right? Either thinking I was a student or some other kind of bystander who did not belong in
the engineering building. I don't think it was unrelated to the fact that I'm black and have
dreadlocks. In fact, I mean, I'm not going to be coy about it. I think that's exactly why they would do that. And so the context in
which I bring that up is about systems that seem to be designed to eliminate what's called,
what many of the designers and tech folks would call friction. And by friction, how they mean it
and how I take it for them to mean it is messy interactions with individuals, right? So to take like Uber for an example, in their way of thinking about it, Uber eliminates
the friction of actually having to talk to a driver, right?
To talk to another human being and tell that person where you're going and maybe have to
have some kind of interaction with them that recognizes them as a human being.
So that is seen in a lot of technical systems as friction.
And I see that the project of a lot of platforms and technical systems and computational tools is
to eliminate the interactions between human beings, or at least to stand in between those,
between human beings and sets and groups of human beings. That's seen as a positive and as a way of
eliminating friction. It's a long way to get around to this point. I have a lot of problems
with that. But one of my main problems, to speak to your question, is that as a person who's part
of a marginalized group who makes my way through the world, I think it's really important to
understand the cues. The friction that exists in the world is really helpful for me
in terms of understanding my own safety. So to go back to that example, I knew that I was unsafe in
the engineering building. And that's not hyperbolic, because if someone thinks I don't belong there,
then a common thing that they would do is call the police. I guess I have to follow that chain
for some people who might be listening. A common thing they might do is call the police, right? Which in a lot of cases is going to have
very unfortunate consequences for me and my safety. And so as a Black man going through the
world, that friction is important to me in terms of safety, but also I think that friction is an important part of being human in the
first place.
So not the friction, right?
I don't want the friction of racism.
I don't want it, but understanding when it's there and when it's not is super important.
But I do want other kinds of friction.
Like, so if I go to a town that I'm not familiar with and get in a cab, I actually want to
have a discussion with the driver. And so Uber doesn't necessarily prevent that, but it promotes it as a positive.
I mean, and they actually have a service where you can ask that the driver not even talk to you.
So it doesn't prevent it, but the design of it makes it a thing that
you can request. And the friction of those
interactions is seen as a deleterious thing, not a positive one. I'm sorry, that's a super long
answer to your question. It's a long answer. But I think, you know, I think it's an important thing
that we need to understand about these systems. And so I appreciate you kind of laying it out and
going through it in detail. So people can really understand, like, the things that these systems are trying to hide, to negate, to get rid of, and how getting rid of those things will look different for different people. For some guy making $200,000 as an Uber software developer or whatever, it might not make so much
of a difference for them. But for other people who they're not thinking about, it could make
a difference to the way that they experience their everyday lives, their interactions in
the community, all these sorts of things, right? Yeah, I mean, I think my life is certainly you
know, I could say without question that my life is richer from interactions I have with people I don't necessarily know.
I mean, some of my best friends I've met that way.
But even just kind of on a day to day.
Right. It's the reason I go to a local coffee place instead of a Starbucks.
Again, not that there aren't human beings there, I don't want to say that, but like I think that that aspect of community that you can encounter and build
through interactions with people is really important. But also I don't want Uber or Amazon
or Facebook or Instagram or Twitter. I actually don't want those things as the mediator of my
relationships with other human beings. To the extent that I can avoid that, I do. I mean, Zoom, you know, what have you. I
don't want those things deciding how I interact with other human beings. And I think that the
long tail desire of many of these designers and systems is to have that role, right? To reduce
all interactions to interactions with a technology and not with human beings. We could think about
Amazon Go, which is their grocery initiative, or the Ring doorbell. One of the things they integrated now is that you can have Alexa answer the door for you through the Ring. And so I don't want that.
But I mean, also, it's not only sort of a preference thing. I think that for us
as human beings, I think it's a very dangerous road to go down when all of our interactions are
mediated by people like Mark Zuckerberg and Jeff Bezos and the ways that they see the world. I mean,
I'm pretty sure that the last 10 or 15 years of going down that road has
shown us, you know, we should turn back. And again, like this is getting pretty deep into it, but I do think that they've explicitly said over time, and the products that they release indicate, that their goal is to be kind of the operating system for
people's lives. Like the way you talk to people, the way you pay for things, the way you get goods and
services, like all of those things, right?
The way you get medicine, the way you get treated when you go to the doctor, like all of
these things, right?
And so it's why we see all these different companies have tendrils in all of these different
aspects of our lives, because each of them is in a way trying to
become that system that mediates all of your other interactions.
Yeah, I think you can absolutely see that with what so many of these tech products want to do,
right? They want to get in the middle of these interactions, because that's the way that
not only can they, you know, record them, but that's the way that they can monetize them,
that they can take a piece of that transaction for themselves and ensure that the transaction is formed and mediated in a way that can be commercialized instead of just, you know. I think that there's this general conception now that over the course of a number of decades, our communities have been eroded, right? Like the same kind of community spirit has declined with these long-term trends, I guess. And a lot of the kind
of public institutions and community institutions that used to exist, that used to bring people
together, aren't there as much. And it feels like almost like these kind of technologies and this
kind of idea of frictionlessness, of taking even more of that human interaction out of it, is like
a further extension of this and is kind of designed to break down these
communities and these human interactions even further in a way that serves, you know, I guess
the profits and the power of these companies, but ultimately doesn't really seem to serve us and
just creating like positive social interactions and kind of the kind of lives that I think most
people want to lead. Yeah, absolutely. I mean, I'm going to go to the coffee example again. I'm sorry. So I imagine a system where I could press an app and have a coffee made to my
liking at an exact time that's made by some sort of machine, you know, that serves it to me right
when I arrive. And I never have to talk to anyone, to look at anyone, to smell anything I don't want to smell, like all these things, right? I can imagine that system, right? And it exists now, but I think there are more and more initiatives to push that. But I can also imagine the system, right, of like what I did today. So I went and got a coffee, and I talked to the, you know, the barista, and we talked about anime, and, you know, like it was really cool.
And it was a human interaction that I think is really important in my day, you know, and
I don't want to cast it as if like, you know, what's most important is like how my day went.
But I think people are better for that, right?
Like I imagine that probably his day is like a little cooler.
Like when he, you know, I was like, oh, like here's Chris.
And we talk about anime and like, you know, he told me about the, you know, the beats
he's making, right?
Like, I think that that is better, right?
I think that's a better society where we recognize the humanity of other people.
And so again, like I use like a very simplistic example, but we could think about this in terms of education or health care.
We could think about it in all kinds of different ways. To have a system that's designed by people who don't look like me or think like me, who don't look or think like the people I love and care about, who in fact don't care about those same things, who care about monetizing data and extracting data.
And to exist in those ways, I think, is not the kind of society we want.
I mean, I can't speak for other people.
It's definitely not the kind of society I want.
And I think the more we move towards that, like the more we create and allow systems to embed themselves
that do treat every interaction as an opportunity for like a computational thing to get in between
people, it encourages and brings on a lot of problems. One of the things I would mention
about this is it's actually not friction-free, right? I mean, I think this is really important to remember. So if you think about, like, Amazon Go, for instance, think of all the steps that you would need to go through in order to walk into a store, pick up something off the shelf, and walk out. There are actually
more steps to that than there are to walking into a store, picking something off the shelf,
paying for it with cash and walking out.
There's actually way more steps. It's not friction-free because the way that people
envision these systems, it's only friction when you deal with human beings, not computational
tools. And so this is sort of the sleight of hand that's always involved in these things.
I completely agree with that. And
I think what you're describing is so important as we think through, you know, how these changes are occurring and who they're ultimately serving, right? But I did want to switch gears a little bit, because I think we've had a good discussion on how this kind
of plays out in the context of cities and communities. But as you mentioned there, you know, this also plays out in a number of different areas, including education, which is one that you're particularly
interested in as, you know, someone who teaches students on the regular. So can you describe a bit
how this digital redlining also plays out in education and, you know, with the students that
you speak to and deal with, you know, in your teaching, I guess.
I owe a tremendous debt to students for illustrating this to me.
I didn't know, or at least my concept wasn't as fleshed out until I saw how students experienced
it.
The example I often give is my institution for a long time had a pretty robust filtering
system for the internet on campus, which is not typical for a college,
but it's a long story why this existed, but it existed. And so students in my course,
we talk a lot about some of the issues I talk about all the time, privacy, surveillance,
the internet, computational tools. So students happen to be doing a thing on what is commonly,
or at the time was commonly referred to as revenge porn,
is now more commonly discussed as non-consensual intimate imagery. And so students were doing
research on revenge porn. And again, this is like a thing that is really important for, I think,
a lot of people to know, but particularly folks that age, given how much of their lives are lived online and some of the
narratives about sexting and things that were going on, like all these things. So it's a really
important thing to think about, to think through, to read about. So they were doing research on it
and they would type in revenge porn as people do. So the filter, the way the system was working,
just treated the search as if the word porn didn't exist. So not to attribute thinking to a system, but like thinking that they were actually
looking for porn rather than research on revenge porn. So students came to me and said, hey,
you know, there's not any work done on this, which I knew to not be true, but it highlighted a bunch
of different things that were really important.
So often we ask students to do research on areas in which they're not experts. And so when you do that, in this case, it made them think that there was not work done on this. And it's a very logical
conclusion. In fact, it's the same conclusion that many faculty had when they would do research on things on campus.
Things just wouldn't come up.
And because a lot of people, again, students and faculty, don't necessarily understand
kind of underpinnings of how search works or how the internet works or how filtering
works or any of that, or weren't even aware in some cases that that internet was being
filtered, they would draw certain conclusions that made a lot of sense. And so I started to
dig into this, realized that the internet was being filtered and all the ramifications of that
for students who were doing all the things that they were supposed to be doing. So it means that
certain kinds of knowledge are walled off for them. And again, so you can think about what a filter keeps out. It keeps out things that
relate to sex. It keeps out things that might be related to sexuality. But it also crosses some
boundaries, right? It might keep out, well, let's say I'm doing research on hate speech or on white
supremacy. Let's say I'm doing things on repressive governments. Let's say I'm doing something on romantic poetry. Let's say I'm doing something that has to do with the Bible. It would keep much of that stuff out, all of which
are legitimate research interests that one would do. And so it started to become clear to me the
ways in which the policy that was set by the administration had very real effects. I mean,
one of the things I also didn't mention is that it would keep out health information, and particularly, like, sexual health information. So the decisions made by
the administration have very real effects on these students, for whom, often, being on campus was when
they were able to do work. Many of them have a job or two, are taking care of family members,
don't have broadband at home, you know, all
these things.
And so they're very clearly being harmed by a decision made by people who didn't think
about these questions and issues and who certainly didn't understand fully what the
ramifications were.
And so I align that very closely with some of the things that we discussed in the beginning about how decisions made by powerful institutions have health, education, and general welfare
outcomes for people who are subject to those decisions.
Absolutely.
And I think that's such an important example that really puts it into perspective, right?
Even beyond filtering, which I think is one important piece of it, you know,
when it comes to even accessing research materials, like when you think about in universities and the
way that so much research is behind a paywall, it can depend on which institution you go to,
whether you have access to those things. And especially, you know, if you're outside of,
say, North America, Europe and Australia, then it just might not be possible to access the same kind of research materials and resources
as other people because your institution just can't afford to pay those incredibly high fees.
And so that's just another kind of barrier that can be in the way beyond access to the internet
and all of these other really important considerations,
right?
Yeah, a lot of people do not understand the way that journal access works.
I'm struggling for a kind of fancy word, but like how fucked up it is.
Like a lot of people just don't understand that.
As someone who's been at a variety of institutions, you know, small liberal arts schools, and now at a community college,
we only have access to certain pieces of JSTOR and certain pieces of Article First and things
like that, or whether or not we even have those databases. People who are not experts in those
fields often don't even know what they're missing out on. Like, similar again to the Facebook
example, they don't know what they're missing out on, or else they see gigantic price tags attached
to these things.
So to be very explicit, it's possible for me to have written an article being in the
state of Michigan and not being able to access it at my own institution.
And, you know, given the sort of Byzantine workings of the economics of those things,
in some cases, students are actually paying professors, you know, through their taxes,
they're actually paying professors to produce work that they can't access.
So it's like absurdity piled on absurdity in some cases.
Yeah, it makes absolutely no sense at all. And I remember like even being at institutions as I was doing like my undergrad and my master's
and budget cuts happened while I was doing the program.
And you could notice how like you would lose access to some of the databases, right?
Because they had to cut the budgets.
And then all of a sudden, you know, there was a whole ton of stuff that you couldn't
access anymore as a result, right?
Yeah.
So it's really wild.
I also wonder, though, I think these are more general issues with access in education and
how using these digital tools can produce that access in some cases, right?
But during the past year, we, you know, have been in this pandemic and so much more education
has been facilitated by
technologies and by the internet and all this kind of stuff. What do you think that has done
to access and how has that affected certain students in different ways compared to others?
Well, I mean, I'll say this from the outset. I think there've been a lot more people recognizing the existence of these issues that have existed all along.
So, for instance (and again, these are not great solutions; I think it's systemic and we need more widespread and dedicated responses to these things), we saw institutions set up hotspots and things like that in parking lots, understanding finally that not everybody
has broadband, right? Because there had been, I think, a widespread belief that everyone had it,
or, you know, that people's phones were sufficient to do anything they needed to do and things like
that. Getting that out of the way, you know, I think what has happened is that all the problems
that existed, all the cracks, all the fissures have turned into like,
you know, full scale, like fault lines, right? Everything that we knew about how these issues
kind of affected people has been magnified because, you know, so many people, so many
aspects of their lives have moved to primarily online. And so, you know, I think one of the
most important examples we can think about in terms of this stuff, right, because I can see someone thinking, well, you know, it's not a big deal if some kids in quotation marks in college don't have access to certain journals and things like that.
I would vehemently disagree.
I could see someone saying that, though.
But what we see now with the pandemic is it affects if people can get a vaccine, if people can get medical
treatment, who is able to get it first, right? And often that's not the people who are most in
need or the populations who are most in need because they have some of the worst access.
They aren't necessarily computer experts. Some people don't even, again, some people still don't
even have computers. And so when we see states set up vaccination, where the only way
you can get it is by having like an Eventbrite subscription, right? This shows like what a big
problem this is that we have, right? Like how widespread this is and that it can be life or
death. So, you know, if there's anything I hope comes out of the pandemic, if there's an insight we can gain, it's that these things, like internet access, are essential.
And again, even thinking about how we distribute tools and access and how we think about distributing public goods and services can't be disconnected from some of these other things. I mean, I think about the disaster in Texas, the climate-related
disaster in Texas, and the city of Austin was running their public service announcements
through Facebook. Like, the only way that you could get disaster announcements instead of the
city announcements was if you had Facebook. And we've seen this again and again in the past year,
that many cities, and in fact, you know, the federal
government, often don't have robust systems in place to take care of what are public goods.
And they rely on these platforms who have a very messy relationship with individuals and how they
treat marginalized individuals and what kinds of harassment you have to deal with when you're on
them and all these things, right? Not to mention the data that's extracted. And again, like going
like super deep down the rabbit hole with this. But I mean, I think the pandemic's highlighted
this in ways that make it impossible for people to ignore. Yeah, I think it's been fascinating
and really worrying to see how reliance on, you know, these websites,
these registration websites and things like that have kind of been the way to access the vaccine
and how, you know, just being able to get onto these websites and being able to fill them out
properly and just being able to book an appointment through them has shown to be in a lot of places,
a big hurdle for a lot of people,
not only because of the lack of access, maybe only having access through a phone or not having
very good internet access if they have it at all, but also because just of how the system is
designed, it's just not very good, right? To draw, again, a parallel. So I was really
trying hard to get a PlayStation 5.
And one of the difficulties, I mean, there are a couple of difficulties like chip shortage and things like that.
Right. But one of the difficulties is that these sites are beset with like sneaker bots.
The same type of mechanisms that are built up for, you know, sneakerheads and for certain people to dominate those markets existed for PlayStations and Xbox Series Xs and things like that.
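For anyone curious what a "sneaker bot" actually does, the core of it is usually nothing fancier than a relentless polling loop. This is a minimal, hypothetical sketch of the mechanism; the function name and parameters are illustrative, and no real site's API is shown:

```python
import time
from typing import Callable

# Hypothetical illustration of the polling mechanism behind sneaker
# bots and appointment bots, not code from any real bot or site.

def poll_for_slot(check: Callable[[], bool], interval_s: float = 5.0,
                  max_checks: int = 720) -> bool:
    """Re-run an availability check until it succeeds or we give up.

    `check` stands in for fetching a store or appointment page. A
    script can do this every few seconds, around the clock, which is
    the whole advantage over a person refreshing a browser tab.
    """
    for _ in range(max_checks):
        if check():
            return True  # a bot proceeds to book instantly at this point
        time.sleep(interval_s)
    return False
```

With the defaults shown, that's a check every five seconds for an hour of tireless retrying; relatives helped by coders effectively had a loop like this run on their behalf, while everyone else refreshed by hand.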
It's like a minor issue and like not a big deal.
Some people got PlayStations and some people didn't.
But when then you think about this in terms of like vaccine access again, right?
A thing that is an annoyance when we don't have certain systems in place or don't consider how this affects certain populations. It's an annoyance or dictates who gets luxury goods and things like that. No big deal.
But when it is something like vaccine access, right? And I forget who ran the story. It was
in the Washington Post and New York Times about how coders were helping their relatives, their friends and
relatives get vaccine appointments.
And I mean, great.
Yeah.
But, you know, that excludes huge chunks of the population.
Again, like in the crossover between like who is the most needy and which populations
have been most affected by the pandemic and who's getting help with sort of vaccine sneaker bots,
that would be an interesting overlay,
right?
Like I don't think it's the same people.
So it's real disturbing. I mean, is it disturbing? It's just like a real problem that I think we'll need to address moving forward. I mean, I hope we'll address.
I didn't know that about the coders. Wow, that was fascinating.
Yeah, I mean, it's just wild. Like the kind
of stories that have come out of this, but you know, I think when I do think about the vaccine
rollout, I feel like it's a really good example of kind of bringing together what you're talking
about with digital redlining, but also the effects,
like the long-term effects of the original redlining that took place that are still
experienced today. Like I saw a map not too long ago of Ontario and around the Toronto area where
they are starting to do vaccines through pharmacies. And a lot of the pharmacies were,
you know, in areas that were wealthier, and poorer areas,
which, you know, are also more likely to be communities of color, didn't have those kinds of pharmacies.
And they also had lower vaccination rates because fewer people were getting access to
it through there.
And then when I think about that, along with the conversation that's happening now, and
I think it's further along in the United States than it is in Canada. But about vaccine passports and this notion that you'll have a digital document that gives you,
I guess, access to travel and other sorts of things because you're vaccinated. It does seem
to be an example of how these kind of existing physical or geographical inequities that are
long-term and have existed for a long time are, you know, affecting the rollout of these
vaccines, but also the kind of experience that people will have after they get vaccinated and whether
but also the kind of experience that people will have after they get vaccinated and whether
they can access these kind of benefits of being vaccinated in an equitable way along
with other people.
Yeah, I mean, that's a great point.
And I think it's why I always cringe at discussions on technologies and systems that doesn't account
for history.
You know, that pretends that, you know, because some dude came up with an idea and you can turn it into an app, that it's a good idea.
You know, it makes sense or that it will work or that it will work for the populations who most need it to work for them. So without an understanding of history, right, you know, without an ability to overlay the racist history of a lot of how we got to where we are over the top of where we are
now, we're not able to understand kind of why things are happening the way they are. The
acontextual, ahistorical operation of just installing systems really hurts people in some
very real world ways, right? So yeah,
I think that's a great point. I was talking to Mar Hicks just a couple of weeks ago,
and they were describing how, well, we were kind of agreeing that Silicon Valley just has this
complete dislike of history and this complete ignorance about history and how, you know,
you can really see that in so many of kind of the ways that they approach these problems and stuff, right? Yeah. And so I know that we're kind of running near
the end of our time. So I wanted to end with one more question. You know, obviously, we've been
talking about the experience through this pandemic and how that has brought a lot of these kind of
digital inequities, these kind of things that have been around for a long time, but really brought
them into the light in a way that a lot of people can't ignore. But at the same time, it has also increased our
reliance on these technologies and these systems in a way that it seems like, you know, we're going
to be more reliant on these technologies that kind of put themselves in the middle of our
interactions even more in the future. So I wonder if you have any optimism or pessimism about kind of what happens next and where we go from here.
Well, you know, I mean, I'm an eternal pessimist.
So I got to be super honest that what fuels me a lot of times is rage about how terrible these things are and wanting to dismantle them, abolish them and launch them into the sun.
So, I mean, part of my thing is pointing out like why these things are
bad. But I will say that I think this past year has illustrated so much to so many. I think one
of the things that's so important and that's come out of this year, this past year, is people
understand and realize that the idea to abolish or dismantle some of these technologies is not
a ridiculous assertion. As someone who's been kind of railing against facial recognition and
face surveillance for a super long time, one of the things that people would say is the genie is
out of the bottle, right? Like, I don't like that phrase for a variety of reasons. That's often what
people would say, right? Well, it can't be banned. It can't be
dismantled because it already exists out in the world, right? And this is a thing that tech bros
and tech companies and powerful institutions in general like to tell us. Again, history tells us
different, right? There are all types of things that we've, as societies, decided are too dangerous
or too harmful to exist and either have been banned or hugely regulated,
right? Like I imagine that at some point in the past, these people would have been saying,
like, well, you know, you can't ban ricin, right? You can't ban like whatever it is, right? You know,
you can't have regulations about whether or not like a factory should have fire safety regulations
in place, right? Like I imagine these would be the same people.
And so it's unfortunately a lesson that we have to keep learning. But it's become really clear that it actually is possible, in some cases desirable, to ban these things, to have abolishing
certain technologies and institutions as part of the conversation. Because before this past year, many people would just say,
like, it can't be done, right? It's ridiculous and fanciful and not possible. That's proven to not
be true. Like, it's proven again to not be true. And so that is a really important thing, I think,
that has come out of the past year. If I were forced to identify some bit of optimism, that's where I'd find it. But also that
so many more people understand the importance of these systems. I would compare it to the economic
crash, right? That I didn't know at the time what a credit default swap was. Like I didn't feel like
I needed to. Turns out I did, right? But, you know, I think a lot of people didn't understand or weren't as fully versed in ideas about algorithmic bias and, you know, algorithmic amplification and things like that as they are now.
And as more people become aware of them, I mean, more people realize just how harmful some of these things are.
So, I mean, I wish these systems didn't exist, right? So, like, I don't want this to be a lesson
that we have to learn, but that part isn't going to happen, right? I can't wish them out of existence.
So, the next best thing is that more people understand how relevant and important these
discussions are. I completely agree. And what you described there is also what is giving me hope,
you know, seeing that there's more people recognizing that these things can be undone, and more people are just realizing the problems that exist.
And I'll also add, you know, that I really appreciate your kind of interjections and
your kind of criticisms of so many of these technologies and things as they get written
about, you know, through Twitter and things like that. So I really appreciate that. And I appreciate
you bringing them to the show today and having this conversation with me.
So thanks so much.
Yeah, it was my pleasure.
Thanks again so much for having me on.
Chris Gilliard is a visiting research fellow at the Harvard Kennedy School Shorenstein
Center.
You can follow Chris on Twitter at @hypervisible.
You can follow me at @parismarx, and you can follow the show at @techwontsaveus.
Tech Won't Save Us is part of the Harbinger Media Network, a group of left-wing podcasts
that are made in Canada. And you can find out more about that at harbingermedianetwork.com.
And if you want to help us hit our goal of getting 30 new supporters in the month of April,
you can go to patreon.com/techwontsaveus and become a supporter. Thanks for listening.