Tech Won't Save Us - Maybe We Should Destroy AI w/ Ali Alkhatib
Episode Date: December 26, 2024. Paris Marx is joined by Ali Alkhatib to discuss the difficulty of holding the AI industry accountable and why sometimes it makes sense for people to destroy AI systems that are harming them. Ali Alkhatib works with Logic(s) magazine and was previously the director of the Center for Applied Data Ethics. Tech Won’t Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon. The podcast is made in partnership with The Nation. Production is by Eric Wickham. Transcripts are by Brigitte Pawliw-Fry. Also mentioned in this episode: Ali wrote a blog post called “Destroy AI.”
Transcript
If people are designing these systems to cause harm, fundamentally, then there kind of is
no way to make a human-centered version of that sort of system.
Hello and welcome to Tech Won't Save Us, made in partnership with The Nation magazine.
I'm your host, Paris Marx, and on this special holiday-ish episode, because I'm not recording
anything new over the holidays, I figured I would release one of the premium episodes
that I have been making available to Patreon supporters from our Data Vampire series in
October.
And I wanted to start with this interview with Ali Alkhatib,
who works with Logic(s) magazine
and was previously the director
of the Center for Applied Data Ethics.
Now, if you listen to the series,
you will have heard clips from this interview with Ali,
but I thought it was a really good one on AI
and the kind of political aspects of AI
and the broader consequences of this AI hype and the
rollout of these AI systems that really gave me a lot of food for thought and that I heard really
positive responses from the Patreon supporters when they heard it as well. So I figured if there
were going to be some premium episodes that were going to be released to everybody, I wanted to
include this one so that it could enter, you know, this broader conversation and maybe get you thinking about some other aspects of the AI conversation by going beyond what I
was able to include in the Data Vampires series. So I hope you enjoy this conversation with Ali.
And since this is the final episode that will be published in 2024, I also wanted to say a special
thank you to Brigitte Pawliw-Fry. You will probably have heard her name in the outro to the
show if you, you know, frequently go that far. She has been helping with the transcripts for the show
for the past like couple years now. And she has been doing a fantastic job. We now have more than
100 transcripts available on the website for previous episodes. There has been a little bit
of a delay lately because we have both been very busy. And I wanted to say a special thank you to Brigitte because she will be finishing up with
the show at the end of 2024.
And so, you know, just a big thanks for all the work that she has been putting into this
to make the shows more accessible to people by having these transcripts on the website.
I've really appreciated the work that Brigitte has done for the show over that time, and
I wish her all the best in the future.
And so with that said, if you enjoy this episode, make sure to leave a five-star review on your
podcast platform of choice. You can share it on social media or with any friends or colleagues
who you think would learn from it. And as always, if you want to support the work that goes into
making Tech Won't Save Us and to get access to future premium episodes, because we still have
a bunch that we need to release from the Data Vampire series in 2025, you can join a number of supporters, and I'm going to say quite a few names here because I realized that I let the
list really kind of back up lately. So you can join supporters like Lucas from the Czech Republic,
Dan from the Netherlands, Aretha from New Jersey, Ed from Limerick in Ireland, MV Ramana from
Vancouver, who was also recently on the show, Alice from Boston, Kirsika in Finland,
Scott from Philly, Cam in North Pole, Alaska, Deb from Guelph, Ford from California, Peter in San
Francisco, Jessica from Seattle, Washington, Emery in Portland, Alan in Toronto, David from Spain,
Ruth in New Zealand, Julian from Austria, and Katie in Kelowna by going to patreon.com
slash techwontsaveus where you can become a supporter as well. Thanks so much and enjoy
this conversation with Ali Alkhatib. What do you see as being the biggest
problems with generative AI and the hype around it right now?
So I think that there are a couple of problems. One of them is that the nature of the data that
you need for these kinds of systems is both in scope and in kind
so far-reaching that it sort of makes impossible a serious conversation about consent, about
consent to the accumulation of all of this data that's out on the web in public,
the acquisition or the elicitation of consent from people who are using systems and services
that they need to use for myriad other
purposes in society and in life that they can't meaningfully consent or withdraw consent from.
And sort of the broader sort of like understanding of how is the data that's collected around me
going to be used? And do I consent to the speculative uses of those data? So that's
like one element of it. I think that another major element of the problems with generative AI is that the entire design and application of these
large language model systems, these generative AI systems, is kind of inherently speculative
and sort of undefined or ill-defined. And so it needs to be highly generalized. So people will
say that this system is as intelligent as a high school student, but
that's not a benchmark or a definition that means anything to anyone in the serious kind
of academic evaluation community.
So it's sort of just a free hand that kind of means nothing.
And that makes it very difficult to either design a system that is effective for any
specific task, or even to evaluate this generative AI system for any specific task.
And it makes for this very problematic shifting sand of
trying to build anything on top of it that's just kind of impossible.
Fascinating. What do you see as the biggest harms that come of these
generative AI systems when they're deployed out into the world?
I think one of the biggest harms is that it offloads a lot of the decision-making and particularly a lot of the
human discretion that needs to be exercised to algorithmic systems, which can't exercise
discretion, at least not in any way that we understand it or that we think about it,
and certainly doesn't understand any of the peculiar qualities or characteristics or anything like that that we have and live
in our actual experience.
So these algorithmic systems are just not capable of comprehending, understanding, whatever
word you want to use, the peculiarities of what we're dealing with or what our life is
and the things that make our circumstances unique from other circumstances or other cases
that it might have in the training data. That makes it very difficult to get any kind of semantically meaningful justice or the correct
kind of decision, the spirit of the rules or the patterns that these systems are trained on
to actually manifest in real life. And then it also makes it really difficult for people to,
even people who are using these generative systems to kind of quote-unquote inform their decisions, it makes it difficult for them to rebuke the recommendation that the system
makes, which means that people are technically in the loop, but they're actually not really in a
position to exercise that authority in any meaningful way, which makes it, again, very
difficult for people to imagine a more just world, imagine a better future of the world. And so
instead, they sort of have to
follow somewhat blithely the recommendations or the emissions of these algorithmic systems.
Yeah, that's fascinating. And it makes a lot of sense as well. It also makes you kind of
concerned about the broader impacts of that if it does become adopted at scale as so many of these companies expect and want us to do,
right? Yeah, certainly. I mean, one of the things that is both a challenge and a possible positive
future or a positive reality of having a lot of street-level bureaucrats and having people who
are making decisions in the world is that they can situate their experiences and their knowledge and they can learn things as they're going. And monolithic
algorithmic systems simply can't really do that, especially with these large language models that
you can fine-tune at the edges in incremental ways, but really you can't do deep retraining
of the foundation models or of the basic kind of foundations of these systems, which makes it very difficult to seriously talk about reforming these systems in any meaningful
way or asking these companies that have hundreds of millions or billions of dollars invested in
the training of these systems to radically rethink the way that they go about doing anything.
It's already difficult to do with bureaucratic systems and institutions, which is one way that
I tend to think about algorithmic systems.
The human sort of social structure is quite difficult to change and fix and reform,
but it's at least conceptually possible in a way that's much more difficult when everything is even
more localized and more sort of compressed into one locus of power.
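For readers who want a concrete picture of what "fine-tuning at the edges" of a foundation model looks like in practice, here is a minimal sketch (not from the episode): the large pretrained base is frozen and only a small task-specific head is trained. The model, dimensions, and data below are hypothetical placeholders.

```python
import torch
import torch.nn as nn

class FrozenBaseClassifier(nn.Module):
    def __init__(self, base: nn.Module, hidden_dim: int, num_classes: int):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # the "foundation" weights stay fixed
        self.head = nn.Linear(hidden_dim, num_classes)   # only this small layer gets trained

    def forward(self, x):
        with torch.no_grad():                            # no gradients flow into the frozen base
            features = self.base(x)
        return self.head(features)

# Toy stand-in for a pretrained base; in practice this would be a large pretrained model.
base = nn.Sequential(nn.Linear(512, 256), nn.ReLU())
model = FrozenBaseClassifier(base, hidden_dim=256, num_classes=3)

# Only the head's parameters are handed to the optimizer.
optimizer = torch.optim.AdamW(model.head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 512)                                  # fake batch of inputs
y = torch.randint(0, 3, (8,))                            # fake labels
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

The point is only to illustrate the asymmetry Ali describes: adjusting the edges is cheap, while genuinely retraining the base is out of reach for almost everyone.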
You know, there's been a lot of talk in the past year and a half about what effective regulation of AI is going to look like or should look like.
You know, a lot of debate about those things.
A lot of CEOs speaking out and saying what that should look like.
And unfortunately, lawmakers listening to them as though they have the answers. But you recently wrote an essay about destroying
AI, you know, taking one step further than that, than just kind of passing some regulations to try
to, you know, reduce the worst aspects of what these AI systems can do. What brought you to the
point to write something like that and to take that further step? Yeah, sorry. So I've been
studying human-computer interaction for about 10 years, started a PhD program 10 years ago today, actually, or close to today. I mean, I had been spending a long time thinking about how to develop human-centered systems, and particularly writing papers that were trying to bring ideas from the social sciences about power, about oppression, about violence, into understanding how algorithmic systems can manifest these kinds of harms and trying to
encourage people to think along those kinds of lines to understand and then to design
consequential algorithmic systems in various different ways.
And part of my frustration was coming from the feeling that HCI was sort of not picking up some
of that. Although I suppose I should be grateful that a lot of my work has been very well received,
especially in the quantitative and awards sense and everything like that. But it really just feels
like a lot of the work that I had done didn't sufficiently or didn't adequately address what
was sort of a core problem with these systems
in the first place, which is that if people are designing these systems to cause harm,
fundamentally, then there kind of is no way to make a human-centered version of that sort of
system. In the same way, legislation that makes it slightly more costly to do something harmful
doesn't necessarily fix or even really discourage tech companies that find ways to
amortize those costs or kind of absorb those costs into their business model.
One example that I think I've given recently in conversation was that there are all sorts of
reasons or all sorts of powers that cause us to behave differently when we're driving on the
streets. Because as individual people, the costs of crashing into another car or of hitting a pedestrian or something like that
are quite substantial for us as individuals. But if a tech company that's developing autonomous cars
is going to put 100,000 or a million cars out onto the streets, it really behooves them to
find a way to legislatively make it not their fault to hit a pedestrian, for instance. And so
they find ways to sort of defer
the responsibility for who ultimately caused that harm or who takes the responsibility for
whatever kind of incident or whatever. And so that creates like these really wild,
perverse incentives to find ways to sort of consolidate and then offload responsibilities
and consequences for violence. And I just don't see a good way out of that with design, or even
with a lot of legislative solutions and everything else like that. And so I kind of wanted to explore
what I thought was a somewhat not overly provocative suggestion, which was that
sometimes a person will encounter an algorithmic system and it is not going to stop hurting them,
and they will not be able to escape the system. And given those two facts, I think it's pretty
kind of obvious that it is reasonable to start dismantling the system to destroy it. And I'm
not saying we should necessarily destroy everything that has silicon in it or something like that,
although I'm sure there are probably people that would argue that, and I'd be happy to hear them out. But it doesn't seem
radical to me to say, if you can't leave a system, if the system is harming you, if you can't get it
to stop hurting you, there really aren't that many other options. And I think it's reasonable
to say you don't have to take it. You don't have to continue to be harmed. And if it forecloses on
all of the other possible avenues that you have, then one of the avenues that we sometimes don't like to talk about is to start destroying the system. And I think that as a
person trained in design, sometimes I see papers where people say, maybe the implication of this
research that we're doing is that we shouldn't design systems, or maybe what we should do
is dismantle the algorithmic system. But these are the conversations that designers have amongst
themselves and not necessarily a conversation that we're telling people out in the world to
consider as a potential answer. And it occurred to me that I think I'm sort of struggling with
designers of tech systems who believe that they themselves can be the arbiters of whether to
dismantle an algorithmic system that's causing harm, but not the people that are affected by the algorithmic system or not the other, the people who are downstream of the system.
And I think I wanted to sort of continue to explore that thought and say,
if somebody can make a decision that the system should not exist, why can't the person who
is facing it on a regular basis? And what would that look like? Or what would be the vocabulary
that we would use or that we should develop to have an understanding of or an appreciation of the need to do that, the
mechanisms that people go about doing that with? And how do we make sense of that? Or how do we
just understand that in general? Is it even necessary to understand it? Or do we just
have to accept that that is a thing that people will do sometimes? And that that's not,
I don't even want to say that's not malicious, but it's certainly not something that we should be trying to rebuke or challenge
or fight. It's interesting to hear you explain that, you know, explain what led you to write
that. And then to think about how wild it is that it's so rare to talk about dismantling or
challenging AI systems in that way. Like, you know, there's plenty of talk about regulation and plenty of talk about, okay, how something is bad and we need to do something
about it. But the idea of actually going and targeting that system and trying to dismantle it
is something that's so outside the realm of like the usual discourse when it comes to technology.
And it strikes me as odd that that's the case, that it's something that doesn't come up more
in these discussions. Yeah, I mean, I understand it, and I'm not from some other planet. I totally get that it's uncomfortable to talk about, like, we should just destroy this thing. That seems violent, it seems scary, it seems, what do we do with the wreckage afterwards? I don't know. This is an entire space that's uncomfortable to dwell in.
I don't know whether this analogy is very good. I live in Ann Arbor, and it's a small town that
sometimes inflates to five times its size because there's 100,000 people that come here for football
games and things like that. I was workshopping this thought that what if there was a restaurant
in town that was constantly getting people really sick and the health inspector, for whatever reason,
was just not doing anything and no law enforcement was doing anything and visitors would just keep
coming and getting really sick and some of them getting really seriously, like their lives would
be derailed in increasingly catastrophic ways. And there was nothing that we seemed to be able to do
to effect change in how that restaurant operated or anything. And the reality of how
people are engaging with the city was that they've sort of passed through and just didn't realize
that this is a restaurant where people go and get sick or something like that.
I don't think it's crazy to say, we need to take matters into our own hands. I don't think that the idea that the market will solve this problem will
support that in this case, because that restaurant really only needs to make sales once every couple
of weeks, and they'll be able to coast through the next month, and they will continue to do harm.
They will not get better or anything else like that. I think the way that I was thinking about
this with regard to algorithmic systems was that these systems are not going to get better if we live in, let's say,
a regulatory environment where the FCC or various federal regulatory agencies or even
international regulatory agencies like the EU are just unable to wrap their arms around what these
tech companies are doing, what these systems
are doing. And in particular, given that people will suffer in the meantime while these organizations
are trying to do something about it, it just seemed sort of reasonable to say, well, of course,
there are things that we can do in the immediate term to try to mitigate that harm or try to
sort of stop that harm. I think, yeah, it certainly is uncomfortable to
think through and all of that, but at the very least it should be a credible possible outcome
that somebody will stop you if you try to hurt people. And it's not just that you will face
consequences later, but that somebody might just stop you. And of course that's uncomfortable. I
don't like confronting people. I don't like stopping people from doing things, but I also don't like them hurting people.
So like, it's kind of a catch-22 that we're in.
Yeah, no, I think it makes perfect sense.
And I think your analogy works for me.
What would it actually look like then to sabotage an AI system, to destroy an AI system, to
make it stop doing the harm that you're talking about?
Yeah, there are a lot of ways, right? So I
think one of the things that I have thought about has been sort of a framework that I guess is better
known from How to Blow Up a Pipeline, which is not just about how to blow up a pipeline,
but about how to change the political economy of operating oil pipelines, fundamentally changing the costs of operating these systems,
of operating extractive greenhouse gas emitting businesses. And I think that what we need to do
is shift how costly it is to run these systems completely irresponsibly. And so I think one of
the things that people can do in the immediate term is find ways to subvert or get around algorithmic systems when the system is harming them.
Find ways not to provide those systems with the data that the creators or the designers of these systems are trying to get from you.
Find ways to make the system more costly or less effective or less kind of advantageous in whatever way you can find or whatever,
given whatever circumstances you have. And sometimes that means intentional work slowdowns,
like slowing down work or work stoppages. Sometimes that means feeding false information
to the system. One project that I was really excited about, I think it was at the University
of Chicago, Glaze, which is basically a system
which, at least at the time that we're talking about it, inserts various non-visual artifacts
into images, so that when machine learning systems encounter these images and try to train
on them without the consent of the artist, it basically sabotages the model in various ways
and causes all sorts of weird artifacts to emerge and things like that. And that's not just a way to encourage
the designers of these machine learning systems to actually go back and get permission. It's a
way to make it more costly for them to run these crawlers. It's a way to make it more costly for
them to incorporate these images into datasets. It's a way to make it more costly for them to run these generative AI
systems completely indifferent to how the data got acquired or collected, how the data got
incorporated into these datasets and everything else like that. It's a way to make it fundamentally
more difficult at every step of the way to do what they're doing the way that they're doing it.
Part of that might also include things like basically making a lot of
the use of tech less effective, less efficient, things like that. But I try to be mindful of the
fact that people encounter algorithmic systems in a million different ways. And I don't want to give
a prescription that says, feed false data into every system, because there are going to be
circumstances where if a certain individual is caught feeding kind of like bullshit data into a system,
they're going to face more retribution than I would in other circumstances, for instance.
But to think about ways to make it more costly, basically, to run these systems and to be
indifferent about the consequences of the systems or the inputs of the systems.
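As a purely illustrative aside: Glaze's actual technique is its own research project, but the general family of ideas Ali is pointing at, adding small, hard-to-see perturbations to an image so that models which ingest it behave worse, can be sketched with a classic adversarial-example method (FGSM). Everything below, the tiny model, the random "artwork," and the epsilon value, is a hypothetical toy, not Glaze's method and not a recommendation for any specific system.

```python
import torch
import torch.nn as nn

# Toy stand-in for an image model; real cloaking tools target real feature extractors.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()

image = torch.rand(1, 3, 32, 32)           # a fake 32x32 RGB "artwork"
label = torch.tensor([0])                  # the class the model currently associates with it

image.requires_grad_(True)
loss = nn.CrossEntropyLoss()(model(image), label)
loss.backward()

# FGSM step: nudge every pixel slightly in the direction that increases the model's loss.
epsilon = 4 / 255                          # keep the change visually negligible
perturbed = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# The perturbed image looks nearly identical to a person, but it is now a slightly
# worse example for this particular model -- the general idea behind making scraped
# images costlier to use without consent.
print(float((perturbed - image).abs().max()))
```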
In another essay of yours that I read, you talked a lot about accountability,
in particular for the people who are creating these systems and how a lot of that accountability
is not something that they feel right now, the people designing these things. Why is it so
important for those who are making decisions that can cause these algorithmic harms to be
held accountable for that when it happens?
I think that there are a couple of reasons. I think the most micro level is that when people
make decisions about other people's lives, I think that they should internalize and understand the
consequences on the people that they're making decisions about. And if they don't care, or if
they're allowed to be indifferent to what's happening to the people that they're
making decisions about, that creates an environment where a person can be really indifferent and
really callous about what they're doing to other people, to other human beings. And I think that's
a bad place to be as a society. I think that, hot take, we broadly should not support the ongoing
administration of systems where people can make consequential decisions
about the lives of other people with total indifference about what that does to them.
But that's on the micro level. I think that that's one small component of it. I think that
if you think about, again, if you think about tech companies that run self-driving cars and
they think to themselves, well, our 100,000 car fleet hits 5, 10, 50 people per year,
that's 5, 10, 50 people whose lives are totally changed by
the thing that has happened to them because of this tech company. But to the tech company,
this is a cost. This is a question on a balance sheet about whether they should invest in lobbying
that says that self-driving cars are not responsible for the consequences of hitting a
person. And that's a fundamentally different kind of conversation than the ones that we should be having, which is
what are the human costs of these kinds of systems out in the world? And I think that at a larger
level, big tech companies do not have the capacity. They just are not built in a way in a capitalist
society to fundamentally, to basically work on the human costs. They can only really work on
these financial costs. And I think that regulatory agencies or regulations in general that say, here are the
financial costs to try to translate human costs into something that capitalist entities understand
sort of fundamentally don't work. If you have a company that's big enough that they can spend
as much money as they do in places like Uber or Lyft or whatever to actually rewrite the rules
of gig work or actually rewrite
the rules of accountability for self-driving vehicles or actually rewrite the rules for
an algorithmic system that determines whether to separate kids from their parents.
A tech company that has millions or billions of dollars in this kind of industry of providing
these services is going to find ways to make it cost effective to cause this kind of harm.
And I don't necessarily believe that as a society, we're adequately equipped to deal
with that from the top down without any bottom-up kind of force to also make that cost more salient.
You know, we've been talking a lot about the problems with these systems,
the harms that can come from them, and why it's important to push back on that, whether it's through regulation or other means. But I also wonder about using technology in a way that benefits the broader public rather than just being used, you know, in the way that these
companies are using them, as you've been describing.
What do you think that future looks like?
So I think that there are a couple of like, I guess, entry points to this, right?
So one of them is, I think that we have like completely missed the exit, conceptually, on talking about what a future would look like where consent is at the core of everything that we do. And it's not too late to go back and revisit that, it's just that they're too committed themselves to that paradigm of skipping past consent and saying the information's out there or the data is already
on the web or whatever. And so therefore there's just no talking about whether we use
somebody's data on a website because their robots.txt file wasn't updated in time for our
secret project that was crawling the web before we told anybody about it. But that's one element of this that we could say the core thing we're going to try to work towards
is that consent is part of the conversation. It is a central part of the conversation at every step,
and that includes the collecting of data. I don't think it's necessarily impossible to have a
conversation about collecting vast amounts of data from people who consent to that
data being collected if the project makes sense to them, if the project that the people are working
on is something that they want to participate in. The only reason that it seems unbelievable or
not credible right now is because we talk about systems that have no speculative value to them.
And so, of course, it goes without saying that nobody would consent to their data
being collected for this kind of system, because it's not clear what ChatGPT does. It's not
clear what these systems do. And in that kind of paradigm, of course, consent can't be acquired,
because this is like a doctor saying, I want to do doctor stuff. Can I have some of your cells?
And it's like, no. Can you tell me what the research is? Can you tell me anything about it?
And that was how we lived for a while. I mean, Henrietta Lacks is like an entire module for a lot of data ethicists, because doctors just took her cells without talking to her about it, without telling her about it, without getting her consent about any of it. And her cells produced all of these changes and helped medical science advance and all this. But she was capable
of giving consent if that was the case, if that was truly the argument that they wanted to make.
Did they think that she was not intelligent enough? Did they think that she was not capable
of making a compassionate, pro-social decision? Did they think that only they were capable of
adjudicating that? That's crazy, right? Anyway, all that is to say that in medical science today,
for the most part, people look back on what they did to Henrietta Lacks as a gross and vile
transgression of human autonomy. And I hope that we can think about what kind of a future would
look like to say, we want to do research. We need to use your cells. We think that it can be useful
for this. Do we have your permission to collect a sample? That's not ridiculous. It's not impossible to think about that in the world of tech. And so I think that's
one part of it. I think that another part of it is, again, using technical systems to help us make
sense of complex problems is not something that I'm categorically against. I think that
computational systems can be great ways of trying to sort of like draw comparisons between two
fundamentally different things. But I think that when that system starts to become overly decisive
or have an outsized kind of like weight of what influence it has in making decisions
about consequential things, then it becomes obviously much more harmful and much more
problematic and dangerous. And again, I think it comes back to this question of like, do people consent to the influence that the system has over
this particular decision about my life? And I think that in a lot of ways, tech companies find
ways to claim that people are not stakeholders in decisions that are about them, or they find ways
to say, well, this person's just not informed enough or too stupid or whatever to make an
informed decision that is
for the benefit of society or whatever. And these are all terrible rationalizations that
don't really even hold up to scrutiny today. And I certainly hope don't hold up to scrutiny in the
future. And that I hope when somebody says that sort of stuff in the future, people can just
immediately say like, that's just ridiculous. Like we just don't need to live like this.
And that they can dismiss it without ever even taking it seriously in the first place. Is there anything else that you wanted to add, or do you feel good? I think like
one thing that I kind of want to get off my chest, I'm like, I'm happy to like have it cut or
whatever, but I'm curious if you have thoughts about this even. So I have been thinking about
the critique that people have made about the destroying AI post where they were saying like,
you don't know what AI is. And I want to kind of come back with a blog post. I mean, part of the challenge is it's very difficult to top that blog post because
somebody actually forwarded it to the cops at one point because they were that freaked out about it.
And I was like- In a whole different city or something, wasn't it?
Yeah. Yeah. So they forwarded it to the campus police of the university where they graduated.
And I was just like, what is even going through your head? I almost wanted to help this person,
but I also realized like, no, I don't. That's actually self-sabotage. But yeah, so how do you
write a blog post that's better than that, right? But the thing that I've been thinking about has
been how would I define what AI is? Because I think particularly as I have written about,
so in late 2020, maybe early 2021, it came out that Stanford had
quote-unquote like an algorithm or some system or whatever to decide when and how the COVID
vaccine would be deployed to healthcare workers. And it was prioritizing senior researchers who
were never on campus, who could work from home. It was downranking people who were actually in hospitals every day. And so the
system was obviously roundly scorned. And then it sort of emerged that they were just using a flow
chart. It was like a thing on an easel that they could just show. It was an algorithmic system in
the most STS kind of way possible, like science and technology studies kind of
definitions of what an algorithm is. But it wasn't machine learning. It wasn't like AI or anything like that. It was just like
some weights and some flowchart things. And all of that is sort of to say, I think that
my definition of what AI is, is not about like, oh, does this thing use machine learning? Because
like 20, 30, 50 years ago, I don't even think machine learning was popular. Well,
50 years ago, I suppose it was like on the tail end, but there was like a while of machine learning being kind of popular.
And then there was the Lighthill report that was sort of like, we've wasted a lot of money on this
and it's not paying off. And like, these people don't actually know what they're doing. And then
there was the AI winter. And then I think through the nineties, there was like a lot of like expert
system encoding and stuff like that. Nothing machine learning related. It was not AI as we
know it today. But I think the thing that we would all recognize all the way through continuously is the
technopolitical project of taking decisions away from people and putting consequential,
life-changing decisions into a locus of power that is silicon or that is automated or something
along those lines, and redistributing or shifting
and allocating power away from collective and social systems and into technological
or technocratic ones.
And so this isn't really like a definition of AI that I think a lot of computer science
people would appreciate or agree with.
But I think it's the only one that, again, if you were a time traveler, kind of like
going back 20 years and then 20 more years and then 20 more years, you would see totally different methods.
But I think you would see basically the same goals, basically the same project.
And I think that's sort of like my definition of what AI is.
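To make the earlier Stanford flowchart anecdote concrete, here is a hypothetical toy sketch of that kind of non-ML "algorithm": just hand-picked weights and flowchart-style arithmetic. The criteria and numbers are invented for illustration and are not the actual Stanford formula.

```python
# A made-up "weights and a flowchart" scoring rule -- no machine learning anywhere,
# yet still an algorithmic system in the STS sense, with real consequences.
def priority_score(age: int, seniority_years: int, patient_facing_shifts_per_week: int) -> float:
    score = 0.0
    score += 0.5 * age                               # weight chosen by a committee, not learned
    score += 2.0 * seniority_years                   # rewards seniority...
    score += 1.0 * patient_facing_shifts_per_week    # ...and underweights frontline exposure
    return score

# A senior researcher working from home can outrank a junior clinician on the wards,
# which is the kind of outcome that got the real system scorned.
print(priority_score(age=65, seniority_years=30, patient_facing_shifts_per_week=0))  # 92.5
print(priority_score(age=30, seniority_years=2, patient_facing_shifts_per_week=6))   # 25.0
```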
But I'm kind of curious, you've been talking to tons more people, a way wider variety of people.
And I'm really curious, what sense do you have of how you
would define what AI is? Or does that definition land in a bad way or resonate strongly in a
positive way? What are your thoughts? I don't know if I have a particular
definition of what AI is because I feel like when I think about AI, I think like,
oh, it's a marketing term for a bunch of tech companies to justify whatever they're doing and to like abstract what they're actually doing, you know, because
artificial intelligence gives a particular idea of what is happening there. And instead of using
a term like machine learning or whatever else, you know, it's this, this abstract term that they
can kind of fit anything under to justify what they're doing and give it this like air and mystique of, you know, something we should be interested in, right?
Yeah. Meredith Whittaker, I think, has talked about like the marketing-ness of the term AI.
And I totally resonate with that. I think that's like sort of the beginning and the end of the
analysis, right? But I also think like part of me is trying to figure out like, what is the
refracting of the light that they're trying to do with this lens, with this weird marketing term.
And maybe this is me getting too caught up in trying to understand a cynical ploy,
but I think part of me wants to understand what is the goal that they're trying to pursue here
with calling things AI, with talking about a future of AI and things like that.
So yeah, I think I've been fixated on... I mean, I've been thinking about this for a while,
in particular, whenever I submitted a paper back when I was publishing at CHI at HCI conferences
and academic places and stuff. I would talk about AI and then I would get notes that were sort of
like, what do you mean exactly? And this and that. And I think I was sort of nibbling around the
edges, but then eventually I would be like, well, I'll just change it to algorithmic systems because I don't really want
to have this fight. But I think it is a thing that I think a lot about. And it occurs to me that
a lot of tech booster people will be very fuzzy and nebulous when they talk about AI,
but then they get very strict and annoying and nitpicky and pedantic when they hear somebody critiquing it. And so I want
to have something that I can just deploy quickly and be like, here's a link. This is the definition.
This is my operating definition. I don't care if you have another definition, but this is
kind of the most comprehensive or encapsulating thing. And then I think the other thought is,
maybe this is my computer science brain, but I also realize I don't want to use a term
that is indexical with another term.
If I'm talking about AI and I really just mean algorithmic systems, I'd rather just
say algorithmic systems.
Or if I'm talking about AI and I really just mean machine learning, I'd really just rather
say machine learning.
And I don't know what AI is providing to my analysis if I use that term when I could use
a specific other term and capture the exact same things that I'm talking about. But I do know that the word or the term or whatever has a kind of meaning. It captures
some overlaps of some ideas and excludes some other things. I don't know to what extent people
would really agree with this, but I think that for instance, I know this is background info,
but I have a cousin who's a researcher at a university
who does machine learning stuff, but it's protein folding and stuff. I don't think he describes his
work as AI. I think he talks about it as machine learning. And I think that it's about as close to
AI as anything gets, unless your definition is about the techno-political power system,
that power structure. When you include that definition, then it becomes clear why he doesn't talk about it as AI. And it becomes a lot less confusing why some things that
are machine learning aren't called AI and some things that aren't are still called AI and everything else
like that. And then also why some things that are just like a flowchart behind a curtain at Stanford
are called AI or called algorithmic or whatever, when really only by the loosest
possible definition do they qualify as that. And trying to understand what is the kind of
collective meaning that we're all building when people say that they're working with AI or that
they're using AI or building an AI. I think that makes perfect sense. And I appreciate you teasing
it out. Tech Won't Save Us is made in partnership with The Nation magazine and is hosted by me, Paris Marx. Production is by Eric Wickham and transcripts are by Brigitte Pawliw-Fry. Tech Won't Save Us relies on the support of listeners like you to keep providing critical
perspectives on the tech industry. You can join hundreds of other supporters by going to
patreon.com slash tech won't save us and making a pledge of your own. Thanks for listening.
Make sure to come back next week. Thank you.
