librarypunk - 155 - AI Refusal feat. Kay
Episode Date: September 24, 2025

This week we have Kay back on to talk about AI and what the refusal of AI means in terms of library practice and theory.
Bsky: @kslater.bsky.social

Media mentioned:
Kay's article in Library Trends: Against AI: Critical Refusal in the Library
The AI Con (book): https://thecon.ai/
"Synthetic literacy," forthcoming from Dr. Kishonna Gray: https://www.kishonnagray.com/
On librarians doing info lit sessions where part of that is telling the students how tools will track you (privacy issues):
https://www.inthelibrarywiththeleadpipe.org/2016/putting-critical-information-literacy-into-context-how-and-why-librarians-adopt-critical-practices-in-their-teaching/
https://infolit.org.uk/privacy-and-surveillance-in-the-classroom-responding-to-new-information-literacy-challenges/
https://acrlog.org/2020/11/13/privacy-consent-and-the-virtual-one-shot/
https://datalabelers.org/
https://www.fortinet.com/resources/cyberglossary/cia-triad
The People's Platform (book): https://us.macmillan.com/books/9781250062598/thepeoplesplatform/

Transcript: https://pastecode.io/s/g3d9cgmm
Join the Discord: https://discord.gg/zzEpV9QEAG
Transcript
Disney and Universal sued Midjourney.
That's pretty recent.
It hasn't gone anywhere yet.
Here he is.
Wild.
Hello.
The boys, but.
The boy.
Arthur, be polite.
We have guests.
No, don't eat cables.
You have food down there.
But cables are yummy.
It wants the spicy, spicy twizzlers.
But you can't eat cables.
They're bad for you.
Arthur.
What's the news on the New York Times copyright case?
Copyright, right.
Dude.
Is that part of?
A couple of years ago,
and that was like about news information specifically or like,
but I feel like there was some kind of claim,
like that there was some judgment.
Oh,
did it get put together with the Anthropic one?
No.
I see something from May that said it was allowed to go forward.
Mm-hmm.
But I don't,
I'm not sure if the Anthropic settlement, like,
actually,
I guess they let it go forward.
I'm just hitting paywall after paywall.
Yeah.
Because the judge rejected it September 8.
Welcome to the librarypunk segment of looking up
case law. Yeah.
Well, looking up news.
Listen as we all sit in silence.
Yeah. For somebody who cares about
AI, I'm like, I don't follow
these lawsuits. Like, something's going to happen,
I guess. I get stuff through
Google alerts, and that's mostly how I
keep up. Yeah. But I'm
trying to follow, like, specific things.
Like, AI use in academic
journals, you know,
stuff like that. Yeah.
stuff that's niche.
And then every once in a while, like a term will get used and then it will ruin the Google Alert.
So, like, I was using, like, AI and libraries.
And then the injection into code libraries thing was happening, where people were making fake code libraries.
So that ruined that Google Alert.
Oh.
Arthur keeps gazing out the window, like, a fucking, like, whaler's widow or something.
He, like, like, look at the window.
Like, when will my husband return?
From the sea.
I made him this cable-knit sweater.
I've still not seen any of the houses around Massachusetts
that have widows watches on them,
but I know they exist.
It's like a specific feature of New England houses
where there's like a place or like almost like a plank, I think,
or like an area where like a wife could like go out
and it was on top or at least high enough
so that they could see the coast because it's like in coastal towns
and it was for like Whalers Wives and other Sailor's Wives and stuff.
So yeah, it's called the Widows Watch.
Wives and boyfriends.
Yeah, but I haven't seen any in Mass.
I probably have to go down to New Bedford for that or something.
Yeah.
Okay.
Well, then I don't really have a segment.
We'll just jump straight into the article.
What if I just like start talking about Moby Dick?
Yeah.
All right.
Let's go.
I'm Justin.
I'm some kind of academic librarian and my pronouns are he and they.
I'm Sadie. I work IT at a public library and my pronouns are they/them.
I'm Jay. I'm a cataloging librarian and my pronouns are he/him.
And we have a guest which likes to introduce yourself.
Hi, I am Kay. I'm a public library worker based in the Chicagoland area.
My pronouns are any of them.
Hell yeah. Let's fucking go.
Bitches do love anime. Get that bitch an anime.
I was so confused. I didn't see Sadie's mouth moving.
I found that drop from like, I don't know, because I had to redo my soundboard, right?
Because I got a new computer.
And yeah, I don't know when we got that, but there it is.
I don't remember saying that at all, but it definitely like sounds like something I would say.
Almost exactly like something you would say and have said.
I'm trying to get a picture of my dog pouting just beyond my desk while I record because I trapped her in the room with me.
No.
It's going in the Discord.
Okay.
Beautiful.
So welcome back, Kay.
Thanks for having me.
Third time guest, technically.
You might have got picked up on the live show.
I think so.
Yeah.
Because we handed you a microphone, didn't we?
At some point?
Yeah.
Yeah.
I saw the transcript, but I just don't...
Like, I'm not going to say I listened to more than a little part of it, but I did, yes.
It sounded great.
Oh, thanks.
Yeah, I'm still surprised it came out as good as it did.
But that's having a good microphone for you.
USB mics are tough.
I got really confused.
Sorry.
The whole episode came out good.
Yeah, the room is good.
It's like, no, we're surprised by you in particular.
Like, hey, no.
So you've been working on a lot of things.
We met up at ALA and you were showing me some of the stuff you were working on.
But you've been working on this paper in library trends, critical refusal in the library.
Against AI, critical refusal in the library, which will be linked in the show notes.
So my first question is, like, how did you start
writing the paper, and why did you choose to talk about AI?
Yeah. I will clarify for listeners too. I'm recently out of library school. I finished in
December of last year. So I did, yeah, so I did the ALA Emerging Leaders Program.
Yeah, that was pretty good. We did a poster at ALA and we talked about volunteerism and
Core specifically. Don't worry, I did it too back in late 2018.
Yeah, it was an experience. I hope they do it again. Yeah, I met a lot of great folks
for that program.
But I did a lot of post-grad, like professional development work, did that program, and
then I rolled right into doing the junior fellows program, the Library of Congress,
which was very fun.
Everybody was really chill there.
So I've been doing a lot of stuff in the last two years that has been engaged in projects.
But before I was even into libraries, I did a master's in communication at University of
Illinois, Chicago.
My track before libraries was really going to be like communication studies, media studies.
I did my master's thesis on deepfakes; that provided me with a lot of context for this work.
I mean, just like the background understanding of what AI is and like what's going on in like computer science at the time.
This was probably 2019 to 2021.
So COVID happened.
I applied to some PhD programs, didn't get into the ones that had funding.
And I was like, I don't know what I'm going to do.
but I obviously, like, do scholarship.
So I took a little bit of a break from doing that,
started doing library work.
And then once I got back into school
and learned about how there's like a whole discipline
of information studies and like history
that I didn't know about before,
I was like, okay, maybe I'm a little more aligned here
in terms of scholarship and just my interest.
Because deep fakes, the study of that in communication
was a lot more about like,
how are people experiencing deep fakes?
What are the broader implications
for politics, like understanding speech and language?
And like, that's fine.
I think that's important work.
At the time, it was definitely like a sexy topic to talk about.
But at a certain point, I kind of hit a wall just in terms of like all of the solutions
to these problems were like regulation.
Things were going pretty slow in terms of like laws, people understanding certain things.
Illinois has been pretty on the cutting edge of a lot of stuff.
But Illinois has passed a couple of deepfake-related laws, like, since
this all kind of began a couple years ago.
And I'll provide context for that, too.
Like, a lot of that, this was before deep fakes were, like, part of the mainstream.
So this was when, like, they had, like, that Tom Cruise deepfake or, like, the Jordan Peele
Obama impersonation one.
So it was a pretty good experience.
But I kind of got scared away from professorship.
Like, I was like, I don't know if
it was a path for me.
I don't know if I really wanted to teach.
So I was just following a bunch of people doing information studies work on social media.
I saw the special issue call.
for library trends.
It was about gender and AI.
And I was like,
I have a lot of knowledge
that could be useful here.
And I've noticed that a lot of people
doing professional development trainings
or just talking about AI.
A lot of the stuff coming out of organizations
was very positive,
maybe even neutral, about AI.
And I was very confused about that
just because that was not any of my experience
studying that work before,
especially communication.
Everybody was very, like, aware of harms,
impacts, the violence of it all.
So a lot of it was just me approaching it like, are you guys being for real right now?
Like this is really what we're being positive about.
Like I was very confused.
So I think the paper goes into a lot of touch points about why I think AI is harmful,
just kind of on the surface, which we'll talk about probably later on.
But I'm just trying to kind of coalesce all of the discourses,
it's a fancy word, that I've seen about AI, that circulate professionally, like in, you know,
ALA or other sorts of places, as well as, like, just
online on social media among the people being critical.
So yeah, so that's kind of where I came into it.
It was kind of a, it was sort of like I thought I was out, but I got pulled back in
situation with AI.
It kind of chose me, I guess.
Curious if, especially like relating back to your previous work on like deep fakes and how
that relates to like specific types of AI, I was wondering if you could maybe talk about like
the distinctions of the kind.
of AI and like what gets labeled as AI and like how that affects this discourse.
Yeah, definitely.
Something that I learned through doing that research was there were particular applications
being used in, like, hobbyist or niche internet communities that were essentially
just, like, importing different videos into a software application, those videos being mashed
up in a particular way, whether that way is, like, constructed through algorithms, to put out
some kind of output. That's when the videos would kind of look a little wonky, where you could
tell there's a lot of differentiation between, like, space and someone's body, a little more like,
I don't know, it just didn't look as seamless. But then there's also this other
camp of things happening where there were more people who were doing specific training of, like,
generative adversarial networks, on a much larger scale than just, like, a smaller
application. So I think a lot of these things tend to be machine learning. They tend to be really just, like,
automated systems versus like things being trained in a network and then things being imported
into that network to then create a different output. And I'll also really add too that that training is
not automatic through just code. It's the labor of people creating those things and training those
systems. So it depends on the scale definitely of where things are happening. But a lot of the
applications that are commercially available are definitely operating on like a massive scale that
like involves an immense amount of outsourcing of labor to people in the global south,
etc. So yeah, I mean, like a lot of like audio transcription is really just processing frequencies
how much of that is really generative just depends on the application. But when it is
used, when the material is used to train, to create other outputs, versus just, like, adding effects,
those are two different situations. But it's really complicated and you have to like know a lot
about like how computers work to understand like what the differences really are. Right. Like I know
there's, like, a lot of OCR and transcription software now that's like, oh, we're fancy AI now.
But really it's just because it's pattern matching like algorithms and you have to like train it on like specific types of things.
But that's different.
And then, like, I want you to output this brand new thing based on this, like, library of data that, like, consumed a lake somewhere.
Yeah.
Yeah, definitely.
It's a lot of, like, the difference between, like, the computer understanding the edges of shapes and color versus
something like keywords or tags being attached to something based on a certain input;
those two things together create a different output.
Like, the latter is more generative AI.
Yeah.
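The distinction being drawn in this exchange, pattern matching that labels an input versus a system that emits new output sampled from ingested material, can be sketched in a few lines of Python. Everything here is a toy (made-up five-pixel "glyphs" and a character-level Markov chain standing in for a generative model), not any vendor's actual feature:

```python
import random

# "Pattern matching" in the OCR sense: compare an input against stored
# templates and return the closest label. Nothing new is created.
TEMPLATES = {"A": "XX.XX", "B": "XXX.X"}  # toy 5-pixel glyphs

def classify(pixels: str) -> str:
    def distance(a: str, b: str) -> int:
        return sum(1 for x, y in zip(a, b) if x != y)
    return min(TEMPLATES, key=lambda label: distance(TEMPLATES[label], pixels))

# "Generative" in the sense discussed: ingest material, then emit new
# sequences sampled from it. A tiny character-level Markov chain.
def train(corpus: str) -> dict:
    model: dict = {}
    for a, b in zip(corpus, corpus[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model: dict, start: str, length: int, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = start
    for _ in range(length):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out += rng.choice(nxt)
    return out

print(classify("XX.XX"))                  # -> "A"
print(generate(train("banana"), "b", 5))  # -> "banana"
```

Notably, with a corpus this small the "generative" side just reproduces its training text verbatim, which is exactly the memorization concern raised elsewhere in the conversation.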
Yeah, I think those distinctions are important for library workers to know as different types of
tools use AI as a marketing term.
And like, what is this thing that it's actually marketing at me?
Yeah, so much of it is, it's a sexy term to use.
Some companies are using it to pitch new resources.
to people. I mean, this is happening in a lot of our, not really my immediate experience as a public
library worker, but, like, in the collective experience of library workers here, just vendors, like, attaching
AI to things that are really just doing audio transcription or doing OCR. So it requires us to look
deeply at what these contracts are and just say, like, okay, what is this really even doing? Emily Bender
and Alex Hanna's book, The AI Con, is really, really great for just, like, thinking about approaches to all
of these things. They're also really nice people, but they are just like really into breaking down
all of these technologies in a way that does the whole, like, you know, this is all math, sure,
but like what are the implications of this being math? Like, what does it mean for it to be
algorithm, et cetera? So I really recommend that book for people. It's a really nice read. It goes
through different spheres of work, too. It talks about healthcare, talks about journalism,
talks about, like, business, I think, and even marketing too. Yeah.
So I recommend that if you're like, this is kind of a lot of technical information and I'm needing some kind of, like, friendlier read that is from experts.
It's also harder to keep up with what the technology stack is with some of these as they become products, because, like, there's layers of software.
Like, GPT-5, I think, uses an LLM to choose which LLM to use.
Like, there's, like, layers and layers of recursive compute.
And so, like, you know, I think
some of the audio editing stuff that I've used will do just voice recognition, but then run it through
like a GPT and then go back and change the audio to match the words it thought it heard.
So I was also on another podcast, and the guy, like, does basically all of it automated.
And just like random like fragments of sentences were just popping up from like a third speaker who
wasn't there. It was just, like, creating audio fragments based on things that it thought it heard.
Very wild and he didn't remove them, which was strange. But yeah, it's even for me, like,
I try and keep current on this, but the amount of layers that get thrown in. Like, if you use
Copilot and just type something in, it's trying to obscure what it's doing. Like, it'll show you,
oh, it's thinking, but actually it's, like, it's going to use this technology to do this task
and use this technology to do that task,
and it's actually, like, four or five different things going on,
and it's just calling all of that AI.
So it's even more difficult.
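The "layers of software" point here, one model choosing which model handles a request while the product reports a single "AI", can be sketched as a toy router. The model names and the keyword check standing in for a routing LLM are entirely made up:

```python
from dataclasses import dataclass

# A toy "router": a cheap classifier decides which backend handles the
# request, and the user only ever sees one "AI". Real stacks (per the
# discussion) chain several such stages: speech-to-text, an LLM pass to
# "fix" the text, audio regeneration, and so on.
@dataclass
class Backend:
    name: str

    def run(self, prompt: str) -> str:
        return f"[{self.name}] handled: {prompt}"

CHEAP = Backend("small-model")
HEAVY = Backend("big-reasoning-model")

def route(prompt: str) -> str:
    # Stand-in for "an LLM choosing which LLM to use": a keyword check.
    hard = any(w in prompt.lower() for w in ("prove", "derive", "step by step"))
    chosen = HEAVY if hard else CHEAP
    # The user just sees "AI thinking..."; the routing stays hidden.
    return chosen.run(prompt)

print(route("summarize this email"))     # goes to small-model
print(route("prove this step by step"))  # goes to big-reasoning-model
```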
Like, you talk about GANs a lot,
which I feel like people don't talk about enough.
I was just saying the other day,
it's strange how people don't talk about GANs and copyright,
because the way I've had GANs sort of explained to me,
for image generation, is basically you just keep statistically guessing
what the test image is
until you've statistically guessed what it is.
So you've done basically copyright infringement by algorithmic cheese grater.
Yeah.
It's like the monkeys on typewriters.
You get enough output,
you're going to recreate something, right?
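As a description of GAN training this is loose (the generator is steered by a discriminator's gradients, not a per-image match score), but the underlying worry, that optimizing against a signal derived from training data can end up reproducing that data verbatim, can be shown with a deliberately crude toy: a hill-climber that recovers a hidden "training image" from nothing but a similarity score. This is a sketch of the memorization concern, not a GAN:

```python
import random

def reconstruct(target: str, steps: int = 4000, seed: int = 1) -> str:
    """Recover a hidden bit-string using only a match score,
    illustrating how optimizing against training data can
    reproduce it verbatim. (Toy hill-climber, not a GAN.)"""
    rng = random.Random(seed)

    def score(guess: list) -> int:
        # The only feedback: how many positions match the target.
        return sum(g == t for g, t in zip(guess, target))

    guess = ["0"] * len(target)
    for _ in range(steps):
        i = rng.randrange(len(target))
        flipped = guess.copy()
        flipped[i] = "1" if flipped[i] == "0" else "0"
        if score(flipped) > score(guess):  # keep any improvement
            guess = flipped
    return "".join(guess)

hidden = "1011001110001011"  # stands in for a training image
print(reconstruct(hidden) == hidden)  # the "guesser" rebuilds it exactly
```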
Something that I noticed when I was doing my master's thesis was like I was focusing specifically on deep fake porn.
And I was looking at a site called Mr. Deepfakes, which now I believe is defunct.
I think the person who ran it got caught or something.
But basically I was looking at the performance of race and gender on this website, and the frequency with which, mostly at the time it was, like, Elizabeth Olsen, Emma Watson, like, a lot of these white actresses, were being transposed onto the bodies of Asian sex workers, or vice versa, with the differences in those things being, like, really based on skin color, because the GAN only understands, like, pixels and color.
There's no possible way for, at the time at least, for there to be any kind of contextual understanding.
And definitely talked about, like, the violence of, like, you know, the sex workers not being compensated for that, like, exploitation, and just, like, them never knowing that those videos are being made using their bodies.
And I kind of found that, like, a lot of the website was, I was tracking sort of, like, how popular are these videos, like, what are people talking about on these videos?
And a lot of it was people in India, like, a lot of the videos were Bollywood actresses, which I didn't expect.
going in because all the popular videos were all like white actresses. And it was just like a particular
experience that I think was helpful in me understanding what the limits of a technology are, at least at
the time. And I don't know exactly where deepfake technology stands at the moment, but at least I know
that the people who were making those videos obviously didn't care that they were, you know, exploiting anybody,
which is, you know, horrible. But yeah. So it was just, like, a lot of, it was just kind of a lot to watch
all the time too.
Like, okay, like, because like,
I don't know, Elizabeth Olsen was doing, like, some
Marvel thing. That was when
WandaVision was out, I think.
Like, oh, hey, I'm seeing this woman
all the time. Yeah.
So.
Yeah, I remember seeing a lot of the ones with
the actors from The Office,
her face was used a lot.
Yeah.
And AOC was also on it a lot,
which that was also kind of,
I don't know, I had a lot of, like,
advisement from the people I was, like,
working with in my department, just being,
like, well, think about, like, the political ramifications
for somebody like AOC, like, being depicted.
And I was like, I don't really.
Like, you know, she has the capital.
I don't know.
Like, these women are just, like,
they have no compensation for what's being done,
like, or what's, you know, being displayed,
how they're being displayed, rather, in these videos.
So I kind of felt a little jaded after the whole experience, just, like,
when I realized that the end was just, like, regulation, or, like, things being, like,
people having to self-identify, like, the video as being deepfaked, or some kind of
encoding thing happening. I was just like, okay, I don't know. But, but yeah, so that's,
that's my little deepfake story. Yeah. So in your piece, you talk about a, a piece that you were
writing before this article, which was your piece that was about, quote unquote, AI literacy. I know it's in
the article, but can you tell us the story of that editorial journey you went on? Yeah. So that was my colleague and I,
coworker, collaborator, Claire Ong.
Her and I have been doing a lot of programming about AI at our library
and doing public scholarship to some degree
about trying to get library workers interested in critical AI.
Her and I decided to pitch something to a professional association magazine,
and we wrote really about impacts, harms, et cetera,
thinking about what I ended up writing about in library trends,
but trying to just elucidate for, you know,
the public of this association.
Here are people in computer science
talking about why AI is harmful.
Like here are the resources that you might want to think about,
direct to the source essentially and kind of synthesizing those things down.
And then we pitched it as,
I think the title was something like,
something to do with, like, taking critical AI seriously,
or something like that.
Like, it's something to do with, like, really naming that
it's a thing.
And then we got the editorial sort of feedback back.
And thankfully, like, none of our writing got adjusted,
but they were like, oh, we're going to call this AI literacy, something inspiring, engaging, empowering.
And I was like, I don't know why we would just, like, call it this.
I was really confused, like, because I emailed the editor and I was like, I don't think, like, AI literacy is a thing.
Like, a professor of mine at UIC, I think she's now at Michigan, Kishonna Gray, is writing this piece about synthetic literacy.
She tweeted about it, but she hasn't published anything about it yet.
And so I tried to sort of explain this to the editor.
She was like, I don't know about all that, basically.
Like, she just didn't take my change.
But yeah, it was really, it was very interesting to see the willingness to sort of call this something that it wasn't,
at least in the way that we had made legible.
Like, literacy, like I talk about in the article,
It has to do with like comprehension, understanding, reading something,
synthesizing it in your own sort of self or with others.
And that has particular meanings.
I understand what those meanings are through ingesting that and observing it.
And it's different from-
We love Stuart Hall.
Encoding/decoding model, definitely.
Just thinking about the difference from that kind of understanding of literacy
versus just reading code or understanding what computer systems are doing.
And there was a lot of conflation of the two and there still is, I think.
But we were talking just about, like, social impacts and environmental
impact. You know, what's, like, going on and what's been reported and, like,
studied and, like, talking about Timnit Gebru getting fired, like, her whistleblowing and
what's going on at Google and stuff like that. And it was just, it felt really separated from what we
had thought it was. But they didn't really adjust any of the writing. So we were like,
okay, like, if they're not going to take that, like, title change, at least people reading it
we'll still get what we put out there. Is there anything that we could call AI literacy, or would
we prefer to call it algorithmic literacy or AI comprehension or something else?
I think it was you, Justin, who tweeted AI comprehension like a while ago or something.
And I was like, that's what that is. Because it's just like understanding what AI does.
It's just like a functional thing. So I think AI comprehension, algorithmic literacy feels to me at least a little more like understanding how code is working.
But it just depends on the context, I think. Because AI itself is like become this sort of
packaged thing that is like separate from just the code.
Most people are really far removed from the like the back end of things.
I think it's difficult to say that it's algorithmic literacy, at least to me.
But, like, I'm really looking forward to Dr. Gray's article
or book chapter coming out.
I don't really know much about it, but she calls it synthetic literacy.
So to me that feels like an understanding of, like, how, when I talked about, like,
these deepfake videos, understanding, like, what is happening in that situation,
like understanding that a face is put onto a different body, and, like, speech is altered, and, like, viewing that kind of alteration and understanding it as such. Yeah, I know I've complained about AI literacy as a term, particularly because what I hear at work is, employers, I probably just read something recently, it's like, employers want employees who are AI literate. To me, that's like, well, you could just have that without ever using an AI. Like, everything I understand about AI, I didn't learn from
using it. I learned it from reading about how it works. I learned it from people talking about
how it's broken. I wasn't sitting there playing 20 questions with it. I think, I don't know if I
brought this up. I was in a professional development thing for our faculty day recently,
and one of the faculty members was presenting with one of the instructional designers who I talked
to before, and so I knew she had, like, a grasp on how GPTs work. And he was like, yeah, you know,
and if you tell ChatGPT to keep things confidential in the session,
it will keep it confidential.
And like me and her just like shot a look at each other.
Because he wasn't talking about the thing in ChatGPT
where you used to be able to turn off the learning thing, where it would learn from
interacting with you.
He said,
ChatGPT,
I'm going to paste my book in here now.
Don't copy it.
And he just,
and it would say,
okay.
And he believed it.
And this is a man with a PhD who teaches college students and was giving
professional development to other faculty members. He doesn't understand something basic,
like, this is a lying machine. And so, you know, on one hand, I understand why the term AI
literacy is important, because it's like, this man is illiterate, in a way, but it's also like
he learned about it from using it and that's not what he should have done. Yeah. That's a very
interesting way to situate that because in his mind, he is becoming literate in the sense of like
he's learning and experiencing learning through that tool. That's just simply learning. I don't
think that's literacy. But I think when I've heard AI literacy, especially in the last year or so,
and, like, library professional development stuff, people are really keen on understanding
what the tools are, what they can do for patrons and other staff members, and that's the literacy.
But then, like, the idea of, like, quote-unquote ethics and impacts are not really a part of that
literacy. Like, I'm really feeling this frustration with this, like, separate sort of understanding
or, like, this attempt to, like, categorize literacy and ethics separately, as if, even if literacy
was the thing that they're talking about, those two things didn't need to be together. And, like, in fact,
situating it as ethics or an ethical dilemma, like, presupposes
that, like, there's a willingness to look at both sides, quote unquote, or multiple sides, when, like,
these people, like, aren't even accepting dissent or criticism and feel very, like, overwrought, and,
like, get kind of, like, defensive when you bring up a lot of the harms and impacts and stuff. So yeah,
it's, it's really weird how even people who, yeah, have PhDs, or, like, are professors, or, like, people
with some kind of authority, are falling for these tools. Yes, Sadie. Oh, I was waiting for you to finish,
but okay. Well, a lot of this is on the IT side of things, too. Like, I subscribe to a lot of
different, particularly computer, like, security information, like, newsletters and stuff. And every other,
like, article that's in all of these newsletters is about AI. And it's about how tech workers need AI or,
you know, they predict that AI is going to, like, if you're proficient in AI, this and that.
Which really frustrates me as an IT person, because it's like, shouldn't we know better? But then again,
it's, you know, Microsoft and Google and all of these companies that offer, like, free technical certifications and teaching and stuff that are also pushing all of this AI stuff. And it's like, if I can't turn Copilot off, I will be going into the registry to find that. But yeah, so, like, it's a widespread problem in the tech world too, which, like, in terms of literacy, like, the sort of information with integrity, which you bring up in your paper,
which is a really good way of putting it, in my opinion. There's none of that on the
back side of things when it comes to IT. Like, there's no discussions of, yeah, like, the impacts
and the harms, there's no discussions of, you know, what's behind it unless it's to try to push it
as a product. So like, yeah, it's certainly everywhere, which is really concerning.
Yeah, and this kind of reminds me, and I'm sure I've talked about this paper on the podcast before.
I don't remember what it's called. I'm sorry. It was part of an assignment in my library school
102 and Dr. Knox was my teacher. So if she knows what I'm talking about, put it in the comments.
But it was, there's this paper that argues that, like, librarians doing info lit, specifically,
like, in academic libraries, but I guess anywhere, right, but, like, if you're doing a, like, library session or any kind of
information literacy session.
Part of that should be when you're teaching a database or something that you tell the
students like, this will track you or this has these trackers or like if your browser has
this kind of like, you know, anti-whatever features or plugins or whatever, it will break
the way this database works so that like the librarian not only has to be literate about
all of those things in these tools,
but it's part of teaching.
That's what the literacy is.
It's not, oh, the students need to know how to use the database
and how to use the whatever and let Elsevier track them or whatever.
It's letting students know that this exists in this tool
and they can either choose to turn off all of their stuff and use the database
or have it break on them.
but like students are at least aware that that's happening and that's an ethical thing, right?
Like students are now aware that this is a thing and they're being tracked.
Their information is being gathered, right?
And the librarians being honest about that.
And like, I don't know, like part of this of like, oh, well, we have to teach students how to use AI.
We have to teach patrons how to do it.
It's like I think what's more important is like letting people know where this already exists and what it can do.
and just letting them be aware of it.
And like, that's part of the literacy to me, I think.
Yeah, Sadie.
Well, you just reminded me of, I think it was,
I think it was somebody in our Discord talking about how it was an assignment for their
library program where they had to do something or the other or sign up for something
or the other.
And when they went afterwards to request that their account be deleted or their
information be wiped, it was like a nightmare.
And it wasn't something that they wanted to sign up to begin with.
They only did it because it was required for
a specific assignment in library science. And then it took them, like, a long time to actually be able
to confirm that their data with this company was deleted. And it's like, that's, yeah,
that's exactly what that is. That's an illiterate approach to any sort of data privacy right there.
That is something that yeah, librarians should be proficient in. I just think if we're going to
do information science, we should do the information science. Like, we should look
at the stuff, see what's going on in the computer. How about that? Like, which is it? Are we doing
library science or information science? Like, it just makes me feel like I'm in the Twilight Zone.
Yeah, I mean, it is kind of like you said you had to discover the information science side of things because a lot of people treat that as theoretical or as stuff that like PhDs do.
And librarianship is really pushed by practitioners. And something I wrote down earlier is like when you talked about,
the need to like embrace AI.
There's a lot of ideology in libraries.
Oh, yeah.
It's why we have a show.
But there's also like an insecurity.
There's a, there's this constant insecurity that like librarians will be left behind.
And it's been going on for like decades.
Or it's going to take my job.
I mean, this has been going on from like the 90s or it's like we have to keep up with, you know,
we have to be cybrarians, you know, that term from the 90s.
We're going to become irrelevant, which is the thing I have heard so many times it makes me want to bash my head against something.
Everyone go watch Desk Set.
And that's also the thing about, you know,
the comparison of like you have to get on the AI bandwagon
because it'll be like the internet.
But the internet was implemented over decades.
Yeah.
Through like a lot of infrastructure.
It's completely different.
This is like saying everyone needs to get online,
get on with Microsoft Word,
because Word is the future.
And it's like, well, there's open office and stuff.
And it's one software.
It's not a new infrastructure. It's just people trying to push you into
Emacs, and you can do everything in Emacs,
like check your email.
Check your email in Emacs.
It just kind of really disrupts me, I mean, when I hear from people who are like library leaders talking about AI, like, we have to get with it basically or we're going to be left behind. And you're talking to, especially, like, people who are in library school, recently out of library school, facing a really competitive job market, who are like really struggling to figure out how to have a full-time job with benefits in a place that they like to work. There's already enough to deal with, like, enough burnout, enough problems. And we're adding this on top of it. It's like, because when you frame it in that way, I think people
tend to get defensive and they're like, well, we have to keep learning new things or whatever.
And it's like, it's not what we're saying. Like, we're just saying that like we shouldn't use the
racism machine. Like, I don't want to use that. Don't use it. Yeah. Like, I think part of this and I think
I talked about this a little bit in our like Bibframe Must Die episode that like,
there's such a problem with training and professional development and especially upskilling among librarians.
This is not a fault of the workers.
This is a fault of management and library leaders, right?
Because like, yes, things in library science and tech do change.
And you should keep on top of it. Like cataloging, you know, that shit changes all the time.
Right.
Like we're always coming up with new ways of describing things.
You know. But like there's lots of development in the field and things we have to keep on top of. Like,
that's true. But like there's such a problem with like, especially in tech services, for example,
of people not retiring out of positions or like once you get in a position, there's no career path.
Right. There's no like, okay, I'll stay in this position and then eventually I'll get promoted
to this position and then this position and in this position is you're kind of just stuck in your
position. And if you want something better, you have to leave. And people don't want to leave.
And then those people don't upskill because it's not provided to them.
And so then you get all these hot, young, fresh, like, library school grads who have all the new hotness and know everything, who are trained in RDA.
And that's the only thing they're trained in.
And they understand FRBR and WEMI and all this shit.
And they're fresh and they know these things.
And then they're not getting hired because the people who aren't upskilled aren't leaving those jobs.
so those jobs aren't available.
And it's this whole cycle of like, yeah, like we should keep on top of things,
but the people who know it aren't getting hired.
And then the people who don't know it,
their management isn't upskilling them and training them
in order to keep them abreast of things so that we don't have to use fucking AI, right?
Right.
Like, we can just be trained in other things and have skills.
Like, I think way more librarians of all ilk should have at least some sort of like
skill or literacy around
basic coding
or just like any kind of like
IT skills, because you'll be surprised
how often it comes in handy. But like
because I'm the only person
who knows anything about it in
my department suddenly I'm
the person right? Oh yeah
right? Like what if
more librarians like took like
a Python course? You know like if that was
provided in library school or in training at all
like that kind of upskilling.
It's just not happening.
Yeah.
Rant over.
I would say it would be better even to not stick to a particular language,
but just programmatic thinking.
Yeah, that too.
Because there are a lot of parallels between library work and that sort of thinking, too.
So like, yeah, a Python course, but if you are just, you just are memorizing the syntax,
it doesn't help with the critical, like, this is how it works, so therefore I can do this and that,
which actually is a lot more transferable to other coding systems.
Totally.
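A minimal, hypothetical sketch of what that transferable, programmatic thinking can look like; the task, function names, and data here are invented for illustration (the sample ISBN is the one for The People's Platform from the show notes), and the point is the shape of the logic, clean each item and then deduplicate, which carries over to any language:

```python
# A hypothetical illustration of programmatic thinking: normalize each
# item, then deduplicate. The task (cleaning a messy list of ISBNs) and
# the helper names are invented for the example.

def normalize_isbn(raw: str) -> str:
    """Strip hyphens and spaces, uppercase a trailing 'x' check digit."""
    return raw.replace("-", "").replace(" ", "").upper()

def dedupe_isbns(raw_isbns: list[str]) -> list[str]:
    """Return normalized ISBNs with duplicates removed, order preserved."""
    seen: set[str] = set()
    result: list[str] = []
    for raw in raw_isbns:
        isbn = normalize_isbn(raw)
        if isbn not in seen:
            seen.add(isbn)
            result.append(isbn)
    return result

# Three messy spellings of the same ISBN collapse into one entry.
print(dedupe_isbns(["978-1-250-06259-8", "9781250 0625 98", "978125-0062598"]))
# ['9781250062598']
```

Memorizing this exact syntax matters less than recognizing the pattern: the same clean-then-deduplicate loop could be written in any language once you see its shape.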
I think just knowing how a computer works, like, just really like, what is the hardware of this thing?
What is RAM?
Like, understanding, like, these things, I think will really go a long way.
Like, honestly, like, even public services, I mean, I've worked in public libraries for my entire library career so far,
and I will do a lot of one-on-one tech help with patrons.
And a lot of people I worked with across the big city system,
even the, like, the affluent suburban library that I work in currently,
a lot of the staff don't even know how to use the computer.
It's just like, you know. And then like I am having this current thing
where there's like not anywhere for me to move up at my current library
because everybody is like established.
And like, so I'm on the job market.
So basically.
Hire Kay.
You're an idiot if you don't.
If you're in Chicago, please hire me.
But yeah, I just, I think it's so important for library workers to, like, I know coding is like this intimidating thing, but like understanding at least like how do you access the terminal on your computer, like knowing what applications are.
What are file formats?
Like I love talking to patrons about file formats.
It's good stuff.
You don't have to know about GAMs, but I mean, I would love it if you did.
But you don't have to come into all that if you don't want to.
I took a Library Juice Academy course,
and the fucking file types were insane. I was like, well, it was insane.
Anyway, jumping back to the article,
there are three areas of critique
that you focus on.
I wanted to like kind of get at why these three.
So you say there's a reinforcement of algorithmic bias,
so racism and hate speech.
There's data collection practices
and a prolific lack of concern for user privacy
and the environmental impact.
Since this is mostly like a persuasive sort of article,
was that the top three that you felt were the most impactful for people?
Did you do any research on like what changes people's minds about AI,
makes them more skeptical?
That's a great question.
I think this is coming from my own experiences of studying AI.
And I will say I use these three things as really broad categories.
The technology has also changed quite a bit in the last two or three years, and reporting has sort of altered depending on the companies.
But a lot of these things have stayed the same in that there is no transparency at all
from the tech companies about what these things are meant to do, what the algorithms
are meant to accomplish at all.
So when I think about the reinforcement of racism and hate speech, the first thing that comes to my mind is like facial recognition, and like the enforcement of that through law enforcement surveillance. But I think I tended to focus more on like text and ChatGPT sorts of things in the article just to like have some kind of focus. But there are
so many possibilities for misrepresentation of people, history, context, lived experience,
just willful misrepresentation, and in fact intentionally so. And I think I mentioned in the article, I don't remember if I did or not, but ChatGPT at least used to say that it would not produce hate speech, like, we're not going to do this for you, and then it's so easy to break that, so easy to just do a couple of commands and unlock that from the program. But I think that was ChatGPT 3, I don't know. So that is definitely one broad category. Also, I think in that part I talk about Timnit Gebru sort of being like, hey, you guys don't care about Black women or anybody who isn't white, and Google was like, we don't care, bye.
So that was like sort of the social discourse surrounding that as well.
Data collection, I mean, yes.
Like, web scraping is such a thing. At least at the time,
OpenAI was very open about scraping the web.
I mean, this is pre a lot of the lawsuits,
a lot of the copyright issues that were going on,
which I, in the article, I don't take a stand on, because I am against private property
as a concept, but also, to this point,
I recommend Astra Taylor's book called The People's Platform.
She has a good chapter on copyright.
It talks about a lot of these issues that I think if I were to go back and add that citation,
I would do that there.
It just talks about the sort of different arguments of like,
you know, copyright is a way to control people's likeness, intellectual property,
et cetera, and really, you know, creates barriers to access.
It also, in some cases, ensures that people get paid for their work, and like that's complicated.
So because of that, I'm like, I'm not going to make a claim here.
Somebody else can if
they want to. But in terms of
user privacy, yeah, it's like,
I mean, I think
part of informational literacy for me and
sort of like library workers is like, like
Jay was saying earlier, people understanding
the systems that work behind the technology.
So like the feds can very
easily like get access to
information, things that you put into
ChatGPT. I believe law
enforcement can access it to a degree if there's a
warrant. Same thing with Discord. Same thing
with a Meta platform, especially
with what's come up in the last week.
So, you know, be aware of those things.
It's sort of my take on that.
And when it comes to environmental impact, I mean, there is a lot of reporting about the sort of water usage and electrical usage of AI, definitely affecting people's communities. What comes to mind most particularly is what's happening in Memphis right now with xAI, like blowing methane gas out of these plants and poisoning everybody around the area. So that's the sort of impact that hadn't happened yet when I wrote the paper.
But so I want to bring up this whole anecdote: I went to a webinar last week that was about this new library book about generative AI, its use in the library or something to that degree. And I asked a question in the chat about how they didn't talk about ethics or impacts. And they said in the presentation, like, you know, we didn't talk about this because we didn't want to get into, essentially, the messiness of it.
And I was like, okay, at least you're saying that.
But like, we'd love to see more, obviously.
So, but someone in the, one of the authors made this claim that, like, being critical of AI was somehow reinforcing the traditions of librarianship because it means that we don't move forward or something or innovate.
And I was like, that's not what refusal means.
That's not right.
And so I cited, you know, like environmental impacts and the outsourcing of labor, exploitation, et cetera.
And then the other author was like, well, wait until you hear about the environmental impacts of, like, cultivating beef.
And I was like, oh, is that really how we're going to approach this argument right now? So I think there's more to be said to deconstruct those kinds of
arguments. But yeah, those are sort of my main three areas of critique in the article that I hope
to expand upon in the future. Definitely looking at like for future projects, career and project,
thinking about data centers, their impact on local communities. You know, in Chicago, we have
data centers being built here that have raised our electrical costs 10%. No one consented to that at all.
And so that's besides the environmental impacts, at least just like the immediate sort of utility
costs, raising prices for residents, for businesses, schools, any place that uses electricity,
which is everywhere. So that's important to think about and also thinking about, as I've been
saying, the data workers themselves who are actually doing the training work. And so I think
of environmental also in a sense of, like, nature, as well as the labor
environment, the environment of people in society. If the AI booster says they got beef,
tell him I'm a vegetarian and I ain't
fucking scared of him.
Can I please get that
as a drop just so I can have it personally?
It has to be dropped.
It's my new ringtone.
I thought you were holding that in too.
Uh-huh.
Yeah.
He had that one ready to go.
Every time.
Just sitting there vibrating
at frequencies that you can't see.
I, what does a politics of refusal look like in practice?
Like if we are saying that this is a refusal of AI,
what does that mean we are facilitating in the meantime?
Because there's a thing of like,
it's acceptable to affix tech solutions to social problems
rather than to make space for social solutions.
So if we are refusing this tech solution,
what are we trying to make space for in social solutions?
Or is that the wrong track?
I think the problems that people think AI is trying to solve, like burnout and accessibility, are things that accommodations can be made for in the workplace. People can make the choice to change how they conduct themselves. So I think agency is a big part of that. But in saying no to the technology, it's important that we understand too, and I think we all do obviously, that everyone is in a different context and material condition in which they're working.
Like, not everybody is going to get access to, like, that vendor contract discussion.
So I think if you feel confident enough at work to openly say that you don't want to use that particular technology, and that you value, you know, the human labor that you're getting paid to do, that's, I think, the first thing.
I think Emily Bender and Alex Hanna in their book talk about the importance of understanding what the actual outputs of a technology are meant to be, and asking questions of the people who are trying to put AI in the workplace,
like, what is this really meant to accomplish? Are there ways that we can actually step in and say, like, you know, what if we changed our method of management or administration of a particular tool in the workplace? I think, at least as somebody who's doing scholarship, public scholarship is really important, making information readily available and accessible to people. So I was really glad that the issue is open access, just so people can actually learn about this and be able to share it with people. I think, you know, there is
a lot of space for critique, and it is kind of a hard situation sometimes. Like, I know sometimes I can feel very, not like afraid, but just like, I'm wading into a capital-S Situation when I am faced with somebody who is like, positive about AI, and I have to say, like, hey, I don't agree with this. So I think, you know,
having agency and saying, hey, I don't like this
and that's okay. That's okay.
I don't like it. Also, unionize.
If you can. Yeah, I was about to say, it's like, I'm about to grab
my microphone so tenderly and be like, listen,
listener. I've got my arm around your shoulder like, hey, buddy, how you doing? How's your day? Have you
unionized your workplace yet? Have you put a tech clause in your collective bargaining agreement yet?
You can do this. You can refuse through unionizing. You can do it, I promise.
And if in your workplace there is a situation where you may or may not get fired for trying to organize, if there's a risk there, I say, you know, talk to your peers.
Like, I don't, like, be socially engaged.
Like, say, like, hey, here are some resources.
I was just thinking this is kind of weird.
Trying to have conversations with folks, I think it's really, really important.
If you can, the goal is really getting to, like, a proper bargaining agreement.
But try if you can.
Yeah.
In North Carolina, they do meet and confer, because you can't have collective bargaining in public service in North Carolina.
And in a lot of the South, meet and confer is something you can absolutely do when it works.
Also, if people are afraid of organizing: all organizing, no matter how big or small,
is literally just about one-on-ones.
Like, that is the core of what organizing is: can you talk to another person?
If you can't, learn. I'm tired of people going like, I don't know how to talk to people.
Learn. You can, I promise.
Please, we talk to people all the time.
Like, I'm still sorry.
Just do it.
So bad at that.
I understand.
Everyone has their own capacity, too.
like, sometimes I can get too, like, air sign with it, where I'm like, everyone's solid. But I'm like, no, you're not, and that's okay. But, like, so, you know, I have been in, like, substance use circles, and the idea of meetings that are just sort of you and somebody else in the room, it's really the same kind of concept: just because there's two of you doesn't mean there's no group involved, that you can't just have a discussion
and talk. And try to find community online if there's nobody, like, physically near you.
There are plenty of people who are very open about being critical of this technology.
Yeah.
So I think there are definitely ways, at least in terms of like collective organizing.
I will say too, there are a lot of data workers' organizations that are specific to resisting exploitation,
especially the Tech Workers Coalition, as well as the data labelers'
association, I think. Let me find the link.
but those are people who are like data workers who are actually like doing content moderation
and like annotation and like who are being affected by AI in a very real physical material
Are those the people in, like, Kenya who unionized? Yeah, I remember when that happened. That was dope.
And this has been Kay and Jay's union corner.
a couple of minutes ago. I think it was a book, but I didn't quite catch the
title. It wasn't The AI Con.
Yeah. Maybe I'll just have to go and actually
listen to the episode. No.
If it's gone, I was just...
If it's not, it's Data Cartels by Sarah Lamdan.
That's probably what it was.
Shouts to Sarah Lamdan,
friend of the pod. We know you're listening, Sarah.
Hi.
We hope you're listening, Sarah.
Thank you so much, yeah.
You're so cool. Anyway.
Yeah, I, one of the...
You mentioned access to information that has integrity.
Yeah.
There's something I took a note of, I can't remember which section of the paper that was in, though.
It was closer towards the end, I think.
But it was one of the ways in which we can talk about the value of librarianship in response to AI,
because the information that you get out of an AI is non-repeatable and it's non-reversible.
So you can, like, ask it who Tom Cruise's mother is. But if you type in Tom Cruise's mother's name and say, like, who is this person's son, it might not give you the answer, because it's not a database.
So you can't do like back and forth searching
and then re-retrieving information because it's not structured in any way.
I'm curious how we like talk about information integrity
because I feel like in the current climate,
it's a very difficult subject matter to get people to care about
because there's this sort of nihilistic approach to information.
I empathize with this as somebody who did a communication degree
and learning about
the Overton window,
learning about, like, framing,
learning about, like,
different kinds of ways
that speech is manipulated
and or speech is framed
in very particular ways
to reach a certain end.
It's also just generally in, like,
business, you know,
societal kind of context or politics,
civic engagement, etc.
Like, things are trying to reach
a certain action or certain ends.
And, like, it's hard,
and I really understand, like,
a feeling of nihilism
because I do struggle a lot
with this, like, sense of, like,
okay, what is even like real quote unquote information?
I'm not super educated on information literacy, like to the same degrees, like a lot of my peers, I think.
I just didn't study it in library school, but I think I do it at work.
Probably better off, to be honest.
Yeah, like, I don't know.
I saw that framework and I was like, okay, that's, that's true.
It's something.
It sure is something.
Yeah.
What I take away from it is like the authority point.
Like, authority is constructed, and this is
something that is contextual. And so I think for me, two things I think about is like, however I think
about information, it's like, okay, what end is this information trying to reach? As you're thinking about,
I don't know, someone trying to give some kind of fact to you. It's like, I don't know, some kind of
statistical fact, like, say some politician or whatever doing that. And you're like, I think that's wrong.
Like, great, act on that impulse. Also just try to look up more information about like what that thing is.
I think that's kind of an obvious point. But just like, you know, taking the step to,
critically comprehend what people are saying. Also, thinking about integrity, I think also when
I was writing it, I was thinking a lot about like file integrity and like the literal like metadata
of files. And just like, yeah. Exactly. Like, the technical metadata
helps to construct what this thing is. So that's where I feel comfortable
speaking on it. I think it's definitely one of those, like, I have to wrap up this paper,
this is a solution that I'm thinking about.
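A small sketch of what file-level integrity checking can look like in practice, the kind of "fixity" value that archival metadata often records alongside a file; this is a generic illustration with invented sample bytes, not anything specific from the article:

```python
# A generic illustration of fixity checking: hash a file's bytes, record
# the digest as metadata, and re-hash later to confirm nothing changed.
import hashlib

def fixity(data: bytes) -> str:
    """Return the SHA-256 digest of the given bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

original = b"a digitized page of text"
recorded_digest = fixity(original)  # stored alongside the file as metadata

# Later: verify a copy against the recorded digest.
print(fixity(b"a digitized page of text") == recorded_digest)  # True
print(fixity(b"a tampered page of text") == recorded_digest)   # False
```

This is the same mechanism formats like BagIt use for their checksum manifests: the technical metadata lets you say, with evidence, whether this thing is still what it was.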
But yeah, I really like the idea of being able to track where information is coming from
and understanding what is this meant to serve, who is saying this, what is their context,
why are they saying this to me in this particular moment?
Like, what is the goal here?
Like, you all were saying before about, you know, like understanding what the vendor's
goal is meant to be.
Like, surely to some degree they are trying to provide a service, but that service is
going to come at the expense of our money.
Right? So what does that really impact? What does that really mean for us? And what is the power dynamic there? So thinking about power and the role of that, as well as possibilities for framing and intent. A lot of things are going to intend to mislead people and also increase engagement, as we've seen obviously in the last many years of just being on social media. A lot of information is just meant to animate you or meant to
excite you in a way that gets you pissed off and gets you, you know, feeling a sense of like,
you want to be able to create content that then, you know, makes more money for these platforms.
Yeah. That's my little social media study soapbox. But on the topic of integrity, I'm throwing
this out here because I've, this is one of the parallels that I've seen for a long time is in
cybersecurity, there's the CIA principle, which is confidentiality, integrity, and availability,
and how you have to balance those three things when you are basically doing a risk assessment.
And information is basically the same, right?
You want to think about the confidentiality of your information,
what the availability of it is, and the integrity.
And yeah, like the integrity part, it's dropped a lot, I feel.
So I'm just throwing that out there because it's one of those things that I think about
all of the time in relation to a bunch of different things.
And I wanted to get it into the show notes.
So there you go.
That's a really helpful resource.
I'm thinking also, too, about, like, subjectivity and information.
and the importance of thinking critically about the person themselves
conveying information to you and where they might stand within an institution
or a larger sort of infrastructure, and how that frames their speech.
I mean, I think about this a lot with going back to library leaders and AI.
I think a lot of people are feeling a sense of, like,
if I don't parrot this talking point about AI,
then I might not get a job or I might not be accepted.
I think it's really about belonging.
Like, I think it's about, because when I talk about this with people, or when I'm on social media or whatever, sometimes I feel like there is a sense of like, well, the cool kids don't like AI.
And like, I'm not a cool kid.
And that makes me feel bad about myself.
And it's like, okay, like we're adults.
Like, I don't know why this is happening.
Like, yeah.
So I just sometimes think there's a lot of ego and, like, emotion, affect involved in these discussions
that I wish more library people were talking about, but yeah, at least in scholarship.
I think they do.
But at least in the words and the books and such.
Yeah, there's a lot of signaling that has to happen, which is that you don't necessarily
need to believe something, but you sign on for the beliefs.
I mean, it applies to almost like any kind of social situation.
So you say certain things in order to show that you are in some kind of in group.
Yep.
So, yeah, that's, I think, definitely among boosters.
it's definitely like I'm with it
I'm with this group
I'm with the people who are making the money
who are doing the stuff
who are changing the world
even if they don't like
understand AI in any way
or you know
they sign up for those beliefs
and well
even if they don't believe
them themselves entirely
or think about them
they have a belief
about those beliefs
that they're good things to believe
well anyway
I think we've covered everything
is there anything that we missed
I can talk about the Junior
Fellows program a little bit
oh yeah
in case other people
were interested in doing that
Yeah, I mean, professional development-wise, I think that was a really great experience.
If folks are looking for paid internships that have remote options,
I really recommend it.
However, you do have to become a federal employee, at least temporarily.
So that's just sort of the barrier to that.
So besides that, you get access to understanding a part of the Library of Congress, which is pretty cool.
I worked at the web archiving section, and everybody there was really great.
And I got to work with the NASCAR communications web archives, which is really fun.
I got to actually, like, essentially do cataloging, which was really impactful and just, like, you know, help people understand, like, within a certain subject matter, how to organize more files and records and stuff.
Even though it was like three months, I felt like it was impactful.
So I recommend that to people that's there.
I think you have to be coming out of school.
It could be undergrad or grad.
I actually was one of the few people who were out of grad school.
Most people were out of undergrad and going into library school,
or like thinking about library school, which is very great.
So yeah, it's paid depending on where you are.
So I recommend that.
ALA has something, certainly.
But I recommend, like, at least if you want something a little more concentrated,
project-focused, like work experience, I recommend the Junior Fellows program.
I'm glad they have remote options now.
I thought about doing it when I was going out of undergrad before I went into library school,
but there weren't remote options at the time.
And so it was like, I'm going to live in Washington, D.C. for three months.
And I was like, I can't do that dog.
So, yeah, I wanted to do it.
It's really cool you got to and that they do remote options now.
I was grateful to be able to do it.
And also, I was able to take a leave from my job.
Like, that was really impactful.
If I wasn't able to do that,
I wouldn't have.
And also, like, I live with my partner.
Like, we split costs.
Like, so there are ways that, like, it worked for me.
I would say it's certainly probably
better for folks who are, yeah, earlier
in maybe their career journey, who can take a couple months off from a job
or just start working at the Library of Congress, I guess.
It's a great recruitment program, basically.
It's like it's a good way to kind of get people in the door.
But yeah, D.C. is not the move for me at the moment, at least.
But, yeah.
All right.
I'm going to put the article in the notes and everything that we mentioned, all the books.
Do you want to plug anything?
People can find you or anything like that.
I am on Bluesky at K, the letter K, and then S-L-A-T-E-R,
dot B-S-K-Y dot social.
It's the main place you can find me that I talk about library stuff.
I'm also, I've worked with the Library Freedom Project on the AI and Library Survey,
and we're doing a lot of work with that,
at some point right now we're taking the survey in, and we're looking at results
and doing all the fancy coding and stuff, so that's cool.
Yeah, there's kind of a lot of current projects.
I have a lot of applications in for things, so things are sort of incubating,
but trying to do more work specific about data centers and data workers and connecting that to
information studies.
Yeah.
Also, hire Kay.
Yeah, please.
Yeah, okay.
I'm in the Chicagoland area.
GLAM would be cool.
I work in a maker space right now, but it's not forever for me.
But yes, GLAM, archives, A-plus.
Nice.
All right.
Well, thanks for coming back for a third time.
Yeah.
Thanks for having me.
I'm so happy to see your faces.
Yeah.
Yeah.
Good night.
