Tech Won't Save Us - How Interfaces Shape Our Relationship to Tech w/ Zachary Kaiser
Episode Date: February 15, 2024
Paris Marx is joined by Zachary Kaiser to discuss the power of tech interfaces, why data isn't an accurate reflection of the world, and why we need to discuss democratic decomputerization.
Zachary Kaiser is an Associate Professor of Graphic Design and Experience Architecture at Michigan State University. He's also the author of Interfaces and Us: User Experience Design and the Making of the Computable Subject.
Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Support the show on Patreon.
The podcast is made in partnership with The Nation. Production is by Eric Wickham. Transcripts are by Brigitte Pawliw-Fry.
Also mentioned in this episode:
Paris is speaking in Auckland on February 18 at an event hosted by Tohatoha.
Zachary wrote about dream reading technologies for Real Life.
Zachary mentions specific works by David Golumbia, Ivan Illich, Aaron Benanav, John Cheney-Lippold, Thomas F. Tierney, Marisa Brandt, Arturo Escobar, and James Ferguson.
Transcript
Having a compelling vision of living well with less, having a compelling vision of a different
way to be in the world is lacking because advertising is so good at convincing us
that the way things are right now is the best that we've got. Hello and welcome to Tech Won't Save Us, made in partnership with The Nation magazine.
I'm your host, Paris Marx, and this week I have a pretty interesting conversation for you.
My guest is Zach Kaiser. He is an associate professor of graphic design and experience architecture at Michigan State University. He's also the author of Interfaces and Us, User Experience Design and
the Making of the Computable Subject. Now we've talked on the show before about this proliferation
of smart or supposedly smart gadgets, where everything that we use now seems to have a screen
or internet connectivity or voice control or something like that. And the
commercial pressures by major tech companies and, you know, other companies that we wouldn't even
generally consider tech companies in order to do that, right? In order to add all of these
technologies to, in many cases, pretty mundane things that probably don't need them in the first
place. But in Zach's book, he focuses on a bigger element of
this, where the interfaces that we use from these smart devices to the apps to everything else kind
of positions us in a particular way and makes us understand ourselves in a particular way because
of the way that data is collected on us and then presented back to us as though it's an accurate
reflection of the reality
in the world that we live in, when that is not necessarily the case.
Because these things do have to be translated through the particular sensors that pick up
this data in the first place.
But when we build our world in this way, when we have all of this data being collected on
us and the expectation is that we shape ourselves in order to better reflect what
the data wants us to be, that has very serious impacts on how we think about addressing issues
in the world, how we think about addressing issues in our own lives, how we think about
changing the world for the better. Because if something can't be tracked or recorded with data,
then what is the point of doing it? Or is it even something that we recognize has to be addressed or can be addressed? And then, of course, there's the deeper question. If all of these technologies are having these deep impacts on us, should we really be collecting data on so much and using that as the metric that allows us to determine if we are doing well as a people,
if society is headed in the right direction, there's a deeper challenge that's needed here.
And so we end this conversation with a talk about this notion of de-computerization and
actually challenging this idea that the complete rollout of digital technologies is actually
a form of progress and something to be embraced.
So I quite enjoyed this conversation with Zach. If you feel like you learned from it,
make sure to leave a five-star review on Apple Podcasts or Spotify. You can also share the show
on social media or with any friends or colleagues who you think would learn from it. And if you do
want to support the work that goes into making the show every week so I can keep having these
critical conversations, you can join supporters like Dunry from Ann Arbor, Michigan, Ezra from
Seattle, Allison from Montreal, and Matthew from Tennessee by going to patreon.com slash tech
won't save us where you can become a supporter as well. Thanks so much and enjoy this week's
conversation. Zach, welcome to Tech Won't Save Us. Thanks. I'm so honored to be here. I just
have to say I love your podcast so much. I'm so excited. Thank you so much. I'll, you know,
always accept that compliment.
I think the listeners will start to be like, is he just forcing everyone to compliment him
in exchange for coming on the show? But no, man, it's legit. It's legit.
So, you know, you have this book that I read, which I thought was fascinating, called Interfaces
and Us. And I think that people recognize that the technologies we use are kind of shaped by
capitalism, by the socioeconomic reality that we live in.
But why is it important to look at the interface level of what is going on with these gadgets
and apps and things like that, rather than just kind of the broader level that we would
usually look at, I guess?
It's such a good question.
So I think for me, an ahistorical
answer in terms of my career would reference sort of a Marxist materialism. Like, if I had known
about that when I started this project, which I really didn't. Like, I think my learning as someone
on the left, as someone who came out of industry, I don't have a PhD, I have an MFA. And not to say that people with MFAs
can't learn about this stuff, but I was in industry. There's so many things I didn't know
about when I came to academia that part of this book is really itself a compendium of stuff I've
learned along the way. And I think the engagement, the direct engagement with interfaces is to me, part and parcel of understanding the relationship between
technology, if we can sort of, we'll put it in the singular, even though we acknowledge it as
multiple, between technology and politics and between political economy and society, and also
how it impacts us as individuals. And I think that not only is it important to have that as opposed to a sort of broader
engagement, like you need a case study. You need to be able to say, oh, well, like,
I can assert, for example, that the interface to my Fitbit or whatever is making me a particular way,
or I can assert that, even more broadly, quantified self applications are like making me feel a particular way about myself, or even the aspiration to 10,000 steps.
But until we like really directly engage with those objects and also including the technologies underlying them.
We were just talking about the wonderful work of David Golumbia.
That was one of the things I always appreciated about his work is like the really specific engagement with actual technologies themselves
and to have worked in that space and to be able to understand how those things work.
There is sort of what I might call like an exegesis about all the technologies and like
a first model of a Fitbit in my book.
And like there's a reason for that, which is it's sort of trying to live out
the materialism to which like I subscribe. I think that there's like a strong case to be made for
that. And also, I mean, the last part of that I'll answer to that question is: we only experience, "we"
meaning, and again, like I use the term "we" and it's feedback I got in the manuscript as well.
I think it's really important feedback. It's like when I use we, I tend to mean those of us in the global North. I tend to mean those of us
using these technologies as consumers. I might say we as a designer, and I'll try to qualify
that statement. But I think it's super important to say like, when I say we, there's a geopolitical,
sociopolitical and political economic dimension to that that is worth acknowledging. So when those
of us who are users of these technologies use them, we only experience them at the level of
the interface. We don't open them up. We don't build them unless we're designing them. And even
then, designers often don't know how the data about someone's body is captured. They're not examining the accelerometers or
any other sort of sensor, posture sensors or whatever people are using these days.
I found it really interesting as you describe the importance of those interfaces and that being how
we interact with these technologies. In the book, you talk about how those interfaces are also
kind of a place of idea transmission, right? These particular ideas that the designers and the companies behind these technologies
have for how we should interact with them, the ideas for, you know, I guess even how
we should like exist are then kind of transmitted to us through the way that this product or
this app or what have you is actually designed.
And I feel like when we come to these technologies, we are
often not thinking about that either, right? It's just like, okay, how do I use this? How do I get
what I need out of it? How do I make it work for me? But in the process of that kind of interaction,
there is also a shaping that is going on. Oh, 100%. And I would say also for those of us that engage in the critical side of it,
because of the culture or because of the relationships that we're steeped in,
I think we have some assumptions that allow us to skip over the interaction with the interface
to the critique that we engage in. But I think for me as a teacher, particularly teaching in
a pre-professional undergraduate program where my students are like
going to go and get jobs doing this thing. There's this sort of like instrumentalizing
function of design where, you know, we just sort of do the thing and we make it user-friendly,
you know, whatever that means. But I think taking a minute to stop and like look at the interface
and look at its rhetorical dimension and look at its ideological dimension, I think is like an important teaching moment that, and it's part of the reason I wrote
the book or part of the reason I conceived of it in the way I did is because I saw a lot of
literature like engaging with the specific technologies like underlying the interface.
And then there's a lot of literature talking sort of at a more high level about the sort of political, economic, and social dimensions of UX or of technology.
But then there's that middle ground, which is literally where people meet this digital version of themselves that a lot of folks were maybe addressing tangentially or just a little bit.
And so, that's part of the reason I think I wrote the book too. Yeah. But I think it's an important thing
to consider, right? Because obviously a lot of my work is like thinking bigger about the political
economy of these technologies, not so much going down to the actual site where I'm clicking on an
app on my phone and seeing like exactly how it works or whatever. Right. So I think it's an
interesting lens through which to consider the way that these technologies then affect us or impact us or have us start thinking about
ourselves. Like I remember, and maybe this is jumping ahead a bit, but we can circle back,
you know, like last year, you know, it was the new year. I was like, I'm probably going to exercise
more, blah, blah, blah. And one of the first things that came to mind when I thought about
wanting to exercise more was I should probably get an Apple watch so I can like track it. Right. And then like
immediately, because I think about these things all the time, I was like, why do I consider
wanting to exercise more means I need an Apple watch to like track it or whatever. I was like,
this just goes completely against everything I talk about all the time. I did not end up getting the Apple Watch.
100, 100%.
I wrote an essay about dream reading technologies for Real Life.
And the intro to that essay is basically me being like, I'm trying to sleep better.
So, I got all these apps to like help me sleep better, right?
It was like, you know, or like, I have a smartwatch, like, help me sleep better. And I'm like, wait a second,
what, what am I doing? So yeah, a hundred percent. Yeah. And I would say that this is a very common thing, right? We're trying to improve something in our lives or trying to address something.
And the immediate assumption is like, let's find the technology or the app that is going to help me to achieve this
goal that I have. Not even kind of taking the step to consider, is the technology going to help me do
this? There's just that kind of like implicit assumption that of course, everything is
technological. Everything is like appified. This is going to help me do whatever it is I want to do.
And also on top of that, even if it helps you do what you want to do. And I think this is like one of the things I try to address
in the book is there is an important distinction between even if something helps you. Like there's
the accuracy question. Is this accurate? And I think that's one of the like things that a lot
of criticism, especially in the early days of quantified self-technologies, a lot of the criticism hinged on accuracy and more recently bias. And those are really important. But I also
want to point out that like, even if those things are accurate, it is then shaping your behavior in
order to enhance the feeling of accuracy, in order to enhance the idea that you are nothing but a
computer and that you can be
quantified through these technologies that measure your behavior. And like part of what I want to ask
is like, is that desirable? Is that a way that we as a society or as people or as communities want
to live? And there's not like a binary answer to that. It's accurate or it's not accurate. You
know, like technology can help us and it can help you be healthier. It can help you work out, can help you track certain
things about your body. You know, I have one of my best friends in the whole world is diabetic
and I will tell you for certain, his life has been enhanced so greatly by medical technological
advances. And I don't want to discount that. You know,
when I say sort of flippantly, like computers were a mistake, I have to acknowledge that there
have been things that have materially changed the lives of folks for the better. At the same time,
the underlying impetus for those things and the manner in which they're distributed,
who has access to them, who doesn't, the sort of political economy of patents, all of this other stuff comes together to create a much
different world than the world that that one individual person might occupy and sort of
like be benefiting from.
And I think that like there's just, there's like a lot there. And I think that we have to try to unpack some of those things in addition to the accuracy
question, in addition to the bias question, like there's that like deeper level, which
is more complicated if we're honest about it.
Totally.
But that nuance is key, right?
Recognizing that that nuance exists, and that it's not just that every like digital technology
is terrible and we need to get rid of it, but there are ways that technologies can be used very concretely to enhance people's
lives, and it's totally fine to embrace that, like someone who is diabetic and who really benefits
from having this ability to, kind of, you know, track blood levels and all this kind of stuff.
Whereas the idea that every single one of us should have an Apple Watch strapped to our wrists
tracking everything that we do and, you know, all bodily functions all the time, that strays more into the ideological. I don't know, maybe you disagree with that, but this idea is
definitely held by the tech industry, that the human is computable, can be recorded, or can be
like quantified with data, and that that data is an accurate representation of the human. I think
that is an ideological statement that I don't agree with, not something that is rooted in reality, even though the people at the top of the tech industry, the people pushing these ideas definitely think that that is how the world works.
Oh, yeah. And that's, again, you know, to me, that's like one of these things. I'm actually writing a paper with Gabi Schaffzin.
Shout out to erstwhile Tech Won't Save Us listener and web developer.
Exactly. Maker of our website. Yeah.
So, we're working on a paper right now called, Should We Scare Our Students? And the question
that we ask in the paper is the title, but also we then drill down to like the question of whether
or not, like we are giving ourselves too much credit as educators in saying, should we scare our students?
And also, are we giving our students enough credit?
And by that, I mean like, just like what you were saying, like this is an ideological thing.
The belief that we are fundamentally computer-like, computable and computing in some ways, that's an ideological belief. But just like Stuart Hall knew back in
the 80s, when Stuart Hall writes The Great Moving Right Show and he talks about why Thatcherism
takes hold in Britain, he smartly says like there is a kernel of truth that maps onto the material
experience of people's lives in Thatcher's ideas. And so, it's not the sort of
Engelsian false consciousness, right? Like, Stuart Hall criticizes his colleagues, who say,
you know, he says like, you know, how is it that everyone I'm surrounded by can see through this
screen of ideology, right? Whereas all these other people in the world are just like living
in false consciousness. You know, that's a really self-important way to sort of like live in the
world. And Stuart Hall's right that that's like not how things work. Ideologies proliferate in
part because they're mapping onto at least someone's like material circumstances. And so,
if we take a look at, you know at how your Apple Watch may help you work
out or how certain other innovations produce certain personal goods or social goods, it would
be disingenuous to say that like, I don't know, we're all like operating in false consciousness.
I think that we have to acknowledge that there's a material validity that we experience when we engage with the products of Silicon Valley, even though...
And also, I think the other component of that is like, I'm curious the degree to which those in Silicon Valley, I think you've got like the PMC, right?
The professional managerial class, right?
Like, they've probably adopted this ideology so much more thoroughly than the people that actually own the means of production, right? Like Musk
and Zuckerberg, those guys don't care whether we're computers or not. They're making a bunch
of money. It doesn't matter, right? That's different than somebody who's getting paid
$500,000 but is still a wage laborer, right? They might be getting paid a bunch of money,
but they're getting paid to internalize and maximize the value of that ideology from them on down.
I think there's some like interesting things there.
And also like my UX students, you know,
like when my students go out into the world,
in some ways they have to internalize the ideology
in order to make like the products and services they design usable. So there's like, again, like that nuance is really important. And
I think that's where, you know, an intervention like Illich's work and like Benanav's work,
I think becomes really important. We can talk about that later. I figured that's maybe on the
docket for later. Yeah, it's absolutely coming up. I do think it's interesting you talk about
the distinctions there, because I think I would say that some of the people at the top also very fervently believe it because they believe themselves to be computer-like in a sense.
Also very true, right? Like this sort of maximizing optimization is very much in line with the effective altruism crowd, right? Like that's exactly what those folks are doing. And of course, it's garbage,
just like to be fully clear here. Yeah, absolutely. You know, like I see the
statements that Elon Musk makes where he's talking about his brain as though it's a computer
all the time, right? And so, there's clearly something going on there.
And that's why everybody needs to read David Golumbia's The Cultural Logic of Computation. If there's a book that demonstrates the ideological dimension of believing that your brain is
a computer, I don't think there's a better book that does it than that book.
That's a great recommendation.
I also think that's a good opportunity to pivot to talk about some of the bigger ideas
that you present in the book.
Obviously, you've mentioned the term quantified self a few times. I think that one's probably pretty obvious to people, you know, this idea
that we are going to collect all this data on ourselves, we're going to quantify these aspects
of ourselves. But you also talk about something called the computable subjectivity. Now, you know,
that's probably a bit of an academic term. Can you break that down for us and talk to us what
it actually means, what this computable subjectivity actually means and what it means for us?
Yeah, sure. Like all good academics, I felt like I had to invent something in order to
continue to advance my career. I'm just kidding. I mean, there's a certain, there's a grain of
truth to that, right? And we can talk about academia at some point if we need to. But,
you know, to me, this idea that like John Cheney-Lippold, who also had a huge influence on this
book, there's a shout out to him in the acknowledgements. Another like touchstone for
what the genesis of this book was, was John's article, A New Algorithmic Identity: Soft
Biopolitics and the Modulation of Control, which came out in, I want to say, 2011. It's old and it is an
awesome, awesome article. And he expands
on it in his book, We Are Data. And one of the things that I was thinking about is sort of like
the linkage between the ways that we're construed as data and then something like what Golumbia is
talking about, which is the ways in which we've come to be understood as functioning like a computer and how those link up to produce
a subject, a political subject specifically that operates computationally and that also
is made up of nothing but information that can be read by a computer.
And so, like, I think I might distinguish that from just like being made up of data
in the sense that like data can be understood maybe to be, you know, everything that's computationally legible.
But I would suggest that there's like metadata and all sorts of other stuff that are like properties of information that we feed into computers that they then respond to. But the point is that we, as a sort of global North, sort of Western hegemonic society, have come to understand
ourselves through the products and services that we use, which tend to be computational
and tend to have interfaces to them.
We've come to understand ourselves as both computing and legible to computation, in other
words, computable.
We do so not purely out of ideological commitment,
but precisely because of the convenience and ease and the way that these things allow us
to work more or to, you know, be more efficient or productive, right? There's a material benefit
that we derive from using these things in a particular way,
and it only serves to reinforce that as an ideological proposition.
As part of that, you also talk about how there's this assumption, I guess, as part of this, that
the data that we collect is an accurate representation of the world around us,
and that there's no kind of barrier there that, you know, it's not that there are some things
that can be collected and some things that cannot, but whatever can be collected, whatever can be made legible by these computers
is also the world, basically. Can you talk about kind of the problems with that understanding of,
you know, how we see the world around us or how these technologies position how we should see the world around us, I guess.
Yeah. I mean, there's like a depth to that idea that I think is really important. So like the
idea that there's a one-to-one correspondence between data and world is one that we often
see sort of just like manifest in our daily lives, whether it's counting how many steps we've taken, or whether it's assessing
the data about our neurological state, or whether it's quantifying data about pain.
All of those technologies are built on this sort of particular assumption of the one-to-one
correspondence between data and world. However, I think the people that build those technologies
probably have a more sophisticated understanding of that relationship because they understand like
the sensors and the actual like translations that are required to take a phenomena that is
qualitative or human or environmental and to translate that into something that is like
machine readable data.
And one of the things that interfaces do is they collapse all of that,
especially interfaces to consumer products.
So I'm not talking necessarily here about like scientific instruments, but perhaps more about the sort of like basic everyday UX kind of things that,
you know, you and I as consumers experience. All of those technologies,
because of the impetus for ease of use, because if users don't adopt them, they won't be profitable.
Although, as you've addressed on your podcast a number of times, Uber's profitability is
questionable, right? So maybe it's just about garnering as many users as humanly possible
in order to sort of artificially inflate the value of your company.
Great.
That's what they're doing, right?
But either way, it's not to the benefit of the company to reveal any of the translations
that are required to get from world to data and back.
And what that does is it reinforces an idea that our world is made up of nothing but data.
And you see this assumption like
all the time. One of the reasons I was sort of like disillusioned with actor network theory,
we were talking about ANT earlier. One of the reasons I was disillusioned with this is I saw
a presentation where someone talked about like data trails or something like that. And they were
saying like, they asserted that data pre-exists humanity,
that data has always existed. And like, there's no better example of that ideology seeping in so deeply to someone's like core beliefs about how the world works than that.
It's wild to think that even before there was the ability to like collect the data, that we're like,
you know, the world is just data. It's like, I think the world is like biological and stuff like that, actually, which
is actually quite distinct from what you're talking about. What comes to mind, though, based
on what you were saying, is really how these interfaces, these technologies, are designed so
that we look at them and we assume that what it is showing us is kind of a one-to-one
relationship and that the marketing promises being made by these companies are accurate
reflections of the capabilities of the devices, right?
And the kind of ideology that we're talking about is embedded in that and helps convince
us that those things are accurate.
But then you get these kind of marketing narratives that the Apple Watch can kind of help you
with your fitness and make you more healthy and all this kind of stuff. But then we get reports that studies actually find
that when you have an Apple Watch, you might not actually be as physically active as if you didn't
have an Apple Watch or that there are problems with kind of false positive readings of health
signals and things like that, which go against this idea or this narrative that the
companies want you to believe about what these devices actually do.
Yeah. The thing that I always think about that is I find this when I'm talking with my students too,
because the ideological water in which we swim suggests that although there are currently issues
with the accuracy of this technology, don't worry, it'll get better, right?
Just like with facial recognition, like same shit, right?
Where it's just like, oh yeah, don't worry, we'll fix the bias.
We'll iron all this out, right?
It's like Elon Musk, he killed like how many monkeys or whatever,
you know, with the Neuralink testing.
He's like, don't worry, don't worry, I got this.
This is such a common assumption that technology just progresses. There's, you know, this like
teleology, it's just always moving forward. And I think that's one of the most dangerous parts of
even reporting, like even popular journalism about the failures of technology, they tend to ascribe
a kind of progress to the technology. Present company excluded, of course, right?
Of course, of course.
I mean, we see it all the time. I mean, oh my God, I've been reading like...
So, do you remember the newsletter Protocol?
Yeah, absolutely.
Protocol was dope. And they went under, towards the beginning of
the pandemic, I think they folded. And Politico took over their newsletter list. And Politico
has this newsletter called Digital Future Daily. And let me tell you, the number of times I've seen
this is bad, but blank, right? It's like pretty incredible.
And I think that's characteristic of a lot of reporting on the failures of especially things like the accuracy of certain features of the Apple Watch or the accuracy of certain
features like facial recognition or facial expression recognition technology.
And so, to me, the buffer against that is like, even when it gets better and even
when it gets more accurate, who is materially benefiting the most from that enhancement in
accuracy? I can certainly tell you that the app that tracks your poop getting more accurate
is not benefiting you nearly as much as the functionality of that benefiting
whoever the capitalist is that owns the platform on which your poop tracking app is built,
right?
It's not the developer of your poop tracking app.
It's actually the capitalist class that owns the material infrastructure on which that's
built.
And I think that has to be part and parcel of the conversation around the accuracy and
bias of data collection and extrapolation is like the
asymmetries that are baked into the system regardless of the accuracy. I think that's
super important to think about. To me, that's very much a political economy question.
That's an interesting point, right? Because on the one hand, you can think about how
the interfaces are designed and how things are put together to make us think about these products or think about these technologies in a particular way.
But then that can't be divorced from the kind of larger commercial pressures that are being undertaken where people's fridges have computers in them now, and their microwaves.
And I've seen, I think it's Kohler advertising a toilet that's a smart toilet now
Oh my god, the ad where the toilet is in Marfa. It's like in the middle of the highway
in Marfa, you know? And like Marfa's become this sort of like weird place where like Louis Vuitton
does stuff or like whatever, right? And Kohler's like, yeah, we're putting a toilet in the middle
of the highway in Marfa. It's like, what is going on? Like, it's a toilet. I need it
to flush. It's crazy. It's so crazy to me. Yeah, I think that's like, with my students, I talk about
this stuff all the time. Like, it brings up a bunch of fascinating points here about, like, again, just
going back to this, like, who is materially benefiting from these innovations and how, and what happens
when we assume, like, our bar for what makes our lives better is so low. It's like crazy. It's
crazy. It's like, oh, my fridge has a computer in it, but like a lot of people in the United States of
America can't buy food. That's crazy to me, that like a fridge with a computer in it constitutes innovation.
Like what would be really innovative is if we could feed everybody, you know? Like that,
I just, it's insane. So, my students, I talk about this all the time. I had a student the
other day, she was like, this is in my interaction design class. So, this is very pre-professional,
not a whole lot of like critical theory stuff going on. She was like, my sister's car has a screen,
you know, like this huge screen in it. And whenever I drive it, I can't tell if I'm turning
up the heat or not because I have to like look at the screen, but I don't want to take my eyes
off the road. And I was like, yeah, that's bad interaction design. In your 1990s Volvo or
whatever, like you knew you could be watching the road and you knew by feel, you know, what dial you were like grabbing. And there's feedback, there's like physical feedback that you're getting. There's nothing like that with the giant screen. And there's a whole history of ideas that are required to make us
believe that that is innovative in the first place and to make us believe that we should then consume
and purchase the thing with the giant screen as opposed to the thing with the buttons on the
dials that worked perfectly fine. Yeah. And I guess the point I was getting to was kind of like,
obviously the interface is an important aspect of this, but then there are
these other pressures that kind of force these interfaces into these products in a way that
people probably aren't really asking for, like who is really asking for Alexa in their microwave?
So they can say, you know, Alexa pop my popcorn or whatever, right? Like these are things that
I don't totally understand in the sense that I don't
understand the desire. I understand the commercial pressures that are pushing us in this direction.
And then there are the issues where this is being framed as progress because now the internet and now a screen is in
virtually everything, yet someone's fridge doesn't last as long because it can fail more quickly
because it has been redesigned in
this way. It just seems so broken and backwards. Yeah. And that desire, I mean, that desire has
been manufactured and that is the fault of what design has become under capitalism. You know,
again, like I think I'd be remiss if I didn't suggest that like part of what I do is problematic
in that way.
You know, I'm like some of my students will go and like work in advertising agencies and
that's not to say that those students don't need to make a living and they don't need
to like buy groceries and pay rent and pay back their massive student loans, which is
horrifying because public education should be free.
But at the same time, part of why we adopt those desires, part of why someone says to themselves
like, oh, isn't it amazing that Alexa can pop my popcorn for me?
I love that.
You know, I can walk into the house and be like, Alexa, turn the lights on.
Alexa, pop my popcorn.
Like part of that is the way that like that extreme level of quote unquote convenience
or even the way it's been construed
as convenient. In a lot of ways, it's not. That has – and I try to allude to this a little bit
in the book. That idea has such a long history, you know, and there's a really good book that
like I heard about from Cameron Tonkinwise. Shout out Cameron Tonkinwise. He posted a photograph
once, this was ages ago on Twitter,
of like a couple of books he was reading. And one of them caught my eye. It was called
The Value of Convenience: A Genealogy of Technical Culture. It's by this guy Thomas Tierney. It came
out in 1993, I think. And it's a wild book. It's really good. And that, you know, like talks about the story of how we come to see convenience as something to be valued. And then at the same time, how capitalism twists the pursuit of one's calling to become about your role in the economy as opposed to your role in like religious practice. And these things sort of come together in very strange ways.
It's a really cool book. I love like finding books like that from the 90s and even before that
feel so relevant to today, because it shows you how this notion that we have from Silicon Valley,
that we're in this like new era, that everything has changed. It's like not true at all. And
actually people have been criticizing these very same things for like a long time, but picking up on what you were saying there, you know, one of the
things that you write about in the book is that these technologies and the way that these interfaces
are set up kind of push us to think of ourselves more as individuals who are acting for our
self-optimization and that there's a very big difference between individualism under
neoliberalism and simply individuality. Can you kind of parse that out for us and the consequences
of having these incredibly individualist approaches that are encouraged by these technologies?
Yeah, there's a great term, which I think I read in one of Zygmunt Bauman's liquid books,
Liquid Modernity, I think, is the one he references it in. But I think it's a term that came from the sociologist Ulrich Beck: biographical solutions to systemic contradictions. Because of the economic and technological forces over the
history of the last, I don't know, we'll call it like 70 years, like beginning in the Cold War,
we have come to see problems as being solvable exclusively through means of individual action
as opposed to collective action. Rarely do we see instances of collective action
doing the kinds of things that probably would benefit a lot of us a lot more and a lot faster
than an app designed to like help you track your electricity consumption. Like it's not bad to
track one's electricity consumption, but like ending a reliance on fossil fuels would probably be a lot easier through mass action, right? As opposed to like, you know, some of these
individual solutions. But it's interesting. I mean, we see this everywhere and it's not just
fossil fuels. It's not just consumerism. It's not like, there's so many things where it's like, oh,
my institution has a partnership with Apple's like developer academy
and one of their first projects. This is a very complicated situation. I don't want to
oversimplify it, but I think it's an important example. So, I'll say it here, but I think it's
important to acknowledge that it's like nuanced and complex and there's a lot there. Sexual assault
on college campuses is a really, really big issue. And so, one of the
things that students did in like one of their first projects with, I don't know if it was the
Developer Academy or if it was iOS Lab or something, we've been partnering with Apple on a bunch of
stuff. And the students developed like a safer campus app, which is really cool. And it like
has like these alert buttons and all this kind of stuff. But one of the things that I talk about in class around this project is that app on its own is not going to stop rape culture.
It's not going to end patriarchy, right? It's not going to change the fundamental
structure of oppression that leads to young men believing that women are objects, right? That's not going to change.
And so, to me, even if a technological intervention at an individualist level is
useful, it cannot be on its own a solution to what is a systemic issue that is like baked into
the fabric of society. It's so much easier though, right?
It's so much easier to say, oh, we can fix this with technology.
You were talking to Thea Riofrancos, right?
Huge fan.
She talks about like carbon capture technology and electrifying vehicles.
And the American solution to carbon emissions is like electrify everybody's car, even though individual
transportation is actually the issue, right? And so I think like her work is also like a great
touchstone for thinking about the relationship between biographical solutions and systemic
problems. I completely agree. Obviously a big fan of Thea's work. Can't wait for her next book so I
can have her back on the show. One of the things that kind of stood out to me as I was reading the book is I'm sure that most
of this was written kind of before the AI hype that we've been in for the past year. I feel like
there are a lot of connections between what you're talking about in the book and what we've been
seeing where on the one hand, there is this view that human intelligence can be replicated in these
machines, that the human brain is basically
a computer. And so we just need to build a similar computer in these data centers that can then
have conversations with us or whatever. And then there's the other piece of this where
these companies are developing very specific kind of interfaces through which we interact with these
tools that are designed to, again, kind of make us believe
that it has these capabilities that it doesn't necessarily have. And so I wonder how you reflect
on, you know, how these things that you've been writing about apply to what we've been seeing with
this generative AI over the past year or so. It's such a good question. I've been thinking about it
a lot, partly because I just finished working on an installation in a museum that's about AI, loosely speaking.
You know, speaking of things that go way back, people predicting and kind of thinking about this kind of stuff, the installation is called Blessed is the Machine.
And the reason it's called that is because that's the mantra of the citizens of the global sort of subterranean world society
in E.M. Forster's short story, The Machine Stops.
It was published in 1909. It is an incredible story, and I encourage folks to go check
it out. But I think in a lot of ways, like the hype around AI, particularly as it relates to
the interfaces to those products and services,
those interfaces, again, like we're talking about sort of collapse all of the things that are
required to make that thing appear sentient, right? Or to make it appear as though it knows
whatever it is that you're asking it. And I think that that's an important piece of the puzzle is that like the interfaces to those products intentionally lie.
They intentionally conceal certain things about whether it's the resource needs of those technologies, whether it's the notion even that they are sentient in the first place.
There's a great term, stochastic parrot, which is, you know, like basically like these are sort of predictive things, you know,
it's predictive algorithms that are like producing some of these inferences that then
get spat out. I mean, the AI thing is just, it's hard for me to talk about it without getting like
super angry because for the vast majority of it, we don't need it. We don't need it in any way. We're making images
that are artworks, quote unquote, that are like super derivative and like, who cares, right?
We're doing things like writing poems that are whatever. We're doing literature reviews.
And a literature review is really important. Great. But why do you need the AI to do the
literature review in the first place? Because you're under a whole ton of pressure to publish a bunch of journal articles so that you can go and get tenure, right? Or so that you can get that next job or you can get that next research job or whatever, right? Which makes these things appear necessary when actually in a world where we would somehow democratically determine what we would want our technologies to do, I guarantee you that none of that shit would be on the list.
Like food.
Food for people would be really cool.
There's a lot of things that would be great, you know, that would be much higher on my list than like a AI that can make a bad painting.
What? We don't want digital paintings, digital artists?
Yeah, I just think it's such a complete joke. And listeners of the show and you will know how frustrated I am at the past year and all of this generative AI that we've been subjected to. But I
think that what you were
talking there gets to another important point in the book where, you know, you told us about this
computable subjectivity, the way that the design of these things, you know, makes us think about
the world and ourselves in this particular way. But in the book, you talk about how certainly
this is a capitalist problem. We've been talking about how this is rooted in capitalist political
economy, but it's not solely a capitalist problem. If there was this optimization and this degree of the quantified self within a socialist society, would it be any better? I think for me, part of the danger in adopting
this idea of oneself is a danger of falling into the trap of sort of like social optimization in
general and what that optimization looks like. What does it include and what does it leave out?
And so, you know, I offer the example of Cybersyn in the book. And part of the reason I offer that
is not to critique the project of Cybersyn or to critique, you know, what Allende was doing, which is that in some ways, any technology
that seeks to optimize something is going to leave something else out that it's not
optimizing for.
So instead, I had a little post-it note on my computer for many years that just said
the value of the suboptimal.
And I think that to me, there's something to be said for the things that fall outside of the optimization project or the
necessity, the apparent necessity to optimize certain things. I also think that there is a
flattening or a sort of like totalizing, and this is maybe a more like recent thought for me,
to be honest. Most of this book was finished in 2021. So, things change, right? And so, like for me, I've also been thinking a
lot about communal autonomy and self-determination. And again, this just comes from if you're lucky
enough to be an academic, you should be a lifelong learner. And I think that like our
ideas change over time. And so, one of the things that I've learned a lot about recently to the credit of folks, scholars, particularly from the global
South. And so, I'm not saying anything new here, like folks like Arturo Escobar and folks who have
written about the Zapatista communities for a really long time. My friend, Marisa Brandt,
who's a science and technology studies scholar at Michigan State, learning about communal autonomy and
self-governance and democratic self-determination, I think to me, because that looks different
in different places across the globe, to me, that means that like a computable subjectivity,
which is inherently sort of a flattening of human experience because the data that is
required has to be standardized in particular ways, doesn't really fit with that kind of self-determination.
I mean, imagine how different computing would look if computing was autonomously self-determined by
communities across the globe. Who knows? It'd be super crazy. I have no idea what it would look
like. Maybe we would be quantifying different things or who knows? Literally, I have no idea.
It's hard to even imagine. But I think the hegemony of the projects that have all kind of come together, the patriarchal, capitalist, Western neocolonialist kind of projects have created a situation where like, not only do many of us sort of believe ourselves to be computable, but we believe that in a very specific way that has optimization
for our economic roles under capitalism at the fore. And even if we don't behave rationally,
right? So, like game theory, I talk a little bit about game theory. I had like my Adam Curtis
moment in this book. I love Adam Curtis's films, but I think there is something to be said for
the different ways that
those can be a little conspiracy theory-ish. And so, I don't know, I think that there's like
just so much to kind of think about when we adopt this idea of ourselves and the way that it emerges
through like, you know, the Cold War and sort of like the way that people become able to be modeled as nation states in nuclear war,
you know, implacable enemies. And we all come to be seen as that and it results in this sort
of bizarre individualism. But like we don't behave rationally, but that almost doesn't matter.
That's the other thing that I think a lot about with this book is that I had to navigate very
carefully the counterclaims, for example, that like, if you are a computer, you will behave rationally,
right? So, why do people do certain things or why do they behave in certain ways, right?
And I think the computable subjectivity manifests itself in like material everyday existence in
different ways. So, you can't necessarily say that there's like a one-to-one correspondence
between like how a computational model of someone maps onto how that person
behaves, even if they believe they are fundamentally the same as that computational model.
And so I think like as a kind of intellectual thing, that's a very tricky space to navigate
because, you know, on the one hand, I'm criticizing sort of the game theoretic notion of people,
but at the same time, I'm not necessarily suggesting that they behave
rationally. No, that's all good. What you were saying about the totalizing nature of these
technologies, right? How there's this one kind of particular idea of how computing should work and
how digital technology should work and the internet and all this kind of stuff and has
been pushed out globally is kind of fascinating, right? Because you see these discussions occasionally where maybe people push back against the kind
of ideas of nudity held by Apple or Facebook and how they kind of then push that onto the
rest of the world coming from the United States or how these technologies arrive in certain
parts of the world and their languages, their kind of letter
systems and things like that simply don't work with the way that these technologies have been
designed and set up, right? And so there's this kind of clash between this technology that's
created in a particular space by particular types of people within the world and then expected to,
you know, become applicable to everybody because Silicon
Valley has to have this kind of globalized nature, right?
They have to take over everything.
They can't just be for Americans.
They have to be for everybody because that is what works for the market value of these
companies and all this sort of stuff.
And there's such an incredible history of, again, going back to like the history of scholarship
around these ideas, you know, like critical development studies, which like Arturo Escobar is like, you know, one of the
kind of like key figures in the history of that field as well. You know, Arturo Escobar,
James Ferguson, who wrote this book, The Anti-Politics Machine. I went to Malawi with
a colleague of mine who was kind enough to put me on a grant that she got. I don't know why. She was
like, Zach, you can do stuff. Stephanie White, amazing scholar of food systems. And we went to
Malawi and I was reading James Ferguson's Anti-Politics Machine on the plane. And one of
the things that he talks about is the symbolic dimension of particular material aspirations. So he talks about homes in Lesotho and the way that like the construction of a house
reflects your geographic and environmental sort of situatedness.
But what happens is that the aspiration to Western wealth translates into the construction
of houses that are totally inappropriate for the environment
and to the space. And I think that you could say something similar about computation and about UX
and about technology writ large, in that there's an aspiration across the globe because we have
made it aspirational to be a certain way with technology
and to live with technology in a certain way and to use it in a particular way that then like,
you know, reinforces all sorts of ideas that would never happen if we were to be able to
engage in communally autonomous decision-making about our lives and about the way we interact
with technology. Yeah. And I think it's interesting to see Aaron Beninoff's work come up in your book as well,
kind of touching on some of these ideas and the democratic nature of deciding
how society should be organized, how technology has a role in that and these kinds of questions.
To end off our conversation, I wanted to talk about something that you get into
at the end of the book. Obviously, you talk about what the potential responses to this are in,
you know, the lens of design education, because that is kind of your focus. But I think that those same ideas can be broadened out far
beyond that, right. And one of them you talk about is, of course, a reform scenario that focuses more
on, you know, making sure that these kind of critical understandings of how these interfaces
and how these technologies are used, you know, among designers, but we could say, you know,
among people much more generally than that. But then there's also a scenario that you position more
as kind of revolutionary, right? More as a Luddite approach, we might say. And that is to consider
the role of actual decomputerization, right? Of pushing back on these ideas that we need to be
constantly expanding digital technologies into every aspect of our lives. And we need to be collecting data on virtually everything. So can you talk to us a
bit about that kind of reframing of this and how we begin to think about the role of technology and
about computers and about the internet in our lives in a very different way than what this
tech industry is trying to get us to believe? It's a really important question. And, you know, I have to say, like, it's funny, I feel like the
term, you know, when you're in the midst of writing something, you're excited about it. And I even
think the term revolutionary, like I would hesitate to even use that now, in part because I think
maybe things are even more dire than they were a couple years ago. So, you know, to me, like Luddism is a hallmark of what we need to do
in order to figure out what our relationship to technology is going to be going forward.
And to me, that means understanding the ways that technologies become exploitative of the
working class and try to figure out how to eliminate technologies that are exploitative
of the working class and
embrace a democratic approach to the development of technology. Doing that in the classroom space
is very difficult in a pre-professional design program. However, I do think that there are
opportunities that require like broad solidarity. So, I think like there are one-on-one moments and
I'll just offer a couple examples here. I taught a special topics class this past summer called
Design for Degrowth. It really changed how I think about teaching design and the students in there
were incredible to go there with me. Basically, what we did was we sat for a few weeks and talked about, okay, like, what would
the shape of our community, let's call it like East Lansing, Michigan, you know, like
this little town, college town in Michigan, what would the shape of that town be like
if we democratically determined what constitutes socially necessary labor. And we divvied up that labor according to aptitudes and proclivities, and then took
the rest of the time we would have, which would probably be a lot of free time.
How would free time look different if we were to commit to living well with less, complete decarbonization, and sort of understanding
that any consumptive behavior really, like acknowledging the basic law of physics of
entropy, right? That like, there is no such thing as like renewable energy, right? Anything you make
to capture energy requires an expenditure of energy. And so, nothing's renewable in that way.
And if we acknowledge all that stuff, what would the shape of your free time look like? How would
you freely associate with people in different ways? And the responses and the conversation
were really something. I had a student make a video about like what the local news would be like. And it was really funny.
It was about like someone's chicken.
And like another student, totally different, designed a bunch of interfaces and basically
like built out the UX for the sort of fundamental social infrastructure that we would use to
do that sort of democratic determination of socially necessary labor or to allocate people's time and like how they would decide, you know, like how would they express interests,
things they wanted to apprentice with versus things that they're already good at.
So, we took up examples of like childcare and elder care and we talked about like the shape
of the university system. Like, okay, what would we really do in a design class? We're like, the vast majority of what I teach them is explicitly to augment the surplus value for the capitalist class, like full stop.
I mean, that's like most of what I teach my students, right? So, what would design school
look like? So, exploring the shape of that and then I had a student who was like really into
golf. This was an amazing experience. I had a student who was really into golf who said,
oh my God, well, how would I golf? And I was like, well, you like golf?
Golf is, that's fine.
I'm not opposed to golf, you know, like, okay, fine.
Yes.
It's like really unsustainable if we think about it.
Like, and I was very candid with her about that.
And she was like, yeah, you're right.
And she basically came up with like a plan for how different autonomous communities who
had golf interest subgroups could then connect with each
other and democratically invest in the building of like a communal golf course that would adhere
to certain requirements that would not, you know, use fossil fuel infrastructure. And it was like
a really interesting exploration. Again, like the design outcomes varied widely across the group.
They had different majors.
They were interested in different things.
And I think we spent so much time like kind of unpacking what this world would look like
because it's just so hard to wrap your head around that I think we didn't do enough of
the design work.
And that's on me.
Happy to take the blame for that.
But these students were incredible.
And I'm really looking forward, like I'm hoping to be able to present about this with them to just like get this idea out there more. So to me, like that's a vision. And I think part of what I didn't address so much in the book maybe, but I think it's important is that having a compelling vision of living well with less, having a compelling vision of a different way to be in the world is lacking because
advertising is so good at convincing us that the way things are right now is the best that
we've got.
I think it's a really interesting example.
And when you talk about, you know, being able to do different things kind of throughout
the day, have more power to choose different things, it certainly brings to mind, you know,
potentially hunting in the morning, fishing in the afternoon, rearing cattle in the evening, criticizing after dinner, you know, pulling from the old
Karl Marx quote, of course. Totally. Totally. Yeah. And I think, you know, Benanav is like
right in that kind of canon, I think. He's spot on with that stuff. Shout out, Aaron Benanav.
Absolutely. But I think what you're talking about there is obviously having this democratic input
to decide what
production should look like, what society should look like, how technology should be used,
rather than an assumption that everything should be ingested into the machine, kind of given over
to the machine. Its algorithms kind of sort everything out. And we supposedly live in this
kind of utopia where we don't have to work. And I think that the democratic approach
is not only more realistic, but also one that much better fits with the politics that many of
the people advocating these things tend to ascribe to, or at least claim to ascribe to, right?
So true. I think the last thing I'll say about that too, is that
the reason I positioned this in a design setting, that class about degrowth, the reason that it's so important to me that that comes from the space of design is because the visual, like Jacques Ranciere, I might be reading it wrong.
I'm not a philosopher, whatever.
But Ranciere talks about this idea of the distribution of the sensible, meaning like the world that we experience basically
reveals certain things and conceals certain things. And that revelation or concealing
has some determining impact on our participation as political subjects. And I think to me,
it's really important to like reveal something else, to use like the visual media of design,
which is like the lingua franca of everyday life now,
right? This is very much Henri Lefebvre kind of as well. And to use that visual language to put
forth a totally different idea of what things could be like. And this is, I think, different
too than like critical design or speculative design, which has sort of a dystopian flavor to
it. Oftentimes, they're speculating on like, you know, things that have happened a lot
of other places in the world, but just not to like white Europeans. And so, I think like doing
something like this in a design setting where we're talking about degrowth, we're talking about
sort of the democratic determination of all of these things, you know, as Illich says, the
democratic determination of design criteria for all tools. I think putting forth a vision of that,
and particularly in the visual space, can be useful because it maybe suggests an alternative
that without that visual dimension, we might be lacking. And again, maybe that's my just
sort of predisposition as a designer. I think it makes sense. It brings to mind
Elon Musk saying when they were planning out the Cybertruck that the future needs to look like the future. And so for him, the future is this
kind of dystopian vehicle that's huge and dangerous and doesn't even work particularly
well. But I think that we can also think about the future looking very different ways if we make very
different decisions about what that should be. Zach, really great to speak with you. Great to
dig into this. Thanks so much for taking the time. Thank you so much for having me, Paris. This was awesome.
Zach Kaiser is an associate professor at Michigan State University and the author of Interfaces and
Us: User Experience Design and the Making of the Computable Subject. Tech Won't Save Us is made in
partnership with The Nation magazine and is hosted by me, Paris Marx. Production is by Eric Wickham
and transcripts are by Brigitte Pawliw-Fry.
Tech Won't Save Us relies on the support of listeners like you to keep providing critical perspectives on the tech industry.
You can join hundreds of other supporters
by going to patreon.com slash techwontsaveus
and making a pledge of your own.
Thanks for listening and make sure to come back next week. Thank you.