Tech Won't Save Us - Tech Criticism Before the Techlash w/ Zachary Loeb
Episode Date: July 8, 2021
Paris Marx is joined by Zachary Loeb to discuss the history of tech criticism with a focus on Joseph Weizenbaum and Lewis Mumford, as well as why the techlash is a narrative that suits Silicon Valley.
Zachary Loeb is a PhD candidate at the University of Pennsylvania whose dissertation research looks at Y2K. Follow Zachary on Twitter as @libshipwreck, and check out his Librarian Shipwreck blog.
🚨 T-shirts are now available!
Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, and support the show on Patreon.
Find out more about Harbinger Media Network at harbingermedianetwork.com.
Also mentioned in this episode:
Zachary wrote about Y2K, Lewis Mumford's criticisms of technology, the life and thought of Joseph Weizenbaum, and theses on techno-optimism.
Books mentioned: "Dismantlings: Words against Machines in the American Long Seventies" by Matt Tierney.
Support the show
Transcript
The question is, if we want to make society in a different image,
what technologies would be in that society? It's a different way of approaching it.
Hello and welcome to Tech Won't Save Us. I'm your host, Paris Marx. And before we get to
this week's episode, just a quick reminder that t-shirts are now available for those who would
want them. There's a black version with a red logo and a version with a number of colors and
a black logo. So if you want to wear the Tech Won't Save Us logo on a t-shirt or a hoodie, you can find a
link to make an order in the show notes. And if you want to get one of those in the first batch,
you just need to order in the next week or so. This week's guest is Zachary Loeb. Zachary is a
PhD candidate at the University of Pennsylvania whose dissertation is on Y2K. That research is really fascinating, you know, looking at the
idea that we have of Y2K and then what actually happened behind the scenes to avert this kind of
computerized catastrophe around the date change in these systems. We're not talking about that in
the interview today, but I have linked a piece in the show notes if you want to find out more
about Zachary's work on that topic. You might also know Zachary as Librarian Shipwreck on Twitter and also the blog that he
runs where he writes about his research and other, you know, important technology topics.
But obviously in recent years there has been this kind of resurgence in tech criticism, right, that
we've seen since 2015 or 2016 where after a period of techno-optimism, we have been seeing more and more
criticism of these massive tech companies and some of the technologies that they're putting out into
the world. Obviously, this is a really positive development, but criticism of technology is not
a new sort of thing. And so what I'm discussing this week with Zachary is the history of this kind
of criticism of computers and the technologies associated with them going back to the 1950s,
1960s, 1970s. And in particular, we talk about the work of Joseph Weizenbaum and of Lewis Mumford,
who were prominent social critics who often focus on computers, artificial intelligence,
and other technologies that were being developed
through this time. And you know, something that is really important that comes out of that
discussion, that comes out of that history, is recognizing how so many of the arguments that are
being made today are not completely original. And we can go back decades and see similar criticisms
being leveled at these technologies to the ones that are still
being leveled today. And, you know, I think that there are two really important aspects of this to
draw attention to before we get into the interview. And that is first that Zachary talks about how
Weizenbaum, you know, creates this artificial intelligence, ELIZA, that, you know, is very rudimentary, doesn't really have much intelligence, but kind of mimics what a psychiatrist would do. And he's kind of concerned that people are being fooled by this
technology. And then when he tries to explain how it actually works and that it's not intelligent,
he finds that they want to believe that it has greater capabilities than it actually has.
They want to be fooled by the technology, which is obviously incredibly
concerning. And you know, we can see that today with how so many technologies come with these
grand promises. And then a few years later, you know, the future that they were going to usher
in never comes because those achievements are never made. Yet so many people kind of fall for
the marketing gimmicks that are put forward for these ideas. And then beyond that,
he talks about how Mumford discusses how so many of these computer technologies come with a bribe
to get us to use them. But then once we are using them, the bribe is not so necessary anymore,
because we are kind of within the system already. And once we've adopted it, once we are depending
on these new technologies, we also don't want to recognize the downsides, the negative things that come of them.
And so people start to ignore those negative consequences because they don't want to feel
that they are bad people for using these products that do bad things in the world,
or at least some bad things. And so I think that those are important kind of takeaways that we
should be considering, especially as we are, you know, criticizing technologies in the present
and want those criticisms to be, you know, kind of adopted and accepted by a larger range of people.
You know, how do we convince people that there are problems with these technologies without them
feeling that, you know, they are personally flawed and thus
want to ignore these downsides so that they don't need to like feel bad themselves, right?
So I think that this is a fascinating conversation. I think you're really going to like it.
Just a reminder that Tech Won't Save Us is part of the Harbinger Media Network,
a group of left-wing podcasts that are made in Canada. And you can find out more about that at
harbingermedianetwork.com. If you like this conversation,
make sure to leave a five-star review on Apple Podcasts
and make sure to share the show on social media
or with any friends or colleagues
who you think would learn something
from this conversation.
And finally, every episode of Tech Won't Save Us
is free for everyone
because listeners who are able to support the work
that I put into making the show every week
choose to do so. So if you feel that you have been learning from these conversations,
you can join supporters like Ben and Rhys from Calgary by going to patreon.com slash techwon'tsaveus
and becoming a supporter. Thanks so much and enjoy this week's conversation.
Zachary, welcome to Tech Won't Save Us.
Thank you very much for having me, a longtime listener. Wonderful to be a guest on the program.
I'm really excited to talk to you. Obviously, I've been following your Twitter account,
Librarian Shipwreck, for a while and reading your blog and the great insights that you provide there
on what's going on with tech and tech criticism, the history of tech criticism, and so many other
topics. And so naturally, I wanted to talk to you
a bit about that history today, because I feel like, you know, we're in this moment where critical
perspectives on technology and the technologies of the day are getting more light than they would
have received, you know, five or more years ago. But I feel like even in that conversation,
there's an idea that this kind of criticism of technology is still kind of a novel thing. Like we've come out of
this period where there was a lot of optimism around technology. And now there's more criticism
happening. And I feel like that can get disconnected sometimes from the longer history of
this criticism and these critical perspectives toward computers and various other
technologies that have been developed over time. And so that was kind of what I wanted to talk to
you about today. And I wanted to start with Joseph Weizenbaum, because he's someone who I first
encountered through the works of Mar Hicks. And I would say I have read a lot of more recent tech
criticism. And going back into the history of it is something that is,
you know, a bit more recent for me, you know, learning about what people were saying several
decades ago versus, you know, just in the past 10 or 20 years, I guess. And so it was fascinating
to me to read about what Weizenbaum was writing. So can we start, I guess, by having you explain,
you know, who Joseph Weizenbaum was, and how he kind of developed this critical perspective toward the technologies of the time that he was working on.
Joseph Weizenbaum is born in 1923 in Berlin. He's Jewish. Eventually, his family is forced to
flee Germany because of the rise of fascism. They wind up in the United States. And eventually, Weizenbaum goes off to university, and he gets kind of involved in the early stages, the early days of computers. He's contributing to
the construction of early computer systems at Wayne State University. He was working on
Whirlwind and Typhoon, some of these very, very early computers. He's working for the General Electric Company.
He's helping develop the automatic bookkeeping system, ERMA, which is the Electronic Recording Machine Accounting, which was a system used by Bank of America. And he winds up at MIT
as a visiting professor in 1962. And this is kind of where Weizenbaum's story really, really picks up. So he's somebody who is in on kind of the ground floor of the history of computing in a lot of ways. He is involved in computing in the very kind of early World War II, immediately post-World War II era. But then the thing that most people
know Weizenbaum for, if they know him for anything, is ELIZA, which is a program that he developed in 1966. This program, which is also sometimes referred to as DOCTOR, kind of mimics a certain type of psychiatry, where somebody would type a message to the program saying, you know, I am blah.
And the program would work to transform I am blah into how long have you been blah?
So it's independent of the meaning of the word blah.
The system just kind of knows how to rearrange the words in the sentence.
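To make that mechanism concrete, here's a minimal sketch of that kind of keyword-and-substitution trick in Python. This is my own illustration with made-up rules, not Weizenbaum's actual implementation, which was written in MAD-SLIP and used a more elaborate script of ranked keywords:

```python
import re

# ELIZA-style pattern substitution, sketched for illustration: match a
# surface pattern like "I am X" and echo the words back reassembled.
# The program never models what X actually means.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
]

# Flip first-person words so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!?")))
    return "Please go on."  # stock reply when no keyword matches

print(respond("I am depressed"))  # -> How long have you been depressed?
```

The point of the sketch is how little is going on: a handful of regular expressions and a word-swap table are enough to produce replies that feel attentive, which is exactly the gap between mechanism and perception that unsettled Weizenbaum.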
Pretty quickly after developing this program, pretty quickly after publishing, you know,
the article about it, Weizenbaum kind of has this unsettling moment where he's realizing that so
many people are interacting with ELIZA and thinking that it understands them. Thinking
that because when they say, I am depressed, ELIZA echoes back, how long have you been depressed?
That it means that the program understands them,
that the program is genuinely concerned for them. And for Weizenbaum, this kind of makes him
deeply uncomfortable about what he's created here, about the potential of what he's created.
And so even though he's somebody who's, you know, a foundational figure at this moment in kind of the history of artificial intelligence, right from the start, he's anxious about this, he's concerned about this, he sees the way that people are projecting their own hopes, their own ideas, their own beliefs, onto this system. And he's very quickly going like, that's not how this really works. That's not
how this functions. You shouldn't believe these, you know, fantasies that you're projecting onto
this system. But he's at MIT, he's surrounded by all of these other people who are working
on computing or working on artificial intelligence. And he is surrounded by all of
this hype, all of this excitement about computing, about artificial intelligence. And he pretty
quickly decides that he's not okay with this. And that one of the things that he's going to try and
do is that he's going to try and push back. He's going to try and kind of sound the
alarm about this. You know, there's this Rand symposium that he participates in in 1965. And
this is before ELIZA has been finished. And you know, there's this wonderful line that he says at the Rand symposium, where he's thinking about the Bulletin of the Atomic Scientists, which was a huge cultural presence, especially in the Cold War, you know, publishing its bulletin with the clock hands showing five minutes to 12. So it's 1965. And Weizenbaum is already saying, you know,
people who are working on computing, you know, computer people are already putting the world, putting societies, close to doomsday. And right from the beginning, right from the time that he's even
working on ELIZA, one of the things that is really, really at the forefront of Weizenbaum's mind
is that just because we can do something technically, just because some technological
thing can be done, it doesn't mean that we should be doing it. You know, it's really important for technical professionals, for scientists, to say, no, we're not going to do that. He's very, very concerned with the idea
of responsibility. For the next while, he's kind of a bit of a gadfly. He's kind of a bit of a
presence who's pushing back on some of the hype. And then the next big thing that he becomes known for
is he writes a book in 1976. Well, the book comes out in 1976, and it's called Computer Power and Human Reason: From Judgment to Calculation. And this is a book in which he's trying to demystify computers. He's trying to explain to a wide and public readership
how computers actually work, what computers actually do, what artificial intelligence
can accomplish. He's drawing attention to the ways that the people who he kind of mockingly
refers to as the artificial intelligentsia, you know,
they kind of hype and overhype what computers can do, what artificial intelligence can do.
And beyond this book being, you know, an explanation, there's also kind of this powerful
moral argument that runs through this book that is really emphasizing that human beings, people,
they are the ones who are responsible. They cannot just push all of the onus and responsibility
onto machines. They cannot trust machines to do everything. It's kind of an early pushback
against the kind of techno-utopianism that he was surrounded by in that time.
The book meets with a somewhat chilly reception. It does get him some attention. Much of this is
because he is a computer scientist speaking out, so this isn't a case of some humanist social
critic voicing their critiques. This is somebody who is a computer
scientist, somebody with impeccable credentials who is sounding the alarm. So the book comes out,
it generates some discussions, it brings some attention to him, and it also kind of serves to
make him kind of get ostracized from the technical community. He's kind of seen as a
grump and a curmudgeon and a doomsayer and a bit of a grouch. But it also is kind of the line that
then he is walking for really the rest of his life of being an outspoken critic. And he does
not consider himself to be a technological critic. He does not consider
himself to be a critic of technology. He considers himself to be a social critic.
But he's very aware that once technology becomes such a massive force in society, that if you're going to be a social critic, then one of the things that you need to be willing to criticize is technology. I think that's a fascinating description of part of Weizenbaum's
career and how he kind of develops this perspective. I wanted to kind of go back to a few of the points
that you made there. And first, you know, when it comes to ELIZA, creating ELIZA and observing the way that people react to ELIZA. You know, as I recall from your article, you write that Weizenbaum was kind of, I guess, surprised at how willing people were
to be deceived by the technology, how they almost wanted to be deceived. And then that's on like the
user side. But then on the flip side, there is, you know, the people who are actually developing
these technologies, you know, his peers at the time,
who he describes as being entrenched in a technological metaphor and who he criticizes
for failing to first ask whether it was appropriate to delegate certain tasks to machines.
And so at a time now, in the present where these kind of critiques of these digital technologies
that we're dealing with are happening, and we're thinking about the way that they're used, looking back at what
Weizenbaum was saying, you know, all those decades ago feels incredibly relevant. It feels
like something that you could read today, and it still is very applicable. And, you know, even then
where he's talking about artificial intelligence back in the 60s, you know, it just shows how long
some of these technologies have been worked on that, you know, we're still talking
about the development of artificial intelligence today. And we still haven't reached that point
where, you know, it's this kind of human level intelligence that is constantly promised,
but never arrives. Yeah. And I think that this brings us back to a point that you
raised in kind of your introductory comments, that there are
things in Weizenbaum's work, some of his essays, some chapters of Computer Power and Human Reason,
he was not like the most prolific writer. He didn't generate like mountains and mountains
of articles or mountains and mountains of books. So you really can get a sense of kind of the whole of his written work. But in a lot of his stuff, a lot of it holds
up really, really well. I mean, almost frighteningly well. There are chapters in Computer Power and
Human Reason, especially some of kind of his later chapters, where he's talking about computers as
metaphors. He's talking about the artificial intelligentsia. He's talking about the ideologies
that wind up getting attached to computers that hold up as well now as they did in 1976.
I think that an argument could even be made that some of his stuff holds up better now than it did
in 1976. Because some of the points that he's making about computers and society, you know,
the personal computer wasn't really a big thing yet. Smartphones didn't exist yet. The everyday
level of interaction that the quote unquote, average person had with computers in 1976,
versus the level of daily interaction that many people have with computers and computerized
systems in 2021 is very different. So I think that one of the reasons that perhaps it was easy for
him to be dismissed or to be kind of laughed at is that he's making these predictions and he's
making these comments about what computers are going to do to the world,
what computers are going to do to society. And he's making these predictions, and they're
predictions. You know, he's saying, these are the types of things that might happen.
But when you read his work today, a lot of these things, they have happened. You know,
it's no longer a matter of speculation with many
of his claims, like we kind of can see it in our world around us. And the point that you made about
how when he's talking about ELIZA, and how he's kind of shocked at some of the reaction to it,
one of the things that he keeps kind of struggling with is how willing people are to be deceived,
how willing people are to believe that the computer does all of these things that it
doesn't actually do. There are a couple of chapters in Computer Power and Human Reason.
These are actually kind of dry chapters. And in the introduction, he even suggests that people skip over them.
But there are two fairly technical chapters in there where he's really trying very hard to
explain to readers, okay, this is how a computer actually works. This is how the program actually
functions. It doesn't understand. And he has this idea that's kind of in the back of a lot of his work
and thinking that if you explain to people how the magic trick works, they aren't going to be
fooled by it. If you show them that it's sleight of hand, if you show them it's all done with mirrors,
then they're no longer going to be fooled by the illusionist. But what he realizes is that a lot of people want to be fooled.
They want to believe in the magic.
It's such a fascinating observation.
And like, you know, yet again, it's something that just feels so relevant, right?
And when you're talking about the predictions that Weizenbaum is making, like it just made me think back to what Tim Maughan said when I spoke to him in December. And he was saying that, like, you know, if you just understand how capitalism
works, or how these systems work, and then see how things might progress, like your predictions
can actually be quite accurate, if you just, you know, have a proper critical understanding of
what's actually going on here. But obviously, you know, Weizenbaum is not the only person
making these critical observations at the times that he is, right, especially back in the 60s and the 70s when he begins to do this kind of work.
You talked about how Weizenbaum kind of existed in a community of other people who were making critical observations about computers and these various technologies at the time and how he developed a particular relationship with Lewis Mumford, you know, who was a critic of technology, but also cities and, you know, many, many other things. So can
you talk a bit about the kind of community that he was within there, and the particular relationship
that he developed with Mumford? At the time that Weizenbaum is writing, at the time that Weizenbaum
is, you know, publishing some of his thinking, he's kind of coming in at the end of the 60s. You know, again, ELIZA's 1966. We're kind of in the long 70s here, as some people have discussed them. And in this period, there is some kind of percolating tech criticism that, you know, you can find out there in the
society. We have lots of different movements that are going on at the time, civil rights,
women's movement, student movements. All of these are kind of happening in this period.
And many of them do direct some of their critiques, some of their attention to greater or lesser extents in the
direction of technology. Matt Tierney wrote a really great book called Dismantlings, which does
a lot to capture tech criticism in the long 70s. And Weizenbaum, one of the things that makes him
significant in this period, one of the things that makes him kind of stand out as a critic,
is that Weizenbaum is a computer scientist. He is a computer scientist working at MIT.
You know, he isn't some long-haired hippie, although he did have long hair. You know,
he isn't some kind of humanist, romantic figure who's like dreaming of, you know, turning everything off.
Some of what makes Weizenbaum's critiques harder to dismiss, and some of the reason why people
within the computing community are so frustrated with him, is that he's kind of the warning coming
from inside the house. But one of the people who Weizenbaum is quite friendly with, one of the people who he is close with, somebody who is an influence on Computer Power and Human Reason, and somebody who Weizenbaum is actually talking to as he's working on the book, is the social critic Lewis
Mumford. Lewis Mumford is kind of one of these figures who it's a little bit hard to just kind of easily call a technological critic. Or is he a historian? Or is he, you know, a public intellectual? Over the course of Mumford's life, Mumford, unlike Weizenbaum, was just constantly writing. He wrote so many books, so many articles, that it's kind of an overwhelming amount of content. And Mumford is somebody who is critical towards just about everything. And Mumford is primarily training his kind of critical attention towards technology and cities. These are kind of his big things. Mumford is older than Weizenbaum. Mumford is born in 1895. And by the time that Mumford and Weizenbaum's friendship kind of picks up, by the time they really start corresponding, Mumford has been at this for decades. And Mumford is criticizing computers
kind of at a point where I think that many people still weren't aware even what computers were.
So in one of his books, The Transformations of Man, which is published in 1956, Mumford is already warning about what he's calling
the new cybernetic god, that with the creation of computers, people have created a new god,
a new all-seeing eye that they're going to worship. And throughout the following years,
after 1956, in his work, Mumford is continuing to criticize computers,
so that in his kind of final opus of sorts, the two-volume Myth of the Machine,
Mumford is warning about the danger of computer-dominated society. And he's warning
people about the danger of computer-dominated society in 1970. So at a point where the idea of that
still seems, you know, kind of silly or a little bit laughable, Mumford is already trying to
sound the alarm on that. And Mumford and Weizenbaum are friendly, they're corresponding
with each other. And in the correspondence between the two,
there's a sense, and this is something that I've also found in a lot of Mumford's other
correspondence that I've looked into, of these social critics really trying to sound the alarm
and wondering why it is that they're sounding the alarm, some of the things that they're warning are going to happen
are happening, and they kind of don't get listened to. So there's this shift that you see in kind of
the relationship between Mumford and Weizenbaum right around the years when Weizenbaum's book is published. So when Computer Power and Human Reason comes out, Weizenbaum at first is, you know, very pleased with the reception, and that colleagues of his
are telling him that they agree with his points, and students are interested in what he's saying.
But then, within a couple more years, that high goes to quite a low as Weizenbaum starts feeling like,
so I said all of these things, I issued this warning, and nobody actually listened to me.
Mumford was kind of an expert at not being listened to. Kind of a running theme that I think percolates a lot of Mumford's frustration
is a bit of a sense that he keeps trying to say to people, hey, this is going to happen.
And then people just kind of laugh at him. Mumford was somebody who was repeatedly referred to
as a prophet of doom. Someone once memorably, in a review of one of his books, referred to him as our most
distinguished self-flagellator. And I think that for Mumford, and I think that this is also true
of Weizenbaum, and I think this is true of many other social critics, there was this sense that
they weren't warning about these things because they wanted the bad things to happen. They were warning about
these potential threats because they were desperately trying to get people to change
direction. They were trying to kind of wake people up. And they kept finding that their
efforts weren't exactly panning out the way that they hoped. I'm just reflecting on what you said there and how that seems to be the case so often
when, you know, there's criticism of systems that, you know, are moving forward and that
obviously capital and, you know, certain other powerful forces in society want to see move
forward.
And then, you know, even though the criticism is there, it gets pushed down or ignored or it's not fully taken into account until it potentially becomes too late, which is obviously something that Weizenbaum was warning, you know, his colleagues about back in the 60s and 70s.
So we're talking about this relationship between Mumford and Weizenbaum in the 70s.
But then criticism doesn't go away.
But personal computers, and later the internet, keep evolving, you know, through the coming decades before the period that we're in right now, which is this kind of fever pitch where criticism really doesn't make it through at all in a certain way,
or it's really difficult to get through, you know, what we, I think, have experienced post-recession,
maybe even a bit before that with the dot-com boom, things like that.
Was it still so difficult to get people to listen to these critical perspectives on these technologies back in that period? Or do you think that that gets worse over time as there's even more of an economy that develops around
these technologies and it becomes so core to the growth of capitalism?
So I think that when you look at the people who are being critical of technology in the 20th century, in the middle of the 20th century,
many of them definitely feel that their critiques, their criticisms are going ignored, going unheard.
I think that they definitely feel like they are having a hard time getting their views across.
So this is a slight shift, but still vaguely on topic. You know, somebody else who
Lewis Mumford is very good friends with is Erich Fromm. And in the letters between Fromm and
Mumford, one of the things that comes up is that Fromm is frustrated that when he writes books like
The Art of Loving, he's very popular. And then when he writes books like The Revolution of
Hope, or when he writes books that are more critical, that also include a lot of critiques
of technology, people don't want to read those. People don't want to hear it. I think that there's
a very clear sense here, and this gets at what you were saying about capital in this period,
that people don't want to hear a message that
tells them that the life of plenty they are enjoying is bad. People don't want to feel
guilty about their shiny new gadgets. And one of the things that a lot of consumer technology allows is that, when you get to moments where people are skeptical of, or losing faith in, the degree to which actual social progress, societal progress, is happening, well, it's very easy to pin your hopes then on technological progress. Because,
yeah, maybe things in social and political and economic
spheres aren't improving for you. But hey, look, you just got a new gadget. And this new gadget
is better than your old gadget. And doesn't that tell you that like, things are improving,
because technological things are improving. And this is one of the things that I think
Mumford really does a phenomenal job of understanding and nailing down.
So as I mentioned before, Mumford writes a lot. There are dozens of books by Mumford, mountains of articles. He has massive,
massive amounts of correspondence with tons and tons of people. It's really an overwhelming
body of work. And so within that, Mumford has kind of a tendency to like, in one article,
he'll coin like a really interesting term or idea, and he'll kind of like flesh it
out a little bit in that article, and then like he won't ever turn back to it. But one of the pieces
that is a really, really significant one is an idea that he develops and he puts forth and he
keeps working with, which is an attempt to explain why it is that people are willing to go along with the technological
system that is confronting them. In the face of these destructive and alienating and controlling
and impersonal and environmentally destructive technologies, why is it that people are
complacent? Why is it that people are kind of just going with the flow?
And what Mumford argues is that technology, the technological system here, it bribes people.
And his argument is that the kind of technological system, the companies who are engaged in it,
the people who are behind it, they get people to agree to go along with it
by offering them a share in the benefits of the new technology. They get to enjoy all kinds of
exciting consumer gadgetry and so forth that allows them to feel like they are the ones who are benefiting, like they're getting
something out. So sure, there's still, of course, the control and the alienation and all of those
things. But people feel like they are getting something out of it, too, like they're getting
part of the good thing here. And the argument that Mumford is kind of making is that
what the bribe does is it gets people to feel like they are even more reliant on these systems.
It gets people to feel like they really like these systems because they see themselves as
the beneficiaries. They see themselves as living a life of enjoyment and plenty that is contingent
on these technical systems. So they then don't want to deconstruct these systems or break them
apart because they're enjoying the stuff they're getting. They're enjoying the bribes that they're
receiving. And one of the big concerns here that Mumford has is that the bribe is what's
needed early on when the system is kind of new. The bribe is what is offered in order to get people to
accept what otherwise they might push back against. But over time, the bribe functions to kind of bring people into that technical system.
And even though it will keep churning out periodic bribes to keep people hooked, to keep people happy, to keep people kind of sedentary, the concern that Mumford has is that over time, what this massive technological system does
is it kind of removes alternatives to itself.
You know, it kind of becomes the dominant thing from which there is no escape.
That the thing that begins as a bribe eventually becomes a shackle.
The thing that begins as kind of, ooh, this is great. Ooh,
this is exciting. Ooh, this is fun, eventually becomes the thing that you cannot live without.
It eventually becomes a requirement. I wrote a piece a while ago for Boundary 2 for their issue
on the digital turn where I talked about Mumford's idea of the bribe.
And I tried to kind of push it a step further and argue that after the bribe comes blackmail.
So at the beginning, the bribe is what kind of brings people in. But then eventually,
you wind up in a situation where it's more like blackmail. And you have to keep using these things, not anymore because of
the benefits, but because if you stop using them, you're going to miss out. You're going to be left
behind. You are no longer going to be able to participate in society. But so Mumford's big
thing with the bribe, the kind of thing that this idea really, really does, is it gets at that
question of why. It gets at that explanation. And it's a really discomforting idea. It makes
people uncomfortable. I'll admit that when I first read it, it made me feel uncomfortable,
because I think that a lot of us don't like to think about things like our
smartphones, or social media, or our laptop computers as being bribes that convince us to
then kind of be complacent participants in the broader society. But I also think that to a very
strong extent, Mumford's idea of the bribe does a better job of explaining
why people are so hooked on their devices and why people are so hesitant to give them up
than many other things. And another important thing to just bear in mind here, you know,
for a bribe to work, something of value needs to be offered. You know, a bribe doesn't work if it's not giving people something
that they want, something that they see as valuable, something that they see as worthwhile.
And so Mumford isn't saying that all of this technological gadgetry, all of this kind of
share of technological power is worthless. He's not saying that it's without
value, but he's saying that once people accept this, it kind of freezes out their ability to
make other choices. It remakes society in the image of this technology, and then it kind of
reduces the availability of other options.
Yeah, you know, when I was reading that article that you wrote about Mumford and, you know,
the bribe, and where he's describing the computer as, you know, authoritarian technics and things like that, what you described in that extension that you made to discuss the blackmail
and to discuss, you know, how these things relate to the technologies that we're using today, I think,
was really kind of not only relevant, but really tried to identify, I think, something that's really important, as there are these discussions today about what to do about these massive tech
companies, what to do about these social media platforms, you know, whether to break them up,
etc, etc. Recognizing that aspect of how they bring people in, I think is really
important to that discussion, right? Because if they still have this bribe that they can offer,
then they are still going to be able to bring people within their system and keep them within
the system, especially if that bribe turns to the blackmail. And I think that also provides us a good
kind of bridge to the tech criticism that we're seeing today. You know, there was this kind of techlash or whatnot through 2015, 2016, where we started to
see more of this kind of turn on Silicon Valley and the technologies that they have been putting
out into the world, you know, where this kind of techno-optimism, you know, I think got kind of a dent in its image. And, you know, more critical perspectives on these technologies
have been coming out. But, you know, some of these criticisms I think are in the vein of,
say, a Weizenbaum and a Mumford that are really questioning the technologies, what goes into them,
et cetera, et cetera. But, you know, there has also been discussion of another form of tech
criticism that you've also talked about that is more based around questioning
some of the aspects of these systems, but not kind of the larger ideology, I guess, that would come
out of Silicon Valley that's associated with these technologies that seems really distinct from,
you know, what you're describing Weizenbaum is doing as someone who is within the industry and
is really kind of sounding the alarm. So, you know, I wonder, what do you think of this kind of thread of tech criticism that we're seeing? And how do you relate that
back to what you're talking about in the past with Weizenbaum and these other critics?
So I think that this is like a very complicated and multifaceted issue, as I mean, are all of
these. And I will openly admit that I think that I have some skepticism towards the
techlash argument. I kind of think that the techlash argument is an argument that Silicon
Valley helped push itself in order to make itself look embattled. Silicon Valley really likes to
think of itself as the rebel alliance, even though it has
definitely become the empire.
It wants to do everything possible to make itself seem small and rebellious and embattled
again.
And I promise that's the only Star Wars reference I'm going to make in all of this.
But for me, when I look at kind of the shift that has happened, I really think that the moment that tech criticism shifted, at least in the United States, was when Trump was elected.
I really think that that was the moment at which especially kind of progressives and liberals started much more openly turning a critical eye
on the tech companies. I often joke, you know, on social media along the lines of like,
I'm old enough to remember when criticizing Facebook got you accused of being a Luddite.
And by old enough to remember, I mean 2015. I mean, it really wasn't that long ago
that Facebook and Google, these were seen as progressive liberal companies. And I mean,
here, obviously, there was definitely criticism from the left, but the perception of the tech companies was still like fairly favorable. And I
think what really, really changed things was Trump's run for office and then Trump's election.
I think that a lot of people really felt betrayed by Facebook, that they thought that like Facebook
was theirs. They thought that it was a good progressive company because people
attach progressive values to technology. They think that technological progress is synonymous with
social progress. And then suddenly, when they are forced to reckon with the ways in which
technological power is reinforcing the rise of all of this reactionary power, they feel betrayed. And I think that one of
the things that informed a lot of the newfound criticism was a sense of betrayal, a sense of
frustration, a sense that these companies that we know, and we like, and we use, how can they be bad? Because we know them, and we use them, and we like them. And suddenly, you know, they're helping empower Trump and all of these other groups.
And people recognize that as Facebook users, as Google users, as, you know, name your platform,
name your company, people don't like to feel bad about the things they're using.
People don't like to feel guilty about the things they're using. People don't like to feel guilty
about the devices they're using. You know, I think it's one of the reasons that people hate it when
conversations come up about like e-waste or the labor practices that are part of, you know,
the creation of technological gadgetry or, you know, coltan mining or things like that.
People don't like to look at their smartphone and feel bad about themselves. People don't want to log on to Facebook and feel like they are
participating in a system that is helping to empower fascism. People don't like to feel like,
oh, I watched a cat video on YouTube, and somehow I'm contributing to kind of the way that YouTube will then push
right-wing content or something. And I think that it created a space where people could be more
openly critical of Facebook, people could be more openly critical of Google, people could be more
openly critical of Twitter. But I think that a lot of the critiques that start being publicly acceptable
are critiques that are very surface level in some ways, where people are going to critique Facebook,
but they don't want to talk about the ways in which Facebook is a reflection of how computers
are built, how computers work, how, you know,
computer systems are designed. People are willing to criticize YouTube, but there's the bigger
question of like, the technological society, the technological order, the technological
arrangement of our lives and our society beneath this. And so my concern, and one of the things that I've kind of
written about when I talk about some of the current tech criticism, is: are we talking about specific platforms that we don't like because we think that the problem is just this platform, and if we get rid of this platform, we can have a new platform that's going to be better? Or is there a question, is there a discussion, that's really about what kind of
technology do we want as a society? What kind of technology is going to produce what kind of a
society? And that's kind of one of the things that I think is an important split between some of
this kind of current tech criticism. And just to be clear, here, I'm talking a lot about kind of
like pop tech criticism. There is and has been tons and tons of wonderful work also taking place.
But the points that Weizenbaum gets at, the points that Mumford gets at, the points of criticism that a lot of critics of technology throughout the 20th century, and many today also, are trying to push at, are deeper questions about computers. You know, what are the historical moments and
pressures out of which computer technology as we know it emerged? What kind of biases and values
are built into computers themselves? If we get rid of Facebook, but aren't willing to look at the broader ideology and belief in technology beneath it,
then we're just going to have another Facebook, and it's just going to have most of the same
problems. It's one of the reasons that Weizenbaum didn't like to see himself as a critic of
technology. He saw himself as a social critic. The question isn't about this or
that platform, this or that specific gadget. The question is a broader one about what kind of
society do we want to live in? What kind of tools and technologies are going to help that society, and what kind of technologies are going to hinder it?
Mumford's big concern is that these massive technological systems gradually remake society
in their own image. So the question is, if we want to make society in a different image,
then what technologies would be in that society? You know, it's a different set of questions. It's a different way of approaching it. Obviously, I completely agree
with you. You know, I think what you are outlining there about, you know, the tech criticism that
we're dealing with right now that is looking at these platforms and these other technologies that
we're using right now in a way that, you know,
is not really getting to the root of the problem, but it's just looking at these particular aspects
of them that have offended, you know, particular liberal sensibilities in recent years is
particularly concerning, especially when we are focused on, you know, the broader effects of these
technologies on our societies. And I think what you're saying there about Lewis Mumford and about what he said about how these technologies
then transform society. And, you know, I think we can see that with the computerization that
happened, the rollout of the internet after it, you know, mobile technologies, the gig economy,
how these things have transformed work, the ways that we communicate, our interactions with one
another, the way that we exist in physical space even, you know, and how those technologies have
enabled a completely different way of living in this world. And if we're going to think about,
you know, remaking those interactions in a more positive way, it needs to go beyond simply looking
at, you know, the rough edges of these platforms or wanting to replace them with really similar ones that just don't disrupt these sensibilities,
but, you know, will likely require a whole new set of technologies that have a different
idea of the world built into them.
And, you know, to that point about what you were talking about and what we've been talking
about through this entire interview, what do you think is the importance of understanding this history of tech criticism, especially as we're thinking about, you know, ensuring a better world? As a historian, I am required to say that history matters.
But I think that it's really important for people who are interested in criticizing technology,
people who are just interested in technology in general, to dig into some of this history.
Because one of the things that you find is that a lot of the current critiques of technology
that you encounter, people have been making those same critiques for a long time.
A lot of the things that Weizenbaum is saying, a lot of the things that Mumford is saying,
we could rattle off the list of lots of other technological critics as well.
These things are almost unsettling in how closely they
resonate with present concerns. And so when we see current popular tech criticism that takes a stance
of like, oh, people are critical of computers now. And isn't that new and funny because people have
never been critical of computers before. It's really important to be able to respond with, no, people have been critical of computers for as long as
there have been computers. We need to be paying attention not just to the history of the technology,
but the history of the social response to that technology. We need to be mindful of what the
critics are warning about at the start, because a lot of the times, the critics are right. And a lot
of the times, the critics who are willing to see the bad as well as the good, have a better idea of where things are going than the people who are only
willing to see the good. And this stuff is also just fun to read.
Absolutely. You know, I couldn't agree more. And I think that those are really great,
you know, kind of words of advice to leave us with. So Zachary, I really appreciate you taking
the time to come on the show to give us all this information about the history of tech criticism, and about Weizenbaum and Mumford as well. So thanks so much.
Thank you so much. Great to be here.
Zachary Loeb is a PhD candidate at the University of Pennsylvania, and you can follow him on Twitter at @libshipwreck. You can follow me at @parismarx, and you can follow the show at @techwontsaveus.
Tech Won't Save Us is part of the Harbinger Media Network, and you can find out more about
that at harbingermedianetwork.com.
If you want to support the work that I put into making the show every week, you can go
to patreon.com slash tech won't save us and become a supporter.
Thanks for listening.