Tech Won't Save Us - AI Criticism Has a Decades-Long History w/ Ben Tarnoff
Episode Date: August 24, 2023. Paris Marx is joined by Ben Tarnoff to discuss the ELIZA chatbot created by Joseph Weizenbaum in the 1960s and how it led him to develop a critical perspective on AI and computing that deserves more attention during this wave of AI hype. Ben Tarnoff writes about technology and politics. He is a founding editor of Logic, and author of Internet for the People: The Fight for Our Digital Future. You can follow Ben on Twitter at @bentarnoff. Tech Won't Save Us offers a critical perspective on tech, its worldview, and wider society with the goal of inspiring people to demand better tech and a better world. Follow the podcast (@techwontsaveus) and host Paris Marx (@parismarx) on Twitter, and support the show on Patreon. The podcast is produced by Eric Wickham and part of the Harbinger Media Network. Also mentioned in this episode: Ben wrote a long article about Weizenbaum and what we can learn from his work for The Guardian. Paris wrote a skeptical perspective on AI hype and the promises of ChatGPT in Disconnect. Zachary Loeb has also written about Weizenbaum's work and perspective on AI and computing. Support the show
Transcript
He begins to feel that AI as an ideological project is actively harmful, not just too
ambitious, not just a bit unrealistic on what can be achieved, but that it actually has a
sinister social and political dimension. That's what he begins to dig into in the course of the
1970s. Hello and welcome to Tech Won't Save Us. I'm your host, Paris Marx, and this week my guest
is Ben Tarnoff. Ben writes about technology and politics. He is a founding editor of Logic
Magazine and the author of Internet for the People,
The Fight for Our Digital Future.
Now, Ben recently wrote an article in The Guardian about Joseph Weizenbaum, who might
be a name that you're familiar with, or maybe it's not.
But he was kind of one of the pioneers of AI technology and built this chatbot called
Eliza back in the 1960s that led him to develop a much more critical stance on
technology and artificial intelligence and computers after he created that and saw the
response to it and saw how people kind of related to computers and believed that computers could
have the kind of intelligence that a human could have. Ever since this kind of big boom in AI hype has happened,
since the release of ChatGPT late last year, I've been wanting to have a discussion about
Joseph Weizenbaum and his work to see how his critiques and see how his experiences can inform
what is going on today and the kind of narratives and discussions that we're having about AI today,
because I do think that there's a lot that he learned and that he wrote about that is very
informative to this moment. And that has not totally been sidelined, like there has been some
writing about it. But largely, when we hear, you know, people like Sam Altman, and these other
influential folks in the AI industry talk about what their products can do and talk about their
kind of vision for AI, it seems like a lot of these learnings that we should have from, you
know, a much earlier stage of this technology and this thinking about what computers should be doing
is really not present in a lot of those discussions and are not being kind of brought into this broader
discourse and discussion about AI and the role that it should actually serve in our societies.
And so that is why when Ben wrote this piece, I figured I had to have him on the show to talk
about it because these ideas are just so important to everything that we're talking about in this moment and really push back on a lot of the ideas
that people in the AI industry have for what ChatGPT and their image generators and all that kind
of stuff should be doing. And I think that in this discussion, you'll find a lot of links back to
other discussions that I've had with people like Timnit Gebru and Emily Bender and Dan McQuillan and Molly Crabapple about AI and
the potential impacts that it might have in our world. So I really hope you enjoy this conversation.
You know, I really, really did. If you do like it, make sure to leave a five-star review on Apple
Podcasts or Spotify. You can also share the show on social media or any friends or colleagues who
you think would learn from it or kind of benefit from hearing this kind of conversation. And of course, if you want to ensure that I can
keep doing this work, that I can keep having these critical conversations with people like Ben,
who provide just these fantastic perspectives that, you know, you're probably unlikely to hear
in many other places, make sure to join supporters like Annie from Richmond in California,
David from Malaga, and Tim from Baltimore by going to patreon.com slash techwontsaveus, where you can
become a supporter as well and help support the work that goes into making the show every single
week. So thanks so much and enjoy this week's conversation. Ben, welcome back to Tech Won't
Save Us. Thanks so much for having me, Paris. Big fan of the show, so it's always great to be here.
Thanks so much. A big fan of your work,
of course, you know, from Logic Magazine to all the other writing and stuff that you've been doing,
your books and everything else. You know, you're an essential contributor to the critical perspective
that is so essential on the tech industry. So it's always great to have you back on the show.
And I'm very excited to discuss what we'll be discussing today. I appreciate that. So you had
a recent piece in the Guardian discussing kind of
the life and work of Joseph Weizenbaum. Now, he is a figure who we've discussed on the show before,
you know, he's kind of come up in some of these discussions around AI that we've been having in
the past number of months, you know, as we're in this kind of moment of AI hype, and his Eliza
experiment or software in particular, or chatbot, we might say today. But you know, we discussed him in the past with Zachary Loeb as well, if people want to go back and listen to that. But I think
that there's a lot to get into, especially in this moment, you know, ChatGPT is getting all
this attention and stuff, because Weizenbaum's work, it's very kind of linked to what is going
on today, and I think provides a lot of lessons for it. So I just wanted to start by asking,
you know, for people who aren't familiar, who was Joseph Weizenbaum and why do you think his work continues to be relevant?
Well, before I step back to give you his full biography, and I can give you as much or as
little as you would like, I have acquired probably more information about his backstory than anyone
really needs to know. But the reason that he is in the conversation these days, and the reason
that his name is appearing in places like the New Yorker and The New York Times over the past year or so is because of ChatGPT.
And ChatGPT has obviously generated a lot of interest in chatbots, which are not new.
In fact, Weizenbaum is commonly credited with creating the first chatbot in 1966, called Eliza. But ChatGPT has, let's say, renewed interest in
chatbots as a conversational interface to something we might call AI, bracketing for a
moment that AI is this contested and often ill-defined term, so I'm always putting quotes around it. But nonetheless, ChatGPT gives us a character that we can interact with, which is underneath a large
language model, which is in 2023 defined as AI. So, Weizenbaum as a figure has attracted
greater interest, both because he created the first chatbot, but also because
partly through the experience of the reception to Eliza, which did make a big stir in 1966 when it
was released, he began to develop this broader critique of AI, which again, sidebar, meant
something a little bit different at a technical level then than it does now, but nonetheless develops this critique of AI and really of computation more broadly
that occurs over a series of articles, but really culminates in a 1976 book called Computer Power
and Human Reason, which is his magnum opus. So that's why folks are talking about him. And this article for the Guardian Long Read was
my attempt to intervene in that conversation and say, hey, Eliza is really important and very
interesting, and we should revisit that as folks are doing. But also, there's a lot more there.
Eliza for him was actually a starting point to developing this broader critique that is of immense and urgent relevance for today.
Yeah, I'm really happy you outlined all of that because that's basically what I want us to talk about in the conversation, right?
Not just Eliza, but this broader critique that he developed as a result of creating this chatbot and then seeing how people were interacting with it and responding to it, and just kind of developing these ideas around technology at this time. Do you want to give us
just a brief idea of his early years and then how he actually got into doing work with computers and
what we call AI now? Sure. So let me just give you a brief chronology of his early life.
So Joseph Weizenbaum was born in 1923 in Berlin. He's a German Jew. His father
came from Eastern Europe and established himself in Berlin, becomes a moderately successful furrier,
someone who creates tailored fur clothing for women, and secures a somewhat secure foothold
in the upper middle class of the German Jewish community in Berlin.
Weizenbaum's father marries a much younger Viennese woman and has a relatively
successful shop in Berlin. Incidentally, or just kind of parenthetically, although it will have
immense consequence for Weizenbaum's later life, his father is quite
abusive to him, both physically and verbally. Tells him he's worthless from day one. And this
is related to the mental health challenges that Weizenbaum develops from an early age,
which will be of great consequence to him, not only personally, but for his intellectual project,
because it helps stimulate his interest in psychology and in psychoanalysis.
In 1936, Weizenbaum and his family leave Germany for the United States.
The Nazis are now in power.
They have passed a number of anti-Jewish laws.
As a result, Weizenbaum is forced to drop out of his public high school and move to a Jewish school where he
meets a number of much poorer Jews, so-called Ostjuden from the East, and develops a very
intense friendship with one of them. But all of this is cut short when the family decides to leave
Germany for the United States. They end up in Detroit in 1936 because Weizenbaum's aunt had a bakery there,
so that's where they had somewhere to stay. His father reestablishes his practice,
sets up a shop in Detroit. Again, kind of rejoins the middle class, one could say.
And Weizenbaum ends up studying at what is now Wayne State University, what was then Wayne
University, which is a working class, local
public university in Detroit in the 1940s.
Weizenbaum then serves in the army from 1942 to 1946, which he experiences as a kind of
liberation from his family where he is very unhappy.
And in the army, he gets to travel all over the country.
He gets a degree of independence.
In the course of one of his furloughs back home in Detroit, he meets and marries
a woman named Selma Goode, who is a Jewish socialist who will go on to be one of the
early members of the Democratic Socialists of America. She's very involved in the left-wing
activism of the time, the 1940s being a kind of high watermark for the American
left, period of very intense class struggle. They're in Detroit. This is the period in which
the UAW is in the process of winning its first contract against Ford. So there's a lot going on.
This is a period of very intense social mobilization and Weizenbaum gets caught up
in that, develops a left-wing politics as a result. Now, in the late 1940s,
he and Selma get divorced. This is an extremely painful experience for him because by that point,
they have a baby boy and they decide that Selma is going to take the boy to raise on her own.
Out of this experience of heartbreak, he goes into psychoanalysis for the first time and also goes into computing for the first time. This is of significance for his later career that his first encounter with
psychoanalysis is happening around the same time as his first encounter with computing.
His first encounter with computing, however, is accidental, serendipitous. He's studying math at
Wayne University. His professor decides that he
wants to build a computer. This is the late 1940s. The modern computer, the kind of architecture,
what we now think of as the von Neumann architecture, this is all kind of starting
to get consolidated in the late 1940s. It's obviously still very difficult to acquire a
computer, right? You don't just go to the Radio Shack.
I guess that reference itself dates me. But you don't go on amazon.com or wherever one buys
computers these days, I wouldn't know, and just buy it, particularly for a working class university
in the middle of Detroit. So, they decide to build one. And this is an extremely exciting
experience for Weizenbaum that helps actually heal his heartbreak in a way. It brings
him out of his pain, connects him to what will become a passion for him, which is computation,
which gives him a sense of self, a self-esteem, a sense of self-worth that has been so missing
in his family life with his father telling him he's worthless all the time. He becomes quite
good at computers. It really fits. He ends up marrying a woman named
Ruth Mainz in the early 1950s and then becomes an early computer engineer and programmer.
He works for General Electric in California in what will later become known as Silicon Valley,
where he develops a project for the Bank of America that helps them
use a computer system
to automatically process checks. By 1963, he is invited to become a professor at MIT.
He experiences it as a great honor because it means he's reached a stage of his career
where he can be invited into this high temple of technology where MIT is the
epicenter of the emerging discipline of computer science in the United States at the time.
Now, before I pause, because I've been talking for a long time, let me just tell you why did
he join MIT? What was the context for him to join? The context for him was that in 1963,
an initiative was launched at MIT called Project Mac, M-A-C. This was funded
by the Pentagon's R&D arm, which at that point was known as ARPA, later renamed DARPA.
And they were getting millions of dollars at MIT from the Pentagon to work on interactive
computing. And in particular, to perfect a technology that was known as time sharing.
Now, what does that mean? I promise I'll be brief. To understand why these things
were revolutionary, we actually have to take a step back and talk briefly about what did
computers look like when Joseph Weizenbaum was getting really into computers? Well,
you had punch cards. If you were writing a program, you had to encode that program onto the punch cards and then run these punch cards through the computer. This was known as batch processing. And typically what would happen is you would bring your program to the operator, they would run it overnight, and you'd come back in the morning to see if it ran correctly. Now, anyone who's ever done any kind of programming might imagine that this is a very painful way to develop a program, right?
If you sit down in Python and you're kind of at the interactive shell or whatever, you can just
try out, hey, does this throw an error? Or if I just write up a little script and run it,
does that run? Imagine having to wait a whole day and come back in the morning and say,
oh, there are all these errors. It's a very slow, very difficult way of working with a computer.
The idea with Project Mac was what if we could create a more conversational way of interacting
with a computer, which would not just help us develop computer programs faster and more
efficiently, but would also create a new kind of human-computer interaction.
And there was one figure in particular, very well known then and since, J.C.R. Licklider, who was at MIT and had moved to ARPA and was instrumental in getting Project Mac funded,
for whom this idea of creating a more interactive experience of computing
was absolutely essential. So that's the context in
which Weizenbaum was hired into MIT. And that is crucially the context within which, and quite materially because they provide the funding, Eliza, the first chatbot, is developed.
I appreciate you outlining all that history and even noting the significance of those pieces
to what he later does and to the work that he ends
up doing. It leads us really well into talking about Eliza. But before we do that, there are a
couple things I just want to pick up on there, you know, in what you described, because I think
that is interesting in understanding kind of Weizenbaum's perspective and his approach toward
this, and just trying to ensure that we understand, you know, how he is approaching this work as he
goes into Eliza. You know, one of the things that you described in your piece was how Weizenbaum had kind of a difficulty with humans and human interaction,
I guess, that probably comes out of his experiences as a child, and how that made the
computer and kind of working with computers something that was appealing. And I think that
that is a familiar story that we hear from people in Silicon Valley or people who work with computers
from time to time.
You know, Mark Zuckerberg obviously comes to mind as someone who's very obvious there.
But then the other piece is also that we're talking about how, you know, he had this critical
stance on technology. But as I understand it from reading your piece, in this moment,
he considered himself to be a real advocate of these technologies and a real believer in what
they could do. So can you talk to us a little bit about that before we go into talking about Eliza? That's a good point, Paris. When you write about
somebody, there's always this risk that you're projecting too much of yourself into them. And
in fact, one of the key concepts that we need to understand in order to understand how and why
people responded to Eliza and respond to chatbots
since the way they did is this idea of transference, which we'll get into later.
But I'm also very conscious of myself as a writer projecting perhaps too much of myself into my
subject. The reason I mentioned that just as a frame is that I work in the tech industry.
And I also have a complicated relationship to technology, one that I think
is probably not unlike the one that Weizenbaum had, which is on the one hand, I love it. I'm
fascinated by it. I want to be around it. I want to know how it works. It has given me a certain
sense of myself, a certain stability, a career in the same way that it has and did for Weizenbaum. On the other hand, I am aware of
the many destructive purposes to which it can be put, to which it's currently being put as he
eventually was. So, for Weizenbaum, I think both he and I would resist this conversion narrative,
where he was a kind of naive believer in the promise of technology and then turned against it.
I think something more complicated was going on, which is that he developed certain political
commitments quite early, really in college in the 1940s. And while his political passions kind
of recede over the course of the 50s and the early 60s and don't really get
reactivated until he joins the movement against the Vietnam War on MIT's campus,
that critical edge never entirely disappears. So even when you're reading articles in which he does
exhibit a certain enthusiasm about Eliza as a chatbot demonstrating the potential for a more enriched
kind of human-computer interaction. I mean, this is his initial idea of what Eliza could be: not really a joke or a critique, but an instance of a more conversational approach
to computing in which we could talk in natural language to a computer and want to talk
to a computer. And thus, through that process, the computer could learn more about us and about
the world. So there is, I think it's quite important to say, a kernel of optimism in the
initial idea for Eliza that nonetheless has a bit of hesitation, a bit of ambivalence attached to it.
Because when he releases Eliza out into the world in 1966, the institutional context in which it is
being received is this field of artificial intelligence, which we can talk about in
greater detail, in which figures like John McCarthy, who had been at MIT and was by then
relocated to Stanford, or figures like Marvin Minsky, who was at MIT at the helm of the AI
project there, had much more audacious and one could say even arrogant views about the potential
for computers to simulate human intelligence, the whole range of human intelligence.
So there are kernels of optimism and kernels of critique as early as Eliza. And then what's interesting about watching him develop over the course of the late 60s and through the 1970s is
how the kernels are still there, but the proportions become different,
where the critique gets turned up and maybe the optimism
gets turned down. But then even towards the end of his life, we have these flashes of optimism about
the possibilities for friendship with an artificially intelligent agent. I'll pause
there. Perhaps I've nuanced it into something incomprehensible.
No, I found that fascinating, to be honest, to think about, you know, the way he was
approaching it and how that approach kind of developed over time and how there was always
these kind of conflicting or both of these viewpoints kind of coexisting together, but then
seeing how they both evolve as his experiences with these technologies change and how he sees
people interacting with the things that he's created. Right. And, you know, you've already
started to kind of bring us into
the ELIZA program or chatbot or whatever we want to call it. So how exactly did that work? And why
was it significant at the time? And I guess, how did it shape how Weizenbaum started to kind of change his views on these technologies and the role that they might play?
So, Eliza was a fairly simple chatbot in which you would sit initially at an electric typewriter,
because this is actually before the era of the computer console with a monitor, right? These
are very early days of human-computer interaction. You sit at this typewriter, you type something in,
and you get a response, right? And the character that Eliza is performing is that of a
psychotherapist. So, the responses are ones that you might hear from a therapist if you're familiar
with that kind of language. Now, why did Weizenbaum choose
to have Eliza perform this role? Well, we know that he has this history with psychoanalysis.
He's interested in psychoanalytic concepts, psychoanalytic themes dominate his work.
So there are those considerations. On the other hand, there's also a very practical consideration,
which is in fact quite funny. For Eliza to perform the role of a psychotherapist, you don't have to encode any knowledge of the
outside world into the program because all you have to do is write these fairly simple
transformation rules that take the input and rejigger it and put it back to the user.
So an example might be, I'm thinking a lot
about my mother. Why are you thinking about your mother? You know, simply turning things back into
questions. And this is funny because if you've had an experience of therapy, often that is the
experience. People are reflecting back and taking you deeper. And as Weizenbaum says in an aside in one of his articles,
in a normal interaction or let's say a non-therapeutic situation, if someone responded
in that way, you would think there's something wrong with them. But in a therapeutic situation,
it actually signals wisdom and depth and knowing, right? That type of transformation,
which is essentially using your own language and reflecting it back to you.
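Weizenbaum's actual program was written in MAD-SLIP, driven by a script of decomposition and reassembly rules. As a rough, hypothetical sketch of the kind of transformation rule Tarnoff describes here (the patterns, responses, and function names below are invented for illustration, not taken from the original DOCTOR script):

```python
import re

# Pronoun reflections so "my" in the input becomes "your" in the reply.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "i'm": "you're"}

# (pattern, response template) pairs in the spirit of Eliza's therapist script:
# no knowledge of the world, just rules that rejigger the user's own words.
RULES = [
    (re.compile(r"i'?m thinking (?:a lot )?about (.+)", re.I),
     "Why are you thinking about {0}?"),
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(utterance: str) -> str:
    """Apply the first matching transformation rule to the input."""
    utterance = utterance.rstrip(".!")
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    # Content-free fallback, also in the therapist's register.
    return "Please tell me more."

print(respond("I'm thinking a lot about my mother."))
# -> Why are you thinking about your mother?
```

The point the sketch makes is the one Tarnoff draws out: nothing here understands anything, yet turning a statement back into a question reads, in a therapeutic frame, as depth.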
So, Eliza produces a very powerful response in people.
And this is a response that I think we could recognize as a transferential response.
This is not a word that Weizenbaum himself uses, but I think it's one that is appropriate, where the response that a number of people have to this chatbot is to
impute humanity, empathy, understanding to the program itself. This is a phenomenon that
Sherry Turkle later calls the Eliza effect. What is most useful about Eliza is not really
the chatbot, which is not terribly sophisticated by the standards of its time, but rather this discovery of the Eliza effect: that we have this
tendency, which is connected to our tendency to project feelings about people that we've known
onto people who are present, which is transference, but that we can do that with computers, actually, as well. We can do that with software. Eliza generates a fair bit of interest at the time. The Boston
Globe writes about it. They send a reporter to go sit and talk to Eliza and they run an excerpt
of the transcript. It generates a fair bit of interest in his professional circles. Weizenbaum,
partly as a result, secures tenure at MIT the following year. But one of the major
responses in addition to this transferential kind of Eliza effect, one in which people feel seen,
feel heard, feel like there's a real person there, another set of responses or let's say
a related set of responses are from the experts who think that this demonstrates a real understanding of natural language,
that this is in fact a promising path for authentic, genuine, what we would call natural
language processing now. And even among some psychotherapists who believe that this indicates
a promising path for automated psychotherapy. And it's really these responses by the people who should know
better, by the experts, that bother Weizenbaum. But again, the timeline of this is interesting
because while the response bothers him, how he processes that takes some time.
This is kind of a moment where you have to be a bit careful about timeframe, where often when people are writing about Eliza, they're reading Weizenbaum's
reflections on Eliza towards the end of his life. At that point, he's seeing things a bit differently
than he did at the time, that he kind of telescopes this process of evolution, where Eliza, again,
later in life, seems in retrospect to him to always have been a critique,
to always have been a kind of parody of artificial intelligence. Whereas at the time,
as we've discussed, it did have this element of optimism of this could provide a promising path
for developing a more enriched form of human-computer interaction. But then the set of
responses that it generated, again, through this Eliza effect
that he essentially discovers, bother him in a way that actually takes years to fully process.
So it sets in motion the threads of thought that will eventually culminate in his 1976
book. But that comes out 10 years after Eliza. And in between, there's a lot of evolution
and development in his thinking. It's really interesting to hear you describe all of that,
and also how the kind of reflections take some time to actually kind of come to a position where
he has this kind of critical view on what this technology meant and how he kind of perceived it
in retrospect. But I think also what you described there, even though you're talking about something
that happened in the 1960s, there still seem to be so many parallels to what we've been seeing
in the past number of months, right? With people interacting with these chatbots and feeling like
it understands them and it's talking to them and whatnot.
And experts saying that this means that we're really close to like artificial general intelligence or something like that. And even the reporters going out and speaking to it and publishing
transcripts of it. Like, it just seems so fascinating that after all that time,
the response can be so similar. And, you know, at least the immediate reaction can have so little of
the kind of learnings or reflections from criticism that Weizenbaum and many others have
made in the decades that have passed since then.
I think that's right. And I think maybe that helps us lead to a point that I would like to make,
which is that the Eliza effect doesn't mean that people are stupid.
Transference doesn't mean that you are stupid. Transference doesn't mean
that you're an idiot. It's not this kind of moral category of like, you idiot. You think that's your
mother? That's not your mother, right? In fact, in psychoanalysis, transference is kind of what
makes psychoanalysis work or is what's supposed to make it work. It's how we bring the past into
the present to try to get some clarity on the distinction between the two. You actually have
to make, again, in kind of a classical theory, you kind of actually have to make the analyst
into this figure that holds all this transferential energy in order to uncover all of the things from
the past and bring them into the present. The reason I mentioned this is because I think there's
a similar perspective that Weizenbaum
brings to the Eliza effect, where even in this original 1966 article, he's very clear that the
software is producing an illusion of understanding. It's not real understanding. It's an illusion of
understanding. It's a more powerful illusion than he had anticipated. People really seem to believe
that this program understands them, that it's actually listening to them. But that illusion can be useful because it makes the user
want to talk to Eliza. And through that process, Eliza might learn something about the world,
that that illusion could actually contribute to a more interesting, more enriched form of human
computer interaction. But he also points
out in that original article, he says a certain danger lurks there, which is that through this
illusion of understanding, we may attribute a certain level of judgment to computers that they
really aren't capable of. So again, it's not that people are dumb or stupid for thinking that this computer is a human. In fact, that sense might be constructive in certain contexts, but there are also certain dangers that we need to be mindful of. And it's the dangers, of course, that he becomes increasingly preoccupied with and which form the centerpiece of his broader critique in Computer Power and Human Reason. Yeah, I think that's a really good point. And we'll return to that in a few minutes.
I wanted to kind of pick up on the fact that obviously Weizenbaum is joining MIT at this time
when the kind of concept of artificial intelligence is growing, is kind of being promoted by people
like John McCarthy and Marvin Minsky, as you mentioned, who are also at MIT, if I have that
right. And their kind of views on computer intelligence or the type of intelligence that
computers can hold seems quite distinct from Weizenbaum's view on computers and intelligence
and whether a computer can ever kind of achieve human intelligence. Can you talk to us a bit about
kind of the distinction between both of those different approaches or perspectives on this term artificial intelligence or this
notion of computer intelligence? Absolutely. This is a distinction that is present from the
beginning, that even if Weizenbaum has not developed the full critique that he will publish
in the 1970s, he's always quite distinct from the AI diehards,
figures like McCarthy and Minsky, who really believe that a computer can precisely simulate
human intelligence. That human intelligence, and by extension, human experience,
is essentially computable. What that meant in this era, particularly for McCarthy,
is that you could encode rules, very elaborate sets of rules. This is the paradigm of so-called
symbolic AI, which is different than the connectionist paradigm of neural networks
that we're in today. But nonetheless, that you could encode rules that would arrive at a certain
simulation of human intelligence that could match or even exceed
human capabilities. Weizenbaum is quite suspicious of this claim early on. There's a radicalism to
the AI project that he is always wary of. McCarthy is credited with coining the term artificial intelligence
in the mid-50s. And there are various reasons that he comes up with that term and why he feels the
need to come up with a new term. But one of them is that he wants to convey the breadth of his
ambition. This is the high Cold War. There's an enormous amount of money on the table for science and technology.
And there's a lot of optimism about what information technology in particular can achieve, which is somewhat reasonable given the relatively rapid pace of development in that period.
And McCarthy, as a result, has enormous optimism about the kind of intelligence that can be developed
in a machine. And this is the kind of optimism that Weizenbaum really never shares.
I think as time goes on, for Weizenbaum, it becomes less about a certain kind of wariness
or a certain suspicion or a certain kind of insistence on the need for more modest
ambitions about what we can achieve in computation into something sharper, into something harder,
into something where he begins to feel that AI as an ideological project is actively harmful.
Not just too ambitious, not just a bit unrealistic on what can be achieved,
but that it actually has a sinister social and political dimension.
And that's what he begins to dig into in the course of the 1970s.
Yeah. And I think that gives us a good kind of bridge to talk a bit more about those
wider ideas as well, right? This kind of broader critique that he develops over time
as he's kind of reflecting on these experiences
and these issues. And one of the things that really stood out in reading your article was
that Weizenbaum wrote that he believed the computer revolution was ultimately
a counter-revolution, right? Something that was fundamentally conservative.
And that goes against a lot of the narratives that we have around kind of personal computers
and the internet as being this moment of empowerment where the individual is kind of
getting all of these kind of additional abilities to enhance their capabilities or their skills or
whatever, right? Why did he believe that? And what is the kind of importance of recognizing
the computer revolution in that way? In many ways, it's his most provocative idea. And it's one that I am both very drawn to,
but also struggle with. And I think it's worth saying that Weizenbaum is a writer that one
struggles with. He's a challenging writer to read, I think. Not in the sense that he uses a lot of
technical language or a lot of jargon, but that his thinking, particularly in Computer Power and Human Reason, his 1976 book,
has a kind of meandering quality, which we could charitably describe as essayistic.
And it is quite brilliant at points. But at others, it feels a bit disjointed,
that he follows a thread, picks it up, drops it, picks up another thread.
The reason I mention this is because there is
a bit of interpretation that is required to make meaning of this very provocative
point of the computer revolution being a counter-revolution. What does he actually mean by that? You have to fill in some of the blanks. The computer revolution as it takes place, let's say if we had to periodize it, really emerges in the 50s and the 60s, the 60s being the turning point, the decade in which computation kind of enters mainstream American life in a profound way, no longer a specialized military technology. I think what he means by that is that
this is a period in which economic, social, and racial hierarchies are being strengthened and
consolidated. This is the period of the early Cold War. This is a period in which that high
watermark of class struggle and struggle for racial justice that occurred during World War II in the United
States has been defeated. That wave has receded. That we are certainly by the late 40s and the
early 50s in a much more conservative period of American life, which we, I think, in pop culture
associate with McCarthyism, but goes much deeper than that, of course. And that the computer is an instrument for
strengthening those conservative, those counter-revolutionary forces, because it makes
it possible to automate decision-making at a certain scale, and thus provide very narrow
criteria for how decisions will be made that reinforce existing logics. I think that idea
is actually quite familiar to us now when we think about how algorithmic policing, to take one
example, reinforces existing analog racist policing practices. But I think at the time,
Weizenbaum was saying something that perhaps felt a bit newer. So that's one dimension of the
counter-revolutionary aspects of computing. And I should say, perhaps this contradicts what I said
just a moment ago about how perhaps he was saying something a bit newer. This notion of computation
as counter-revolutionary is widely shared among members of social movements of the 1960s.
The computer becomes a symbol of not just the war
in Vietnam, because computers are being used to wage war in Vietnam, which is why they're being
attacked by student radicals at computer centers and campuses across the country,
but also a symbol of stifling bureaucracy, of this very regimented institutionalized form of life, which is connected to capitalism as a
system, but also kind of the specific imprisoning cultural codes of 1950s America, which is of
course part of what the student rebellions are about. So in that sense, he shares that intuition that computers are
counter-revolutionary, but tries to develop that idea a bit further. The other piece of
his argument, I believe, is that not only do computers reinforce existing concentrations of power, existing social hierarchies, but that they also constrict
our understanding of what it means to be a human being.
And actually, this latter point I think is more important for him, that computers encourage
us to think of ourselves as computers, that they encourage us to mechanize our rational faculties,
to embrace instrumental reason or instrumental rationality, which is a concept that he borrows
from figures like Max Horkheimer and Theodor Adorno, for whom instrumental reason means
an attention to means rather than ends, an attention to optimizing processes without
reflecting on what those processes are for. For Weizenbaum, the computer is an agent of instrumental
reason. It encourages us to adopt this engineering mindset where we're just trying to make
things more efficient, but we're not really thinking about what is this efficiency for.
He gives an example from the anti-war movement, where during the campus protests at MIT, there was a proposal floated of why don't we create hotlines so that campus protesters can communicate with the administration, and this will ease tensions. He presents this as an example of instrumental reason of the kind that
computers automate and proliferate because he says instrumental reason converts moral, political,
social problems into technical ones. In doing so, it suppresses the possibility of conflict,
that you can't actually have conflict between different sets of interests, between different
sets of values. It's simply a technical problem that can be solved with a technical solution.
So in this setting, the notion that the student protesters and the administration would have
entirely opposed sets of interests, entirely opposed sets of values that actually can't
be reconciled through a telephone wire
is a difficult idea for instrumental reason to accommodate. But then you can see, I think,
through that example, how instrumental reason and by extension, computation as a whole serves the
status quo. Because if you're not allowed to ask questions about ends, if you're just thinking about means, then the established way of doing
things continues. It sets very narrow parameters on what you're allowed to tinker with.
I think you've put that so well. And I think there's so many things I could say in response,
but I think just a few things I'd want to pick up on that. On the one hand, when you talk about
people seeing themselves as computers, I think that this is something that we have experienced for a long time.
But you notice in particular, when it comes to the people in Silicon Valley today,
there's a strong kind of belief in or view that we should be trying to achieve kind of
transhumanism, kind of to merge ourselves with computers.
You know, you see people like Sam Altman kind of comparing us to stochastic parrots, using a term coined by
Timnit Gebru and Emily Bender and those sorts of people to kind of draw comparisons between these
chatbots and large language models and, you know, ourselves as humans to try to make us kind of look
as though we are one and the same. And you talked as well about this view of computers at the time
as these things that are controlled by kind of
large institutions, for they're very bureaucratic. And it's interesting because, I guess, on the one hand, you describe how there can be this view that kind of Weizenbaum has, where he's looking at how these computers are used, kind of what the politics behind them are, and kind of taking a bit more of maybe an oppositional stance to this way that computers are operating.
Whereas then you have kind of the Steve Jobs of the world kind of come in with the personal
computer revolution and say that the problem isn't that kind of fundamental to computers,
but just the fact that large institutions control the computers. And if we put computers in
everyone's hands, then we take away kind of the negative effects of that. I don't know if you
have any further reflections on those points, or we can certainly discuss other aspects of his work.
I think something that makes me think of is this term humanism. I hope that doesn't take us too
far afield. But it's a term that has kind of an interesting history within the history of
computers. Because as computers begin to be capable of performing
certain functions that we might associate with human intelligence, it always poses this question
to us of what a human being is. I think this is really the central preoccupation of Weizenbaum's
work. What is a human being? This is a question that becomes active and urgent and challenging in an era in which computers seem to be able to do more and more of the things that humans do. But humanism signals some investment in an idea of the human,
some attachment to the human as variously defined. That set of ideas has been on the one hand very
useful for developing information technology. We talked previously briefly about Project MAC
and the influence of J.C.R. Licklider. There is a lot of attention in those circles as they are
developing the fundamentals of what we now take for granted as interactive computing.
A lot of attention to this category of the human. You mentioned Steve Jobs. That gets inflected with
the kind of 60s counterculture and an interest in Eastern philosophy that is one of the inputs
into the personal computing revolution. And then,
of course, Jobs is central to the mobile computing revolution with the iPhone.
So, I guess this is a long way of saying that talking about humans as something distinct from
computers is not necessarily oppositional, right? It can actually be a force that
greatly develops the power of these
technologies. It may also develop the usefulness of these technologies. I'm grateful that we have
PCs and that I don't have to run punch cards through a mainframe. But nonetheless, it is
something that the tech industry has made use of, that the humanization, if we would use that term, of information technology has made the industry
much more powerful, much more profitable. So on the one hand, I want to be wary of humanism
full stop, but I also want to be attentive to the different ways that humanism is defined and
deployed. And what I find interesting about how Weizenbaum uses the term is that for him,
he has a very historical understanding of what a human being is. That a human being is a person
who has a human history, who was born to a human being, who was raised by a human being,
who inhabits a human body, who has a human psyche, who goes about the world as a human.
And that to me resists some of the kind of mysticism that I dislike in some humanist
discourse and also refocuses us on the real distinctions that he's interested in between
the human and the computer. It's not that there's an essential goodness or an essential
spiritual quality to humans that computers don't have. It's really just that it's quite simple in
a way, that they simply don't have a human history. That I think opens the door to a point
that I make at the very end of the piece, and this is a point that Weizenbaum himself made, but I wanted to draw attention to it because I thought it was a nice place to end. That opens the door to the possibility of a computer system developing its own history,
developing its own embodiment perhaps, developing its own set of relationships.
And that through that process could acquire something like intelligence, but an intelligence
that is very alien to ours, a very different kind of intelligence. And I think that's an important
point to make is that Weizenbaum, unlike some other figures, never thought that intelligence
could not develop in a machine. He did not want to make that type of claim. He just thought that
if it did, it would
look completely different from human intelligence, that it would be as different to us
as a dolphin's intelligence is to us. I think that there's a whole conversation and a whole
rabbit hole that we could go down in discussing that further. And I think I'm just going to allow
that thought to exist as it is, and people can reflect on it, because there are a couple other things that I want us to discuss before we end off this conversation. And I think that when you talk about his idea of, you know, the importance of kind of the human history, it's a narrative that I feel like we are returning to in this moment where, you know, we have the threat of large language models and image generators and things like that. And one of the arguments that we hear by, say, writers in Hollywood is that ChatGPT doesn't have this kind of human experience
that could go into writing these stories that people enjoy. Or artists, for example, saying
that image generators, again, don't have these human experiences, so they can't make kind of
unique art in the way that we would expect and that we want kind of humans to do.
And I think that bridges us into the discussion of what Weizenbaum writes about in his book,
Computer Power and Human Reason, where he really draws a distinction between
judgment and calculation and what a computer should do or be able to do and what should be
left to humans and should not be given over to computers. So can you explain kind of the distinctions that
he draws there and why he believes in the way that Silicon Valley presents today that we should be
trying to have computers kind of do as much as possible and virtually everything, how he really
believed that that should not be the case and how there should be very clear things that computers
should be designed to do and other things that should be left to humans because computers will
never be able to effectively do those things.
I'm glad you asked that, Paris, because that's really the center of his book.
And as he explains in the preface, the book has two major arguments.
The first, which we've been discussing, is that there is a difference between man and machine.
That is very important to Weizenbaum.
The second is that there are certain tasks that a computer should not do.
And this gets to the distinction between
what he calls judgment versus calculation. Now, calculation for Weizenbaum is a quantitative
process. It uses a technical calculus to arrive at a particular decision. We could think of it
as algorithmic, right? And there are many occasions in which we need to use calculation to arrive at a decision. This're using judgment. Because judgment is rooted in
values. And these values arise through the course of human experience. They're related to this
question of human history that we spoke about just a moment ago, that we acquire values by being human
beings in the world. We acquire them from our parents, from our surroundings,
from our socialization. We define our own values as we grow older. Values are something that one
can't acquire without having the experience of being human. That's a very important point for
him because if you're going to make judgments, you need to rest on that foundation of values. This is why,
for instance, he considers it obscene to imagine that a computer could perform the functions of a
psychotherapist. Because for him, the functions of a psychotherapist require access to a set of
human values, which are in turn predicated on a set of human experiences without which
you can't actually provide a therapeutic encounter to someone. Similarly, he would consider it
obscene for a judge to be automated, right? To pass judgment on someone. A computer can never
do that because a computer doesn't have access to those human values and human experiences. So that distinction between judgment and calculation is really
essential for his thinking. Yeah, it definitely, again, brings to mind conversations that we're
having today, right? It seems incredibly relevant in this moment where there is a renewed push
for AI to be integrated in many different ways. And it brings to mind the work of
someone like Dan McQuillan, who wrote the book Resisting AI, about, you know, where should we
be okay with AI being implemented? And should we be okay with it being used to kind of shape
our lives and kind of ensure that we have less power over how we live, essentially,
because we are handing that over to AI and to computers and
to machines. Should welfare systems that are offered by governments be determined by artificial
intelligence systems that could end up getting something wrong or wouldn't be able to listen
to a specific situation that you're in where there might need to be a compromise made or
something like that, or to do visa systems with AI tools, or to turn
policing into something that is done by AI that can have a lot of negative consequences because
the AI does not have human values, but also allows humans who do have values to say, oh, the AI
said that this is okay, so that's fine now. So I think that you can see a lot of different ways
that this is still
incredibly relevant today and is still something that needs to be present as we consider what
computers should actually be used for. Exactly. And importantly, those are not
complexity arguments because today it's very difficult to make the argument, well,
those systems simply aren't complex enough to perform these functions. They're quite complex. These are not just rule-following
programs. They're relying on these massive neural networks. And the reality is we
know how they're trained, but we don't actually really know why a large language model does what
it does. It's a complexity that in many cases eludes human understanding, which is probably
a problem of its own. But we're not saying that if only these systems
were more complex, they would be able to perform the function of a psychotherapist. We're saying
that they never could because they can't access human values, because they can't have human
experiences. And it's not that they might not develop their own weird AI civilization,
you know, God bless them, but that they should not be permitted to do things that
only humans can do. And that when you allow them to do so, not only does it degrade the quality of
that experience, but it shrinks the scope of decision-making. This is an important point,
I think, about instrumental reason, which is that by reducing the richness
of human reason into this algorithmic process, you are actually also constraining the decision
space quite significantly. We can't actually make certain choices because we're enclosed in this
much narrower form of decision-making. And this has tremendous political consequences.
Again, this is, I think, part of Weizenbaum's point about the conservative impulse at the
heart of computation. Yeah. And I think that brings us back to something you were talking
about earlier and that you wrote about in the piece where you describe how Weizenbaum was
less concerned by AI as a technology and more by AI as an ideology, right? And I think that that
really, you know, kind of
links up to what you were just saying. And so I just wanted to close with more of a general
question. You know, we've been talking about the history of Weizenbaum's work and of its
relevance to this moment. But we are in this period where there is a lot of AI hype because
of large language models and ChatGPT and all this kind of stuff. But there's also this growing
skepticism of Silicon Valley and kind of the worldview that it holds that we've been seeing
growing over the past number of years. What do you think is kind of the takeaway that we should have
from Weizenbaum's work in the present? Wow, that's a big question. Well, I would
encourage folks to read Weizenbaum. I mean, his great book is out of print, but perhaps it's on LibGen or something.
I think returning to his work is quite useful. Again, not as a prophet, as someone who got it
all right, but as someone to struggle with, as someone who challenges us and who is not
always right. There are times when he's not right; he doesn't get everything right.
But this is someone who,
to a large extent, was present at the creation, who centrally participated in the computer revolution, and who saw something inside of it that I think we are still working through.
There's so much one could draw from his work. And as I mentioned before, it's not always very
coherent. I'm not sure it adds up to a very clean, integrated
picture. But if I had to give people a takeaway that I thought was most valuable, I would say
it really resides in this very simple sentence that there is a difference between a human being
and a computer. It's a very obvious point, but it's a point that AI as an ideology is
constantly trying to deny. The ideology of AI is that everything that humans do,
computers can and should do and will eventually do better. And if you think of humans and computers as entirely distinct, entirely alien entities,
that concept that is proposed by the ideology of AI becomes nonsensical, right? And you could even
do a thought experiment of if a bunch of aliens landed from Mars, that would be very cool.
It would be very interesting to have a conversation with them and see where they're coming from and figure out how do they think about language and culture and art and all these things.
That's awesome.
I've been wanting that to happen since I was like a little kid, right?
I'm obsessed, frankly.
And I believe that they're out there and that they will come at some point and that they'll
be friendly.
But we wouldn't say to the little
green man or whatever they look like, hey, would you like to be my shrink? Would you like to be a
judge? Would you like to be the president of the United States? Would you like to have your finger
on the nuclear codes? Whatever it is, that would be insane. I don't think anyone would ever – of
course, one can never be too careful. I'm sure there's some strange internet subculture that
celebrates that possibility. But I don't think that most people would find that very reasonable.
But Weizenbaum's point is that that's kind of how we are approaching computers, right? That we are
empowering them to an extraordinary degree to make decisions about people's lives. When in fact,
they're aliens. They don't have access to human experience, so they shouldn't be given
such extraordinary power. But perhaps if we understand that really profound difference
between human beings and computers, we can find a form of coexistence that can be quite useful,
constructive, satisfying, even fascinating as they develop in their
capabilities. So I think that's a note of cautious optimism that we can end on.
Yeah. And I think that's a fantastic point to lead the listeners with and to leave them thinking
about, especially in this moment of AI hype and with so many kind of CEOs expecting us to believe
that the AIs will and should do so many different
things. Ben, it's always fantastic to be able to pick your mind and to talk about these tech
topics because I think that, you know, you're an essential voice on the kind of key things that
we're grappling with today when it comes to technology. So thank you so much for taking
the time again to chat. Thanks so much for having me, Paris. This was great.
Ben Tarnoff is a founding editor of Logic Magazine and the author of Internet for the People,
The Fight for Our Digital Future. You can get his book from Verso Books. You can follow me or the
show by searching Paris Marks or Tech Won't Save Us on a whole range of social media platforms.
And if you want to support the work that goes into making the show every week,
you can go to patreon.com slash tech won't save us and become a supporter.
Thanks for listening.