Future of Coding - Computing Machinery and Intelligence by Alan Turing (feat. Felienne Hermans)
Episode Date: April 27, 2025

You know Alan Turing, right? And the Turing test? Have you actually read the paper that introduced it, Computing Machinery and Intelligence? No?! You… you are not prepared. With very special guest: Felienne Hermans

Notes

- $ Patreon
- Mystery AI Hype Theatre 3000 podcast, from Emily M. Bender and Alex Hanna. "Always read the footnotes"
- [The Language Game](https://en.wikipedia.org/wiki/Language_game_(philosophy)) by Ludwig Wittgenstein
- Can Machines Think? by W. "Billy" Mays
- Lu's paper with Dave Ackley, Dialogues on Natural Code, describes how the symbiote will spread to consume all of humanity.
- Reclaiming AI as a Theoretical Tool for Cognitive Science by Iris van Rooij et al.
- Ned Block's Blockhead
- Nick Cave's thoughts on AI song lyrics. For instance: "Writing a good song is not mimicry, or replication, or pastiche, it is the opposite. […] It is the breathless confrontation with one's vulnerability, one's perilousness, one's smallness, pitted against a sense of sudden shocking discovery; it is the redemptive artistic act that stirs the heart of the listener, where the listener recognizes in the inner workings of the song their own blood, their own struggle, their own suffering."
- What Computers Can't Do by Hubert Dreyfus
- Wittgenstein on Rules by Saul Kripke
- Is chess the drosophila of artificial intelligence? by Nathan Ensmenger
- Computers as Theatre by Brenda Laurel

Send us email, especially questions or topics you'd like us to discuss on future episodes, share your wildest ideas in the Slack, and:

IVAN: 🐘 🦋 🌐
JIMM: 🐘 🦋 🌐
TODE: 🐘 🦋 🌐
FELI: 🐘 🦋 🌐

See you in the future!

https://futureofcoding.org/episodes/076
Support us on Patreon: https://www.patreon.com/futureofcoding
See omnystudio.com/listener for privacy information.
Transcript
There's like a little blue plaque at a house near me that says Alan Turing, you know, and he's like remembered and treasured as this hero.
You know, someone who saved many deaths from happening, prevented many deaths in World War II. And also he's kind of seen, I think by a lot of us in the queer community as this kind of like this, this gay icon, right?
And, you know, after the war happened, went through lots of tragic things of being forced through chemical castration,
ironically was forced to take some of the medication that now the government does not allow me to take, right?
Was only like pardoned extremely late,
many, many years later.
And so I'm curious, I'm conscious of the reaction
that this episode will get from many people like me,
you know, like this was shocking for me,
you know, to go back and read and actually think,
wait a second, is this, is this like the genius hero,
you know, like innovator that many of us saw him as?
And you know, there was some things which, you know,
were very prescient, prescient, prescient,
I don't know how to say it,
but there's also some very clumsy, bad,
unclear, confused work in here
and some deeply problematic things too.
And that's been like a journey for me to go through that and look back. And sometimes
we're so starved for these role models that we want to exist. We want to have this like a hero,
like a queer hero of computing or history. So I think sometimes I've only heard of
this from other people, who have only heard of it from other
people, who've only heard of it from someone
who read the paper ages ago, you know, and I think each time it
gets passed on, you know, we add a little bit of what we want to
be real. And it makes me think, you know, we have so few choices
for people to look up to in the field, that we sometimes like turn a blind
eye to some of these wishy washy things, problematic things. And I'm really pleased to
have gone straight to the source and discover that. And that was that was something I essentially
wanted to communicate on this episode.
But I did also want to hear about...
Okay, cool. So then the next thing we need to check is, is everybody happy with the levels?
The levels of what? CO2 in the atmosphere?
Because in that case, no.
Well, with that, I think we're ready to go.
Did anybody come with an opening bit?
I'm shocked that Ivan doesn't have... I have one, but...
This, okay, I would say an opening bit of how we don't introduce a guest or we pretend they
don't exist or they just magically appear or we can't mention the thing because if we do the thing,
we mention the thing, then it's too meta.
Yeah, because normally you don't have guests, right?
It is not normal that you have guests on the show.
It's like, only weird people are like,
hey, can I be on your show because you're so cool and I want to have like,
a little bit of your coolness, like, be adjacent to you virtually.
That's very kind. That's very kind.
However, I'm just, I'm still in a feeling of relief that you listened to us talk about your paper
on the previous episode and you're still here.
No, it was super nice, right?
I think you said something like this is one of the best papers we ever discussed.
Yeah.
So that was cool.
I was texting my co-author like, listen to this, they're super nice.
Oh, okay. As you all know very well, not everyone has been like super excited about this work. So
Yeah. Oh my God.
It helped that it was super good. Now that you're here in person, we can break kayfabe and say,
yeah, it's actually, it's good that the paper was good, but we're not here
to talk about your paper today.
A good paper.
Yes. No, we're not here to talk about a good paper.
Yes. I will go ahead and say, I think this is probably the worst paper we've ever done.
At least in my personal opinion here. I mean,
probably one of the most influential papers we've covered, right?
Yeah, that is the thing, right?
I teach a course on AI and I was just asking my students the other day, like in the first lecture,
what do you already know about AI? And these aren't computer science students.
These are people in the teacher academy. So some of them become computer science teachers,
but most of them become other teachers. So I'm like, what do you know about AI?
And I have them write it down on post-its and, and two of them wrote down, what
do I know about AI, the Turing test?
So it's like, this, this is very much influential and it matters so much now.
Right.
Five years ago, we might've all been like, who cares about this shitty paper?
But now it is so in the public eye what it means, what AI means.
And then people go back to this and I don't think many people have read it.
It must not be the case that people know what they're talking about because if they would,
they would maybe see it in a different light.
Yeah.
I have never read this until, you know, the reading for this podcast episode.
I really wanted you to be like, I have never read this, even now.
I was like two paragraphs in and it was like, nope, no.
I mean, it was pretty wild, like reading this, because it's part of pop culture, even, you know,
like, I watched the film, I watched the Imitation Game film, you know
I, I feel like I've heard about this paper
through so many, like, secondhand, thirdhand sources, you know. Like, I thought I knew what this paper was.
And I think it also says something
It says something about our field that so few people
actually read this, right? Like mathematics or even physics, if you do an undergrad in those courses,
then there will be a history of the field course that actually has some substance, because there's
a lot of physics history that matters, whereas many programs don't have a history of computer
science course. And if they do, then they will not engage with such sources. And they
will also not engage with it in a way of, here's a paper, read the paper. Then it's more like
the history of computing will be like, well, first there was like C and then there was
C++, which is a different thing, which is also the history of our field, but it's not this deep rich history of let's critically engage and what can we take away and what can we
maybe leave in the 50s.
So in case any listeners did not read the title of the episode that you're currently
listening to, I like to say, you know, what we're actually reading here is Computing Machinery
and Intelligence by A.M. Turing.
So this is the paper by Turing, by Alan Turing,
where the Turing test was invented.
Now, I think it'll be interesting to talk about
what the Turing test actually is once we read the paper
because there's some ambiguity here,
but popularly the idea is supposed to be like, well, if you can't tell the difference
between a machine and a human, then the machine must be intelligent.
That's like the popular understanding.
But yeah, this paper's, I think there's quite a few ambiguities on what really is the Turing test and what would
it mean for things to be intelligent, etc.
Yeah, and if only the paper stopped at that.
If only it was, hey, what's this test that you can use to tell the difference between
a machine and a person?
But unfortunately, that's like what, like the first page of a, you know, relatively
short but still quite longer than one page paper. And it, wow, does this go places I
was not expecting. I'm amazed that this paper is not on lists of like, hey, everybody on
Halloween, read this paper for a spooky tour.
It's wicked. Like I was just stacking my microphone
with Donald Duck comics because it wasn't high enough.
And you know what's funny?
If you buy Donald Duck magazines now here in the Netherlands,
if you get old stories,
or there's this old Donald Duck comics
that depicts Native Americans in a way
that we would no longer do.
And Donald Duck comics here, they have a little marking, which I actually think is nice.
They say, this is an old comic and it represents things and people in a way that we would no
longer do today.
And I was like, I want to cut this out of a Donald Duck magazine and I want to literally
Photoshop this on this PDF.
There is stuff in there.
Like, if I may give one example, um, and I would like, in, in the
episode, to be very clear that this is not me saying this, but in the paper,
there is actually the sentence that Muslims believe that women have no soul.
You know, it's funny.
I, when reading this, I was pretty sure that was not what Muslims think.
I know many of them, they don't think this.
But just to be sure, I actually put this in Google.
And if you Google the question, do Muslims think that women have no souls?
You get a Reddit thread on this paper.
Oh my God.
It is such a wicked thing that
the only place that you can find this. And then this guy, right, he's in Cambridge. It's
like the center of knowledge, right? Did he know no one that he could ask whether this
was weird, whether this was a lie, whether this was like a slur or an insult or a gross
misrepresentation of millions of people. No, no, you can apparently just write this down
and people are like, yeah, it looks good to me. Maybe that's not even the weirdest thing
that's in this paper. Like this is like the whole reason this episode came to be is like
I was, I was getting very angry and my blood pressure was at, like, unhealthy levels, and I thought, who can I commiserate with? Like, oh, my friends! I am so pleased you're here,
Felienne. This is, you know, sometimes I feel like a broken record when we're looking back at these old
papers, you know, some are wilder than others. You know, this is, like, very, very, like, triple wild, you know, but, but
always there's, there's things that annoy me. I, you know, I get tired of going
through and highlighting the things which like assume someone's gonna be a
man and stuff like that. And honestly, yeah, I need, I need-
Oh yeah, there's the whole gender thing.
Oh yeah, that whole gender thing. Yeah.
Yeah, we'll get there. Yeah.
Oh, God. I mean, I need that Donald Duck disclaimer, basically.
Oh, yeah, it's in Dutch,
but I can definitely send it to you.
Please.
Yes, please.
I wanna be clear, like, also,
I'm not saying we should ignore this part, of course.
We should talk about all of the bad social things.
The argument for the topic itself is also awful, right?
Like, even if you're somebody who's like, the Turing test is right.
Like, let's just say that that's you, you think, you know, the Turing test as we popularly understand
it is a good test. If you go read this paper, Turing definitely does not do a great job defending that claim.
No, he doesn't do a good job explaining what the claim is
There's a lot like I had read this paper
Before the podcast this was a this was a paper I had on the list and like part of the reason I didn't want to
Do it was because like it's it's not a good. It's not a good paper on AI
Just to be frank, there's lots of really interesting work on AI consciousness, all of these kinds of things.
And while this one's been very influential, it is not good.
So I think we do need to, like, get those disclaimers out of the way. Maybe somebody's skeptical,
like, oh, you just don't like it because it says some socially backward things or something.
Now, let's look.
I think we should dive in to the paper.
It's my natural thing, it's not my fault you turned it into a jingle.
It's what I would say normally, and had it not been turned into a jingle, it wouldn't sound weird. So
We should we should like you know read some of the, the, the start of this paper.
I have more preamble.
No, no, no, no, no, no, no.
Oh no.
Okay.
Okay.
Go ahead.
That's not the segue, Jimmy.
No, no.
Okay.
So two additional points of preamble.
Two preambles.
Yes.
It's important to say this.
We are all going to be laughing a lot as we read through this.
We are laughing at how bad this paper is and how wrongheaded it is, just in case there's
a lick of confusion anywhere.
We are not laughing because we find, you know, any of the dynamics that are at play in this
paper funny on their own.
We are laughing at how painful it is that this is, like, foundational in our field.
I just want to make that like super clear. We're all here having a fun time, but it kind of sucks
that this is what we have as like a canonical religious text of our field or whatever you want
to call it. The other pre-ambly bit that I wanted to check in with everybody before we start the usual,
this is the usual check-in, what did you all use for highlighting schemes this time?
Was there a homework assignment?
I just like marked the stuff.
Yeah, no, that looks good.
All right, so just single color highlight.
Green and some notes.
And then sometimes I scribble something on it.
Nice.
Yeah, I heard you said you had some angry scribbles in there too, is that right?
Yes.
Very good.
Yes.
Very good.
For the audio listeners: we are being shown an iPad of scribbles.
Full of a lot of green actually, if the green means anything.
And I try to make a diagram, right?
Yep. Diagrams.
Nice. Jimmy, yours is your usual.
Mine's my usual, but with the occasional question marks and exclamation marks off
to the side. Also a double question mark for some things, where the one
already quoted got a double question mark of, like, wait, what? Yeah. I almost went through and, like, tried to be extra and go
through each of these, there's this whole section of like possible objections to what
he says.
And I tried, I almost went through and like diagrammed his bad logic of like answering
the objections, but I was like, nah, this is not worth my time.
So yeah, just lots of question marks.
Yeah, because the structure is interesting.
That sort of the last 75% of the paper is, well, maybe you have this objection.
No, this objection.
So the structure also is interesting.
Yeah. I love that even, where is it?
Like towards, at the beginning of the final section here, he says, and I directly quote,
the reader will have anticipated that I have no very convincing arguments of a positive
nature to support my views.
Yeah, that's great.
That's the beginning of the final section.
Like the confidence of a dude, right?
Of course he could not have predicted the cultural impact of this.
Yes.
But still, right?
At one point I started to look up like, where is this in his life?
Is he maybe still a child, right?
I don't exactly know the history of this.
Okay, like it's 1950, so he was an adult in the war.
Like what has already happened?
But he already has an undergrad
from Cambridge and a PhD from Princeton, right? So, ah!
One paper that was written in direct response to this, like a year or two later, you know, it
says that, in the true Cartesian manner, nearly half of Turing's paper, 12 of the 27 pages, consists in
answering objections.
This was not, like, unnoticed at the time. This was, like, I wanted to see, like, okay,
was his thinking just this bad? And I'm like, maybe, you know, only in hindsight is it this bad,
right? Like, maybe in 1950 this felt good. So I went and looked at a bunch of contemporary
papers, like, answering objections or written right before or after or whatever. And no,
it's just a bad paper.
Yeah. And you know what's also really interesting? Do you know this podcast, Mystery AI Hype
Theatre 3000? I just want to do a shout out. It's Emily M. Bender's. It's really, really
good. They're always picking apart, like, AI hypes. And their slogan is, always read the footnotes.
So what I also find so interesting about this paper is...
who he cites and who he doesn't cite.
So all the work that he cites is computer sciencey people like Church...
and people that he worked with, but no philosophers.
And then I started to really go on this rabbit hole of the history of Turing.
Like when he was in Cambridge, you know who also was in Cambridge that he actually hung
out with?
Wittgenstein, right?
So he must have had, like, there's documented history of Turing attending Wittgenstein's
lectures on the language game. Language game, imitation game,
but he doesn't talk about this, and he must have known this, right?
So he's known to have regularly hung out with people in philosophy,
so he could have cited philosophical work, but he chooses not to do that.
He chooses to only engage with the computer science side of things, where he must have had
direct access to philosophy knowledge. Well, he was in Cambridge, so there must have been philosophy
around, but also direct access to people that we know he knew then. And then there's this philosopher he
hangs out with who has this thing, the language game, that actually is quite related.
And he doesn't talk about that.
Was there some drama going on, you think? You know, did they have a falling out? Was
there a rivalry?
So my best guess, but maybe this is colored by, you know, my work and also colored by
present day, but I think it's more related to, like, the overvaluation of math and computery work and the undervaluation of the humanities and social sciences. That maybe even in those days, like, a computer
scientist could not be seen as too
soft. I don't know, right? If I had to do one guess, that would be my guess.
But, but I don't know. But it, it's still weird, right? That as a computer scientist, you sort of have the balls to go into clearly an area
of someone else, right?
And then he even says something in the beginning of the paper.
He's saying, well, I can also try to read in my native language, which is hard.
But I propose to consider the question, can machines think?
That's the first line of the paper. The next line is, this should begin with the definitions of the
meaning of the terms machine and think. Like, hey, friend, okay, so you're trying to define thinking.
Maybe there's like 2000 years of research where people have really, really done their best. Like we can go back
to Plato, Socrates, maybe even further back, and in different cultures also, to ask the
question: what does thinking mean? But no, no, we don't have to engage with this. Let's just
not do that and do something entirely different.
Yeah. Well, and I love how he follows that up by basically sort of hand wringing and
saying, well, actually, we can't, we're not going to really define those terms. So instead
No, that would be hard.
And I'll quote here, instead of attempting such a definition, I shall replace the question
by another, which is closely related to it and is expressed in relatively unambiguous
words. So he opens with this question, can machines think? And is immediately like, well, we're not going to answer that question, but we're going to answer a different question
and let readers fill in the blanks in their mind, connecting this different questions,
answer with the thing that he's teasing us with, which is machine thought. And he does that over
and over again. And then he promises unambiguous words, but those are nowhere to be found.
Yeah, he does this over and over again. I have so many, as we get into this, I have
so many sections where he introduces a question or says, here's an answer to some earlier
thing, and there's no relation. It's so tenuous. It's so weakly supported. It's just bad, bad work.
So should we tackle this second paragraph?
Well, okay, this is perfect.
This is perfect because I need to explain
what my highlighting scheme was this time.
Ah, good, yeah, we didn't finish that.
No, no, no, this is coming on naturally.
So normally, I think normally when we do papers,
I go through highlighting things that like I like, points I like, points I disagree with, that sort of thing.
Here, I don't know, like I just kind of went with highlighting stuff that seems interesting, but not like good interesting, like interesting, you know what I mean?
Interesting. Yeah, scare quotes, yes.
Interesting. I mean, so, but you know, okay, so some of it, which is probably in this
this paragraph coming up, is like, you know, yeah, real scare quotes, interesting.
Some of it, some of the stuff I highlighted was when I got a bit confused, right? Because when he says computer, I think it's just because, you know, I'm a millennial and
to me a computer means one thing. I actually found it really hard to keep up
with, like, I don't know, taking my mind back to the 50s when computers, as we now
know them, did not exist and things like that. So some of it is me being confused
just because I'm a long way away in time.
And some of it is me being confused because,
okay, this second paragraph, right.
So I had no idea this kind of stuff was here.
I had no idea this was in the paper, right?
And I directly quote,
"The new form of the problem can be described
in terms of a game which we call the imitation game.
It is played with three people, a man, A, a woman, B,
and an interrogator, C, who may be of either sex."
Okay, so I read that sentence and I'm like,
wait, wait, wait, wait, where is this going?
Wait, what's going on here?
I'm like, you know, like, you know, just stopping there.
I'm like, wait, does it have to be a man or a woman?
You know, can it be like any, I don't know,
like descriptive quality?
Could it be like brown hair, blonde hair, black hair?
Does it have to be, and wait a second,
like the interrogator I would have assumed
could be either sex, but why was it so important for him
to like explicitly say they can be of either sex?
Like what do you think I'd
assume if you say interrogator? Oh, that's got to be a woman then. Right. Or that's got
to be a man, you know, or that's got to be, you know, like, you know, non-binary person.
It was like, you know, he was walking right into it, a man or woman or...
And remember, very importantly, we're here to figure out if machines can think.
Right, right, right.
That's what we're doing today. It's like, hang on, he's saying like, wait, wait, wait, we're trying to define a machine
and think?
Okay, so three people walk into a bar, there's a man, a woman, and an interrogator, you know,
right, like.
Of either sex.
Oh, yes, and the interrogator, who may be of either sex.
I'm like, wait, why is this important?
And honestly, like having read through the rest of it,
I still don't like really get it if I'm completely honest.
You know, like the interrogator who may be of either sex
stays in a room apart from the other two.
The object of the game for the interrogator is to determine
which of the other two is the man and which is the woman. He knows them by labels X and Y,
and at the end of the game he says either X is A, the man, and Y is B, the woman, or X is B, the woman, and Y is A, the man. The interrogator, who may be of either sex,
is allowed to put questions to A and B thus. You know, for example, they can, they could say,
will X please tell me the length of his or her hair? And Turing is one of these people
that has not discovered the word "they," right? They refuse to say that, right? You know, anyway, anyway.
There's no there there.
There's no there there.
Also, they don't know, they don't know me and they don't know Lu, who have long hair, and are, between the two of us, you know,
not going to be able to use that to differentiate anything. It's quite rooted in norms of the
time.
Women with short hair have always existed, right? Just for practical reasons, or maybe
they have a sickness.
Yeah, or fashion, which changes constantly.
Or fashion. Yeah. Like, like Twiggy, this model from the 60s.
She had really short hair.
I mean, go look at, I was, I saw some vintage film from 1920s Paris.
And like every woman had short hair in this.
Like it was yeah, like it was just film out on the street.
Like this is not a new thing. But like, I will just say like a quick, with these questions,
it was not clear, like, does the interrogator know these people and he's just asking to
differentiate between the two of them or like, nah, it could be any random woman or man.
You might not know them. You're just trying to figure out. Like you don't meet them beforehand, right?
Like it's just that anyways. So for me, I think this is so interesting because being female, I think,
well, there would be some questions that I could ask that I think would make it quite easy.
This doesn't hold for all women ever. I do understand this, but a question you could
ask is, can you tell me how you put in a tampon?
Yeah.
I think that is actually quite a good question that at least gives you some confidence.
Yeah.
And that most men, well, firstly, probably they would like be like, and also they would
have a hard time just describing the steps that
you have to take. And again, I know this is not all people, like there are men that also
menstruate and women that don't, but this would be like a relatively easy thing. And
I'm sure that men could also come up with questions that are uniquely male experience
that would at least give you a bigger chance than random to actually decide
who is male or female.
Yeah.
Whereas Turing puts forward this like, gotcha, how long is your hair?
Yes.
Well, that will do it.
But I think the next part, I claim in this gender business is super funny, but I think
the next part is much more interesting for our current world,
because he says it is A's, the man's, object in the game to try and cause C, the interrogator,
to make the wrong identification. And I think this is actually key for the current implementation
or understanding of the Turing test as well, that a ChatGPT or whatever algorithm we have that implements this
can also be programmed to intentionally mislead and to a certain extent does this, right?
ChatGPT is programmed to say I, right?
I am sorry, I made a mistake.
Whereas you could also say, well, it just says,
the most likely answer on the internet is colon and then the answer.
So it presents itself as an I. And I think that is actually in the vein of this deception.
The whole goal, I scribbled in the margin, this is lying, right? So the goal of the imitation
game is to lie. That is in the basis of it. And I mean, we can go on the gender stuff, which
is super funny and fucking weird. But that's not so important. But the deception is actually
baked into the test. And that is also baked into our current understanding and the way
people make these things, that it is good or valuable for a machine to deceive you and
to say, I am not a machine. The machine is in the other room. I'm a human.
That's in there, right? Right in the first page.
Will X please tell me the length of his or her hair? Now suppose X is actually A. Then
A must answer. It is A's object in the game to try and cause C to make the wrong identification.
Why we needed so many letters, X, Y, A, B, and C, I don't get it. But A, the guy, is trying
to say, hey interrogator, I'm going to trick you, right?
His answer might therefore be, my hair is shingled and the longest strands are about
nine inches long.
In order that tones of voice may not help the interrogator, the answer should be written,
or better still, typewritten.
The ideal arrangement is to have a teleprinter
communicating between the two rooms.
Alternatively, the question and answers
can be repeated by an intermediary.
The object of the game for the third player, B, the woman,
is to help the interrogator.
So the man's trying to fool the interrogator,
the woman's trying to help, and the best strategy for her
is probably to give truthful answers.
She can add such things as, I am the woman, don't listen to him, to
her answers, but it will avail nothing as the man can make similar remarks. Okay.
We now ask the question, what will happen when a machine takes the part of A in
this game? Will the interrogator decide wrongly as often when the game is played like
this, as he does when the game is played between a man and a woman? These questions replace
our original can machines think?
What the hell. What the hell. Yeah, yeah. Okay, I have to point out the ambiguity, which, from every further remark,
I don't think he means, but it is ambiguous whether the machine is trying to pretend to be a
human being or is trying to pretend to be a woman.
Maybe he got caught up in his own letter soup
But if you replace a which is the man by a computer
and you do not change B, the woman, then indeed you get a woman versus a computer.
I don't think this is the intention.
I am stupid like that.
And then I read it and then somewhere in page seven, he offhandedly says, yeah, yeah, this
is a game between a man and a computer.
I was like, ho, ho, no, the man is letter A and you have replaced this letter A with a computer.
But I do think, but, but then this is problematic, right? Then, okay, but what are
you even doing, and why are these letters there? My best guess is these
letters are there to make it seem math-y, yeah, because of course a math paper needs an X and a Y, and where else
are we going to use them? But it is very, very underspecified.
It keeps going and it keeps, it doesn't get any better. The immediate next section.
Now we're only at page one, right? This is 25 pages. We're at page one with the wickedness.
So the beginning of page two, this new section,
this is called critique of the new problem.
So it starts as well as asking,
what is the answer to this new form of the question?
One may ask, is this new question
a worthy one to investigate?
Remember, this is the new question about,
can a machine take the part of A in this game,
which is replacing the original question.
Can machine think?
So he's doing this rhetorical trick to try and get us to think that, oh, if we can answer
this chain of incrementally easier and easier questions, then we can come back and say,
yes, machines can think.
It's funny.
He goes into this, he does it again and again where he's like, wow, but I know what you're
thinking.
You're going to say, is this worthy of investigating? Right? But that's not what I'm thinking at all, really. Like, you know, whether a completely different question is worthy of investigating or not, that's, for me, something else. The thing I'm thinking is, like, are these things at all related? Sure, you could say, okay, well, our question is, can machines think, but how about we ask a different question: how do we reduce the CO2 in the atmosphere?
Right?
Sure.
That might be a worthy question to answer, but I don't really see the link
between, you know, the imitation game and that question, can machines think, at all. And he doesn't explain this. Yeah, he doesn't. Can I make this... it's not in the text, I will admit, it is not there, so we're having to read into the text. But given the things I know about the time period, and also some of the other writings that I read. So one of them is by W. Mays, who was a colleague of his, who was a philosophy professor at—
Does the W stand for William?
Uh, no, I can't remember what it stands for.
Like Bill? Like Billy?
It is. No, it's not Billy Mays. Billy Mays here. Yeah. So they're mentioned, you know, they were colleagues at Cambridge. So this is someone he could have talked to. This paper I read, it's called Can Machines Think?, and it's a direct response to this. So what it talks about there, and what the connection is: at the time, in the 1950s, behaviorism was all the rage, right? Behaviorism said, ah, all of this talk of
mental anything, thinking, feeling, desires, beliefs, we can just talk about behavior instead.
All of these things are pseudo-scientific words for behavior. And so what Turing, how
Turing thinks these are connected, at least
implicitly here, is by this idea that like what does it mean to think? Well, it
just means to be able to respond in certain ways that people would expect
you to respond in, right? There's no internal dialogue in your head, there's
no internal anything, there's no private language going on, right? There's nothing there. It's just
behavior. And so if a machine can replicate the behavior of a human being, therefore by definition it can think. But even given that, right, there are other descriptions of behavior that you could have picked. You could say, here's ten tasks that I describe in a lot of detail. Making a coffee, I don't know, right? A number of behaviors. And say, if a computer or a machine can do these ten behaviors, then it is a human, or then we say it is intelligent. Already the reduction from behavior to text, right? This I find so interesting, because there are so many things that are behavior. Like, Pavlov and behaviorism is about drooling and eating. There are so many behaviors that aren't linguistic. Intelligence is maybe also riding a bike. We do not see
other animals ride bikes, people ride bikes, maybe some apes, I don't know.
But this can also be intelligence, right?
Or knitting or sewing or drawing.
There are so many behaviors.
So already the reduction of all behaviors to only what we can capture linguistically,
or not even linguistically, because linguistically is also tone of voice
and expression which he specifically rules out. So expression in written text, that already
is such a leap. So I was going to grant him that we cannot say, oh, it is intelligent if a human feels that it's intelligent, because that would not be in scope. But definitely there would be so many other behaviors that you could talk of.
Or you could have a list, right?
This putting it into the hands of a decision maker that doesn't have all the facts, that is deceived by definition. I find that to be a leap, even from "behaviorism was in fashion."
Just to be clear, I completely agree with you.
Right.
It's hard to defend.
Like there's some sophisticated version of a Turing test
that might be defensible, but it's not found here.
Yeah, it's not this one.
It's not found here.
There's two quotes from this page that I want to touch on just because they build on what
Felienne was just saying, which is, and I just, I like these quotes.
These are, to me, good bits of the paper.
Wait, you like them?
Yes, I like these bits.
Are you ready?
Okay.
No engineer or chemist claims to be able to produce a material which is indistinguishable from the human skin.
It is possible that at some time this might be done, but even supposing this invention available,
we should feel there was little point in trying to make a thinking machine more human by dressing it up in such artificial flesh.
You just like the texture behind that, don't you?
Yes, I do. And then the
other one here. We do not wish to penalize the machine for its inability to shine in beauty
competitions, nor to penalize a man for losing in a race against an aeroplane. That one's good.
It's wild. Yeah, so that last one I think is so interesting. I made it green and I wrote down.
So we can just assume that machines will always beat us at flying,
but we cannot just assume that machines will always beat us at thinking, right?
So why the one and not the other? We can just say,
oh, you know, machines can fly much better than us. We do not need to complicate the test. It is
very clear that machines can do this. So in that way, we can just say, well, we will always be
better at thinking, right? Machines can fly, we can think, sort of to each their own. Why can't you just say this? Like, oh, we cannot make a machine for flying that is better than...
No, humans cannot be better at flying, but maybe then also machines cannot be better
at thinking.
Case closed on page two.
Why do we need 20 more?
I think I'll try defying gravity.
I don't want a machine to always beat me at flying.
I'm going to beat the machines.
Yes.
Page two, we get some examples of questions and answers we might get, right?
You know, so a question could be, please write me a sonnet on the subject of the Forth Bridge. Answer: count me out on this one, I never could write poetry. Question: add 34,957 to 70,764.
Answer, and this is stage directions,
pause about 30 seconds and then give as answer,
105,621.
So they're just examples of, I guess,
interactions that could happen in the imitation game.
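As an aside on that arithmetic example: the sum given is actually wrong, which a quick check shows (the common reading that the error is a deliberate bit of human-style fallibility is an interpretation, not something the hosts state here):

```python
# The addition question from Turing's sample Q&A.
correct = 34957 + 70764
print(correct)  # 105721

# The answer given in the paper, after a ~30 second pause.
paper_answer = 105621
print(correct - paper_answer)  # off by exactly 100
```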
I enjoyed how it actually gave pretty extensive examples.
Yeah, and they all sort of seem plausible that a machine could give these.
Like none of them are... and it goes on in following sections to sort of explain, like, here's why it's plausible that machines could
give these answers, right? The thing that I'm thinking as I'm reading these examples is kind
of at this point, it's still very early on in the paper, just the utter kind of ridiculousness of the scenario,
I'm still in my phase of going, wait, wait, wait, what? You know, like, why are we trying to imitate at all? And, you know, I guess maybe I'm skipping ahead, but the thing I bring it to
in our present day is that this is, when we say imitation,
like the focus is on replacement for me, right? Like when you try to imitate someone or something
you're trying to kind of replace it, and that's this slightly dark undertone, I guess, that I take out of it when reflecting on current AI, right?
You know, when companies like OpenAI define something like AGI, you know,
artificial general intelligence, you know, there are many ways to define such a thing, but they define it around being able to replace someone as an employee,
right? So I can't help but at this point in the paper, you know, like make that link from
imitation to replacement. And yeah, I don't know, that's, it just worries me.
Yeah, I think that's a great perspective. That's just the word imitation already
gives a certain vibe, however you would then define imitation game.
Yeah.
I mean, clearly though, that can't be your thought because Turing does a perfect job
predicting all of our thoughts and all of his objections here.
And he says, the game may perhaps be criticized on the ground that the odds are weighed too heavily against the machine.
If the man were to try and pretend to be a machine, he would clearly make a very poor showing.
He would be given away at once by slowness and inaccuracy in arithmetic.
May not machines carry out something which ought to be described as thinking, but which is still very different from what a man does? This objection is a very strong one, but at least we can say that if, nonetheless, a machine can be constructed to play the imitation game satisfactorily, we need not be troubled by this objection.
Okay. So I guess what he's saying implicitly here is, like, well, if the machine can win the imitation game, it can think.
But we've replaced that question, right?
The question is not can a machine think, it's can it win the imitation game?
So then it's like, well, if the machine can play the imitation game and win, we don't
need to be troubled by the objection that it can't do that.
Yeah, like this, this is, I think this is him trying to say, here's how we can use
the answer, uh, like we can use the imitation game to answer that first
question, but it's so weak, because he's saying, like, look, if you don't allow us to follow this particular chain of reasoning, if you object to that, well, you know, we need not be troubled by that objection. Like,
he just basically says, yeah, if you object on these grounds, we're just not going to worry
about that. And that's not just here. That's in many of the other points later in the paper.
He says, well, you may say that this is a reason for objection, but that's not true. I think even somewhere in the paper,
he says, well, I'm sure that this guy that disagrees with me would change his mind after
hearing this argument.
Yes, he literally says that.
Have you asked this person if they agree with this? So it's all this assumption that here's
some evidence. Now you are convinced,
right? Right?
So yeah, and to make it, you know, not just about how he presents it, but to think about the terms of this objection: imagine that there's superintelligent beings that can think, let's just say by fiat that they can think, but they're really bad at imitating people. And it's really obvious, they give it away immediately because they can't describe basics of human experience, and, you know, they lose the imitation game.
Should we conclude that they can't think?
I think the answer here has to be no, no, that's not the conclusion.
We don't conclude that they can't think.
So what he's saying is it's not a necessary criterion, but it's sufficient, right?
If I'm trying to be as charitable as I can in this little section, he's saying it might not be necessary. Like, in order to think you don't have to pass the Turing test, or, you know, win the imitation game. But it's sufficient: if you can win the imitation game, you can think.
Hey, you know in section three?
Yeah.
Yes.
You mean that awkward sentence? No, I don't know that part.
Wait, wait, wait, wait, wait. I mean, there's...
So section three is trying to now give us a definition of machine, right?
We didn't give a definition of think, except for in different terms.
But we're going to try maybe kind of to give a definition of machine.
Yeah, which also continues in section four, where he explains what computers are, because indeed those were not so advanced and well known at the time. I think three and four are maybe the least worrisome sections, because it is quite possible to describe what is a machine, and it is quite possible to describe what is a computer.
The only weird thing is that somehow in section three he says that this machine can be made
by a team of engineers, but they should all be of one sex.
Yes.
Did you all get that?
Yes.
I wrote down, I think the assumption is that they shouldn't make babies, because otherwise
they have produced something that, given time, could win the imitation game.
So they should all be of the same sex, so the team of engineers doesn't say, haha, haha, we created something that is a baby.
But I had to read it a bunch of times before I understood this.
Yeah, so he talks about it, he's saying like, hey, you know, what do we mean by machine?
Well, we have to have like any kind of engineering technique you could ever allow.
But like, well, he says,
finally we wish to exclude from the machines
men born in the usual manner.
It is difficult to frame the definitions so as to satisfy these three conditions, the ones he listed before.
One might for instance insist that the team of engineers
should be all of one sex,
but this would not really be satisfactory
for it is probably possible to rear a complete
individual from a single cell of skin of a man
Yeah, yeah, I highlighted that, it was all yellow.
Yeah, he was saying they can't have babies, or whatever you can create from the single cell of a man.
Honestly, that would be more impressive, though.
So that's an interesting... I think there's actually, it's terribly handled, but I think
there is actually something interesting here, which is that the question is not can a computer
think, the question is can a machine think.
And it's taking a very open definition of what a machine is. Like it could include some kind of like new living organism
that was made by scientists, right?
Like scientists create the symbiote in the lab
and it gradually spreads to infect the entire earth
and convert us all into a hive mind
and that hive mind is capable of thought,
thus affirming the original question,
yes, this thing that we've created can think.
This is funny because this was what me and Dave's paper was kind of like exploring a bit, right?
You know, like the first line of our paper is Dave saying, I think living organisms can be
meaningfully viewed as machines, right? But, you know, so this section really confused me, with this sort of, like, weird, I don't know, weird emphasis on, hey, what if they have sex, right? But it is pretty wild actually reading section three and section four, because it's like, wow, they were writing this in a context where, yeah, computers were not known.
And in section four, he refers to a human computer.
And to me, that's wild. Like, we don't talk about human computers at all now.
And I actually got a bit confused in certain points
when Turing said computer,
because I was wondering, wait,
are you referring to a human computer
or a digital computer?
Yeah, and he does restrict his game
to only be played by digital computers,
despite at first being a little bit more open-minded.
He says, following this tradition,
we only permit digital computers to take part in our game.
Section five even continues this exploration
of the kind of computer,
and the opening paragraph of section five is a banger.
Is it?
I will read it to you now and then you'll see why.
Okay.
The digital computers considered in the last section
may be classified amongst the discrete state machines. These are the machines which move by sudden jumps
or clicks from one quite definite state to another. These states are sufficiently
different for the possibility of confusion between them to be ignored.
Strictly speaking, there are no such machines. Everything really moves continuously. But there are many kinds of machine which can profitably be thought of as being discrete state machines.
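Later in that same section, Turing illustrates the idea with a machine like a wheel that clicks round through three definite positions. A minimal sketch in that spirit (the state names and the single "advance" input are our own illustration, not Turing's notation):

```python
# A toy discrete state machine: a fixed set of definite states, with
# sudden jumps between them on each input. Three states, like the wheel
# that clicks round through 120 degrees.
transitions = {
    ("q1", "advance"): "q2",
    ("q2", "advance"): "q3",
    ("q3", "advance"): "q1",  # wraps around to the start
}

def step(state, symbol):
    """Jump to the next definite state for the given input symbol."""
    return transitions[(state, symbol)]

state = "q1"
for _ in range(3):
    state = step(state, "advance")
print(state)  # back at "q1" after one full cycle
```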
I just like that.
Like, he's, you know, completely insane, but he's still not wrong about this one thing.
I thought for sure you were going to try to say this is what you argued in our last episode.
That like, discrete and continuous are the same thing. But what you argued was
that space, that time, is discrete and not continuous. And that continuous-
No, time is- Sorry, and that there is no difference between
those two notions. Yes. Which is not what is said here at all.
No, that's not what Turing's saying because he's insane.
Okay, I would say if you're trying to use this paper
as justification for a view you hold,
I'm worried about you.
No, I like that he's speaking to the interesting tension
between discrete computation and continuous physical reality.
Okay, that's fine.
That those two things create an interesting liminal space within which there is no distinction.
That's not what he says, but okay.
Well there is a distinction.
You have to read some other things to understand how to read between the lines of what he's
saying and really understand the point he was trying to make but failed to make.
Wait, are you telling me these lines aren't discrete lines?
There's continuous parts in between the lines?
Oh my gosh, my mind is blown.
Alright, and that's-
Sorry, I didn't know that.
I read them as discrete lines.
This is immediately followed by table deleted.
There's some table about machines and whatever.
Yeah.
Yeah, sorry, I should have sent you all.
I found the better print, but I found it too late, and my print's better.
Do we have a lot of comments on sections four and five?
They're explaining what computers are, how they work.
That's the part of Turing that we talk about, right? And we know he was visionary in helping
create computers, and a lot of the things, like, wow, that was prescient. It was like he had foresight of what would happen, and of course partly because he had contributed to this. But I sort of skipped through it, because I'm like, you and me, Turing, we are in agreement about what computers are. So there's not
so much interest there.
Yeah, that's not the problem.
That's not the problem. Let's go on to real problems like theology. That's a real problem.
Yeah, I'll just say just to make sure we get, you know, his argument throughout here that
he gives us a reiteration of the question now.
It says, can machines think should be replaced with are there imaginable digital computers
that would do well in the imitation game?
He goes on to give a bunch of caveats, but that's, that's not, I actually want to read
this whole paragraph.
I think this whole paragraph is wicked.
Okay.
Wicked, wicked bad.
Not Defying Gravity Wicked, bad wicked.
Did you just watch Wicked?
No, I saw it on the stage play version.
I haven't seen the film.
We may now consider again the point raised
tentatively that the question,
can machines think, should be replaced by, are there imaginable
digital computers which would do well in the imitation game? If we wish, we can make this
superficially more general and ask, are there discrete state machines which would do well
in the imitation game? But in view of the universality property, we see that either of these questions is equivalent to this: Let us fix our attention on one particular digital computer called C.
Is it true that by modifying this computer to have adequate storage,
suitably increasing its speed of action and providing it with an appropriate program,
C can be made to play satisfactorily the part of A in the imitation game,
the part of B being taken by a man?
So we have a computer called C, not to be confused with the interrogator called C, who may be
of either gender. And this computer called C is now taking the place of part A in the
imitation game and part B, which was previously a woman, is now being played by a man.
Yeah, this is where we get, like, oh, he meant the whole time not that the computer pretends to be a woman. Yeah. He meant... okay. Yes. Yeah. The computer is going to pretend to be a liar, and be the man who is going to be the woman. And the woman is no longer in the picture. But the computer C is also the interrogator at the same time. So it's a rigged game. Oh, no.
It's two computers against one man.
That's so...
But it's so messy, right?
Yeah.
One thing that scientific writing should not be.
This just clearly needed an editor.
Yes.
Hey, can you like clean up what you said there?
Because it's a little confusing.
Like, this feels like a draft, just sent out the door without ever reviewing it.
Okay, but part B, which is being taken by a man here, but really, for his argument, that could also be a woman. It doesn't have to be, yeah.
It doesn't make a difference, but he didn't specify that. They could be of either sex, a man of either sex. The point is, he didn't even make it clear that you're not trying to pretend now to be a male, you're trying to pretend to be a human, as the machine, right? That the machine is trying to trick you is, like, implicit in this. And if you look at other things that Turing said later, whatever, this seems to be what he meant: the machine is pretending to be a human.
Part of B being taken by a man.
Probably man in the general sense meaning person.
Yes.
Right?
Yes.
Yeah, but it's still weird.
Yes.
It's super weird.
Given that the original example was a man and a woman, you should have used human being
here at the very least.
And it's also interesting, again, let's read some footnotes.
Like, did you all look at where this was published?
So this was published in a journal called Mind, which has on its cover, I don't know if it did then, but now, a quarterly review of philosophy. And this is allegedly a peer-reviewed journal. So that means that at least
one or probably two or three other
scientists probably philosophers have looked at this, right? So
What the fuck?
I don't even, right? This is like if an undergrad of mine submits such an essay, which they do, right?
They exhibit this messy thinking, but then there's an adult in the room that says, what
is A, right?
This is questions I ask my students, like, what is A?
And why is A something else on page five?
We should do some more history and dive into who was responsible
for this.
It's so...
Anyway.
And if you thought it was bad so far, or you're like, yeah, we're making a big deal about
nothing.
Yo, that's worse.
No, no, no.
You know what's next.
You know what's next.
We're getting to...
Okay, before we get into all of the things, I will say there's things in here where, you know, Turing was very clearly right. Like, he says, I believe that in about fifty years' time it will be possible to program computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning.
I'm not saying we've gotten there. I'm saying like, he clearly saw like the scaling laws of computers, right?
He clearly was foreseeing some of the stuff.
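For scale, Turing's 10^9 is usually read as binary digits of storage (the paper itself just says "storage capacity of about 10^9"; the bits interpretation is the standard reading, not stated in this transcript). A quick back-of-the-envelope conversion:

```python
bits = 10**9             # Turing's predicted storage capacity, in binary digits
bytes_total = bits // 8  # 8 bits per byte
print(bytes_total)             # 125000000 bytes
print(bytes_total / 10**6)     # 125.0 megabytes -- modest by today's standards
```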
Okay, but then do the next sentence next.
Yes, I know.
Yes, I agree.
The original question, can machines think, I believe, is too meaningless to deserve discussion.
So, okay.
Nevertheless, I believe that at the end of the century, the use of words and general
educated opinion will have altered so much that one will be able to speak of machines
thinking without expecting to be contradicted.
Which, okay, so this whole question we start off with is too meaningless to deserve discussion.
So we've replaced it, and I mean this is why I say the behaviorism has to be coming in
here, is because, you know, there is this like logical positivism, and in all honesty,
I think we're in early Wittgenstein still, where he was a logical positivist, so this
would make sense.
No, no, no, this is later Wittgenstein. Or is this this late? Oh really? Okay, never mind. I take it all back. Because I think
he died in like 51. Yeah, no, you're right. You're right. This would be later.
Okay, well apparently Turing didn't get updated on where the philosophical
world had gone. Yeah, he died in 1951. So this was after he came back from hitting middle schoolers.
Did you know this?
Wittgenstein, he did all this mathematical work.
I cannot pronounce it.
And then he was like, oh, this is shit.
I hate my work.
And then he went back to Austria to teach middle schoolers. And allegedly there was some physical abuse. Oh no. Because he came back later.
Yeah. He went back to the school and he apologized for his behavior. And then he came back.
He came back to Cambridge. It's like, hey guys, I'm back. Everything I wrote before is nonsense.
But it's actually, I think, the rest, the middle schoolers not so much, but the rest is quite relevant for this paper. Because he says, with the language game, he
says that language should not be understood as mathematical, but language should be understood
in context. This is this game, which is very different from game as in imitation game, where you can win. This game is Spiel in German, which is much wider.
It also includes like theater and which it's close to my native language where you can also say this
and so he says language you cannot capture it mathematically because every word it so depends
on context. Every word, it always depends on context what it means. And I'm like, so a text that is just printed out by a teleprinter or whatever word he was using, you are stripping it of context, right? So, no, this is definitely late Wittgenstein, when Turing was hanging out with him. Yeah, that's a good point. Yep.
He must not have been listening very closely. Yeah, so this reeks of this, like, positivism that, you know, the early Wittgenstein would have liked, which, like,
hey, if we can't empirically verify something, then it's meaningless.
And so the idea is thoughts or something in your head.
I can't prove at all that you have thoughts. All I can see is your behavior.
So therefore the question of can machines think would be as meaningless as can humans think. Right, that would also be a meaningless question, because thinking is a meaningless idea.
There's still people who believe this today. It is not the most popular view, but it is still a fairly popular view in the
neuroscientific community
Alright, the theological objection that Lu mentioned. I just find one thing interesting: on the page that starts introducing these objections,
he says, and I directly quote, I now proceed to consider opinions opposed to my own.
But actually, we've had a whole load of that already, you know, like it's, it's been a lot of like trying to foresee objections
right from like the second page.
And now we have like a whole set of other objections.
The first one being the theological objection.
And it starts,
thinking is a function of man's immortal soul.
God has given an immortal soul to every man and woman,
but not to any other animal or to machines.
Hence, no animal or machine can think.
That's his characterization of the objection, right?
Yeah, okay, but then apparently the question,
can machines think, has already been answered
because it is equivalent to, do they have a soul?
So now it is a meaningful question to think of, can machines think?
This literal thing that two pages ago we say, no, it has no meaning.
Okay, but now you're saying it has meaning because they cannot think because they have
no soul, which I think is less ridiculous than some of the other stuff he says.
Because sure, right, then it just moves the goal to what is a soul, but this is just top, top, top, top.
Oh, well, you have an immortal soul, you do not have a mortal soul. Oh, so you can think. Case closed.
Yeah, I guess the weird thing I find, or like the weakness of his writing here is that I think he sort of
flits between these two different problem statements, kind of, like, I think, you know, there's this question, can machines think, and then there's this question of, you know, can we beat the imitation game. And I'm almost like, make up your mind, man. Like, you know, if you're going to discard this first question,
then, you know, truly, I don't know, truly discard it.
I find it gets quite like muddled because, you know,
literally just on the previous page, he said,
this is not a question like that means anything.
And now he's kind of like, I guess,
trying to respond to this question and
answer it. So it just seems seems really like messy and nothing to me. I don't even know how
to start picking this apart. And of course, this, you know, yeah, it feels like saying not a lot, really, because, you know, he's already said it's not worth anything. I just want to comment on the, like, okay, so first off, if we tried to be consistent with Turing,
we could just say winning the imitation game
is a function of man's immortal soul,
which would sound really, really sad.
Right, right.
But, but also like this is not,
I just want to be clear, like,
if you try to go like to like a Thomistic idea
of what the mind is, this is just like a bad
version of Descartes.
This isn't like the Christian orthodox view.
This is like trying to pretend Descartes was the orthodox Christian view, and replacing the soul with the mind, because Descartes didn't believe that animals had minds. Like, this is just a mishmash of not even clear Christian theology,
let alone the next statement that we already saw, which is, how do Christians regard the
Muslim view that women have no souls? Oh my God. But then I honestly, I'm not going to lie, I like
his trying to answer this objection, not because I agree with it, but because I think it's, I don't know, courageous as an answer.
So he says like, this is a serious restriction on God, right? He says,
It is admitted that there are certain things he, God, cannot do, such as making one equal to two.
But should we not believe that he has the freedom to confer a soul on an elephant if he sees fit?
We might expect that he would only exercise this power in conjunction with a mutation
which provided the elephant with an appropriately improved brain to minister to the needs of this
soul. An argument of an exact similar form may be made in the case of machines." His whole argument
is that we only look at the behavior of the machine. We don't look at the way in which it's
constructed. But here he's saying God would look at the way in which the machine is constructed and decide that
it deserves a soul, not at its behavior. He's saying it would be possible in this theological
realm here that the machine can pass the imitation game because it has all the same behavior.
God's not changing the internals, but it doesn't think, because it hasn't been gifted the soul by God. So, like, he undermines his very argument.
Total, total nonsense. And offensive and bad. Offensive to many people in different ways.
And also offensive, again, offensive to scientists, because this page, this is the first page that actually
references text outside of this paper, and the first reference is to the Bible.
I do not object to citing the Bible.
There's nothing wrong with that, specifically not in the section about theology, but there
have been pages in this paper before where it would have been very appropriate
to cite something like people that have thought about thinking. So I just wrote down here in the
margin, like this is the first reference to text outside of this paper and then it is the Bible,
that is just special, right? Even in those days, maybe the Bible was cited more in
science, I can imagine. But it is so special that that is what struck me most as offensive to my
science heart. It's like, yeah, but dude, standing on the shoulders of giants. Who are you? Whose shoulders are you standing on here? I mean, God is a giant, I'm sure, but it is just weird.
It's just icky.
I found it quite jarring, to be honest. Like, you know, so far, the paper has been trying to lean into this, like, I guess, like, the aesthetics of a mathsy paper, of a computer sciencey paper with X and Y and A and B,
and then suddenly we, you know, like, the first citations we have are from the Bible. I found it,
like, yeah, jarring to read. I mean, I guess maybe the reason it's jarring is not that it
shouldn't have done, you know, discussed these theological matters,
but it's more that, like, I guess we would have hoped to see some of these philosophical citations
come earlier throughout, and then it would have felt less jarring. Yeah, I'm not trying to downplay
that this is the first time we see citations, because this is one of the things that did strike
me as well. But I will say just for the listeners, he's not citing them in support of anything he's saying. He actually says that like, he's not very impressed
with theological arguments, and then gives examples of how people use theology to argue
against Copernicus. What he's citing from the Bible are the anti-scientific arguments
that people made.
I want to read the first sentence of the next section, because it's delightful.
So the next objection, we're listing out objections
to this thing that Turing's doing,
this thing he's doing.
And the next objection is called
the heads in the sand objection,
and here's how Turing characterizes it.
The consequences of machines thinking would be too dreadful.
Let us hope and believe that they cannot do so.
Which you know what?
True, based, valid, 100% agree, really good objection.
Yeah, end of story, right?
But also this is so interesting
if you think of the time it was written, right?
So it's 1950, so this is five years after Hiroshima and Nagasaki. So apparently, right? We have just seen machines
that are too dreadful, right? There are machines where maybe we really don't want them to exist.
We have just seen 200,000 civilians being killed by a dreadful machine.
That is something you can engage with that maybe sometimes machines are so
horrendous that they are not wanted in this world.
That is definitely something I was thinking of.
It's like, well, dude, some of your fellow computer sciency people have
just contributed to mass civilian casualties in the nicest form with a machine and also
with computers, right? With computering, some heinous stuff was done.
That's why this position, this like consequences of machines thinking would be too dreadful.
It's like, I say it's based, I think it's like very true today,
not because there's anything about the machines themselves
that makes this dreadful,
but it's that the machines are this embodiment
of so many qualities of humanity
that are painful and hurtful,
and we use them as tools to subjugate other people
or to spread misinformation
or to create cycles of addiction or all sorts of things.
And also what Lou was saying that these machines are made to imitate, right?
This is why, for me, chatbots don't feel awesome now. Because stuff I like to do, right,
like preparing lectures or grading students, which is my job, people are saying, no, but
your job that you enjoy
can be imitated by a machine. So indeed, it's not even the dreadfulness of the machine itself.
For me, it's mostly the exploitation, or all these other things that you're saying,
this imitation. I think that's such a great observation. That imitation is what makes
it dreadful for me. I do not wish to be imitated by a computer.
I have a soul.
I'm like an elephant with a capable brain.
I have a soul.
The thing about this heads in the sand objection is like,
he could have used this to give like a serious objection
instead of just, he says that like,
oh, I don't think it's expressed so openly,
but clearly people think this.
He could have used it to say, we shouldn't make machines that think because it will lead to
ethical problems or, you know, like all of the things we were talking about, about, you know,
society having these issues with AI. He could have brought that in here, brought in some ethics,
brought in something serious. Instead, especially since he puts this as number two, as if this like, you know, there's
the theological objection and then there's the naysayers who just say it won't happen.
I don't like this.
Yeah.
Yeah.
And he even links them up, saying that's probably why those
people who make the theological objection really just have their heads in the sand.
It's like, this is not a way to argue.
This reads like a bad blog post,
not a peer-reviewed journal paper.
So he was ahead of his time. Yeah.
Hacker news comment right here.
And it's also that even the thing he's saying, right?
That many people might think this,
but they don't say it openly, that in itself is interesting, right?
Why don't people say this openly?
And is that even true?
Right?
I don't think there were so many people thinking about thinking machines at the time.
So the fact that, of the very, very small percentage of people in the world who were thinking
about thinking machines, some of them weren't openly saying what they were thinking. Why? Like, why does this
make people feel uncomfortable? That is an interesting question to unpack. And he's like,
oh, well, they don't say it. So, ah, okay, moving on.
It reads like he's subtweeting some colleague of his that maybe he discussed this with earlier
and they raised this objection, and so he wants to get in a quick jab.
All of the sections are like this, right? I think in some of the sections he literally names people
that he's arguing with. True peer-reviewed paper. But I think you're right that for everything,
these are probably things that people have said in those words or slightly different words.
Yeah, then we get the mathematical objection. Oh, I want to say so many things about that. Do it, do it, go off. Okay, two things. One,
this is the first time he's referencing another scientific paper, which isn't the Bible, which
is like, here comes all the mad citations like Gödel, Church, Kleene, I think you say,
Rosser and Turing, of course, he's self-citing like any proper scientist. So that is interesting.
But what's even much more interesting, there's a recent paper by a Dutch cognitive scientist,
she's called Iris van Rooij, and she has actually mathematically proven that answering a question the way ChatGPT does is an NP-hard, or an intractable, problem.
I will admit that I do not understand this paper, but I trust that it makes sense to
people that actually understand how it's done.
I think I can intuitively explain how I understand that to make sense.
Is that if I have to answer a question, for example,
has it ever happened to you that you felt very embarrassed by something?
What I have to do now is linearly search all my memories.
This is linear search.
For each of my memories, I have to think: is this an embarrassing memory?
Is this one? Is this one? Clearly, that is not feasible, because you have so many memories. Whereas a person can
do this instantaneously, right away. And this is probably my
half-botched explanation. I definitely recommend people to actually check out this paper. It's called Reclaiming AI as a Theoretical Tool for Cognitive Science.
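As a toy illustration of the linear-search intuition described above (the memory list and the embarrassment flag are hypothetical stand-ins, not anything from the paper):

```python
# Toy sketch: answering "have you ever been embarrassed?" by scanning
# every memory one by one. The 'memories' list is a made-up stand-in.
memories = [
    {"event": "gave a lecture", "embarrassing": False},
    {"event": "tripped on stage", "embarrassing": True},
    {"event": "graded an exam", "embarrassing": False},
]

def ever_embarrassed(mems):
    # O(n): one check per memory. The cost grows with every memory you
    # accumulate, which is the point of the intractability intuition.
    for m in mems:
        if m["embarrassing"]:
            return True
    return False

print(ever_embarrassed(memories))  # True
```

The contrast being drawn is that a person answers such a question instantly, without anything resembling this exhaustive scan.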
But I think that is immediately what I was thinking of when he says that this cannot
happen.
And if there's any argument that he, Mr. Turing, could have thought of, it would be this argument, like, what is the computational
cost of being able to answer any question, going back a few pages, I think we read this as well,
within five minutes, right? So this is the one argument where his work and the knowledge that
we can assume he has, right, apparently he doesn't have much philosophical knowledge, or he doesn't show it, but he has this knowledge where he could argue: hey, what is actually the expense, like the
big O notation, of answering any question? Maybe that's intractable. And then seventy years later,
someone else did it, because of, you know, these stochastic parrots. And that was what I was thinking about. It's like, why didn't you, Turing, think of how slow this would be?
Yeah, I think this is not quite exactly the same. But it just reminds me, I wanted to
bring it up, which is, in my opinion, my personal favorite objection, like modern objection
to the Turing test. So even if you give the sophisticated versions of the Turing test that aren't in this paper,
it comes from Ned Block. And the solution has now been called Blockheads, not by him.
Other people have called this idea a Blockhead. And so what he suggests is that you could construct a computer, in principle, that could pass. We're
not saying you could actually physically construct this, this is just a thought experiment here,
but you could construct a computer that can pass the Turing test by just making a
big tree of every possible conversation that could be had in this five minute or an
hour long time span, right? And so you just, you have people go and spend all their time
thinking of every possible question that could come up. And that is a finite set because
there's a finite set of valid English words that would make grammatical sentences and
you make a big, huge tree of them and now
this computer, all it has in its memory is this massive tree of every possibility and
then it just goes and walks the tree for each, you know, question, answer, question, answer,
question, answer.
And you could even have indeterminacy on, you know, which answers you have multiple
answers for each question, blah, blah, blah.
And it would pass the Turing test,
and clearly it would not be intelligent, is the argument.
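As a rough sketch of the lookup-tree idea (not Block's own formulation, and shrunk from the astronomically large tree he imagines to three made-up nodes):

```python
# Tiny stand-in for Block's conversation tree: each question maps to a
# canned answer plus the subtree of possible follow-ups. The real
# thought-experiment tree enumerates every grammatical exchange that
# fits in the time limit; this version is purely illustrative.
tree = {
    "hello": ("hi there", {
        "how are you?": ("fine, thanks", {}),
        "what's your name?": ("call me Blockhead", {}),
    }),
}

def converse(node, questions):
    """Answer each question by walking the pre-built tree.
    No thinking involved, just lookup."""
    answers = []
    for q in questions:
        if q not in node:
            # Off-tree question; the imagined full tree has no gaps.
            answers.append("I don't follow.")
            break
        answer, node = node[q]
        answers.append(answer)
    return answers

print(converse(tree, ["hello", "how are you?"]))  # ['hi there', 'fine, thanks']
```

The argument is that this machine would pass the behavioral test while doing nothing anyone would call thinking, since its internals are pure table lookup.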
Wow, that is lovely. I didn't know that objection. I mean, it's somewhat similar to the Chinese
room argument, maybe, where someone... It is like that, but I think this is much clearer
and also more in computer sciencey terms. Oh, I really like this.
Yes. And he points out that what he's ultimately arguing is that this behaviorist view,
which is implicit in the Turing test and explicit in these more sophisticated versions, always falls
apart. What you actually have to pay attention to is this, like, psychological component. You have to
look at like, he's still a materialist about these things. And he thinks it's that there needs to be certain causal mechanisms in the, you know,
the structure of how these answers are coming about.
And that's what matters.
So if it's just a lookup tree, it obviously isn't thinking.
But if it were actually mirroring the causal structure of how our brain works, blah, blah,
blah, it could be thinking.
He thinks it's, you know, it's complicated how you would know that, etc.
But I think this is a really good one and it just reminds me of what you're talking about there
This mathematical objection is really interesting. I do have to say, you know, the self-citation...
Yeah, and it not only comes with the self-citation, there's the next sentence.
So he says, there are other, in some respects similar, results due to Church, Kleene,
I don't think it's Kleene.
Kleene?
Yeah, I think that's how, yeah.
Rosser and Turing, the latter result is the most convenient to consider since it directly
refers to machines.
And he didn't even say, my result, right?
It's like, the latter result, you know, whoever that Turing guy is, which I just, I don't
know, I just found it funny. Because, you know, this isn't blind peer review. Like, he didn't submit this and nobody
knew who it was. He submitted it and they're like, oh, Turing wants to publish? Great.
And you know what I also think is interesting? I made a little note on the quote that goes
over from one page to the other. It says, whenever one of these machines is asked an appropriate
critical question and gives a definitive answer, we know that this answer must be wrong. And
then I'm skipping a little bit. We, people he means, too often give wrong answers
to questions ourselves to be justified in being very pleased at such evidence of fallibility on the part of the machines.
And I wrote on that, like, this is, oh, but people hallucinate too,
avant la lettre, right?
This is literally what people say now.
I always say, Hey, you know, algorithms, they hallucinate and
people always come back to me and they say, yeah, but people also make mistakes.
I'm like, oh, oh, yes, yes, but it's not the same. Anyway, I thought that
was just funny. It's so funny, though, too, because I want to, like, make sure we see
how he thinks these objections are working. So these are objections not to
his criterion of the imitation game, right? These are objections to the claim that machines can
think and this mathematical
objection is about like, you know, the incompleteness theorem of Gödel, and that there's certain things
that, you know, machines can't answer, blah blah blah. But then at the very end of this, he says those who hold the mathematical argument would mostly be willing to accept the imitation game as a basis for discussion. Those who believe
the previous two objections, the heads in the sand and theological ones, would probably
not be interested in any criteria." So he's like, oh, here's this objection to the
question I'm not asking. As for objections to mine, no one would have an objection
to mine if they understood this sophisticated math stuff, because they're smart enough that
they would see my good criteria.
If you're smart enough to know this math, yeah.
And also, right, I am a person who
believes at least in the second and maybe also in the first.
And indeed, I am not interested in any of these criteria,
because I am not discussing whether or not
a machine can think, right?
I am not interested in this question.
It's not a helpful question in any way to anything that I want to do.
So again, like with the other thing, like with people not saying something openly,
if people are really not convinced by your arguments,
maybe that says something about your arguments, right?
And it reads like Turing is not even especially convinced by his arguments, right?
Yeah, absolutely.
Like we get a lot of self-doubt expressed in this. And it might just be that, you know,
he finds this question intellectually stimulating to consider or whatever, but he
knows that it's sort of a question that's not going to be very well received by the
scientific community at the time.
So that might be the source of some of this doubt.
But it does not inspire a lot of confidence reading any of this.
I will say why this mathematical objection probably came up as a thing: Gödel, actually.
It might have been in unpublished papers, so, you know, I don't know for sure that Turing knows this,
but Gödel has said that
the conclusion of his incompleteness theorem is either
that dualism is true about minds, that we are not computers, we are not Turing machines, that our minds go beyond this, like, formal system, or that math is Platonic, that there are actual
mathematical objects in Plato's heaven. He said it's at least one of the two, was the conclusion
in his mind of the incompleteness theorem. Which I do find really interesting, that that's like,
so he is trying to answer an objection here, but he doesn't even answer it. Like he just is like,
well, he kind of gives a thing about like, well, you know, yes, they can't do all of this stuff,
but there's other machines that could do it. And yeah, humans hallucinate too. So just use
my criteria. But yeah, I do love the point about hallucinations. We haven't had an AI episode since
this whole AI craze because I do feel like, you know, people, and I'm happy that this is our AI
episode. I feel like people overhype all of this AI stuff. And at the same time, I will say people,
some people want to say like, it's literally completely useless and you never can get anything good out of it.
And I'll say, I have used it for fine things.
It also sucks at so many things.
I did use it here, and its summaries were really bad.
I used deep research, to see if it was AGI, which it definitely wasn't,
to find some papers from 1950-something that
respond to this.
And it found me some papers.
Its summaries of those papers were completely and utterly wrong and had nothing to do with
what the paper said, but it found me some links, which was nice.
Yeah, I don't know.
I think it's so interesting because, like, reading back, I do think if you had gone back pre-ChatGPT,
people still would think the possibility of passing the Turing test is very low. I think today,
if you did it with a random sample of random people, who are not, you know, people like us,
clued in on all the AI and what it might or might not answer. You know, they don't know to ask it how many Rs there are in strawberry.
Uh, like I think, uh, you know, I think people could be fooled in a five-minute
conversation, as he put it, by ChatGPT, right?
Like, and if they wouldn't be fooled now, they would believe that it will be
possible soon that this form of AI is imminent.
Yeah, exactly. And I don't think that, you know, anything in here gives us a reason to
think like that's what I kept wanting in this. Like, I want to continue through these objections,
but I kept wanting to find him give a justification for how do these two questions connect? If
it passes the Turing test, why think that it's thinking? And we
just never are given that, which is sad.
And what does this say about thinking and which philosophical theory that already exists
about thinking corresponds with this and which disagrees, right? And do we need a new theory
about thinking or consciousness, which is the next thing? Do we need new theories of thinking?
That is what I would be interested in.
If machines can think, let's take this as a statement
and let's explore what that means about thinking.
What does that mean about humanity or the future of work or whatever?
It could be a very interesting thing to explore.
What does this mean that a computer can do
this but that is also not there.
Yeah.
So the argument from consciousness is like...
Can I read this first paragraph?
Oh, please do it.
Yes.
Yeah, I don't know why this is called the argument from consciousness.
I'll just go ahead and put because that's not what this topic is.
But yes, let's go ahead and...
I love this first paragraph.
This is exactly my shit.
This argument is very well expressed in Professor Jefferson's Lister Oration for 1949, from which I quote: not until a machine can write a sonnet or compose a concerto because of thoughts and emotions
felt, and not by the chance fall of symbols, could we agree that machine equals brain.
That is, not only write it, but know that it had written it.
No mechanism could feel, and not merely artificially signal an easy contrivance,
pleasure at its successes, grief when its valves fuse, be warmed by flattery,
be made miserable by its mistakes, be charmed by sex, be angry
or depressed when it cannot get what it wants.
I love this.
I love that.
Did you see that blog post by Nick Cave, the singer Nick Cave?
Early in the ChatGPT craze, people started to send him texts, like, this is how Nick Cave would write it.
And he wrote this magnificent piece on his blog that includes the quote that algorithms cannot produce songs because data doesn't suffer.
That's, that's what I wrote down here.
Data doesn't suffer.
It's such a great piece.
I love it.
Sometimes I think we should stop
listening to computer scientists about AI. We should listen to artists because it's so
good. But it was almost what this dude was already saying. I didn't know this guy. In
1949, it's like, yes, this is the objection for me that is central, that I can make something
and then I'm proud of it. Really. When I was listening
to your episode about my paper, I was like crying. It was so nice that you were complimenting
my work. You were saying, oh, it's really good. And I was like, there exists another
person that has read something that I have written and it actually touched them. A machine can never do this QED, right?
Case closed.
A machine is never going to listen to someone discuss their work and have a
physical emotion of crying.
So we can be just, we can stop right now.
Right?
That is the definition.
And he doesn't engage with this at all.
No, not one bit.
After this beautiful poetic stuff that we've just read, it is so good.
The next sentence, the next fucking sentence is, this argument appears to be a denial of
the validity of our test.
That's exactly what it's doing.
Yes, okay, tell me more,
right? But he doesn't tell me more. He's just saying, oh no, but they're just denying our
test. No, they're not denying you. They're just simply disagreeing with you, right? Here's
a person that exists that does not share your worldview.
Yeah, he says, you know, according to the most extreme form of this view, the only way
by which one could be sure that a machine thinks is to be the machine and to feel oneself
thinking, which is not the point at all that was just made.
It was not that.
This is why I'm saying there's this like weird positivist behaviorist thing going on with
them where he thinks, oh, well, what we need is some objective criteria
by which we can determine and rule whether something is thinking or not. But that's not
what this passage said. What it said is that there's this causal relation that has to be in
place between the words you're writing and the emotions that you felt, right? You're writing a
sonnet or composing a concerto because of the thoughts
and emotions felt. And then he like tries to replace this with this like, well, what
if, you know, we could question the, let's say it wrote a sonnet and we couldn't question
it and says like, in the first line of your sonnet, which reads, shall I compare thee
to a summer's day? Would not a spring day do as well or better?
And, like, the answers that this robot supposedly gives are also not... like he acts like
this was a good response to it: it wouldn't scan. How about a winter's day? That would
scan all right. Yes, but nobody wants to be compared to a winter's day.
These aren't good questions. These aren't good answers. But even ignoring that, he thinks that this leads to solipsism. Like, if you believe that people have emotions and that they write things in response to them, that somehow automatically leads to solipsism, the idea that I'm the only one
that exists.
And it's like, wait, huh?
And then he tells us at the end, in short, then I think that most of those who support
the argument from consciousness would be persuaded to abandon it rather than be forced into the
solipsist position.
They will then probably be willing to accept our test.
You have to see what I drew here.
I'm not sure if you can see it, but this is the meme of that guy sitting outside a campus.
So he's like sitting there with his coffee and it's like machines can think, change my
mind.
Like this is the vibe.
That's the vibe that's going on there.
I like change my mind.
And then also, like if you two things, like if you read what this Professor Jefferson
was saying, I'm pretty sure that he would not agree with this because it's he like throws
no shade, right?
It's very clear what he means.
He says, no, it's only thinking if it comes from emotions. And also,
he could have actually engaged with this person if they wrote this like a year ago, right?
It's 1950, this is 1949. He could have given them a ring or written a letter or he could
have.
Yeah, instead Turing just says, Alan just says,
I'm sure that Professor Jefferson does not wish to adopt
the extreme and solipsist point of view.
You're sure, are you?
Maybe he wishes to.
Well, yeah, why don't you go find out?
Do some science.
Yeah.
One of the Alans of all time, really.
I will say, like, I just wanted, again, I wanted to see, like, is this bad?
I knew it wasn't, but I wanted proof that this is not just bad because of its time period or like,
because some people are going to say, oh, but he was on the frontier of this thinking and therefore.
So again, this Billy Mays here writes about this section. It says: from what has already been said, it will be seen that the question can machines
think meant something very different for Turing than it does for Professor
Jefferson. For Jefferson, and I should say for most ordinary people, any
definition of the word thinking would also include psychological
characteristics. Turing and Jefferson are in fact speaking two different
languages. In the behaviorist or physical language of Turing, only words which have objective physical
content appear, or should appear: electronic tubes, flip-flops, circuits, programs. It is
the deterministic machine language in the grand manner of nineteenth-century Newtonian physics.
So this is 1952, somebody criticizing a contemporary who was a colleague of Turing's
being like, hey, you're, you're not even engaging with Jefferson's work.
He's saying, like we talked about earlier, that this is a causal statement that has nothing
to do with solipsism, that Jefferson doesn't have to accept the imitation game, he would definitely refuse to,
because the alternative is not solipsism; that doesn't follow at all.
It's not like people weren't able to talk about this in a more sophisticated manner.
It's that Turing just chose not to.
He chose not to really engage with anything well.
Yeah, okay, so let's go to six.
There I was getting excited, because the title of six
is Lady Lovelace's objection. So I'm like, yes, I would like to hear what Ada Lovelace
had to say about this. And she said some also very, very foreseeing things very, very early
on. So one of the things that she says is that the analytical engine, which
is what she was of course talking about when she was talking about computers, has no pretensions
to originate anything. And I just wrote next to that in the margin. Yes, yes, this, this
is also true for chat GPT. It never starts a conversation. So I thought that is actually an excellent remark, right?
That I as a person, I'm sitting in silence next to my husband or on the train or whatever.
And at any given moment, I can start a conversation.
Whereas, and how would an algorithm start a conversation, right?
Because this type of algorithm, or ChatGPT... For
us, a conversation can start with, did you see that bird? Maybe it's a very rare bird
or whatever. And you can just do that. You can just point to something and then a person
sitting next to you, you don't even have to say anything. If you just start looking out
the window of the train, other people will also look outside of the window, right? If enough of you do this, and you could start anything, right? How would the process of
a machine doing that even work? So I can see this question and answer, and there's a clear
goal the computer is trying to pretend to be man, woman, elephant, whatever, but this starting, I don't see that working.
So I think this is such an excellent objection.
And then as we have seen before, it doesn't go anywhere.
Yeah, he ends up replacing it with that a better way to put it, I can't find the exact
one, but that is the computer can't surprise us, but I'm surprised by computers all the time.
That's not the objection.
Yes.
Yeah.
Oh yeah.
This is on the next page.
So it says, a variant of Lady Lovelace's objection states that a machine can never do anything
really new.
That's not the same as originating anything.
So I also wrote down in a margin there.
Okay. So from originate to something new, this is really something different.
And also, a variant of what Lady Lovelace was trying to say.
I also wrote down, uh, I think what Hannah was trying to say is right.
This is just mansplaining.
No, no, Alan, you cannot speak for a lady fucking Lovelace and say,
but what she actually means, no, she made it quite clear what she meant, right?
You cannot just say a variant of this person's opinion is something entirely
different and specifically not about someone that was quite instrumental.
And if you actually go back to Lovelace's writing in the time, it was so interesting
how she was on a much more deep level, I think, than Turing engaging with what does it mean
when a computer starts to think.
And then that was even way farther away from mechanical computers.
And already she was saying, well, it can never do something it wasn't programmed for,
it cannot originate anything.
It's like, why are you just rephrasing what she's saying
in a lower voice, stop.
Yeah, and then he does continue on, I found it here.
A better variant of the objection says
that a machine can never take us by surprise.
And then he says, this is very serious,
but machines take me by surprise with great
frequency. And it's like, if it's better, how is it better if you're just going to immediately
contradict it?
And change the sense of what surprise means here, right?
Yes, yes.
Yeah.
He's equivocating on it.
Yeah. He wants it both ways. He wants surprise in the sense of, like, original
thought, creativity, you know, that spark of life kind of surprise.
But then also, so what, and what's going on here is who is feeling the surprise or like
what, where is, where is the surprise coming from in the latter where it's like, Oh, machines
take me by surprise to create frequency, right?
Oh, I accidentally got a shock of static electricity.
That is a kind of surprise that is wholly originating within you Turing, not surprise that comes as a result of a, you
know, the both entities existing together and having this interaction
between them. It's very cheap. It's, it's, this is very, very bad reasoning here,
very bad thinking. And what's also interesting is that he says that these surprises are caused by creative mental acts on my part, right?
So if a machine does something, I find this to be surprising. And this is also
what Lovelace is saying. I don't think this is cited in this paper, but she
does say something like, if a machine finds something, then we are likely to find
this finding already interesting before the computer found it.
So if it does something, then we humans are like, oh, that's interesting, but we already
found this very interesting.
And that would actually be interesting again to correspond to the Turing test.
Okay. So we are now surprised by this computer that can do something.
Is this actual thinking or is this maybe we are surprised by this
because we want the computer to be thinking.
So this mental act, the mental gymnastics we have to do to see humanity
or intelligence or whatever in the machine is again something that would
be interesting to unpack, but he does not do this.
Of course he says no.
Yeah I have the quote here of this exact reply to this thing that I just think is hilarious
because it's so bad.
I do not expect this reply to silence my critic.
He will probably say that such surprises are due to some creative mental act on my part and reflect no credit on the machine.
Yeah, in fact I did, yes.
Yes, thank you Ivan.
This leads us back to the argument from consciousness, and far from the idea of surprise.
No it doesn't.
It is a line of argument we must consider closed, but it is perhaps worth remarking
that the appreciation of something as surprising requires as much of a creative mental act whether the surprising event originates from a man, a book, a machine, or anything else. Absolutely fucking meaningless. Like,
completely missing the point of this objection.
So, so off the mark.
Like, he just repeats at the end, like, yes,
it requires the creative mental act or whatever.
On my part.
On my part.
But he doesn't engage with this at all.
Yeah, of course I could be surprised by a book that fell on my head.
Does that mean the book did something?
Like it just, it's so confusing.
Like what are you talking about?
You can be surprised without there being content that
is surprising, right? It just doesn't make sense. And then we get the argument
from continuity in the nervous system, which I don't have much on. No,
I didn't have anything. Yeah, the only thing I have here is, like, yeah, this is a difference,
but you can't detect it in the imitation game, so it doesn't matter.
That was my summary of that section.
Yeah, there were so many numbers. It was tiring.
Yeah, the informality of behavior. I also don't have...
Yeah, that I actually think is very interesting. So I was reading an amazing book.
You should all read this book. We can do an episode on that book as well if you want to. It's called What Computers Can't Do. That's from the late
60s, I think. And then there's a newer version, What Computers Still Can't Do, from Dreyfus and
Dreyfus. Very cool. They're brothers, and one is a computer scientist and the other one
is a philosopher. And that actually makes sense, because they do useful things together. And what I was writing here is that this first line, that is,
it is not possible to produce a set of rules to describe what a man should do
in every conceivable set of circumstances.
So he says you cannot make a decision tree for everything that you should do ever.
And this is also an argument from this Dreyfus book
that if you would want to make a machine
that can respond like a person,
like a human in any situation,
then there's just simply too many situations.
Maybe it's also a bit like the van Rooij argument
that there's just so much information that you need.
A question like, how are you?
How do you respond to this?
It totally depends.
Is it your neighbor?
Is it your husband?
Is it your employer?
Is it a random guy on the street?
And we people, we know, most people know in the culture that they grew up in, what is
an appropriate response.
If your next door neighbor says, how are you, you're not going to say, I'm so, I'm so depressed,
right?
I don't know if I can live through another day.
In most circumstances, that is not an appropriate response, but it is an appropriate response
in another situation.
So I think that is good. And then somehow the next paragraph starts with,
from this it is argued that we cannot be machines. What? Where is this going? And then again,
he goes with the rephrasing. He says, oh, I shall try to reproduce the argument, but I fear that I shall hardly do
it justice.
Okay, well, maybe if you cannot even succinctly summarize your opposition or the objection
against your position, if you cannot just rephrase it and not do it justice, then what are you doing? Right?
Yeah, I think this one's gotta be a subtweet of Wittgenstein here,
because Wittgenstein had a lot to say about rules and how you cannot explain
behavior in terms of rules.
Yeah, maybe.
And so like, I do think like it feels like something where he's trying to bring in this
argument that Wittgenstein had, because his point was that if you tried to explain behavior
in terms of rules, you need a rule to apply the rule correctly, and then you would get
this infinite regress, and so rules can't be the thing that explains.
And I feel like this has to be, this has to be something where he's trying to do that.
And I will say famously, that section of Wittgenstein is considered impenetrable.
Kripke has a whole book, Wittgenstein on Rules,
where he gives his interpretation of what Wittgenstein meant about this whole thing,
which is now kind of the canonical thing: people actually point to what Kripke said that Wittgenstein
said about rules rather than looking at Wittgenstein. So this is what it felt like to me.
I hadn't read it like this, but even without Wittgenstein on rules, Wittgenstein on language,
which is much more readable, would also fit. Yes. But then also,
if that were true, and I see absolutely why you would think this, then it is unbalanced.
Why, a few pages ago, would he be so specific, saying: exactly, Professor Jefferson said this
in that lecture, in that place? And then, if that were the case here, why wouldn't he just say it?
He apparently is not afraid to say that some dude said something he disagrees with.
So then it would be weird.
Like, I don't know how big Wittgenstein was in this time period.
I mean, he was sort of a big deal.
So maybe, I don't know. But yeah, it's at least weird that one
subtweet is so specific, with names named, and this other subtweet isn't. People have probably
thought about whether or not he meant Wittgenstein here. And we get the same problem that I talked about
with this positivism sort of thing, where basically his argument is: hey, somebody says there can't be
rules that govern how we all think.
But how do we know that there can't be those rules?
Have we really searched conclusively and therefore proven that they're not there?
And if we can't do that, then you're wrong.
There could be those rules, right?
And he kind of tries to appeal
to the laws of physics.
It's like, well, those are the rules
that we're really talking about, right? Not.
And it's just not a,
the objection itself feels a little muddled.
Like he says he can't even give a good reason,
you know, statement of it.
And then we get to the best one.
Yes.
Yeah, this is the argument.
Section nine.
Yeah, section nine.
Highlighted the entire paper.
This is this is the peak of the mountain.
This is incredible.
I want to be clear.
He thinks this is the best objection.
Yes.
He saved the best for last.
Yeah.
Can I read it? Yes, please. You have the honors. Okay, I'll just do it from the first sentence. I assume that the reader is familiar with the idea
of extrasensory perception and the meaning of the four items of it, telepathy, clairvoyance, precognition and psychokinesis.
These disturbing phenomena seem to deny all our usual scientific ideas.
So far so good.
How we should like to discredit them!
Yeah, yeah, we very much like to discredit telepathy.
Unfortunately, the statistical evidence, at least for telepathy,
is overwhelming. I'm going to skip because the rest just makes no sense. I'm going to
skip to the final paragraph of the section. If telepathy is admitted, then, then we have
to tighten up our tests. Still, still it's not enough. I wrote this in the margin.
Still, still Turing, we're not refuting it.
No, no, we simply have to tighten it up a little bit.
Maybe with like lead in between people
so they cannot communicate by telepathy.
What?
Yeah, he says this argument, this idea that like telepathy is a real thing and that it's going
to interfere with the Turing test, this argument is to my mind quite a strong one.
He buys the ESP, extrasensory perception, telepathy, clairvoyance, precognition, psychokinesis,
that these are valid concerns, that this is the strongest thing that we need to
be concerned about that might interfere with our ability to determine if machines can think.
Like you might be able to, you know, guess, hey, what suit does the card in my right hand belong to?
A man by telepathy or clairvoyance gives the right answer 130 times out of 400 cards.
The machine can only guess at random, and perhaps gets 104 right so the interrogator
can make the right identification.
Suppose the digital computer contains a random number generator.
Then it will be natural to use this to decide what answer to give.
But then the random number generator will be subject to psychokinetic powers of the
interrogator.
Perhaps the psychokinesis might cause the machine to guess right more often than would be expected on a probability calculation,
so that the interrogator might still be unable to make the right identification.
So yeah, of course, if ESP is real, if there's psychokinesis and clairvoyance,
you could use that to manipulate the random number generator of the machine.
Oh, it's, oh, this is, this is.
And the answer is to put the competitors in a telepathy proof room.
Yeah, like a lead-lined place, right?
Yep, yep. That's the answer.
Like, even if this is the case, machines won't be able to do telepathy. Which surprises me.
I expected him to go,
maybe machines can do telepathy.
Yes.
Like, machines could have a soul,
so why can't they do telepathy?
Right?
The contrast also, the contrast of it,
everyone that is a living person with a soul
has experienced emotions,
like these emotions that Jefferson was describing,
sadness and horniness and happiness.
Emotions are real. We do not need to prove emotions are real.
Hot take.
And then we can just say, nah, we don't deal with that.
Don't care.
However, what's...
Don't need to worry about that.
Even if it were real, these disturbing phenomena, only a tiny, tiny percentage of the
population even claims, hey, I'm clairvoyant,
I have telepathic powers, right?
But that we have to take very seriously. So seriously,
in fact, that we must make some changes.
We have to tighten up our test.
I don't even. So this is it, this is peak Turing right
here. This is, like, you know, the highlight of the paper. The next section is also
quite a lot, I would say. I don't know that I want to, like, okay,
there's like this whole idea of a child machine.
Hold on, you're getting ahead of us, Dewey.
Okay, okay, okay, sorry, sorry.
So this next section, this next section is actually
kind of interesting because I think it's the part of this paper
that most densely weaves together prescient sort of ideas that have actually come to bear
in the modern interpretation of AI in a big way
with just like batshit nonsense.
So this next section, final section, I believe,
learning machines, right?
Could we perhaps
make one of these thinking machines through some kind of learning process? So, but the
first paragraph here, I read it earlier, but I'll read it again. Introducing the section.
The reader will have anticipated that I have no very convincing arguments of a positive
nature to support my views." You don't say.
If I had, I should not have taken such pains to point out the fallacies in contrary views.
Which, oh yeah, great job, Turing. Yeah, you really debunked all those
fallacious contrary views here.
Such evidence as I have, I shall now give.
Okay, so what's your evidence in support of your position?
And he goes on to give some similes between the way that a mind works and some other things,
right?
You can inject an idea into the machine and it will respond to a certain extent and then
drop into quiescence, like a piano string struck by a hammer. Sorry,
that somehow implies thinking is going on here, that there's more to what a machine does than
we might otherwise imagine, because it has this period of activity followed by inactivity. It's
kind of like a piano being struck by a hammer. That's going to help us understand it as thinking.
Then he says, another simile would be an atomic pile of less than critical size. An injected idea is to correspond
to a neutron entering the pile from without. Each such neutron will cause a certain disturbance which
eventually dies away. If, however, the size of the pile is sufficiently increased, the disturbance
caused by such an incoming neutron will very
likely go on and on increasing until the whole pile is destroyed. Is there a corresponding
phenomenon for minds? And is there one for machines? There does seem to be one for the
human mind.
How does that have anything to do with the way that thinking works? That like, oh yeah,
there's some activity, and
then the activity keeps going on its own for a while. Like, I'm sorry, that's just how
energy works, right? That's how like reactions of energy works. That's physics that has nothing
to do with the phenomenon of the mind that has nothing to do with like thinking and consciousness. And he describes
it as a simile, right? Like the word simile is the refutation of this idea immediately,
right? These aren't the same thing, these aren't equivalents. They're just like not
even strong metaphors. They're just like weak, sort of like, oh, this thing is kind of like
that thing.
And then comes another bait and switch, because this paragraph says,
adhering to this analogy, okay, we have switched something for something else, and then about this
something else we're going to ask: can a machine be made to be supercritical? Which, for an atomic pile, means it
creates many, many reactions. So you take this analogy, you do not explain why it has any value, and then you just say,
well, from there, hop, we go to the next topic.
Yep.
And the next one is great.
I love this.
The skin of an onion analogy is also helpful.
Helpful for what?
In considering the functions of the mind or the brain, we find certain operations which
we can explain in purely mechanical terms.
This, we say, does not correspond to the real mind. It is a sort of skin which we must strip off if we are to find the real mind.
But then in what remains, we find further skin to be stripped off and so on.
Proceeding in this way, do we ever come to the real mind, or do we eventually come to the skin which has nothing in it?
In the latter case, the whole mind is mechanical.
So I don't understand why we needed this analogy.
It's not as if no one had thought materialism was true at this point in history.
There are people who would believe that the mind is mechanical in the sense of it's a mechanism, it's material, etc.
It's not like this had never been considered at this point in history.
We don't need an analogy to say, hey, materialism is true.
But like what we have here seems to imply much more because we're using this metaphor that like,
oh, well, not only can we explain the brain in purely mechanical
terms, but like, you know, there is no consciousness there.
There is no thoughts, feelings, beliefs, desires.
They're just it's this kind of reductionist idea.
And again, there are people who believe this still, but this isn't an argument. He could
have made a positive argument for that.
He just made an analogy.
Yeah, and he could have cited people
that are materialists,
who would have explained,
even though I don't agree with what they're saying,
in clear terms what their position is
and why. It would have been easy for him
to engage with existing literature.
No citations anywhere here.
The next paragraph, this is my big LOL of the paper,
the next paragraph. Okay, I already LOL'd two paragraphs ago, but go for it.
These last two paragraphs do not claim to be convincing arguments. They should rather be
described as recitations tending to produce belief. Great, great Turing.
Like I will just keep going until you give up, right?
Yes, yeah, exactly.
Use another repetition, please.
Yeah, I'm not trying to convince you,
I'm just trying to like bamboozle you into acquiescing.
This is incredibly weak.
And I just wanna show like, again,
I found this Billy Mays paper so much better.
So, just to show that this idea of materialism or behaviorism was already around at this time,
he says: perhaps it is as well to make a confession of faith here. I accept the evidence of my own
introspection, as well as those of other people,
that there are such things as private psychological events, however heretical such a view may seem
today.
So Mays is even saying that, at the time, this view seemed heretical. And Turing is acting like he needs to say
something himself, like he's saying something so radical here that he couldn't just cite people already making
this argument. And we see, from
1952, this guy being like: no, I think we have feelings, beliefs, and stuff, and I know that's heretical today, but
we have this private psychological life. Sorry, sorry, I have an inner world.
Yeah, Turing could have just cited people being like, hey, I'm a behaviorist.
I mean, we would like it to not be true, but it just is true.
An inconvenient truth.
Yeah.
So the next place the section goes, I think, is actually kind
of interesting. And it gets into considering: what if, instead of trying to program
this machine, we try to build it a different way? It sort of says,
we've explored, you know, whether there is a mechanical, hardware component to
solving this question, answering this question, creating this game.
We're getting human skin
out of the picture, we're getting the taste
of strawberries out of the picture.
We're just gonna focus on the ability to like,
ask and answer questions.
And it becomes one mostly of programming,
but it's the programming here that's the most interesting.
And what if, instead of trying to produce a program
that simulates... and I'll quote here: instead of trying to produce a program to simulate the adult mind, why not rather try to
produce one which simulates the child's? If this were then subjected to an appropriate course of
education, one would obtain the adult brain. Presumably the child brain is something like
a notebook as one buys from the stationers.
Rather little mechanism, and lots of blank sheets.
Mechanism and writing are, from our point of view, almost synonymous.
Our hope is that there is so little mechanism in the child's brain that something like
it can be easily programmed, the amount of work in the education, we can assume, as a
first approximation, to be much the same as for the human child.
Wow, that's a lot of conjecture. That the mind is sort of like a notebook with lots of blank sheets and little mechanism. How bold of you to proclaim that, Turing, and to base your argumentation upon
that. But the general idea that we'll produce something simple that is capable of learning,
and go through a learning process with it, I think is actually super prescient, because that's what we've done to unlock modern AI:
produce a very, very simple mechanism,
and subject it to something akin to learning, something equivalent to learning. And then on the next page he says, we normally associate punishments and rewards with the
teaching process, which of course is behaviorism.
But it's also, I wrote there, like, this is a fitness function, right?
That is how we train algorithms: you just tell it, this is good and this is bad.
Even though it comes from a sort of weird place, it is how AI is trained. So this part
also again has weird stuff in it, but if you squint at it, if you look at it
through your eyelashes, you can see how it is describing machine learning.
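To make the "fitness function" idea concrete: here's a hedged toy sketch, not anything from the paper or the show, of training by reward alone. A random guess is mutated, and the only feedback is a scalar score telling it whether the change was good or bad (the target string and all the numbers are made up for illustration):

```python
import random

def fitness(guess, target):
    # Higher is better: the "reward" is just the count of matching characters.
    return sum(a == b for a, b in zip(guess, target))

def train(target, steps=20000, seed=0):
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    guess = [rng.choice(alphabet) for _ in target]
    for _ in range(steps):
        candidate = list(guess)
        # Mutate one random position to a random character.
        candidate[rng.randrange(len(target))] = rng.choice(alphabet)
        # Keep the mutation only if the reward does not decrease:
        # "this is good" accepted, "this is bad" punished by rejection.
        if fitness(candidate, target) >= fitness(guess, target):
            guess = candidate
    return "".join(guess)

print(train("machines can think"))
```

The learner never sees the target directly, only the reward, which is the whole point being made about punishments and rewards in the teaching process.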
Yeah, I have written in the margin: either total nonsense or remarkably astute.
I mean, he compares it to evolution and, you know, this like genetic algorithm type stuff,
which we do actually use for things.
I will say just again on Billy Mays, he's got a great little comment on this one.
So he's talking about this like punishment and rewards for the child machine.
He says, the use of such emotively toned words, which also seem to express value
judgments, makes one think immediately of someone precariously balancing a calculating
machine on his knee and chastising it.
Oh, wowzers. 1952.
Yes.
I thought that was that was funny.
And then also there's one other sentence in the next page, the next page starts with,
it's probably wise to include a random element in a learning machine, right?
And this is also what is in ChatGPT-like algorithms.
Yeah, temperature.
They have some randomness, temperature. Yeah.
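The "temperature" mentioned here is a real sampling parameter in ChatGPT-like decoding. As a sketch with made-up numbers (the logits below are invented for illustration, not from any real model), dividing the model's scores by a temperature before softmax sampling controls how random the next-token choice is:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    # Divide each logit by the temperature: low values sharpen the
    # distribution, high values flatten it toward uniform randomness.
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    # Draw one token index in proportion to its softmax weight.
    return rng.choices(range(len(weights)), weights=weights, k=1)[0]

rng = random.Random(42)
logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
cold = [sample_with_temperature(logits, 0.1, rng) for _ in range(100)]
hot = [sample_with_temperature(logits, 10.0, rng) for _ in range(100)]
print(cold.count(0), hot.count(0))
```

At temperature near zero the highest-scoring token wins almost every time; at high temperature the choice approaches a coin flip, which is the "random element" being discussed.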
They have some randomness, because otherwise it would be too mechanical. So there are a few nuggets
in this section, which is like, wow, he did somehow see some mechanisms that would actually be capable of
making such a machine work, even if the rest of it is, like, gobbledygook. He honestly should have just written this paper to say:
I'm going to assume that machines can think at some point.
Here's the technological justification for why I think computers are going to grow
and be complicated enough. Here's some ideas for how we might start making them.
If he just stuck to the technical stuff
Yeah, rather than trying to dip into philosophy. That would have been fine. It would have been a fine paper
It's just once he starts trying to comment on all of this philosophical stuff
He's clearly out of his depths or didn't put in enough time
Yeah, and has no interest also to engage with literature or to talk to people
who know more about this. So here's another episode that we could do. There's this scientist
that I really love. He's called Nathan Ensmenger. He's like a historian of computer science.
Have you heard of him? I have a science crush on him. So here's one sentence. We may hope
that machines will eventually compete with men,
not with women of course, with men, in all purely intellectual fields. But what are the best ones to
start with? Even this is a difficult decision. Many people think that a very abstract activity
like the playing of chess would be best. So Nathan Ensmenger has a paper called
Is Chess the Drosophila of Artificial Intelligence?
And this drosophila is like a fruit fly.
So he describes how chess became the one thing
that AI wanted to solve.
And it's such a great paper because it describes
many of the things that Turing is not engaging with:
oh, but what does AI mean? And why chess? At least before, you know, Deep Blue in the 90s,
why is chess a goal, right? Why is that even a worthy goal?
Ensmenger is so cool. I also cite the hell out of him in the feminism paper. So that's maybe
what people might also know him from, if they're familiar with my work.
I do have a little dog. It's an hour past when she's supposed to get food. She's been
very patient.
Yeah, it's also like two hours past my bedtime.
Yeah. Yeah. You want to do closing thoughts?
I sort of think I did my closing thoughts with Ensmenger.
Okay, awesome. Yeah, okay.
I think this paper is worth reading
not because I think it's a good paper, but because it really shows you
what was said, and kind of how we
ignore what Turing said here, and we've kind of made this popular version now.
I think I will say if you actually want to engage with this
I think there's lots of interesting literature out there. Ned Block has a bunch of really interesting stuff. John Searle
has a bunch of interesting stuff. I mean Chalmers, there's like so much in like the philosophy of
mind. Dreyfus and Dreyfus. Yeah Dreyfus, absolutely great. Like there's so much that's interesting in
this in this area and it's sad that that this kind of meme of the Turing Test
is the thing that a lot of people know.
It's just not as interesting of a question.
I do think it's still worth reading,
especially if you think we're being harsh on this.
We basically read this paper to you.
This was a pretty darn close reading.
There are no good parts we're hiding from you,
other than those parts where it explains what a computer is.
And that was fine.
And we admit that's fine.
So this is one of the reasons why I like doing these papers, why I like the format of reading papers:
we don't have a lot of historical knowledge as a field,
but we also have a lot of hero worship of
those people in the past that we think were just somehow these great
geniuses who said all of the stuff. And,
clearly, this paper is awful.
Engelbart's paper had a bunch of weird, awkward things in it that weren't great; the arguments weren't good. A lot of these papers are like that.
To me, maybe in some ways,
it's this Alan Kay-esque project of, you have
to know the past to invent the future.
But also to me, it's like you have to deconstruct the past and undermine the authority of those
past figures if you really want to construct the future.
And that's what I enjoy about reading these texts that like, maybe they didn't get it
all right.
Maybe we do need more, like the thing that I do like is Turing wasn't a philosopher,
he was trying to comment on this topic, he did a bad job. Maybe we do need more people in computer
science to engage with these topics. Maybe we don't, maybe they'll all be as bad as Turing.
I don't know. I just hope we can, I like this topic. I wish it was good. I would love to do
more papers on this kind of stuff, but I also know
Ivan can only take so much. Yeah, but so yes, everything you said, but it's also interesting
because these papers take up space where other people could have also been, right? So there's so many people that aren't programmers, that aren't computer scientists,
that have done a lot of thinking about what computers are, sometimes explicitly or sometimes
implicitly. Like recently I read a wicked book, I want to do an episode on that one
too, it's called Computers as Theatre by Brenda Laurel and she has a bachelor and master's
degree I think in like theatre and she has so many weird ideas about programming but weird in a good
way coming from the outside. So it's really a question where I'm like should we read more
half-assed attempts contemporary or historical of computer science people that do not engage
with literature, or should we look at what other people in our field are doing and we
can make the mental switch probably of applying those theories to our own field.
I'm writing another piece, and I'm not sure if I'm going to keep this sentence in, but
in the piece I'm writing the sentence: we should save computer science from the hands of computer scientists. And
I do think I'm going to keep it in because after having read this, right? I know it's
a long time ago, but this is still the shit we live in where computer people somehow think
that they are so very smart because they can understand the magic of computers and therefore
they can just without any other background or clearly without any interest in other fields,
just shout random things. And I'm like, yeah, you know, yeah, this is where we are now.