Embedded - 485: Conversation Is a Kind of Music
Episode Date: September 20, 2024
Alan Blackwell spoke with us about the lurking dangers of large language models, the magical nature of artificial intelligence, and the future of interacting with computers. Alan is the author of Moral Codes: Designing Alternatives to AI, which you can read in its pre-book form here: https://moralcodes.pubpub.org/ Alan’s day job is as a Professor of Interdisciplinary Design in the Cambridge University Department of Computer Science and Technology. See his research interests on his Cambridge University page. (Also, given as homework in the newsletter, we didn’t directly discuss Jo Walton’s 'A Brief Backward History of Automated Eloquence', a playful history of automated text generation, written from a perspective in the year 2070.)
Transcript
Welcome to Embedded.
I am Elecia White, alongside Christopher White.
Our guest this week is Professor Alan Blackwell.
And we're going to talk about AI and programming languages,
but probably not in the way you think we're going to talk about them.
Hi, Alan. Welcome.
Hi, it's nice to be with you.
Could you tell us about yourself as if we met at the user experience conference that
Marian Petre just keynoted?
Yeah, so I've come into this field as an engineer, and a lot of the jobs that I've done over my career have involved designing new programming languages.
So I guess the first one of those was nearly 40 years ago.
The systems that I was deploying, which in my early career were generally industrial automation or laboratory automation systems,
I always found it super interesting to talk to the people who were using the systems.
And then quite often I thought I could really help them with a job if rather than just giving them a static user interface, I gave them some kind of simple scripting or configuration or
programming language. So long ago, I used to talk to people about what's your job and how can I make
it easier for you by helping you to instruct a computer to do pieces for you?
So, yeah, that's been a long-term interest, but I'm kind of intellectually curious.
I did a degree in philosophy and comparative religion, and then I heard about this thing called artificial intelligence,
which is a way of combining an interest in people, an interest in philosophy, and an interest in engineering.
So that was a little while ago, and some people are surprised that
I started my research career in artificial intelligence in 1985, which is before many
of today's leading experts were even born. So basically, I got into artificial intelligence
because of an interest in programming languages. Over the years, I worked for corporate research
labs and big companies, and actually deploying new AI tools. And often they included
novel programming languages. So I went from programming languages into AI and then out of AI,
I became a programming language designer again. Okay. And a professor.
Yeah, that was a byproduct, to be honest. I really didn't think I wanted to do that.
And my dad, who was a proper engineer, was really disappointed. Like,
I can't believe you're going back to school again. You're a good engineer.
Yeah, but things are pretty sweet here in Cambridge because the university has got a
very liberal attitude to intellectual property. And it's super easy to start companies here and
do consulting for big companies like Intel or Microsoft or Google or whatever. So after I
actually ended up late in my life doing a PhD in applied psychology
in order to understand the cognitive ergonomics of programming language design,
I thought I would go back to a commercial lab again to design more programming languages.
But I discovered that by becoming a professor,
I could work for far more companies and design far more languages.
So I've really been enjoying doing that for the past 20 years.
All right.
We want to do a lightning round where we ask you short questions
and if we're behaving ourselves, we won't ask for a lot more detail.
Weirdly, I ended up with a lot of these for you,
so we'll try to go kind of fast, but don't feel any pressure.
Okay.
Would you rather have dinner with Bootsy Collins, Geddy Lee, or Carol Kaye?
Oh, tough, but definitely Bootsy Collins, because I'm a
bass player.
Sometimes we
ask when the singularity will occur,
but for you, I think, what year
do you think we'll stop using
a floppy diskette to indicate
saving things? Well,
you know, so many words in our language, we can't
even remember where we got that word from.
So I think a floppy disk has already become just a kind of emoji.
Like, you know, I would be very hesitant to ever put an aubergine in my text message because I sort of know that what it means is not what I think it means.
So I think the same with a floppy disk.
People don't really care what it looks like.
They just need to know that it's a symbol and it means something.
Would you rather spend an hour with a well-trained parrot or a stochastic parrot? I don't like parrots very much. Stochastic parrots are kind of fun.
Yeah, but they don't bite you. They don't poo all over the place. So yeah, probably not yet anyway.
Heaven forbid. Yeah, no, I'll go with the stochastic one.
Would you rather have dinner with Alan Turing or Claude Shannon?
Oh, cool. Wow, I would love that so much.
Which one?
Yeah, so I guess the thing is that Alan Turing was a student and professor at Cambridge. I know a lot of the sort of people that he hung out with.
You know, I know I have a friend who played chess with him when they were kids.
So although I would love to meet him, I sort of feel like I probably pretty well understand what
sort of person he was. Claude Shannon, like, oh, absolutely incredible. Yeah, I think I would,
yep, 10 minutes with him would probably already double my knowledge of the real ground truth of information theory.
I would totally crash that dinner.
If you were teaching an adult to program, would you start with Scratch, Excel, or Python?
Or something else?
Yeah, so I have done this.
I used to run a class where I taught humanities professors what they could do with programming languages. And I liked Scratch. And the reason for that is that it's more interactive.
You can use multimedia immediately, so you're not stuck just in text and number world.
Yeah, so yeah, Excel definitely is good. And I use Excel a lot. I can definitely
help people do useful stuff in Excel. But for example, when I designed a programming language for artists, I spoke to a sculptor friend and said, what would you like to do with this? And she was like, I use a computer when I have to do my accounts using Excel, but why would I ever want to use a computer for my actual job? Because that's fun.
Have you ever held a weta?
Oh, worse than that. Yeah, I have had a weta. And the horror of a New Zealand child is to have a weta, which is a sort of giant cricket with big poisonous spines on its back legs. I have had a weta inside of my rubber boot when I put my bare foot into it. That is super nasty. But yes, I have also held a weta, because in my sixth form biology class at the end of high school, we didn't dissect mice and lizards, but we did dissect wetas.
I didn't know they were poisonous.
Actually, I'm exaggerating a little bit. If it gives you a nasty scratch,
it gets quite inflamed. Maybe it's just a little bit allergic
rather than poisonous. Yeah, we don't have poisonous animals in New Zealand. It's not
like those Australian spiders. Everything in Australia is poisonous. Even the mammals are
venomous. The first time I spoke at an AI conference in Australia, I stayed with friends
in Sydney and they picked me up from the airport and they said, just one thing, Alan. While you're staying in Sydney, do not put your finger into any holes in the ground.
Yeah, I wasn't anticipating that I would do that, but now that you mention it, I will definitely...
They really mean it, though. Exactly.
What is your favorite fictional robot?
I'm sort of interested in the really extreme ones that transcend the boundaries of what we might think it means to be human. I guess I liked Anne McCaffrey's The Ship Who Sang. Like, what would it be like to have a spaceship as your body? Yeah, they had lots of adventures.
And Ann Leckie wrote a book with that.
Yeah, exactly. Ann Leckie, for sure. And in fact, I think her trilogy starts with one of those minds that has been ejected out of a ship and then is unconscious on a desert planet or something.
Yeah, that is a great book.
I really enjoyed that. It relates to a lot of research that I'm doing at the moment, just asking, now that we've got these things
which produce language but are not really intelligent
in the way that we used to think of, like being human,
what does the body really give us?
And that asks us a lot about what part of our intelligence
is actually only there because of our body.
I think there's a lot of fiction that is relevant to that.
I think even Bradbury kind of asked the question of
why would you make an android, why would you make a robot look like a human?
There's so little that we can do. That's for us, not for the android.
A robot bartender, I think, was one of the places that came up. Why would you only give it two arms?
Yep, absolutely. And I think that this is, of course, one of the things that I riff on a little in the book, where I claim that AI is a branch of literature, not a branch of science, because
so much of AI is about just imagining what it would be like to be a human if a human was a
different kind of thing. And that puts us in that tradition of all those stories going back centuries
and even millennia, to what would it be like to have a statue that comes to life and behaves like
a person? Or what would it be like if you made something out of clay and then you put a scroll
inside it and it came alive? You know, for me, science fiction, you know, all of those things
are science fiction. And what they're about is imagining different ways of being human.
One of the things that you didn't cover in your book, which I don't even think we've said the
name of. No, we were still in lightning round. So we had to transition out of lightning round.
Oh, okay.
We kind of did.
So your book is, wow, this is something I really should know.
Moral Codes.
Could you talk about your book for, I don't know, 30 seconds to two minutes?
Sorry, that question didn't come out well, but let's just go with it.
So the title of the book is Moral Codes, Designing Alternatives to AI.
So the designing alternatives part, I think, is what's going to be most welcome to embedded
listeners, because what the book is all about is saying
the world needs less AI and better programming languages. So the whole book is really arguing
for that. And I talk a little about AI and I talk about what it can do and what it can't,
just addressing this kind of public anxiety. But then I get back to the theme that I'm really
interested in, which is how can you make the world a better place if you have got better programming languages? And it turns out that that can deliver a lot of the
things that AI people think are their biggest problems. So a shorthand way of saying this to
people who know a little bit about computers, but also have some training in philosophy,
would be to say, just imagine that you wanted to tell a computer what to do in a way that you could be confident
that it would do the thing that you've asked it to. And also that if it behaved differently,
you could say, why did you behave the way you did? So those are the things that have got technical
terminology in the philosophy of AI. The first one they call the alignment problem. The second
one they call the explainability problem.
But then I say to the philosophers, just imagine if we did have a way of telling a computer exactly what we wanted it to do, and then asking it why it did those things. If we designed a special language
that would allow us to achieve those things, what kind of language would that be? And the
philosophers go, oh, wow, yes, that's really profound. Yeah, definitely.
I can see where you're coming from here.
That's a really interesting philosophical question.
And they'll say, well, guess what?
We have a language like that.
It's called a programming language.
Yeah, sort of.
We have been designing programming languages for 50, 60, 70 years.
And for that long, there have been computer scientists who have been improving
them to make sure that the computer does a better job of doing what you wanted it to do. And also,
so that you have a better job of being able to read it and understand why it does those things.
That's like the fundamentals of programming language research. But that is a different
branch of computer science to AI.
Just to level set here, how would you define AI? Because it's gotten a little muddled,
at least in recent decades.
Something that's coming in 10 years.
No, that's nuclear fusion.
Yeah, definitely AI has been coming in 10 years, as long as I've been in the field.
So I think this is a book that is written for the
general public because I think it's important that people who are not computer scientists have a
better idea of what sort of things you can do with AI and what sort of things would be better to do
with programming languages. But when talking to the general public, of course they don't know the
difference between what those two things are. They don't know which kind of algorithms are compilers
and which kind of algorithms are large language models.
But what they do know is that software is changing the world a lot.
So to some extent, everything that's happening today,
they think of as being AI.
And that's definitely true of the recently completed European AI Act
because when you look at the way that software
systems are described, it's not really about AI at all. It's just about like, here's what software
ought to do. And I've even spoken to people who work in Brussels as members of the policy teams
that were contributing to drafting that legislation. And I said, would it be right to say
that you could just replace the word AI with the word software throughout this legislation and it would really make no practical difference? And what I was told by
the Brussels Policy Researcher is, yeah, absolutely. That is definitely what we're
wanting to achieve here. We're wanting to provide a better way of governing software
and we just use the word AI because there's so much kind of hype about that. And that tells
people kind of that we're working at the leading edge.
And that has been...
No!
Yeah.
Wow.
Don't worry.
They're lawmakers.
They do this sort of thing for a living.
So I can tell you from my perspective, you know, I said I've been working in AI since
1985.
The algorithms that I was using back then, like nobody calls those things AI today, like
A-star search or functional programming.
You know, those were the sort of day-to-day tools.
Nowadays, those are just useful algorithms, useful languages.
We don't call them AI anymore.
And in fact, I think that's been a pattern throughout my career is that stuff that is
AI one year, five years later is just working software.
In fact, there used to be an old joke from my master's supervisor
who had spent years at the MIT AI Lab.
He used to say, if it works, it isn't AI.
Yes, exactly. Yes.
When we were coming up, expert systems were the only,
quote, real AI that was out there.
And they were just glorified databases and lexical parser kind of things, right?
I did my research on that.
I know you did your research, and it was very cool.
And nobody talks about it anymore.
Yeah, I've built some cool expert systems.
Yeah.
Absolutely.
And, of course, in 10 years' time, we'll look back at what ChatGPT does, and we'll say, oh, yeah, that's just a, what, you know, the phrase LLM is going to be right in there.
Oh, that's just a transformer or, you know,
we got better things than transformers
or we'll understand more generally why cross-modal AI
seems to be a thing of interest.
But yeah, a lot of what I'm interested in here
is separating the value of the algorithms, which are great.
You know, I love a cool new algorithm.
And then what are you going
to be able to do with that algorithm? And I would say throughout the long history of software
engineering, the really super interesting things you can do with an algorithm are seldom apparent
right at the start. So usually people give demos that are based on things they read in science
fiction books or maybe just what they wrote in a proposal or told their manager. But if it's a good
one, over the next five or ten years,
you think, oh, wow, actually, there's something super cool you can do with this,
even though I didn't know that when I first made it.
You separate the concept of AI into control systems and social imagination.
Yeah. The reason for this is that there's two causes of confusion between what I call the two kinds of AI.
Somewhere else I called one kind the objective kind of AI and the other the subjective kind of AI.
Okay.
So they get mixed together for two reasons.
One is that they have both made good use of recent advances in machine learning algorithms,
so especially deep neural networks or maybe Bayesian machine learning more generally.
And the other reason they get confused is that it's in the interest of some companies
to not have you think too hard about the differences between them
because one kind is generally pretty useful.
What is now called reinforcement learning
by trendy young researchers
is what, when I did my undergraduate major in it,
was called control theory,
or closed-loop control,
or previously cybernetics.
But basically, you need good learning algorithms
if you want to have a system
that observes the world somehow
and then changes things.
So of course, that is the foundation of robotics
and all industrial robotics and all
industrial automation and all kinds of super useful stuff that I've got in my own house.
So that's objective because it's just observing the world, making measurements and doing things.
So that's good engineering stuff. There's the other kind of AI, which is the science fiction
literary stuff of what would it be like to be a different kind of human? Can we make a computer that pretends to be a human?
And I consider that to be a branch of literature because it's about reimagining what we are.
So companies that want to say reimagining what it is to be human is a useful thing to do quite often do that by blurring the boundaries between stuff that humans do in their own minds and with each other, and stuff that robots do when they're usefully pottering along the roads, not driving over the curbs and things. So when it comes to autonomous vehicles, for example,
that's a real challenge
because some of the things we do in cars are human things.
Some of the things that cars do are mechanical control systems.
So you can do some parts of autonomy very well
and other parts really badly.
Personally, I prefer the word cruise control
because that was sort of an understood boundary
of there's some decisions I want to make,
there's some decisions I want my vehicle to make.
And as long as we don't confuse the subjective and objective parts,
I'm very happy for my cruise control to do automatic stuff
that I don't want to be attending to all the time.
But there's other things I really don't want my car to do for me.
And that confusion exhibits itself in our industry.
We see it quite often where somebody's like,
well, I have bought this new
chip. It's got an NPU on it. I would like to make a model that, you know, controls the temperature
of this, you know, to make up an example, but we've seen similar things that controls the
temperature of this thermistor or something like that. And it's like, well, we have a thing called
the PID loop that can do that in like 15 lines of code. Why are you using a neural network to do that? It's like, oh, because that's,
and I think certain companies doing autonomous vehicles,
which will remain nameless,
have been pushing to put the entire self-driving stack
into a neural network and take all the,
have it manage all the control systems,
which strikes me as completely insane
because you have well-established
ways of controlling motors and actuators and sensors. The neural network part should be deeper,
smaller, a smaller kernel that's making, quote, you know, fusion decisions about the world and
stuff like that, if at all. So I think the confusion is both, it seems like it's both
in the public, but it's also at the engineer level where people are like, oh, I'll just throw a model at this problem when there's well-established books full of...
Actual solutions with transparency and explainability?
Right, working control systems, yeah.
Yeah.
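For readers who haven't written one, here is a minimal sketch of the kind of PID loop being contrasted with a neural network above. The gains, sample time, and the read_temperature / set_heater_power calls are made-up placeholders, not anything described in the episode.

```python
# Minimal discrete PID controller: roughly the "15 lines of code" alternative
# to throwing a neural network at a temperature-control problem.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical use: hold 70 degrees, sampling every 100 ms.
# controller = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
# while True:
#     power = controller.update(70.0, read_temperature())  # placeholder hardware I/O
#     set_heater_power(power)
```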
Part of this is about engineers, and I'll hold up my hand and say I'm guilty of this too.
We loved reading science fiction when we were kids.
The three of us, we were all just geeking out about science fiction books before.
Sure, engineers love reading science fiction, partly because it describes worlds where engineers get to be the boss and rule the world. But quite often, you're working on a relatively
mundane piece of engineering, and you see something that looks like a thing that you
read in a science fiction book. It's like, oh, this would be so cool to do this. I'm actually
having fun in my job. So I think that's one of the challenges that we need to face,
is the temptation to use inappropriate solutions when something simpler
would work better. And I think, you know, right back in my early career, something that the old
gray-haired engineers would always say, you know, there's a simple way of solving this problem,
why don't you do that? Because it's not any fun. I don't know. I feel like some of these
pushed-upon neural network AI solutions to simple problems.
It's just marketing.
Oh, there's a lot of that happening because people have chips to do it
and they want to sell them.
So engineers are receptive to marketing.
There's a super pernicious kind of marketing that is going on right now,
and that's the use of the term artificial general intelligence.
And I really come down hard on this in the book
because the word
general here is meant to imply, oh, no limitations at all. This can do anything. AGI is basically a
synonym for magic. So I claimed in the book that whenever somebody says to you AGI, what they're
actually saying is magic. And when they say this problem pretty soon will be solvable because we'll have AGI,
what you should hear is this problem pretty soon will be solvable because we'll have magic.
I sort of thought that nobody would ever be dumb enough to just outright say,
by the way, I've invented magic.
The interview between British Prime Minister Rishi Sunak and Elon Musk last year,
when Elon literally said the thing about these AI
technologies is that they're actually magic, so they can do anything. I'm like, oh my God.
Yeah, I had a little doubt about Elon's engineering credentials, but
we get skeptical when people are making direct appeals to magic. And I try to explain in the
book for a general audience that there are some basic problems of reasoning in the arguments for AGI. Saying, I've invented something that's beyond definition and has got full generality, such that it isn't constrained by what I told it to do, I try to argue, is just a direct equivalence, in terms of argumentation structure, to saying, now that we've got magic, we don't need to be engineers anymore. So I think a little bit of
that maybe is underpinning the foolish students. And I've seen this myself. As you say, they need
a PID controller. They implement it with a deep neural network. And then they show you that it's
only slightly less stable than the 5-cent chip they could have bought.
It only requires 20 watts.
Exactly, yeah, and a GPU.
One of the things that you definitely take a big bat to, like a piñata, is LLMs and whether or not they represent a form of AGI. And you used the word
pastiche, and I really liked that as an explanation. Could you talk a little bit about that?
Sure. Pastiche is a great word, which I learned more about writing the book because I talked to
an Italian colleague who was able to
explain to me where the word comes from. It's a cooking term from Italian. The Italian word is
pasticcio. And pasticcio means a meal where you just sort of mix up your favorite ingredients
in a way that doesn't really have a kind of a grand scheme. It's just you put a bunch of stuff
in there that you really like. And I've tried this out a number of times on Italian people now. And I say, what's the example
of, you know, the first thing that comes to mind when I say pasticcio is, oh yeah, like a lasagna.
Lasagna is a pasticcio. It's not a sort of grand cuisine thing. It's just a whole bunch of stuff
that's nice. It's got pasta, it's got meat, it's got cheese. Like, yeah, why would I not like all
these things?
So stirring stuff together just because the ingredients are things that you kind of like,
but with no overall grand scheme is a term that's been used in art history for a very long time.
Back way before we had any kind of printing technologies or ways of reproducing art,
people in places like Pompeii, if they're decorating the walls of the house,
they didn't have wallpaper,
but they liked to have nice pictures. So they would get some guy to come around and just put a bunch of pictures on the walls of their house of people feasting, or if it's a
brothel, people having sex or whatever. And the guy that does this kind of work, he's just like
your house painter. This is not a great artist who's going to be celebrated in history. He probably
has been down to see the artists that the Pope pays, who are the really good ones, and he sort of has an idea of all
those things, then he just puts some version of them on your walls. So that's where the term
pastiche comes from in art history, is just sort of a jobbing piece of work where you kind of mix
up nice ingredients that you've seen somewhere else, but you don't really claim to make it
original. If you do a degree in art college or a music college or something,
this is definitely what you're warned against.
So nowadays when you can just have a direct reproduction of the Mona Lisa if you want,
you don't need someone to come and paint on your wall some poor imitation of the Mona Lisa.
Nowadays we want every artist to do something that's original.
So pastiche is a term that is used by art tutors
to tell their students, don't do that.
Don't just imitate other people's stuff.
You should have an original idea.
And the definition, if you look back at the kind of textbooks
on art theory that are trying to define exactly what it is
that you don't want artists to do,
there's this nice definition actually from a couple of centuries ago of saying the thing about a pastiche is that it's not a copy, but it's not original either. So I think that really gets to the nub of what it is that we find a little unsettling about what these models produce. It's not a copy, because it's not exactly like any other piece of text.
But every time you read it, you think, oh, you know, this is really not very impressive.
Everything here is just stuff I could have found on the Internet.
So I think that kind of makes clear what the limitations are.
And in the book, I talk a little bit about the fact that this is one of the problems of morality that we're dealing with here. Because a lot of these engineering decisions, you know, it's not that nobody's life is affected by this.
People's lives are affected.
So, of course, one of the things that we're really concerned about with LLMs is that there are people out there who are doing actual original artwork,
but they're not being paid for it anymore because their original work got stirred into ChatGPT or something.
And if you ask ChatGPT, please, can you write a novel kind of like this?
Then as long as you don't use the novelist's name, which triggers the guardrails and says, oh, no, no, no, no, I'm not going to infringe anyone's copyright.
But as long as you just sort of say, this is the kind of novel that I would like without saying who it is that you're ripping off, then it'll very happily give you a whole bunch of stuff just like that person's work. And so that's why the Screenwriters
Guild and so on are really upset that maybe studios will just kind of plagiarize their
stuff at one remove by asking for things and not paying for them. So there's a whole sort
of economics there of where the training data is coming from, why the architecture has been designed in a way that includes no traceability at all.
And I even sort of argue maybe it has no traceability
because that was just so convenient for the companies that invest in this.
Even if there had been a technical solution
of how you could make traceability from the training data to the output,
even if that solution was available,
it's not in a company's interest to pay for
it or even to ask the question.
I wouldn't be at all surprised to learn that some researcher inside of Google or inside
of OpenAI did discover a way of pretty effective traceability to original source documents
and was told, we never want you to ever mention that again or do any work on it, because why would they want that? That's called evidence. Yeah,
it's like neural networks are sort of like a plagiarism machine. They're custom designed
to rip off intellectual property without traceability of the original sources.
And the guardrails are a lot like Star Wars guardrails. Yeah. They don't really exist.
I think that one of the things that bothers me, and you kind of touched on it with the larger issue of taking original work, which has been trained on, and then reproducing it,
is the flip side of a lot of working musicians and artists are not producing the Mona Lisa and getting $10 million or whatever. They're making ad jingles for $50 or $200 a couple times a week
or making magazine covers or small pieces of artwork
that they're doing on commission.
And that's the sort of thing that LLMs are getting kind of okay at,
making just kind of a junky image
or a not very great song that's appropriate for an ad.
And so the vast majority of working musicians and artists are, you know, working class people doing small work.
That's kind of being replaced or the intent is from some companies to replace that kind of work.
And I think that's going to be a major disadvantage for just society because those artists are paying the bills with scut work to work on actual passion pieces and things like that that probably won't be funded.
Totally true.
And in the book, I really make a point that so much of what we call AI is in fact not a technology.
It's a set of business models,
and it's a consequence of certain economic policy decisions.
And one of the sources that I draw on there is a great book
by Rebecca Giblin and Corey Doctorow, Chokepoint Capitalism,
where they talk about the way that the digital realm
is configuring so many different forms of media
so that creatives can only sell their
products to one company, whether it's Spotify or whether it's Ticketmaster or whether it's
Audible or whether it's YouTube. And once you get to that situation, that company can just drive
the price down to what they pay and they can drive it down to practically zero because creatives are
going to create. Even if you don't pay them, they will create.
And once you've got a monopsony, which is the opposite of a monopoly, where you don't have one seller but one buyer, it drives prices down instead of driving prices up. And that has the effect that there's a serious danger that all creative professions will
have to turn into hobbies, unless you're Taylor Swift, that everybody else in the world
will maybe do the work for the love of it, or even worse than that, maybe they'll have to
pay to publish their stuff. Your royalties will be a thing
of the past. And I sort of stop for my possible academic
readers and say, you might think that sounds really crazy, but wait, this is actually exactly
how professors have to work already.
Like the most prestigious scientific journals, you literally have to pay to put your work
into those journals.
They don't pay you.
I liked the section where you talked about the effect of photography on painting.
Of course, it puts portrait painters out of work because anyone could get the figurative
images.
And then it went from you had to have specialized equipment to us all having cameras in our
pockets.
Some of them using AI, sorry.
That aside, but I mean, this reminds me like Socrates didn't think writing was a good idea
because it would externalize thoughts when you should have them internal.
The jury's out on that one.
Yeah, maybe.
When I'm being provocative, I think I tell people maybe with LLMs we've finally proven that Socrates was right after all.
Because we look at all these books and we think, wow, this is actually a lot of crap.
Talking to real people, that's the interesting part.
No, I think you're absolutely right that being modern humans, for absolute centuries, has been about reinterpreting ourselves in the light of new technical changes, whether it's the printing press or whether it's desktop publishing, which put a whole lot of typographers and graphic designers and illustrators out of work. Because a lot of everyday scut work, I think, as you said before,
I could just do that myself.
I didn't need to hire a typesetter to publish my church newsletter because I could just
do that on my desk.
So yeah, throughout my lifetime, there have always been whole professions where the people who were doing that job, it becomes mechanized.
And within a couple of years, they say, oh, what was my real job and what was I really wanting to do?
So I used to worry about this a lot in my early career as an industrial automation engineer.
So I would be going and putting automatic devices into a factory and I would be chatting to the guys who worked in the factory so that I could learn more about how to make the system more usable and more efficient. And I would get worried and say,
but I'm sort of worried, are you going to lose your job because I put this machine here?
And the response that I got more often than anything was to say, you know, this is a terrible
job. Nobody should have to do this. So I will
be super pleased. I think actually my employer respects me. So I think I'll probably get a job
somewhere else in the factory. But even if I don't, to be honest, I would rather be doing
something else. And I think we've seen that happening over hundreds of years is that
if people are exploited by their employers and they're driven into poverty, it's not the machine
that's doing that. It's the decisions the employer is making about the economics of the company,
or maybe it's decisions the government is making.
So I think adjusting to automation is super interesting,
and it's something that engineers can be really engaged with.
And the downsides of that, we need to be clear that that downside
is not a direct technical effect of a decision that engineers are making.
That's an effect of the way that a company made use of that technology.
Totally changing subjects.
Yeah.
What about LLMs and coding?
Yeah, exactly.
Not changing subjects.
No, I was going to go on another rant about art, so this is good.
We certainly need to get back to the fact that there are many, many people writing books about AI at the moment, but not so many who are saying that the alternative to AI is better programming languages.
So let's make the transition to that.
This is a book for the general public, and some of what I do is just to give them a little bit better understanding of what programmers really do from day to day.
Because I think that's helpful for everybody to understand more about where software comes from
and why it's difficult to make software.
And why we don't spend all of our time actually writing code.
As much as that's what we talk about and what it looks like,
that's actually usually a much smaller part of our job than people expect.
Indeed, yeah.
And, you know, LLMs are great assistance for code writing.
And I explained that, you know, it's not so much different to predictive text.
And, in fact, the programming tools I use have had great options for auto-complete and refactoring.
Sometimes the people selling them call that AI.
So programmers are very good at making their own job more efficient.
So we always take whatever is the latest advance
and use them to make better IDEs and programming editors and so on.
So that's nothing new.
And of course, transformer-based large language models,
they definitely help with allowing you to quickly generate syntax
from pseudocode and all that kind of stuff.
So that's great. What I take care about in the book, though, is to say that this is not the
same thing as the singularity coming into being because it has programmed itself to become kind
of intelligent beyond what we can believe, what we can imagine, because that's an argument for magic, really.
Yes, lots of programmers every day use Copilot and other LLM-based tools,
but definitely it's not writing software by itself.
So I try to strike a balance of acknowledging that these tools are super useful,
but also that because they're not magic,
they're not going to do some of the things that are claimed.
There's a couple of interesting boundary conditions though.
So one of those is really trivial things.
Make me a super simple computer game
or put some JavaScript to my web page
to make an animated bouncing ball following the mouse.
And it seems that you can do jobs like that pretty well with ChatGPT
because they're so small and self-contained
and you don't need to know much about the context
to make it work the way you said.
And practically speaking, of course,
we know that ChatGPT's training data
includes the whole of GitHub.
So you can be pretty confident that someone,
somewhere on GitHub has made the program you want
and you've just got this kind of plagiarism avoidance thing here
that it's remixed their original code just enough
that you can pretend that you generated that from scratch.
And if you're lucky, it's not using a patented algorithm.
Although I can tell you that the companies that sell these tools
are pretty nervous that it might be.
So that's one kind of, that's an edge case where
you can produce code straight out of the box and it's
a little bit more effective than just an even better
auto-completer, an even better refactoring tool. And then the other thing that they can do
super well is producing
plagiarized answers to student programming assignments.
So student programming assignments are the most context-free things you can imagine,
because the poor professor grading them does not want to have to stop and think.
Like, a student programming assignment has got to be some really dull, out-of-the-box
algorithm, because otherwise it's going to be too hard to mark.
And of course, students are just relentless.
They are always uploading their class exercises to GitHub and Stack Overflow and stuff.
So it was already the case that any student who was determined not to learn how to code
could find the answers online.
And guess what?
ChatGPT can do it just as well
because it's been trained with the same stuff.
So for me, that doesn't tell us anything
about whether AI will be able to program itself in the future,
but it tells us quite a lot
about the nature of learning programming.
A number of times I've used ChatGPT
to try to write a script.
Write me a Python script to do something incredibly boring
that I can't be bothered to do. I've had to spend a lot of time fixing it or correcting it, or figuring out what it is trying to do, because it's not what I told it to do. And it requires a level of skill that is pretty high to interpret what it's saying and correct it.
And writing it would just be easier.
Well, sometimes that's true, definitely. So I think the message that, oh, you can just use ChatGPT to program, I see that a lot. And people are like, oh, I'll just do this.
And it still requires a high level of skill to take that and turn it into something that's
actually correct. And you can miss things
too, right? Like even if you have a high level of skill, you know, bugs are hard to find sometimes,
especially if you haven't written the code. Undoubtedly, LLMs are pretty useful everyday
programming tools. The statistics suggest that a lot of working programmers are using them
all the time. And I think the co-pilot integration into Visual Studio is pretty good. And I've got friends who do empirical
studies of professional programmers, just finding out they are genuinely useful in lots of ways.
I don't cut a lot of code recently, but next time that I do a big project, I certainly expect that
I'll get another productivity gain. But I think there is one serious danger here for people that are not professors building prototypes like me,
but people who are actually producing safety-critical code or stuff that you really want to rely on.
And the way that I explain this is to say everybody knows what it's like to have a new programmer join your team
who writes just really lame code.
Like, you know, it doesn't compile.
It's formatted really badly.
The identifier names are stupid.
Me in 1995.
Exactly.
Exactly.
And we're used to having people like this on the team.
You sit down with them.
You do code reviews.
You beat it out of them.
And after 10 years, they're sort of moderately competent programmer.
And then sometimes, you know, sometimes they are super smart.
They just make a lot of dumb mistakes.
And that's okay, too, because you can fix the dumb mistakes.
So basically it's alright if it
looks bad, but it's really quite good.
It's kind of alright if it looks bad
and it is bad, because you can see it needs fixing.
The worst though, is you get
programmers sometimes that just, they produce
code that looks quite plausible,
and it's beautifully formatted
and stuff, but it's got some really
terrible underlying logical flaw.
Worst of all, if it's in some edge case that wasn't in the specification and the person never stopped to think about it properly.
Well, that is exactly how LLMs code.
It's the worst possible programmer in your team because you don't want code that looks completely plausible but actually has a subtle bug that you had never thought about.
So, yeah, in proper large-scale software engineering,
this is not the kind of code you want on your team.
We did an informal poll of our Slack group on who uses GitHub Copilot.
And I was kind of surprised at how few people loved it.
A lot of people had tried it, and some people used it intermittently or used it until it became irritating and then stopped using it.
There were several people who used it regularly, but they knew its limitations, too.
From my knowledge of the literature and empirical studies of software engineers, your informal poll definitely aligns with what researchers are finding as well.
That certainly is the case.
Well, we're pretty niche.
I mean, embedded isn't standard.
It's probably not well-trained into ChatGPT right now.
It's not as well-trained into ChatGPT.
Give me an STM32, whatever, how, yeah.
I mean, it's out there.
There is stuff in there, but it's worse.
Yeah, interesting.
I mean, yeah, a lot of my research colleagues in the compiler groups and so on,
there's really interesting intermediate languages.
The last assembler that I did a lot of work in was the 68000,
so we're talking more than 30 years ago.
I think people are certainly making virtual machine languages and intermediate languages that are designed as LLM targets.
So I think we'll get more interesting stuff.
What do you mean by that?
Oh, so what I mean is, at the moment, as you say, the kind of languages that embedded software engineers work with,
there is not a lot of examples of those in the training data for your mainstream commercial language models. But if you designed a very low-level language, so like maybe LLVM or something like this, and constructed its syntax in a way that you know is going to get efficient weight distributions in a typical transformer architecture.
At the moment, I don't think there's been a lot of customization.
Interesting.
Yeah, I think they've just been sort of relying on the fact
that it looks kind of like text.
But yeah, I'm sure they've done custom tokenizations,
but I don't think that you see PL semantics.
There used to be guys actually,
I did know people who worked on use of attention architectures
for programming language synthesis,
but that was in the days before BERT
and the huge swing of enthusiasm
towards very, very large natural language training.
So you're saying that we could design programming languages
that LLMs would consume better?
Totally.
Oh my God, okay.
I think so.
Yeah, probably.
I've got colleagues here in the computer science department
who are the real experts.
But if I was chatting to them
and just brainstorming what we might do.
Yeah, I guess to get a big enough training set,
yeah, LLVM, we've got good compilers to and from,
also definitely to C and to various assemblers and so on.
So that means we could actually synthesize quite a big training set
where we've got cross-modal data from specifications and comments.
So that would probably be, I imagine people are attempting this.
It seems like that would be a good approach.
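As a rough illustration of the "synthesize a training set" idea, a sketch like the following could pair C sources with the LLVM IR that clang emits for them, so a model sees two views of the same program. It assumes clang is installed and the corpus path is invented; this is one plausible way to build such cross-modal pairs, not a method described in the episode.

```python
# Pair each C file with its textual LLVM IR, producing cross-modal training examples.

import subprocess
from pathlib import Path

def source_ir_pair(c_file: Path) -> dict:
    """Compile one C file to LLVM IR (text form) and return a source/IR pair."""
    result = subprocess.run(
        ["clang", "-S", "-emit-llvm", "-O1", str(c_file), "-o", "-"],
        capture_output=True, text=True, check=True,
    )
    return {"source": c_file.read_text(), "llvm_ir": result.stdout}

# Hypothetical corpus directory of small C programs:
# pairs = [source_ir_pair(p) for p in Path("corpus").glob("*.c")]
```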
When I say ours is niche, it isn't necessarily about it being C.
It's more about trying to deal with
one of a thousand different microprocessors
interfacing to one of a hundred thousand
different peripherals
and doing it with or without DMA
at this certain speed and blah, blah, blah.
And I mean, that's what makes the job challenging
and interesting and hard and sometimes impossible.
And slow to develop.
Yeah.
So that used to be my job.
Assembly programming and dealing with all those nasty little issues.
Yeah, not only did I work at that level, but I used to design the CPU boards that
ran the code that I was installing in factories. So yeah, right down to the hardware is a thing I
used to do. But to be honest, my current job is a lot easier. So all respect to your listeners
on Embedded, because I totally understand that that is real work. One thing that I do like to do
when I'm talking to proper programming language designers and compiler builders and so on
is to have them think a little bit more about whether you can give some power of programming
to people who are not actually engineers and who didn't take computer science degrees or
engineering classes. End-user programming? Exactly. This is the field of end-user programming. So it's giving
people the ability to just write enough code to be able to automate the drudge work from their
own lives. And the absolute classic example of this is the spreadsheet, which is a programming
language. It's a very specialized domain-specific programming language. And the source code is kind
of weird because it's all hidden inside and you can only see it one line at a time. But despite all that, you know, it's super useful for people that would otherwise have to spend a lot of time sitting down with a calculator typing in numbers. So giving that kind of spreadsheet power, but for people who aren't accountants, and we're getting a lot of variety of things that are probably Turing complete in some kind of way, can definitely automate stuff that would otherwise be real drudgery.
A lot of these are the things that are called low-code, no-code languages, you know, whether it's wiring together data flow components or, you know, specifying simple transformation rules.
You can give people a lot of computational power without necessarily telling them that they're programming.
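As a toy illustration of the point that a spreadsheet is a very specialized programming language, here is a sketch in which each cell is a formula over other cells and recalculation follows the dependencies. The cell names and formulas are invented; the user of a real spreadsheet gets this kind of power without ever being told they are programming.

```python
# Each "cell" is a formula over other cells; looking a cell up recalculates it,
# much like a spreadsheet recomputing dependent cells.

class Sheet:
    def __init__(self, formulas):
        self.formulas = formulas

    def __getitem__(self, name):
        return self.formulas[name](self)   # recalculate on demand

sheet = Sheet({
    "A1": lambda s: 40.0,                  # hours worked
    "A2": lambda s: 25.0,                  # hourly rate
    "A3": lambda s: s["A1"] * s["A2"],     # =A1*A2, gross pay
    "A4": lambda s: s["A3"] * 0.8,         # =A3*0.8, pay after a flat 20% deduction
})

print(sheet["A4"])  # 800.0
```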
And that's why the title of the book is Moral Codes.
So codes is a reference not just to legal codes and things,
but also to the power of giving people the ability to program.
So codes can look like anything. They can even look like graphical user interfaces.
And I realized about halfway through writing the book that the title Moral Codes
also made a nice little acronym. So the acronym or backronym is more open representations,
accessible to learning with control over digital expression.
And that's kind of what we would all want our programming languages to be,
and also what we would like our UIs to be. We would like them to be representations that show
us what the system is doing, and we would like that to be open, and we'd like it to be learnable.
And quite often, we would like it to be expressive, because many times we want to be creative,
or we want to be exploratory, we want to express ourselves.
So quite a lot of the book actually talks about just experiences of being an engineer that are sort of things that are fundamental to being human.
Expressing yourself and having control over what goes on around you, not having to spend too much of your life in drudgery and repetitive stuff that could easily have been automated.
I liked that section of the book. It reminded me a lot of expert user interfaces versus novice user interfaces.
And expert user interfaces, you think about Photoshop with all of the buttons and windows and toolbars.
Shortcuts.
You can do everything you want.
You can do half of it from the key commands if you're willing to learn those.
But as a novice, walking up to Photoshop, it just makes me want to turn around and walk away.
Yeah, it's easy to confuse the needs of what kind of usability you need for someone who's
an expert that might be using the system every day, and what kind of usability you need for a person who's only going to interact with this thing once a year, like doing my tax return. So, you know, maybe when I was younger I could remember things that long, but I have to say, nowadays I come up to do my tax return and every year it's like, oh, this is the first time I've ever seen this. So I really want that to have the most basic tutorials and things.
I only do it once a year, so I don't mind. As long as things are clearly explained,
I don't mind if the buttons are very big with pictures because I don't want this to be any
more painful than I'd like it to be. But my accountant, well, actually, I don't make enough
money to have an accountant, but if I had an accountant, they would be really pissed off
if they had to use an accounting software that has got great big buttons and explanations of everything to do
because they use this thing every day.
So, you know, they want to have all those power facilities right at their fingertips.
So, yeah, there's a difference between a system like Photoshop
and a basic drawing application.
But I think we see gradual evolution in many parts of computer user interfaces
where things that at one time
were considered to be accessible only to programmers
turn out to be kind of useful for other people.
So the very first paint programs
didn't have any notion of putting layers of an image
over other layers.
So if you put another paint stroke,
it would destroy all the stuff that you had already done. Whereas a good user of Photoshop knows you get your base layer and maybe you put your
photograph there and then you put your masks over the top of that. And then if you've got some text,
you put that in another floating layer. And that sort of makes it easier to rearrange things. It
makes them easier to visualize what you've done. You can show and hide them. You get all sorts of
sophisticated capabilities from this pretty simple abstraction.
Like there's just one thing I want to add here
is the idea that there are layers to your image.
So I think that's an example of the kind of advance
that I look for in end-user programming systems
is some simple abstraction
that has turned out to be super useful
and even essential to professionals.
But it's not so hard
to learn, and if you just figured out a way to put it into the UI in a way that is intuitive and understandable, so the first time you use it you can see how it works, you don't need to go on a training course, and you say, oh, now that I see what I can do with this, okay, I can achieve a bunch more stuff. So I think a lot of media authoring tools do have those sort of abstractions built into them.
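To make the layers abstraction concrete, here is a bare-bones sketch of an ordered layer stack that can be shown, hidden, and composited bottom-to-top. The blend step is only a stand-in for real image math, and the layer names are invented.

```python
# An ordered stack of named layers, composited bottom-to-top; hiding a layer
# doesn't destroy it, which is the key advance over single-canvas paint programs.

class Layer:
    def __init__(self, name, content, visible=True):
        self.name, self.content, self.visible = name, content, visible

class LayerStack:
    def __init__(self):
        self.layers = []                  # index 0 is the base layer

    def add(self, layer):
        self.layers.append(layer)

    def toggle(self, name):
        for layer in self.layers:
            if layer.name == name:
                layer.visible = not layer.visible

    def composite(self):
        visible = [l.content for l in self.layers if l.visible]
        return " over ".join(reversed(visible))   # stand-in for alpha blending

stack = LayerStack()
stack.add(Layer("photo", "photograph"))
stack.add(Layer("mask", "vignette mask"))
stack.add(Layer("caption", "headline text"))
stack.toggle("mask")                      # hide the mask without destroying it
print(stack.composite())                  # headline text over photograph
```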
And to some extent, a lot of the stuff that you do in graphical user interfaces, there's always more potential to think a little harder about what those mean in terms of the semantics of the diagram, what the syntax is, just thinking back to the fact, oh yes, that's a programming language.
But what we really want to avoid is taking that away from us, because quite often the
companies we buy software from are not very interested in us controlling the experience
better.
What they would really like is that they have more opportunity to surveil us and sell our
data to their customers.
And my friend Jeff Cox said, basically, you've got a choice, program or be programmed.
And I think that is really where we are here.
That's right up there with, if you're not paying, you are the product.
Absolutely, absolutely.
Program or be programmed.
And yeah, asking whether our user interfaces are things that show us things about the system state
and allow us to modify it with direct instructions,
or whether the system state is hidden,
and we're sort of allowed to provide training data,
but where it's very difficult to give an unambiguous instruction
of you definitely want the system to behave differently.
In my classes where I teach user interface design,
I ask my students to think about just how good is a smart speaker,
like Amazon's Alexa or something.
Would you like to do your programming in future by just talking to Alexa?
Oh, that would be brilliant.
No.
Well, exactly.
You talk them through.
So this would mean that the source code is invisible,
and you have a one-shot. You just have to speak it right the first time,
and you can't go back and edit it. How good a programming language is that? It's like, oh,
yeah. And, you know, there are many, many people who suggest that the graphical user interface
will go away, that we won't have any representations, we won't have any diagrams,
we won't have any words. You know, we'll just speak to the artificial general intelligence.
And there's even a phrase for this.
They call it, there's this research community
who calls it natural user interfaces.
And supposedly this is going to be the successor
to the graphical user interface,
the natural interface where you just speak
and you never have to look at anything.
Well, that is classic program or be programmed.
That would be heaven for the surveillance capitalism companies
because they can sell you whatever they want
and you've got no remaining control at all.
Meanwhile, we've got typewriters attached to every computer still
and that doesn't seem to be in any danger of changing.
Nobody's beaten that in terms of input.
I had the privilege to work with David MacKay,
who was one of the geniuses of the 21st century, really.
Among his many incredible things was to create the first machine-learning-driven predictive text interface,
a thing called Dasher,
which is still available online in some places.
And he proposed it as an alternative to the keyboard.
It was super impressive, and it could predict text
words ahead at a time when we still had the T9 system
for spelling out your words on your feature phone handset.
So I worked on Dasher with him about
25 years ago, and it was pretty clear at that point
that language models would be able to produce pretty long streams of texts that looked just
like the kind of text that I would have wanted to write anyway. It turned out, though, that
although the Dasher interface was very information efficient, it was like playing
a fast video game because you could type text very, very fast as
long as you watch what was coming out of it and control the mouse to steer towards the sentences
you are wanting to write. As it turned out, it was far more effective to integrate that with the
keyboard. And actually, a researcher who worked with David and then with me and is now a professor
in Cambridge, his PhD dissertation was to invent the thing
that we now know of as the swipe keyboard.
So what he did was to integrate Bayesian language models
with something that looked like a keyboard,
which meant that if you didn't want to go too fast,
instead of drawing those fancy shapes,
which is pretty fast if you do it well,
you could just go back to pressing the keys one at a time.
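A minimal sketch of the Bayesian idea described here, not the actual Dasher or swipe-keyboard code: a language-model prior over candidate words is combined with a likelihood for the noisy touch positions, and the posterior decides what you probably meant. The vocabulary, probabilities, and key coordinates below are made-up toy values.

```python
# Toy Bayesian keyboard decoder: prior (language model) x likelihood (touch noise).
# All words, probabilities, and key positions are illustrative, not real data.
import math

# Toy unigram language model: prior probability of each candidate word.
PRIOR = {"hello": 0.05, "help": 0.02, "hell": 0.01, "jello": 0.001}

# Approximate key centres (x, y) for the letters used by the candidates.
KEYS = {"h": (5.5, 1), "e": (2, 0), "l": (8, 1), "o": (8, 0), "p": (9, 0), "j": (6, 1)}

def touch_likelihood(word, touches, sigma=0.6):
    """P(observed touch points | intended word), assuming Gaussian finger noise
    around each intended key centre."""
    if len(word) != len(touches):
        return 0.0
    logp = 0.0
    for ch, (tx, ty) in zip(word, touches):
        kx, ky = KEYS[ch]
        dist2 = (tx - kx) ** 2 + (ty - ky) ** 2
        logp += -dist2 / (2 * sigma ** 2)
    return math.exp(logp)

def rank_candidates(touches):
    """Posterior ranking: prior * likelihood, normalised over the candidates."""
    scores = {w: PRIOR[w] * touch_likelihood(w, touches) for w in PRIOR}
    total = sum(scores.values()) or 1.0
    return sorted(((s / total, w) for w, s in scores.items()), reverse=True)

# Sloppy taps roughly over h-e-l-l-o: the language model pulls the
# interpretation towards "hello" even though the touches are noisy.
taps = [(5.7, 1.2), (2.3, 0.1), (7.6, 0.9), (8.2, 1.1), (7.8, 0.2)]
for p, w in rank_candidates(taps):
    print(f"{w}: {p:.3f}")
```

The same machinery works whether the evidence comes from a drawn swipe shape or from one key press at a time; only the likelihood term changes.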
So I think that's, yeah, we've trained ourselves to be keyboard users,
not just QWERTY keyboards, but even, you know,
musicians get to use the piano keyboard with all the black and white keys,
which is pretty fast.
Once you know how to play it, you can get those chords out very quickly,
but it's also very constraining because if you wanted to play a note
between the keys, oh, sorry, you can't do that on a piano.
Yeah, so we've got some trade-offs there.
The keyboard is not fully optimal, but a lot of it has been kind of optimized into muscle
memory so that we may not see it disappearing very soon.
Certainly, it's pretty annoying to interact with people who can only type as fast as they speak.
Most practiced keyboard users can type interesting stuff quite a bit faster than they can speak.
I can type faster than I can think sometimes.
Just check my email.
Which is why waiting before sending is very important.
You mentioned predictive text and its helpfulness to you, even writing in your own style.
How much AI, whatever that is, did you use in writing the book? Did you experiment with any of that?
Yeah, I really play some games with my readers.
There are pieces where I ask them to guess how I wrote a particular sentence.
I wrote the whole of the manuscript of the book, actually,
originally before the launch of ChatGPT.
So I'm a bit relieved that I did quite a good job
of anticipating what was going to happen next.
Yeah.
So between the original delivery of the manuscript
and then delivering the final revisions the following summer,
ChatGPT was released in that gap,
and I had to go back to the book and say,
oh, how much of this do I need to change
now that everybody in the world knows what this stuff is?
Because previously I'd had to put a lot of effort into explaining
what a large language model is
and why it was going to be interesting in the future.
So, yeah, so ChatGPT wasn't there when I wrote the bulk of it,
but I reported some experiments that I did with GPT-2 and other earlier models.
I think something that I reflect on a bit is the role of craft and sort of embodied practice,
which I guess I've alluded to a little bit when we were discussing keyboards just now. And I think a lot of people who do a lot of coding, that stuff comes through
your fingers. And you don't necessarily, just like you said, Alicia, you don't necessarily think
about, it comes out before you've even thought about it. Definitely this happens when I play
my musical instrument. I've been playing in orchestras for 40 years, and I definitely cannot
describe to you the notes that I'm playing.
They go off the page, into my eyes, and into my fingers.
The brain gets bypassed entirely.
So I think that what was interesting to me as I was writing the book is those craft elements.
And I was definitely reflecting on the tools that I was using to write the book and how they related to what I was saying. So I used a couple of predictive text technologies routinely.
One is that I use a MacBook Pro with a little sort of smart display bar above the keyboard; in a lot of applications on the Macintosh, it'll come up with a choice of words, and occasionally I find it faster to grab one from the top of the keyboard rather than keep typing.
What I did far more of was that I wrote quite a lot of the final draft of the book in Google Docs.
And that was suggesting complete whole sentences. And then I had to say to myself, hmm, well, I could complete the sentence that way. Is that what I want to do?
I mean, these were experiences that were sort of novel experiences a year ago.
Nowadays, this is everybody's everyday life, isn't it?
So you guys, I'm sure, are thinking about this all the time.
But in a sense, it was what was already happening with our mobile phones because your predictive text keyboard is doing the notorious autocorrect and saying, yeah, well, you typed some stuff, but I think you really want this word.
No, I don't want that word.
So it's interesting being inside this probabilistic universe, where every engineer using these things knows exactly why it's making the predictions it does: it's because it's predicting the lowest entropy thing that you might do next, which is also precisely the least original thing that you might do next.
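A toy illustration of that point, not any vendor's real keyboard code: the suggestion engine ranks candidate next words by probability and offers the most probable one, which is by definition the lowest-surprisal, least original continuation. The distribution below is invented and truncated.

```python
# Made-up, truncated next-word distribution after some prefix like "I love you ...".
# The suggested word is simply the argmax: lowest surprisal, least original.
import math

next_word_probs = {
    "too": 0.40,
    "so": 0.25,
    "all": 0.15,
    "guys": 0.10,
    "stochastically": 0.001,
}

def surprisal_bits(p):
    # Information content in bits: rare (original) words carry high surprisal.
    return -math.log2(p)

suggestion = max(next_word_probs, key=next_word_probs.get)
print(f"suggested: {suggestion!r} ({surprisal_bits(next_word_probs[suggestion]):.2f} bits)")
for w, p in sorted(next_word_probs.items(), key=lambda kv: -kv[1]):
    print(f"  {w:16s} p={p:<6.3f} surprisal={surprisal_bits(p):5.2f} bits")
```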
It will never spell my name right.
Or if it does, it will someday spell my name Elysia, spelled like electricity, because that's the only way it's ever spelled correctly.
You are definitely on the wrong side of history here.
Yeah. Yeah, my daughter has got a name that is the most famous female Blackwell in the world, which I sort of thought was an honor to the great pioneering American doctor, Elizabeth Blackwell.
But it's not great for my daughter when people want to Google her name.
But it does mean that predictive text keyboards know exactly how to spell it. It's just a bit sad that she's decided she wants to be called Liz, because now I say, oh no, no, no, that's too much entropy for me. You're going to have to have the same name as the famous doctor.
Oh, one of the things you mentioned in your book is about self-efficacy, and Chris came across this article about how AI suggestions for radiology cause humans to perform worse when they are supposed to augment them.
Yeah, really worrying.
And this is definitely something
that I discuss in my graduate classes
on human-centered AI
because I think this is really dangerous
and we definitely need to think about designing for this when we're thinking about the system boundary for our design project, drawing the boundary that includes the users and the
organization as part of what the engineer needs to be concerned with, which is, of course, what
all good engineers have always done. So once you think about how this AI system is going to be used in practice, then what I'm interested in is the joint performance of the human expert who's working with the AI, and then studying what happens when you make different changes to the impedance of the channel they're communicating over.
A PhD student of mine did a fascinating piece of work exploring something that no one
had ever looked at before, which is the question of the timing between when you say something to
the computer and when it speaks back to you. So in conversation, this is like a really important
part of human conversation. Conversation is very musical and music researchers have shown the ways
that turn-taking between people who are having a nice conversation, they settle into a kind of rhythm.
You know, they finish each other's sentences, the way that you just did. You know, conversation is
a kind of music. So my student who was advised by the director of the Center for Music and Science
in this work, she said, I wonder if any of that is going to happen when you interact with computers. So she created a simple conversational
AI system in which a human expert is having to respond to judgments by an AI that may or may not
be right, just as in the radiology example that you've sent. And she just made subtle manipulations
to the speed with which the response would come back. So in some of the conditions, it would mimic what humans
do. So if you delayed a bit, it would delay a bit before it responded.
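A minimal sketch of the timing manipulation being described, assuming nothing about the actual experimental software: in the human-like condition the system mirrors the pause the user took before speaking, while the baseline condition answers as fast as it can.

```python
# Two response-timing conditions for a conversational system (illustrative only).
import time

def respond(answer, user_pause_s, condition="human-like"):
    """Deliver `answer`, optionally delaying to mimic the user's turn-taking rhythm."""
    if condition == "human-like":
        # Mirror the pause the user took before speaking (capped so it never stalls).
        time.sleep(min(user_pause_s, 2.0))
    # condition == "as-fast-as-possible": reply immediately, the usual UI goal.
    print(answer)

# Example: the user hesitated for 1.2 s, so the human-like condition waits ~1.2 s.
respond("I think the scan shows a fracture.", user_pause_s=1.2, condition="human-like")
```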
Which is very different to the usual approach to user interface engineering where the
goal is usually just make it respond as fast as possible. There's no
speed that is too fast. Just as fast as possible. But of course
in human conversation, that's not true. Like, you don't want me to respond as fast as possible. In fact, we call
that jumping down your throat if you respond too quickly in a conversation. So she made different
versions of her collaborative human expert, AI expert system, and just changed the conversation
speed. Well, she found something really disturbing,
which is that if the computer responded in a more human-like way so that you really felt like you were having a nice back and forth,
people were more likely to agree with incorrect judgments by the AI.
More trustworthy.
Exactly.
So as it became more human-like, they said, oh, yeah, that must be
right. And they accepted incorrect recommendations from the AI more often. Well, sure, because the
computer thought about it. Yeah, clever. You know, we've published that work and I don't think any
of the peer reviewers ever suggested your explanation, but yeah, now I'm going to have to mention this when I tell students.
You may be right about that.
Well, that's funny because it's like when I'm playing chess, and, like, if I've got the level turned way up and the computer is sitting there spinning, I know I'm in deep, deep trouble, right?
Right?
Because it's thinking hard.
It's like, oh, no, it's looking 600 moves ahead.
I'm doomed.
It's a similar thing, right? And I don't know this for sure, but I know a lot of people suspect that when you interact with the large language models, the response speed is slower than the rate at which the tokens are actually coming out of the server, and that they do that to make it feel more human. I don't know if that's true or not,
but certainly a lot of people believe it is. Feels true. It does.
Probably is.
Who knows?
But I think, coming back to the question, yeah, designing systems in a way that undermines human experts, or even worse, sort of subconsciously encourages them to make incorrect judgments by not recognizing the limits of the algorithm you're using.
I think that's super
dangerous. And I fear that it's going to be a big problem for us, including, of course, with people
who routinely just use ChatGPT and don't think too hard about what it's saying back to them.
Why are we trying to create human intelligences within AGI instead of something
more interesting like octopus intelligence?
Because we don't understand octopus intelligence.
Or human, but we really don't understand octopus.
But at least we know we don't understand it.
Yeah, well, that is definitely one of the problems with the phrase AGI
is that you're talking about some kind of intelligence
that supersedes every species on Earth
and transcends the notion of the body.
So yeah, we're already in deep philosophical water there.
So you could say in a sense that
engineers of algorithmic systems are designing new intelligences every day. We're just not
very grandiose about it. And AI researchers have always loved to talk about the thermostat,
because a thermostat, it acts on the world, it's got sensors, it's got internal states.
It meets all the definitions of classically what an AI is supposed to be.
It's just that once we get used to it, we prefer not to call it AI because we've moved on to more interesting ideas.
So yeah, octopus intelligence, not so sure that I need an octopus in my house, but thermostat intelligence, yeah, more useful.
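For what it's worth, here is the thermostat written as the textbook agent Alan describes: a sensor reading comes in, internal state (setpoint, hysteresis, heater command) gets updated, and an action goes back out to the world. Names and numbers are illustrative only, not from any real product.

```python
# A classic sense-decide-act agent: the humble thermostat.
class Thermostat:
    def __init__(self, setpoint_c=20.0, hysteresis_c=0.5):
        self.setpoint_c = setpoint_c      # internal state: desired temperature
        self.hysteresis_c = hysteresis_c  # internal state: dead band to avoid chatter
        self.heater_on = False            # internal state: current actuator command

    def step(self, measured_c):
        """One sense-decide-act cycle: read the sensor, update state, act."""
        if measured_c < self.setpoint_c - self.hysteresis_c:
            self.heater_on = True
        elif measured_c > self.setpoint_c + self.hysteresis_c:
            self.heater_on = False
        return self.heater_on

stat = Thermostat()
for temp in (18.0, 19.4, 20.6, 21.0, 19.8):
    print(f"{temp:4.1f} C -> heater {'ON' if stat.step(temp) else 'off'}")
```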
One of the things I did like about your book was discussing what intelligence is, intelligence in the way that is very familiar to us in the 21st century.
And he went back to projects that were trying to scientifically measure
what made some people better than other people.
It used to be called anthropometrics.
And through the 18th century, the 19th century,
it became increasingly popular because there were people feeling guilty about slavery. There
were people feeling guilty about colonialism and racism. Not the way to solve that. Well,
you know, it was the scientific age. And they sort of said, well, you know, it's obvious that
these people are inferior to us. It's just that we never tested that scientifically.
So, you know, we know about this because of the tragedies of Nazi Germany; this was the philosophy of Nazi Germany.
And, you know, it used to be the science of eugenics.
You know, there was a journal of eugenics.
It was a respectable scientific discipline.
And it was the discipline of improving the human race
by measuring which people are superior.
Using phrenology.
Well, I'm looking at a phrenology head right now
because I have one in my office
just to remind people how stupid this is.
But all of the stuff about intelligence testing
was part of that project.
Intelligence testing was fundamentally a racist project.
And Stephen, in his history of the use of the word, shows this super convincingly,
that the word intelligence as a thing that you could measure and was considered to be something scientific
rather than just a thing that philosophers talked about, was totally associated with eugenics,
with anthropometrics, and motivated by racism.
So I actually say, you know, if intelligence was only ever about racism,
then does that mean that artificial intelligence is artificial racism?
And, you know, there are people who definitely claim that.
Ruha Benjamin at Princeton writes amazing books just looking at the fundamentally racist design
of a lot of stuff that is around us
and we don't think to ask about.
But, you know, I don't see you guys' faces.
I'm guessing you may be white.
Certainly your surname is White, so there we go.
But, you know, I'm an old white guy
and I have, you know, a pretty easy life.
And it gives me the illusion, because I'm an old white professor in Cambridge,
that everything that comes out of my mouth must be,
it doesn't matter where those words go,
obviously they are super intelligent.
You can put them in a book, you can put them in a machine.
It's nothing to do with my body.
They would be intelligent from anybody.
But I know that I've got colleagues who are black
and I've got colleagues who are younger,
and I work a lot with computer scientists
on the African continent.
And those people,
they can say exactly the same words I do,
and it doesn't sound intelligent to other people
because I have got the luxury of pretending
that my words would be received as the same
no matter what body they came out of.
But a person who's got a black body or who lives in the wrong place, they know very
well that that's not true. Intelligence is not a disembodied thing. Intelligence is actually
pretty much bodied, and it's only the people who've got the right kind of bodies that can
pretend otherwise. Octopus bodies.
To wrap this up, and at the risk of asking you to be my therapist,
I have a great deal of despair about all the things happening in technology,
but particularly where LLMs are going,
and not the technology necessarily,
but the way they are being used, abused, and marketed.
I have a lot of friends who, knowing I'm a musician,
will occasionally send me,
hey, there's this new website
where you can generate a song.
And then they generate a song and send it to me
and they say, what do you think of all of this?
And then he's infuriated for the rest of the day.
I try not to respond with the Miyazaki quote: "I find this to be an insult to life itself."
How should I be engaging
as an experienced technologist and engineer
with this stuff instead of disengaging and going and sitting and just playing drums and not getting paid anymore?
Cool.
Well, I'm so pleased to hear that you're a drummer because that's just what my band needs at the moment.
This is not a serious, I'm-solving-the-problems-of-the-world thing, but this is like, here's a fun game that I'm playing, because I'm a professor in Cambridge and I can.
With a few good friends who are also a little bit suspicious about traditional definitions of knowledge as being sort of inside old white men's bodies.
And so they talk about feminist theorists like Donna Haraway or Karen Barad.
And those feminists were already pretty suspicious about all the text in the world that is written by old white guys and that women sort of have to go along with believing it.
And we're at an interesting point now where text has eaten itself. We've created these things which literally produce bullshit, as I said in a blog entry last year, and I've now got some hard scientific evidence for it that I'll be publishing soon. And until pretty recently, universities in this country and around the world sort of thought that as long as professors were producing text, they were doing a good thing for the world.
But it seems kind of clear now that there's sort of too much text and that producing more of it
is not necessarily good because we can now just do that with LLMs. So if there's somebody out
there that needs bullshit, well, that's good. They can just sit and
chat to an LLM and maybe I should do better things with my life. So with my feminist theorist friends,
I said, what would the post-text academy look like? Because we're always going to have young
people and we're always going to have universities. Learning is part of being human. But if it's not
about teaching them to write essays and if we're not being evaluated on our research outputs
and academic papers, what will we do with our time?
And so we've been trying to invent a kind of knowledge
which is as far away from text and as far away from symbolization
as we could possibly make it.
So we formed a doom metal band.
We play so loud that your body shakes, and no one's going to turn ChatGPT up that loud. We use a lot of distortion pedals. And my friend the singer, who's a professor of creativity, trained in Mongolia in overtone singing, so the noises she makes are not regular notes; you can't really write them down as a piece of music. We're having a lot of
fun. We've played our first gig and it was too popular. So we've had to say we're only going to
play underground from now on because we don't want to do this in front of a big audience.
But definitely, this is seriously challenging. What is the value of an LLM? How much time are
we prepared to spend having a conversation with a thing that doesn't have a body?
I'm fine with not having a body. I just want it to have more than
a pastiche of random
stochastic poetry. I feel like we've gone back
to which science fiction books we liked the best.
Imagining having a different body is an interesting thing.
But coming to terms with the fact that you're in a body,
yeah, I guess I'm a good many years older than you, Alicia.
I'm sort of getting towards the point in my life where,
yeah, this body's going to give out sometime.
And I can either be Elon Musk and say,
when I'm immortal, I'm going to go and live on Mars. Or I can say, actually, this is what it means to be human.
To be human means to be born.
It means to die, and it means to be youthful, to be old, to do all those other things.
So quite a lot of the book actually says, just think about what it means to be human.
And is AI helping you with that? Or maybe moral codes,
you know, computers that will do what you want and help you achieve what you want. You know,
maybe that's what being human is about. Alan, thank you so much for this wide-ranging conversation. It seems a little redundant, but it's tradition. Do you have any thoughts you'd
like to leave us with? Yeah, I think the reason why we need
less AI and better programming languages is that having control over the digital realm
allows us to become more properly human. And a lot of the things being sold to us via AI
make us less human and give us worse lives. Our guest has been Alan Blackwell,
author of Moral Codes, Designing Alternatives to AI,
and professor of interdisciplinary design
in the Cambridge University Department of Computer Science and Technology.
Thanks, Alan. This was really great.
Thank you very much. It's been a pleasure.
Thank you to Christopher for producing and co-hosting.
Thank you to our Patreon listeners Slack group
for their questions and answers to my polls. And of course, thank you for listening.
You can always contact us at show at embedded.fm or at the contact link on Embedded.fm where there
will be show notes. And now a quote to leave you with from the very beginning of Moral Codes.
There are two ways to win the Turing test.
The hard way is to build computers that are more and more intelligent until we can't tell them apart from humans.
The easy way is to make humans more and more stupid until we can't tell them apart from computers.
The purpose of this book is to help us avoid that second path.