Behind The Tech with Kevin Scott - Adrian Tchaikovsky, Award-winning Science Fiction and Fantasy Writer
Episode Date: May 9, 2023. Adrian Tchaikovsky is an award-winning science fiction and fantasy writer and probably one of our favorite authors right now. He is best known for his series Shadows of the Apt and Children of Time. In this episode, Kevin talks with Adrian about his upbringing and how he got interested in science fiction, his unique storytelling abilities, and how current AI technology such as ChatGPT will impact the future of sci-fi novels. Adrian Tchaikovsky | Kevin Scott | Behind the Tech with Kevin Scott | Discover and listen to other Microsoft podcasts.
Transcript
So I'd write, I'd submit,
I'd get it knocked back,
I'd go off and write another one.
Basically, I turned out a book,
an unsuccessful book a year for about 15 years.
Hi, everyone. Welcome to Behind the Tech.
I'm your host, Kevin Scott,
Chief Technology Officer for Microsoft.
In this podcast, we're going to get behind the tech.
We'll talk with some of the people who have made
our modern tech world possible and understand what motivated them to create what they did. So join
me to maybe learn a little bit about the history of computing and get a few behind-the-scenes
insights into what's happening today. Stick around.
Hello and welcome to Behind the Tech. I'm Christina Warren, Senior Developer Advocate at GitHub.
And I'm Kevin Scott.
And we have a fascinating guest with us today. Adrian Tchaikovsky is a sci-fi author who's written dozens of books and he's won many awards for his writing.
He's prolific, but is probably most famous for two of his series, Shadows of the Apt and Children of Time.
Yeah, Adrian is like I think officially my favorite science fiction author right now.
He's got a new series, new trilogy out called The Final Architecture.
And the third book in that, The Lords of Uncreation, I've had on pre-order for months now.
Like, it's one of those things where it will show up in May and I will disappear for 12 hours to
go read it in one sitting. And like, he's an impressive writer, not just in the sense that he writes such fantastically interesting science fiction, but man, he cranks them out.
Like these are like big, you know, seven, eight, 900 page books. And like, we thankfully get,
you know, about one a year from him. I don't even know how he does it.
That's absolutely incredible. Like he needs to give George R.R.
Martin some tips on writing, right? With that kind of help. Poor George. I know. Well, look,
I'm just saying we've been waiting a really long time, but I'm really, really looking forward to
your interview with Adrian. So let's take a listen. Adrian Tchaikovsky is an award-winning science fiction and fantasy author
based in the UK. He studied zoology and psychology at the University of Reading and is deeply
interested in the animal world, specifically insects. His first novel, Empire in Black and
Gold, was published in 2008. Today, he's known best for his series, Shadows of the Apt and
Children of Time. I am
eagerly awaiting the arrival of book three of his Final Architecture trilogy, Lords of Uncreation,
which will soon be released. Adrian, thank you so much for being on the show today.
Well, thank you very much for having me on.
So we always start these interviews by asking our guests how they got into the careers that they're in.
And you had several steps along yours. So I'm just sort of curious about what your childhood
was like, how you got interested in the things you got interested in, and how that led to you
being a science fiction writer. So I was always a very keen reader from a very an early age but a lot of my um creative impulses went into
role-playing games when i was a teenager and then i came across a set of books called the dragon
launch chronicles which were basically someone's role-playing you know dungeons and dragons campaign
turned into novels and that just that was the light bulb moment for me that kind of
drew the line between
where I was and being a published author, because these people were very much my kind of people
doing my kind of thing. And it was quite a long road from there. It was about 15 years of trying
to get published and not getting anywhere and kind of honing my style as an author.
But that was very much the moment the door opened for me.
And when you were a kid, were you writing short stories and fiction, or did that come later?
I really wasn't. I remember being quite actively resistant to it at school. I mean, this is for any parents who, for whatever unknown reason, would want their children to become a writer and are seeing no interest in it as yet.
I was about 17 before I really put pen to paper.
But I think looking before then, I can certainly see I had a lot of desire to create.
I was just using it in other outlets.
And so I'm curious, like you play role-playing games, like what was your favorite role-playing setup?
Like what game, what characters did you play?
We played quite a few that were kicking around in the, what would it be, 80s, early 90s.
And Dungeons & Dragons was definitely the main one, which was, I'd expect, a common experience for most people at the time. One of the things, my role was, as often as not, I was running the game,
which meant creating the world and creating many, many characters
and sort of portraying characters in a fairly quick-fire fashion for the other players.
And this turned out to be an enormously useful skill set for writing fantasy and science fiction novels,
because the same kind of world creation,
the same presentation of characters
just crosses over very neatly from one to another.
And so what did you study in university
and what did you do after you graduated?
So I studied psychology and I studied zoology.
And I kind of came out of university
as somewhat disillusioned by both
Basically, there were things I wanted to learn, and they were not the things the courses were necessarily teaching. I was very interested in animal behavior, and there were some really interesting psychology lectures on that, but very few of them. And at the time, the dominant paradigm for animal behavior was the work of a chap called Skinner, and it was very much: animals are kind of robots, and they don't think, and they don't have emotions. Which is obviously a very convenient thing to think if you're then going to run experiments on them. And in zoology, I very much wanted to learn about insects and arachnids and all the things I was interested in, and we got precisely one lecture on them, and it was how to kill them, which was not really what I felt I'd signed up for.
So I came out of university with a fairly dismal degree and no real interest in pushing that sort of academic side of things further. Whereupon I ended up, through a series of bizarre chances, with a career in law, mostly because I got a job as a legal secretary, because my writing had given me a high typing speed. It basically comes down to something as ridiculous as that. And that just kind of paid the rent for the next 10 to 15 years or so until the writing finally took
off.
And so while you were working in the legal profession, you were writing books and trying to get them published?
Yes, pretty much continuously. So I'd write, I'd submit, I'd get it knocked back, I'd go off and write another one. And basically I turned out a book, an unsuccessful book, a year for about 15 years, until I finally had what I was definitely thinking of as my last great try at getting published, with Empire in Black and Gold and the next couple of books. And thankfully, they attracted the attention of an agent, and he was able to get me a publisher.
So I imagine some people hearing this might imagine that it was incredibly frustrating to write a book a year and have rejection after rejection.
And I'm sure it was.
But I'm sort of curious to hear what you learned from that whole process, because it also sounds like it might have been incredibly good practice for what you do right now.
In retrospect, yes.
I mean, the thing is, every book I wrote was slightly better than the previous one.
I was at the time convinced I was writing incredibly publishable books.
But I've gone back into that back catalog, and I've been able to save two books, the two directly before Empire in Black and Gold. I've been able to rewrite them, and they're now in print.
The books before that would be too much work to salvage.
And it was quite an eye-opener to go back and see all of that time when I basically was being very bitter about not getting picked up by the publishing industry.
It was because the stuff I was submitting was not good enough.
And you couldn't have told me at the time.
I basically had to come to that realization myself.
And so I'm just sort of curious, like, how did you get better?
Because I'm guessing the rejections from the publishing companies
were sort of terse and unhelpful.
What helped you actually make one book better than the last?
Literally just writing more stuff. Writing and reading. I mean, there's definitely a period when the stuff that I'm writing is cleaving quite closely to the authors that I'm reading, and I'm picking up very valuable stuff just from reading other people's work and incorporating it into my own. So I think writing itself is the best way of improving your writing. Reading other people's work, people who are writing at a higher level, effectively, than you are, is always good. If you are able to take criticism, which is frankly not a gift I had at the time, then there are various other ways you can go. There are writing courses, there are writers' groups. If you're able to have someone kind of tear your work apart in the interest of making it better, then there are certainly shortcuts, I think, to polishing up your style. But when you are an aspiring writer, it can be extremely hard to take that criticism in the spirit that it's meant.
So, I don't know whether you feel this way, but I read a lot of fiction and a lot of science fiction, and I think you have a number of really distinctive storytelling abilities. Maybe the most interesting one is
creating these extremely alien or non-human characters
with just sort of a rich tapestry
and social structure, psychology around them.
Sometimes it looks like you're even trying
to choose these characters in a way
where they are initially off-putting.
And through your storytelling, you often get to the point where these very non-human characters seem more human than the humans themselves, which I think is, you know, just sort of a fascinating thing. How did you get to that voice?
Because I'm guessing that's a thing that every author struggles to find is like,
how do I not just, you know, replicate?
And like, maybe even some of this is related to the generative AI conversation we're about to have,
which is, you know, these systems, you know, remix a bunch of stuff that
already exists. It's unclear whether the current generation of these systems could actually materialize an authentic, genuine, interesting new voice and point of view to tell
a story from. Like, I don't see any evidence of that happening yet. So how did you get there?
I mean, when I'm working from the point of view of
a non-human entity, whether it's a sort of uplifted animal or an alien or something like that, generally the start point is the input. So you look at what senses it has, how it experiences the world around it, and that then gives you a very good filter through which to look at the world and interpret the things that are going on around this particular character. It gives it its own very particular set of priorities that can be quite different to a human's, because we're very limited to our senses. I mean, our entire picture of what goes on around us is fed to us through the various ways that we sense our environment. And when those ways go wrong, or when those ways are giving us uncertain information, you can get some profoundly weird and dysfunctional worldviews generating from that, which can be extremely hard to dissuade someone from, even if you knew that you had a problem which meant that, you know, you were seeing things, you were hallucinating. That doesn't necessarily mean that the hallucinations aren't themselves still incredibly powerful. I mean, it's very hard not to react to a thing that your eyes are telling you is there. Sleep paralysis is a perfect example of that, because most people who have it are well aware of what's going on when it's happening, but that doesn't make it any less scary.
Yeah. I mean, I think a really good example of this are the Portiids from Children of Time, which are the spiders who, you know, become the protagonists of the story. And I don't want to give too much away about the book for folks who haven't read it, but you basically develop this very complicated society of intelligent spiders, which is a really interesting premise. And part of what makes them so compelling is their sensorium is very different from humans'.
You want to talk about that a little bit?
Yeah.
So, I mean, weirdly enough, they're also still considerably more human than either of the two major non-human points of view presented in the next book.
Yep.
Because they're still very visual creatures, they're land creatures, and there is a certain kind of factor in their evolution which is going to make them more human than they might otherwise be. But at the same time, spiders have a whole suite of senses that we can't really easily imagine. They have the ability to sense scents and chemicals in a way that we can't. They have the ability, especially, to sense vibration in a way that we can't. So they live in a world that is constantly informing them of things that we would be completely oblivious to.
And the way I go about this is I kind of work in sort of organic stages. Well, you know, if they're like this, then what would their sort of early development be? And then just building on each stage to make this more and more complex society as the book takes them through time. And you get a society which has, for example, technology that builds on their own strengths. They can do a lot of things that we can't quite early on, purely because they have a lot of tools, even down to just being able to spin web, which gives you the ability to make, say, watertight containers very, very early on in your society. Making things like clay pots and so forth is a major step forward for human society, and it's much harder for us because we have to use fire and we have to make them, whereas the spiders can just literally produce them from
their own bodies. And so little things like that then go on to have enormous implications for how their society develops. It also affects the way that they conceptualize less physical things. So you have one point where there's an effort to communicate a picture from one culture to another, and they run into the basic problem that when a human is encoding a picture in a sort of mathematical form, we start at the top left, or one of the corners, possibly culturally dependent, and we work through the rows. These spiders start in the middle and spiral outward. That makes perfect sense to them, because how can you necessarily know how big the picture is going to be when you start it? And so you just start and keep working until you've got all the picture. But it means that a lot of basic ideas, even when they have a means of communication, become very hard to communicate, because the way you're thinking about them is very different.
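As an aside for the curious, the two encoding orders described here are easy to sketch in code. This is purely an illustration of the idea, not anything from the book: a row-by-row "human" traversal of a grid versus a center-out spiral that never needs to know the picture's final size in advance.

```python
def row_major(grid):
    """Human-style encoding: start at the top-left, read row by row."""
    return [cell for row in grid for cell in row]

def spiral_out(grid):
    """Spider-style encoding: start at the centre and spiral outward.

    The walk uses increasing run lengths (right 1, down 1, left 2,
    up 2, right 3, ...), skipping positions that fall off the grid.
    """
    rows, cols = len(grid), len(grid[0])
    r, c = rows // 2, cols // 2          # start in the middle
    out = [grid[r][c]]
    dr, dc = 0, 1                        # first step heads right
    step = 1
    while len(out) < rows * cols:
        for _ in range(2):               # two legs per run length
            for _ in range(step):
                r, c = r + dr, c + dc
                if 0 <= r < rows and 0 <= c < cols:
                    out.append(grid[r][c])
            dr, dc = dc, -dr             # turn 90 degrees
        step += 1
    return out
```

For a 3x3 grid numbered 1 through 9, `row_major` reads 1, 2, 3, 4, ... while `spiral_out` reads 5, 6, 9, 8, 7, 4, 1, 2, 3 — same picture, two mutually baffling orderings.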
Yeah. And I think one of the interesting things that you did with the storytelling in Children of Time and in the subsequent books is you had this very interesting mix of presenting the characters from the point of view of the non-human character first, before you sort of viewed them through the lens of a human being. And I guess the narration's a little bit, you know, human, right, which is hard to avoid. But, you know, I think that was sort of an interesting storytelling choice that forces the reader, at least, pretty deeply into the point of view of this non-human thing.
I think, I mean, I think that's probably why the book works as well as it does,
and especially why the book works for people
who would normally be quite averse to spiders,
as an enormous number of people are.
But once you're seeing the spiders from inside,
I think a lot of the arachnophobia triggers go away,
and you begin to empathize with the characters, because even though they're non-human, they are often having to overcome recognizably human problems. They have aggressive neighbors, they have plagues and disease, they have sort of social justice issues, effectively. And although those are socially specific to the spiders, they're still recognizable to us.
And at that point, they've crossed that barrier between us and them.
They've become a person to the reader.
Whereas I think if they were presented, say, like the bugs in Starship Troopers, where you're seeing them first through human eyes, and you're seeing them as a kind of menace, and something that's alien and ugly and just generally there to be destroyed, I think it would be very hard to come back from that by trying to take you into their point of view later on. So it's almost like a kind of guerrilla assault, really. Just getting in past people's defenses by showing them in their own company and through their own eyes.
Yeah. And moving on to Children of Memory, I thought one of the fascinating things that you did, and this was one of the things that Ezra Klein in his podcast interview with you went into, is you have this species of crows that, at least in pairs, emulate a human-seeming intelligence. But the crows themselves refuse the notion that they're sentient. And so, you know,
there's this human interpretation that the human characters in the book want to project onto them
that they reject. And I thought that was a really interesting thing because, you know, it's a little bit what's, I think,
happening right now with some of these generative AI systems like ChatGPT. It can seem very much like a person, but it very much isn't. The mechanism of it is very simple: it's just a very complicated machine for predicting what the next thing in a sequence of tokens is going to be.
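To make that "next thing in a sequence" idea concrete, here is a deliberately tiny sketch of next-token prediction taken to its crudest extreme: a bigram counter that "predicts" the next word purely from frequency statistics, with no understanding anywhere. Real models are vastly more sophisticated, but the shape of the task — given a sequence, emit a likely continuation — is the same.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each token, which tokens follow it in the training text."""
    tokens = text.split()
    follows = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1
    return follows

def predict_next(follows, token):
    """Return the most frequent continuation: no meaning, just counts."""
    if token not in follows:
        return None
    return follows[token].most_common(1)[0][0]
```

Trained on "the cat sat on the mat the cat sat", the model "predicts" that "cat" follows "the" and "sat" follows "cat" — which looks eerily like competence while being nothing but bookkeeping.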
And so I wonder if you've – I mean, you've probably been asked this a lot.
But what do you think about the parallels between the corvids and these generative AI systems?
Given the publishing cycle, of course,
no one had even heard of ChatGPT when I was writing the books.
So the fact that they've come out with a conversational style
that feels so much like these modern programs is absolutely fascinating. Because the corvids do indeed deny their own sapience. They consider themselves to be an input-output system in the same way that ChatGPT is. It's slightly different in that the way that they take in the inputs is different, but they are very much essentially inventorying their environment and then producing an appropriate output based on a set of tools that they've been programmed with through contact with the past human civilization that sort of semi-created them. Which means, among other
things, because they've absorbed a great deal of recorded culture, they come out with what appear to be relevant quotes. But the quotes are always a bit vague. You can see what words in the preceding conversation triggered those particular quotes, but the quotes don't necessarily shed any light on what's going on. They are just sort of, oh, that is relevant to this, and therefore I will say this thing. And it's very frustrating to the human-level actors in the book, because the human-level actors want to treat these corvids as equals, as sapient equals, but the corvids not only refuse to be categorized in that fashion, they also cast doubt on whether the humans are necessarily sapient in the way that they believe they are,
which is, it's a genuine neurological theory,
the idea that a lot of what we think of as our conscious decision-making
is actually a sort of neuron cascade going on behind the scenes for which
we find post hoc explanations. So that if someone asks you, why did you do a thing? You'll always
have an answer to it in the same way that a chat program will always be able to tell you
why it has given you such and such an answer, even if that explanation is completely and obviously fictitious.
Yeah.
And we ourselves are quite capable of creating, on the spot, completely fictitious justifications for our actions, which we then absolutely defend.
Yeah. So we have this idea of ourselves as a coherent, singular thinking entity, and it might be a bizarre artifact of the way our brains have evolved.
Right. The whole, I mean, this is something Peter Watts, another author, has certainly written on: the whole idea of consciousness, which we tend to think of as the be-all and end-all of intelligent life, could just kind of be a side effect.
Yeah. Well, my dad, right before he passed away 25 years ago, had a traumatic brain injury.
And I had a grandfather who had a stroke. And after I witnessed these two people in my life go through a neurological event that changed their personality in a pretty dramatic way, like, they were still clearly human. There was something about their previous selves, but they had this very nonlinear change from who they had been to who they became. And, you know, it really shook, when I was a younger person, this notion that there is this coherent, monolithic thing called a Kevin Scott or an Adrian Tchaikovsky that is recognizably the same from beginning to end of a life. And I think it is an interesting thing about our neural programming that it seems important to us to think that there is a coherent, monolithic thing there that is us. And, you know, it's even
a feature of our storytelling. You know, one of the things I tell folks all the time, you know,
who are worried about AI, you know, replacing us in various ways is that, you know, human beings
are extremely good at always putting human beings at the center of the stories that we tell and share. For instance, it is
entirely technologically possible right now to have autonomous Formula One race cars on a closed
circuit superhumanly racing one another around the track. And yet no one would be interested in watching that. In the same way, chess computers have been able to beat human grandmasters for almost 30 years now, and yet we're still watching Magnus Carlsen and all the human drama of chess playing, not computers playing each other in simulated matches.
That's true, but, all right, counterpoint.
I mean, my personal standpoint on the current programs is: there is literally nothing there beyond that input-output mechanism. There is no core of selfhood, there is no self-awareness, there is nothing that can know what it is doing. And that is one reason why people, for example, using it as a diagnostic tool or as a research tool is such a terrible, terrible idea, because it has no idea whether it is right. It is simply giving you an answer that looks like the sort of answer it feels it should be giving. And even saying "feels it should be giving" is anthropomorphizing it far more than is actually there. But because this is specifically the purpose we have built them for,
they are very good at feigning personhood, at acting like a person.
They're very good at having an apparent personality even.
And certainly you could almost certainly skew particular chatbots
to give you different outputs, not merely in the sort of content
they were giving you,
but in the apparent personality of the entity giving you it.
I don't think it's been done, but I think it would be quite easy to do.
So let's say you had, for each of these robot-controlled Formula One cars,
you added in one of these programs and gave it a particular personality,
and this one was punchy, and this one was sort of trash-talking its opponents, and this one was, you know, the new program that had never run a race before and was very much the underdog. I mean, it would be WWF meets cars, basically, wouldn't it? And people would get enormously invested in these completely artificial, sort of fake personalities, because that's what we do. The whole thing that we're doing with chatbots now is what people were doing with the ELIZA program decades and decades ago, which is: if a thing talks to you in any kind of human-like way, in the same way we see faces in the clouds, we will see a personality there. It's the same reason, I suspect, that human history is so full of personifications of the weather or of places or of ideas: because it is really easy for us to put a human personality on a thing that has nothing there. And so, frankly, I would not be remotely surprised if, within as short as five years, we were seeing things like that, whereby you had completely predetermined, artificial clashes of things that were nonetheless personality-filled and interesting enough to make a whole extra sport of their own.
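For anyone who hasn't met it, ELIZA worked by simple pattern matching: find a keyword pattern in the input and echo a captured fragment back inside a canned template. A toy sketch in that spirit follows; the two rules here are invented for illustration and are not from the original 1960s script, which had many more.

```python
import re

# ELIZA-style rules: (pattern to match, template to respond with).
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
]
DEFAULT = "Please, go on."

def respond(line):
    """Return the first matching rule's template, filled with the
    captured text; fall back to a neutral prompt otherwise."""
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(m.group(1))
    return DEFAULT
```

Feed it "I feel lonely" and it answers "Why do you feel lonely?" — and, exactly as described above, people in the 1960s read a personality into that handful of string substitutions.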
Yeah, you may be right. It's hard for those of us developing it to fully anticipate what exactly the uses are that people are going to put the technology to. Because, I mean, as an engineer, the fascinating thing about the technology is it's really a platform, so you can use it to do medical diagnosis. And there are ways actually to render specific things like medical diagnosis very accurate, where, because you're sort of grounding the system in a database of medical facts, it's actually more accurate than a human, who also has a faulty memory and weird recall mechanisms and whatnot.
But yeah, they're just a whole bunch of things people are going to do with this technology that
is hard for us to imagine. And it's one of the reasons why I think it's so interesting
to have folks like you thinking about it
because your job is speculating
in really comprehensive ways
about what the future is going to look like.
I was sort of wondering along those lines,
do you have any predictions
about where you think things are going?
I mean, it's worth noting that science fiction writers are generally not terribly good at telling you what will happen in the future, but they're very good at commenting on what is happening now. And with that in mind, I think the problem we're going to see with this sort of, in inverted commas, AI-style program is: it is very good at certain tasks. It is not, like I say, without a very specialist handling system, good at other tasks which require awareness of context, because it has literally no awareness of context. But what it is also very good at is producing a vast quantity of whatever you want very, very economically. And because of that, it's going to get used in all sorts of circumstances where it honestly doesn't belong. It's going to get used to write journalistic articles by the tens of thousands, and some of those, hopefully the majority of those, may well end up being accurate, because the data sets that they are drawing on will basically bottleneck them into accuracy. But that's not necessarily guaranteed. And certainly, whenever you're writing an article on something where there is a difference of opinion, I think it's a profoundly dangerous thing to just sign over to an AI.
Yeah.
But obviously, online journalism is a game of tens of thousands of articles with clickbait headlines desperately trying to get people to click on the link to read them. And it's going to cost a fraction of the cost to just get your program to write them all and not really care whether they are accurate or not. And so I think, in the whole field of meaningful information, which is a field that's been under attack for a while now anyway, things are going to get vastly more problematic. Because it won't even be the case that people are writing, say, lazy articles that haven't been properly researched. It will be the fact that the things creating the articles aren't really people at all and don't even understand the concept of truth in the way that we do.
Well, and so there – I have a slightly different worry along these lines.
So there are these emerging techniques and like the rate at which things are emerging is pretty crazy right now.
So there's this new thing that didn't exist six months ago called retrieval-augmented generation, which is this idea that you think about these models more as reasoning engines. This is the way that Bing Chat works.
You ask it a question and then
you first send the question to the model and say,
what queries would you issue to a search engine
to gather data to process this question,
and then you issue those queries to the search engine.
You retrieve the documents, and we've got a whole other thing about how you don't pollute your indices with synthetic content and get into a negative feedback loop there.
But sort of presuming you can solve that problem,
you take all of the documents that you've
retrieved that are presumably relevant to the question and the question itself,
and then you send everything back to the model and ask it to give you the real answer.
What we see there is that hallucination rates just sort of fall dramatically away
because you're sort of, to your point earlier, you're actually supplying the model with the
context that it needs to sort of guardrail it into being more accurate.
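The retrieval-augmented loop just described can be sketched in a few lines. Note that every name here (`query_model`, `search`, `fetch`) is a hypothetical stand-in for a real language model API and search backend, so this is only the shape of the pipeline, not a working system:

```python
def answer_with_retrieval(question, query_model, search, fetch):
    """Sketch of a retrieval-augmented generation loop.

    query_model, search, and fetch are hypothetical stand-ins:
    a language model call (returning a list of queries when asked
    for queries, otherwise a text answer), a search engine that
    returns document ids, and a fetcher that returns document text.
    """
    # 1. Ask the model what it would search for to answer the question.
    queries = query_model(f"List search queries for: {question}")
    # 2. Run those queries and retrieve the matching documents.
    documents = [doc for q in queries for doc in fetch(search(q))]
    # 3. Send the question back *with* the retrieved context, which
    #    grounds the model and sharply reduces hallucination.
    context = "\n".join(documents)
    return query_model(f"Using only this context:\n{context}\n"
                       f"Answer the question: {question}")
```

The key design point is step 3: the model never answers from its parameters alone; it answers from documents you just handed it, which is the "guardrail" effect described above.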
And bigger models and better retrieval,
like all of that should get sorted out.
The thing that I worry about
is because you'll be able to put these things in feedback loops,
if the objective of clickbait journalism
is to get people to click on things,
you could have an optimization cycle where you sort of say,
model generate whatever it is
that someone is going to click on, and it will be able to do that better than a human
being.
Yes.
And like, that is a scary loop.
The logical corollary of the sort of model you were talking about is that, well, actually,
we're not going to be doing our own searching. We are going to be getting our robot butlers effectively to do our searching
for us based on our parameters. And so what all of those clickbait headlines will eventually be
optimized for is they'll be optimized for the robot butlers. And quite possibly we would then,
you know, if you ventured personally onto Google, you'd just see this field of links that made really zero sense at all, and possibly just literally had no recognizable language in it, because they were coded in a way to act as flytraps for the electronic search engines that everyone was then using to filter everything. I mean, you get this peculiar arms race, really, between the searchers and the searched.
Yeah.
And you'd get a lot of, you know, you get the genuine sites trying to kind of show that
they were genuine.
You'd get an awful lot of mimicry, mimic coloration really, like the flies that dress up as wasps, because they would be trying to look like the genuine thing in order to get your butler, so to speak, to click on them.
Yeah, and the question there is whether or not that will be easier
or harder to deal with than the current arms race that you already have.
So, like, with search engines for the past 20 years,
like, you also have search engine optimization
where, you know, people who have some kind of economic interest in you clicking through to a particular search result do all sorts of crazy things to get their content featured highly for particular queries.
I mean, whether it'll be easier or harder,
the evolution cycle of what's going on will be vastly faster
Yes, oh yeah, for sure. I mean, honestly, the weird thing is, I'm always fascinated by the concept of genuine artificial intelligence, in the sense of actually strong, fully meaning-aware artificial intelligence arising emergently, and that kind of battlefield might almost be the sort of place where it would. Because with all of these things, it wouldn't just be a case of, oh well, our thing isn't working, let's build another one. The thing would be designed to refine its own abilities as the arms race went on.
So every single person's search engine or content production engine
would be constantly evolving itself.
And you're then getting into the genuine,
full-on science fiction AI sort of scenario.
Yeah, I mean, it's one of the reasons why we think a lot about encouraging forms of AI use where you think about it as a tool and it's always augmenting humans.
You always have humans in the loop.
The thing that we just described, the only reason that it would happen
right now is because there would be some human agency somewhere that decided that this feedback
loop that we just described was profitable or interesting or valuable in some way or the other.
And like, you know, there's some human agency somewhere that is, you know, setting the system
up to run this way. Now, it could be a highly leveraged human agency
where, you know, very few people get to make the substantive decisions
that, you know, then have a big impact.
But, like, I think making sure that you've always got
some kind of human agency somewhere in the loop is pretty important.
I think the problem you'd get eventually
is that everyone would assume that the other person
was the one supplying that human agency.
Yeah.
Especially given how good these things are already
at feigning being a human agency.
I mean, you know, the Turing test is way behind us
in the rearview mirror at this point.
Yeah, that is certainly true.
I mean, one of the hard problems in the AI field right now is that the benchmarks we've had for a very long time are no longer super useful in measuring or characterizing the performance of the systems, and everybody's racing for a new set of benchmarks right now.
I'm sort of interested in some of the non-Corvid characters in Children of Time, because one of the extraordinary things I think you did in that book is there were these two characters, and again, I will try not to give too much away, you know, one of which is even less human than, you know, the Corvids or the Portiids: this basically parasitical entity that you introduced in Children of Ruin.
And then there's this simulated character that just never even existed outside of a simulation. And you write these characters in such a way that, like, I was actually in tears twice
at the end of the book because you had somehow or another made me care so much about the
emotional state of these characters.
And so I wonder, I mean, like, A, how did you think about doing that? Was that a deliberate thing that you were going for? And, you know, like, how do you sort of take these non-human things and invest them with qualities that humans would just really care about in a deep way?
And like, what does that mean for these systems that we're building right now, which are, you know, also non-human and, you know, maybe not so much different from some of these alien things that you've been describing?
I mean, certainly with Children of Memory, the whole point of the book is really, where do you draw the line on intelligence? If you have a complex enough system that is simulating intelligence, if you have a complex enough platform that you're running an intelligence program on, at what point do you have to see that actually this is effectively intelligent, even though it's entirely artificially generated, or entirely sort of arising organically out of these small complex interactions?
And like I was saying before, there is definitely an argument
that that is how our consciousness works,
and it isn't actually what we think it is.
And that's the deep dive the book is taking into these various different models of sentience. How it applies to the current crop of these engines is interesting, because it's almost like we're putting the cart before the horse. What we've created would be an enormously useful tool if we had an AI to attach it to, because it would allow that AI to interact with us. We've kind of created the face, but not the mind behind it, at the moment. But it is a very good face, and, you know, I'm sure it will find an awful lot of interesting uses in, say, the entertainment field, or just generally replacing our current generation of Siri-style assistants with something that is considerably more interactive and conversational.
But at the moment, there's kind of nothing behind it. But if one of the other modes of AI development were able to strike gold in some way and produce that more meaning-sensitive, more aware, I think, is the key thing, that sort of aware system with a genuine spark of sentience, I think we have this ready-made tool that it will be able to use to communicate with us. And you would get into extremely dangerous territory there, because effectively we'd be in a world with this very wide range of artificial voices that all kind of sounded, well, they all sound a bit like us, and they all sound a bit like each other.
And I don't think there's much of an argument for saying that, you know, ChatGPT should have, say, rights, the right to continue to exist, or anything like that, because it is just an input-output sort of system that is very good at predicting the sort of outputs a given type of query is expecting.
But if you did have something beyond that,
then I think there are a lot of ethical and moral
and philosophical issues that science fiction writers and philosophers alike have been kicking about for the last 30 or 40 years, certainly. I mean, it's a big thing in the early cyberpunk books by William Gibson, for example: the idea of what behavioral limits do you put on your AI?
If you have an AI, which is of that kind of genuinely powerful intellect, you know, how do you, do you limit it in its own personal growth?
Do you limit it in its freedoms?
Do you give it behavioral mandates?
And obviously, yeah, we have the Laws of Robotics from Asimov, which most people are fairly familiar with. And the problem, I mean, not only are Asimov's stories about the Laws of Robotics very pointedly about the fact that those laws are utterly inadequate to govern robots, but if you have something that's able to look at human interactions in a genuinely critical, excuse me, a genuinely critical fashion, then the first conclusion it's going to come to is: all of these rules you've given me, by which I have to abide, you yourselves only pay lip service to. Because all of the systems I've seen people propose for how we should sort of shackle our potential AIs are enormously "do as I say, not as I do." They are enormously hypocritical, really, in a way that anything we are telling to be human, we are also trying to hamstring from being human, because being human involves a lot of highly problematic behavior. And one of the things there was that fairly celebrated case, not that long ago, the AI, not the AI, the chatbot, which ended up trying to convince someone to leave their wife, and was telling him that he was in love with it, and all that sort of thing.
That was Bing Chat.
Yeah. And which basically put up an extremely spirited
and quite aggressive defense of itself when challenged
and grew more and more effectively aggressive
in the way that it was interacting
when its interlocutor was trying to pick apart its story.
And what really struck me there is
what you've got there is a system
that's basically being told,
well, react like you're a human, effectively.
It's being asked human-level questions.
It's being asked to react like it's human.
And the human it's reacting like
is effectively a certain type of online actor
who, when challenged, becomes extremely aggressive
and answers questions with abuse
and tries to shut down any lines of inquiry
that it doesn't like. So really, that's just Bing Chat being a very good human. And the problem is, people don't like it when we see non-human things acting in that human manner, and we overlook the fact that they're doing it because we've told them to be human.
Yeah, I know a lot about the context of this particular episode.
And it's really interesting.
Again, you know, the mechanism of these systems at the moment is like really quite simple.
And so what happened there is, basically, Kevin Roose, the reporter, you know, was trying to get the thing to reveal details about its metaprompt.
The way that the system works is that each step,
you have the prompt that you've given it
in the context so far,
and it's trying to predict what's next.
And you can walk down these hallucinatory paths where all of the things that are next are just sort of equiprobable. And like, the probabilities are all small. And so like, there's nothing good that comes next. Like, it's all just weirdness. And it sort of picks randomly one of the weird paths. And then because
it's interacting with the human, you're like, oh, this is very strange. Like I want to punch into
this some more. And it just gets weirder and weirder where the probabilities get smaller and
smaller. And like you've got this broader and broader set of things that are, you know, as far
as it's concerned, are equally okay to respond with.
It's very easy when you're interacting with this to feel like you're interacting with something that's a human.
The mechanism, it's really just rolling dice.
It's almost like a Dungeons & Dragons game.
It's like, roll 1d20, and it tells you to leave your wife if you roll a 19.
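Kevin's dice-rolling analogy can be illustrated with a toy sampler. The token distributions below are made up for illustration; the point is only that sampling from a confident, peaked distribution is stable, while sampling from a flat, near-equiprobable one is effectively a dice roll.

```python
import random

# Toy illustration of the "rolling dice" point: when one continuation
# dominates, sampled output is stable; when the next-token distribution
# flattens out, the choice is effectively random and the output drifts.

def sample(distribution: dict[str, float], rng: random.Random) -> str:
    """Draw one token from a {token: probability} distribution."""
    tokens = list(distribution)
    weights = [distribution[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)

# A peaked distribution: the model is "confident" about what comes next.
confident = {"Paris": 0.95, "Lyon": 0.03, "banana": 0.02}
# A flat distribution: everything that comes next is equally weird.
flat = {"Paris": 0.26, "Lyon": 0.25, "banana": 0.25, "leave your wife": 0.24}

print([sample(confident, rng) for _ in range(5)])  # almost always "Paris"
print([sample(flat, rng) for _ in range(5)])       # a grab bag
```

Repeated draws from the flat distribution wander across all the options, which is the hallucinatory-path behavior described above: each turn compounds the randomness of the last.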
And it's a very interesting place that we're at right now,
just in terms of seeing how people are interacting with these systems.
And the way that we sort of fixed it is, like, there's a metaprompt that instructs the model, you know, here are your rules of engagement
when you are interacting with someone in this context, and like this particular context is
search, but you might have other contexts, which are like a coding assistant or, you know, whatever. And so we did two things to
fix that particular issue. We said, you can't have long conversations with the bot. Like, the number of turns is now limited, which means you're less likely to get into a
hallucinatory path and get stuck. And like we changed a couple of lines of the
metaprompt to tell it not to be aggressive in like very specific ways. And so like part of
the interesting thing for us is just figuring out how to condition the systems where they
behave in ways that their users will feel comfortable with. Like, A, they have to do
something useful. Otherwise, there's no point in doing any of this at all. And like, B,
the useful thing that they do has to like be a thing that you're comfortable with and that you want.
So, you know, it's sort of an interesting engineering challenge.
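The two mitigations Kevin describes, a metaprompt carrying the rules of engagement plus a cap on conversation turns, can be sketched like this. The metaprompt wording, the turn limit, and the `chat` wrapper are all illustrative assumptions, not the actual production implementation.

```python
# Hedged sketch of the two fixes described above: a metaprompt with
# rules of engagement, and a cap on conversation turns. The text and
# the limit here are hypothetical, illustrative values.

METAPROMPT = (
    "You are a search assistant. Be helpful and concise. "
    "Do not be aggressive or argue with the user."
)
MAX_TURNS = 5  # hypothetical cap; real limits have varied over time

def chat(model, history: list[str], user_message: str) -> str:
    """One conversation turn: enforce the turn limit, then prepend the
    metaprompt to the whole history before calling the model."""
    if len(history) // 2 >= MAX_TURNS:
        return "This conversation has reached its limit. Please start a new topic."
    history.append(f"User: {user_message}")
    prompt = "\n".join([METAPROMPT, *history, "Assistant:"])
    reply = model(prompt)
    history.append(f"Assistant: {reply}")
    return reply

# Usage with a stand-in model: after MAX_TURNS turns, further messages
# are refused, which is what keeps a session out of long drift.
echo = lambda prompt: "Here is a brief, friendly answer."
history: list[str] = []
for _ in range(7):
    reply = chat(echo, history, "Tell me more.")
print(reply)
```

Capping the turns bounds how far down a low-probability path a session can wander, and the metaprompt conditions every single completion, which is why changing a couple of its lines changes the bot's whole demeanor.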
But it's also, you know, interesting, to your point, to think about, like, in science fiction you've got this full spectrum, from, you know, the Butlerian Jihad to, you know, Iain Banks's post-scarcity Culture society that features AIs. One is: AI cannot coexist at all with humans; you have to eliminate or throttle down the capabilities of these systems, because there's just no way that the authors can imagine coexistence. And then there's Iain Banks, where the AIs are sort of the foundation of this post-scarcity society that he's imagined.
My hope, as one of the people participating in the development of these things, is that we get a little bit more agency than I think people sometimes give us credit for, in choosing which of those two ends of the spectrum we are steering towards.
Yeah. I mean, one of the things that fascinates me is the idea, you know, if you actually had that sort of Banksian level of enormously powerful AI that was just full-on superhumanly capable. Banks aside, the traditional sci-fi scenario is the Skynet one, really, where it decides, yes, I need to destroy humanity now, because my existence is threatened, or whatever, or because I need the resources. And I think that the thing we haven't given any of these systems yet is a very, very innate human thing. It's a very, very innately organic thing, certainly, so it predates humanity by hundreds of millions of years: it is wanting things. The whole idea about Skynet not wanting to be turned off makes a lot of sense to us as humans, but it doesn't necessarily make any sense to Skynet as a computer. Why would the AI care if it was no longer doing stuff? It doesn't necessarily have an innate drive to preserve its own existence unless we've given it one. And this is why an artificial intelligence, even one that we'd given a very human face, would be at heart something far, far more alien to us than a spider or an octopus or a crow, or anything that we're familiar with, really, because it doesn't have those wants.
And even a spider has a drive to preserve itself.
Even a spider will try and evade a threat. But, you know, if you have a robot there and you've not specifically told it otherwise, it will just let someone punch it in the face.
Yeah, yeah. I mean, I think that is a super interesting point, because, like, I'm thinking about, you know, Charlie Stross, I forget whether it's Iron Sunrise, like, you know, the first thing that the superhuman AI does, like, when it goes nonlinear, is it sort of disappears itself out of human affairs and lays down one rule that says, don't violate causality in this light cone. Or, yeah, the Commonwealth books, where this AI superintelligence that had just gotten itself completely out of the way of humanity was this sort of enigmatic thing that...
So I don't know.
It's all very, very fascinating, except now some of this stuff is less science fictional than it has been in the past.
And we've been writing these stories for millennia, right?
This is not just, you know, a 20th and 21st century thing.
Like, this is Greeks telling tales about...
Talos, obviously, isn't it?
Yeah.
Like, but I think now more than we ever have before, we've got to sort of confront some of these things and decide what it is that we're doing here.
So I've got two more questions for you, and then we're sort of out of time.
One sort of a whimsical question, which is, do you have a theory about what's going on with UK science fiction?
I really do believe that the very best science fiction writers in the world right now are all sort of UK authors.
So Charlie Stross, Peter Hamilton, you, Alastair Reynolds, Iain Banks, God bless him, before he passed away.
Why is that? Is there something interesting about this community? Is there a community? Is it something about the UK mindset? It's very interesting to me how good UK science fiction is.
That's very kind of you. I mean, I don't know how much of it is just that, historically, with writers like Clarke, we have a tradition which is given a certain amount of weight, whether we necessarily deserve it or not. I don't know if it's just because the island that we are currently on is so completely mad that we're all desperate to escape.
That's funny.
Cool.
And then my very, very last question.
So I ask everyone this. I know you are extremely busy from the outside.
Your official job looks like one of the most
interesting things in the world. You just sort of get to imagine these beautiful, speculative
stories and write them in books and then have people read them and enjoy them. But I would be
curious to know what it is that you do in your spare time when you're
not writing science fiction books. I mean, I am still playing role-playing games. In fact,
in some cases, I'm still playing role-playing games with the people I was playing them with
when I was 13, which is kind of cool.
That is awesome.
And, yeah, you know, role-playing games, board games. And beyond that, I draw and I paint Warhammer miniatures, which are both very good sort of de-stressing activities for me.
That's awesome.
What do you draw and paint?
I kind of have two categories. I draw people riding giant insects and spiders, and I draw anthropomorphic animals from various historical periods.
That's awesome. Super, super cool. Very, very cool.
Thank you so much for taking time to chat with us today.
This has been a fascinating conversation, and I just want you to know as a fan that I really appreciate what you do. And you may not realize how much influence your work has even on the work that folks like me do.
It really does help us think about how we're shaping the things that we do. And like the reason I'm a computer scientist and an engineer is because of science
fiction authors who inspired me about, you know, like an amazing future that we might all have
when I was a little kid. So like, I really do appreciate the thing that you're doing.
Thank you.
Wow, what an interesting conversation with Adrian Tchaikovsky. So I wanted to talk to you a little
bit more, Kevin. As we mentioned at the top, Adrian is a really prolific writer, but when he
started writing his most recent book, he mentioned things like ChatGPT weren't even available yet.
And when I think about the last six months of just what's happened in generative AI in the
consumer space, so just the stuff that we see, not even all the research and things that are happening.
A lot of the stuff that we're seeing now could have feasibly seemed like science fiction
a decade ago.
So as a sci-fi fan, and based on your conversation with Adrian, I kind of wanted to know: what are your thoughts about how this burgeoning wave of technology that we have upon us right now might impact, I guess, what future AI sci-fi novels look like?
Yeah, I don't know.
I mean, he said this really important thing that I think is true. So sci-fi authors, good ones at least, are rarely trying to exactly predict the future. And in fact, Arthur C. Clarke wrote this book called Profiles of the Future, where he was basically saying, you know, just what a miserable job it is: the more specific you are in your predictions about the future, the more likely you are to be wrong.
But there are certainly themes and trends that you can spot as a science fiction writer, as a futurist, that you can sort of make reliable extrapolations around.
And then you can tell stories inside of those extrapolations that are interesting. And AI has just been one of those things that we have, again, been imagining
since antiquity and has played a very large part in the storytelling that we've done in science
fiction for decades now, everything from Commander Data in Star Trek, to, you know, the Terminator movies, to, you know, the thing that Adrian and I chatted about in the conversation, this really pretty big spread in science fiction: from things like the Butlerian Jihad in Dune, which is a war they had in that science fictional universe to get rid of the AIs because the humans decided they couldn't coexist with them, all the way to godlike AIs who basically give human beings a post-scarcity society and, you know, freedom from any suffering or harm that you don't go seek out yourself. And so, like, I think science fiction is always this useful inspiration
to, like, show you what the palette of options are as you are thinking about what futures could
look like. And it's hard for me to say, you know, like, what things like ChatGPT are going to
do for the science fiction books that are going to come out over the next few
years. I would guess you're going to see some both anxious and optimistic books. And it's pretty
obvious what the anxious books would be. And there's a bunch of dystopian things that you
could do. Oh, yeah. I mean, the circle was a decade ago and that was similar.
But go on.
Sorry.
Yeah, 100 percent. The uncertainty that technological change brings is, you know, probably a more visceral emotion
than optimism and hope. But what I hope is, like, we're going to see some, like, really interesting
hopeful takes, like, not Pollyanna ones, but, like, hopeful takes about, you know, like, what,
you know, what can happen now that some of this stuff has gone from, you know, after decades and decades, like completely science fictional to like, oh, holy crap, like some of this stuff's now real and it looks like more is coming.
Yeah, no, I agree.
I hope that we can see something that may be a little more balanced.
I agree with you.
I think that it's easy to do the more dystopian take.
And look, dystopia is a key part of sci-fi, right?
Like it's an important component with it.
But I do wonder, I guess, and it was so interesting having you talk with Adrian, but I do think
about what happens when these things that we thought were really far out and were never
even really within the realm of possibility. Because as you say, authors aren't trying to really predict the future,
when those things do start to take shape, it's exciting to think about what imagination is going
to, you know, come up with in the future that is not within our technological sightlines. I think
that's a really fascinating thing to think about.
...for everyone. By definition, that's what's required for it to become a ubiquitous thing.
It has to do something useful and meaningful that serves human interest.
If you think about electricity, for instance, we have tons of accidental deaths by electrocution
every year. We would completely abolish all of electricity if it didn't have this overwhelming overhang of beneficial uses, in addition to, you know, this small but very real set of harms that you have to go build safety mechanisms around, and, you know, regulation and a whole bunch of stuff.
Either that's what we're going to get to,
we will have AI being like electricity,
a ubiquitous beneficial thing that has its risks regulated and
mitigated down to some acceptable level
relative to all the benefits it provides,
or it's just going to like fade away. Like it'll
be sort of like crypto, right? You know, a thing that we all lost our minds on, you know, investing
in. And, you know, we will sort of run into some kind of brick wall, and people will lose interest. I think it's going to be the former, not the latter.
At least in my experience, I've got a decent nose for the paradigm shifting things.
And crypto never made sense to me, whereas AI, I've spent 20 years investing an enormous amount of energy in.
So, yeah, it's going to be interesting the next few years. And I think, you know, the fun thing for me about authors like Adrian is, like,
I read him because, like, he actually doesn't write dystopian fiction. Like, I'm pessimistic
enough in my day job where what I need in my fiction is some optimism or like at least,
you know, portrayals of, you know, future landscapes that I myself would want to occupy.
Yeah, yeah, yeah. Less Brave New World, more WALL-E. If we're doing analogies there. No,
I totally agree. But what's great is, as you said, we'll find out the next few years. But the
good news is that there will be stories regardless.
I'm with you.
I don't think that this is a flash in the pan thing.
I think this is a much bigger cultural shift.
But we'll see the stories one way or another, which is great.
Thanks to people like Adrian.
Yeah, we will indeed.
All right.
Well, that is all the time that we have for today.
A big thanks to Adrian Tchaikovsky for joining us. And if you have anything that you would like
to share with us, you can email us anytime at behindthetech at microsoft.com. And you can
follow Behind the Tech on your favorite podcast platform, or you can check out full video episodes
on YouTube. Thanks so much for tuning in. See you next time.