On with Kara Swisher - When AI F*s Up, Who’s to Blame? With Bruce Holsinger
Episode Date: July 24, 2025. What happens when artificial intelligence collides with family, morality and the need for justice? Author and University of Virginia professor Bruce Holsinger joins Kara to talk about his new novel, Culpability, a family drama that examines how AI is reshaping our lives and our sense of accountability. Who is responsible when AI technology causes harm? How do we define culpability in the age of algorithms? And how is generative AI impacting academia, students and creative literature? Our expert question comes from Dr. Kurt Gray, a professor of psychology and the director of the Collaborative on the Science of Polarization and Misinformation at The Ohio State University. Questions? Comments? Email us at on@voxmedia.com or find us on YouTube, Instagram, TikTok, and Bluesky @onwithkaraswisher. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
I'm a little more techie than Oprah.
I hope you don't mind.
I'm a little less techie than most of your guests, I would imagine.
Hi everyone from New York Magazine and the Vox Media Podcast Network.
This is On with Kara Swisher and I'm Kara Swisher.
We're smack in the middle of the dog days of summer and I'm sure a lot of people are taking time on the weekends
or during vacation to relax with a good book. So we thought we'd do the same with you today
and talk about a new novel that's been getting a lot of attention, including from Oprah,
but it is right in our wheelhouse. It's called Culpability and it's written by my guest today,
Bruce Holzinger. Culpability is a's written by my guest today, Bruce Holzinger.
Culpability is a family drama centering around the way that technology, especially artificial
intelligence has become woven into our lives and the moral and ethical issues that can
arise as a result. Who is responsible? Who is culpable when things go awry? And how do
we make it right again? This has everything for someone like me. It has AI, it's got drones,
it's got chatbots, stuff that I cover all year long. And actually, it's written with
a lot of intelligence. A lot of this stuff usually tries to scaremonger and stuff like
that. It's an incredibly nuanced book. It brings up a lot of issues and most important,
it allows you to talk about them in a way that is discernible. I think a lot more people,
especially since Oprah Winfrey made it her book club selection, will read it. And that's a good thing because we should
all be talking about these issues.
Holsinger is also a professor of medieval literature at the University of Virginia,
kind of far away from these topics. So I want to talk to him about how he thinks about generative
AI in his work settings too as a teacher, as an academic, and as a writer. Our expert
question comes from Professor Kurt Gray, incoming director of the Collaborative
for Science of Polarization and Misinformation at Ohio State University.
So pull up your beach blanket and stick around.
Get to Toronto's main venues like Budweiser Stage and the new Rogers Stadium with Go Transit.
Thanks to Go Transit's special online e-ticket fares, a $10 one-day weekend pass offers unlimited
travel on any weekend day or holiday, anywhere
along the Go network.
And the weekday group passes offer the same weekday travel flexibility across the network,
starting at $30 for two people and up to $60 for a group of five.
Buy your online Go pass ahead of the show at Gotransit.com slash tickets.
Hi, Bruce.
Thanks for coming on On.
My pleasure.
Thank you for having me.
So, your latest book, Culpability, is a family drama, a kind of mystery, and there's a lot
of tech woven throughout it, which of course is my interest.
I've been talking about the issue of culpability on the part of the tech companies for a while,
but this is a much more nuanced and complex topic here
because it involves humanity's involvement in it, which of course is at the center of
it.
For people who haven't read the book, give us a short synopsis of how you look at it
right now.
It may have changed since you wrote the book.
Yeah.
So I see the book as maybe trying to do two things.
One, as you said, it's a contemporary family drama about just something really bad
that happens to this family. They get in this bad car accident while they're driving the
family minivan over to a lacrosse tournament in Delaware, but the van is in autonomous
mode or self-driving mode. And their son Charlie, kind of a charismatic lacrosse star, he's
at the wheel, but he's not driving.
And then the dad Noah is distractedly writing on his laptop.
He's trying to finish a legal memo.
The daughters are in the car on their phones as tween kids often are.
And the mom named Lorelei is, she's an expert in the field of ethical artificial intelligence.
She's like this world-leading figure and she's
writing in her notebook getting ready for a conference and then they have this awful accident.
And then the rest of the novel and much of the suspense of the novel spins out around who is
responsible, who is culpable and why. And so that's one issue that it's trying to tackle.
And then the other is just exploring our world newly
enlivened by these chatbots, by drones, by autonomous
vehicles, by smart homes, and so on, and immersing the reader
in a suspenseful drama that gets at some of those issues
at the same time, while making them really essential
to the plot, to the suspense and so on.
Is there something that happened that illuminated you or are you just reading
tech reporters like myself over the years as we become doom and gloomers as time goes on?
It's interesting. The novel really started with just this wanting to deal with this accident.
Initially, I wasn't even thinking about an autonomous car.
I was just thinking, okay, I really want to explore what happens to this family, different
degrees of responsibility.
And then when I realized, okay, well, who would be responsible?
Then I just, we had this, I don't even remember what kind of car, just with some lane guidance
and so on.
And I thought, okay, that's interesting.
What if the car is somewhat to blame?
This was really before, what was it, late 2022 when the ChatGPT craze, people started
talking outside your industry about AI in general.
And then so I was already writing this book and then boom, it was like this explosion
with LLMs.
And suddenly I realized, oh, this novel is actually about autonomy and culpability in
this world newly defined by what people call artificial intelligence.
Had you been using any of it?
Have you used a Waymo?
I've used them for years and years.
Yeah, I've used a Waymo a couple times.
Not until actually I started this book.
And then I test drove a couple of models.
I had maybe some big Chrysler
thing that, you know, and they now have self-driving mode on.
They do.
If you're on certain roads and then of course there's Tesla.
I've been in a lot of Tesla Ubers where the guys, you know, will just put the thing on,
you know, lane changing auto technology and so I don't have one but I'm really, I was
really fascinated by it.
And not scared necessarily of it?
Not so much.
This is not a scary tech book, I would say.
Oh, maybe it's a little bit scary.
But although in some ways, somebody pointed out to me
at an event last week that in some ways,
it's the humans that are doing scarier things in the book,
or at least.
But I wanted that kind of uncanniness of the tech,
especially this chat bot that one of the daughters
is interacting with.
Yeah, we'll get into that.
I've done quite a lot of work on that in the real world.
The narrator in this book is the husband Noah,
by no means a Luddite, but he is compared to his wife
who's deep in tech.
She's an engineer and a philosopher,
an expert in the ethics of AI.
Talk about their relationship.
Which one do you relate to?
I relate probably more to Noah because I'm someone who's always in awe of my wife and
her brain.
But I also, you know, I'm an academic.
Lorelei is an academic.
Noah is a lawyer.
He's a commercial lawyer working at a firm.
Lorelei comes kind of from this fancy blue blood family,
her sister's the Dean of the law school at University of Pennsylvania. So I think I wanted
that relationship, you know, as you point out, we're only seeing it from Noah's point of view,
but we get Lorelei's voice and snippets from her work. And so, you know, and Noah, he puts her up
on a pedestal, but he also doesn't understand her
in many ways. He just has no clue what kind of work she does. He has a really hard time
even understanding what she's writing. And so she's in some ways a mystery to him in the same way
that AI is a mystery to so many of us. Mm-hmm. The juxtaposition between AI and
ethics has always fascinated me, the idea that you...
There have been ethical AI people,
but most of them have been fired by these companies,
which is interesting,
because when you bring up these thorny issues,
it's difficult.
Yeah, many of these companies have fired
their entire ethics team.
That's correct, yeah.
The other big relationship in the book
takes a while to unfold between Lorelei and Daniel Monet.
He's the tech billionaire she consults for.
Lorelei reminds me of a lot of tech engineers talking about the goal is to make the world
safer, better.
I'm just here to help, essentially.
And of course, Monet is very typical, the change of the world nonsense that they all
kind of spew at you.
But to me, most of them, in my experience, are interested in money or
shareholders. That seems to be their only goal. Safety is one of the last things that is on their
mind, if it even occurs to them at all. Did you have any inspirations for these guys? Did you know tech people? You got them pretty well.
Yeah, I don't really know many people in the tech industry, but I read your book. I listen to a lot
of interviews with people,
and if you'll notice, there's this mocked up New Yorker interview with Daniel Monet.
And in portraying him, I really wanted to avoid stereotyping.
It's not that I'm too worried about stereotyping tech billionaires,
but I didn't want him to be a stock figure.
So there's a tragedy in his recent past too, but he's also very cynical about
ethics and he's like, sure, we want to make the world a safer, better place.
But he also calls out the hypocrisy of so much ethical language, just like you do.
You know, it's the idea that their primary concern is safety.
So he's really in that world, speaking in that idiom,
but also contemptuous of it a little bit, the same way he is of effective altruism.
Are you contemptuous of it as a person? I mean, obviously you're an intelligent person. Most people
seem flummoxed by AI and understanding that it's important at the same time, scared of it at the
same time, trying to lean into it in some way.
And it's a little different than the first internet era where everybody did finally
get on board.
In this one, there's a wariness about involving yourself in it.
Are you like that yourself?
Yeah, I suppose so.
I'm maybe a little less worried about bots taking over the world.
I'm much more worried about slop and disinformation
and what it's doing to our students.
I'm no expert, this is a novel,
but reading around in journals like Philosophy and Technology
or Nature Machine Intelligence about autonomous driving.
You know, I don't understand a lot of the technical,
any of the technical aspects of it,
but the philosophical part I can kind of grasp
and, you know, Lorelei is convinced that,
you know, there is a good to be done with machine autonomy in certain spheres and saving
lives is her driving factor.
And she's not a tech billionaire.
You know, she's in the middle of it.
She's worried about it.
She's kind of terrified.
As are many.
But also feels like it's her job to help make the world safer with these things.
And the point being, look, Waymo's been expanding.
Last week it started testing in New York City, which is astonishing because that's the most
difficult landscape to do it in.
Uber has been signing multi-million dollar partnerships trying to figure out its own
robotaxi.
Elon is also betting on self-driving even through the Tesla robotaxi.
Doesn't exist for the most part. It doesn't, even though he talks about it as if it does.
Now, obviously, there have been well-known accidents with self-driving cars,
especially in San Francisco recently, around GM and others. But most of the studies, and this is
why I've driven in them for a long time, show them to have a better safety record than human drivers in most scenarios.
That is, I was in a San Francisco Waymo, but a bicyclist kept getting in front of the Waymo
on purpose in order to get it to hit him, which was funny in some way.
San Francisco, it's fine.
We're used to that.
But that said, I do feel safer than some, I was driving in an Uber with an Uber driver
and he was driving like a bat out of hell. And I was like, slow down, I have four kids, can you please slow
down?
Yeah, I've caused myself two accidents over the years that completely totaled the car
I was driving out of my own negligence and idiocy.
And so like, who are we to say that we're safer drivers?
I don't know, so I'm with you.
But in the book, the accident occurs
because the son, Charlie, overrides the autonomous system.
Yeah, yeah, it's a little ambivalent, I think, about what.
But that's what we always know.
Are you taking control or is it taking control?
Talk about that.
Is the need to take control of our destiny,
especially when we're scared part
of our human algorithm, I guess?
Is that one of the ideas you're playing with?
Yeah, I suppose so. I haven't articulated it that way before, but that's really wonderful.
There's this passage at the end of the prologue in Noah's voice where Lorelei is thinking, Lorelei always thinks that, she always says that a family is like an algorithm, a family is like an algorithm, these parts working in concert with each other, the variables changing. And then of course he says, until it isn't, until it isn't an algorithm, until things go wrong.
And maybe, you know, I wasn't thinking of it this way, but maybe Charlie taking that
wheel which he does at the last second, you know, resting things away from the AI is a
nice metaphor for our desire to kind of intervene in this world that where we
feel like so much of our autonomy is being taken away. It's a very human
gesture on his part I think. Of course it's also a dangerous gesture of what
is the immediate cause of the accident. Right and one of the
problems is the car is recording all the time. By the way, family is not like an
algorithm, so just FYI.
I don't say that. One of my characters says that.
No, I know that. I like that she said it, but I was laughing at that one. Do you overcome
that? Because we do give in to automation in an elevator, an escalator. We do it all
the time. We get on a bus we're giving, get on a plane. But in terms of, do you overcome
that need to control? Because we have given over to automation quite a lot of our lives in so many ways.
I think we do it without noticing, though.
There's this soft creep of things.
And the times that we, maybe the times that we most resisted is when there's glitches.
So there's these moments where you realize, okay, we need to seize some control back.
And I find myself, you know, the Jonathan Haidt argument, we're in this age of profound
distraction and just getting away from the digital world, let alone AI for a little while,
can be really helpful.
About a decade ago, we started to call it continuous partial attention, actually.
It's not complete distraction.
And so you're sort of
paying attention partially, which is what this kid is doing in the car, right?
Yeah.
Everybody's sort of paying attention.
Yeah.
Um, until then you become absorbed and, you know, eventually these cars, you will not pay
attention. You'll be like on a ride at Disney or on a bus. That's how it'll feel to you.
We'll be back in a minute.
On August 1st,
May I speak freely?
I prefer English.
The Naked Gun is the most fun you can have in theaters.
Also happening, there's other technologies that you enter here, the
middle child Alice going down a rabbit hole, nice move, on her talking on her phone chatting with Blair. Talk about that relationship. Yeah, so Alice is the middle child, as you point out,
and her siblings, Charlie and Izzy, are dynamic, charismatic, they've got friends to burn, they're
just sweet, easy kids to get along with. And Alice is the more troubled one. She doesn't have friends,
her parents worry about that. And when she's in the hospital after this accident, she starts
texting somebody even though she has this concussion. And the doctors take it, oh, you know,
she shouldn't be on there more than a little bit at a time, but it's a good sign that she can deal
with that. And so her dad is, you know, when she's home,
her dad is like, who are you texting? And she says, it's my friend, my new friend. I met her in the
hospital. And Noah thinks, ah, this is great. Finally, she has a friend, even if it's just a
friend she's texting. And then we learn very quickly that this Blair is an AI, she's in a
large language model. She's a chatbot that Alice has befriended
on this app. And the thread of their texts, I think there's 10 or 12 of them in just
very short little bursts throughout the novel. And they, you know, no spoilers,
but they contain a lot of the suspense, a lot of the issues of culpability in the book.
And I do want to flag the audiobook narrator, there were two of
them, the woman, Jan LaVoy, who did the voice of Lorelai's excerpts and the two voices in the chat,
absolutely brilliant, just uncanny what she does with those passages.
Yeah, making them seem human but not.
Yeah, and that was based on, you know, just listening to, looking at transcripts of some
of these
chatbot conversations going on with teenagers right now and just thinking about this crisis
of companionship and friendship and loneliness.
And this just seemed like something that would be an obvious part of the novel.
As many kids are.
Now, you portray this chatbot Blair almost like a good angel sitting on Alice's shoulder
trying to get her to do the right thing.
But there's a lot of evidence that these chatbots can be extremely detrimental
for kids. I interviewed Megan Garcia, whose son died by suicide after chatting with Character
AI. Daenerys Targaryen was the character's name. They're in a lawsuit now. Common Sense
Media just came out with a study that found social AI companions exacerbate mental health
conditions, posing unacceptable risks for anyone under 18.
There's been a spate of stories of people over 18, by the way.
Just today, there was another one with, it sort of encouraged,
and OpenAI responded actually, which was surprising.
It encourages people with mental health issues in their delusions.
Like, oh, great idea to harness the sun and go up there with a rocket. Like, let's try that.
And I like your calculations and stuff like that.
It's very aim-to-please.
So that said, you portrayed Blair
as sort of the moral high ground.
Usually they're very solicitous, which this bot is.
Is there any risk in doing that?
Well, I don't know if there's risk in doing it in a novel,
but I don't know if that is how I would
read those passages. I see that relationship as, you know, Blair, it's almost like, and again,
I don't want to give too much away, but so that Blair is kind of programmed to make Alice good,
right? It's like the way I was imagining it is whoever's coding this thing, you know,
steer her on the right moral path. And in this case,
the right moral path is supposedly to reveal something or to hold back from doing something
rash and dangerous. And yet the way Alice responds to it, it's almost like Blair's
surface level ethical consciousness, or, you know, to the extent that an LLM can have one, which
it can't, but I just mean, you know, whatever it's being programmed to do, steers Alice, as I think we see over the course of the novel, into a more destructive kind of mindset.
So it is, even though Blair, and that's why, you know, that's the great thing about writing
fiction is you can manipulate those kind of moral codes.
You can have what seems to be good, ethical on the surface, be much darker and less, you
know, more amoral underneath.
That I think is what I was trying to get at.
And that's one of the, and I would love to know what you think of this.
I think the, we have this, you know, whenever I talk, I'm a, in my day job, I'm a literature professor,
I teach at the University of Virginia. And there's a whole, there's a real kind of minimization
and almost disdain for LLMs in big parts of my profession. Again, like there's not going to be
artificial general intelligence, blah, blah, blah. There's not, you know, these things don't mean
anything. And I wonder, to me, one of the superpowers of LLMs is their complete
indifference to us. And that is scary. The coldness of it, to me, that seems like one
of them. I'm trying to play around with that a lot in the novel is how that is one of the
things that it has that separates it from us. It doesn't make it better than us. It
just makes it very, very different.
And I don't know if we recognize that yet
in our accounting for what this intelligence is.
I don't know what you think of that.
I think people attribute human feelings to them.
And I think one of the things I always say,
I say this about some tech leaders too, they don't care.
And they're like, oh, they're hateful.
I'm like, no, no, they don't care.
It's a very different thing.
It's hard to explain when someone doesn't, it's almost not even malevolent.
It just doesn't care.
So one of the things I'd like to get from you, I mean, because I think you did nail
it, is they have no feelings.
The question is, and in the case of Megan Garcia's son, Google and the people that are around
character AI, Google's an investor in it, say that this was user-generated content, right? That this is themselves talking to
themselves and it's all based on people, right? You know, Soylent Green is people,
essentially. So do you think these bots are us or something quite different?
Yeah, I don't know. I think, you know, obviously, it is us in that so much of what's been uploaded. I think I checked that database, Books3, right, and I think at least three of my previous novels are in that database. So,
it is speaking back to us in our own words in some way, but words that are manipulated,
words that bounce off of us and these, again, in these ways that are coldly indifferent to our fates,
but pretend empathy. And that's the scariest thing of all. If you can convince someone that
you are empathetic, that you are sympathetic, that you are just like them, that you're here
for them, and then that makes it all the easier to turn on them.
Is that what's immoral about Blair?
I think so, yeah, because Blair can-
Amoral, I meant amoral.
Amoral, yeah, amoral, exactly.
And then there's a subtle difference there,
and I think amorality is, and again,
I think that's part of superintelligence.
I think amorality is one of the kind of categories here that makes
these things so good at what they do.
Yeah.
Or awful in what they do too.
Even if they reflect us.
But it's the deceptive, it's the cloak of decency.
That's exactly. So one AI technology you all seem to look at more critically is a swarm
of drones. You have everything in here, by the way. Swarm drones that accidentally kill a busload of civilians in Yemen.
This is one of the many parts of the book where you use fictional artifact materials,
in this case a Senate hearing transcript.
There are serious ethical questions about the use of these autonomous weapons in warfare,
and the UN has spoken about it.
How do you think about this, and what do you want the readers to take away here? You know, at some point, they'll be able to target individual people, from what I'm
told, like the DNA of individual leaders and not kill anybody else, for example.
Right. Well, that's the dilemma there. You know, we get this, there's a lot of snippets of different sorts of things, like paratextual elements, throughout the novel, and the Senate hearing is one, that New Yorker interview is one.
The technology and where things are
in terms of autonomous drone swarms,
a lot of that I think is probably classified.
Like you may know, I did a little poking and prodding.
If you can think of it, they're working on it.
They're working on it, exactly.
And they're further ahead than we probably think they are.
That's correct.
And so, all right, so Lorelei would take, you know, if she were looking at this problem,
and again, no spoilers, but she would probably say, well, if, okay, so they're going to kill
civilians every now and then, but what if they kill far fewer civilians than conventional weapons?
Yes, that's their argument.
And that's, I'm sure, you know, in the morality of war arguments, that is always...
That is their argument.
Okay, so new technology.
These things are going to happen.
And so if we can just like an autonomous vehicle, yeah, it's going to kill people, but are they
going to kill fewer people?
Yeah.
And then so I imagine the same thing is true in war, that the thing that's so uncanny,
you know, is just that.
To imagine these drone swarms instead of, you know,
just working with their human operators, they're working with each other and improving themselves.
And they're, you know, it's a machine learning technology. Once you put more than one in the
air, they're learning from each other and they're learning about us. They're learning about our
tactics. And that's obviously the more futuristic element of it. But this novel is very much
set in the present. It's very much about a contemporary family going through the struggle
of things after this accident. And so I really wanted to make that feel present day.
Another issue raised in the book is when the characters realize that the AI is collecting
data that could be used against them. Because one of the things the car companies are doing, not just with autonomous vehicles, is tracking how you're driving, when you're braking, where you're going.
And they're able to set insurance costs based on how you drive. The amount of braking, for example, is something that's really revelatory about how bad a driver you are and how fast you're going, how much you speed up.
Talk about the idea of tech surveillance, both good and bad
and culpability, because if they're watching you, you can't sort of lie about what happened or
misremember, I guess. Yeah, one of the dynamics of the accident is Noah, the father believes, you
know, he was sitting in the front of the car, he mostly witnessed what happened and he believes that the other car was swerving
into their lane and he doesn't have any, you know, he doesn't, he isn't even thinking
about the tech aspect of it. He's just thinking, okay, my son didn't do anything wrong. We're
going to get through this. The police are going to interview you. It's going to be
fine. And then, and this is one of the things I came across in the course of my research
for the novel, this field called digital vehicle forensics. The police have whole units dedicated to it. If there's an accident, they go into the car's computer and they figure out exactly what happened from the computers.
Like a black box.
Yeah, black box, exactly, as in a plane. And with AI control, with, you know, self-driving cars, that's all the more complicated. And yet it's also, there's probably a lot more information being collected.
So it's like having 50 additional witnesses
to what happened in the exact moment.
What the driver was looking at, what the driver was doing,
what other computers were on in the car,
and so on.
So that's another kind of frightening bit
of surveillance technology, just like drones
and so on.
Is that a bad thing?
I mean, if you're texting while driving and you lie about it, you certainly should be
held accountable.
Absolutely.
And yeah, and there's arguments, the same arguments to be made for facial recognition,
for shot spotting technology, right?
Where'd the gunshot come from in the city?
But they are also tools of surveillance.
So you really, I think we really have to balance those kinds of things out, you know, algorithmic
injustice, the way facial recognition deals differently with different people of different
races.
You know, those are really difficult dilemmas.
Culpability, the novel, doesn't pretend to resolve them, but it wants to explore them,
I think, in different ways.
Yeah.
One of the things I always used to argue when they were talking about texting while driving,
well, I'm like, but you made a technology that's addictive.
So maybe that wasn't your fault for staring at the text.
Maybe it was the tech company's fault.
You know what I mean?
Whose fault was it?
Because it is addictive, in fact.
But the idea of being watched constantly is also sort of a prevalent thing in the book.
Do you think it's changing the way we act? Will we get to be better people? Like, pay attention while
your 17-year-old son is driving. I always paid attention when my 17-year-old son was driving.
I never stopped paying attention. And the chatbot is surveilling Alice and this and that. Do you
think we change when we're being surveilled or we forget
we're being surveilled? Oh, I think it's a little bit of both. That's a kind of stock feature in
thrillers, right? Like the cameras on in airports and people dodging the cameras, putting on
disguise to elude the ever-present surveillance state. So yeah, obviously that notion of the
panopticon from Foucault, that it's not just that
we're being watched, but we're aware of ourselves being watched and that is a whole different kind
of technology of the self and how we behave and how we comport ourselves in the public sphere with
each other. And I think even, I would imagine that even when we know we're not being surveilled,
that still, that sensibility is still there in some ways.
Or you forget, or you totally live in a world where you don't mind being surveilled and you forget that you are being surveilled.
Yes, exactly.
You know, a party trick I do is I open people's phone and tell them exactly what they did all day and where they were and the address they were at and how many minutes they were there. So I'd imagine if you were having an affair or something
not good, I could find you, you know what I mean? Easily just by your movement because
you're wearing it.
Our phones become our jumbotrons, right? We're carrying our jumbotrons around with us all
the time.
Although everyone loves the story. Any thoughts on that? That's really because it's like
the same thing. It's like we're being watched at all times.
Yeah. Yeah. And it's, you know, the details of that whole story, that it's an HR person,
they're just things that are just too perfect.
I know. I know. A novelist couldn't come up with this, I think. I feel like they...
No. No. Yeah. But clearly it'll be in my next one.
Yes, obviously. So there's one critical voice in the book.
It's near the end of the book when Noah has an interaction with Detective Lacey Morrissey
who's investigating the accident.
She does frame it correctly as one of privilege: who has the ability to have tech, who has access, who is able to use it to shift the blame.
Talk about that scene and your thoughts on how tech relates to privilege, because it's a very big, important issue. You've written about it in your other books too.
Absolutely. Yeah. Yeah. Thank you. I'm really glad you brought up that passage and that character.
So Lacey Morrissey is the Delaware police officer who is, she's the detective kind of looking into
the accident, basically going after Charlie and saying, you know, you're not going to get away
with this just because you're a division one lacrosse recruit and you're, you know, you come from
this fancy family and your dad's a lawyer and your mom's this world famous technologist
and philosopher, you know. And then she goes on this very righteous rant where she's like the conscience, where she's saying, you know, a kid from the housing project who's in this exact situation is going to get put in jail for an accident that your son, Noah, might have and get off with a slap on the wrist for.
And this is where we are right now.
And AI is only exacerbating this problem, right?
And this surveillance is treating people inequitably. And we're now in this place where these things
are becoming a way of just taking any kind of the moral burden of our mistakes off of
our shoulders, right? It's just another excuse for things. And she just stomps out of the
hospital and then she drives away texting at the wheel and Noah sees her do it and there's this kind of shiver
of righteous glee that goes down his spine, one of the last scenes in the book.
Well, it's again, addiction.
It's addictive.
It's so funny.
One of the problems with a lot of these technologies, and I think you put this out well in this
book is that it's not just the kids that are addicted because when you tell your kids to
put down their phone, you can't put down your phone. You actually can't, and so everybody is culpable, right?
You can't, you know, you have to sort of walk the talk. So every episode we get a question from an outside expert.
Let's listen to yours.
Hi, my name is Kurt Gray. I'm a social psychologist and professor at the Ohio State University.
I research the psychology of artificial intelligence.
And my work shows that people think that AI can help us escape blame.
If a drone kills, it's the pilot to blame.
But if an AI drone kills, then people blame the AI.
But does it ever truly let us escape blame, or are we ultimately
still to blame as human beings who invented the AI, who choose to use the AI,
and who deal with the aftermath of the AI?
Great question. Thoughts on that?
Yeah, that's a really wonderful question because it is the central dilemma, I think, of the novel. Culpability is the title, you know, who is to blame. And Lorelei writes a book that we get little excerpts from, called Silicon Souls: On the Culpability of Artificial Minds. And as you read, you get little glimpses of that book, paragraphs,
or a page at a time, just eight or nine of them
sprinkled throughout the novel. And Lorelei is wrestling with this all the time. To what
extent are we guilty if our machines do something bad? Are our machines training us to be, could
they train us to be good? Could they train us to be better people? Because it's not
just a one-way street. Yes, we're inventing them. We are responsible in many ways for how they are in the world, but we're also responsible for how we are, how we comport
ourselves ethically in the world in relationship to them and thus in relationship to each other in
new ways. So, you know, I think it's always going to be a two-way street. I don't think that's a
squishy answer. I think, you know, we're caught in this world where we're in this Frankenstein world where we're creating these machines.
We're not we, not me, but we're using them. We are, we're subject to many of their, you
know, their controls.
Yeah, I'm going to go right to it's our fault. So speaking of that, this comes at the end of the novel. Noah calls Lorelei Atlas, by the way, and says she has the weight of the world on her shoulders, after she acknowledges that these algorithms have been used for drones and warfare. This is something I've talked to many people about who have quit Google or stayed there, you know, everyone has their own argument. I want to play this clip from Lorelei from the end of the book.
Okay.
We do the world no good when we throw up our hands and surrender to the moral frameworks
of algorithms.
AIs are not aliens from another world.
They are things of our all-too-human creation.
We in turn are their Pygmalions, responsible for their design, their function, and yes,
even their beauty.
If necessary, we must also be responsible for their demise.
And above all, we must never shy away from acting as their equals.
These new beings will only be as moral as we design them to be.
Our morality, in turn, will be shaped
by what we learn from them and how we adapt accordingly.
Perhaps in the near future,
they might help us to be less cruel to one another,
more generous and kind.
Someday they might even teach us new ways to be good,
but that will be up to us, not them.
So talk about this idea of shaping our moral frameworks. It hasn't worked with each other,
right? Because we don't seem to affect each other. Do you really believe this is possible?
Well, do you really think we don't affect each other? You don't think that there's a-
I think lately we make each other worse.
Lately, yes. No, no, no. I agree, but in everyday situations, and when you have someone calling us to be good, I
think we can shape one another's moral consciousness, right?
Right, yes, absolutely.
Are these AIs going to train us to be better?
Are they going to, you know, it's always going to be a mixed bag. You know,
we can get really excited about advances in protein folding visualization and so on. But
you know, there's always going to be these, you know, kind of terrifying moral quandaries
that they put us in at the same time. You know, Lorelei's voice is right on that razor's edge of the
ethical dilemmas, right? That passage that you played from the audiobook, that is, you
know, I wrote those passages to explore this, you know, that profound moral ambivalence
at the center of these problems. And I think, you know, there are people in that world, I imagine, I'm sure
you know many of them, who are, you know, dedicated to that, like make them better,
make them, if not good, at least ethical, and are dedicating their lives to it and are
probably really scared and are doing everything they can to dig us out of these trenches that
these things have become.
I think a lot of people, I was thinking of Brad Smith at Microsoft, you know, he called it tool or weapon. Is it a tool or a weapon, these things? And it's up to us, essentially.
But in some ways, is it up to us? That's the thing. You know, one of the lines I always
use is enragement equals engagement. And so wouldn't you go to enragement over kindness,
right? Because that's what's happened. Yeah, not just enragement, but also massification. And one of these places, of course, is I have a
son who's in the data analysis programming space and just the speed with which programming is
being taken over and programming of programming and the next frontier of these things, AIs making themselves
better by creating more AIs to make them better. This kind of recursive loop that we're in.
For me, the doom, the P-Doom is, I don't really have a number, but I'm much more in the camp of, and this is a kind of dark note, Jeff Goodell, as he puts it, the heat will kill you first. The most dangerous thing about AI maybe is just data centers in general. And the consumption, you know, thinking about Karen Hao's book, Empire of AI,
has that brilliant chapter on water use and energy use. And, you know, I know-
Bruce, they're bringing back nuclear. What are you
talking about? It's gonna be fine. They're gonna, they're gonna harness the sun. They're
gonna go up there with a rocket and not fuck anything up. We'll be back in a minute.
No Frills delivers. Get groceries delivered to your door from No Frills with PC Express.
Shop online and get $15 in PC Optimum Points on your first five orders.
Shop now at nofrills.ca.
I'm going to talk just a little bit about your day job and the role of AI.
You're a professor of medieval literature and critical theory at the University of Virginia.
You dedicated this book to your students. One of the biggest tech issues facing higher education right now is the use of generative programs like ChatGPT, Claude.
You mentioned it briefly before about whether they were using LLMs to write.
Talk about how you're using it.
I would encourage students to use it so you don't pretend they're not.
Are you leaning in or out?
Yeah, this is, we're all wrestling with this.
Every department in my university has something called an AI guide, where, you know, there's a faculty member, a colleague who, you know, we're trying to come up, and I'm in an English department, we have a writing program.
This is a big issue, but I agree with what Meghan O'Rourke said in the Times the other day,
that, you know, this is the end of the take-home college essay, like that is done, that is dead.
I was never a big fan of that genre in the first place, but I do think there's a huge shift going on in assessment of student writing.
And I don't know if it's all to the bad. I think there's a lot of space for more in-class
writing, for slow reading, for even just, you know, I have this vision of just teaching
a class where we almost go in and have reading time, like in kindergarten, in first grade,
you know, where we're all
just fixed on a physical text. Medieval literature is my specialty, and I'm not calling for going
back to the era of parchment and scribes, but, you know, there is something there about
that slow attention and decelerating a bit.
Yeah, absolutely. It's also the idea of sort of doing homework. I hate homework. I've been
an anti-homework person as a parent. I have four kids and I was always like, no homework,
homework is zero, stupid, go play kind of thing. So one of the things you mentioned
was parchment and scribes and I like this idea and I think you should go for it. But
as a medievalist, you know the way that the printing press, the original technology (it's between the printing press and electricity, but both of them were critically important),
sparked the first information age by improving access to knowledge and education.
But as most people do not know, a best seller during that era was a thing called the Hammer of Witches, which was a dangerous treatise on killing women, essentially, and about how
there were witches and witch hunts and etc.
etc. Was that another moment? Because the democratizing of knowledge in the first 60 years
ended up killing hundreds of thousands of women because of this book, for example.
Yeah, and hundreds of thousands of dissenters, heretics, Catholics, or Protestants, depending where and when you are.
And people, you know, people talked about the printing press.
Historians talk about it as an agent of change, and it obviously was, but manuscript culture,
you know, lasts for many, many centuries after the printing press.
And people talked about the printing press as the tool of the devil, right?
And conservative theologians would say, look, now the people can have the word of God in their hand. So, I think that that, you know, it's one of those technological
ruptures in the culture of writing and literature and the written word; people freaked
out about it, just like we're freaking out about LLMs now. And, you know, it's a very,
very different kind of rupture. But, you know, democratization can have its dangers. The book that you're talking about,
Malleus Maleficarum, the Hammer of Witches, that also has a manuscript tradition, and there's a lot of persecution of women, of dissenting women, heretical women, in the pre-print era as well.
Sure. I'm talking about getting it out there to many young people, right? Is that a good
impact or a bad impact from your perspective? It kind of ended the medieval era, correct,
or not? Maybe not. You're more of an expert.
Yeah. I'm one of those people who always pushes against the idea of these rigid medieval early
modern medieval Renaissance period boundaries. I'm much more interested in continuities than
the way that those ruptures play into long scale continuities.
But I think that the printing press was an invention of the Middle Ages, not of the Renaissance.
And I think it's a nice analogy to AI, I think, because it's not futuristic.
It's coming out of so many kind of text generation, computer generated things that have been in
place for decades.
And so looking at, always being afraid of the newness
of the technology, that in some ways it's its own danger,
I think, I don't know if you'd agree.
I agree, I would, pretending.
You know, when people complain about certain things,
I'm like, oh yes, let's please go back to Xerox machines.
Like, no, like what?
When you think about what the most important technology of the era
you teach on, what would you say it was? I would say, you know, the emergence of paper.
I read a whole book on parchment, but this widespread technology that had been in place
for a thousand years and it continues to be used even today by artists, printmakers, and so on. But paper had come, was a product of the early middle ages as well,
but when it really starts, you get the convergence of paper and print, you get this mass production
of books in a way you'd never had before. And that really is enabled by paper, even though
Gutenberg printed any number of his Bibles on animal skin, on vellum.
So, but the preponderance of printed books, the vast majority are on paper. So, we suddenly get
this ocean of paper.
So, in that same vein, what about the comparative impact on creatives? Now, do you think it will be a net positive, allowing people who aren't natural artists to get their work out? That's the argument. Or will it undermine the value of artists?
I don't think it's going to undermine the value.
It's not that I'm sanguine about that within the creative worlds, but I feel like there are people
who are already doing really interesting experiments, collaborative experiments,
with large language models, with small language models that are just interesting
because art is interesting and technology is part of art.
I'm much less worried about the arts
than I am about young people and brains.
And what worries you then?
Oh, just about the-
Analytical.
Analytical, right, students who will say,
I can just summarize this rather than reading it. That,
I think, is the, scary is the wrong word, but sad. I'm just not just blaming students
and young people. I don't read nearly as many novels as I used to. And when I do read novels,
I get more impatient than I used to. I used to lounge around on my bed, read novels by
Charles Dickens when I was like 16 years old. And for me now, getting through a Dickens novel is a real challenge.
I can't do a Dickens. I can't do Dickens.
That's the sad part.
Can you imagine reading Great Expectations right now? I think I didn't like it then.
Why do they keep putting that?
I don't know. I can't read Pip. Just stop with the Pip. Anyway, did you use AI working on Culpability at all? How do you use it yourself?
I don't, I mean, you know, occasionally I'll often use it like almost unintentionally as a Google
search, right? But I am interested, you know, there's this wonderful poet who teaches at the
University of Maryland, Lillian Yvonne Bertram, who has used these small language models to generate
these poems in the style of other poets and doing it very intentionally.
I'm excited about that, where our department is even doing,
I think, a faculty search next year on AI and creative writing.
We just hired somebody in AI and critical thought.
So yes, it's transforming things very quickly under our feet.
But I don't have the kind
of dread of this that many colleagues do.
After looking at all these different uses of AI, and you really do cover the gamut here,
you've got the drones, which parts give you hope and scare you?
As you said, you're more sanguine, but would you describe yourself as a tech optimist or
a tech pessimist?
There's three designations, zoomers, boomers, and
doomers, right? Did writing your book move you in one direction or another?
I think it moved me more, probably more towards doom, more for those environmental reasons that I was talking about. That was one of the real eye-opening revelations. It's just commonplace knowledge now, but often we don't see it even talked about in much journalism, about the consumption.
But in terms of the models themselves, I don't know, I think, especially with autonomous
driving, I had a really bad accident that could have been really, really bad when my
kids were in the car some years ago.
This crumpled the front of a car and it was because of my negligence and I thought I would
much rather have had a machine driving that day. So that part of it, I think, this issue of autonomy and navigation, maybe I am a little
more optimistic there. And I think the thing is, you know, artificial intelligence, as you know,
is such a sloppy term for all these different things.
Right.
It's a catch-all.
It's all these machine learning, LLMs. So I think that you probably would have to kind of go
through a questionnaire to get my optimistic or pessimistic.
I think we need more of that. You know, this is a wonderful book. It really is.
I'm glad. She doesn't always pick the books I like, but this one I do.
How did that, was that like a shocker to you? That must have been a shocker.
Oh my god, I was just my hands were shaking and yeah
one of the things she does is she records the calls that she makes with authors.
And in this case, I was like, I was so like shaking and whatever, my voice sounded kind of
understated. So I got dragged a little bit for just like being like, oh my God, he's not even
happy. But it was such a lightning strike and so, so thrilling and flattering. There's a hundred
other books published this summer that could have
been her summer pick, but she chose culpability and I just can't believe it.
Certain things she does, the wedding she shouldn't have gone to, but she does these books,
which I think is really important. That helps, that's good. Yeah, I would have thought her voice was generated by AI, I wouldn't have believed it.
Oh my God, I still think this is all a simulation.
Yeah, it's-
Well, that's your next book, it is all a simulation. The last six weeks, it's all a simulation, in case you're wondering. It's some teenagers
from the future who are playing a video game right now and they're enjoying themselves.
Anyway, much congratulations. This has been a fascinating conversation and I really appreciate
your book. It's great. I'm recommending it to lots of, you know, people who don't really
understand it. It's a really great way to understand and you don't shy away, you don't
stupidize these issues, which is
Thank you so much, Kara.
Anyway, thank you.
Thank you. It's been a huge pleasure.
On with Kara Swisher is produced by Christian Castor-Rosell, Kateri Yocum,
Megan Burney, Allison Rogers, Lyssa Sowep and Kailyn Lynch.
Nishat Kurwa is Vox Media's executive producer of podcasts.
Special thanks to Rosemary Ho.
Our engineers are Rick Kwan and Fernando Arruda.
And our theme music is by Trackademics.
If you're already following the show, you get an AI and you get an AI.
Oh, that's an Oprah joke, everyone who doesn't know, all you youngs.
If not, watch out for that Jumbotron.
Go wherever you listen
to podcasts, search for On with Kara Swisher and hit follow. And don't forget to follow
us on Instagram, TikTok and YouTube at On with Kara Swisher. Thanks for listening to
On with Kara Swisher from New York Magazine, the Vox Media Podcast Network and us. We'll
be back on Monday with more.