Embedded - 514: Just Turn Off All the Computers
Episode Date: November 14, 2025. Philip Koopman joined us to talk about embedded systems becoming embodied and intelligent. We focus on the safety considerations of making an intelligent and embodied device. Phil's new book is Embodied AI Safety: Reimagining safety engineering for artificial intelligence in physical systems. It uses robotaxis as an example as it discusses safety, security, human/computer interface, AI, and a bit of legal theory for tort negligence. If you'd like a taster, Phil gave a wonderful summary in his video: Keynote Talk: Embodied AI Safety. This new book is intended for a wider (less devotedly technical) audience than his book How Safe Is Safe Enough?: Measuring and Predicting Autonomous Vehicle Safety. Phil was last on the show in episode 473: Math Is Not the Answer, where we spoke about his book Understanding Checksums and Cyclic Redundancy Checks. Thank you! This episode is sponsored by you, our listeners! If you'd like to become members and get ad-free episodes as well as bonus shows, sign up at Patreon or Ko-Fi. Thanks for listening.
Transcript
Hello, and welcome to Embedded.
I am Elecia White, here with Christopher White.
And our guest this week, a returning guest, is Philip Koopman.
And we're going to talk about his new book and, well, if not robotaxis, probably the singularity.
Great.
Hi, Philip.
Welcome back.
Hi, thanks.
It's great to be back.
Could you tell us about yourself as if we met, I don't know, at an embedded world dinner?
Okay, well, to try and keep it short, I've been doing embedded systems since around 1980, give or take, depending how you want to start counting.
I've been doing self-driving car safety since the mid-90s.
I've done, I stopped counting at 200 design reviews, various embedded systems, automotive,
and natural gas pipelines and big scary power supplies
and little tiny not-so-scary power supplies
and a little aviation, a little medical.
I've seen all sorts of stuff.
So lots of embedded stuff, just plain old embedded.
But I've also gotten into the self-driving car safety thing
a lot more heavily of late
because, you know, $100 billion gets you a lot of progress.
And so I've been on top of that lately.
But I've always been an embedded person at heart. And so now I'm sort of pivoting out. I retired from Carnegie Mellon University
a few months ago where I've been teaching embedded systems. And I'm pivoting out to be a little
bit broader, more generally embedded, and not just 24-7 self-driving cars.
And yet you did recently write a book.
Well, it's on embodied AI safety, which is more than self-driving cars. But I suspect this is
the main topic of conversation today. Yeah, we'll talk more about it. Are you ready for a
lightning round?
Sure, why not?
Would you use a robotaxi, and should it cost more or less than a human-driven taxi?
I've been in them. It depends whose robotaxi it is.
And I think ultimately it should cost whatever people are willing to pay.
You know, that's the market forces. I don't get to control that.
Do you think the singularity will happen in your lifetime?
I have no opinion on that.
But remember, the part where I said I just retired, so time's running out on that one, so I'm guessing no.
Do you think fully autonomous vehicles will happen in your lifetime without me defining what fully autonomous means?
Or what lifetime really means?
I'm going to say no, but the nuance here matters, because even the Waymo robotaxis have remote support and assistance.
What course at Carnegie Mellon University was your favorite to teach?
Well, I was teaching embedded system engineering, and every year it changed a little bit,
but by the time the dust had settled, it was half embedded software quality and one quarter embedded system safety and one quarter embedded system security.
And I was delighted to teach that course for many years.
Do you prefer AI or ML?
Depends what you mean by AI.
ML.
ML is the current new hotness for AI.
It is?
It was the new hotness six years ago
Well, and now it's back to transformers.
Oh who knows right
I took an AI course, including a course project in AI, in 1981.
I'll just stop there.
What were those called expert systems?
Oh, that was before expert systems.
Expert systems were the new hotness at that point.
Okay.
Expert systems were the new hotness when we went to school.
Right, right, right.
That's what I did my project.
I remember back then building a couple neurons in the neural net,
and I remember building an expert system resolver that went way faster than anything else.
And as the guy I did it for said, it asked questions really fast.
Do you have a favorite fictional robot?
I guess my favorite is Huey, Dewey, and Louie.
Although we have to mourn for poor Louie because he didn't make it.
And I guess you have to be at a certain age to get that one.
That's Silent Running.
Oh, right, right.
I haven't seen that movie in a very long time.
I should add it back to my list.
There you go.
And I have three Roombas and they're named Huey, Dewey, and Louie for that reason.
Not everyone would get why they're named that.
Do you have a tip everyone should know?
Yes, I have two tips everyone should know.
The first one's a callback.
Don't run with scissors.
Some of your listeners will know why I said that, but I'm not telling.
The other more serious one is checklists are just amazing for the right thing.
So I travel a lot, and I have a checklist that I use every single trip.
And most trips, it saves me from forgetting something.
And yeah, you can buy it at the far end, but if you only have eight hours of feet on the ground, spending an hour to track down something is a real pain in the neck.
It's funny because there's been so much in medical and aviation about how checklists are critical.
And whenever I use them, I'm like, yes, this is very useful, even though I feel kind of silly because it's all stuff I should know what to do.
Look, it's really hard to find a drumstool at 11 p.m.
Yeah.
When you have to, when downbeat is 11:15.
I'll give you my checklist story because it's a worthwhile story.
I used to drive submarines for a living.
I have a combat medal from the Cold War, but I can't tell you why, okay?
And part of, on a submarine is it's...
Giant squid.
It was a giant squid, wasn't it?
No, I had nothing to do with giant squids.
The thing about a submarine is it's a large tank, right? You know, basically there's people inside this large tank of air,
and the saying is you want to keep water
out of the people tank. You really don't want
that water coming in.
And so before, every time you leave port
before you dive the first time,
two people, not one, but two people
go around and check the position of
valves and openings and stuff, and there
are literally hundreds of them.
And if even one of them is off,
everyone could die. So
you have this checklist, this laminated
thing, you go in with a grease pencil, and you
check off every single thing, one person does it, then a second person does it, because it'd be
basically impossible to keep track of all of them. And the new guy, the new guy gets the hardest one.
So I did the bow compartment. I don't know how many times I did that checklist. It was like four hours
of this valve is open. This valve is closed. Just crazy. And so I learned the value of checklist there,
but it's for the right thing, it doesn't replace thinking. But the way I look at it is, if you're really, really smart, why would you want to waste brain power on remembering to pack your underwear? Why should you worry on the way to the airport, what did I forget? And the
answer is, I know I completed my checklist. I know that I'm careful about completing my
checklist, so I'm simply going to choose not to worry about missing stuff. And it's not
perfect, but my, you know, forgot something on a trip went from 20 or 30 percent down to 1 or 2
percent.
Having just had a funny conversation with someone who forgot charging cables, that is exactly
the sort of thing most people forget.
I have another trick that I have two sets of toiletries, and I have a backpack that's always
packed with, you know, my travel charger stays in the bag, it never comes out.
That also helps.
But you have to understand that, I don't know, I was past 100,000 miles this year by summer.
So if you travel that much, it's worth it.
Don't you have to live on the airplane to get that far?
Pre-pandemic, when I was doing the networking to make UL 4600, which is a self-driving car safety standard, happen, I figured out that I'd been going about 25 miles per hour for the entire year, 24-7.
And I was like, I don't think I ever want to do that again.
Are you traveling for speaking engagements, for training, for fun?
This is mostly speaking engagements.
There are a bunch of things I'd been saying no to for years because, first of all,
they didn't travel during the pandemic, but second of all, because there are just too many events.
And now that I'm soft-retired from the university — I still do plenty of consulting
and all that other stuff, which is fine.
But it frees up a little more time.
I don't have to be back in Pittsburgh every week to teach all these kind of things.
So a lot of these are things I've never been able to go to in the past, and it was pent-up demand.
So it's not clear to me whether the next year will be as crazy or not, but probably not as crazy as this year.
But that's okay, because I get to meet new people I haven't seen before, go to events I haven't been to.
I spent, in September and October, I spent five weeks in the EU on four separate trips.
Because at some point, spending a week in a hotel room in Munich just wasn't what I wanted to do.
So I flew home for the week.
And yet, I could totally spend a week in Munich.
Well, yeah, but I've lost count of how many times I've visited.
So I've sort of seen everything that I need to see.
Yep.
Okay, let's talk about your new book.
When we spoke last, a little over a year ago, we talked about understanding checksums and CRCs and how the noise in your environment should affect how you choose to protect your data.
There was a lot more than that.
I'm sorry, but...
That's okay.
Yeah, there was a CRC and checksum book, which had nothing to do with anything other than that it was a hobby project across my whole time at CMU, so I bundled it up in a book and now I can move on.
That was that book.
And then the previous one to that is How Safe Is Safe Enough?: Measuring and Predicting Autonomous Vehicle Safety.
Correct.
Who was the audience for that one?
So the audience for that one was regulators and engineers that work at the self-driving car companies.
And there were some folks out of that group who read it.
There was a state representative in Washington who read it.
And there's some parts in there that say, skip this if you don't want to go really deep. And I presume she skipped those and that's fine, but she got a lot out of it.
So there are lots of folks who read it, some news folks, reporters, technical reporters,
but it was, it did not hold back from being pretty technical.
The new book is Embodied AI Safety, and it is kind of for a similar audience, but I tried a little harder to make it more accessible, especially the front half-dozen chapters, so it works for more non-technical folks.
And hopefully I succeeded.
But it's for that audience. And there's no math.
One of the things I've done, even in the CRC book, is I don't use an equation editor because as soon as you do that, it dramatically limits who can access the book, right?
So the math is, can you compute averages? That's about the level of the math, right?
And it's designed — the old book a little bit, but the new book especially — so that anyone who can understand technical concepts and is motivated to really understand what safety means when there's AI involved should be able to read most of the book, if not all of it, and really get a lot out of it. That's the goal.
Would you say it's a pop-sci book?
No, I would not. I don't think so. So it's not ever going to be sold on newsstands.
I mean, if there's a market for it, that's great, but I'm not holding my breath on that.
And in particular, I've read some AI books lately that are more like a pop-sci book, whatever that means, but more general audience.
And the embodied AI safety book requires technical sophistication, and it requires a lot of thinking.
It's not a beach read.
It's not a casual read, where pop-sci often is more of a casual read.
So I think anyone who is smart and wants to learn the technology, and can understand how things work in an engineering kind of way, has an engineering kind of mindset, even if they haven't been trained as an engineer,
should be able to read it.
But it's not intended to have that broad appeal.
So you've gone from one that was more engineering, and then we've gone to one that is...
Ultra geeky.
The CRC book is ultra geeky.
I'm not going to be bashful about that one.
And then this one is a broader audience
with insurers and journalists.
Yeah, maybe 20% broader, not way broader, but a bit broader.
Because it turned out insurers were reading the first book, too.
So I basically found out who was reading the first book and tried to tailor the book to hit all those folks, make it a little more accessible to them.
Is your next one going to be pop-sci?
I have no idea.
I'm taking a little bit of a break from writing books for now.
This is enough books in a short period of time for now.
Yeah, it has been a lot.
And I've been traveling a lot, which makes it hard to write while you're traveling.
When I wrote my Embedded Systems book, there were two people who had worked for me who were my designated audience members.
Not that they read it, but I assumed that if they didn't have that information, I needed to explain it.
And if they did already have that information, like they knew what an average was, I didn't have to explain it.
Yeah.
Did you have audience members in mind that you used to define what you needed to explain and what you didn't?
I did, and it's going to be a surprise because I'm going to name the person and she doesn't know it.
There's a reporter, Junko Yoshida.
Have you ever run across her?
I have not.
I've met her a long time ago.
She started getting involved in self-driving car safety back in the pre-self-driving car era in Toyota Unintended Acceleration.
She was an editor at EE Times, which you probably remember back when that was a trade rag, right?
So she was at E.E. Times, and she was covering the Toyota unintended acceleration, and I met her as a result of that.
And she and I have stayed in touch over the years, and she's been covering all sorts of technology.
things, chips and AI and machine learning. And I've had a lot of discussions with her about this
technology. And one of the things that occurred to me to do was I should ask her what she
thinks she needs to know, what she needs to understand. And so I pitched the idea of the book
to her. And she said, I want to read that book. And I said, aha, there's my audience. Because
she's not a person working at a robotaxi company, because I need to have broader impact
than that.
But her skill set and her ability to understand things is pretty representative of the technically
sophisticated but not trained as an engineer set of folks.
And I've also put together in my mind a composite of regulators and legislators and
policy folks and other journalists I've talked to.
And so it was written to make sure it was accessible to them.
That having been said, engineers are going to get a lot out of it, too, because I also teach.
And the other composite I had in my mind was the students who take my class, what would they find accessible, what would they find worthwhile?
So that was sort of the composite audience I had in mind.
And having read most of it — to be realistic about my schedule lately — a lot of it I nodded through, yes.
Yes, yes, yes, yes.
But it was information that I didn't have a previous place I could point to and say, look, expert in the field says to do this, let's do this.
So I don't, you know, so I can steal your credibility as my own.
Well, for someone like you, that's no surprise.
And that's great.
Part of the challenge is boiling things down to be a simple, concise explanation that's close enough to not be misleading.
Oh, yes.
Right?
That's always really hard.
And the more experienced you are, the better you get at that, right?
So part of this is an exercise.
There's four chapters that go through safety and security and machine learning and human factors.
And then there's sort of like a little – I didn't make it a primary chapter,
but there's a mini chapter on how tort negligence works.
And it's important to have those references to point to if you already know them.
But what I've also found out is a very large fraction of the folks working in this field are missing one of those pieces,
especially the tort negligence.
So your experience of, yeah, yeah, nod, nod, great reference to point to, is great, but there's other folks who will nod, nod, nod, and say, I had no idea,
right?
And so some of it is just the self-audit to see where the holes are and where the gaps are,
because that's important to know.
I have a handful of favorite quotes, and some of them are pretty eclectic,
but one is, a man's got to know his limitations — Clint Eastwood, right?
But it's important.
You have to know your limitations.
And part of the purpose of this book is for folks who are really doing it for a living
to make sure they don't have any blind spots.
And that was super useful.
And some of the terms were ones I wasn't sure of, that I had picked up in engineering meetings but hadn't quite followed all the way through.
Well, okay, let's start with hazard and risk assessment and risk mitigation.
This was a process I learned from FAA guidelines.
Yeah, HARA — hazard analysis and risk assessment.
That's actually an automotive specific term, but the idea is very general.
So I happen to use the automotive term, but it's certainly a very general idea.
And it's a matter of trying to figure out everything that can go wrong
and then trying to figure out how to make it not go wrong.
Or how to deal with it if it does go wrong in a safe manner?
Yep. It's identify all the things that can go wrong that can cause loss, that can go wrong,
decide how likely it is, decide how big a loss it creates,
and then determine the risk based on some combination of those factors.
And then, after hazard and risk analysis, what comes next is, well, you have to do risk mitigation, right?
And typically, the higher the risk, the more engineering rigor and effort you put into reducing the risk to something acceptable.
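A minimal sketch of that bookkeeping, assuming made-up severity and likelihood categories, a simple risk index, and rigor thresholds that are illustrative only — not taken from any particular standard:

```python
# Hypothetical sketch of the hazard/risk bookkeeping described above:
# rate each hazard by severity and likelihood, combine them into a risk
# index, and let that index drive how much mitigation rigor is required.

from dataclasses import dataclass
from enum import IntEnum


class Severity(IntEnum):
    MINOR = 1             # bruises, annoyance
    SERIOUS = 2           # injury requiring treatment
    LIFE_THREATENING = 3


class Likelihood(IntEnum):
    RARE = 1
    OCCASIONAL = 2
    FREQUENT = 3


@dataclass
class Hazard:
    description: str
    severity: Severity
    likelihood: Likelihood

    @property
    def risk(self) -> int:
        # Simple risk index: bigger means more engineering rigor needed.
        return int(self.severity) * int(self.likelihood)


def required_rigor(hazard: Hazard) -> str:
    """Map the risk index to a (made-up) level of mitigation effort."""
    if hazard.risk >= 6:
        return "independent/redundant mitigation, formal review, fault injection testing"
    if hazard.risk >= 3:
        return "dedicated mitigation requirement plus targeted testing"
    return "normal development practice"


if __name__ == "__main__":
    hazards = [
        Hazard("Brakes engage fully while vehicle is moving",
               Severity.LIFE_THREATENING, Likelihood.OCCASIONAL),
        Hazard("Status LED shows the wrong color",
               Severity.MINOR, Likelihood.FREQUENT),
    ]
    for h in sorted(hazards, key=lambda hz: hz.risk, reverse=True):
        print(f"{h.description}: risk={h.risk} -> {required_rigor(h)}")
```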
And this is how we end up with dual-core processors that are supposed to run the same code and get the same answer.
Right. If there's a hazard of a single event upset or some sort of defect that will affect one core but not the other, you run two cores, and if they mismatch, you know, something bad happened.
Exactly. But it isn't that safety requires — that safety means — having two computers. Rather,
safety is, well, we have to identify the hazards, and for something life-critical, you have to worry about
a runtime fault, and so you end up putting in two computers and comparing as the mitigation technique.
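Here is a toy sketch of that compare-and-refuse idea. Real lockstep parts do the comparison in hardware on every cycle; this only shows the principle, and the function names, gains, and threshold are hypothetical:

```python
# Toy illustration of the "two computers and compare" mitigation: compute
# the safety-relevant output twice through nominally independent paths and
# refuse to actuate if the results disagree.

def compute_brake_command_a(speed_mps: float, target_mps: float) -> float:
    """Primary computation of a brake command in the range 0.0 to 1.0."""
    error = speed_mps - target_mps
    return max(0.0, min(1.0, 0.2 * error))


def compute_brake_command_b(speed_mps: float, target_mps: float) -> float:
    """Redundant computation: ideally diverse code, same specification."""
    error = speed_mps - target_mps
    return max(0.0, min(1.0, error / 5.0))


def safe_brake_command(speed_mps: float, target_mps: float) -> float:
    a = compute_brake_command_a(speed_mps, target_mps)
    b = compute_brake_command_b(speed_mps, target_mps)
    if abs(a - b) > 0.01:
        # Mismatch: assume a fault and fall back to a defined safe state
        # rather than trusting either answer.
        raise RuntimeError("redundant channels disagree; entering safe state")
    return a


if __name__ == "__main__":
    print(safe_brake_command(speed_mps=12.0, target_mps=10.0))
```

Note that this only helps against faults that hit one channel; a shared power rail, connector, or enclosure is a common cause that defeats it, which comes up later in the conversation.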
And when we start hazard assessments, my teams have almost always started — actually, you listed these as well — with: what happens when a bolt breaks?
Yeah.
That's like, okay, this is a failure we understand.
Even when a semiconductor fails or the code spontaneously turns on all of the motors at full power.
And so all of those things, you know, I've definitely sat in the meetings and tried to figure out everything that could go wrong with my software and how to make it not happen.
or how to let it happen in a way that's safe.
Which means you were doing safety engineering.
Yes.
So one of the takeaways from the book is that people think that safety means code correctness, and that's not true.
Code is never correct.
Well, if you have incorrect code or crazily bad code, it's hard to make it safe.
But you could have absolutely defect-free code and have a set of requirements that are outright dangerous.
So I've run into this all the time.
Part of the message is having perfect code doesn't make you safe because getting rid of all the bugs doesn't mean you thought of all the hazards in how to mitigate them.
Those are two different things.
Yes.
Yes.
Excuse me while I write this down.
I have to go talk to a client.
Those are the words I need to use.
Yes.
Well, that's the point of this book.
The point of this book is to give you the words.
For someone sophisticated like you, that's the point of the book is to give you the words.
That's right.
I'm trying to think of an example of that, even though I believe it, and I know it's true.
Coming up blank.
Oh, well, if your requirements are to make something dangerous.
Well, sure.
But no, no, no.
Works as designed.
It blew up the world.
Yeah.
You know, the one-wheel scooters.
Or even the two-wheeled one.
Yes.
But the one-wheels, they all got recalled at some point.
And it's because there is no way to make that safe.
Clearly nobody ever read BC Comics before making that product.
And so your software could be perfect, but nobody put in the "wait, you really can't do that" when they thought about the risks.
This product should not exist, is the ultimate limit of that.
Less esoterically, there are the two-wheeled scooters; one of them had a problem where the brakes would come on and throw riders.
And it was something, I don't remember the details, so I'm going to hypothesize something,
that if there's some sort of defect in the payment system, and it decides you haven't paid, and it jams on the brake.
Oh, one of the rental scooters.
And it doesn't bother to check whether you're moving when it jams on the brake, right?
Oops.
And that was hurting people.
Now, I don't remember if this happened more than once, and I don't remember if any of the recalls were precisely that mechanism. But it was definitely that the thing would slam to a stop while you're moving, for a reason that was not reasonable, right?
As opposed to a car, which does that once a month.
You can have a requirement to stop it when it hasn't been paid, and you have a requirement to stop it when the driver says to stop.
But if you're missing the requirement that you're not allowed to stop at speed more than so many Gs, if you're just missing that requirement, you're not going to test for it, you're not going to consider how it could happen.
you're not going to be safe, even though all the other requirements are perfectly implemented.
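As a sketch of what that missing requirement might look like if it did live in software, here is a hypothetical guard; the interface, names, and numbers are invented for illustration, and as the next exchange points out, the mitigation doesn't have to be software at all:

```python
# Hypothetical guard implementing the missing requirement: no matter which
# feature (payment, rider command, fault handling) requests braking, never
# decelerate harder than a safe limit while the scooter is moving.
# The limit values here are placeholders; real ones come out of the HARA.

MAX_SAFE_DECEL_MPS2 = 2.5     # assumed limit, for illustration only
MOVING_THRESHOLD_MPS = 0.5    # below this, treat the scooter as stopped


def limit_brake_request(requested_decel_mps2: float, speed_mps: float) -> float:
    """Clamp any brake request so it cannot throw the rider at speed."""
    if speed_mps > MOVING_THRESHOLD_MPS:
        return min(requested_decel_mps2, MAX_SAFE_DECEL_MPS2)
    return requested_decel_mps2  # at a standstill, full braking is fine


# Example: the payment subsystem decides the ride is unpaid and requests a
# full stop while the rider is doing 6 m/s; the guard limits it to 2.5 m/s^2.
print(limit_brake_request(requested_decel_mps2=9.0, speed_mps=6.0))
```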
And you could potentially not make this a software problem.
You could make it so that you can't put the brakes on hard enough to stop at speed.
That's the thing.
The mitigation does not have to be software.
Right.
And one of the important things is there's no actual thing as software safety.
That's not a thing.
It's computer-based system safety.
Well, people call it software safety because that's what they're used to.
Right, it's computer-based system safety. And by the way, that computer is embedded in something, hence embedded system. And if the mitigation is that the brakes can only exert a certain g-force and the rider won't get thrown, then you don't have to worry about any of that.
Yeah, software is perfectly safe as long as it's not connected to a system, as long as it doesn't have any actuators, for the most part. Or where it's not trying to manipulate you into doing something bad. The whole AI thing is a whole other level of that.
One of the phrases that I didn't have, that I appreciated, was common cause failures.
I think I heard it at some point, but, you know, didn't click in.
Could you describe what those are?
Sure.
A lot of safety has to do with no single point failures.
I'm going to get to common cause in a second, but no single point failure.
If there's a particular component that if it stops working, then someone dies, this is clearly a bad thing.
And anything life-critical, one of the bedrock criteria is there can be no single-point failures,
and there have been loss events due to single-point failures.
But then in the aviation industry, in particular, they started finding big, dangerous failures
that were not a single thing failed.
It's that a bunch of things failed together.
And there was some sort of causal mechanism.
So one of the big examples is there was a huge DC-10 airplane crash. And what they had found was they had three hydraulic lines to control the
aircraft flight surfaces, but all three hydraulic lines ran together. And there was a cargo hatch
that wasn't properly secured, and it opened causing explosive decompression, which collapsed
the floor, and the collapsing floor severed all three hydraulic lines together. Oops, right? And so
there were a couple of plane crashes; in one of them, the pilot miraculously did okay. The other one
was not so good.
And so the common cause was this one floor collapsing would compromise three otherwise independent
systems.
Another one is if you have a cable harness and you have two independent signals, two
independent wires, and they both go to the same connector, and that connector detaches.
You've lost both signals, even though on paper they look independent.
They go on through the same connector, and that connector is a common cause failure.
This made me re-evaluate a bunch of things I've been working on because I don't think about that.
I think about, you know, this signal happened, not this cable got unplugged.
Well, and part of this is, we're saying software isn't safe all on its own.
Part of embedded systems is not embedded software, it's embedded systems for a reason.
If you want to do embedded system safety, you have to have some basic knowledge about mechanical,
about electrical, you know, back when I went through school, undergrad,
barefoot in the snow, uphill both ways, all that stuff.
And they made me take thermodynamics and all that other stuff.
There's something to be said for that,
because I got pretty well grounded on things that aren't just computer software
with my engineering degree.
And in embedded systems, that stuff starts mattering.
Yeah, and that's why there's distinctions often,
at least in larger companies, between, you know, firmware engineer,
where you are working on the software that runs on the device
and systems engineers who kind of have to have an overview of all of the parts,
including mechanical, electrical, and software.
And you don't have to go to school to get that degree to pick it up,
but the point is you have to be aware that it isn't just a bunch of software running on a magic box.
There's real stuff going on in there.
There's another one that happened to me a while back: a bunch of, I think it was Dell desktop computers, had a bad batch of capacitors.
So these were electrolytic capacitors that would, at some point, age out and explode and plaster themselves on the other side of the box.
And I had several computers — I had students, I was running a research group — and several computers all went within like a one-month period; they all exploded their capacitors.
And the common cause was bad batch of capacitors.
Fortunately, that wasn't life critical. But if those capacitors were in a life-critical system, and you said, well, we have two computers, so if one fails it's no problem, we have another — what if the exploding capacitor pastes itself on and shorts out something on the other computer, because they share an enclosure? Or what if both of them, during an extended mission, suffer the same thing, and both boards fail for that reason?
And it can be hard, because I remember the first time I encountered something like this was at a medical device company.
And when I came in, they had a design for an emergency stop, which was, this is a laser.
It's shooting, you know, at somebody's skin.
And if something goes horribly wrong, there's a big red button on the front.
Yeah, the red buttons are great.
Red buttons are great.
Yeah.
But when I came there, the architecture was that the red button was a software button that went to an interrupt.
And so it was a software's job to shut down when somebody hit that.
Yeah.
And obviously, I'm not buying it.
Obviously, that went wrong because the software gets busy or misses it or something.
and there were a lot of problems, and I spent,
I think it was me and the electrical engineer
spent six months saying,
connect this to mains power and stop it.
And we did get that in the next revision.
Or put in an actual relay or something, do something.
Just an obvious thing to do, right?
But it took a lot to convince the people
who were running the company that that was the way to go
after all this work had been put into the software.
So there are people involved in these situations, too, which makes it even more difficult sometimes.
Well, you asked if I would ride in a robotaxi.
Now, I'm not going to shame and name the company,
but I had a ride in a robotaxi where the safety argument had to do with who else was in it — the other person from the company was too valuable to let them die, so I figured it was probably safe enough or he wouldn't have gone in.
And there was a safety driver, there was a backup driver.
It was fine. It was a short drive.
It was fine, you know, but I'm also a scuba diver.
So even if it's higher than typical risk for your normal workday, if your exposure is low, it's fine. Live a little.
That's okay.
But then afterwards, I took a look at their hardware architecture, and their big red button went into the autonomy computer as a parallel input pin, GPIO pin.
I'm like, guys, no, you can't do that.
And I'm sorry I rode in your vehicle.
If I'd known, I would not have ridden in it.
So earlier, when I said it depends on whose vehicle, yeah, seriously, it depends.
I'm told they fixed it.
And yet you still are not getting in their cars.
One of the big things about your book is the AI.
Yes.
So I don't know how to phrase this nicely.
Do you hate AI?
No, I don't hate AI.
AI is great for some stuff.
I have a healthy respect for how things can go wrong.
I'm a safety guy, so I'm always busy thinking about how things go wrong.
I always have emergency backups.
My wife used to give me the hardest time about why do you have this long checklist and blah-da-da-da.
And then we're on a trip. Over several trips that we had taken together, she would say, oh, I forgot this, do you have something? And I just hand it to her.
And she said, okay, I promise to stop giving you a hard time about your checklist.
So instead of at 2 a.m. local time running out to get whatever.
It's like, I just have it.
It's right here.
So I think about how things could go wrong.
And if you're a safety person, that's a mindset you have to have.
But that doesn't mean I live in fear every moment of my life.
And it doesn't mean I'm there to say no.
I'm there.
I didn't say don't take risks.
It's that you need to mitigate the risks appropriately.
So the issue with AI is that people are using it in ways where their risks are almost inevitably going to cause problems for them.
And they think about how cool it is and they don't properly respect the risks.
And in particular, they don't respect how people react to risks.
So one of the chapters is about human factors, which, for embedded systems, I've been teaching for years and years — for 20 years I've been teaching a module on human factors in courses where I could fit it in. So it isn't, it isn't like
it's a new thing. But it wasn't as big a deal as it is with AI because people are uniquely
suited to being conned by AI into not paying attention. Like it's just terrible. People are
terrible at supervising automation. So they're building these systems where the plan is when
the machine makes a mistake, the person's going to be blamed if they don't fix it. And we've
for decades and decades and decades.
Since before I was born, even, we've known that people are terrible at that.
So you're just setting them up for failure.
So the part of AI I'm unhappy with is it is being deployed in ways that are guaranteed to set people up for failure.
That's a problem.
If you deploy it in another way, that's fine.
We had a long show about this a while ago, or at least we talked about this topic.
Do you think — one of the things that came up was something you just said, which was that it's cool. Which is already a red flag, when something's cool.
Well, if it's cool and innocuous, fine, right?
But it's an addictive coolness.
Do you think that, because AI as we currently talk about AI, which embodies a lot of LLM stuff, has got so many air quotes going — do you think it's made worse by the conversational nature, the pretending, the anthropomorphized kind of system that it is now, versus something that's like,
you know, a model running in the background that is detecting, you know, a squirrel or something
and says this box is around a squirrel and you can make a decision.
Do you think that's made things worse in terms of the tolerance of risk or the, or the, not the
tolerance, but the not noticing the risk because, oh, I'm just talking to this thing, which is friendly
and it's saying all sorts of friendly things.
You guys need to define AIs.
Yeah, the LLM chatbot thing is certainly proving to be more problematic.
But let's back up a little.
You have the regular machine learning, deep neural networks, whatever, and it's a classifier.
That's a person, that's a dog, that's a pig, it's a loaf of bread, whatever, you know.
Squirrel.
Squirrel, yeah.
And those things — the problem you have with people supervising them is that if it's right a thousand times in a row, it's really hard to keep paying attention.
And I'm going to use car examples.
The book is not all about cars, but the robotaxi experience has given us concrete illustrations
of things to watch out for.
So anything that's a car example generalized to what you're doing.
But if you've been through a thousand red lights and a thousand times your car stopped
for the red light, you're not ready for number 1001 when it doesn't.
You're just not ready.
Your reaction time is going to be way longer.
You're going to blow through the red light.
Things that work 99% of the time are worse than things that work 50% of the time.
Yeah, well, if it's trying to kill you every mile, you're going to pay attention.
You'll have a different issue.
You'll be imperfect in response, and one's going to get by you.
But that's a little different than being lulled into complacency.
If it's medical images, if the system is perfectly accurate for 1,000 images, it's really hard to pay attention for image number 1,001.
Now, if you have trained professionals and you have a thoughtful system and backups, you know, airline pilots,
physicians, folks like that, I'm less worried about.
But average folks, how do you expect them to perform in that kind of environment where
the stakes are literally life or death if you get it wrong?
It's just, it's asking them to be superhuman and they can't do it.
I think that we should make them play a driving game.
Like the people or the robots?
Do they actually die if they don't pay attention?
I mean, or do they kill a pedestrian?
Maybe like a shock.
They have to drive in the driving game, and then occasionally the AI makes them drive the real car, but they don't know
because they're still playing the driving game.
Well, so here's the problem.
You can motivate them all you want.
Not a plot of the Matrix?
I don't think I wouldn't even want to go there.
You can motivate them all you want.
You can shame them if they make a mistake, but you're asking them to be superhuman.
You just can't change human nature, and it's unreasonable to expect, well, we told you to pay
attention.
That doesn't change human nature.
You don't ask people to do things they can't do.
No, but it is a great excuse for companies.
Don't you use the term moral crumple zone?
Exactly.
So, you know, there's a crumple zone in the front of a car,
and the idea of the crumple zone is to absorb the energy
to protect the passengers inside, right?
Okay, that's the mechanical crumple zone.
The moral crumple zone is you know your computer's going to make a mistake,
and your plan is to use the person who's supervising it
if they're a driver, if they're a technician overseeing computer operation, whatever.
There's some person handily available, and their role is to be a one-time blame-absorbent device that's disposable.
You know, they absorb the blame.
They crumple up, and then the manufacturer of this defective device doesn't get blamed.
That's it. That's the moral crumple zone.
AI sin eater.
So, but you asked about ChatGPT, you asked about chatbots.
It's like, right.
All right.
So to go there, in those cases, it's not that it sometimes makes mistakes.
Which it does.
But it's, I mean, so all machine learning is statistical, and you have a problem that 99% is great for machine learning, but it's six more nines to go for life-critical, right?
That's the problem there.
99.99-and-a-bunch-more-nines if someone dies.
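As a rough back-of-the-envelope illustration of what "six more nines" means, assuming a made-up rate of ten safety-relevant decisions per second:

```latex
% Illustrative arithmetic only; the decision rate is assumed.
\[
99\%\ \text{correct} \;\Rightarrow\; 10^{-2}\ \tfrac{\text{errors}}{\text{decision}}
\times 10\ \tfrac{\text{decisions}}{\text{s}}
\;\Rightarrow\; \text{one error every } 10\ \text{s}
\]
\[
99.999999\%\ \text{correct} \;\Rightarrow\; 10^{-8}\ \tfrac{\text{errors}}{\text{decision}}
\times 10\ \tfrac{\text{decisions}}{\text{s}}
\;\Rightarrow\; \text{one error every } 10^{7}\ \text{s} \approx 116\ \text{days}
\]
```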
But ChatGPT-type technology is a little different, because it doesn't lie to you.
people think it made a mistake and it lies.
That's not at all what's going on.
That's projecting human thought onto it.
Right, right.
This is the anthropomorphizing.
Right.
And what's really going on is there's a thing called Frankfurtian bullshit.
Yes, okay.
And the definition of that is you're saying things with reckless disregard for the truth.
What was the first word there?
Frankfurt.
There's a guy named Frankfurt.
It's the author's name.
He had a paper, or it was a book, on bullshit.
Okay.
And so when people say Frankfurtian bullshit, what they mean by that is that this is a precise term being used in a clinical way, and not just saying someone's, you know, acting badly.
So it's not malicious.
It's just, it's not malicious intent necessarily.
It's that it's reckless disregard for the truth.
So think about Madlibs with randomized responses.
They may make sentences that make sense, but truth doesn't enter into it.
And so the thing is, everything generated by a chatbot is BS.
It is just statistically consistent with language use, but it doesn't actually know or care if it's the truth.
Now, many of the things it will say look true to the reader, but that doesn't mean it knows they were true or false.
So it's not saying, I think I'll say something false.
It's just, I'm just going to generate stuff.
And the truth is in the eye of the beholder.
And if the beholder is sophisticated, they may decide it's not true more often.
If they're unsophisticated, they may decide it's true.
They may get sucked in because it sounds authoritative, very confident, right?
And it's even worse because it'll just make stuff up out of thin air that could be checked, but it doesn't.
It'll make up web pages that don't work.
Or it'll say, here's a web page.
I'm summarizing it for you, and it will miss the context of the web page.
And it's very persuasive.
So it's really tough technology that way.
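To make the Mad Libs framing concrete, here is a deliberately tiny toy: a bigram text generator. Real LLMs are enormously more sophisticated, but the toy shares the property described above — its output is only statistically consistent with its training text, and truth never enters into it:

```python
# A caricature of "statistically consistent with language use, but truth
# doesn't enter into it": sample whatever word plausibly comes next,
# with no notion of whether the resulting sentence is true.

import random
from collections import defaultdict

training_text = (
    "the turn signal is on the brake is applied the brake is released "
    "the turn signal is off the battery is charged the battery is low"
)

# Build bigram statistics: which words have followed which.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)


def generate(start: str, length: int = 8, seed: int = 1) -> str:
    random.seed(seed)
    out = [start]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)


# Fluent-looking, statistically plausible, and utterly indifferent to
# whether the brake or battery is actually in that state.
print(generate("the"))
```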
And we talk about the hallucinations, that it makes up webpages that don't exist.
But hallucination is the wrong word.
It's all hallucination.
The hallucination means it's capable of knowing the truth, which it's not, right?
That's why I prefer the BS term instead of hallucination.
I—
Chris is very anti-LLM.
Although I do check in once a month or so and try them, just to see what's going on. But I try not to use them.
Well, you know, I know someone who uses them to write emails.
This person's a landlord, and they use it to write polite emails to problematic tenants. And it's really good for that.
Yeah.
That's the right...
Yeah, the obsequious tone is useful for certain applications.
This person's a problem, but you want to be polite about it.
And it's like, how do you tell someone to stop doing something they shouldn't be doing in the politest way possible?
It's pretty good at that.
But you were going to say something else, Elecia.
I've had many bad experiences with coding.
And then time went by, and someone suggested I use it to think about some things, physical systems. A physical system that I didn't understand well. And it worked. And I didn't,
I mean, I didn't just say, tell me about this. I said, tell me about this thing I already know.
And then I walked it into the system. And then I found things that I could ask it to do, show me the state
space equations for a PID controller. I know how a PID controller is set up in the frequency domain. I know how to set it up as code, but the state space, that whole
part of that math is kind of new to me, so I wanted to see how it looked and compare it to
some other state space I was looking at. And it was all, I mean, it was fantastic. It was really
nice. And it was stuff I could recognize as being true because it was again a step away from
where I was.
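For readers curious what that looks like, here is one common textbook state-space realization of a PID controller, written with the practical first-order filter on the derivative term (the ideal derivative alone is not realizable as a proper state-space model). This is offered as a reference point, not necessarily what the chatbot produced:

```latex
% PID with filtered derivative: C(s) = K_p + K_i/s + K_d s/(\tau s + 1),
% error e as input, control u as output.
\[
\dot{x} =
\begin{bmatrix} 0 & 0 \\[2pt] 0 & -\tfrac{1}{\tau} \end{bmatrix} x
+
\begin{bmatrix} 1 \\[2pt] \tfrac{1}{\tau} \end{bmatrix} e,
\qquad
u =
\begin{bmatrix} K_i & -\tfrac{K_d}{\tau} \end{bmatrix} x
+
\Bigl( K_p + \tfrac{K_d}{\tau} \Bigr) e,
\]
\[
\text{where } x_1 = \textstyle\int e\,dt \text{ is the integrator state and } x_2 \text{ is the low-pass-filtered error.}
\]
```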
Well, if this is well-trodden territory, if there's lots of material out there on it, it often does a pretty good job. It struggles more with novelty.
So if you want a routine task, it's pretty good.
But even then — I try it once in a while.
I said, well, if I wanted to write an article or a book on Topic X, what would I do?
And it produces this long list.
Or I want to do a course on Intro to Embedded or something — what would go in it?
And it'll put out a list that's pretty plausible, 80%, 90%.
And occasionally there's this thing in the middle of just like,
that just makes no sense at all.
Why does it even say that?
And then I'll go back to making sense.
But the other thing I've noticed is a lot of what it does is kind of soulless, right?
It's really plain vanilla.
Now, vanilla is what you want.
That's great.
but it just, it misses things that are important but niche.
You know, it goes for the average.
And the things it writes don't really have a voice.
Now, if you're writing corporate emails to unhappy customers,
that's probably exactly what you're looking for, right?
So it depends on the application,
except my interaction with chatbots on websites is nothing short of infuriating
because I probably wouldn't be asking it for help if the answer was something that was on their web page.
Oh, yes.
And they just send me back to the web page.
So, you know, it all comes down to what are you doing?
But personally, the way I look at it is, if you want to use a chatbot, you have to ask yourself — I'm going to go back to my safety background.
You have to ask yourself, what's my hazard and risk analysis for using the chatbot?
What are the things it can do that are going to cause me a lot of loss if it gets it wrong?
And what's my mitigation?
Is my mitigation going to be effective?
keeping in mind that if my mitigation is a person who's bored out of their mind looking at it,
they're not going to be very effective because they're going to get bored out of their mind.
You said things that don't change are often good,
and that is where I have found some goodness,
because then I went back and tried to do some Python script stuff,
and it just was totally off, because the libraries it gave me were deprecated.
Which one were you using?
I was using Gemini.
What are the things that don't change?
It's things where there's a whole lot of material about it from 20 different ways, and it can synthesize an average of them, right?
And most of what I wanted was physics and systems engineering, which...
That's been around a while.
It was wonderful, because it would explain it to me three different ways,
and I could push variables, and it would do all of the algebra for me, and that was wonderful.
But you are an expert in the field.
You've seen the mathematics.
You have enough intuition to know when it probably was making a mistake.
If an error had occurred — which maybe it didn't. But most people are not using it that way.
And the people who are using it to be friends with it are probably setting themselves up for the worst fall.
Well, that is, yeah, that use is scary.
Let me add one thing about the using it.
There's a thing called anchoring bias.
So to really understand machine learning, you have to bring back your freshman psych, if you ever took it, and maybe go a little beyond.
This is a thing called anchoring bias.
So if you say, hey, chat GPT, give me a starting point and I'll edit it, you're anchored in the narrative it gives you.
And the narrative it gives you may be vanilla or it may be just wrong.
It may send you off barking up the wrong tree.
So I've had several times where I tried to use it and it tried to anchor me in something that was just the wrong path on the wrong mountain.
And no amount of refinement was ever going to get me there.
So you have to — but we're back to, you have to be very self-aware of how people have bugs in their thinking, if you want to think about it that way, right?
We have biases and gaps and failure modes in our thinking, and if you're not aware of them,
chat GPT will lead you right down the path.
And I think one kind of side note about that is I think that I think there's something wrong
with the user interface to a computer or a system being conversational.
I think that does something to us.
And if it's pretending to talk to you, pretending to be a person — or not pretending, but it's in that role, it's conversing with you, you go back and forth. It asks you questions. It suggests things, and it feels like you're talking to a person, to the point where I find myself, when I do use it, getting annoyed and starting to, you know, personify it and yell at it and stuff. And it's like, this is not a healthy way to interact with a computer.
If I'm going to yell at my computer, I don't want a yelling back.
It's seducing you into thinking you're talking to a person, which is part of why they can sell the technology.
So I have a point on this, which is the Turing test.
Remember the Turing test?
Way back when, for those three out of all your listeners who haven't heard of it, the Turing test is the idea, going back to Alan Turing, that if you could type over a teletype machine back in the day, and you could not tell whether the party on the other side was a computer or a person, then it might as well be a person for practical purposes.
And this was an early proposed test for sentience, right? And everyone's known it sort of had
problems. What is not as widely known is even back in the 1970s, people knew that this wasn't
going to work out. There was a program called ELIZA, which I remember playing with when I was in college — it was a chat program. But you could look at the source code, and it was just a bunch of hacks, you know, just a bunch of if-then-elses, a lot of if-then-elses on words, right? And it would just mostly sort of regurgitate back with some word changes — you to I. How does that make you feel? Right. And so, okay, fine. But what they found was some
small fraction of people thought it was a real person. And it would take, depending on the person,
it might take you a long time to figure out it wasn't a person. A lot of computer folks love breaking things, so it didn't take them long. But ordinary folks, normies, might take a while
to figure this out. And so
the observation, and I'm certainly not the first
one to make this, is that the
Turing test is not a particularly
effective test of intelligence.
But it's a really
good test of gullibility
of the person.
Yeah, and that's, I mean, it's
just how we are.
Yeah, right. Like, it's not
an indictment on those people necessarily.
It's just, this
raises a flag. This is something to be worried
about. Well, a lot of this is just
exploiting, it's sort of hacking human nature.
A lot of what's going on with AI today is hacking human nature.
And if the forces doing it are interested in nothing but accumulating wealth,
then that's probably not going to be good for a lot of people.
You wrote this book, embodied AI safety,
for folks like CEOs and other people who are interested in the technology
as a whole.
Yeah.
Those people all seem to love the LLMs.
Are you getting any pushback on this?
I have not gotten any pushback on this book, which is, I guess, remarkable.
Maybe they're either in denial, or it just hasn't gotten to them yet.
I've been really pleased.
It's gotten a lot of traction right out of the gate, you know, quite a number of books moved.
And I'm getting pretty good feedback.
Lots of folks are saying they really enjoy reading it.
Some are saying they've learned new stuff.
Others, like you are saying, well, it's sort of stuff I knew, but it gives me hooks to hang things on.
But as you get towards the back of the book, there's a lot of things about we have to change how we think about safety because of AI.
Because there's not a person responsible for things like deciding when it's okay to break the law, which is an everyday thing that everyone does.
How do you trust a machine with that?
And now engineers have to learn liability because they have to decide, well, you can't,
if you wanted to follow traffic rules perfectly, you'd never get anywhere.
There's a tree that fell down in the lane.
Do you cross the double yellow line to go around it?
Well, it depends, right?
And as long as you're being responsible about it, it's okay.
No police officer is going to give you a ticket for going around a tree in a thunderstorm if nobody's coming and everything's clear and you're being careful about it.
They're not going to go after you for that.
Or even, there's a bunch of things like this where there's social norms and there's
reasonable behavior and all the rules are in light of a reasonable person and taking
accountability that if something goes wrong, you're accountable.
You can't hold a computer accountable.
How do you do that?
That's that old thing, right?
A computer can never make a management decision because a computer cannot be held accountable.
That's right.
And so do you let a computer make a decision, which we'd let a human make a decision for,
knowing that the human would be held accountable, but the computer's not?
It would be better if the computers were sentient because then we could just sue them.
Well, then they would care if they went to jail, but they don't.
Exactly.
So part of the book is talking about how the world changes when there's AI supplanting human agency in operation.
And now all of a sudden there's all these things you have to do.
And it really fundamentally changes how you have to think about safety.
And the last chapter, because I don't want it to be a book about how everything's broken,
the last chapter talks about if you want to build AI — and I don't care if it's robotaxis, or a power plant control system, or a medical device, you know, I don't care.
You need to build trust with people, because you're taking away human agency and asking them to trust a machine,
which fundamentally does not care if it goes to jail
to make life or death decisions potentially.
So how do you do that?
And it talks about how you can build trust in ways
that are other than trust us, bro.
We're going to save lives 10 years from now,
so never mind the person we killed last Tuesday.
Because right now, that's kind of the industry approach — it's a trust-us-bro approach, which is a problem.
Because there's no technical basis to know
this is actually ever going to save lives.
And that's not required.
What's required is the companies have to be responsible.
If you put something in a public road, I want to know you're a responsible road user,
not that you're making promises that can't be disproven for another 10 years.
One of the examples you had in that later part of the book was a car,
an autonomous car, that needed to stop, that had a critical error,
had decided that it was time to stop and that the best it could do was stop where it was,
which, okay, let's say it ran out of gas
or its battery fell on the ground
or something happened.
There was the time the engine fell out of my car,
so I definitely stopped then.
That's a true story, yes.
Yes, sometimes there's nothing you can do.
That is correct.
And that's a perfectly fine assessment
and what thing to do,
except you don't do it in front of a firehouse, in front of where ambulances and fire trucks will come out of?
If you have a choice.
And so this is kind of the trust erosion game that goes on.
They'll say, well, if something goes wrong, we stop because we care about safety.
And there were robotaxies parking in firehouse driveways.
And what was never really brought out, but it's really unlikely that there was no alternative.
I mean, if the engine falls out of your car, there's nothing you can do.
But the argument is sort of, well, it might have been the engine falling out of our car, so we have no obligation to do anything better than stop.
And they were blocking fire trucks, they're blocking ambulances.
There was one robotaxi that stopped between firefighters and a burning car,
so they had trouble getting at the burning car to put out the burning car, right?
Just crazy stuff.
So there's, but there's a standard theme that sort of comes out in the book,
which is how do you judge how good it has to be?
Can't be perfect.
Nothing's perfect.
How do you judge how good it has to be?
And the really unsatisfactory feeling conclusion I came to is on a case-by-case basis,
you need to compare it to how a competent and careful human would have done.
So if a human...
But not a perfect human.
Not a perfect human, right?
But if an ordinary human driver who is not drunk and not distracted, so they're already
the top 20%, right?
I'm making a joke there.
But, you know, if a careful, competent, qualified human driver could have moved the car out of the driveway, then the robotaxi should have moved out of the firehouse driveway.
If your axle broke and your engine fell out of your car, yeah, I agree.
There's nothing you can do, but that's a very rare thing, right?
The usual case is, of course, you can move another 20 feet.
Even if it's at one mile an hour, be out of the way.
So if the fire truck has to get out, you're not blocking it for 20 minutes, half an hour, whatever the time was.
And a human at some point will say, okay, I have to push this car five feet. Or whatever. I mean, if there's a police officer screaming at you to move your car,
there's an exceedingly small fraction of drivers who will not do that. But screaming at a
robo taxi doesn't do anything. And there's photographs of that happening. There's no consequence
for it. Well, that's right. So beyond robotaxis, ask yourself, if you're giving the AI,
if the AI is supplanting human decision-making authority. And that
decision-making authority comes with
accountability. So the AI
supplanting it, what happens
if the AI gets it wrong?
And if the answer is sue the manufacturer
for product defects, you're
basically saying, well, unless
the loss is more than like a million or
$10 million, there's no consequence because
they'll never get the money back from a lawsuit,
then they're basically acting with impunity, and that's a lot of
where we are right now. I mean, yeah,
you can sue a company
into oblivion, and then they're
gone. And that doesn't help anybody else.
That's counterproductive for that reason. But B, in almost every case, you can't afford
to mount the lawsuit because you have to pay your costs and you might lose. And it's going
to be difficult or impossible to find a lawyer willing to pony up the cost to run a case
for a product defect for a one-off. It's very difficult to make that happen.
And so basically, you have to kill a whole bunch of people before it matters.
Under current law, that's correct.
And one of the things I would like to see is the law changed to say if the AI system is taking the responsibility the human would normally have, you should hold the AI system accountable to the same degree of care a human would have.
So you should be able to sue AI systems for tort negligence, liability, wrongful death, carelessness, recklessness, right?
Just like a person — except the AI doesn't care if it goes to jail, and it doesn't have any money, so the manufacturer has to be
the responsible party. And that may not be perfect. It won't make things perfectly safe,
but it'll put much better pressure on your manufacturers to be responsible when they deploy
this technology. And you mentioned before you have a bit of legal theory for tort negligence in your book. I have three law journal papers co-authored with a law professor, which is three more law journal papers than I had ever hoped to have in my life. And so, yeah, there's a section that sort of summarizes that and a pointer out to all the law journal papers.
And I think many engineers have wondered what their exposure, legal liability, would be under
some of these cases.
And so that was an interesting part of the book.
Yeah, that is tricky. We've seen recently some engineers go to jail or get criminal sentences for really egregious stuff like Volkswagen Dieselgate, things like this, although there's some feeling that they were made the sacrificial anodes for a more systemic issue.
So saying that the engineer will never be held responsible has sort of gone away,
but it's still got to be a really, really big deal.
But just ethically, as an engineer, whether you're going to go to jail or not,
you don't really want to be responsible for someone dying.
Talking to the engineers who were involved in, like, the Uber robotaxi fatality, yeah, they blame that on the safety driver, but the engineers really were hit hard by that. They really felt it: having someone die from your technology is pretty amorphous and theoretical until after it happens, and then it can really, really hit you hard.
And so if you're in this technology,
it's a lot harder.
You can't say, well, we followed the safety standards,
so I can sleep at night because we did everything we could do.
We followed the safety standard.
These robotaxi companies, some of them are not following the safety standards.
And now if something bad happens, you can't really say you did everything, can you?
And having worked on FAA and FDA products, there's always a little more you could do.
There is, although there's no safety standards for AI that have teeth in
them for those areas now, right? Right. And so if you're working on the products and the safety
standards are immature or don't exist, how do you sleep at night, right? And the first step,
okay, I'm here to sell my book. The first step is this book at least tells you how to think
straight about it. It's not all the answers, but at least you get a framework to try and reason
through. Are there things I haven't thought of? Do I have a blind spot? At least don't have blind spots
when you're going into this. And with that, let me bring up the other two sections that I think
were super important, the safety and security, which you drew some really interesting parallels around.
Yeah, here's how I like to think of security. Now, there's the whole IT-based security and credit card thefts and all that stuff and ransomware.
Those are interesting. Let's not talk about those.
Right. And so we'll set those aside. I'm not saying they don't matter. I'm saying we're setting those aside. Okay.
And what's left, what's unique about embedded system security in general, is that it isn't about encryption. If you say, I have an embedded system and I'm secure because I encrypted it, you're probably barking up the wrong tree.
Because it isn't about keeping secrets. Everyone knows you have your turn signal on. The fact
you're about to turn is not a secret. It's more about integrity, making sure that no one's
subverted your code. Sometimes it's about denial of service of the safety shutdown function,
right? So there are different security properties you care about. But the overarching framework, the way I like to look at it, is this: when you're doing safety, on a good day, you have a safety case, which is a well-reasoned argument supported by evidence for why I think I'm acceptably safe. And if you're going to attack a system that has a safety case, what it amounts to is the attacker looking for holes in the safety case.
It's like, oh, they assume this never happens, so we're going to make it happen.
They assumed that the program image that's burned into the chips at the supplier is the one they sent the supplier.
Okay, so we're going to burn a malicious program image by bribing someone at the supplier.
So a lot of the things boil down to holes in the safety case.
And so that allows you to think about security in the same framework as safety: hazard analysis, hazard and risk analysis, and risk mitigation. But you have to think not just about the stuff that can break, but about the stuff that can go wrong on purpose.
Malicious intent.
Malicious actor, yep.
Which is, I mean, so much harder to protect against.
It is.
And again, just like safety, you can't be perfect.
But there's a lot of things that have already happened.
And did you think about those?
What's your mitigation?
And the mitigation doesn't always have to be technical, but sometimes it is.
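To make that "integrity, not secrecy" point concrete, here is a minimal sketch of one kind of technical mitigation: a bootloader that refuses to trust a program image unless its hash matches the one you expected to ship. This is not from the book; the hook functions compute_sha256 and read_expected_digest are hypothetical placeholders for whatever crypto and protected-storage primitives your platform actually provides, and a real design would verify a signature over the digest rather than a bare hash.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define DIGEST_LEN 32u  /* SHA-256 digest size in bytes */

/* Hypothetical platform hooks: stand-ins for whatever crypto library and
 * protected storage a real bootloader would use. */
extern void compute_sha256(const uint8_t *data, size_t len,
                           uint8_t out[DIGEST_LEN]);
extern void read_expected_digest(uint8_t out[DIGEST_LEN]);

/* Constant-time comparison, so the check itself doesn't leak timing hints. */
static bool digests_match(const uint8_t a[DIGEST_LEN],
                          const uint8_t b[DIGEST_LEN])
{
    uint8_t diff = 0u;
    for (size_t i = 0; i < DIGEST_LEN; i++) {
        diff |= (uint8_t)(a[i] ^ b[i]);
    }
    return diff == 0u;
}

/* Returns true only if the image in flash matches the digest we expected to
 * ship. On a mismatch, the safe behavior is to stay in the bootloader and
 * report a fault, not to jump to the application anyway. */
bool image_is_trusted(const uint8_t *image, size_t image_len)
{
    uint8_t actual[DIGEST_LEN];
    uint8_t expected[DIGEST_LEN];

    compute_sha256(image, image_len, actual);
    read_expected_digest(expected);

    return digests_match(actual, expected);
}
```

A check like this only closes the "someone swapped the image at the supplier" hole if the expected digest itself can't be tampered with, which is exactly the kind of assumption a safety case should spell out.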
And you can learn. Just like with safety, you learn with experience: you build a hazard log, you list all the hazards. And when you learn something new, you put it on the log, and on the next project you use that new hazard log.
There's lots of folks who get frustrated and say, well, I can't be perfect, so I'm going to do nothing.
And the answer is no, you don't have to be perfect, but you shouldn't do nothing.
The best you can do is make a list of things to worry about, to think about, and add to it over time.
Continuous improvement is okay for safety and security if where you're starting from is a pretty robust starting list.
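As a toy illustration of what "build a hazard log and carry it forward" can look like when you keep it next to the code, here's a minimal sketch. Nothing in it is prescribed by the book; the fields, severity scale, and example entries are assumptions made up for illustration, drawn from examples mentioned in this conversation.

```c
#include <stdbool.h>
#include <stddef.h>

/* A deliberately tiny hazard log entry. Real processes track much more
 * (causes, likelihood, verification evidence), but the habit is the same:
 * write the hazard down, record the mitigation, carry the list forward. */
typedef enum { SEV_LOW, SEV_MEDIUM, SEV_HIGH } severity_t;

typedef struct {
    const char *hazard;      /* what can go wrong, by accident or on purpose */
    const char *mitigation;  /* what you do about it, technical or procedural */
    severity_t  severity;
    bool        closed;      /* mitigation implemented and verified? */
} hazard_entry_t;

/* Example entries only; a real log comes from your own design reviews,
 * field incidents, and security analysis. */
static const hazard_entry_t hazard_log[] = {
    { "Vehicle stops and blocks emergency responders",
      "Pull-aside behavior plus escalation to remote assistance",
      SEV_HIGH, false },
    { "Malicious program image installed at the supplier",
      "Verify image digest/signature at boot before running it",
      SEV_HIGH, true },
    { "Denial of service against the safety shutdown function",
      "Local shutdown path that does not depend on the network",
      SEV_MEDIUM, false },
};

/* A release gate might simply refuse to ship while any high-severity
 * hazard remains open. */
static size_t open_high_severity(const hazard_entry_t *log, size_t n)
{
    size_t count = 0;
    for (size_t i = 0; i < n; i++) {
        if (log[i].severity == SEV_HIGH && !log[i].closed) {
            count++;
        }
    }
    return count;
}
```

Keeping even a list this simple under version control makes the continuous-improvement loop concrete: each incident or review adds a row, and the next project starts from the grown list instead of from zero.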
Okay, and forgive me here.
But there's a little whiny part of me that says, I don't want to worry about this.
I just want to make my widget.
Yes, I understand.
And all of us go through that phase in life.
Some stay in the phase.
Some get out of it.
Some go back, right?
At some point, you say, this is just too hard.
I'm going to do hobby projects.
Okay, cool.
That's fine.
Please don't sell them at scale if you haven't done safety. Those are the stakes of playing the game.
If you want to build something that can kill someone, you have to do safety.
If you don't want to do safety, go find something to build that can't hurt someone.
And the thing that makes it feel so overwhelming is, as computers connect to every part of our society and daily life, more and more things become safety critical.
So it used to be safety was this niche specialty that only a few people worried about.
And now safety is everywhere, and it's because computers are everywhere.
Well, there's your problem.
Well, you know, just turn off all the computers, everything would be fixed.
Christopher has been wandering through the house in the last couple of days, as some of our lights are smart, complaining about their failures.
And it's hilarious.
And I'm just like, we could just go back to switches.
It wasn't the lights I was complaining about.
Well, it was everything breaking.
Well, I have fun with this because I'll be in someone's house and the thermostat's not working, and I'll say, oh, I did the code review for that one.
Here, try this.
That happens more frequently than you might think.
It's funny how many things are safety critical.
I mean, I worked on children's toys.
And on one hand, they didn't tend to actively hurt other people.
On the other hand, you still had to make sure that the kid couldn't hurt themselves.
And kids are creative, and they have a lot of time.
They have a lot of time to worry about that, yeah.
They really do.
Well, I'll just say this. As a child, I learned how to reset circuit breakers without my parents knowing that it had happened, and I'll just stop there.
There are some questions.
Like, what were you doing that required that?
No, no, not going to leave it like that.
I'm just going to assume you were licking the light switches
or the light sockets or... anywhere.
Remember, this is back in the day when lots of appliances had vacuum tubes in them.
So we'll just stop there.
Phil, it's been wonderful to talk to you. Do you have any thoughts you'd like to leave us with?
I guess the high-level thing is I think safety is becoming more pervasive, and the advent of
AI and everything is fundamentally changing how you have to think about safety. More people
have to understand safety, and it expands the scope. There's safety, there's security, there's
human computer interaction, there's machine learning, there's legal stuff. And you don't have to be
an expert. It's not reasonable to expect everyone to be an expert. But if you're literate in all
those things, you're much less likely to have a bad surprise when you create a product and
deploy it and realize, oh, there's this huge hole because I didn't understand the fundamentals of some of these areas. So I wrote this book, Embodied AI Safety, in part to help people
get up to speed and make sure they don't have any gaps in those areas. And in part, to set out a
menu of what all the challenges are. What do we learn from the robotaxi experience that applies
to things that aren't just robotaxies? So I hope the book is useful to help people get up to the
next level of what's going to be happening because AI stuff is going to be everywhere. We keep hearing
that. And embedded systems are everywhere, and you put them together. It really changes things in a much more fundamental way than a lot of folks have come to appreciate yet.
Our guest has been Philip Koopman, Embedded Systems and Embodied AI Safety Consultant, Carnegie Mellon University Professor Emeritus, and author of Embodied AI Safety.
Physical copies are available wherever you get your books.
The e-book will be ready for Christmas.
And if you'd like an hour-long preview, Philip has a great keynote talk about the topics covered in his book.
And there will be a link in the show notes to that talk.
Thank you, Philip. It's always great to talk to you.
Thanks for having me on. It's been a pleasure.
Thank you to Christopher for producing and co-hosting.
Thank you to our Patreon supporters for their support.
And thank you to Mark Omo for pre-chat and some questions he contributed.
Of course, thank you for listening.
You can always contact us at show at Embedded.fm or hit the contact link on Embedded FM.
Now a quote to leave you with, from Vernor Vinge's The Coming Technological Singularity: How to Survive in the Post-Human Era.
Vernor says, "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended."
He wrote that in 1993, so we're running a bit behind.
