Embedded - 67: Software for Things That Can Kill People

Episode Date: September 11, 2014

In front of a live audience, Chris and Elecia talk about their experiences with FAA and FDA. This show was recorded live in front of the Silicon Valley Automotive Open Source meetup group at Hacker Dojo. The Wikipedia article on DO-178B is a good place to get an overview of the FAA process (even for other levels of concern). For FDA, their guidance is the best place to start. Also see their 510k information. Finally, note that all class III (3, very high risk) devices require the more difficult Premarket Approval (PMA) process. Everything we know about car safety certification, we learned by reading Wikipedia's ISO 26262 article, including Automotive Safety Integrity Level (ASIL). Jack Ganssle's Embedded.fm episode was Being a Grownup Engineer.

Transcript
Starting point is 00:00:00 Welcome to Embedded, the show for people who love gadgets. I'm Elecia White. I'm Chris White. We are recording live from Hacker Dojo, and we do a weekly podcast about embedded software, hardware, gadgets, engineers, engineering, and, well, laughter. And while we usually have a different guest every week, this week it's just Chris and me.
Starting point is 00:00:29 And we have an audience, which for us is really different. Thank you so much for having us. And I really appreciate Alison Chaiken, a past guest of the show and a listener, as she mentioned. She heard our show about being a grown-up, and it was with Jack Ganssle, one of the absolute heroes of embedded systems. And we wanted to talk more about that.
Starting point is 00:00:55 We care about safety in embedded systems. I've worked on FAA products doing airplane devices, DO-178B, which is: you can crash the plane if you try hard, but you're not likely to have the wings fly off. And I've done two FDA products. One was a DNA scanner early in my career, and one was an operating theater and ICU temperature sensor, which doesn't sound all that important until you realize that if your temperature goes down, something very, very bad may be happening, and quickly. So that was a level two on the FDA list. But Chris does more FDA.
Starting point is 00:01:38 Um, yeah, I don't know if I've shipped more FDA. I've worked at a few startups, probably three or four startups that did FDA-regulated products, various safety levels. The first was probably class two, which is the one where you can injure people significantly but not kill them. And then there were a few more that were of lesser concerns. But I've been through the process a few times, and it's interesting to watch the FDA process from within a startup rather than from within a larger company, because startups, they have different goals,
Starting point is 00:02:16 they have different deadlines and requirements, they have investors who are wondering why you aren't done yet, but you've also got the government wondering why you're going so fast. So it's an interesting line to try to walk. And we're not lawyers. And we don't have any experience with cars. I drive cars all the time. Well, true.
Starting point is 00:02:37 Me too. Okay. We don't have any development experience with cars. You know, I read about DO 26262 in the ACIL levels, but all I have is what Wikipedia told me. We're also not lawyers, so we're just going to talk about what we've done, what we've been through, what was good, actually, and what was bad, and why this matters even if you aren't writing software for things that can kill people. So why are we here? To talk about being grown-up engineers.
Starting point is 00:03:14 All right, yeah. That was one of the things Jack said when he was on the show, that he wanted engineers to be grown-ups because we spend a lot of time hacking things together because that's the fun part. Well, and hacking has gone from a negative connotation in the 80s and the 90s to a positive connotation now. I'm a hacker. I put things together. I make things work, which is good. It's good that people are out there doing stuff. But I like that hacking is cool. Making is cool.
Starting point is 00:03:49 Being an engineer is cooler than it's ever been because of the hacker and maker communities. But there's a difference between engineering and making. And that's the difference of being a grown-up, of having the discipline. If you have kids, you probably would rather feed them ice cream and play with them. I mean, that's way more fun than demanding they brush their teeth and clean their room. I know, I don't have kids. I usually just feed them ice cream and play with them and then send them home. But you have to make them eat their
Starting point is 00:04:25 vegetables. And as engineers, we have to eat our vegetables. Yeah. And I think it's difficult, again, going back to the startup mindset where you're trying to, you're not quite sure what you're making necessarily, and you want to have progress. You want to have something working in a week to say, oh, maybe this idea is going to pan out. And there's a lot of almost delayed gratification that comes when you have to work on something safety critical that, I don't know, a lot of engineers aren't used to. And a lot of the people I've worked with at startups who didn't necessarily have a safety background would come into these companies and they'd be fresh out of school or fresh from a more normal software environment,
Starting point is 00:05:06 and they'd just be shocked that they can't just go, oh, I'm going to modify the code to do this, and we'll ship it. Well, you can't do that. You have to go through this process, which is very lengthy, and you have to document it this way and that way. So there's getting used to not getting what you want when you want it in terms of development and trying to find a way to do the prototypes and mock-ups that you want to be able to do and keep that compartmentalized from the product development so you don't get in trouble maybe we should talk about the actual processes yeah um do fda first uh well I can speak to the FDA software process. Right, and manufacturing is different,
Starting point is 00:05:49 and we should be really clear with that. Well, and the whole product process is a little bit different, but the software process sort of maps onto the product process in some ways. I mean, the documentation is sort of similar. It's all very parallel. For software, basically what they want to see, and the FDA, just to set some guidelines, the FDA has historically been very skeptical of software.
Starting point is 00:06:15 Afraid of software. Afraid of software. They don't, in the last, I think it's gotten better in the last half decade or so, but about 10 years ago, they really didn't understand software development. They didn't understand software. So anything you did in software was given a high level of scrutiny that you wouldn't necessarily be used to from another regulatory agency, maybe. So it was a challenge sometimes to explain what you're doing and what this part of
Starting point is 00:06:42 the software is for and why it is safe. But they have guidelines. They have a document, which we'll link in the show notes, a general guidance document for development. And basically, it starts out with establishing a level of concern. And level of concern is, how badly are you going to hurt somebody
Starting point is 00:07:00 with the device you're making if you screw it up? And they've got three. There's minor, which is no injuries likely. You're probably not going to hurt anybody. You may just screw up some data record somewhere. Moderate, which is minor injury. And this applies to both the operator and the patient. Because you can't hurt the doctor if you do this wrong.
Starting point is 00:07:18 Right. Or nurses. A lot of devices. And the devices I was working on were high-energy lasers, so you could quite easily, and it did happen at the company, injure people or things if you weren't careful. And then there's major, which could cause death or serious injury, and that's usually for things that are like life support devices, respirators, that kind of thing.
Starting point is 00:07:42 So that actually parallels FDA. FAA, sorry, and the ASIL, the automotive ones as well. They're a little different in that the names are different, but similar levels of how much damage can you cause. Choosing your level is about hazard analysis and risk assessment. So you can figure out how many people your product's going to kill. I mean, it is. In FDA, how many people your product is going to kill if you don't do a really, really good job?
Starting point is 00:08:19 Okay. Is that better? No, but that's fine. So FAA level, DO-178, and there's always a letter after that that tells you. Wow, FDA, FAA, I'm so toast. Okay, so E is: it doesn't matter. You know, fine, seatbelt.
Starting point is 00:08:39 Well, not seatbelt, but tray table doesn't go down. That's an E level thing. Who cares? D is down. That's an E-level thing. Who cares? D is minor. It's noticeable. It may be inconvenient, but it's not going to cause injury. C is major.
Starting point is 00:08:54 And that's where you can cause injury, possibly even death to crew members, but not to passengers. And hazardous is fatal to the crew, but not to passengers. And hazardous is fatal to the crew but not to the passengers. Injuries, major injuries to the passengers. And A is catastrophic, a crash. And that would be injuries to everybody on the plane,
Starting point is 00:09:17 death to most people on the plane, and death to people on the ground. Even as they're thinking about that, it's how big is your plane? If you have a little Cessna and you can crash it, that's a different level than if you crash a 747. I think we're all kind of aware of this now, but when I worked in FAA things, it was before people had crashed large airplanes into buildings for... Yeah, well, so cars are very similar to FAA. They have paralleled that whole process,
Starting point is 00:09:56 and there are no catastrophic levels with cars. You can kill people inside your car, you can kill a few people outside the car, but you can't kill whole blocks. And that's where your risk assessment comes in, is how much damage could you cause before you design this properly, before you engineer it, before you think about it, before you're a grown-up about your processes. And that's why we get these systems and these processes that they talk about. And these are, again, very similar between FDA and FAA.
Starting point is 00:10:30 Right. And a lot of it comes down to documentation. But the documentation... And the kinds of documents you have to produce. Yeah. And with the FDA, the kinds of documents you have to produce are governed by the level of concern. The higher the level of concern, the more stuff you have to write. See, in FAA, you have to write all the documents,
Starting point is 00:10:48 but you don't have to fill in all of the check marks. There are objectives, and some objectives have to be met with independence, which is somebody else has to come in and prove that your traceability or your implementation is what you said it was. Define somebody else.
Well, there are people, auditors, and there are DERs, Designated Engineering Representatives of the FAA. And they go through weeks and weeks of classes in order to be able to afford a bond that will let them basically take the blame if your software screws up. Although it never works quite like that. But there are auditors. And you have auditors too. The FDA has notified bodies; people will come in who will do some testing. But it's not
Starting point is 00:11:38 the same thing. A lot of it is actually self-reporting. And that's what's a little bit scary about the FDA. Even in catastrophic, most of the objectives are self-reporting. And that's what's a little bit scary about the FDA. Even in catastrophic, most of the objectives are self-reported. Yeah. So, I mean, you have to come up with, I think the documents are similar for FDA and FAA, but you have to have a requirement specification. The SRD. You have to have a software design specification.
Starting point is 00:12:01 SDD. You have to have a V&V verification and validation plan, which I don't know what that is in FAA. I have that. SCVP, software cases and procedures. And you have to have a hazard analysis. And the hazard analysis is like the ISIL thing where you go through and, well, this could happen. What are we doing to make sure it doesn't happen?
Starting point is 00:12:21 Or if it does happen, how do we mitigate it? And you go through that whole thing. All these documents have to be linked together too in what's called traceability. And that's the real trick is every document has to flow back. Every element of every document has to flow back. And so you've got all these processes that govern how you document these things. And you've really got to have that in place in your company in order to make progress toward anything that you could ship and that's kind of where startups tend to trip up yeah let's let's all do this the agile way we can ship it in a month if only we all work on it at the same time um but that's really
Starting point is 00:13:01 really hard when you're talking about proving that you thought this through. I mean, we talk about a software requirements document, a software design document, and these require some sort of forethought of what you're building and a definition of what you're building, not necessarily before you start because you give the whole package to the FDA or FAA at the same time. They don't review them as you go along. So you could do an agile development process where you design this thing, you define it, you implement it, and now you backtrace everything. But Waterfall is much easier.
Starting point is 00:13:41 Yeah, and to your point about the three-letter agency, whichever they are, looking at your documentation: sometimes they don't. I mean, the FDA is not going to read your software design specification. They're not going to read your software requirement specification. And they're not going to come back with comments on it. They're going to look at the main documents of the 510k that surround your product and your clinical studies. The only time they're going to look at that stuff is when you screw up. And then they're going to look at it
Starting point is 00:14:11 and then they're going to come into the office with sidearms and they're going to sit you down and they're going to go, so what's about the line 187 of your software requirement specification? How did that trace into a particular line of code? Yeah, exactly. Show me how this works. Show me, show me, show me.
Starting point is 00:14:27 And then they'll come out. And if you're lucky, they'll give you a set of corrections that you have to make. If you're unlucky, they'll tell your company to stop working. Entirely. Yeah.
Starting point is 00:14:35 So that's where the self-reporting is kind of a double-edged sword. Because yeah, great, okay, we can state all of these things about our product and how we've done all this, and we can claim that it's safe, and the FDA is not going to stop you. The process doesn't make it good. It just makes it so that it's been through the process.
Starting point is 00:14:57 But I want to go down that path a little bit, but you said 510K. Yeah. So explain that. So 510K is the general product process for an FDA medical device or drug. Isn't it usually based on another similar device already having gone through the process? So for a 510K, you try to base it on what's called a predicate device, which is something that somebody else has already done. And you say, well, this is just like those guys' product, but just a little different.
Starting point is 00:15:28 And that shrinks your requirements down by quite a bit. If you have a truly novel product, which you never ever want to do in the FDA, you have to do what's called a PMA, which is pre-market. And that takes a long time. Well, it requires, I mean, for even our little temperature sensor, which wasn't entirely a novel. It was a 510K, but because of the technology, they had to go back and do some of the PMA activities. I mean, we spent months in clinical trials. Well, you're always going to spend months in clinical trials. We spent months in clinical trials proving that the temperature sensing was within the spec that we gave. I see. And it was before we were allowed to finish our software
Starting point is 00:16:12 and process and package. Yeah, so the 510k is where you want to go. PMA is bad. Give an example of 510k. I think while I'm speaking on my part, I'm in the digital health. So I'm trying to learn the whole process of the software development in the digital health with FDA, with HIPAA compliance. And I don't know who else have questions regarding that process. So can you just give an example of what 510K is versus a pre-market category? Thank you. Okay, so the question had to do with 510K and what it is, with some examples with pre-market analysis as the alternative.
Starting point is 00:17:00 Well, I can give the example of the 510K of a product I worked on, which was a laser, a surgical laser, of which there were many others in the market that were already very similar. And this one just delivered the energy in a different way. Yeah, the hardware. And the 510K and the PMA designation apply to the whole product. It doesn't apply to just the software, unless you just have a pure software product. Which I think she does, because she mentioned HIPAA compliance and digital health.
Starting point is 00:17:31 Yeah, for a pure software product, I'm not quite sure how the distinction works there. I would expect almost all pure software products to be 510K, or at least easily argued that they should be by a competent regulatory person. Yes, because record management is something that already occurs
Starting point is 00:17:50 and the software is not making it more dangerous. Right. Privacy concerns have to be handled. Yeah, the HIPAA side of things is a whole other... We don't know. We do embedded systems. We do gadgets. So I think maybe we should take that offline.
Starting point is 00:18:08 Yeah, yeah. When you were talking about documents, you didn't mention the post documents, the verification results. You have those for FDA, don't you? We have to not only say the cases, but that you passed them. So the V&V document has... V&V, verification.
Starting point is 00:18:25 Verification and validation. It's verifying your design and validating your requirements or vice versa. I probably got that wrong. It's two very similar words with the same letter. I don't... It's very unpleasant. That is a test plan.
Starting point is 00:18:44 And for each line item in there, you execute a test. For every release of the software. For every release of the software, unless you can explain very well with a letter saying, we've only changed this module of the software, so we only executed this portion of the VNV. Which again is fine as long as nothing goes wrong. You can play games, but when you play games, you should be aware that you might get called on it if something goes wrong. You can play games, but when you play games, you should be aware that you might get called on if something goes wrong. So executing that test produces a test report, which goes in the file. But you don't send these things to the FDA every time.
Starting point is 00:19:12 You just put them in a file cabinet somewhere, and when they come in and audit you, they would say, okay, well, show us your latest VNV. FAA, sometimes you have to show that to your DER in order to release it to the manufacturing. I'm just curious to interject with a question. Are there tests coming from the agencies? One of the interesting developments in automotive is the National Highway Transportation Safety Administration has announced that they are going to start evaluating advanced driver assistance systems, ADAS,
Starting point is 00:19:52 essentially the collision warning, blind spot detection, lane keeping, adaptive cruise control. They're going to start evaluating those as part of car safety ratings. And so presumably they will announce what those tests are. I don't know if those tests will be related to the ASIL automotive safety levels or not, but I'm just wondering if any of the tests that you're talking about come from FAA or FDA. For the FDA side, no. I think for a few reasons. One is the product space is so wide that they can't really establish standards for various tests. And also their agency is so overworked and small that they couldn't really keep up with
Starting point is 00:20:28 that. Basically, your tests are what you declare them to be for the FDA. We got pushback with our temperature monitor from the FDA. You might get pushback. You might get pushback. That we had to test that it was within a certain range. Right. But they didn't give us the test.
Starting point is 00:20:42 There's no standard, oh, you're making a blood glucose monitor? Oh, well, then you have to look and do this. What about something, though, like linters and static analyzers for software? I mean, when we deliver our product to our customer, it has to compile with no warnings with the tool chain we're using. This is unfortunately completely up to you. Well, one of the documents you didn't mention was the configuration management, which is a software environment configuration description, which I think is one for you and separate it.
Starting point is 00:21:16 But that's with the document where you say, our process is waterfall. We have a bug tracking system. We compile with no warnings. We run PCLint. Or you don't say it. And if you don't say it, for the most part, that's okay until you get sued. So the governing bodies aren't governing like that. They're asking you to say what your process is, and they're asking you to say that you followed it.
Starting point is 00:21:49 And if your process is not good, your software will not be good. They're not demanding unit tests. They're not requiring that you have to do this or that. With FAA, one of the products I worked on was what fed into a horizon indicator, the thing that indicates whether you're tilted or pitched or whatever. And that had some accuracy requirements in order to qualify for what was similar to the 510k
Starting point is 00:22:21 process for the FDA. In order to say this is a product just like the other one, a predicate, you have to describe how you're going to be just as accurate as the other guy. So those were some tests, but it didn't have tests like, well, are you testing your code to this level? Are you verifying that your bugs are all fixed? If your software environment document said 5% of bugs don't have to be fixed as long as they aren't critical,
Starting point is 00:22:53 eh, nobody's going to complain until something goes wrong. Yeah. So I worked on FAA Part 29, which is a utility helicopter certification. And so in that sense, when they have a certain recommended practices that you have to follow, so it's called a... There's a... Yeah, sorry.
Starting point is 00:23:22 So there's a Society of Automotive Engineering SAE process, Aerospace Recommended Practices, which you can follow. Also, there is a FAR Part 29, which has advisory circulars that tell you how to certify your devices or aircraft. Now, if it's a mission-critical system, then you may have to do some sort of fault tree analysis or some markup chain or something to determine what the failure rate is. And if it's a handling quality-related device, then you may have to work through a working group to make sure that, for example, your first response goes back to where it needs to be within whatever half time, whatever
Starting point is 00:24:14 it is. So if it's a control system, then there's some working group-related parameters that you may have to match. And that's in part mostly because you're saying it is like this other thing. That's right, yeah. And so you have to prove that you're like the other thing. Right, and for aircrafts, it's very, very well defined what those are.
Starting point is 00:24:34 But for cars, you may have to sit down and you may be the ones determining what the rules are. Well, like Allison's question about collision avoidance, that is very likely to be something that everybody does differently. Everybody implements different software for, but it's going to have a set of things. You have to be able to do X and Y and Z.
Starting point is 00:24:55 But these aren't going to be the verification and validation procedures you use inside your company. Inside your company, you're going to do something smaller. You're going to be twice as company, you're going to do something smaller. You're going to be twice as accurate and you're going to define it as doing it in this parking lot with this car under these conditions. It has to be repeatable,
Starting point is 00:25:16 easily repeatable because every time you change your software, you might have to repeat the test. So there's likely going to be, if it's a user activity involved, then there's going to be, if it's a user activity involved, then there's going to be some handling quality type of test that needs to be done. And it may have a combination of objective and subjective
Starting point is 00:25:36 tests that you have to specify, and how you're passing those, and something like that. Thank you. Yeah, thanks. So one of the things we've been talking about is that all the documents, they don't imply real quality. They don't imply grown-up engineers. Process doesn't imply real quality. Process does not imply anything
Starting point is 00:25:58 other than you slogged through it. And that's not good enough. That is not good enough for engineers. That is fine for hackers, but it's not good enough for us. Can I ask a question about this? And you can tell me... Because you can tell I was going off on a rant there,
Starting point is 00:26:19 couldn't you? Yes, I could. And I wanted to tone that rant a bit. And you can tell me, maybe this may not even be the right time in your presentation to talk about it, but I'm cringing whenever I hear waterfall because waterfall is awful.
Starting point is 00:26:36 You need to eliminate... Yes, but the FDA's heard of it. Yeah. I can tell you the Army, the Air Force, they've all heard of it too. And so the point is... It should at least be a spiral or a modified waterfall so you're feeding things back. At the very least, you need to be able to have eliminated the things that require a miracle. You need to be spending some time saying, that part scares me, and you need to be
Starting point is 00:27:06 doing that in a more free-form, Agile-like model to eliminate the, you know, increase your understanding of what it needs to be doing before you go and write the requirements. Totally agree. And you're not advocating against that. No, and every time I've written a software environment specification, I've gone, well, we start with waterfall, and then this is how we're not doing it exactly that way. And we're going to have continuous integration here and this kind of testing going along during development. And here's where we're going to cycle back and check the requirements again. So you say waterfall to make auditors who probably have only heard that as the only process available happy, and then you say, well, but we're really doing it this way.
Starting point is 00:27:49 And so that's what I've done in the past. I've never used Agile in an FDA kind of situation. I'm sure it's possible to adapt. It makes me a little nervous. Well, I've actually worked in environments where sort of waterfall was sort of enforced on you. And so you had to go and write the requirements before you did everything. I've actually worked in environments where sort of waterfall was sort of enforced on you. And so you had to go and write the requirements before you did everything in such detail that when you got down further down the path, you know, the whole thing. Write it again. Well, and you don't even get a chance to write it.
Starting point is 00:28:18 You had to go and track to the requirement that got changed. Exactly. Or it is unimplementable. And so there needs to be some, hopefully, some sanity in all this so that, you know, in general, I was working in environments where the government was paying for each dollar as opposed to being a, you know, in a, I understand from talking to the medical device world, particularly in the startup environment, is one of the ways of handling it is to start off
Starting point is 00:28:50 and not worry about the regulations at all. Make a prototype. And make a prototype. That's the word. Use the word prototype. Right. And so that you can demonstrate that the fundamentals of what you're trying to do work.
Starting point is 00:29:06 And then go and team up with a big company who knows how to do all the I dotting and T crossing. And effectively redo it from that point of view. And redoing it. I mean, that's an important point. That's a huge point, actually. Because I've worked for a few startups in medical devices. And I've come in a few months into their development, and they have a device that does something, and they want more software.
Starting point is 00:29:34 But they've done no quality work. They don't have a quality system. They have no documentation. They have no process. And they say, well, we'd like to get to human trials by Christmas. And that's when you start checking to know, checking to see, you know, if there's a camera on you or something, or, you know, if they're going to fit you for an orange suit.
Starting point is 00:29:53 Engineering jokes. Yeah. And that, so that's, that's the other extreme. That's the extreme where we're just going to wing it. We're going to put something together and we're going to put these scalpels, robot controlled scalpels into somebody's eye and hope for the best. And that's what scares me when I come into those companies and I say, look, you've got to have a minimum quality system, and you cannot test on humans until you're following this process.
Starting point is 00:30:16 So certainly prototypes are great, but they have to be kind of firewalled from the thing you're going to actually touch a person with. Let's take Carl's question, and then I think we're going to take a break from questions for a little while so we can finish some of the things we want to get through. I think I'm going to knock your microphone over here. Okay, thank you. Yeah, I'm Carl Auerbach from Interworking Labs.
Starting point is 00:30:38 Well, actually, I think I've worked on a system that's higher than A. One of the first things I've worked on was carrying nuclear launch codes. That'd do it. Oh, yeah. So we didn't want that to go wrong. It was 0, 0, 0, 0, if I remember correctly. Yeah. system is higher than A. One of the first things I worked on was carrying nuclear launch codes. That would do it. Oh, yeah. We didn't want that to go wrong. 0000 if I remember correctly. Yeah.
Starting point is 00:30:50 The CRM whatever on the airplane. Anyway, what I'm curious about is fallback and cross checks. I'm reminded of the story of why are traffic signals red and green. It's because the original railroad signals were white and red until the red lens fell out one afternoon. We talk about embedded systems. Well, they're embedded systems. They've got to make their intent known through some
Starting point is 00:31:15 external means. I mean, they don't do it themselves. How much do we have to wrap our systems in cross-checks, like making sure that we don't exceed a particular signaling level, say, of 10 or something like that? How much do we have to wrap our systems in cross-checks, like making sure that we don't exceed a particular signaling level, say, of 10 or something like that? How much do we have to put into our software, counter-software checks, to make sure that our outputs are sensible? What sort of design rules have we come up with to code things?
Starting point is 00:31:40 For example, in my testing, we're always finding that people leave the unsigned keyword off C code for integers in networking. So just sort of what's the context all this works in? Particularly the issue of cross-checking. Yeah. Well, I mean, in FAA, I learned about things that vote, where you put it, you know, a plane is really expensive. You just go ahead and toss a couple of them in there and two of three win, you know. If one of them goes bad, you just vote,
Starting point is 00:32:14 which to me is mind-boggling. I think you're skipping ahead a little bit because that's hardware. I also remind you that I worked on a system where counting votes is hard. Exactly. Especially if you're doing things like attitude and roll pitch and yaw. Especially when one of your computers has failed
Starting point is 00:32:31 and you're counting votes between an even number rather than an odd number. So one of the approaches I've used in the past for safety-critical systems, and this is going to be funny, is to keep it out of the software. Signals especially. Put signal monitoring and level monitoring and safety monitoring of sensors and safety-critical levels in hardware. If it can't go above 10 volts, then there should be a fuse.
Starting point is 00:32:56 And the software may monitor the hardware so it can throw an alarm, but if it's a truly safety-critical thing, the hardware should be autonomous to shut something off if it's exceeding a level. And so that can go in an FPGA if you're being fancy, or it can go on a board. And in that case, also, you usually have redundant sensors. So you've got two that have to agree, or you fault, or you have three that have to vote.
Starting point is 00:33:18 Is there a principle or rule of thumb we know when to decide when we need this versus when we can get by without? When we need this in terms of the monitoring, or when we need to compartmentalize it? When we need to worry about it in our code, say, oh, I think my code might not be enough. I need an external check. That's the hazard analysis. That's the thing that sets your level. And as you go through and you look and you say, can this feature, this particular feature's malfunction cause injury what is the percent chance what is the risk of it causing an injury how big of an injury and how many people does it affect you do all this multiplication until at the very bottom when you add through your
Starting point is 00:33:57 assessment you're getting down to oh i'm not going to kill anybody this year and that's what or you get to 25 chance and then you go look at that code and say okay i need to wrap this in extra safety systems you find your big numbers and then you wrap those in more testing or hypervisors which is where you like run a virtual machine inside like a like a computer You run a virtual machine inside and maybe you run two virtual machines or three or five and let them all vote
Starting point is 00:34:29 and average their answers or have some testing that says this answer is not within the realm of possibility. Or even beyond voting, compartmentalizing your system so that your UI is on a different processor
Starting point is 00:34:42 from the thing that's controlling your, you know. I think that's controlling your, you know... I think that's a big one for automotive systems. Dangerous blade or whatever. Oh, yeah. The infotainment system should not drive the car. So I think it shrinks your problem space a bit if you can keep all of the dangerous things in one place,
Starting point is 00:34:59 and then you don't have to worry about, okay, what if the general-purpose processor has a problem, our general purpose software operating system takes a fault, what effect does that have on this 25-watt laser? Well, it shouldn't have any effect if you designed it correctly. And at the risk of monopolizing the microphone, how much do we assume that the hardware underneath this actually works? That is a different set of certifications. FDA loves hardware.
Starting point is 00:35:25 There are certifications associated with hardware, and some of them have to do with the design, but there are even more associated with the manufacturer. Because I'm thinking of the old Pentium floating point error. Yeah. I didn't look up the hardware.
Starting point is 00:35:41 It's DO17... No, DO-259. It starts with a DO, has a dash, probably a stroke, somewhere. For the FDA, it's a similar process. You have a V&V. You have things that test each subsystem. That's usually where you have an external body come in and actually do the testing for you.
Starting point is 00:35:59 And is that a once-through hardware testing? Pretty much. But there is constant manufacturing time. There's manufacturing tests, but that's not usually linked to the same, it's not the same test necessarily. No, but with FAA, there's a whole set of manufacturing process
Starting point is 00:36:18 to ensure that you are building what you say you're building. Well, that's the whole key. That's the whole key. That's what they're trying to establish with all these processes is, are you building what you said you were going to build and does what you say you're building. Well, that's the whole key. That's the whole key. That's what they're trying to establish with all these processes is, are you building what you said you were going to build? And does what you build do what you said it did? So Carl's question actually did link into what I wanted to talk about next, which was, how do we get to real quality? Who knows, man? Oh, sorry.
Starting point is 00:36:41 No, that's not the right answer. I think compartmentalizing is one of the good ones. Making sure that your safety critical systems are separate. Separate code, separate hardware if you can afford it, separate processors. And that lets you spend more time verifying those and less time verifying the things that are warm fuzzies for your customers, that aren't critical to the safety of the people around you. The voice feedback from the user interface. The voice feedback.
Starting point is 00:37:20 Sorry, pains of previous jobs. I know you have a laser, but could it speak? You did get that, didn't you? Let's see, so there's dividing into modules with some firewalls, making simple, direct APIs between the different modules, especially the safety-critical ones,
Starting point is 00:37:43 and working on small simple hardware focused solutions which actually cars are really good at that they already have an enormous number of processors associated with them that's one of the areas that i think fda could learn things from faa there are a lot of processors in each plane, a ridiculous number of processors in each plane. Do they have standard networking like CAN on planes? Is it actually CAN or is it something else? ARINC. All right.
Starting point is 00:38:13 You can get away with CAN on some parts, but ARINC is a higher voltage thing. Higher voltage thing. Higher voltage thing, yeah. Less susceptibility to lightning strike uh so you you had an example of hardware versus software yeah well this was the fpga example again which i mentioned briefly but we had a system that had to control motors uh for a scanning device it had to control power output from a laser and to monitor the power output from the laser
Starting point is 00:38:45 because it was delivered in pulses that were... You could adjust the pulse width from the user interface, so you had to make sure that you were actually delivering what you were commanding. And all these things were there. And it could conceivably have all been done in software. And I think, actually, for our first version of the product, it mostly was.
Starting point is 00:39:04 For the second version of the product, we put... But you had a lower laser on that first product. No, it was the same. Oh, all right. It was the same. Too bad for the first teeth. It was not a good... Just to digress for a second,
Starting point is 00:39:16 the first product was one of those don't ship it things that they shipped. It was... They pieced it together out of some parts. I swear to God, it had vacuum tubes in it. And this was not very long ago. They pieced it together out of some parts. I swear to God it had vacuum tubes in it. This was not very long ago. It had BNC connectors for things and banana connectors. It had a lot of blinking lights because the engineer liked blinking lights.
Starting point is 00:39:34 I hope he's not here. That's the problem with Blink-A-Mal. They shipped it, and wouldn't you know it, it had a ton of problems and didn't work very well. And so they kept coming back. And so we started a project to redesign it. And one of the things to redesign it was to put a lot of the safety monitoring stuff out of the software, out of analog electronics that the software was sampling with analog to digital converters,
Starting point is 00:40:01 and put all that in an FPGA. And the FPGA would simply, through registers, report values to the software. Then software could say, yeah, you've had a fault. The FPGA could autonomously say, shut the laser off. We're having a problem. And that was good because FPGAs can be tested in slightly different ways than software.
Starting point is 00:40:21 It's a much more controlled environment. You don't have an operating system and a user interface and multi-threading and all these things happening. That's an example of compartmentalizing, but the FDA, how do they feel about FPGAs? They thought it was hardware, so they just ignored it. Pretty much. I mean, that was, I'm being a little flip with that, but it was not covered under our software specification at all it wasn't covered under vnv it was part of the normal test plan and so you'd test the safety systems you pass them they weren't going to look inside to how the
Starting point is 00:40:56 vhdl was implemented quality systems do not imply quality it was, yes, but it didn't have to be. It was good because you guys made it good, not because it was good by nature. My understanding is they made a distinction between CPLDs and the FPGAs too. Are you familiar with that?
Starting point is 00:41:20 I don't recall that. They've become more cognizant of FPGAs. Yeah, when I did this it was about five or six years ago, so it may be that they're starting to catch on. What they were doing, yeah, is they were saying FPGAs are more, less deal-withable than CPLDs. Right. And as you know, you can overlap them pretty well.
Starting point is 00:41:39 Exactly, it's just getting two CPLDs out there. You might choose to use a CPLD because it makes them happier. At one point, we were going to put an ARM core on the FPGA. We didn't do that because we needed some control stuff, and we were scratching our heads trying to figure out, okay, where does this fall under? And you don't know. So we're talking about ways to get to good quality.
Starting point is 00:42:08 What else? Testing. I have the list. Testing. Yeah, so we mentioned Agile. And Agile has two main components to it. There's the, we will continuously show our customer what we're doing and we'll get feedback all the time.
Starting point is 00:42:27 And then there's the test-driven development. And there is nothing that marries these two things together. Absolutely nothing. And I am a huge fan of test-driven development. I think it was Jack Gansel or maybe James Grenning who wrote the test-driven development in Embedded Seabook who said that the way things are now with waterfall, if you spend 50% of your time debugging, does that mean you spend the other 50% of your time bugging?
Starting point is 00:42:55 Which yeah, exactly. Maybe we shouldn't wait until the end to debug. Maybe we should write unit tests and integration tests and regression tests. It is so much easier to find a bug if you've already found it 10 times using the computer. And the computer says, this test no longer passes because you changed over here when you didn't realize it had an effect down here. And so tester in development is awesome. I really, even if you start out not doing the big frameworks,
Starting point is 00:43:32 not doing all of it, just today's code, you know? Here's the problem. Here's the problem. There's always a problem. The problem is the owner-CEO-investor who doesn't know anything about computers. Then he's not going to know you wrote a unit test. He's going to ask why your prototype isn't ready. And he's going to be very angry when it isn't ready in the two weeks that he gave you to do
Starting point is 00:43:56 a six-week prototype that you spent four weeks doing test-driven development. I'm not saying that test-driven development is bad. I'm saying it's sort of a utopian ideal to actually get stuff like that done in a real company where somebody's pulling out their checkbook and telling you to get stuff done. How do you deal with that? It is tough to deal with, especially when you come into a company
Starting point is 00:44:20 and you have a whole bunch of other code to deal with. I mean, do you write tests for it? You could spend all of your time. I'm talking about a company where I came in and there was no code and it was from scratch. I find test-driven development quite a bit faster. And maybe it's because my tests aren't enough, but they test the basic function.
Starting point is 00:44:40 As we go through FDA process or FAA process, and you get this idea of traceability, and you have a requirement, and you have this goes to a design, and this goes to a piece, just, you know, for myself in the future, what it does, and then write a test that says this function does what I thought it should. I went to James Grennan's class, and he talked about writing the test at the same time, I mean, line by line at the same time. Now, I was not convinced by that. But function by function?
Starting point is 00:45:24 Yeah. I still think it's unrealistic. I still think you're going to end up in that situation where you need a prototype, you need it fast, and you're just going to write code and you're going to do it. And you're going to be faced with, I've got a prototype that works. Now it's time to make this the basis of the product. And maybe that's the time to start going back and writing tests, but I... You would never go back to write tests. Nobody ever
Starting point is 00:45:49 goes back, unless they have to. Then I guess we're doomed. You are. Okay, so... You've never been... You honestly think that full test-driven development is faster to a prototype than just hacking something up. Yes.
Starting point is 00:46:06 Alright. If you want it to work more than once. If you're hacking something up, you know, making lights for your bike and you want it to work whenever you're willing to put the time in to make it work, hack something up.
Starting point is 00:46:22 Yeah. I'm just talking about that situation where a guy comes into your office on Wednesday and says, hey, we got so-and-so from Kleiner Perkins coming in on Friday afternoon. It'd be really great if this worked. Wink, wink, nudge, nudge. And that's what being a grown-up is, is not falling to that, to pushing back.
Starting point is 00:46:45 You have to be willing to lose your job. back. You have to be willing to lose your job. Yes. You have to be ethical. You have to be willing to stand up. And maybe test-driven development isn't where you choose. Maybe that's not the line in your sand. I don't think that's the hill to die on. Yeah, maybe not.
Starting point is 00:47:00 But you should have one. No, and I agree. And I like test-driven development. And I think when you're in product development, that's the right way to go. But I think when you're in that first stage of testing the concepts of things, because you're going to throw away a lot of that code,
Starting point is 00:47:18 which means you're going to throw away a lot of tests. And so you're actually going to be doing twice as much throwaway work, possibly. You think it's twice as much but it's not. It's faster because you actually know that this works before you go on to the next one. But I may not care that it works that well.
Starting point is 00:47:36 That well? It's a prototype. It has to demonstrate a concept. It has to demonstrate a concept. It doesn't have to be free of memory leaks. It doesn't have to be bounds checked. Go ahead, Bob. I'm being devil's advocate here. In this area,
Starting point is 00:47:52 isn't the party that's not being a grown-up, the Kleiner Perkins that's coming there and saying that okay, I've got a prototype, then I need to have something by Christmas? I mean, the reality is I'm actually agreeing. You need to have something by Christmas. I mean, the reality is I'm actually agreeing. You need to have something that shows that it works.
Starting point is 00:48:09 But then after you show that it works, you then actually have to go back and do all this stuff. And you're saying they're not letting you do that. Sometimes, but usually they will. I'm saying nobody bothers. She's saying you'll never do it because you'll be lazy. There will always be a new crisis. I have gone back and done it.
Starting point is 00:48:28 So, you know, I think that's the piece of being a grown-up, is taking your lumps and going back and doing it. But, you know, Kleiner Perkins has a billion dollars, so... You guys want to say anything about liability and the role that plays in all this?
Starting point is 00:48:47 Yeah, I don't know. So this is probably not something we're going to be able to answer very well. I don't think that individual engineers end up with a huge liability. No, but C-levels do. But executives definitely do. Well, then we asked with the, the horizon display,
Starting point is 00:49:08 we asked our, our designated engineering representative from the FAA about liability. And he said, he said, uh, that if a plane crashed, even if it was our fault and somebody could prove it, our company was so small and so poor that they would go after the plane manufacturer. So that doesn't make it right, but the liability is more likely to be burdened onto the people who have the money. And criminal liability. Criminal negligence, on the other hand, can belong to everybody. on to the people who have the money. And criminal liability.
Starting point is 00:49:49 Criminal negligence, on the other hand, can belong to everybody. That's certainly possible. I mean, you can probably write some really bad code if you wanted and end up at least in court testifying, which is not somewhere you want to be, I think. And that's happened. I mean, I know people who've had to testify. I'm back. And that's happened. I mean, I know people who've had to testify, so. I'm back.
Starting point is 00:50:10 I'm mostly techie. However, I am one of those evil people. I'm a lawyer. Excellent. Maybe you can answer this question. I wouldn't be so sanguine about you not being at the defendant end of the table when something bad goes wrong. And I was listening thinking i was wondering what these certifications from the various agencies are going to do will they protect me from civil or criminal liability or not and i i couldn't come up
Starting point is 00:50:36 with an answer i think we're kind of all running kind of naked do you know of any precedents in terms of liability yeah neglig well. Where they've gone after individual contributors or... Well, this is a good reason to do the best you can and to understand what your options are, to be aware of what
Starting point is 00:50:55 the state of the art is for the people surrounding you so that you can say, look, you couldn't have expected me to do any more. Well, it sounds like you should say no to Kleiner Perkins when Nick says that.
Starting point is 00:51:08 Yeah, and I think it comes down to negligence. I think that's the kind of thing where they could come after you. Well, let's see. We have a little bit about ethics, and I want to make sure we talk about that. Okay. Do you want to hold for just a minute, and we can finish that up? Go ahead. No, you were going to go ahead. I was going to go ahead. Ethics. Um, well, I think we've already been talking about this a little bit, but in these kinds of products where you're making something that can hurt people,
Starting point is 00:51:44 uh, it's a different thing than making a video game. It's a different thing than making a consumer product. You have real people being affected in possibly negative ways, and that's a different frame of mind for engineers sometimes, at least people who haven't experienced that before. So there's a real ethical component to this that's different than, you know, I only worked four hours today even though I booked eight, you know, I'll just go home. That's, that's a different ethical question than, well,
Starting point is 00:52:13 the boss says we need this to do this. So we're going to, we're going to change this safety system to be 5%, give it a little 5% less headroom, and we'll get some more power out of it, and it'll probably be fine. And there's value judgments like that that are going to come up a lot when you're developing a safety-critical product. And it's not just a matter of we built this thing and we make sure it's safe. You're going to have percentages. You're going to have judgment calls on sort of what distance is there between this working and this not working. It's not just a matter of, okay, this system is healthy,
Starting point is 00:52:56 this system is not healthy. Well, for example, you went through and you did all of the V&V, you did all of the tests all over again because you had new software to release you get to the very last day and you realize you forgot to change the version number well that's okay
Starting point is 00:53:13 and so do you hex edit it? no do you change the executable or do you go through all of the VNV again? for that kind of thing you rebuild with the new version number and you write a letter explaining why it doesn't matter. As long as nothing else changed.
Starting point is 00:53:31 I kind of trust that computers are at least going to produce the same code if you change a version number. Unless you've got code in there that says, if version is this and this, then do that. That's different. But that's where you have to make a judgment call. But the way compilers work now you can't just i mean before when when i was faced with this problem i actually did recompile and then do a diff on the hex but there are a lot of things
Starting point is 00:53:56 that change now just because compilers are so freaking complicated. That particular example I'm okay with. Yeah. But you do have to figure out where your line is. Yeah. And it isn't about the product. It's about you. It's about the whole, you know, there was the car manufacturing quality movement
Starting point is 00:54:18 where they allowed all of the workers to stop the line instead of this mentality that the line has to go on all the time the the manufacturing must continue they gave the workers the power to stop it so that they could fix what needed fixing whether it's their station or the one before them everybody is in charge of quality and it's not as much fun as goofing And it's not as much fun as goofing off. It's not as much fun as blinking lights or hacking it together or seeing new features work. I mean, it's not as much fun as signal processing for sure. But it is down to each person who works on the system to say, no, this is not good enough. I will not let my software hurt people.
Starting point is 00:55:11 And so... And pushing back on people above you who are trying to get you to cut corners. And that's always going to happen. Always, always, always going to happen. And you have to be willing to say no. And that can be difficult. And I would recommend if you can't say no to people
Starting point is 00:55:29 who are asking you to do things that you're uncomfortable with, that you don't work in a safety-critical industry. It's one of the reasons that many of the safety-critical industries pay better. Because, you know, you are accepting some liability here. Whether it's criminal negligence or yeah never got paid better than any of those companies let's talk about that later sweetie wrong company uh and let's see i have one more thing about measuring performance and then i think we're open for questions okay uh and this this came back to you know know, I mentioned the Jack Gansel show.
Starting point is 00:56:09 He was a huge proponent of you should measure performance. Whether you're using bugs per line of code, which is not a great metric, but it's a metric. You should use something. And you should improve every day, every month, every year. And I take that personally. You know, it's not just about what my company is doing. It's that I want to be a better engineer. I want to keep improving.
Starting point is 00:56:35 And, you know, sometimes it's just reading a book or reading Spectrum magazine. It's making sure that things are getting better around me and in me. Or hacking into upper management's emails to make sure that they're not doing anything illegal or unethical. I didn't think you knew about that. Questions. Questions that don't involve email hacking.
Starting point is 00:56:56 Because that never happened. Never. No, it was the other way around. Upper management hacked other people's emails. Never mind. I wanted to ask a question about what role longevity plays in some of the, I suppose it's more appropriate to the hardware side of things.
Starting point is 00:57:15 And, you know, I guess an abstract example might be Segway. They had triple redundancy, if I recall, in there. And it was likewise probably three times more expensive than it needed to be to be a commercial success. But 10% of accidents are attributed to mechanical failure. So if we have vehicles, I don't know how it relates to planes versus cars, but we're holding onto our vehicles for a decade or so. What role would longevity play in determining who's at fault in some of these scenarios? That's really related to FDA.
Starting point is 00:57:57 FDA? Well, yeah. Well, I guess it's related to all of them. I was recently asked to modify a system that had been FDA certified that had a processor that was actually older than I am. And since I'm celebrating a 40th birthday this year, that was
Starting point is 00:58:18 impressive. But they didn't want to change it. They didn't want to make any changes to the system except for this one tiny tweak that they were sure they could squeak through. And I'm like, how are you going to build this? We don't have computers that can compile any of this. And they were like, it'll be fine.
Starting point is 00:58:37 And I didn't take that contract. So I don't know what happened, but it was, yeah, longevity. I think it comes down to the type of product you're talking about, too. For planes, of course, you want them to last a long time and be maintainable forever. On the FDA side of things, there's plenty of products out there that the company would love to have last a year so they could sell you a new one. And the failure mode there is it just doesn't work anymore. Not that it fails catastrophically. I worked on a product that actually had consumable elements that sort of wore out after a couple of uses and you had to buy a new one. And
Starting point is 00:59:18 they were basically pieces of plastic, each with a chip with a use counter in it. And the idea was, you know, you'd get continuing revenue that way. So that was a different approach to longevity. Certainly for, you know, class three equipment, things that go into hospitals, things that are life-sustaining kinds of devices, I think, yeah, designing for longevity and having redundancy is a big part of it.
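For what it's worth, a minimal sketch of that consumable-counter idea, with made-up names and limits; read_counter and write_counter stand in for whatever bus the real chip would speak (1-Wire, I2C, and so on):

```python
# Hypothetical consumable with an on-chip use counter (illustrative,
# not any real product's design).
MAX_USES = 3  # assumed limit; the real number is a product decision

def consume_one_use(read_counter, write_counter):
    """Burn one use, refusing to run once the consumable is spent."""
    uses = read_counter()
    if uses >= MAX_USES:
        raise RuntimeError("consumable exhausted -- replace it")
    write_counter(uses + 1)  # record the use before the procedure runs
    return MAX_USES - (uses + 1)  # uses remaining

# Example with an in-memory stand-in for the chip:
chip = {"uses": 0}
left = consume_one_use(lambda: chip["uses"],
                       lambda v: chip.update(uses=v))
print("uses remaining:", left)
```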
Starting point is 00:59:44 I do not think there are standards. I think the FDA certainly would look for redundancy, but there's no, I don't think there's any written standard. Sarah. Okay. Let's say something, something goes wrong and you do end up killing somebody. After the fact, what is it that they are going to do other than, say, look at your documentation? Will they ask you for a snapshot of your build so you can go back and verify what compiler you were using, whether or not there was a compiler bug, so on and so forth? Well, one of the things we haven't talked about is document control.
Starting point is 01:00:27 You don't have to go back and get a snapshot. Once you've released something, your whole package, including usually your compiler, goes into, you know, it gets burned onto a CD. The first time we did this at one company when we had the first release, the regulatory guy came in and said, so do we have enough paper to print this out?
Starting point is 01:00:48 Because he thought we were going to print out the whole source code for the whole system, top to bottom. Like, no, man, we're just going to burn it to a CD and we'll put it in the package. But yeah, you have a snapshot of absolutely everything to recreate whatever you've got released. We saran wrapped a computer.
Starting point is 01:01:04 I mean, it was the safest, easiest way to make sure that it was buildable and that we could prove that this was what we did. And it was the golden computer. $2,000 for that insurance? Sure. So they're going to look at that. They're going to look in detail at the circumstances.
Starting point is 01:01:22 They're going to ask you why it failed. And you're going to have to do an investigation internally and come up with some plausible answers for why the thing happened. And, you know, if it's one death out of, and this is going to sound cold, out of 10 million uses, that's different than, well, you've had this thing out for a week and you've killed two people already. They're more likely to, you know, put the hammer down real quickly in that case, I think. And I am a firm believer
Starting point is 01:01:50 that the engineers are not just cogs; we have to push back when we need to. But the first person is going to be the CEO. He's the one that's going to get the question asked. And, you know, and then he's going to go to... No, he's not going to know. The VP of engineering or whoever's next down, until maybe it trickles down to you,
Starting point is 01:02:08 but maybe it gets handled above. And sometimes, and this is going to sound terrible, sometimes these things get handled with bags of money. Yeah. And it's going to depend. A death is one thing. If somebody's injured by a device, sometimes the lawyers get there before the FDA and that person just never complains. And
Starting point is 01:02:31 suddenly they have an island somewhere, but you know, that doesn't work for startups, obviously. And that's not, that's not to say I encourage that kind of behavior, but it's a weird world out there. Well, I mean, even in cars, there was the Toyota unintended acceleration. Michael Barr proved that in their millions of lines of code, they had hideous process, hideous code, tens of thousands of global variables. It was reproducible in a fixed environment.
Starting point is 01:03:05 And were all of those unintended accelerations due to this bug? We can't prove it. It wasn't logged. We'll never know. And Toyota ended up paying $1.2 billion. So a million lines of code, $1.2 billion. That's $1,200 a line. You'd think they would have paid the engineers
Starting point is 01:03:28 or gotten more engineers or given them a little more time for that sort of $1,200 a line. I think you need different engineers if you've got that kind of code, though. That's my new rate. I'm sure you'll get a lot of work with that. Yeah, so sometimes the liability doesn't come down to you,
Starting point is 01:03:48 but somebody gets it. Hello. Hello. Can you hear me? Yes. Good. A while ago, there was a talk in the autonomous driving meetup group from Bryant Walker Smith at Stanford
Starting point is 01:04:03 about the ethics of programming cars with if-then statements, self-driving cars. Now, increasingly, there's a beautiful technique called machine learning and deep learning there, where there is no if-then statement for making decisions about running over the grandma or the little child. What happens to liability in the context of just looking at a bunch of weights in a neural network rather than if statements that somebody consciously designed? So I think I get this because I've done the machine learning. There are best practices for weighting those neural nets,
Starting point is 01:04:44 for doing the Markov chains, for weighting your SVM parameters, and you should be following them. And if you can't say, this is how I trained it, this is how that case works into my existing cases, then you are probably not following a process that says I should be able to trace back through my traceability matrix how everything works. Neural nets, in particular, are going to be hard to get through a standards body. The other forms of machine learning that are more linear algebra-like are easier to push through, because it's a little easier to say this did this because of that.
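A minimal sketch of what "this is how I trained it" could look like in practice: a provenance record saved alongside every set of trained weights. The field names and files here are illustrative assumptions, not any regulator's required format:

```python
# Hypothetical training-provenance record: enough to tie a set of
# trained weights back to the exact data, parameters, and seed that
# produced it, so a reviewer can trace how the model came to be.
import hashlib
import json
import time

def training_record(dataset_path, hyperparams, seed):
    with open(dataset_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {
        "trained_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "dataset_sha256": digest,       # pins the exact training data
        "hyperparameters": hyperparams,
        "random_seed": seed,            # needed to reproduce the weights
    }

# Example usage with made-up inputs:
record = training_record("training_set.csv", {"learning_rate": 0.01}, 42)
print(json.dumps(record, indent=2))
```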
Starting point is 01:05:30 Yeah, as a lawyer, can I interject about how the law is going to look at it? Sure, Carl. There's going to be basically two approaches. One's negligence, which is, did you meet some sort of standard of care? And that standard of care is shifting. As we get better at building these kinds of systems,
Starting point is 01:05:47 that standard is going to increase. It's probably right now kind of mediocre, but it will improve. The other standard is one called strict liability, and that's based not so much on negligence and did you do right. It's they built it, they were in a better position to make sure it worked right than the victim, so they pay no matter what. It's called strict liability for a reason, and that's the two ways the law will approach this.
Starting point is 01:06:12 Yeah, and I'd add that the FDA is just starting to come to terms with things like this with robotic surgery. You know, things like the DaVinci, the doctor always has control, even though it's an assisted device. But people, a couple of startups I've worked at are starting to explore the idea of, well, let's just make this completely autonomous. We know where this particular feature is. We want to strike with a needle or a scalpel or a laser. Let's just let it do that. And that's sort of uncharted territory. And that's not necessarily machine learning.
Starting point is 01:06:47 It could be simple heuristics, but it's still, okay, the doctor is now the computer. And that's going to be an interesting field in terms of regulatory, because you are going to have problems, and who are you going to blame? It's going to come back to the company that made the thing, that made the decision.
Starting point is 01:07:07 There will always be a cosmic ray out there to mess up your software, always. This talk has encouraged me to get out of this business. No, no, it is, I mean, it is a great business, and for all that you can kill people, you can also save lives, and that's why I don't want to get out of the business. I think we're about out of time.
Starting point is 01:07:29 It doesn't look like we have too many more questions except for those of you who are waiting for us to stop recording. Do you have any sum up thoughts? Be careful. All right. All right. Thank you, Christopher, for chatting with me and for producing the recorded show.
Starting point is 01:07:49 Which is actually recorded this time. It only happened once. I can see the little lines, so it's good. Thank you to Hacker Dojo in Mountain View for letting us have the time and space. Thank you to Alison Chaiken for arranging this under the auspices of the Silicon Valley Automotive Open Source Meetup Group, which meets pretty regularly; check them out if you're in the Bay Area. And all of the door donations go to the San Jose State University Formula SAE team, where they build a car every year, which sounds pretty cool. And thank you very much
Starting point is 01:08:27 for our live audience for coming out and asking us questions and supporting us. It's great to see you, so thank you. And for those of you who weren't here,
Starting point is 01:08:40 thank you for listening. Hit the contact link at embedded.fm or email us, show at embedded.fm. Every week I do this little final thought that's a quote, which is completely ridiculous,
Starting point is 01:08:52 but I like it. And this one's going to be from Victor Hugo. And that is, "Initiative is doing the right thing without being told."
