Embedded - 519: The Password Is All Zeros

Episode Date: January 23, 2026

Mark Omo and James Rowley spoke with us about safecracking, security, and the ethics of doing a bad job. Mark and James gave an excellent talk on the development of their safecracking tools at DEF CON 33: Cash, Drugs, and Guns: Why Your Safes Aren't Safe. It included a section on the lock maker's lawyers bullying them and on how the Electronic Frontier Foundation (EFF) has a Coders' Rights Project to support security research.

As mentioned in the show, the US Cyber Trust Mark baseline has a very straightforward checklist: NISTIR 8259 is the overall standard, NISTIR 8259A is the technical checklist, and NISTIR 8259B is the non-technical (process/maintenance) checklist. Roughly, the process is NISTIR 8259 -> plan/guidance, NISTIR 8259A -> build, NISTIR 8259B -> support. We discussed ETSI EN 303 645 V3.1.3 (2024-09) Cyber Security for Consumer Internet of Things: Baseline Requirements and the EU's CRA, the Cyber Resilience Act, which requires manufacturers to implement security by design, ship with security by default, provide free security updates, and protect confidentiality. See more here: How to prepare for the Cyber Resilience Act (CRA): A guide for manufacturers.

We didn't mention Ghidra specifically in the show, but it is a tool for reverse engineering software: given a binary image, what was the code? Some of the safecracking was helped by the lock maker using the same processor as the PS4, which has many people looking to crack it. See fail0verflow :: PS4 Aux Hax 1: Intro & Aeolia for an introduction.

Mark and James have presented multiple times at Hardwear.io, a series of conferences and webinars about security (not wearables). Some related highlights: 2024: Breaking Into Chips By Reading The Datasheet is about the exploit developed for the older lock version on the safes discussed in the show. USA 2025: Extracting Protected Flash With STM32-TraceRip is about STM32 exploits.

Transcript
Starting point is 00:00:06 Welcome to Embedded. I am Elecia White alongside Christopher White. This week, we are going to talk to safecrackers Mark Omo and James Rowley. Hello, Mark. Hello, James. I'm excited to hear about my new life of crime. Hey, hello. Glad to be here. Mark, could you tell us about yourself as if we met at Supercon lunch? Oh, as if I was the kind of person to introduce myself. Yeah, so my name's Mark. I'm the director of engineering at Marcus Engineering. I work on military, medical, and aerospace devices,
Starting point is 00:00:46 and I also do lots of cool embedded security stuff for those as well. And James, we've never met. Could you introduce yourself? It's true. Well, I'm James Rowley. I kind of cursed myself into being a jack of all trades, but these days I mostly do embedded software engineering and embedded software reverse engineering.
Starting point is 00:01:11 Do you sometimes just go back and forth, like you engineer something and then you unengineer it? I've never had to yet reverse engineer something that I've done, although I guess that's a good safeguard against losing the source code or something. I do hit the backspace key pretty often when I'm coding, so I'm not sure if that counts. It's backwards engineering. Mark, do you want to do a statement about only talking for yourselves instead of other people? Or organizations?
Starting point is 00:01:44 Yeah. So all the work we did on this stuff was all personal from James and I. It wasn't affiliated with Marcus Engineering, the company that we work for. And now Lightning Round, are you ready? I was born ready. Ready. What is the combination for your safe? I'm not telling you
Starting point is 00:02:05 Oh I say it's actually 4 1, 2, 3, 4 right now But I wonder if they have the same safe All right Well, the follow-up And what James is keeping in there The follow-up was where do you keep your safe
Starting point is 00:02:17 Just put, I guess it doesn't Yeah All right What is your favorite Human MurderBot character? Oh, my favorite human MurderBot character Oh, that's a good one
Starting point is 00:02:31 it would definitely be I want to say now I'm blanking on the name not Gareth Garth Garth um oh Garethan Garethan Gareth there you Yeah James are you passing or are you I have not watched murder bot
Starting point is 00:02:48 I've heard it about it I've heard it's good Well it's been a good show Thanks for joining us Sorry Next question Which of these is not a real famous safecractor? Safecractor? No, safecracker.
Starting point is 00:03:08 Frederica Mandelbaum, Johnny Remensky, Linus Yale Jr. or John Bridger? That question is impossible. That's why I put it in here. I'm going to guess John Bridger. That's the one that sounds least like someone who would be cracking a safe. Mark, you're going to agree with that or go with something else? I don't know. Except that and I was like, oh man, I've got to go look at the history of safe.
Starting point is 00:03:30 cracking. John Bridger is correct. He was the safecracker and the Italian job. The other three were real safe crackers in history. It's a lucky guess. What is your favorite processor family, microprocessor family? I can answer that. Pick 18. I love the pick 18. It's so simple. All right. Well, it was a great podcast and thank you for joining us. It's familiarity. That's the what I've had to write the most machine code for. Maybe it's Stockholm syndrome. You have to use that IDE. Mark, are you going to agree with that?
Starting point is 00:04:12 Oh, I'm definitely a pick 32 person. Mips or arm? Mips or arm? Oh, man, you've got to go to the original MIPS. It's not really PIC 32 if it's arm. That's just vortex M0 in a jacket. Exactly. The SAM processors,
Starting point is 00:04:29 are not picks. I don't care what they say. I got fooled into that when they first came out. I got the Pig 32 CM and I was like, oh, it's a new product. No. All right. Then what's your least favorite processor?
Starting point is 00:04:44 The one in the BSP 32. That one was a lot of thoughtful issues. It's amazing. It's so popular. Extensa is kind of weird. I don't know that much about it. Hit a weird spot in the market in a weird time, and, you know, it was the thing with Wi-Fi that was easy, sort of easy to do, right? And, yeah, once you've got a market, people keep coming back to you, like with Pick.
Starting point is 00:05:15 Complete one project or start a dozen. Oh, definitely both. I like to start a whole bunch and then, you know, farm for the one that I actually get done. I agree with that. It's kind of the shotgun approach, see which one. See which one, you know, blossoms. Favorite fictional robot. Definitely got to be Baymax. I absolutely love the human interface design of the inflatable robot. I'd say, I forget its name, but the first robot you meet in the video game, Soma, that is a, it's an experience.
Starting point is 00:05:53 I'm familiar with that game. Very good game. 2015. Oh, okay. And now a tip everyone should know. I always like to say, don't be afraid to reach out and ask people about the things they're passionate about. They're probably just as excited to talk to you as you are to them. I had a phone call this morning.
Starting point is 00:06:13 And he asked me about something. And I realized, like, five minutes later, I was still burbling. And I didn't know how to stop. And then I just, like, abruptly stopped. And he was like, oh, no, I was interested. And I'm like, oh, okay, okay. Yeah, yeah. That's always great.
Starting point is 00:06:29 You ever talk so much, your hands get tingly because you're not breathing right? Sorry, next. James, do you have a tip everyone should know? I have a tip, which is: when you're trying to do something, or trying to build something, or reverse engineer something, you need to believe that it can be done or that you can find what you're looking for. Because if you go into it kind of skeptical of yourself,
Starting point is 00:06:53 I think you're much more likely to give up prematurely. And that's how, you know, when we do things like these safes, we believe we're going to find something, and in this case we did. Okay, that was actually a really good tip. I feel like James didn't get the memo. This was fine. I mean, it was really good; the whole believe-you-can-do-it thing is really important, especially for reverse engineering.
Starting point is 00:07:18 Yeah, but that doesn't apply to me. I don't believe I could do anything. A few months ago, I heard that the two of you gave a talk at DEF CON, with multiple demos on a large stage. And you called it Cash, Drugs, and Guns: Why Your Safes Aren't Safe, which is the most pandering title I could have imagined. Did you actually go to DEF CON and think,
Starting point is 00:07:50 what could we do to make this crowd go nuts? That title was created in a lab. That was... Yes. Definitely there was some creating the title first and then backing it up with actual applications by browsing the internet for random safes with these locks. That's half true. I mean, we definitely like, we were thinking about this is actually bad if somebody gets into a gun safe because of this. Right.
Starting point is 00:08:21 Yeah, and then we found out they're super popular on the pharmacy safes because they're like the cheapest locks that meet all the certification requirements. So all the industrial suppliers use these, you know, by default in all their safes. So that was another fun experience to find out. Okay. But we should step back. There exist locks that go into safes, and many different safes use one particular type of lock. Are we saying the company name? I guess we have to. Securam. And this lock is easily crackable with some small, large amount of embedded software experience? I think that's an interesting way to put it. This is kind of something that we argued with Securam a little bit on, was like, how practical is this? How easy is this?
Starting point is 00:09:21 The way that they frame that, and of course I'm paraphrasing, is, you know, you have to spend hundreds of hours and be an embedded security expert and have all these special tools and lab equipment and stuff in order to do this, which is, in a sense, true, but also once the tool's been created, it's been created. And then it takes, I think I did it on stage in maybe 30 seconds. It was fast. There were two different exploits you talked about in the presentation. Could you describe them? Sure. So the two exploits we talked about, we called the first one, well, I don't know what order we would have, but Code Snatch, which is a physical electronic tool that goes up through the battery
Starting point is 00:10:11 port and hooks onto a debug port inside the part of the lock that's on the outside of the safe, the keypad, and it reads the super code, that's the code with the highest level of permissions, out from the keypad, because it is in fact stored in the keypad. The other exploit we called Reset Heist, and these locks have a procedure that you can do on them, where you put it into a particular mode, and then you call the OEM, you give them a code that shows up on the screen, they give you a code, you type it back in, and it resets all the codes to the default. And I should add, because I always forget to add this, only a locksmith, a registered locksmith, is supposed to be able to make that call. But you didn't need to make that call because you just reverse engineered the software.
Starting point is 00:11:03 Right. So you have one where you walk up to it, you take out its battery, you put in your tool, and then you get a number, you put back in the battery, you type in the number, and now you have an open safe and nobody can tell you made any changes. Right. And then you have a different one where you don't even have to take out the battery, you just walk up to it, type some numbers, use a bit of software to find some other numbers, and then type those in. And now the lock is in factory default, so its password is stupid and you open it. Pretty much. To the second, there is the caveat that if the owner of the lock has changed a couple of the default codes, you have to know those to be able to do that process, whether it's with our software or with calling Securam. So if those have been changed, then you can't do it.
Starting point is 00:12:04 Or if it's been disabled, which it can be disabled, then you can't do it. But the information on how to change those or even that you're supposed to change them, is supposed to happen when the lock is installed into the safe? Yeah, it's quite, I think it's quite hidden and obscure. They have a great PowerPoint or great webinar on YouTube titled Locksmith only with drill points that is for locksmiths about these locks. And in there they say, yeah, these codes are like these technical things that are here, but nobody ever changes them.
Starting point is 00:12:43 Don't bother. There's no security impact. And I would be blown away if even sophisticated users of these had any concept of these extra internal codes that affect the reset process. The data point that I like for that is we bought quite a few locks off of eBay, including locks that were locked and that the seller did not know the codes for. And on all those locks, the codes related to this process were the default, and so we were able to do the recovery process. The recovery process of resetting it back to zero, not the, I want to say, cracking process of
Starting point is 00:13:26 finding the code? That's correct. Although both. Well, where they're applicable, but yeah. Not exactly. The code snatch of actually reading out the code only works on locks made during a certain period. So there's kind of a new hardware and an old hardware.
Starting point is 00:13:50 I forget when the switchover was. I used to know. Anyways, but, you know, the old ones, our tool doesn't work on. I think it'd be interesting to look into them. Really old ones, there was another commercial tool that did the same thing on. And then there's kind of a period where that tool doesn't work, our tool doesn't work. And we've heard through the grapevine that they've made some product change. So potentially the newest locks also it doesn't work on, but we haven't verified that.
Starting point is 00:14:25 So these tools, as you said, as the company sort of said, these tools now they are, they looked very easy to use. But it wasn't like you just spent a few hours playing with this. Right. You did some serious reverse engineering. If I wanted to follow that path, after I got a few dozen instances of the lock, what would be the next step? Well, the first step is always, once you have the hardware, you have to get the firmware off. And for that, we attacked the debug port on these. They have a combined debug and programming port that does both.
Starting point is 00:15:13 and the programming interface does not provide a way to read out the memory. The debugging interface does not intentionally or directly provide a way to read out the memory. But what the debugging interface provides is a method to write into the RAM any data you want, and a method to jump execution to a particular place in RAM. So you can upload a little program. that reads every byte in the entire memory space. It's a unified address space on this part and spits it out over the same serial port
Starting point is 00:15:51 that's used for the debugger. But surely they protected the debugger and you couldn't just type at it. Yeah, it's great. In this part, they actually have a bunch of protections for this. They have a disable bit so you can turn the debug interface off and then you can't send any commands.
Starting point is 00:16:11 And then they even have an interface where you can set a password. So even if the debugger's enabled, you have to enter the right 10-digit password, which, you know, would take the heat death of the universe to kind of guess. I actually, you know, wrote a bunch of code on a Pi Pico to like re-implement the debug protocol, spent a bunch of time working on this, trying to get it to work. The nice thing is the PlayStation guys, this is the same processor that's used by the PlayStation 4, and so these were exploits that they discovered for this processor that we were implementing, although they didn't have a ton of great documentation on, like, all the nitty-gritty details.
Starting point is 00:17:01 And as we were trying this out, trying to see if my implementation worked and trying the glitching parameters, you know, just to kind of debug what was going on, I tried to send the debug commands, you know, not expecting to get anything back from the processor. And lo and behold, oh, we got responses back from the debugger. So it actually, it turns out they didn't disable the debugger. And, you know, I set up this great glitch loop to go glitch the password to try to figure out what was going on. And I wasn't really having that much success with it. And so I just went
Starting point is 00:17:35 to manually inspect the data and figure out what was going on. And I sent the debug password of all zeros, which is kind of like the default disabled state, just as like a test vector. And it turns out that the lock just unlocked with that vector. So they didn't disable the debugging, nor did they set a password. So we didn't need any glitching at all to get into this part. Speculate on how that happens. So you go to the trouble of... you're a
Starting point is 00:18:05 lock manufacturer, or at least a lock mechanism manufacturer, you go through the trouble of putting locks on your lock, and then you leave them unlocked. Is that basically what's going on? Well, and they think they did not read the manual for their processor to do things like disable the debug interface, which Renaissance has pretty good documentation on. It's quite nice. I mean, they had a way to turn out the debugger, and they didn't do that, which, you know, sometimes that happens, and they had a way to put the debugger behind a password. Yeah.
Starting point is 00:18:47 Which I've definitely done that one. But they left their password to be all zeros. Yeah, same as the launch coats for the nuclear missiles for decades. Exactly, yeah. So I wanted you to say that because I wanted to see Chris's face. I should have taken pictures for everyone. It was awesome. But when I heard you say this in your DefCon talk, I got weirdly angry.
Starting point is 00:19:16 Oh, yeah. Yeah, yeah. Who thought this was okay? What engineer out there said, yeah, that's good enough? I think you're thinking about it the wrong way. I don't think it was a, yeah, that's good enough. I think it's just general, forgive me, incompetence. I don't think they knew they did it.
Starting point is 00:19:36 They had to know that they wrote the password as all zeros. That's just the default. Oh, that's, yeah, that's just the unprogrammed state. Yeah. So they just pulled this processor and did nothing on top of what the processor does. And I think that's even like a tactical issue versus the larger problem, like that indicates, hey, they never, they didn't really go through a security engineering process, is my impression after looking through the whole
Starting point is 00:20:09 thing. You know, the, this, if we step back to think about the threat model of a safe, its only job is to protect the stuff on the inside from the people on the outside. And, you know, there's a safe system has two parts. We call it the outside part, the keypad, where you type in the codes and the inside part, the latch, which is the bit that unlocks the safe after the right code has been entered. And, you know, when I frame it like that and I say to you, hey, where should you store the codes? You might say, well, gosh, I'm going to store the codes on the inside of the safe behind all the steel. You know, sometimes I'll be designing secure products and be like, gosh, you know, it would be great if. our product was a safe or, you know, full of metal.
Starting point is 00:20:59 And this is a product that that's literally the function. But, but no, the codes are stored on the outside of the safe. And they get checked on the outside. And then they send commands to the inside, which are only slightly encoded to unlock the latch. Just like this very basic threat modeling failure. And then, you know, the not locking the processor is just like a follow-on effect of, you know, not really considering security very much at all, it seems. As a devil's advocate, I mean, I don't know how expensive these particular safes and
Starting point is 00:21:35 mechanisms are, but isn't there an argument, somebody could say, well, if you want, you know, this is to keep, you know, the casuals out. If you want a safe that's going to keep out somebody with, you know, embedded systems knowledge or actual safe cracking abilities, then you should buy the more expensive one. Yeah, and I think, you know, that at that end, the nice thing is UL has a certification program for electronic locks, and so does the European Union. There's a UL standard for what they call high security electronic locks, which is the standard that's required for you to use locks on pharmaceutical safes. And so luckily, these are certified to that standard for high security electronic locks. Well, devil's advocate loses.
Starting point is 00:22:25 And, I mean, you titled it Cash, Drugs, and Guns, which definitely brings up gangster vibes. But then you say pharmaceutical safes, and these are safes used to hold dangerous... Controlled substances, probably. Controlled substances worth a lot of money. And not in a very legal, and the reason you're doing this is to abide by laws. You have to put some things in safes not because you're trying to hide them, but because that is the correct thing to do. And yet these locks are certified to be fine. And yet you can... who's in charge here? I have notes.
Starting point is 00:23:17 Well, I mean, the UL standard basically covers a lot of good mechanical stuff and good mechanical test methods. And I don't remember exactly what it had to say about electronic test methods. It had some stuff in there like, okay, if I, you know, jam mains electricity onto the battery port, that doesn't open it. You know, the number of codes you can have, that sort of thing. But as far as what it had to say about, like, you know, cybersecurity, it was basically very vague. It kind of left it up to interpretation, like if you wanted to go do all the stuff we did, that would maybe in some kind of sense technically be valid, but there's no requirement to really do
any particular cybersecurity analysis. Mark, is that more or less right? Yeah, I think the UL standard does not, it was written based on the mechanical lock standard, and it really doesn't contemplate, you know, product security, embedded security in the way that, you know, we think about it today. Thinking about, like, we think about what we're going to design before we design it, and we consider what we're going to do. And we take mitigation measures. And we at least document that we thought about it and what we did. It's very mechanically focused. And they have things like, oh, it has to be a six-digit code.
Starting point is 00:24:50 And like James said, you can't... there's a technique called spiking the lock where you just apply very high voltage to the communication pins. And in older lock models, like more than 10 years ago, that might have just burned through the processor and opened the solenoid. And they say, oh, you can't be vulnerable to that. But this notion of modern embedded security is really absent from the UL high security safe standard. But the EU has standards that should be relevant. They have the Cyber Resilience Act.
Starting point is 00:25:27 And then I think ETSI EN 303 645, blah, blah, blah, Cyber Security for Consumer Internet of Things: Baseline Requirements. There are good standards on security by design in electronics. But those are not related to the locks? Yeah, the lock manufacturers or the lock standard creators don't have that incorporated in their standards. There's even the European one, which is EN 1300, that is a more comprehensive standard than the UL one, but it really similarly doesn't integrate, you know, the modern embedded security
standards, you know, like you mentioned, you know, all the work that the FDA has done to set those standards, you know, CISA and NIST, as well as, like, the automotive manufacturers. You know, the lock world is way behind, which is wild when you consider, you know, the purpose of a car is not to secure valuables, but the purpose of a safe is only to resist attack. That's its only purpose in life. Do they think that this isn't an Internet of Things thing and so therefore security is not relevant? It's not connected to the Internet.
Starting point is 00:26:49 I mean, yes, that part is true. What? Wait, say that again? Yeah, they have great models of this that are connected to the internet. Why? Why? Why? Why? No, it would be useful.
Starting point is 00:27:03 For like pharmaceuticals, knowing which safe got open today. And being able to track data and lot numbers, it would be totally useful. There's, I have a half a dozen ways to do that that don't involve connecting it to the internet. Scanning stickers? Or exposing it to Bluetooth. too. Okay. Look.
Starting point is 00:27:27 What would you suggest I put my drugs in? Like a little bag or something? So there are internet locks, but let's not worry too much about those; that can only get worse. What should they have done? Like, if I get hired by these people tomorrow, what should I do? And I think, you know, the first thing is, like I said, when you think about the threat model, right, the easiest win you can do is put the codes on the inside of the safe. Yeah, yeah.
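A minimal sketch of the split Mark is advocating, with hypothetical names rather than anything from Securam's firmware: the keypad outside the steel only forwards the digits the user typed, and the stored code, the comparison, and the solenoid control all live on the latch board inside the safe.

#include <stddef.h>
#include <stdint.h>

#define CODE_LEN 8u

/* Everything below runs on the latch board, behind the steel. */
static uint8_t stored_code[CODE_LEN];   /* provisioned at setup, never sent to the keypad */

static void solenoid_open(void)
{
    /* drive the latch solenoid; hardware-specific, omitted here */
}

/* Called when the keypad forwards an entered code over the internal cable. */
void latch_handle_entry(const uint8_t entered[CODE_LEN])
{
    uint8_t diff = 0;
    for (size_t i = 0; i < CODE_LEN; i++) {
        diff |= (uint8_t)(entered[i] ^ stored_code[i]);   /* checked inside the safe */
    }
    if (diff == 0) {
        solenoid_open();
    }
    /* attempt counting and lockout delays would also live on this side */
}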
Starting point is 00:28:05 Because at least your lock is not making the safe worse. You know, if you have to cut through the safe to get to the lock, the electronics, then you've cut through the safe. So your security has died. One of the tricky things about this is actually, safe codes are quite short. There are six or eight digits, depending on the standards that you apply. And I said, the highest security standards are eight digit codes. And even if you had like a hashing or, you know, B-Crypt or something like that, I did some math. If you took the processor on this, which is not a slouch and you set the work factor so high that it took 20 hours to do all the B-crypting to check the hash.
Starting point is 00:28:52 to open this, you could, with a 40-90, you could exhaust the entire space in like less than a day. So there's no hashing that helps. That the nice thing is the only thing you can do is put inside the safe, which is a really easy thing to do. And then all your problems are second-order safe problems like side-channel analysis or, you know, and other kinds of analysis where you're reading tea leaves over the data cable between the keypad and the bit inside instead of putting logic on the outside that does stuff. Yeah, what, I mean, what's the thought process of... Because they're lock manufacturers.
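The arithmetic behind "there's no hashing that helps" is easy to reproduce: an eight-digit code space is only 10^8 entries, so once the stored secret can be pulled off the keypad and attacked off-device, even a modest guess rate finishes in under a day. The throughput number below is a placeholder assumption for illustration, not a benchmark of any particular GPU or of bcrypt.

#include <stdio.h>

int main(void)
{
    const double keyspace = 1e8;            /* every possible 8-digit code */

    /* Assumed off-device guesses per second against the stolen hash;
       a placeholder, not a measurement. */
    const double guesses_per_sec = 2000.0;

    const double seconds = keyspace / guesses_per_sec;
    printf("exhaustive search: %.1f hours (%.2f days)\n",
           seconds / 3600.0, seconds / 86400.0);
    return 0;
}

At 2,000 guesses per second that works out to roughly 14 hours. The only parameter that really matters is that the keyspace is tiny, which is why keeping the code behind the steel beats any amount of hashing on the keypad.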
Starting point is 00:29:38 They make this one part. Yeah, and they can't integrate it that way. And then it's an on-off to a solenoid. And then if somebody else provides power to the solenoid, you only have to make the keypad as one single thing. But there is an argument like, okay, if I'm going to spend 20 hours doing something brute force, how is that different from, you know, getting a diamond-tipped drill press and just physically breaking in?
Starting point is 00:30:03 Oh, yeah, and that's where I said 20 hours. I mean, that's like if you entered the code and it took 20 hours for the safe to tell you your code was wrong. Oh, I see. That's what I was telling you. Oh, okay, okay, okay. And the reason I chose 20 hours is because the U.S. standard actually says that the code needs to resist, the lock needs to resist attack for 20 hours.
Starting point is 00:30:19 And that's basically them saying, hey, this can't be the weak link. We want the physical safe to be the weak link. We want you to say, I might as well get a diamond-tipped saw, not I'm going to bring my ChipWhisperer in here and attack this. Got it. Okay. That makes sense. When you were starting this process, did you, you know, get a ChipWhisperer and all the glitchy things and a couple of different JTAG units
Starting point is 00:30:56 and, you know, sit down prepared to crack this the hard way? It's funny you say that. I think, you know, there's some more history about how we got into this. I read an article in the New York Times about how there was a team doing January 6th investigations and they went to this person's house, and they had a safe from a company called Liberty Safe, a safe OEM that by default uses these Securam locks. And in the article, they said it was a kerfuffle between Liberty Safe and their customers because Liberty Safe, the FBI called Liberty Safe and said, hey, can I get the code to open the safe?
Starting point is 00:31:36 And they gave them the code and they were able to open it. And I read that and I was like, there's no way that that is implemented securely. Oh, no. And so we bought a different model of this lock. So this whole time, we're talking about the ProLogic series. We bought an earlier version of that called the ScanLogic series. And those are the much cheaper, like the very cheapest locks.
Starting point is 00:32:05 They have no screen. They're just press the button and they beep. And they actually have no logic in the keypad. All the logic is inside the safe. Yay. And we spent a long time analyzing that. There's a processor, this low-cost processor, and that one was so old that there was literally no way to read out the memory.
There's no debug port on it. We spent like several months developing a completely novel way to dump memory from that chip. We gave a talk at Hardwear.io about that chip. Really fascinating, really cool way that we did it. And we did do all that stuff. You know, we not only used ChipWhisperer, we used, you know, really high-end PicoScopes. You know, it was subjected to the best attacks that we could create, tons of analysis. And we determined that the only possible vulnerability was that the code they used had a non-constant-time compare.
Starting point is 00:33:07 So it would bail out of the compare loop early if it, you know, if your code stopped matching at some point. And then, you know, you can use some timing analysis and stuff. And we actually were unable to even get a proof of concept of that working at all. So the lower end models were a lot more secure. And that didn't work. But then that took us to the higher end models with the screen. And that's where we, you know, like you mentioned, you know, we opened it up. We did some recon.
Starting point is 00:33:39 We figured out that, oh, yeah, these are in PlayStations. And from there, it's easy because those guys are relentless. Can I go back to that? Why are the... Are these, like, support chips for PlayStations or something? What is the role of the... I think they're... I have no idea.
Starting point is 00:33:54 I think they're part of the platform management. So, you know, like the basic bring-up and all that stuff. But, yeah, because it's in the PS4, they have been relentlessly hacked. Yeah, well-known. By people who have lots of time and view cracking the PS4's security as the video game. And it having to be part of the system management means you can break interesting things.
Starting point is 00:34:21 Lesson. Let this be a lesson to your kids. Pick an obscure chip that is maybe 40 years old that people aren't using anymore. Nothing wrong with an 8051. The Liberty Safe and the code given to the FBI, that was probably using the I'm-a-locksmith-so-give-me-the-reset methodology? It's funny. It actually isn't. So the reset feature is actually a feature only on the high-end locks. What's going on on the low-end locks is actually something quite boring. The low-end locks only support two user codes. One's called the manager and one's called the user. And when you buy a Liberty Safe, the factory programs the manager code to something. And then they set the user code
Starting point is 00:35:15 to all ones. And in the manuals you get from Liberty Safe, they don't tell you about the manager code. They only tell you about the user code. So the way that they did that is not through any complexity. It's just this, like, management vulnerability. They set a code on it. They're a user on your lock and you just didn't know. That feels actionable, like, legally.
Starting point is 00:35:39 Like these days they say, you can write to them and ask them to delete it, and they will. Or at least they'll say they will. I have no reason to doubt it. If you know to do that. If you know to do that. I know they also changed their policy after that to require a subpoena. They made it, you know, a higher legal burden for whatever authority to get it rather than just basically asking in connection with a warrant.
Starting point is 00:36:09 Still, somebody has your code. Yeah, yeah. And hence why there was this controversy that the New York Times reported on. So, yeah. I still feel like the lock manufacturers don't really understand what their product does. It has been interesting to connect with a lot more people in the lock industry since we gave the talk. And I think I agree there's this, it's like people who are developing secure software in medical and, you know, cars. and all these other areas are on this,
Starting point is 00:36:46 are altogether, you know, mixing around doing good stuff. And the lock people are like on their own little island. You know, there's one of the things that, that was most surprising when we really started digging into this field is, you know, there's a long, proud history of people exploiting bad design and safe locks. And most of it is actually private. So there are these tools called the Little Black Box and the Phoenix Tool. And those are commercial products sold only to locksmiths that can actually unlock several dozen lock models that all have various vulnerabilities.
Starting point is 00:37:27 And this was kind of wild to me. Like, there is this long list of locks that not only have vulnerabilities, but are known exploited, because you can purchase tools as a locksmith that unlock all these models from all the major manufacturers over this long period of time. So it's not even like this is the first time it's been done. People have been breaking these for years and they have not gotten substantially better. That reminds me of the, I don't know if you've seen the Lock Picking Lawyer YouTube channel. Oh, yes. But yeah, this goes for all kinds of locks. And you can go to his website and buy, you know, for educational purposes, things that basically take the place of an entire pick set and automatically pick, you know, in three seconds,
Starting point is 00:38:14 most kinds of locks you have on a house or padlocks and things. Yeah. But those kinds of, those kinds of locks are in a different situation, right? Because the safe and the lock on the safe is an entire system that's designed to protect the interior space. Where a lock on a house, everybody realizes if somebody really wants to get in, they're going to break a window, they're going to, you know, apply lots of kinetic force to the door because it's meant to discourage, not completely prevent access. And I think one of the things to figure out is also, that's actually the same as a safe, right? Like, if you put a safe in a field, people are going to get into it. Eventually, yes.
Starting point is 00:38:55 It's part of this, like, layered defense for everything and, like, all security is like that. But my goodness, you know, I agree: they're designing products that are only supposed to be secure. It just blows me away that they are not applying the best-in-class security to this very constrained embedded system. I would argue they're not even applying basic security, given what you've said. It kind of feels like, you know, when we're doing medical products or whenever people are doing products for industries that have standards for this sort of thing, you apply a certain way of thinking about it. You have a secure product development framework. You have standards you're trying to meet that actually say something about cybersecurity. And it just feels like this is a case of just making something.
Starting point is 00:39:47 I think we've all been there. You make something and it's not going anywhere serious. It's not doing it. You don't think about it that hard. It's like, okay, well, it works. I forgot to disable the debug port. I don't know. I didn't even think about it.
Starting point is 00:39:58 If you don't have that checklist and that standard in place, it's very easy to just make something that fulfills all the visible requirements without having properly analyzed the things that you can't see so immediately on the surface. Okay. Yeah. 25, 30 years ago, yes. Now there are checklists. There are people who want to help you.
Starting point is 00:40:26 Not even, I mean, there are people who want to help you for money, but there are people who, I don't know, give away free information at DefCon. And there's a lot of people who will yell at you, apparently. But until you get yelled at, you might not know about those things. But you actually delayed this presentation by quite a while because you were disclosing to the manufacturers. And they, instead of saying, thank you, oh, my God, we will fix this as soon as we can, said, we're going to sue you. I, you know, we've got to be very precise. I think they never said they would sue us.
Starting point is 00:41:09 I don't remember what the exact wording was, but, you know, it was implied, perhaps. I believe it was something along the lines of, if you go public with this, we will sue you. Along those lines. Yeah, we will refer this matter to our counsel for trade libel if you choose the route of public announcement or disclosure. That's right. That's a little scary. It was. You know, we were trying to responsibly share with them,
Starting point is 00:41:40 hey, we actually reached out to both Securam and Liberty Safe in March of 2024 and said, hey, we're kind of looking into this. You know, we wanted to let you know and we'd love to be connected to the right technical people to disclose stuff if we find something. And at that time, we hadn't really distinctly found. anything, but we had definitely gotten deep enough into it that our spidey sense says, you know, there's going to be stuff here. And they were initially warm and they said, oh, hey, you know, thanks for letting us know, that's great. And then in April, we sent them a detailed technical disclosure about a bunch of things that we found. You know, we actually had some additional findings
Starting point is 00:42:25 that were, we didn't share at DefCon just because they were like not exciting. And You know, very soon after that, that's when, you know, they talked about, hey, we're going to, you know, they implied that they're going to sue us if we, if we talk about this publicly. You know, we were sure to make sure that we contacted them well ahead of the, you know, like kind of standard disclosure timelines and stuff like that. So they had plenty of extra time. And they, you know, we're not really forthcoming to that. And as a result of that, we actually got connected with the EFF at the year that DEFCon that we were, I think it was the year we were going to go. Another talk was given about vulnerabilities in lockers. So there are electronic lockers that are kind of like at gyms or things like that.
Starting point is 00:43:24 And they were, they at the endth hour got legal notices from. the company that they were disclosing vulnerabilities to, and the EFF stepped in. They have this project called the Coder's Right project that helps people in that kind of situation to communicate with the companies and provide representation through that, you know, just in the scope of talking back and forth with them with the manufacturer. And we got in contact with them, and they were just unbelievably fantastic and supportive through the whole process. And they helped us write letters and communicate with the company and write letters on our behalf, you know, as our attorneys to them. And eventually we got to the point where they were salty about it. But the EFF had convinced us and we had, you know, together gone through the responsible disclosure and talking that, hey, we should go ahead with it. Even though they at the time said still their position was, we will refer this matter to our council.
Starting point is 00:44:51 yada yada yada um and uh and only after we said hey despite you saying this we're going to go present did they actually um did we actually hear from secure m lawyers uh you know in july the month before august we were going to present and talked with them a little bit and uh you know ultimately uh before and after and since then you know we haven't had anything happen but um Yes, kind of a long, tense process. And so there haven't been repercussions yet. Right. I think one of the things that the EFF, you know, made sure to emphasize is you can't stop someone from suing you.
Starting point is 00:45:40 But you can take steps to protect yourself and to reduce that likelihood and to try and ensure. that they couldn't get very far if they did. And that's the sort of thing they helped us with. But, you know, I guess any time in the next seven years or however long it is, maybe we could get a nasty letter, I don't think it's going to happen at this point. And there do exist many other tools that are marked for certified locksmiths, but the tools exist and I'm sure they follow. into the wrong hands occasionally.
Starting point is 00:46:24 So what you did was cool, but it wasn't like the only thing out there. The big difference is, you're right. The big difference is we went public with it, and we wanted to get the word out there and spread it, you know, as much as we reasonably could, or at least, you know, get it out enough that it was likely that people who owned these locks would. would be able to understand the security profile, the real security profile of the device that they had. And, you know, just so they could make a more informed decision about whether they wanted to have that on their safe or not. As opposed to the locksmith tools,
Starting point is 00:47:08 which keep a very low profile, they don't, you know, really advertise outside of locksmith industry publications. They don't go to DefCon. and they don't, you know, ask the people who are integrating those locks, hey, did you know about this? They kind of, you know, this is interesting. We heard from a couple people in that industry that the M.O. there was kind of, if you were making this sort of tool and you found a vulnerability,
Starting point is 00:47:42 you would put it in your tool, tell the manufacturer, the manufacturer would kind of quietly fix it and then your tool wouldn't work on new versions, but that was about it. But as a locksmith, you're making money off of other people not knowing the vulnerability. You're the only one who can open this type of lock. And, I mean, there is a lot to be said for a lock's main job is to not be opened by people who shouldn't open it. but there are always going to be locks that need to be I mean I remember I remember talking to my stepdad
Starting point is 00:48:23 who was a truck driver and very into cars and I was worried that the car I was looking at didn't have great locks and he explained to me that the locks on a car are not really there to be great they're there for casual people not to walk up to your car and that it was okay. You actually wanted your car to be openable by a locksmith in case anything went wrong. And even for safes, there are times that you want to be able to open a safe theoretically.
Starting point is 00:49:05 I think I sincerely agree that, you know, we've tried security through obscurity for it. Turns out it's terrible. Security through obscurity just means only the bad people have it and the good people don't know about it. And security out in the open means that now we've leveled the playing field. And I found it wild. The whole, the industry-wide view that security through obscurity was best in locks and locksmiths and safes and stuff like that. And I think that that's holding the whole industry back in terms of making actually secure products.
Starting point is 00:49:47 I agree because I think that what you're saying, Alicia, is true. Sometimes you do need to get into a safe where you've forgotten all the codes. And they have tools for that, which is a big drill that drills for the safe and breaks the lock off. And if you can't stomach that, then maybe you. should remember your codes. I mean, usually it's somebody else's codes and somebody is probably. That's probably true. So, okay, so you haven't said.
Starting point is 00:50:20 You both are still working for a company, a professional services company that does engineering. So you haven't gone on a crime spree and you haven't been contacted by people who are willing to pay you a lot of money to open. and saves? No, we haven't been, we haven't had anybody. Well, actually, we've had a bunch of people reach out and are like, oh, that was cool. I'd love to get all the code and docs because, you know, did you share the GitHub repo with your Pi Pico? Or did you share the 20 lines of JavaScript that calculate the recovery code?
Starting point is 00:50:55 And I said, absolutely not. You know, the stuff is ripe, ripe for abuse. So we have definitely declined to share it with, with all the people. who reached out and said, hey, I'd love to get access to the GitHub, or where's the proof of concept, or all that kind of stuff. Mark, I think I understand why, but why aren't you just making this public? I mean, it's a... The way you did it made it clear that someone with a medium amount of familiarity with the art
Starting point is 00:51:29 could probably replicate what you did in less than six months. I mean, why are you gatekeeping, Mark? I think the potential for abuse for this, we felt, was extremely high because there's a huge install base for these locks. And because the techniques we discovered were so simple. You know, we wanted to make sure we shared as much information as we can about the bad engineering. So I hope that people watch the talk who make locks and are like, oh no let me go look at our system and they understand enough to do that but without sharing things like hey the exact constants that they use to compute stuff is x or gosh here's the exact series of
Starting point is 00:52:21 protocols or the memory map about how they save the codes you know those kind of tactical details I don't think add to the you know the body of security knowledge you know the stuff that people who are engineering products should think about. And the only thing it enables is, you know, people with significantly lower skill or significantly lower effort investment to go, you know, do something on a nefarious spectrum. How do you feel about Flipper Zero? It's interesting you say that because all the stuff that I see on Flipper Zero is actually and I don't, I'm not like, have a great deep context about it, but it's mostly taking stuff that was already public. You know, like, hey, here's how we can troll people by opening their Tesla port covers.
Starting point is 00:53:14 You could already do that with a bunch of other software-defined radio and running Python and stuff like that. It's just made it way, way, way easier. So now it's like a zero-skill thing that people can just do. And I haven't, and there are probably people out there who are doing this, but all the stuff that I've seen from it are not like serious security research. It's more taking serious security research and kind of productizing it. And I think I haven't heard anything that makes me think like, oh, man, this is terrible. It's mostly, I think, stuff that's pretty far down the pipeline and maybe boosting the, boosting the visiting. of it. So I don't, I don't know that I, I hate that it exists, but I definitely don't see it as
Starting point is 00:54:03 like a serious security tool, or I don't know a perception that's a serious security tool or something that's used on like the, the research creation side kind of thing. And boosting the visibility of the lack of security is probably a very good thing, even though some of these tools have the potential to lead to bad things? Yeah, I agree. I think if you're doing a good job, you know, people who are discovering those vulnerabilities are disclosing it to the right people. And, you know, they're making good faith efforts to update.
Starting point is 00:54:47 You know, Tesla is a great example because they can update their cars all remotely. And they certainly had, well, I don't know the back. background on the whole thing. But I suspect they had plenty of opportunity to do that to keep that from happening. And, you know, in that case, it's probably a good thing to push people to make the right changes. So James, you mentioned you bought some locks on eBay, that eBay vendors said they couldn't open. Yeah. Does eBay also sell locked safes? You know, I haven't looked for locked safes on eBay, but I do, there's a like public surplus auction here in Arizona that I browse idly because they had something good, you know, once two years ago and I'm just chasing that high.
Starting point is 00:55:33 And, you know, it's not infrequently locked safes come up and I always have to zoom in and see is it a secure RAM pro logic? And, you know, maybe speaking to the true market share so far, it never has been. But I always check. If you saw one of these in the wild, like you walk into your favorite sandwich shop and see they have a lock because of the kind you know how to open. Would you ask the clerk if you could just try? I wouldn't. I think Mark probably would. Well, it's funny.
Starting point is 00:56:11 One thing I desperately wanted in the DefCon presentation that we didn't get. is I wanted to show a video, not of a demo of us opening a hot pink safe on stage, but a video of me opening one at a dispensary that was full of millions of dollars of cash or opening one at a CVS. Yeah. But you might be surprised to know that, you know, if you ask people, hey, can I show how we can pop open your safe full of cash or drugs? They're not really excited about that opportunity.
Starting point is 00:56:46 So we didn't end up doing that. But you still could. Now that you've done this, do you have, what's the next New York Times article you're after? We gave a presentation at Hartwryo last year, which is kind of like a check-in where we developed some novel, or we kind of somewhere between develop some novel and expanded the scope of some previous STM-32 exploits to dump code. and we develop those because we've been working on and off on hacking the parts of fridges that implement DRM for water filters. So that's kind of the next thing in our pipeline that we're working on. DRM for water filters. It seems a little safer.
Starting point is 00:57:34 Not as likely to have, yeah. You never know how organized crime come after you. A little safer for that, sure. So could you mention something about the, this was Hardware.com. Is the presentation online? Yeah. So we've talked at Hardware I.O. Three times in a row now?
Starting point is 00:57:58 James. I've been there three times in a row. You've been there too. That's right. And so, yeah, they, it's, we go to DefCon and other places and they're not hardware-focused, you know, embedded hardware. They're all about like, hey, we exploited Excel or access or something. I'm like, oh, man, luckily, my products don't run that stuff. It's the best place for people who are working in embedded security and hardware security. And so, yeah, we gave, have given some talks there,
Starting point is 00:58:29 and they're all available on YouTube and a bunch of other really talented people. So you can definitely go, I definitely encourage you go check them out. And I'll make sure to link to your videos and the site, which is hardware, H-A-R-D-W-E-A-R. So basically hardware as in wearing hardware. Yeah, I always thought it would be about wearables or something. I did. It's not. Yeah.
Starting point is 00:58:57 I will say if you want to like see more of the hardware hacking type stuff at DefCon, shout out to the car hacking village. That place is cool. I know in the past, Mark, you personally have given advice on our Slack channel, on the Patreon Slack channel, about getting better at security. Do you want to share a little bit about that? I guess both Mark and James, how do I get better? I think the nice thing is lots of people have been making lots of great content about getting better. So there's the Cyber Resilience Act and NIST, which has been
Starting point is 00:59:40 helping to lead the charge on making security for general devices. And how do you think about it and how do you do it? And, you know, how do you get started? Like, what's the first thing that you do? And they have some great documents. The NIST Internal Report or NIST 8259 is their standard that has checklists for, you know, the two parts of security, which is building the device, the technical checklist. And then the part that everybody forgets about, which is the postmarket, you know, what do I have to do after it's in the field?
Starting point is 01:00:16 I can't just kind of chuck it out the door and forget about it. And those are actually the standards that were developed in conjunction with the U.S. Cyber Trust Mark standard that is designed to set, via certification, kind of like a baseline level of security for consumer IoT devices. Yeah. And I'd say, you know, following all the standards, very important. Cross all your I's, dot all your T's, and don't skimp on any of it. What I like to think about is also just, like, security at a product level, thinking about, you know, it's threat modeling, but it's also how do those threats apply to, like, the physical configuration of your system if it is an embedded system. Mark, Mark likes to call this system-theoretic threat analysis. And, you know, because with something like the Securam ProLogics that we exploited,
Starting point is 01:01:17 it didn't come down to buffer overflows or side channel analysis or gadgets or rops or anything like that. It was just that it was designed in a way that did not lend itself to secure. and, you know, no matter how good the implementation of that design had been, it's still worse than, you know, putting the codes on the inside of the safe, having a debug port that can be locked in a more secure manner or locking it at all. So it's important to, like, because if you start by thinking about it correctly, then even if your implementation sucks, you still probably have way better security just by organizing. your data and your hardware in a way that places what you're trying to protect as far away as possible
Starting point is 01:02:16 from who's trying to attack it. I worked with a guy who would say that one of his main goals in his career was not to end up with his face on the cover of Wired as being an idiot. And it seems like a low bar, but, and I feel a little bad because I know that the person
Starting point is 01:02:36 who worked on the locks security probably didn't have time or was just out of college or, I mean, there are lots of reasons. But whoever left for the password as all zeros and didn't turn off the debug port, kind of does deserve to have their face on the cover of wired is don't do this. I mean, maybe a cartoon of their face. I think it's more about organizational engineering rigor. Like, yeah, everybody needs to take some responses.
Starting point is 01:03:09 But as you say, I can easily imagine somebody fresh out of college, somebody who had never worked on a security product before. Who didn't know chips had security? Didn't know chips had security. Had never gotten yelled at yet. And, you know, if your engineering department is not having the protocols in place to have somebody review it, to do that fret modeling, It's, you know, these are the kind of failures that I like to call it a team effort, right? It's hard to identify any one person. Or maybe we, you know, maybe if we knew their work chart, maybe it actually is just one guy and he's not very good.
Starting point is 01:03:51 But, you know. I'm going to imagine it was coded to specification and the specification was just very bad. Exactly. So what I'm saying is just be a group picture on the cover of wire. Yeah. Just get that whole team down. Company logo. Thank you both for talking to us. Mark, do you have any thoughts you'd like to leave us with?
Starting point is 01:04:14 You know, I think sometimes people see security as this asymptotic thing. You know, if I work on it for infinitely long, people who are smart enough can go break into it. And the nice thing about asymptotic curves is the opposite is also true. There's the 80-20 rule. If you do a little bit of effort, you're going to get huge returns. And if you do some, even if it's not perfect, even if it's not awesome, you're going to be way better off than doing nothing. And James, do you have any thoughts you'd like to leave us with?
Starting point is 01:04:46 For as much as we've kind of ragged on Securam in this presentation and what we talked about today, I do want to bring it back to, like, the standards. Like, yes, I think they should have built a better product. But they're also not wrong when they say it complies to the UL Type 1, whatever the number is, for high security electronic safe locks, and both companies and consumers should be able to trust that that's on there. It means something. So it's, you know, a team effort, like I said, but that doesn't end at Securam's front door.
Starting point is 01:05:26 It's a good point to make. Our guests have been Mark Omo, engineering director at Marcus Engineering, and James Rowley, senior security engineer at Marcus Engineering. Thanks to you both. It's good to talk to you. Thank you. for having us. Thank you to Christopher for producing and co-hosting.
Starting point is 01:05:43 Thank you to our Patreon Slack Group for their encouragement. And thank you for listening. You can always contact us at show at embedded.fm or at the contact link on Embedded FM. And now, well, there are a lot of security quotes and I just couldn't choose. So let me tell you about one of my new favorite creatures. This is a quote from Robin Wall Kimmerer in her book, Gathering Moss. The water bears simply shrink when desiccated to as little as one-eighth of their size, forming barrel-shaped miniatures of themselves called tons.
Starting point is 01:06:26 Metabolism is reduced to near zero, and the ton can survive in this state for years. The tons blow around in the dry wind like specks of dust, landing on new clumps of moss and dispersing further than their short, water bear legs could ever carry them.
