Embedded - 500: Nerding Out About the Ducks

Episode Date: May 2, 2025

Komathi Sundaram spoke with us about her enthusiasm for tests and test automation. We talked about the different joys of testing vs. development, setting up CI servers, and different kinds of tests including unit, hardware-in-the-loop, and simulation. It may sound dry but we had a lot of fun. Komathi's site is TheKomSea.com which hosts her blog as well as contact info. She will be speaking on automated hardware-in-the-loop test processes at the Embedded Online Conference.

Nordic Semiconductor has been the driving force for Bluetooth Low Energy MCUs and wireless SoCs since the early 2010s, and they offer solutions for low-power Wi-Fi and global Cellular IoT as well. If you plan on developing robust and battery-operated applications, check out their hardware, software, tools, and services. On academy.nordicsemi.com, you'll find Bluetooth, Wi-Fi, and cellular IoT courses, and the Nordic DevZone community covers technical questions: devzone.nordicsemi.com. Oh, and don't forget to enter Nordic Semiconductor's giveaway contest! Just fill out the entrance form, and you're in the running. Good luck!

Transcript
Starting point is 00:00:00 Elecia White Welcome to Embedded. I am Elecia White alongside Christopher White. Our guest this week is Komathi Sundaram, and we're going to talk about testing, but we're going to talk about testing as if it was really exciting, because Komathi thinks it is. Christopher White Hi, Komathi, welcome to the show. Komathi Sundaram Thank you, hi. Hi, Chris, hi, Elecia. Could you tell us about yourself as if we met,
Starting point is 00:00:31 I don't know, at the Embedded Online Conference, if it was real and, if it was in person and had a table where we met for lunch? Yeah, I'm pretty sure it's real. It's not my imagination. It is real. I think it's just a little bit virtual when you're recording for me.
Starting point is 00:00:50 I'm Komati Sundaram. I'm a principal test engineer, predominantly working in embedded software testing side, and definitely lots of automation. I did start my career as a software developer, but then I found testing more fun because I'm a curious person. So when I found something big, a big bug, I almost felt like, okay,
Starting point is 00:01:14 it's worth investing my passion into testing. So yeah, that's how I ended up at Embedded Online Conference, because if they invited me and accepted my application to talk about Embedded software testing using hardware in the loop benches. Yeah, that means I did a good I made a good choice We want to do lightning round where we ask you short questions and we want short answers and if we're behaving ourselves We won't ask why and how and are you sure are you ready?
Starting point is 00:01:42 Yes Should I be nervous? No. Probably not. Favorite animal. Oh, that's an easy one. That's there in my logo. Hammerhead sharks.
Starting point is 00:01:55 Favorite place to be a farmer. Oh. I mean, Argentina, although I spend the most time being a farmer in Costa Rica, because you get to drink wine in Argentina. But not in Costa Rica? No, you're working. I mean, I was in the middle of nowhere, so there was no wine. Complete one project or start a dozen?
Starting point is 00:02:22 Yeah, I prefer a balance. I don't think I've ever worked on one project or a dozen at the same time. Like, I'm not extreme, so I would say two to three projects. I know it's not in the option, sorry. What is the most objectionable material you've ever used to construct a house? Oh, horse manure. Ha ha ha.
Starting point is 00:02:43 Ha ha ha. Look, you use what's on hand, I guess. Yes, absolutely. I mean, you just have to mix it with sand, dirt, and some hay so that it all sticks together because, but basically the horse manure is the sticky thing that holds things together. That's not going to be the show title. No, I can see your cogs turning. I'm not doing that.
Starting point is 00:03:08 Favorite fictional robot? Wally. It's such a cute robot. Do you have a tip everyone should know? Yes, with respect to, I mean, of course, testing. I would say let the systems break early, like find ways to break the system early so you can fix things faster, cheaper, and smarter.
Starting point is 00:03:33 This is the first time that break things fast actually really works for me. Well, the idea is to find out what's going to break, not break it on purpose. No, I don't know, right? No, I agree with you. I'm agreeing with you. Well, when you do that as a job, then you will have to break it on purpose. And I'd like to thank our show sponsor this week, Nordic Semiconductor.
Starting point is 00:04:02 Nordic has a vast ecosystem of development tools. It helps us reduce our development cycles and accelerate the time to market. For example, the NRF Connect for VS Code extension provides the most advanced and user-friendly IoT embedded development experience in the industry. Nordic's Developer Academy online learning platform equips developers with the know-how to build IoT products with the Nordic solutions. Nordic's DevZone brings together tech support and a community of customers to provide troubleshooting and assistance on technical topics. Visit these key websites to learn more.
Starting point is 00:04:39 NordicSemi.com, academy.nordicsemi.com, and devzone.nordicsemi.com. And to thank you for listening, Nordic Semiconductor has a contest to give away some parts. Fill out the entrance form and you're in the running. That link is in the newsletter and you moved into testing. How did that come about? So it was back in India, which is where I grew up. I was working for Motorola testing telecommunication devices like huge boxes. Then all of a sudden, all the test engineers were laid off and I stepped into doing a system and integration testing
Starting point is 00:05:33 of a feature for another developer. So we started doing peer to peer testing and that's how I figured out that, wow, like this is so fun. I get to see how this feature actually works end to end. And it just fascinated me when I found a bug and the joy that the developer had, believe it or not, because maybe because I was a developer to developer testing.
Starting point is 00:05:58 So that was a moment. It was probably in the middle of the night back in Bangalore in India. Yeah, that's how I switched. I think I wanted to stay curious, I bet. And then rest is my career. The joy of finding the bugs, does it persist? Not when you're doing something repetitively. I think that's why I started leaning on automation. If I'm doing something three times
Starting point is 00:06:46 and I'm not having fun, it's that constant pulse check. Are you having fun today? What did you do today, interestingly? Like, what did you break? What kind of a new bug did you find today? So that's the joy. And if you don't get that, you start looking towards automation. Yes, absolutely.
Starting point is 00:07:01 I think I did start manual testing, because that's how I figured out what is testing, first of all, before I started solving problems with automation. But when you start developing automation, you're back to developing lots of lines of software. Yep. But you stay a test engineer. How does that work?
Starting point is 00:07:26 Yes, that is a very interesting question and I love that question. Sometimes it is true that automation ends up having more code than the firmware code itself. Then you would get into the problem of how do we maintain this? And that's exactly why I follow this concept called Unified Testing. What it means is that it uses most of the object oriented principles such as like can you write a module that can be reused. So you just apply all the abstraction layers, design your objects very well with relevant scalable concepts and not duplicating the code all the time. So if a test script has like a thousand lines that definitely throw it away and start again,
Starting point is 00:08:18 is my motto. Who tests the test scripts? Oh, we do test the test scripts too. Where do you stop? So, okay, I think you can keep writing tests for tests too, but I think I usually find a balance. If I'm adding a code, like I said, when you are using inheritance, abstraction layers, base concepts, then when you're adding code to a layer that affects like, I don't know, many, many
Starting point is 00:08:52 different tests, different types of tests that is written inherited from that particular Python class, let's say, then you need to think about, okay, how do I protect all these thousands of tests that are sitting on top of this, like, a class? Then that's when you add a gate to even test PRs. So you got to pass this gate before you get into that class that could affect literally all the projects tests, like you release thousands of tests. So yeah, you need to create your configuration of what layers get tested, what layers don't need testing for the tests. Okay, let's do a little bit of role playing. Let's say I'm a curmudgeonly engineer who has reluctantly but diligently done manual
Starting point is 00:09:49 testing for years. Say it's an inertial navigation product, so it's complex. And testing requires kind of a nuanced understanding of the system. My boss went to an automation testing conference and wants me to add that to the list of things he can say about our product. How do I even begin? I mean, I know you talk about hardware in the loop testing and unified testing, but where do you start with somebody who doesn't have, I mean, where manual testing has been
Starting point is 00:10:22 the right way for so long? Well, I beg to disagree there. Can you explain to me that word you used? I can't even say the word. What is that? Cranky. It means old and cranky and set in their ways. Christopher, you should explain it. That's mean. Okay.
Starting point is 00:10:46 This is when I usually use the joke like, oh, English is not my native language. No, no, that's a, and spelling it is even more challenging. Yeah, it's got all the vowels. Yes, like there's no way I'm going to attempt to say that word. Maybe I'll practice it. You can use cranky instead. There's no way I'm gonna attempt to say that word. Maybe I'll practice it. You can use cranky instead. Yeah.
Starting point is 00:11:08 Grumpy, jaded. Cynical. Burnt out. Okay, that's when I come in. So my approach is this, anything that has been done the same way always, and it's almost like humans' ability to not accept changes or also like I like this familiar way of doing things,
Starting point is 00:11:31 which is manual testing, even though it takes about one hour to run like one test manually. And then I do that every day, like it's a muscle memory or something. So what I would say is my approach is first of all, I would understand the workflows and would pick the workflows that are high value, high impact. So basically that could be my sanity test. But I do create incremental way of adding automation, automated tests. Like I said, it's a lot of code.
Starting point is 00:12:05 So you, even for automation, you need a proper design. You need a proper set of modules, shared modules. So that takes time. So what I would do is first get my hands dirty, create like a strategy of how I'm gonna approach this. Is it really worth automating? Is the value too high to automate? So I would assess a few things.
Starting point is 00:12:30 And then I usually come up with an automation architecture that would have incremental points of progression. So what that means is that you're giving an opportunity for the developers to kind of like catch up with you. Like all of a sudden you just add like 10 tests in front of their PR. That is going to gate them. They're going to be surprised.
Starting point is 00:12:47 Like why? I used to be able to merge my peers all the time. And you're supposed to do manual testing and then tell me like a month later if the software is good or not. So what I would like to do is like, pick those high value tests, automate them, show them the magic of catching the bugs
Starting point is 00:13:06 as soon as they emerge and then get on their side. Like I'm talking like a test engineer who's testing LCS code right now, by the way. I'm not talking like a developer. Sure, sure, sure. I can expand more, but that'll be where I start. One of the things, continuing my role as cranky engineer or cranky team lead, I'm happy my team uses git
Starting point is 00:13:35 instead of renaming folders with version numbers. That's the level of automation and technology we have. And so one of the first things that we'd need to do would be to set up a test server, which seems like a lot of work to set up and maintain. Is it or does it just seem that way? Well, I think since I have done it so many times, I think it's become a muscle memory for me that it doesn't seem like an effort. But I can see why it's an effort if you don't have someone that is kind of like sitting there and testing things and setting up all these infrastructure. I
Starting point is 00:14:14 think I would again start with small incremental steps, like can you add some validation checks for the environment so that I know like the developer actually trusts and finds the very high-value failure modes rather than lots of noise. You need to focus on more, what is the signal-to-noise ratio? Is it even worth it? Or am I just all the time debugging failures and errors that's absolutely not adding value to me? So again, this is where I would focus on what are the high value failures that I need to catch early. I would add validation checks. And I think you
Starting point is 00:14:57 mentioned about, you know, like even for PR, like I think I mentioned about PR testing. So for PR, like I think I mentioned about PR testing. So here is where I would find like, how can I quickly break the system? So I'll come back to breaking systems early. What kind of things I can do that I'm allowed to do on this particular feature as per the requirements, let's say, to find those issues. And then I think, again, having an architecture
Starting point is 00:15:23 to build those test servers that are easy to bring up rather than like, oh my God, it's such a complicated, it's going to take like one month of five engineers, then I guess you're definitely going about it in the wrong way. I think having a solid foundation of an architecture, well thought design that can scale and actually does not have any maintenance burden, anyone would sign up for that, right? Yes, yes, so much yes. But I mean, okay, so tactical, where do we start?
Starting point is 00:15:55 I mean, is it, we're using Git, does that mean Jenkins is the right answer or is that a, it depends sort of? If you're using GitHub, there's the GitHub has I don't the action stuff where you can Run things on PRs, right? I have never done it Yeah, you can yes, absolutely. So I'll tell you like let's say let's since we are using GitHub
Starting point is 00:16:19 So let's just plug in something whatever it is. You want to plug in Let's say you want to take a GitHub PR and you just want to like comment saying test this. I'm telling you test this. So you can just attach a comment to a PR to something internally to trigger tests on either Jenkins, whatever that you pick. I do think that the CI infrastructure tools like Jenkins, whatever that you pick. I do think that the CI infrastructure tools like Jenkins, Bell Guide, and there's like, even I heard about Beetle Box or something like that.
Starting point is 00:16:53 There's one more fun CI environment for Hill testing. Just connect anything, but I think one thing that I recommend is don't write like a thousand line of CI code and then you don't know when things are breaking. This is where the logic comes in where when you have a PR configure it so that what files do I absolutely need to test for relevant test like failures to catch the relevant failures and what kind of like PRs must definitely go through like a lot more heavier testing. So you need to find what are those meaningful failures and group them.
Starting point is 00:17:31 And then you also need to think about how long to test for. Let's say if you're testing, if you're pushing a PR in GitHub and you have a simple mechanism to trigger testing on a hardware in the loop benches, then you need to know that the bench is available. You can't just trigger a test like, oh, I don't know when the bench is available or not. I'm just going to go through the usual CI commands and kickstart a test. And then later, like one hour later, it says, by the way, bench is busy. So I guess this is what I would come back to it, right? Which is sit together with your developers, think about what type of failures you want to catch for those PRs. Think about the time.
Starting point is 00:18:14 Like how long do you think it's like, I would ask you, Alicia, if you submit a PR, how quickly do you want the feedback for your PRs? Time-wise? See, I think that's really an important question because if it takes more than an hour, I get pretty frustrated because I forget what I've done. I'm just so noodle-headed sometimes. But I remember when Chris worked at Cisco, you had— 24 hours.
Starting point is 00:18:40 24 hours. And by then, the tree had all changed and you had to do another 24 hours and I never committed anything It was just so awful and I quit I had a I had a commit pending for eight months and then I gave up I'm exaggerating but the static these weren't hard to run the loop So I do want to remind people were talking about a little bit of a narrow Version of testing which is hard run the loop, which adds additional complexity. But she's mentioned it. Yes, that's true.
Starting point is 00:19:07 No, I have done, I mean, let's talk about testing, seriously. Like I think at the end of the day, hardware in the loop is just one more layer we added. But it's a constraint, right? We're talking about testing in general. I can even remove the hardware in the loop right now. Yeah. And so it's about figuring out what you want to test.
Starting point is 00:19:26 And how much pain you're willing to put up with. Oh, see, that's the thing. I think that's where the focus is for me. I don't think, as a test engineer, I've done a great job if I don't come across as pain. Right, right. You know, adding the test. I actually, like, oh my God, I cannot wait for results to come back and I know it'll find real bugs for me. I will not be looking at debugging failures of how Jenkins went down in the middle of
Starting point is 00:19:53 the testing, right? Yes, I hate that. I had this big test suite, it was simulation. And for some reason, it would run perfectly on my computer. I would push it to Jenkins, it would run perfectly on my computer. I would push it to Jenkins. Jenkins would barf all over it. It took us like a week to figure out that there was some driver that was incompatible on the Jenkins system. And I'm like, that is not what I want to spend my time debugging.
Starting point is 00:20:18 Did you try to run a Windows app as part of Jenkins or something? It was running on a Windows server. I understand. You were on hard mode. I stand, yes, but I didn't know I was on hard mode. It's not fair to say you're doing something difficult when all you're doing is flailing about, hoping you get it right.
Starting point is 00:20:41 Sorry, lately my work has felt a lot like I'm animal at the drums, just randomly flailing about and sometimes good stuff comes out and sometimes I hit myself in the head with the drumstick. So these are all great because that's why I have a job. So I do think that this is why I keep going back to create your foundation, how you want this thing to work. What are the kind of KPIs you want to hit? The KPI could be like, how many times do you annoy your developer by failing a test, but
Starting point is 00:21:16 the test failure, whether it is a real failure in the firmware or it's a real failure in the CI or it's a real failure in the test. If the test is broken, is the board broken? So if you're doing a guesswork, then that means that's really bad. This is why I would say the thing that you talked about, Alicia, is that if Jenkins runs on my system, fine. Why is it not running the same way in CI? This is the first problem I always solve when I go into PR testing using continuous integration. So I would first of all figure out what are all the dependencies that we need to have for system under test. Like, forget about the hard run loop for a second. What is this software you're testing?
Starting point is 00:21:58 What are the kind of dependencies you have? What are the kind of configurations it needs to be to say, let's go. When you do a Go button for, let's go test, this is a clean space. I would typically have a Docker environment. I would deploy everything in that, which means you run the same way in Jenkins, and then you run the same way in your local computer.
Starting point is 00:22:20 That's like containerizing all the dependencies and configurations you need to test your software. And then you build on top of that. Again, automation has many, many layers. I don't want to get into that. But that's where the problem is. A lot of us just do whatever we want on a computer and then install everything and then test. And then when the same test runs in a clean environment like CI, it's a common frustration
Starting point is 00:22:50 developers have, believe it or not. Yeah, no. Although as I was watching my CI, my continuous integration do its thing, and it was reinstalling things it had reinstalled many times before. It did occur to me that as we have more processing power, sorry, total tangent, as we have more processing power we make computers do the same thing over and over again more often. Like when the computers rise up, when the robots attack, they're going to be, this is for all the years of boredom.
Starting point is 00:23:27 We don't have to worry about that because we can't make computers anymore. True. Okay, sorry, rant over. Let's talk about hardware-in-the-loop testing because I think the whole, how do you set it up is just a hard problem and you have to accept it's a hard problem. That's a total mystery to me. I always feel like I have experienced hardware-in- hardware in loop testing a little bit, but generally, well, I guess at Cisco, we ran the software on the hardware and so all testing was on
Starting point is 00:23:55 the hardware, but they were big computer-like routers. They could do debugging on the hardware. Yeah, but for like small embedded devices, it's always been like, how? What if I have a screen? How do I interact with it? There's all these little, it's a device. How do you do this? I had one device where we piped in the signal that we were making it look for.
Starting point is 00:24:14 And then we would listen on its output, let's call serial port, output port to make sure we got the right signal characteristics. So like a sort of a black box. Yeah, it was a black box. Does that count as hardware in the loop testing? Yes. But I think it's interesting sharing some of the Cisco experience. You know that the first time I started doing
Starting point is 00:24:43 hardware in the loop testing, I did not know what was hardware in the loop testing. That was my first day at Ember software testing. So I understand how people feel about hardware in the loop testing. I was one of them. Could you define it for us better? Yes, this is my favorite part, which is how do I make it fun for someone who has never done hardware-in-the-loop testing.
Starting point is 00:25:11 So I've done this with my friends, I've done this with my parents. I always tell them this joke, which is like, imagine, you know that I test self-driving cars, right? I can't park a car at my desk to test self-driving cars, right? I can't park a car at my desk to test self-driving cars. It's sitting somewhere else. And that's when the hardware in the loop benches come in to do that testing. That's the simplest way I can explain to my friends
Starting point is 00:25:37 and family. That makes a lot of sense. I've had some seriously weird things on my desk, but once we get to the testing phase, I usually want them off my desk. I've had 50 watt lasers on my desk, which I tested on my desk. Yeah, and then at some point you want them to be over there or to be tested differently. So you don't burn holes in your monitor. Or the wall or yourself, all the things.
Starting point is 00:26:01 But yeah, there's definitely different kinds of hardware lend themselves to. Like I've been working on a UAV and doing a lot of testing on the ground without flying it, various components and stuff, because how do I do hardware in the loop testing of something that has to fly? And I think probably autonomous vehicles are similar situation, right? I did a whole autonomous vehicle project and we just used simulators the whole time. Oh. But I don't. Right. Well, that's okay. I think we should put a pin in that and come back to that. Okay, we'll come back to simulators.
Starting point is 00:26:35 So, hardware in the loop testing is all about simulating some parts of it, but then the hardware, the hardware and the tests, which I sometimes call it device and the tests. Most of the time I call it DUT. So that's where you're going to deploy your firmware and then you're going to like interact with those interfaces that the device has from your test conducting computer where you could be running your tests to kind of orchestrate like a real world scenario. For example, like imagine I have to play, I have to connect over a Bluetooth and then play music and then record the music and see if the music quality is good. That is end-to-end automated testing using hardware in the loop. And you have to do it with an NFC touch. So what I did in the lab, for example, is that I would have the device under test with the NFC tag extended and then I would attach the touch NFC touch thing on top of it and then tied it with the tape, believe it or not. And then I would initiate the test from the test conducting computer, which will trigger
Starting point is 00:27:46 a Bluetooth connection initiating the signals on the NFC tag, which will internally send signals to the Bluetooth chip saying, okay, I got a tap, go and attach this to pair this device. And then you would have the on the other side, you would connect the audio input output to your computer so that you can play that once it's connected, then you play the audio from your computer through the device because it has an audio chip. And then on the other end, it'll loop back and come back to your computer. And then you record it and say, okay, I sent a sine wave. I got a sine wave back, yay.
Starting point is 00:28:25 The test passed. So that's a simple, that's a fun test that I have automated using hardware in the loop benches and then supporting devices like NFC staff, power supply, JTAG, whatever it is that you need to get your job done. And when you say benches, you mean one of these systems is a bench, the whole?
Starting point is 00:28:46 Yes. Oh, that's like our own jargon. And so when you say you have 10 benches testing cars, you have 10 car computers, not 10 actual cars. No, it could be the computer, it could be a sensor, it could be the cleaner for the sensor, right? Like it could be anything, but anything that is like, I'm focusing on this, the center of that test bench is a DUT,
Starting point is 00:29:18 and if I have something else connected to it and then connected to a computer to kind of orchestrate a situation, like a scenario use case, that's a hardware in the loop test bench. That bench has one DUT, or it has multiple DUTs. You can control all of them from your test conducting computer. It's test conducting computer plus device under test plus any other accessories you need, like power supply or some kind of an attenuator for example,
Starting point is 00:29:46 or dspace. You mentioned the NFC tag to send data into the device under test. Do you ever use just analog signals? Yes. I think we use it when we are... I mean, it was like 10 years ago. Back in the UK, I used to work for Cambridge Silicon Radio. So, wait. I don't think I particularly have used analog signals.
Starting point is 00:30:22 Sorry, I'm a little rusty. I just wondered if you had tools you used for that, but if you aren't using that, then the answer is no. Like the analog signal generators and things like that. Yeah. I have used them, definitely. I've used them. I'm just trying to recollect a use case where I could just,
Starting point is 00:30:40 like just like how I talked about how I played music we end to end. Right, right. When I was testing audio products. That definitely counts. Well, and I was thinking like we've seen some heart monitors. Those are very good for you. You shove in an analog signal and you expect to get the Bluetooth out. Oh, I did that.
Starting point is 00:31:01 I plugged one of my synthesizers into the front end of the EEG. But that was a one-off engineering test. Yeah, that was me testing at my desk. And if you wanted to build a test that continued to test, you would need a signal generator. And it would be better if that signal generator was controlled via Python, probably. Absolutely. I've done this, but it's a second kind of... Oh, everybody's... Do so many things you forget.
Starting point is 00:31:31 There are a lot of devices that require some sort of human interaction. You mentioned the NFC. And I have seen places where, you know, in a device that has a UI or something, whereas people have made little robots... Do you remember at Fitbit people have made little robots. Do you remember at Fitbit where they made little finger buttons and it just went down and pushed the button 9,000 times? I thought there was somebody doing that also when we were at Maker Faire once.
Starting point is 00:31:55 They had a whole framework for button, for it was a little robot button. And it could, you could script that to push buttons on, you know, a screen-based UI or something like that, or physical buttons on the side. Have you done that? And are there other ways to do it? I guess I've seen things where you can kind of, I think it fit that we did this too, where you bypass the physical interface and you could inject UI commands. It's like, okay, pretend a button was pressed.
Starting point is 00:32:22 But then you've got code running on the device. It's not simulator code exactly, but it's like. Sort of circuiting. It's like little scriptable things that you can access from testing as if you had pressed a button. Command line, I love the command line. I feel like there's a continuum
Starting point is 00:32:39 between leaving the device as it is, but then also kind of leaving hooks in for testing. Oh man, like you have asked one of the most interesting questions, and my brain is all over the place right now. I'm thinking, should I give this answer or this answer or this answer? Because I think that's a very interesting question. So I have literally tested button sensors, and I went through the stages of pre-silicon to tape out to shipping the product to the customers. I had the most amount of fun testing that system on the chip product and it is to do
Starting point is 00:33:17 with the buttons. I did try different types of button pressing attenuators and also you would have different types of thickness for those buttons. So you would have a layer called application layer where I could literally, like you just said, load my test into that space and I could actually run test as if I'm, my test is the application that I'm building on top of the chip. So you hit that, like I've literally done that. When we do it again, the same, I felt like the concept is still the same. You just treat that button pressing, let's just call it robot hand just for fun sake, which I didn't use.
Starting point is 00:34:02 It is very inaccurate to use those things. There are much powerful instruments out there. They might be expensive, but it all depends on where you want to put your money. But at the end of the day, you can also simulate those things as well. So yeah, you can, I think that's the concept I want to talk about a little bit.
Starting point is 00:34:24 When you are designing your software, think about testing as an application that you would load, which means you're thinking about testing from the time you're adding a line of code. How am I going to test this thing? It's something that you should think about, and that would save you so much time, so much money, so much frustration that we are talking about. So that is a classic example of well-designed firmware and application software, where testing is kind of part of the software development. If you want, I can continue with the next interesting thing that came to my head.
Starting point is 00:35:00 Yeah, yeah. So this is one of the things I am kind of like passionate about, which is if I'm asked to do some button pressing tasks, like the way you just mentioned, Chris, like I have a website or I have a GUI based application that I have to test that for all my, let's say, you have a tools configuration software. If my job is to press the buttons and make sure all the widgets and gadgets and everything that is on the screen,
Starting point is 00:35:33 combo box, drop down, whatever it is that you've added as a developer, my job is to test and ship the product and to test that on Mac, Windows, Linux, all kinds of OS. I don't know what customers are gonna be deploying to test that on Mac, Windows, Linux, all kinds of OS. I don't know what customers are gonna be deploying the tool configuration tool on, right? Then that's when I think about how do I automate myself out of this job? How do I sit with the developer and see what is
Starting point is 00:35:55 in the backend of these UIs? Let me think about how they are generating the data. Is that a jargon of text file or is it JSON file? If it is a JSON file, how do I like, if I click this button, this particular register has to be written to or it has to be read, then can I just like apply some prefixes, post fixes, that'll tell me to predict what is this button going to do? Then what I did is that I automated myself out of that job by automating, auto generating the tests as they change the UI. But you know, before that, I had to press the buttons and I had to be so uncomfortable. I had to hate my job to get to a place.
Starting point is 00:36:39 It has to be something. What if the developers don't have to rely on me and their tests are auto generated as they change? They want to, okay, today I feel like having a combo box in this UI. No, I'd like to have a radio button here. You know, they can change their mind too. I can't be pissed off about that their mind is changing every day. And then I got to rewrite my tests again. I mean, we are doing something wrong. So that's an interesting one. Believe it or not, I could get in trouble for this one, but I'm gonna say it.
Starting point is 00:37:10 I replaced about 160 people, including myself, by auto generating tests. That's a very interesting, proud story of mine. Although it's, I just thought that I started doing something more interesting because I did that after that. So you mentioned that you got into this because the test team had been laid off and now you're talking about auto generating tests and writing yourself out of a job.
Starting point is 00:37:40 Is test engineering a more difficult job to stay in? I like the twist you just added. You just said this, but you also did this. So thank you for holding me honest there. What was the last sentence? Is it a job that doesn't have much stability? Oh, I'm a living proof. It has really high stability right here and I can give my take on it and then listeners
Starting point is 00:38:15 can take what they want to take from it. In my opinion, I stayed curious. That's why I got into testing. I enjoyed it and I just thought, you know what, I think I'm good at this. I'm just going to like step into this, stay curious. And then I also know that I had to evolve as the industry, as the technology evolves. So I was testing telecommunication products in the beginning, like big boxes in the network somewhere, right? And then I started testing financial models for insurance companies to calculate risks and claims. So I completely went to the other side where I was testing UIs and web apps and desktop applications.
Starting point is 00:38:57 What I learned there is I stayed curious on how the user experiences, how should I find issues before the users of those applications find. I stayed curious there. But I also automated a bunch of stuff. It kind of was easier automation and then finally stepped into embedded software. I was so fascinated. I still remember the time I asked my boss, Dan, Gordon, what is a GPIO?
Starting point is 00:39:25 So I can never forget. Like it's staying curious. It's not worrying about what others think. It's about, I think something is there and it's kind of like keeping me on my toes and I need to figure out how I can be the best at this. And how do I apply my strengths to be the best at this? And then also, obviously, when you're automating, when you're testing, when you're finding
Starting point is 00:39:48 issues, you are going to get on people's nerves. Your job is to find issues, but then the timing matters and you would end up frustrating some developers and you need to be staying resilient. That's another thing. Bugs are gifts. I mean, getting a bug before the customer sees it, it's another thing. Bugs are gifts. I mean, getting a bug before the customer sees it, it's a gift. But sometimes you don't want the gift right before you're gonna ship. I have to say, I have had unfortunately just a few
Starting point is 00:40:19 instances of working with really, really good developers, but they were really, really good. And in one case I'm thinking of, he sat with us, not developers, testers, excuse me, development testers. And in one case he sat with us in our area, and so we were the developers, he was the tester, and we were basically seated in the same place and we interacted constantly. Large cubicle-ish.
Starting point is 00:40:40 Yeah, yeah. And psychologically, he was very different from us. Like, there's a thing with developers where even if we're trying to break our stuff, we can't because there's something... You think about how it should work. Think about how it should work. And so you just, there's this blind spot of, well, I'm not going to try, I'm not going to try, no, to try certain things.
Starting point is 00:41:04 And he would just, there would be these sequences where it was like, well, when I do this, this, this, and this, this happens. I'm like, why are you even doing that? And then a week later, he'd demonstrate that, well, that's a very common thing that happens. But oh, and he had this attitude of, maybe it was an age thing, he was an older gentleman, but he had this attitude where if he found a bug, you couldn't be mad at him because you were the one at fault. If you're the only mad at yourself, which is true.
Starting point is 00:41:38 Yeah, but it was like, well, I did this and your stuff did this. And instead of like, well, why did you find that? It was like, well, I'm very ashamed. Um, but it was, there were certain relationships with testers where it really just worked and that was one of them. Um, but I think there's a different mindset and I think you've touched on it. A different mindset from a good tester to a good developer, because a developer we're biased to, we want things to work and tester while they want things to work, they want things to work after they, while they want things to work, they want things
Starting point is 00:42:05 to work after they find everything that doesn't work. And I don't know if it's just, you know, like you mentioned, curiosity, there's a different form of curiosity or what have you, but I have noticed that good testers are few and far between and I don't know if they, we need to train people differently to become good testers or if there's just something temperamental. Or if we need to stop telling them it's a dead end job because it isn't. Definitely. It's definitely not. I mean, I mean, I do have to say, Alicia, with the AI, doing the testing tasks, right? So that could definitely be making you feel a little bit
Starting point is 00:42:37 intimidated, like, oh my God, I could be replaced by AI. Like, they get this talk talked about it, right? Like, can I do my job so well that I could replace myself with automation and then I can go and do something fun? I would apply the same concept to AI testing, right? How do I leverage AI to do better testing? How do I make it do fun, but very corner cases, it still finds the bug? How can I be taking that for granted instead of thinking that AI is limiting my options now? So that's another way. That's another thing I always think, like, how can I be ahead of AI is my motive right now. And I think that's really important. And I think that's a whole other conversation.
Starting point is 00:43:24 And I think that's really important, and I think that's a whole other conversation. If we start now, we'll be here for at least another hour. Because I think that's a fear across the industry. It's not just testing. Yeah, it's the same concept about testing. I think I always applied in the past, which is I moved from domain to domain to domain. I remember one of my close friends said, like, Komati, you're never satisfied. It's not about satisfaction. It's about it's not challenging enough for me. So how do I solve more complicated problems next? And then I went from simple semiconductor chips for Bluetooth to all the way to semi is chips for running a whole car.
Starting point is 00:44:01 That's how complicated the embedded software testing got for me. But it's all in your hands and your mindset, like you said, Elisya. And you've gotten to work on some really interesting applications. That's always been what I liked about embedded systems was that the devices go off and do things and I get to kind of be part of their lives. But you've mentioned, you know, I think financial software and cars and buttons and it just, you get to see a lot more if you're curious about, if you're curious beyond what needs to get done today. Yeah, absolutely.
Starting point is 00:44:39 I feel like my career would have been absolutely different if I had just stayed on telecommunication because look, we have 5G, like LTE, like what else can we do with that? We are like thriving there. You are giving a talk at the Embedded Online Conference, which Jacob asked me to say a few words about, Jacob being one of the organizers. It's taking place the 12th through 16th of May. Lots of engineers bringing their experience with practical presentations focused on embedded systems development. Philip Koopman, James Grinning, Jacob Beningo, all been on the show. You can get $50 off your registration with the promo code embeddedfm at checkout.
Starting point is 00:45:26 Okay, but sorry, you're giving a talk and it is about, I don't know, testing, maybe hardware in the loop? Yeah, it's like arranging the ducks in a row. I love the ducks. The ducks are in the process. In other words. I'm sorry, could you say it again? I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was,
Starting point is 00:45:44 I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I was, I love the ducks. The ducks are just like a test process. In other words. I'm sorry, could you say it again? I was nerding out about the ducks. I said, I'm talking about arranging the ducks in a row. In other words, automated hardware in the loop test processes. And what do you hope people will get out of your talk? They would know what is hardware in the loop testing is.
Starting point is 00:46:09 Well, we've already covered that. I mean, people here, no, sorry. And then they would know how to incrementally add continuous integration based automated test processes, even with hardware in the loop, when you would think that, hey, if I add a hardware in the loop based test gate to my PR testing, I'm doomed. Because we have one bench and we're sharing among 10 developers and that bench could be broken. So that's a fear everyone has.
Starting point is 00:46:42 So this would give a little bit of an idea of how you can approach that real challenge because that happens in a lot of startup companies. You have, like you said, I have three DUTs connected and it's sitting on my desk. I'm like in the early stage of development. How do I bring it up quickly? And I would still want to develop faster. I want to break things but without breaking my whole day. I have never let the smoke out of the chip with my software.
Starting point is 00:47:13 How often do you have hardware in the loop that do break? I mean, software really isn't supposed to break the hardware, or I think the hardware qualifies as weak in any sense. Do you have fire extinguishers on the benches? We do, literally. I remember there was a time that we were like, I think I got to a point where one of my engineers recently, I caught up with him back in the Bay area and he said like, hey, you know, we used to, we had like 400 plus benches connected like remotely and then people were able to access it and run their test manually or see what that was awesome. But what happened is that in that building when we had those many benches, we were like sucking so much power.
Starting point is 00:48:03 We used to have power cuts. So it does happen. You just have to be prepared for everything with respect to testing, I guess. Just to say it lightly. Okay. But how, assuming you're not actually like frying boards or trashing motor drivers, how do you, how do the engineers tend to break the benches? Yeah, so let's say you are developed, so you're a company that's developing audio chips. And obviously, that's your specialty and you're great at it.
Starting point is 00:48:41 And you obviously think about how do I create a platform-based library so that I can just use the platform, common platform across all my chips. And that's when sometimes most of the common breaking problems happen there, which is you if your platform is broken, whatever the firmware you build on top of it, you're gonna brick the bench. Brick is another word we use in the hardware testing loop. Hardware testing, that's another jargon we use here. What it means is that you push a code that has the ability to break many other firmware that you could build on top of it. And that means you are trying to do something very cool but you tested it on maybe one device in the test. Maybe that's called it A. But you didn't test it on B and C. So you
Starting point is 00:49:35 don't know. But because you don't have those resources you could only test it on A and then like hope, pray to God, and then push the code, and then you end up breaking that code would break for B and C. So that means you've got a bug at the platform layer that it has the potential to break most of the benches if maybe just one bench might be safe. LESLIE KENDRICK Is this where B and C might have minor hardware differences or major hardware differences that should be accounted for, or is this where B andC should be the same hardware? There could be different types of hardware, but it is supposed to use the same platform
Starting point is 00:50:15 code, OIS platform code. I just wondered how many devices I needed to run the same software on to prove that the same hardware should work, which those two together should be okay. And if they're not, that's a different problem. Yeah. And that's another complication, right? If you're in a development environment where there's new hardware revisions, you've got to rev the benches pretty often too, right?
Starting point is 00:50:41 Absolutely. That's exactly the problem. So my job is to keep the relevant environments up to date. Okay. And sometimes it changes so fast. Considering that we work mostly remotely, I mean, you need to be very fast at planning for all those things and then making sure
Starting point is 00:51:00 that those things are available. But then sometimes the software can go faster than you think, like development. So you may not be prepared for that. But I do have another way of breaking. Like I think I would say like, why would you break software when you're not supposed to, when you push the code?
Starting point is 00:51:15 Let's say, let's remove the platform OS software could break many other firmware that you build on top of it. But let's talk about situations where you said, Alicia, like, oh, it works on my setup. How did it break in the CI? So that's the next problem. It's just that maybe your device might be in a good state that when you loaded the new software that you're trying to develop, when you do a firmware upgrade, it all works fine. But then when you go to a CI environment where it assumes a, you know,
Starting point is 00:51:45 from the scratch, deploy this chip and bring it up, that could be a place where it could break. So that's a very common problem I find with respect to firmware testing, where developers are so confident. I definitely know that it cannot be. It's not possible. It works for me. Yeah. Oh, yeah. That can't happen. That's impossible. I remember standing, yeah. That can't happen. That's impossible. I remember standing over the tester I was just mentioning. Thank you. Yeah.
Starting point is 00:52:12 Because I don't have the bias. You see that, like, I think someone else mentioned, I don't have the bias that I need to, like, I think you told me to test within the Squire. I'm still testing within the Squire. Unless you said you changed your Rukamana to test within the square, I'm still testing within the square. Unless you said you changed your requirement to test within a circle. Right. And I think as developers, we tend to think about what we intended to do with the code and think that's what we actually did do. And so when something goes wrong,
Starting point is 00:52:41 it's intention with what we intended to do, but we're confusing that with what we actually did. My intent was to catch this failure, which I did not. Yeah, I understand you. I have sympathy and empathy. But this reminds me, right? Like that is a good, I mean, I want to add something positive here. This is why I often design a special setup called Break Testbed.
Starting point is 00:53:14 Basically, it's a bench you're allowed to break because I have every single possible way of recovering that bench remotely using automation. That's a special bench which has special particular accessories so that if you want to run your very questionable tests with your innovative software, because I think if a test engineer accepts the fact that developers are going to innovate,
Starting point is 00:53:41 then I want to give them a playground too. And that is Brick Testbed Bench, where it's a playground for developers to think that, you know what, I'm allowed to break this because I'm innovating something. I like how she says innovating, but she seems to mean messing it all up. Do you have a little hammer on a solenoid
Starting point is 00:54:02 so you can smack the board for when you need to, you know, percussive? Percussive maintenance. Well, my benches are going to be locked inside a room. You can't enter. So you can't do those things. So you definitely got to have a hammer. Violence is not allowed.
Starting point is 00:54:15 Exactly. I come from the world of Gandhi's. I mentioned simulation earlier. Do you think hardware in the loop is really that much better than simulation? I think there's no question about what is better, what is not. What is bad. It's a question of when is the right time to do what type of testing. It's like I think something I use in general in my design is that use the right time to do what type of testing? It's like, I think something I use in general in my design
Starting point is 00:54:48 is that use the right tools to do the right job. It's just like that. Like for me, simulation is a set of tool set and hardware in the loop is also another tool set for me. I would apply hardware in the loop test for wherever you need real-time interaction with a physical hardware, but physical. Physically you need that physics and electrical interactions and how your software, which is the embedded software firmware, reacts with those things.
Starting point is 00:55:18 Where I would like two devices to talk on their physical interfaces and make sure their interfaces are working. They're able to be integrated into a system and then all the way to a fully integrated whole car or a whole camera device, right? So it all depends on when you use hardware in the loop testing. I would use in the earlier stage of the product, like let's say you're still, you just got your hardware,
Starting point is 00:55:45 you're just developing the firmware, you wanna make sure all these interfaces work so that it actually captures an image and then sends it over to you for processing later before you apply simulation saying, okay, my car is running and then you have all these cameras and sensors capturing all these signals
Starting point is 00:56:02 and now am I taking a left or going straight or pressing a brake? So it all depends on what is the situ, like you need to decide what to use simulation for, what to use a hybrid in the loop testing for. In fact, sometimes unit testing catches most of the problems. Right.
Starting point is 00:56:20 Is unit testing different than what you're talking about? Yes. Okay, how? Unit testing could be like, okay, so you're writing a, it could even be like a, you're adding a new API to your module. It's a simple, it could be a simple file and you just want to run a bunch of tests, sending different inputs to your API attributes. And then you just want to figure out how well it works, how gracefully it fails, and it fails, how well it throws errors so that it can debug.
Starting point is 00:57:00 Unit testing is just breaking a simple space, like one. Or sometimes you could be writing unit tests for API to API interaction. Or you could always be writing unit tests where you literally don't need anything else. Like you can just run those tests on your local desktop without needing anything. So just step by step as you are developing your testing your APIs, that could also be unit tests. And hardware in the loop comes in when you're doing more system integration testing. Yeah, I would say this is how I arrange my docs in a row with respect to testing, which is do your static analysis.
Starting point is 00:57:41 Just have your build system built with static analysis. It finds so many bugs and then finish all your unit testing if you don't catch any bugs there then you use your expensive testing that would need more resources and that is a way to think about it right there's there's layers of testing and there's at the top of the expense layer is okay I've got real hardware with a bench with lots of things and there's limited resources, it's expensive in time, it's expensive in parts. And then moving down to the least expensive testing, which is the developer compiling it and running for a few seconds or something.
Starting point is 00:58:19 But I think you were talking about the difference between, well, simulation and hardware in the loop testing, and there's right tools for the right job. And I think that goes to expense, too. Where do you get the most bang for the buck? And where should you test certain kinds of things? You shouldn't be testing an API change on the hardware in the loop thing first, right?
Starting point is 00:58:44 Yeah, let me say an example, right? If you're testing parking features of a car using hardware in the loop testing, you're definitely doing it wrong. That's where you should use simulation. Why? Because of the system? Because you're going to crash the car. Better I crash it in the test bay. Simulator. Oh, true.
Starting point is 00:59:08 Okay. Yes, yes. You can crash, you can create some fantastic safety-related tests using simulation. Use them. Think about the project you've been working on and how often you flung the poor hapless simulated human into another dimension. But that's when I really enjoy it. The little simulated guy, he just falls off and he tumbles over.
Starting point is 00:59:35 And then the physics engine breaks and he starts spinning at 7,000 RPMs. Yeah, that's the best part. And imagine simulating a plastic bag flying and you had to test it. Yeah, yeah. Simulation, not real hardware in the loop testing. It depends on what you're testing. And it goes back to where you need to spend your money and time. And are you likely to actually damage something?
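A sketch of what a simulated safety test can look like; the Scenario type and the toy stopping-distance physics are entirely invented, and real simulation platforms have their own scenario formats and far richer dynamics.

    # Hypothetical simulated safety test: crash here, not in the parking lot.
    from dataclasses import dataclass

    @dataclass
    class Scenario:
        ego_speed_mps: float
        obstacle: str              # e.g. "pedestrian" or "plastic_bag"
        obstacle_distance_m: float

    def stopping_margin(s: Scenario) -> float:
        # Toy physics: braking distance at a constant 5 m/s^2 deceleration.
        braking_distance = s.ego_speed_mps ** 2 / (2 * 5.0)
        return s.obstacle_distance_m - braking_distance  # meters to spare

    def test_parking_stops_before_pedestrian():
        margin = stopping_margin(Scenario(3.0, "pedestrian", 5.0))
        assert margin > 0, "collision in simulation, fix it before the test track"

    def test_plastic_bag_is_harmless_to_simulate():
        # The plastic bag case from the conversation: cheap to set up in
        # simulation, absurd to stage on a physical bench.
        margin = stopping_margin(Scenario(3.0, "plastic_bag", 5.0))
        assert margin > 0

The value is that a failing assertion here costs nothing, while the equivalent hardware in the loop failure costs a bumper.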
Starting point is 00:59:59 And the vehicle project that I did was all simulation until we got to the actual vehicle; it had very constrained interfaces and a well-written simulation before we got there. And a human operator who could stop it. Yes, but he didn't. And that's what's good about vehicle testing. I do have to say, throughout my career of embedded software testing, I have only been using simulation testing in this industry of testing autonomous vehicles. But in the past, I didn't have to use simulation at all.
Starting point is 01:00:36 Most of the bugs were caught by unit testing and simple API to API integration testing, to be honest, before hardware-in-the-loop testing. Yeah, I think simulation really does come in when something's large and expensive and interacting. And when the hardware itself is so expensive that you can't do hardware in the loop bench testing until the hardware is done. Right. You have to use simulation because it just doesn't exist yet. But like for a Fitbit, we tried to do some simulation stuff for Fitbit and it was a lot
Starting point is 01:01:05 of work, and it ended up just not being all that useful. It became more useful when we had apps, but that was less of a simulated environment and more of an emulated environment. But yeah, it was like, oh, we were going to simulate the whole thing, and you could run it in QEMU and load the same firmware and stuff. And it was like, yeah, but I have like five on my desk and they all work. So. Right. Emulation is another space. I'm glad you mentioned QEMU. One problem with emulation is that you need to maintain that. Like you're writing a
Starting point is 01:01:33 lot of code to maintain that, so that it can use emulation instead of hardware in the loop. Sometimes it's like, let's just run these 10 tests on hardware in the loop. We have one bench, right, and you have automation, right? Let's just go for it. Yeah, and simulation can be a huge amount of software. All the simulators I'm using right now are things other people wrote for other large projects. And I'm just kind of, you know,
Starting point is 01:01:56 mooching on the end of what other people did. If I had to write the simulator, I would be like, no. That's a different... I couldn't. I think that's a third role, the people who maintain those kinds of test tools. For me, simulation is like magic. Oh my God, I'm fascinated. How much design and science goes into creating all those simulation platforms, man,
Starting point is 01:02:23 like that is next level testing. And you know, I say, I sound like I'm a little bit down on it, but then there's stuff like Wokwi, which... Wokwi is an online simulator of several different development boards and processors. And platforms. Running different languages, and it is amazing.
Starting point is 01:02:39 You can do things that aren't physically possible because you don't have to care about current and power and such plebeian things as electrons. I think there's going to be an overlap with hardware in the loop and stuff like that, where, yes, you're doing hardware in the loop, but it's not real hardware. And that's where it gets confusing. You see, testing is not a dead end. You can just keep on upleveling. I totally agree. Komathi, we have kept you quite a while.
Starting point is 01:03:07 All right. Do you have any thoughts you'd like to leave us with? Oh, yes, absolutely. I would say stay curious. Be the best friend of developers, and let them break things themselves so they can build the systems faster. That's the main thing that I'm focused on.
Starting point is 01:03:32 And also think about how you replace yourself. How do I automate myself as a test engineer? Or not as a test engineer, just yourself in general. If I can replace myself with a small shell script, I would do it, and then I will do something more fun. And you have a new website with blog posts and more information about talking to you. Could you mention that? Yes, I recently launched my website, thekomsea.com. It just happened. I think, again, one day I was just frustrated with the tools and automation and the testing
Starting point is 01:04:11 terminologies that I thought, you know what, I'm just going to write about it in a funny way. So, yeah, I started writing some interesting blogs. One especially, referring to Croissant versus Bagel, really caught a lot of people's eyes, and I'm definitely in trouble for that. But yeah, then I thought, you know what? I have a good way of thinking about how to explain some complex things with easy metaphors.
Starting point is 01:04:37 Why not just write more, stay curious, and see what comes out of it? And then I realized, you know, I want to invest back into the testing community, and I want to spread this awareness and teach more people out there how to do smarter testing. That's how TheKomSea was created. And it's doing well, and I have some fun, exciting projects that will launch through this that will help the community again. So yeah, that's my infant at this time.
Starting point is 01:05:07 I'm taking care of it in my free time. Excellent. Your blog posts are very interesting and informative about testing and the whole environment that many of us find, if not frustrating, still mysterious. Our guest has been Komathi Sundaram, Principal Software Engineer, AV Simulation Testing at Cruise. Her website is thekomsea.com.
Starting point is 01:05:35 That's t-h-e-k-o-m-s-e-a.com. You can see her presentation on hardware in the loop testing at the Embedded Online Conference in mid-May. You can use the coupon embeddedfm for $50 off at checkout. Thanks, Komathi. Thank you. Thank you for this opportunity. I had a lot of fun talking to both of you about testing. Thank you to Christopher for producing and co-hosting.
Starting point is 01:06:01 Thank you to Dennis Jackson for the introduction and some of the questions. Thank you to Nordic for their sponsorship. We so appreciate it. Please sign up for their giveaway. And thank you for listening. You can always contact us at show.embedded.fm or hit the contact link on embedded.fm. And now, I don't know, do you want a quote or a fact? Fact. All right. Hammerheads, which is Komathy's favorite animal, have incredible electromagnetic sensing abilities.
Starting point is 01:06:32 One of the most fascinating facts about them is their extraordinary ability to detect electromagnetic fields. All sharks possess special organs called ampullae of Lorenzini, which allow them to sense electrical fields produced by other animals. That's how they find the little fishies. However, hammerheads have these sensory organs spread across that wide hammer-shaped head, which gives them the enhanced ability to detect even the faintest electrical signals. Gives them a nice wide time difference of arrival.
Starting point is 01:07:03 I don't think they calculate that way. This adaptation, because they're kind of late in shark evolution, helps them locate prey hiding beneath the sand on the ocean floor. They just swing that little nose around, and they can detect the electrical impulses from a stingray's heartbeat from under several inches of sand. That is their superpower, and it makes them one of the most effective hunters in the ocean.
