Embedded - 179: Spaghetti Reducer

Episode Date: December 15, 2016

Miro Samek (@mirosamek) of Quantum Leaps spoke with us about making better state machines through actor objects and hierarchical state machines. Miro wrote a book: Practical UML Statecharts in C/C++: Event-Driven Programming for Embedded Systems. He has an excellent YouTube channel explaining embedded concepts. We discussed his video that describes how a stack overflow works and the related in-depth post on EmbeddedGurus.com. Elecia enjoyed his object-oriented programming in C PDF for both the OO and the UML refresher. Miro mentioned the Software Engineering Radio podcast. We mentioned our favorite podcasts blog post. Also, we talk about Jean Labrosse's recent episode of Embedded.fm.

Transcript
Starting point is 00:00:00 Hello and welcome to Embedded. I'm Elecia White alongside Christopher White. Our guest this week is Miro Samek. We're going to talk about what to do when you don't want an RTOS, but you need to go beyond the while(1) loop. Before we get started with Miro, we are ending the Tinker Kit contest. We have a winner. Yay! Benja Blom. He is going to use the Tinker Kit to start playing with robots and hopefully make it fun enough for his baby boy to enjoy his robot. I think that sounds like a great use for that kit.
Starting point is 00:00:46 Hi, Miro. Thanks for joining us today. Hi, Chris. Hi, Elecia. Thank you for inviting me to your show. We had many listeners request you, so I should have invited you months ago. I'm glad you could make time for us. Can you tell us about yourself? Well, sure. I grew up in Poland. I studied physics all the way to PhD level, actually. But then I had a chance to do my PhD research in Germany, and I fell in love with programming, and especially real-time programming. So after my year of postdoc, I came to the US and I somehow managed to get a job at GE Medical Systems as a software engineer. Later, I moved to the Bay Area and worked for two Silicon Valley companies in the field of GPS. Well, it was at this time that I also developed the first versions of software that
Starting point is 00:01:48 later became QP frameworks. And I also wrote my first book that explained the framework and all the concepts related to it. Finally, in 2005, I started my own software company called Quantum Leaps that develops and sells the QP framework, as well as the QM modeling tool based on UML state machines. Cool. And that's what you're doing now? Yes, that's exactly what I've been doing for the past 11 years. But you didn't mention that you have a bunch of videos that you've been teaching people how to do embedded systems.
Starting point is 00:02:29 Well, that's just, among others, what I do. I think that there is not enough resources that the newcomers to the field can learn from. So in 2013, it was my New Year's resolution. I started teaching an embedded software course on YouTube, and it got quite popular, I would say. Excellent. We're going to talk a little bit more about that,
Starting point is 00:02:57 but we do want to do lightning round, where we ask you questions and want short answers. And if we are behaving, we won't ask you for long explanations about why and how. So, Chris, you want to get started? Sure. Object-oriented or procedural programming? I prefer object-oriented. Would you rather explain leakage inductance or API design?
Starting point is 00:03:27 API design. Favorite fictional robot? Well, I think that R2-D2 was funny, but maybe Wall-E was lovely too. Okay, I can see a tie between those. What is your favorite programming language? I program in C and C++. So probably C++, I would say. Which language do you think should be taught in the first CS course?
Starting point is 00:03:59 Well, I think that it should be possibly low-level, high-level language, which means C. And actually a longer version of my answer to this question would be my embedded software course on YouTube. I teach C there, but I very frequently go all the way to the machine level and I show disassembly and what happens with your code with the C statements
Starting point is 00:04:21 when they are actually executed on ARM Cortex-M processor. That's interesting because I think that gets missed a lot in education, that ability to drill down to the assembly and say, okay, here's what's really happening, without necessarily knowing assembly language, but being able to use it to debug. So my goal is not to teach ARM assembly, but just to show people who probably for the first time see what those machine instructions look like, what they do. What does it mean that you can teach or tell a computer to do something in the real world, like to turn an LED on and off or something like that. And so when people learn and see how fundamental concepts are implemented
Starting point is 00:05:10 ultimately, then they use those concepts much more confidently and program with more efficiency and they just understand this stuff much deeper.
Starting point is 00:05:24 Okay, I broke lightning round, so we're going to get back to that. You did, so I'm going to ask a short question. What's your favorite physical constant? Probably Planck's constant. That's a good one. Favorite processor? I would say that the best would be Renaissance RX, although, of of course Arm Cortex-M is the most popular these days. How is the Renesas RX different?
Starting point is 00:05:53 Well, it is cleaner. It is not as complicated as Arm Cortex-M. As you know, Arm has a lot of baggage, the processors had two instruction sets, thumb and ARM. Now Cortex-M has only thumb two, but some of this legacy still lives on. It has those modes and so on. And the FPU introduced in Cortex-M4 is a big problem. I mean, for context switching, you have to remember all those registers. The context switch is longer and very complex.
Starting point is 00:06:31 Somehow, RSS and Rx, on the other hand, manage to give you a single precision FPU and all of this without any of this complexity. Apparently, I need to go look some stuff up. That's neat. Chris, do you have any more uh i can go as long as you want you picked all of them out huh you want one more and i'll do one yeah sure uh favorite planet favorite yeah well the Okay, favorite planet besides the Earth. Well, I don't know.
Starting point is 00:07:09 Maybe Jupiter. It's a giant and probably very interesting. Probably maybe the moons of Jupiter are actually more interesting. Okay, last one. What science fiction technology or concept do you think will be real in our lifetimes? I think that the biggest potential is I see in some convergence between nanoscale microelectronics and biology, like genetics and artificial synthetic life. Something very cool might happen there. I don't know, micro robots or something like that.
Starting point is 00:07:52 That's both very exciting and very scary at the same time. Yes. I've read that science fiction novel. It didn't turn out well. So we talked to Jean Labrasse about building a real-time operating system not too long ago. And then we talked about his operating system, the micro-COS. And on the show, we talk a lot about running bare metal with a while loop, with the occasional interrupt, and definitely a state machine in there. But those aren't the only options. You talk about hierarchical state machines and active objects.
Starting point is 00:08:31 Can you tell us about those? Yeah, I mean, this is probably a longer story. And probably the best way is to start with what people already know, which is the bare metal, also called super loop, sometimes also called main plus ISR, and sometimes also called foreground background system. Now, so this architecture is that you have, you know, your main function is being called, you run bare metal, so there is no operating system whatsoever in your system and then you initialize your hardware and everything and finally you enter an endless loop while one or forever loop and in this loop for instance for the venerable blink example what you would do is you would wait for a certain period of time, say a thousand milliseconds,
Starting point is 00:09:27 then you will turn the LED on, send an instruction to do this. Then you will wait again for another thousand milliseconds and you will turn the LED off and then you will look back, which will in the end cause the LED blink once. Stay on
Starting point is 00:09:50 for a second, stay off for a second. So blink once in half a second. Once in two seconds. That makes sense. I mean, we all do that when we boot up a system. We all do that and that's why I wanted to start with something that we all do and know. This is also how, for instance, Arduino programming starts,
Starting point is 00:10:08 and there is an Arduino Blink tutorial out there on the web. Yes, but we know that that's not where it ends. I mean, because that's not power efficient, and there's just so much better stuff to do. Right. But this introduces the most important concept, which I will call sequential programming.
Starting point is 00:10:29 So you program this based on sequence of events. You turn the LED, you wait, you turn it off, you wait, and so on. Now, how can you improve it? I mean, what are
Starting point is 00:10:41 the obvious shortcomings here? First of all, it's difficult to extend. Because, for instance, let's say that you want to react to a button press. This button press, you want to obviously react faster than in one second that you wait. And so in order to react faster, you would like to have probably a second such loop. And this is where RTOS comes in, real-time operating system, because the job of an RTOS
Starting point is 00:11:10 is to allow multiple such background loops to run simultaneously on a single CPU. And the job of the RTOS is then to make an illusion that all those forever loops, while one loops, have the CPU all to themselves. And how the RTOS is doing this. So while in the sequential code, when you call the delay function, delay for 1000 milliseconds, what is really happening is that this function spins in a tight polling loop and waits and wastes all those CPU cycles until the time elapses,
Starting point is 00:11:47 and then it returns the caller and proceeds. In the RTOS, it will be functional equivalent of this would be actually to put the calling task to sleep, meaning that it will be switched away out of the CPU. This process is called context switch. And then switch in a task that has probably something useful to do. And then when the time elapses, the artist will do the opposite. It will switch the context again and will return to the interrupted task, which was blocked all this time.
Starting point is 00:12:32 And so now we introduce the concept of efficient blocking based on the RTOS. And this is how most of the software is developed. And this is the role of RTOS. This is what the RTOS brings to the table, so to speak. Yeah, but I honestly wouldn't implement it that way. I mean, we're talking so far about an LED and a button. Even if you added 15 LEDs and 20 buttons, I wouldn't bother with an RTOS with that. I mean, an RTOS you do need sometimes, but it is a lot of overhead.
Starting point is 00:13:13 That context switching is expensive, and understanding how the threads work is painful if you've never done it, or if you have done it on a more functional system than an embedded system, the way embedded threads work is also painful. Right. So this is way too much craft to make many systems work. Well, I would agree that obviously a toy problem like the Blinky, that's all I have time to explain. Sure, sure. a toy problem like the blinky that's all i have time to explain sure sure then um then it's um
Starting point is 00:13:47 obviously still small and and uh the super loop is much more scalable than this so obviously uh i wouldn't either uh go with an artos just to blink and blink and led and then perhaps react to a button press this was just an example to show the general principle, the general paradigm, which I will keep calling sequential programming. Okay, okay. Mm-hmm. And so while RTOS is a huge improvement to the super loop, it has certain shortcomings. But first, let me kind of enumerate what the benefits of RTOS are.
Starting point is 00:14:27 So first of all, it is divide and conquer strategy. So instead of dealing with one messy super loop, because those super loops tend to get messier and messier as you keep grafting on new features. So instead of dealing with the kitchen sink of one super loop, you can partition your problem into those tasks or threads that now are efficiently blocked by the RTOS. talking here about partitioning in the time domain because now it is possible that all those super loops those tasks appear to be executing simultaneously on a single cpu so so the partitioning that i'm talking about is in a time domain and this is very very valuable The second benefit is that you don't waste your CPU cycles on endless polling. For instance, in a Blinky example, the useful CPU cycles are one in a billion, perhaps, right? Because turning on the LED costs you a few machine instructions, but waiting for a second costs you millions of instructions.
Starting point is 00:15:44 But we would never do that. I mean... Yeah. And then the third one would be that the artist notices very easily when all tasks are blocked, and then it is a good time to put machine to sleep,
Starting point is 00:16:00 which allows you to save a lot of power. So all those are benefits, but there are, of course, problems. And what I'm trying to achieve here is to explain, because what I see, the biggest disconnect and the problem in the community is that people know about sequential programming. People know about Arthas, but they cannot make the paradigm shift to event-driven programming
Starting point is 00:16:28 because they don't know why should they do this. Why should they bother? What is really the difference between the two? Okay, so we have the benefits of the RTOS. And they are all very still sequential programming. But like I said, I'm still not using an RTOS and they are all very still sequential programming. But like I said, I'm still not using an RTOS. Can you guide me over to more what you're thinking with this event-driven stuff?
Starting point is 00:16:56 Yeah, so the point of event-driven programming is a completely different viewpoint. It's not sequential. So your system is actually constantly waiting for occurrence of an event because an embedded system is naturally very event-driven. It just constantly waits for something to happen and then it reacts.
Starting point is 00:17:21 And so what that means is that when the event occurs, you very quickly handle this event, and then you return after handling this event without ever blocking back to the infrastructure that called you. So what happens is that, for instance, the graphical user interfaces in the early 80s first introduced this concept of event-driven programming. And as you recall, for instance, Windows programs, Windows 3.1, they were not multi-threaded. And yet they handled most of the events in quite timely manner.
Starting point is 00:18:09 Of course, there were shortcomings, but without multi-threading, they were able to handle multitude of events. And they were structured that way that Windows was in control at the time. And all events were converted into event objects. They were put in the message queue or event queue. And then the user-provided code was called to process those events. So this would be the structure of an event-driven program. Yeah, and I think many UI-based things, even embedded systems, are structured that way these days.
Starting point is 00:18:45 Still, to this day, they are structured that way. But I would like, again, to contrast this with sequential programming in which you block in the middle of processing and wait for the event in line, not returning at all to the caller. Yeah, that's usually bad. There's usually all kinds of badness with just blocking to the color. Yeah, that's usually bad. There's usually all kinds of badness with just blocking in the middle.
Starting point is 00:19:09 But that is the early paradigm that people choose a lot. The easiest paradigm. Yes. Because wait five milliseconds is simpler than... The recipe style of programming, right? Then call me back in five milliseconds yeah yeah okay exactly so so um so now the the main difference is that that uh in order to have an event-driven program you need to have several uh things first of all you need to
Starting point is 00:19:42 have event instances event objects that are generated for every event. People often confuse events with the event delivery infrastructure. For instance, in Artosys, you often hear that a semaphore is an event or event flags are events. They are not. They are just the infrastructure
Starting point is 00:20:02 to deliver events to the blocked tasks. It is like confusing sending mails, sending letters with the postal service. Some of us are part of the infrastructure of the postal service, while your letters are those events, messages. So you have event objects. Then you have to have the infrastructure that delivers those events and that calls the user code. This means that the control is inverted so that the infrastructure calls your code and not the other way around. Again, when you program within RTOS, you write the code for each and every task, and then you call the services such as semaphore and time delay and so on. So this is the main difference between the RTOS
Starting point is 00:20:57 and event-driven infrastructure. Such an infrastructure is called very often a framework, and the inversion of control is the characteristics of this framework. So from a high level, you mentioned these event objects. I'm not quite super clear on that concept. Would it be... So the semaphore is a signaling mechanism, and then you have user code that handles the events.
Starting point is 00:21:23 Would the event object be similar to, I don't know, in the most basic sense, a case in a switch statement for a particular event type? Or like an ADC reading? It wouldn't be that the ADC was read, but the actual value of the ADC reading. The contents of the message? Actually, you are very close.
Starting point is 00:21:42 It will have both pieces of information. First of all, an event has to tell you what happened. And it will be then handled in a case statement of a switch. So this would be the discriminator for the switch. What happened? So let's say that you have an ADC. When you read an ADC, you will generate an event. And the very first thing that this event will tell you
Starting point is 00:22:13 in its signal part is what happened. So ADC conversion has happened. And then it can have event parameters. And this could be different for every event. And this particular one would have adc reading inside let's say 16-bit quantity that tells you the actual measured value so when you receive this event you will know what happened adc has converted and you will know the value at the same time. Okay. And then our event-driven thing, I guess we should call it an actor because that's what we're going to eventually call it, can handle that and can do whatever it needs to do given that it saw that event. That's right.
Starting point is 00:23:00 The most interesting part in this is that it should handle this possibly quickly and in a run-to-completion way. That means that it should process one event at a time. It should not be in such a situation that it will have to process another event while still busy with the previous one. The structure of event loop, also called message pump sometimes, guarantees this one-at-a-time processing, this run-to-completion processing. So you call this, it has to return, and then the loop loops back,
Starting point is 00:23:41 takes new event, if possible, and then dispatches it to the active object for processing. I kind of like the idea of message pump. Because if you think about it that way, if you think about that the messages come in and the messages go out, and your goal in this thing is not to wait on anything, but to handle it and pass on whatever you need to pass on, whether it's a new event, whether it's an ADC start conversion and a send back to the mothership, the actual value, or whether it's to turn on a motor if the value is low enough.
Starting point is 00:24:20 So you go ahead and you get your ADC value, you turn on the motor if the value is below a threshold, and then you pass on the message that says in 10 seconds or when the ADC is low enough, turn off the motor. You don't wait to turn off the motor, you pass it along. Yeah, I think that's the piece I was missing, is that you're all in on the event model, right? So it's not just that you have a case switch statement and you get messages, then you go off and do your normal sequential stuff based on all those. It's that inside the handlers,
Starting point is 00:24:58 you might trigger other events, and in fact, all of your actions are event-based. Yeah, and I wanted to just point out what alicia said is very very important here is that when you generate an event such as you know because the adc conversion was below a threshold you might turn on or turn off the amount and so on you generate this event and post it to some other event queue, possibly your own, but you don't wait in line for this event to be processed. This is called asynchronous event posting. This is what it means to be asynchronous.
Starting point is 00:25:37 Yeah, and that's actually a concept that higher-level languages on desktops like Objective-C and c sharp have been have been extending to the point where it's almost it's almost ubiquitous you set up these little objects in line in your code and say okay this is my asynchronous event handler and something happens and you don't even basically know you know it just takes care of it however it wants to but it's totally asynchronous um it's sometimes hard to come to code that is built like this because in sequential version, it's like okay, this happens and then this
Starting point is 00:26:10 and then that. And if there's an error case, I go over here and I do this. But with sequential programming, you have all of these little actors or, I'm sorry, event programming. You have all these little actors that run around doing whatever it is they need to do. But what happens first? Yeah, it's one these little actors that run around doing whatever it is they need to do. But what happens first?
Starting point is 00:26:26 Yeah, it's one of the things that I've had trouble with moving from embedded to desktop sometimes, because I'll go and have to do something for an app or in QT, for example, for a desktop program. And my first couple of hours are spent just stuck because where's the entry point? Right. Where do things start? Yeah, and we are very clearly going towards state machines, but let me only point out this, that all this infrastructure,
Starting point is 00:26:56 this event-driven message pump and reactions to this just by calling functions that are run to completion, so they are just one-shot functions. They are not in endless loops or anything like that. It's just a function that quickly processes and returns. All this can be implemented with a traditional RTOS. This could be the guts, the structure of each and every thread.
Starting point is 00:27:20 And this run to completion is very often misunderstood that it means that the event handler needs to monopolize the CPU while processing an event. It's not true. The event handler can be preempted multiple times by other more important threads that can be running on the system. As long as there is no sharing of resources, the event handler will eventually complete and complete its run to completion step. And it will grab the next event and will process it. So it is possible to combine preemptive multithreading with this paradigm.
Starting point is 00:28:06 And this is becoming very interesting because it's suitable for hard real-time work. Yes. And the run-to-completion is nice because it is something you kind of want to do if you're doing a new certification, FDA or FAA. It helps with traceability of your requirements. So your requirements document and then design and all that. And you can see in the code, this traces all the way back. If you have run to completion, then you can trace one-to-one. And that is so nice.
Starting point is 00:28:39 Instead of saying, well, this state machine handles six different use cases and six different traceability points. And so the run to completion has some use there. But you were saying about sharing resources. And one of the things in an embedded system is you don't have a lot of resources, so sometimes you end up sharing things, whether it's memory or access to the ADC value. How do you avoid sharing? Yes.
Starting point is 00:29:12 So, okay. So first of all, the sharing of resources. What you should do is you should, instead of sharing resources, make an object, event-driven object, that will be the owner of sharing resources, make an object, event-driven object, that will be the owner of the resource, the manager of it.
Starting point is 00:29:32 Let's say that you have an ADC or some screen tool, like, for instance, an LCD GUI. You can make an object that will encapsulate this, will own this, and only this object has the rights to access this resource directly. All other objects that could be running in the application have only cannot access this resource
Starting point is 00:29:53 directly but can only send events to the owner, to the manager. And this resolves all many, many potential conflicts and serializes the access to the resource. Please note that the event exchange is thread safe. So this is the job of the infrastructure, of the framework that runs the show. So you as a programmer don't need to worry about this. So you just post events and receive events from those owners. That's how you solve
Starting point is 00:30:27 them. You don't share at all. Okay. Well, and to some extent, that's a feature of good encapsulation. I mean, you don't really want to share the lowest level stuff. You want somebody else to have control over it and have everybody say, oh, I want this, and then to have the person with control over it say yes or no. Yes, and I absolutely agree. But, you know, it is easy to say something like this, those shall not share. But as long as you don't provide any infrastructure, any help to the programmers to avoid the sharing, this is just, you is just a good guideline, but in practice they will have to share
Starting point is 00:31:09 and use mutual exclusions such as mutexes. Here, in this paradigm, you provide some mechanism, which is the thread-safe events that can be sent and received. And this is the game-changer, because now it is practical to avoid sharing that way. So we have thread-safe events, but I thought we didn't have an RTOS. Well, we can have an RTOS and we can have a variety of RTOSes. And that's where it becomes a little bit confusing to people because they think that it's either Argos or event-driven active objects.
Starting point is 00:31:47 But in fact, it could be both at the same time. Okay. I'm not used to thinking that. The only thing is, of course, how you architect your application. Of course. But the point is this, that when you start doing this in the event-driven way,
Starting point is 00:32:10 which is, by the way, the recommendations of many experts in concurrent programming, then all the blocking mechanisms of the artists become a problem because they can be used accidentally while you are not supposed to block. For instance, people call event posting to the message queue, and such a message queue might block when it is full. And many artists actually will block when the event queue becomes full.
Starting point is 00:32:42 Or many people call, for instance, send a message and expect a reply. And what they do is they block in the middle of this until they receive the reply through unblocking. And this is a backdoor reception of an event. And by the way, it also violates
Starting point is 00:33:00 the run-to-completion semantics that I was trying to explain. Because it means that while processing one event that triggered this whole sending and receiving a reception of the request, they are also receiving the request, which is another event. So they are not done with the first event while receiving the other. And they are violating at this point the run-to-completion semantics. And then they get themselves into trouble, really.
Starting point is 00:33:27 So you really have to understand the primitives that are making up your infrastructure to know that you're not screwing up your rules about running to completion. Right. Yeah, but the problem is when you use an RTOS, most of the RTOS primitives are blocking. So when you buy an RTOS, most of the money that you are spending is exactly for those primitives. And now I'm telling you not to use them. So first of all, you know, this goes counter this guideline that an API should be easy to use correctly and hard to use incorrectly. When you
Starting point is 00:34:09 use an RTOS in a situation like this, it is just all too easy to use blocking mechanisms inadvertently. So it is too easy to do incorrect things. I'm still back to, okay,
Starting point is 00:34:26 you're convincing me not to use an RTOS, but I was already on that boat. I mean, I like RTOSs and sometimes they're very useful. Oh, I like them too, but But for what you're saying, it isn't providing
Starting point is 00:34:42 what I want at that point. What are my options if I don't want to do an RTOS, but I also don't want to do a sequential state machine super loop? Well, up to this point, you need to create an infrastructure, an event-driven infrastructure all by yourself. So you need to kind of invent your own event objects. You need to repurpose the message queues from an artist for your event queues.
Starting point is 00:35:12 You need to invent event-driven way of delivering timeouts, so time events, and some other basic mechanisms to handle all those things in a vendor-driven way. The other options you had was to use a modeling tool such as IBM Rhapsody and others that come with built-in event driven frameworks of that sort. For instance, Rhapsody comes with OXF, IDF, and other frameworks. So you could reuse those. And for that reason that there are not so many choices, I have developed a QP framework, and this would be another choice.
Starting point is 00:35:57 There are, of course, other frameworks out there available, but not that many and certainly not as frequently available as RTOS. When I have programmed, like Chris was saying, on a computer, a non-embedded computer, and ended up using event-driven for UIs and whatnot, there isn't a huge framework. It's a bunch of messages. Why do I need a framework? I can definitely do a timer.
Starting point is 00:36:32 I've done that a lot where I won't block on the timer. I'll call back when necessary. Or I'll put things in interrupts if they're very very small or just have an interrupt have a flag that i can then check for and call the necessary function that does everything it needs to do why do i need more than that what is? What is it buying me other than lost RAM and cycles, which, of course, we never have enough of? Well, first of all, when you're programming on a desktop and all you need to write are those event handlers,
Starting point is 00:37:18 that means that you're already using a framework because you are not writing all the code that calls your event handlers. This is provided to you already. So in a sense, you are already writing all the code that calls your event handlers. This is provided to you already. So in a sense, you are already using a framework. This framework is rudimentary and provides the event-driven event queue and perhaps timeout mechanisms, but typically does not provide state machines. And this would be the next step that I would like to talk about and why you would even need state machines.
Starting point is 00:37:51 So that would be my answer to your question. Okay. So you need a framework. You need a framework. You need this inverted control that calls your code rather than you writing the code from scratch every time. Which is something we rail against a lot, having to the code from scratch every time. Which is something we rail against a lot, having to write code from scratch every time.
Starting point is 00:38:08 Yeah. So what happens when you write those event handlers often is that you end up... So what happens is this. Because it is no longer a sequential code, you cannot store the context in the sequence. So remember that when the sequential call like Blinky was there, that you could wait and you knew that it was after you turned the LED, say, on and before you are going
Starting point is 00:38:35 to turn it off. So when the delay elapses, you know that you need to turn it off. You have the context in the sequence. When you have to return every single time to the calling infrastructure, you lose this sequence. You lose the call stack, essentially. That's what it is. And so what happens is that you have to replace this context somehow.
Starting point is 00:38:57 And what people do is they invent flags and variables. For instance, they could invent a flag that LED was on. Okay, so now if LED on, turn it off, otherwise turn it on, and so on. And this very quickly, as the situation becomes more and more complex, keeps adding more flags, more ifs and more else's, and it quickly degenerates into something that programmers know as spaghetti code. And this is where the state machines come in because the state machines are the best known spaghetti reducers we know of.
Starting point is 00:39:37 Right, because then it isn't if the LED is on, it's if I am in LED on state, then my only other option is LED off state. Right. So what happens is now you will replace this with a state machine that has those two states, as you just said. And so if you are in LED off and your timeout event occurs, you know what to do. You need to transition. You need to turn the LED on and off,
Starting point is 00:40:06 whatever the state was, and you will change your state because now you are, if you are in on state, you are going into off state and so on. So now we are ping ponging between those two states with, and please know that every time you handle the same event, timeout event, so the event is identical every time, but you process it completely differently you depending on your state and the whole beauty of this approach is that instead of handling the whole bunch of variables that people keep inventing and and then they have to introduce ifs and else's you have just one variable that remembers the current state is this state on or off or perhaps 100 other values 100 other states that
Starting point is 00:40:46 you might have and and and this is just one variable and uh that um that is simple to i mean first of all it doesn't cost much to store and remember so much all the relevant context but you have to be careful because we do want these to be small. You don't want an LED on and ADC is converting state and an LED on and ADC is finished converting state and LED on and an ADC is off and LED off with all of those three. That's ridiculous. You want little tiny state machines or as small as they can be and have several of them that handle their own particular area instead of making a giant one. Is that right? Yes.
Starting point is 00:41:38 Yes and no. Yes and no. You certainly don't want your state machine to grow too big because it's difficult to understand. And unfortunately, the traditional state machines have this tendency that's called the state transition explosion problem. And that's sort of like what I was saying. Because what happens is that very often you have events that are handled identically in a group of states. For instance, when you model a desktop calculator, you have the button clear and off. And in whichever state of the computation you might be, you are supposed to clear the display when somebody presses C
Starting point is 00:42:22 or turn the calculator off when they press the off button. So in every such state, you have to handle those two events in the same way. But the traditional state machine will have no way of avoiding those repetitions. So in whichever state you are, you have to always have those two transitions in there repeating over and over and over again. And so this is the shortcoming of traditional finite state machines. So I think that what you are asking for is what would be the mechanism then to try to reuse those common transitions rather than repeating them over and over?
Starting point is 00:43:02 And the answer is hierarchical state machines, also known as Harrell state charts or UML state machines. Okay. I'm familiar with them as hierarchical, but not UML. Why are they UML state machines? Well, the history is this, that David Harrell invented this whole state chart formalism, I guess it was in the 80s, when he worked for Israeli Air Force. And this concept then became very, very popular. The real-time object-oriented modeling room method took it over and introduced introduced room actor that exactly had the behavior specified
Starting point is 00:43:47 by the Harrell state chart. They called it room chart at the time. And then the UML just adopted it, just took it pretty much exactly how it was invented by Harrell. And they are now called UML state charts. But it is essentially the same concept, but very, very closely related. Okay, for people who haven't heard of these things,
Starting point is 00:44:14 ROOM is an acronym, R-O-O-M. It's all capitalized. I have no idea what it means. UML is a unified modeling language, which is if you've seen design patterns or you've seen C++ programs and they're described using graphs with lots of boxes and arrows and they have special boxes and arrows that mean certain things about objects and what's a call and what's inherited and all that, that's UML. We don't use a lot of UML in embedded systems. Our field isn't really known for embracing either UML or code generation. But I know that you are very passionate about both of those things.
Starting point is 00:44:58 What kind of response, how do you convince people? I think that it's very unfortunate that we don't use it as much as possible. On the other hand, I absolutely understand that UML became so big and so complicated that it stopped adding too much value for what it cost to understand all this stuff. So I think that we shouldn't be throwing out the baby with the bathwater completely forgetting about UML. But rather we should take pieces of this that are relevant to our work. And one of those is those state machines that are now part of the UML. I'm not saying that all other parts are equally valuable.
Starting point is 00:45:48 I don't use most of the UML, in fact. I typically use the state chart diagram, and I like sequence diagrams. And we probably all use them, perhaps not even knowing that they are also part of the UML and standardized. You do have a nice and short PDF about object-oriented programming in C, which secretly includes a refresher on UML that I found very useful.
Starting point is 00:46:16 Thank you. I mean, the object-oriented part, if you haven't done object-oriented programming in C and you wonder what that is, Miro's PDF, which will be in the show notes, definitely gives you the overview, the good parts, and it shows you a little bit about the parts people hate, which is the function pointers. But personally, it was having a place where all of the UML that I would actually use in one spot was kind of cool. But you also have a book that is UML-based, and it's UML and State Charts. What's the title of the book?
Starting point is 00:46:53 Yeah, so there were two editions of the book. The first one was Practical State Charts in CNC++, and then in the second edition, we just added practical UML state charts in CNC++. So this second edition was published in 2008. Okay. So that will, of course, be in the show notes. And what was it like writing that book? Well, you know, I learned a lot from the first edition. From my first edition, I had the wonderful editor, Robert Ward.
Starting point is 00:47:29 We actually became friends. He visited me a couple of times. And he constantly corrected my non-idiomatic use of the English language. Obviously, that's a second or third language for me. But I learned from him tons of things. He was the former editor of CC++ User's Journal. He actually created this magazine. So I had very, very good experience writing the first edition. Second edition, it's much bigger. I decided to put all the source code there and explain it. Pretty much like Jean Lepreuse explains in his MicroCOS book.
Starting point is 00:48:10 I still admire his books. So I wanted to emulate him in this respect. So the second book is much bigger. But I started every chapter with a short introduction and explanation what kind of problem I'm trying to address and solve with this so even if you just read the intro the first few paragraphs of each chapter you will kind of have an overview what it is about and if you want to go deeper you obviously can can read the rest or the code so did you so i'm looking at doing a second edition for my book, and I have been writing the code that probably should have gone with the book to start with, but I didn't want to put it in the book.
Starting point is 00:48:51 I just wanted to link to it. What made you decide to actually put the code in? I wanted to explain the code. Because people just, as you said, there are not so many frameworks out there. And I saw my mission with this book, similar to Jean Labrosse's mission, that he explained the art for the first time in this public way. So I was kind of thinking that I'm explaining the object-oriented active object framework and state machines to embed folks, engineers like me. So that's why I put a source code there too. Okay.
Starting point is 00:49:32 I want to make sure we get to some listener questions. As I said, you've been requested as a guest a few times. So let's see. One from Andre, who is famously, at least in my little world, opposed to flow charts of all varieties. He calls them flaw charts. Flaw charts. And, yes, flusses. But he likes your state charts more and wants to know how to annotate interrupts in the state charts oh wow
Starting point is 00:50:07 well i i'm not sure we don't have a whiteboard to go with this is it i don't understand this question in two ways first of all um there could be state state I mean, the interrupts can themselves be structured as state machines, right? So an interrupt is a run-to-completion one-shot, single-shot routine that has to run quickly and then return, obviously. So it is ideal, actually, to use a state machine there. And there are, of course, many types of interrupts. Some of them are for a single purpose. For instance, a timer expires. And then what you have is that you have only one kind of event, which drives the whole state machine in this case, which will be just
Starting point is 00:51:05 the expiration of the timeout, the timer interrupt. In this case, there is no special annotation I would use. It's just the mere fact of invoking the state machine drives it. So you have one type of event. I would call it a timeout, say. There are other interrupts that are kind of kitchen sink in which you have to go to the hardware and find out why this interrupt happened and then read some registers and figure out based on this information what actually happened.
Starting point is 00:51:34 And then at this point, you can create an event and dispatch it to the internal state machine associated with this interrupt. And again, in the diagram, you will see only those events that you dispatch. You will not see any special annotation. And the other way of looking at this question would be that interrupts produce events
Starting point is 00:51:56 that are then consumed at the task level by active objects. And then again, it will not be any special annotation. Actually, the purpose of interrupts is to produce events. So what they do is they create events, they write some information to them. For instance, ADC conversion just happened. And they read ADC value, put it in the event, and they post it asynchronously. And it will be later processed by the state machine. And so in the state machine you
Starting point is 00:52:25 will see adc convert event that's all so if i'm understanding correctly the interrupts themselves are a form of actor object and the event that it gets coming in it comes from the hardware it comes from the chip it's the actual event itself and the event that it could be putting out is a more global framework event of what it is, whether it is a timer that expired or a UART now has data in a buffer that it has been putting in. Right. So they're just objects like everything else it just happens that their message queue comes from the processor or hardware well you well certainly speaking um in for instance in the qp framework that i can talk about is uh they are not considered active
Starting point is 00:53:20 objects this these are just um mechanism to events. That's the purpose in life that interrupts have. They don't even have an event queue. That's why they cannot be considered active. So it is not possible to post an event to an interrupt. So they are just functions that mostly produce
Starting point is 00:53:39 interrupts. They can have associated state machines with them because sometimes that could be useful. But I typically don't even code them as complicated or big hierarchical state machines. I typically use just a switch statement just right away in the body of the interrupt. Yeah, because you don't want to put much in your interrupt. I mean... Yeah, absolutely for simplicity. So keep it simple and use the heavy tools only when needed. Yeah. The only interrupt I can think of as a good state machine was I2C.
Starting point is 00:54:17 If you have to deal with the I2C signaling mechanisms, whether or not you're in an ACK or a restart state, it's all very state machiny. But if you're just expiring a timer, just fire up an event. Cool. Yeah, exactly. So, not even big state machine, just post an event. All you do is that. And from Altronic, when does the state machine model break down or hit its limitations? For some background, he does a lot of wearables, and the state machine paradigm is pretty much perfect. He doesn't do embedded Linux, and his preference is bare metal, avoiding RTOS as much as possible to keep things simple. So, yeah. When does the state machine model break down?
Starting point is 00:55:09 Well, frankly, I have not experienced this breakdown yet. I mean, the biggest system I worked on was at GE Medical that we had over a million lines of code. And then in hindsight, I would say that this was an event-driven system. We didn't have problems with, for instance, race conditions because the designers were clever enough to give us only callback functions that we will have to then implement. And there was no concurrency hazards anymore in this. So this was a good thing but we suffered a lot from the um from the spaghetti code problem with tons of ifs and elses and and flags and variables and and and we we had some state
Starting point is 00:55:57 machines but they were not flexible enough and they certainly were not hierarchical. I think that this GE software, the biggest I worked with, could have been much improved if hierarchical state machines were used in it. And certainly I would not see any limitations of this method. I know that, for instance, ROOM method, real-time object-oriented modeling, that's what it stands for. Finally, sorry. Yes, it was used in huge telecommunication switches. And those things are huge. And state machines were with thousands and thousands of states.
Starting point is 00:56:38 And it still worked. So that's all I know about scalability. Okay, well, we will take that as a, it's pretty scalable. Let us know when it fails for you, Altronic. Right. I mentioned at the beginning of the show that you're doing a lot of videos, and I wanted to highlight lesson 10 in your videos where you look at stack overflows and you wrote about it on Embedded Gurus with more detail on how you identify a stack overflow. Can you tell us a little bit about how you think most people are finding them and how you suggest we debug them.
Starting point is 00:57:28 Well, in this lesson, just to very quickly tell the people who might not know, I show what a Stack Overflow can look like. And I specifically kind of arrange the situation that a return address was corrupted. And what ended up happening is that instead of returning correctly, the CPU went off and started to execute its own vector table. And so it plowed ahead through this data,
Starting point is 00:58:01 which in ARM processor are just simply addresses of event handlers, of interrupt handlers, and it plowed through this data as through code and then happened to reset itself and started to execute again. So in this particular case, I arranged it, obviously, to show how sneaky Stack Overflow can be because the CPU actually never stopped working. It was resetting itself hundreds of times per second. So obviously, all bets are off when stack overflow happens.
Starting point is 00:58:38 So it is very difficult. That's why it is difficult to detect it reliably and say that this and this would be the symptom because the symptoms can be so different. So random. It's so hard to reproduce. Completely blows your mind. It's not rational what's happening. Yeah.
Starting point is 00:58:58 So then I also was fascinated with Michael Barr's deposition in the Toyota case of unintended acceleration. And he describes that most likely his bet was that it was possible to cause this unintended acceleration by stack overflow, by overflowing the stack. And just right under the stack, there was this operating system data. And just one flip in this, one bit flipped in this data would actually make one task be forgotten by the scheduler. And this was obviously the most important throttle control task. So now I was thinking how, I always actually think that way.
Starting point is 00:59:49 Is it anything that couldn't have been done to begin with to avoid the situation? And then my first idea was maybe I can move the stack such that it will overflow and not corrupt the most valuable data that we had in the system. So where would I put the stack? And then I thought maybe putting the stack at the place at the beginning of RAM such that the stack will grow towards the beginning of RAM would help because then you will not destroy any valuable data. And so that's what I suggested in my blog post on on embedded gurus and so you put the stack in a place that if you overflow it causes a hardware or it causes a
Starting point is 01:00:36 an interrupt and therefore you get an error interrupt and if you ever are in that error interrupt you really should consider resetting or crying for help or something really important putting a break point because that's bad if you got there that's really really bad yeah well first of all all this has to be tested for a very particular processor i experimented with this with arm processor. And so I put the stack, you know, you need to know which way the stack grows on which processor. Typically, it is a descending stack. Most processors have the descending stack, meaning that the stack grows toward lower addresses. So when you put it at the beginning of RAM, okay, it grows. And so overflow
Starting point is 01:01:22 will breach the beginning of RAM and it will go into area that has no longer valid RAM. And now it depends on the processor what will happen in this case. ARM processor calls the hard fault exception, which you can implement. So instead of going into the unknown land and keeping resetting itself or something, or forgetting a task, which was much better than resetting the machine, by the way, you know about it. So you can do something about it. And so you stay in control
Starting point is 01:01:54 rather than losing the control completely. I mean, you still have the problem that something awful has gone wrong and you probably can't recover 100%. But at least you know it happened. And where. And maybe where. Or closer to where.
Starting point is 01:02:09 Closer to where. Because a lot of times with those kinds of problems, you've already moved so far past the initial cause that it's almost impossible to retrace your steps. But this requires, as you said, understanding how the chip works a little bit and being able to modify the linker file in a way that lets you do this linker files remain one of those things people are afraid of
Starting point is 01:02:34 with good reason you can shoot yourself in the foot there right do you think we're going to get better at that well it depends you know like, like the maker movement that you have mentioned, it solves this initial kind of problem of getting into the programming in the first place because they provide everything for you. But on the other hand, it does not teach people how to write the startup code, how to write the linker script, how to get going, what's the world even before the main function. So can we get better at this?
Starting point is 01:03:07 I don't know if makers can get better, but professional embedded software engineers certainly need to do it. And that's why I teach in my video course to show them the linker scripts and stack overflows and all of this. And there are so many of these, not really secrets, but almost secrets that make it so that you can be a much more effective debugger, a much more effective engineer from design all the way to debug and test. But it is sometimes not obvious. I mean,
Starting point is 01:03:50 that whole, okay, how does a stack overflow work? Then I need to know about the chip. Then I need to know about the linker file. And I need to know enough about my application to be able to do RAM. And then once I have an RTOS, I can't use any of this because now i have multiple stacks and it it's a lot of data that you can't throw at somebody who hasn't right stared at it for hours and hours yeah so you know to me this stack overflow is is actually just a small part a small part of a bigger thing. And that bigger thing is designed by contract formally, but really peppering your code with assertions,
Starting point is 01:04:33 which I highly recommend. So to me, placing the stack at the right place in RAM so that when it overflows, you have some hardware fault is just an example of hardware-assisted assertion. So what happens is that you get a hardware fault, and it is catastrophic. You cannot continue from there, right? So you need to design your recovery strategy to kind of control the damage. And very often, it turns out that the best course of action is just to quickly reset the system.
Starting point is 01:05:10 But I would just recommend that instead even to look into all those hardware gotchas of how to get the hardware to behave correctly, you pepper your code with enough assertions. And I don't know how it happens, but when the assertion density is high enough, almost all failures manifest themselves as assertions. So you no longer have hard faults or something like that. You'll end up hitting an assertion, and this is the best course of action because assertion allows you to remain in control and at least do the damage control.
Starting point is 01:05:50 So that's why, for instance, QP frameworks have the right density of assertions. I know that NASA JPL coding standard recommends very highly to have, I don't know how many, a few assertions per function they recommend so that they maintain their high density of assertions. And I also recommend highly to keep assertions enabled, especially in production code, especially in safety critical code like medical devices. It is controversial. Many people will turn the assertions off, but I think it's a mistake.
Starting point is 01:06:32 Yeah, that's the common thing is that we have the assertions for development and then we take them out. And of course, our code size goes down and we go faster because we're not doing all these checks. But yeah, you're not doing all those checks either. So you have to be utterly convinced that you're never going to hit them in production. I am amazed when people will implement an MPU, which is Memory Protection Unit, which costs a lot, say, cycles when you use an RTOS. We have to set up the MPU every time and so on.
Starting point is 01:07:00 So they will pay for an RTOS that will do it, but they will not ship their code with the assertions that they can put in their code. That doesn't make sense to me, because an MPU is a form of assertion. So they like those assertions, but they don't like other assertions. They should be using everything that they can, I think.
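For what using the MPU as an assertion can look like, here is a hedged sketch for an ARMv7-M (Cortex-M3/M4 class) part using the standard CMSIS register names. The guard address, region number, and 32-byte size are assumptions made up for the example, not values from any real product.

    /* Assumes an ARMv7-M part with its CMSIS device header included. A tiny
     * no-access guard region sits just past the end of the stack, so an overflow
     * faults immediately instead of silently corrupting memory.                   */
    #define STACK_GUARD_ADDR  0x20000000u   /* hypothetical, 32-byte-aligned address */

    void stack_guard_init(void) {
        MPU->RNR  = 0u;                      /* configure MPU region 0                   */
        MPU->RBAR = STACK_GUARD_ADDR;        /* base address of the guard region         */
        MPU->RASR = (0u << 24)               /* AP = 000: no access, not even privileged */
                  | (4u << 1)                /* SIZE = 4: region of 2^(4+1) = 32 bytes   */
                  | 1u;                      /* enable this region                       */
        MPU->CTRL = MPU_CTRL_PRIVDEFENA_Msk  /* keep the default memory map elsewhere    */
                  | MPU_CTRL_ENABLE_Msk;     /* turn the MPU on                          */
        __DSB();                             /* make sure the settings take effect       */
        __ISB();                             /* before any further instructions run      */
    }
    /* Without the MemManage exception enabled, a hit on the guard escalates to a
     * hard fault, which is the same hardware-assisted assertion discussed above.  */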
Starting point is 01:07:26 So let's talk about assertions for just a sec. When I do assertions in code that doesn't have an RTOS, the assert is often #defined to be a breakpoint and then a debug log for whatever the error was. The debug log might go out to a serial port. It might get written to flash. It depends on the system. It just needs to be recorded.
Starting point is 01:07:53 Is that the sort of assert you're talking about? I mean, I'm not handling anything there. That's exactly the assertion I'm talking about, except that I'm going one step further. And I'm saying that in the release code, you should have the same assertion still enabled, but the assertion handler, or the routine that gets called when an assertion fires, should be very, very carefully designed and tested. To do what? So that it will do the damage control and react correctly to put the system in a safe state, whatever that means. Yeah, whatever that means.
Starting point is 01:08:33 Yeah. For instance, I once worked on an insulin pump, and in that case, when an assertion fired, we could not keep pumping insulin, obviously, because an overdose is not good. But we also could not just silently stop pumping, obviously, because a patient who does not get insulin will get sick too. So what we ended up doing is we stopped the device, and we started the buzzer and vibrator to alert the user that the device is no longer reliable. And this was the best course of action in that particular case. But this was carefully tested.
Starting point is 01:09:14 This was tested under stack-overflow conditions, for instance. So before calling this routine, we would carefully reset the stack to a reasonable value and so on. It was very carefully tested under many conditions. And I think that it was the best we could do.
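Here is a hedged sketch of that recovery pattern. All of the names (pump_stop, buzzer_on, vibrator_on, __stack_top__) are hypothetical, not the actual pump's code, and in production the stack-pointer reset is often done in a few lines of assembly, but the shape is the same: give yourself a usable stack, then run the carefully tested fail-safe routine.

    #include <stdint.h>

    extern uint32_t __stack_top__;   /* hypothetical linker-script symbol: top of the stack */
    extern void pump_stop(void);     /* hypothetical: stop delivering insulin               */
    extern void buzzer_on(void);     /* hypothetical: alert the user                        */
    extern void vibrator_on(void);   /* hypothetical                                        */

    static void fail_safe(void) {
        pump_stop();
        buzzer_on();
        vibrator_on();
        for (;;) {                   /* stay here: the device is no longer reliable */
        }
    }

    void HardFault_Handler(void) {
        /* The old stack may be the very thing that just overflowed, so give
         * ourselves a known-good stack before calling anything else. The
         * fail-safe routine itself was designed and tested for this case.     */
        __set_MSP((uint32_t)&__stack_top__);   /* CMSIS intrinsic */
        fail_safe();
    }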
Starting point is 01:09:44 And you do have to make these asserts for when things are very bad, not, oh, I got an odd error or I got an odd message and I just want to send it out to the debug log. I don't. Asserts are for critically bad, not kind of bad. Oh yeah, that's a very good point. I actually wrote a blog post about it, or an article. So you need to distinguish between exceptional conditions and bugs. Assertions are for bugs, something that should never happen in the code. And if you have conditions in the code that are off the main path, but can happen, can legitimately happen in your system, then you obviously have to design code to handle those. Even though such code typically is even bigger than the main use case of your device. Very often those exceptions take more code than the rest,
Starting point is 01:10:30 but still you need to design and implement and test this code. So yes, absolutely, you have to think very carefully and be able to distinguish whether the situation is a bug that should never happen in the system, or an exceptional condition that you need to handle. And the bugs you assert. And for the exceptions, you do not use an assertion. Yeah, okay.
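A short sketch of that distinction in C, with hypothetical helper functions: a NULL internal pointer can only be a bug, so it is asserted; a corrupted packet from the outside world is a legitimate run-time condition, so it is handled.

    #include <stdbool.h>
    #include <stdint.h>

    void assert_failed(char const *module, int line);
    #define ASSERT(cond_)  ((cond_) ? (void)0 : assert_failed(__FILE__, __LINE__))

    extern bool crc_ok(uint8_t const *pkt, uint32_t len);           /* hypothetical */
    extern void request_retransmit(void);                           /* hypothetical */
    extern void process_payload(uint8_t const *pkt, uint32_t len);  /* hypothetical */

    bool handle_packet(uint8_t const *pkt, uint32_t len) {
        ASSERT(pkt != (void *)0);   /* a bug: must never happen in a correct system      */
        if (!crc_ok(pkt, len)) {    /* an exception: noise on the wire legitimately      */
            request_retransmit();   /* happens, so design, implement, and test this path */
            return false;
        }
        process_payload(pkt, len);
        return true;
    }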
Starting point is 01:10:52 Excellent. So I want to go back to how do people learn these things? How do you learn about asserts? How do you learn about handling a stack overflow with a hardware interrupt? And this all goes back to the question you asked us before we started the show, which is, what do you read? How do you stay up to date? Or even learn from the beginning. Right. So how do you?
Starting point is 01:11:25 No, no, I was asking you. I was sidestepping the question entirely. Well, you know, I am still a little bit old school, meaning that I still lived in the good times when every month we had Embedded Systems Programming magazine, full of excellent articles. I was so looking forward to every installment of it. We had Dr. Dobb's.
Starting point is 01:11:51 We had the C/C++ Users Journal. I don't know if you remember all those. Oh, yeah. And so I learned a lot from reading these. I also went to conferences. The Embedded Systems Conference was a good source for me. But then, you know, with the advent of the internet, things started to fade away, and I don't see that many resources out there right now. We have blogs, but blogs don't replace
Starting point is 01:12:21 in-depth articles, in my view. Tell that to Andre, who's on part number 38 of Welcome to Embedded Systems. And I think SPAC is on 28 or something. But yes, it is hard to find... Most blogs aren't long multi-part series. Yeah. I mean, they're taking big magazine kinds of series and turning them into blogs. Yeah. And they're chopping them up into like three blog posts per article, really.
Starting point is 01:12:51 Right. And there are a couple of other blogs that I like that can be in-depth. But I agree, I do miss the articles, because they were edited and they were a little better thought out, sometimes with a plan. Yeah, very often with source code, and you could experiment with them. And yeah, I mean, this was an excellent resource. But there are other resources. There are the blogs. There are a number of YouTube channels. I know you have one. Philip Koopman, who will be on the show next year, which is really
Starting point is 01:13:36 soon, has a nice video series. And Contextual Electronics. I like his book, Better Embedded System Software, something like this. I enjoy his book immensely. So there are still resources, but so many of the resources are also branded. I mean, I would probably look more through some of the forums, except I don't want to look through the Nordic forum separate from the ST forum, separate from blah, blah, blah.
Starting point is 01:14:07 Half the information is the same and the other half is so chip-specific I don't care. So, yeah, I don't have any great resources right now. Of course, one reason to do the podcast is to con people exactly like you into coming in and answering all of my questions. There was that one we did about internationalization where I had a really high-powered expert come in and I just asked him all of my internationalization questions for the project I was working on. That was perfect. But unless you have your own podcast, I'm not sure how you get people to tell you all these things.
Starting point is 01:14:48 Yeah. You know, a similar resource to yours, there is Software Engineering Radio, or something like this, on the internet. But this is general-purpose computing, so this is not embedded specific. But they had some very interesting podcasts as well. I'll have to look at that. I guess that's all I have. I mean, I actually have a lot more questions and a lot more in the outline,
Starting point is 01:15:10 but I think we're about out of time. Okay, thank you. I don't have much in the way of closing thoughts, except maybe this: the superloop and RTOSes are not the only game in town. There are some other options, such as event-driven frameworks, active objects, and state machines. And some systems can certainly benefit from them. I'm not saying that they are universally going to replace the superloop or the RTOS any day now, but some systems, such as safety-critical medical devices
Starting point is 01:15:48 and some other systems that actually need to demonstrate design and traceability, they can certainly benefit. And, you know, I've been doing this for the last maybe 15 years, working with these concepts. And I will tell you that I would never go back to the old days. And for me, programming that way is much more fun, because I no longer struggle with spaghetti code, with race conditions and all of those things.
Starting point is 01:16:15 I just program at a higher level of abstraction. And those abstractions are actually high enough, and the right abstractions, so that I can use modeling. For instance, I can use state machines from the UML specification. One of the takeaways from this conversation that I have is that it really helps to have a design paradigm that forces you to think. Because if you go straight into sequential programming,
Starting point is 01:16:42 you just open your editor and you start typing. That's the way a lot of people program. And just hack it out until it works, and then you end up with spaghetti code, but it works, and then you find what's wrong. If you do something like state machines or event-driven programming, you can't do that. You can't just sit down and start hacking away.
Starting point is 01:17:02 You have to at least write something up on a piece of paper and say, okay, here are my states, here are my events, and here are the handlers. You're forced to plan, and I think that's really valuable, because planning gets lost a lot of times when we're trying to work quickly, and it's nice to have something that kind of forces you to do it. There's a benefit to programming this way and using these frameworks and these paradigms in and of themselves, but the fact that they force you to think about what you're doing before you do it, I think, is a major side benefit.
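As a tiny illustration of that up-front planning, here is a plain switch-based state machine in C. The states, events, and transitions are invented for the example and have nothing to do with any particular product or with the QP framework; the point is only that the table of states and events exists before any other code does.

    /* The whole design fits on a piece of paper: three states, three events,
     * and an explicit decision for every pair that matters.                   */
    typedef enum { STATE_IDLE, STATE_RUNNING, STATE_PAUSED } State;
    typedef enum { EVT_START, EVT_PAUSE, EVT_STOP } Event;

    static State state = STATE_IDLE;

    void dispatch(Event e) {
        switch (state) {
        case STATE_IDLE:
            if (e == EVT_START)     { state = STATE_RUNNING; }
            break;
        case STATE_RUNNING:
            if (e == EVT_PAUSE)     { state = STATE_PAUSED; }
            else if (e == EVT_STOP) { state = STATE_IDLE; }
            break;
        case STATE_PAUSED:
            if (e == EVT_START)     { state = STATE_RUNNING; }
            else if (e == EVT_STOP) { state = STATE_IDLE; }
            break;
        }
    }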
Starting point is 01:17:31 Right, and it also changes the way you think about those problems entirely. So instead of thinking, you know, about which primitives, which semaphores, which event flags I would be using, you think in terms of different things: which events would I produce, what kind of active objects do I need. You think about how to best encapsulate your resources so that you don't need to share them.
Starting point is 01:18:05 And then inside each active object, you have the prescribed recipe for how to do this: you have a state machine. It will be a state machine, so you know right away what will happen. And then you try to structure your state machine the best way you can, and that is another topic for another big conversation, how best to structure state machines, because, yeah, many people struggle with that. But this is actually a different way of thinking about the problem.
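Here is a sketch of the active-object shape being described, with illustrative names rather than the QP API: each object owns a private event queue and a state machine, producers only post events to it, and the framework (or a thin RTOS task per object) runs the event loop.

    #include <stdint.h>

    typedef struct { uint16_t sig; uint16_t param; } Evt;

    typedef struct ActiveObject ActiveObject;
    struct ActiveObject {
        Evt     queue[8];                                   /* private event queue        */
        uint8_t head;
        uint8_t tail;
        void  (*dispatch)(ActiveObject *me, Evt const *e);  /* the object's state machine */
    };

    extern void evt_post(ActiveObject *me, Evt const *e);   /* hypothetical: used by producers    */
    extern Evt  evt_get_blocking(ActiveObject *me);         /* hypothetical: waits for next event */

    /* The framework, or a thin RTOS task per object, runs this loop. The application
     * code never blocks or polls on its own; it only reacts, one run-to-completion
     * step at a time, inside its dispatch function.                                  */
    void active_object_run(ActiveObject *me) {
        for (;;) {
            Evt e = evt_get_blocking(me);
            me->dispatch(me, &e);
        }
    }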
Starting point is 01:18:34 Well, and as you and Chris summed up, I'm sitting here thinking this would be a lot easier to test. You have events coming in, you have a small number of things you need to do, you have an algorithm, you have events going out. That makes it... Not just easier to test, but to log. Also, all of those, all of the above.
Starting point is 01:18:50 I mean, for testing, Elecia is absolutely right, because the active objects are natural units for testing, because they don't share anything. So they don't have any dependencies. In unit testing, the most difficult part
Starting point is 01:19:01 is how to handle the dependencies. You pull out a piece of code and it has hundreds of unsatisfied references to things. So it is much easier, because you don't share anything. So this is the testing part. And the logging is, again, because the framework inverts the control. So the framework is in control, and the framework knows everything that happens in the application, not just what the RTOS knows, like which context was switched in and which semaphore was used. The framework knows that too, but the framework also knows which events were posted, what states have been visited, what transitions happened. And this information was not available from the RTOS. The QP frameworks are instrumented that way, so you can get all of this information if you want to.
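A sketch of what such a unit test might look like, using an entirely hypothetical test harness and reusing the illustrative ActiveObject type from the earlier sketch: events go in, the object runs to completion, and the test checks the events that came out, with no mocks for shared data because there is no shared data.

    /* Reuses the illustrative ActiveObject type from the earlier sketch. */
    extern ActiveObject *blinky_create(void);                  /* hypothetical object under test      */
    extern void test_post(ActiveObject *ao, uint16_t sig);     /* hypothetical: queue an event for it */
    extern void test_run_to_completion(ActiveObject *ao);      /* hypothetical: drain its event queue */
    extern uint16_t test_last_posted_signal(void);             /* hypothetical: captured output event */
    extern void test_fail(char const *file, int line);         /* hypothetical                        */
    #define TEST_ASSERT(cond_) ((cond_) ? (void)0 : test_fail(__FILE__, __LINE__))

    enum { EVT_TIMEOUT = 1, EVT_LED_ON = 2 };                  /* illustrative signals */

    void test_blinky_turns_led_on(void) {
        ActiveObject *blinky = blinky_create();
        test_post(blinky, EVT_TIMEOUT);                        /* event in...                      */
        test_run_to_completion(blinky);                        /* ...one run-to-completion step... */
        TEST_ASSERT(test_last_posted_signal() == EVT_LED_ON);  /* ...and check what came out       */
    }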
Starting point is 01:19:33 This does seem like a good place to stop, because I think that is a great summary of why this is worth looking at. Our guest has been Miro Samek. He is the founder of Quantum Leaps, the creator of the open source QP active object framework, and author of Practical UML Statecharts in C/C++:
Starting point is 01:20:17 Event-Driven Programming for Embedded Systems. Thank you for being with us, Miro. Thank you for having me. And I would like to send a special thank you out to Altronic and Peter Nye, who both requested recently that we have Miro on. I'd also like to thank Christopher for producing and co-hosting. And of course, thank you for listening and for supporting us on Patreon. It's pretty cool. We're very happy. If you would like to read the blog, contact us, and/or subscribe to the YouTube channel, go to http://embedded.fm.
Starting point is 01:20:54 You can do it all there. Why did you spell that out? I just, I feel like people maybe don't, I don't know. Search for embedded.fm. Search for Embedded Podcast. Search for however you'd like to spell my name and embedded and it'll work out. You'll find it. A final thought from Stephen Fry, whose QI TV show I am now in love with.
Starting point is 01:21:18 We are not nouns. We are verbs. I'm not a thing, an actor, a writer. I am a person who does things. I write, I act, and I never know what I'm going to do next. I think you can be imprisoned if you think of yourself as a noun. Embedded is an independently produced radio show that focuses on the many aspects of engineering. It is a production of Logical Elegance, an embedded software consulting company in California. If there are advertisements in the show, we did not put them there and do not
Starting point is 01:21:51 receive money from them. At this time, our sponsors are Logical Elegance and listeners like you.
