Embedded - 364: All the Abstractions
Episode Date: March 5, 2021

Jacob Beningo spoke with us about embedded systems, conference talks, writing articles and books, and best practices in development. Jacob is a consultant and instructor; see his website for more details (beningo.com).

Jacob is one of the organizers of the Embedded Online Conference, May 18, 19, and 20. Session times are generally noted in Eastern Time (Americas). A coupon code for a discount on registration is in the show. Jacob will be giving a talk called Best Practices for RTOS Application Design. He likes the full visibility of tracing, using the Segger J-Trace with SystemView or Percepio.

Jacob has written three books:

MicroPython Projects: A do-it-yourself guide for embedded developers to build a range of applications using Python
Reusable Firmware Development: A Practical Approach to APIs, HALs and Drivers
API Standard for MCUs

He's also written many articles for Embedded.com as well as his own blog.

He recommends the IEEE Software Engineering Body of Knowledge (SWEBOK). The SWEBOK is a free download from IEEE, which covers the best practices that engineers should be following when they develop software, along with processes and strategies. Jacob also recommends Renesas' Synergy Software Quality Handbook, which describes the processes that they used to develop and validate their software.
Transcript
Welcome to Embedded.
I am Elecia White alongside Christopher White.
This week, we are going to talk about embedded systems with Jacob Beningo.
Hey, Jacob. Welcome.
Hey, how's it going?
Good. Could you tell us about yourself as though you were keynoting the Embedded Systems Conference?
Sure.
Yeah.
So my name is Jacob Beningo.
I'm an embedded software consultant.
I've been developing embedded systems for the last 20 years professionally, although
I'm one of those software nerds who started developing software when they were 14 with
FIRST Robotics and got started with PIC16F84s and BASIC Stamp 2 controllers.
You know, but I get the opportunity to work with a lot of cool people in a lot of different
countries and around the world developing, you know, anything from simple consumer electronics,
widgets, all the way through kind of my specialty, which is flight software for small satellites.
All right. I have so many questions.
But first, we have lightning round where we ask you short questions.
We want short answers. Are you ready?
Yes.
All right. Preferred embedded programming language?
C.
What about non-embedded?
Non-embedded Python.
Oh, I see. That counts as your question.
Favorite RTOS?
ThreadX.
What's the most important thing to teach software engineers who want to be embedded software engineers?
How to design and architect their software.
I'd say that goes for everyone.
It's pretty generic, but I find embedded guys tend to skip that step a lot, although I'm sure we all do.
Do you have a bug that you'll always remember
or hope that you could forget?
Not specifically.
Okay.
Yeah.
Do you have a least favorite peripheral communication bus?
I'm not the biggest fan of CAN,
even though it's very useful.
Yeah.
See our last episode.
Is there an interview question you like to ask embedded engineers?
Not one right off the top of my head, specifically.
Do you have a tip everyone should know?
That they should use tracing technology when they're designing,
when they're implementing their application code.
Favorite fictional robot?
Johnny Five.
That one's pretty popular.
That may be the modal answer, yeah.
Yeah, probably one of the ones that got me down the engineering path as a child. Okay, so tracing technology.
I use an in-circuit programmer, a JTAG or whatever we're going to call it this week,
even though neither one of those technologies is still what we use, to program my Cortex processor. And I can step through it,
which is such an improvement over things like Arduino or Mbed.
But tracing is different.
Tracing is an order of magnitude different.
Could you tell us about it?
Yeah.
So the idea here with tracing is that, you know,
so being able to set our breakpoints and kind of look in and see
what the registers are doing is, you know, fantastic. Being able to step through and get
an idea of what's going on. The thing that's great about tracing is that there's different
modes, obviously. You can do kind of snapshot where you get a little picture of what's happening,
but you can also do kind of streaming trace where you're constantly acquiring data. But the idea is
that as your system is creating events, so for example, like when you have a context switch
to start running a new task,
and then you context switch out of it,
an event gets recorded in the little RAM buffer
that then gets pushed back over the JTAG interface
or the programmer,
that then a program on your PC can take
and rebuild what's happening in the actual application.
So what can happen is, as developers, we typically are, we can step through and we can get little pictures of what's going on.
But the tracing technology allows us to see kind of like an oscilloscope.
It allows us to see everything that's happening in our system potentially.
So I can see when tasks are starting, the stopping, when semaphores are being given, when they're being taken.
I can create custom messages to see when LEDs are turning on or off or see the states of individual state machines that I've created in my code.
So it really allows us to visualize what's happening in the software versus kind of, you know, when I first started developing where, yeah, you could maybe toggle some LEDs, but you really had to spend a lot of time instrumenting your code to see what was happening.
And most of the time, it was kind of cross your fingers and guess, and you go through a process for days trying to
guess and figure out what exactly the system is doing. Whereas tracing technology today really
allows us to see what's happening very quickly. And hopefully, you know, one, see things like
performance, you know, make improvements. And, you know, probably the big thing is chase down bugs as
well. So, but it requires a special tool. So you can do it with just a standard onboard debugger on a lot of these development boards.
Really?
So you can push trace information out of a simple serial port if you wanted to.
You can, depending on the recording library that you're using, there are several different companies that create them.
But you can, simple serial interface, you can do it over USB, you can do it over Wi-Fi. I typically push mine over my J-Link tool that I have. I usually set
the library up to use SEGGER RTT, which is a real-time transfer protocol. And then that allows
the JTAG to very efficiently transfer the event data so that it has minimum impact on the real-time
performance of the application.
Because when you instrument things, you want to minimize the impact because every little extra line of code you put in there could change the performance and the time of the
system, which could then maybe make a race condition that exists not exist for a short
period of time or something like that.
So it's always, you know, you want your tools to have minimal impact on the execution of
your application code.
I certainly have compiled for release, getting rid of all of those printfs and realized,
oh, there's a timing bug. Yes, yes.
Yeah, absolutely.
But there is special hardware you can buy. I think if you want to keep up with huge quantities of data, like every memory access or every instruction, then you do need something more beefy.
Yeah, the JLinks, there are different levels of them.
I remember at Fitbit, we used something from Lauterbach that was a very expensive box.
Because it's basically a giant buffer, a very fast buffer, usually an FPGA in there as well.
Yes, yeah, exactly.
So you can certainly, you know, if you want to be able to do instruction tracing and really get a ton of information out, then certainly you have to spend the money on a more professional tool that can handle that. So there's certainly, you know, debuggers that have, you know, the name trace in them that are designed to work with the Embedded Trace Macrocell, often shown as the ETM, that
can actually grab that individual instruction tracing and rebuild it. And obviously you need
very fast signals and interfaces to be able to get that kind of information.
So what software do you like to use for trace? I remember I've only really experienced it with the Lauterbach and its software was difficult to use.
Complex.
It was one of the harder things I've done in embedded systems in a long time is figuring out how to get meaningful data out of that thing. Because there was a fire hose of data.
Yeah, absolutely. Yeah, so there's typically two tools
that I generally use.
The one I predominantly use
is Percepio's Tracealyzer tool.
That's one that interfaces really well
with ThreadX and FreeRTOS
and several other real-time operating systems.
And I've had a pretty good experience with it.
I might be slightly biased though as well
just to kind of put it out there
because I have done webinars and work with them. So just to kind of put that out there. But I've also used
SEGGER SystemView, which is kind of just a free utility tool that comes with the J-Link series
of, you know, programmers. And that one can still get you some basic information out of it, but it
doesn't have as much of the reporting types of capabilities.
How does this relate to satellites?
I mean, you can't exactly go up into space and touch the satellite and ask it for a trace for what went wrong.
Well, theoretically, you could.
But if you were to store it on board and then try to pull down if something went wrong, you could pull those trace buffers down or store them in a file.
But the way I use it most of the time is actually during development
to get the understanding of how the system's performing.
I want to make sure that if I'm designing a real-time system,
you want to make sure you have good response times.
You want to make sure that the tasks are executing when you think they are.
Sometimes you have very complex timing, especially if you have a guidance and control algorithm that
needs to run and you're reading sensors and you have to drive a propulsion system. And then
there's all these things going on and you want to make sure that they respond within a reasonable
period of time. What I'll do is I'll use it on the bench to measure the performance of the software and fine-tune priorities or the timing of tasks or watch for periods of time where maybe you might have too much CPU time being used.
Suddenly, you hit 100% for too long or things like that.
Or maybe you have a bug.
And at least on the bench, it allows you to kind of dive in and see what's happening with the system.
And I usually will trace through development,
and with each release,
I'll actually kind of say,
okay, here's the gold standard
of what the trace showed
while I'm performing this particular operation.
And then if there's ever an issue in the future,
you can always go back to the Git repository
and pull, okay, how did this baseline trace look?
And what might have happened?
It helps you kind of track back through time how your software is evolving and how the performance is changing.
And sometimes you'll have a bug you didn't discover, and then you go back a couple and you're like, oh, yep, here it is.
It is in the trace.
It's just we didn't see it then.
Or you'll discover you go back somewhere in between these two versions, one's working good, one's working bad.
And that can kind of help you figure out where to, you know, start dividing and conquering to get the bug out of the system.
I found Trace to be very helpful with very low power systems, trying to figure out why you're still awake or figure out how to optimize so that you use the least amount of processor and go back to sleep as quick as possible.
Yes. Yep, that's a great way to, definitely a great way to use it.
The crazy thing we did was, I think we used it in conjunction with the MPU: we set basically every memory access to be an MPU fault.
And every time that happened,
we went into a fault handler
and then shipped some trace data out
so we could get a map of all memory accesses.
And then we used that to optimize
which memories we put things in.
So things that got accessed a lot got put in fast memory
and things that got accessed least got put off an external flash
or something like that based on the linker map.
But you can do all kinds of crazy things with it.
Oh, yeah.
I mean, it gives you everything your code did.
Yep, exactly.
Yeah, there's so many different uses for it.
It's unbelievable.
And the insight that it gives us, I mean,
I have a colleague who I think probably said it the clearest is that, you know, this trace
technology is really the oscilloscope for software developers, you know, at the end of the day. I
mean, you want to be as a hardware guy, you want to design hardware without an oscilloscope.
And that trace technology really kind of gives us software developers that kind of level tool
to see what's actually happening.
Do you have a preferred IDE? Neither of the tools you listed for trace are themselves IDEs.
Yeah, correct. Yeah. So a lot of times I use a lot of different standalone tools.
Some of the IDEs that I use, I probably gravitate a lot of times towards the Eclipse-based tools a lot. So, for example, if I'm working with an ST part, I'll use STM32CubeIDE.
If I'm using a Texas Instruments part, I'll use their Code Composer.
A lot of times that's because of client requirements or what specifically we're doing.
If I am doing something that's a little bit tighter or has more regulation and stuff like that behind it, I'll do something like IAR or Keil.
But a lot of times I'll find that even in some of the space systems where I work with small satellite stuff that I do, a lot of times people will still try to use open source tools as much as they can.
Yeah, that's my answer too.
I use whatever the client wants me to use.
Exactly.
A lot of cool ones out
there. I'll use Sublime Text, for example.
Sometimes if I'm outside of the IDE and I'm
doing command line stuff.
But yeah, it's
whatever the client
says to use at the end of the day.
I am becoming fond of
Visual Studio, especially when I'm on a remote system.
Visual Studio Code.
Visual Studio Code.
Thank you.
Yes, no, that is the important part.
Because you can log in and pretend to be local,
and it's very cool.
What about RTOSes?
Yeah, so from RTOSes, I think my favorite one,
my favorite one was ThreadX.
And ThreadX obviously has been, you know,
Express Logic was purchased by Microsoft a couple years ago.
It's now become Azure RTOS.
And I think it's still one of my favorite ones.
They've done a pretty good job, at least,
of keeping the ThreadX piece still available and separate
if you don't need the cloud connectivity piece.
But then probably behind there,
obviously we all love FreeRTOS
and some of the other open source ones,
things like Zephyr and µC/OS.
How about you guys?
What's some of your favorite RTOSs?
Most of my
RTOSs creep in through other ways.
Like Nordic,
it's
really an RTOS even though it sort of says it's an API
and I've been doing a lot of
TI and again their
wireless system is
basically an RTOS and I go ahead and use the BIOS
that they've developed.
Oh, okay. Excellent.
I used a lot of ThreadX.
And the one that I haven't heard people talk about much anymore,
I don't know whether it's just not popular
or whether there's something wrong with it,
is I used Green Hills for a long time.
I had one client, and I kind of liked it.
It was a little different than most RTOSs,
but maybe it's just very expensive.
I don't remember the reasons why people don't tend to pick it up.
Yeah, I think you're right there. I think it's probably the price because I've got a few clients
who use it and usually they are mentioning, you know, price, price, you know, the cost. But,
you know, in all honesty, you get what you pay for. So, I mean, as much as we love our free tools
and our, you know, free RTOSs and things like that, there is a considerable difference if you look at the performance or the output of the code of an open source compiler compared to a commercial one.
So it's something that as developers, we definitely need to keep in mind.
But it does limit, the cost does limit our ability to experiment with some of the other ones for sure.
I thought FreeRTOS and RTEMS were both well enough supported and funded that they were approaching the point of real commercial viability for even systems that require stable RTOSs.
Yeah, I think they are. I mean, I think over the last couple of years,
FreeRTOS, there were always little annoying things with it.
But a lot of those have been kind of, I think, cleaned up.
I think it's gotten a lot more robust.
I've been in the process of updating
one of my RTOS courses and we use FreeRTOS.
And in my lab notes, I have,
oh, look at this quirky thing here and look at this quirky thing there.
Then going through and updating it,
I've been like, oh yeah, this whole section goes away.
This is not quirky anymore.
They've
definitely made a lot of improvements for sure
to a lot of these, which is
great. They're so widely
used.
Considering robustness
and things like that, that's always been some of my concern with some of these open source operating systems is just, you know, people go to them because they're free.
And they don't necessarily look at what's happening underneath the hood or looking at how robust they are and whether they really fit their applications.
They just kind of look at the cost and say, okay, that's what we're going with, which can be scary.
Free as in puppies, yes.
So you are giving a talk at a conference in May, Best Practices for RTOS Application Design.
Could you tell us a bit more? Yeah, absolutely. So yeah, this will be, I think, hopefully be a pretty fun talk.
One of the core areas I've been focusing in on the last several years has been on RTOS application design.
So one of my goals is to try to bring some of the best practices that I know and that maybe aren't as publicly available.
If you go out and read a lot of articles about real-time operating systems and things like
that, we get kind of that traditional, okay, this is what a mutex is, this is what a semaphore is,
and that kind of stuff. And those are always great. But people don't necessarily show,
if I have a data structure, how can I make sure that a developer is going to
be aware that it's a shared resource and that it's protected by a mutex, you know, in a gigantic, you know, code base?
Or how do I, you know, what's a good way to initialize all my tasks
in a way that's simple and reusable and scalable?
I see a lot of people, for example, go and, you know,
there's, you know, if you got 50 tasks in your thing,
which would be a lot, but, you know, so say 20,
you know, there's 20 calls to whatever the task create function is.
And there's little tricks you can use for creating configuration tables and things like that, that
I'm going to be hopefully, well, not hopefully, that I will be showing off at the conference. So
talking about how you can, you know, how do you create a diagnostic task properly?
Because some of these things have, you know, hooks and tendrils into a lot
of different tasks and areas of memory of the system and things like that. So I'm going to
try to share those best practices of how do you properly architect these things so that you don't
end up with, you know, race conditions and long response times and stuff like that. So
it, you know, it should be a lot of fun. And then of course, immediately after the talk, we're going to have kind of a Zoom session where we'll kind of all get together and people will be able to ask their questions.
And we'll have hopefully some good user interactions and things like that.
So a lot of them ask their own questions or share their own best practices, and when someone has expertise, it's great when they share that or, you know, corroborate that.
Yeah, you know, I've seen this works or, oh, you know, I've also seen this other thing.
Oh, maybe you haven't thought of that.
So, you know, that type of interaction is always a lot of fun.
And this conference, um, the embedded online conference, you're one of the organizers.
Yes. Yeah. So the embedded online conference, this is actually a conference that's been around,
uh, this is actually gonna be the fourth year. And I co-founded this conference with Stephane
Boucher, um, who is the founder of EmbeddedRelated and actually several embedded sites like DSPRelated and things like that.
And we kind of teamed up to say, hey, let's try to do some stuff online.
And the idea is to bring worldwide expertise and kind of bring it all in one place and make it an affordable way for people to access it.
You know, over the last several years and, you know, we started this before COVID.
So right now, no one can go somewhere and get access or very it's very limited.
And there's all the hoops and red tape to jump through.
But, you know, we, you know, we saw that we kind of saw an issue actually
with embedded software developers.
And that was, we were big attendees of Embedded World
and the Embedded Systems Conference and Sensors Expo
and the Embedded Technology Conference and stuff like that.
I attend a lot of these.
And we were kind of starting to see that
there was a transition of developers
not necessarily being able to get
to these types of physical conferences.
We're all under the gun to get the product out the door.
The managers don't want to let us leave the office
because there's some fire that needs to be put out.
There's a cost for travel
and the entry fees for the conference
and things like that.
So, and they're starting to view a little bit more of, yeah, there's a lot of stuff
that you can just go get for free on YouTube and stuff like that.
But it kind of turns out that you don't really get the same type of access to some of the
knowledge, real technical experts and the industry leaders on kind of where the industry
is going and those types of things, and even the latest and greatest techniques. So we kind of said, hey, you know, we can get a lot
of people, you know, we can get people from around the world to be able to kind of, you know,
participate in a conference, get access to, you know, cutting edge knowledge while, you know,
while they're still in the office. So if they get pulled away, that's okay. You know, it's not going
to cost a lot, you know, if someone has to pause or, you know, not participate for a couple of
hours or half a day, because they got to go somewhere else, they can always come back,
watch the recording, uh, and that sort of thing. So it's, um, you know, it's been something we've
been doing. It's been really exciting. It's been growing every year. And, um, you know,
to some degree, I almost don't even care as much about the growth as it is. We've been able to interact with a lot of engineers who are building really cool things and helping provide them with the knowledge that they need to be more successful.
You've been doing online conferences for four years.
Did you look around last year?
Has everybody scrambled to figure that out and say, ha, ha, ha, I've done that?
Yeah, it was kind of funny because all the
lockdowns happened in March and we already had our conference scheduled for May. So there were
a lot of people, I think, watching our conference to see how we did things. And then there was a lot
of scrambling of people moving from physical conferences to online conferences. I think there's been a lot of, um, there's, unfortunately, there's probably been a lot of
failures in doing it because they've tried to take your traditional trade show and just move
it online. And there's not a one-to-one relationship there of, you know, oh, I did it in a physical
space. We just moved to a digital space. There's other constraints when developers are in the office that I think a lot of people didn't take into account.
And some of these conferences just have so much content that there's no way people could actually consume the material.
So they've kind of gone overboard, which is why our Embedded Online Conference, we try to focus on getting industry leaders to participate. We try to limit the content.
You see a lot of conferences that, oh, we have 300 exhibitors and 300 sponsors.
We try to say, we're not trying to go that route. We're trying to get a couple
of core partners and people who have really cool widgets
that we can help and we think have good products that will help with the conference,
but we're really focusing on the
end-user experience and providing
them with the
knowledge that they need.
Then making sure that our partners also
aren't lost, I guess, in the
gigantic list of people
who are participating in some conference.
There's a balance there, I guess you could say.
So it seems like the discovery problem: when you're wandering around a trade floor, you can kind of see things you want to see, and it's a little harder if you're just presented with "search for vendor" or here's an undifferentiated list of logos and stuff.
Exactly, exactly.
I attended an online conference for,
it was a space systems conference
a couple of weeks ago, three weeks ago.
And it was one of those things,
they had a hundred exhibitors.
And I was very interested,
but it's just a webpage you kind of scroll through
with a bunch of logos.
And I wanted to interact, but I found it very easy to just be like,
oh, well, I'll get back to this later. It's been three weeks
and I still haven't done it yet.
Obviously, the people who exhibit and do all that
kind of stuff, they're a core piece to the people who have those
who are putting on conferences and stuff like that.
But, you know, I think some of them have had a disservice by there being so many people,
you know, so many people being involved in those types of things in the conferences.
And it's probably tricky for them, you know, to balance the, you know, physical versus
trying to do it digitally online.
Because who do you, you know, who do you say no to, you know, if someone wants to.
So, but, you know, you always want to make sure, you know, from their perspective, I think they want to provide value back to, um, you know, their partners and stuff like that. So, which is why we limit it for ours and try to focus then on the developers, because that's what it's really all about: making sure developers get what they need in order to continue to be successful.
So I admit, I do miss wandering the show floor. I would often, for the Embedded Systems Conference, arrange to meet people for lunch or coffee or whatever, people I only saw once a year. Um, and just wander and listen to their questions for the trade show folks and get to know some of the trade show folks. And that's one thing that I don't,
it's like going to the library versus buying things on Amazon. You don't really get to see
what's next to everybody. The thing I didn't know I needed.
But I really like that you're recreating the information part with the lectures that
give information and the questions that other people have that can be elucidating as well.
Thanks. Yeah, it's definitely, you know, I miss the show floor as well. I mean,
it's one of those things that's so hard, I think, to have a digital representation of.
And, you know, like you said, it's fun to wander the show floor. And, you know,
as you attend these conferences, you kind of develop relationships with,
you know, the people who are always there. And even the attendees, you know, like you said,
you can kind of even stop and ask questions and kind of hear
their questions. It's, it's a piece that's really, unfortunately,
really missing. And even in some of the, you know, right now,
especially during COVID, it's something that is, you know, sorely,
sorely missed at the moment. So especially being able to go and actually
physically grab some part and see how it's working and,
you know, that kind of thing. So.
Beg for samples.
Yeah, exactly. Walk away with a goodie bag full of development boards and things you can hook up
to your computer and go back to the office, download some software and, you know, make,
make cool things happen. It's definitely fun.
So you write software.
You mentioned setting up a class.
You organized this conference.
And you had a book come out last year.
Do you sleep?
Occasionally.
It's something that I've generally found gets in the way.
How do you find the time to do all of that?
So usually it comes down to, I mean, part of it is sometimes just not sleeping.
One thing with COVID, which has been interesting, is I've been sleeping seven hours a night, which I got to get out of the habit of doing. But, um, you know, a lot of it comes down to having a well organized schedule. Uh, so, you know, like on a, on a daily basis, like if I'm, if I'm in the process of working on a book, I will, you know, get up at 6am. And one of the first things I'll do is I'll write for an hour or hour and a half before the day even starts. Then the kids get up and, you know, the wife and all the fun stuff. And then you get all the, get to school, do all that, you know, work out, then get back to
the office and then kind of do the normal day stuff. You know, same thing with my blogging.
It's usually, you know, every day I'm writing. So even like, like when I'm not working on a book, what replaces the book time is regular blogs. So, you know, I'll be blogging on Mondays and Thursdays. You know, Monday I might do Design News, Thursday might be, um, you know, my own personal blog. Tuesday, Wednesday, Friday might be, you know, some little section for a book. And then, like I said, I'll have my normal work day where I'll teach a class, um, and then I will, you know, have some meetings and spend, you know, four hours writing some code or thinking through some new process or something like that of improvement.
So, but it's definitely a tight, very fast paced type of day.
Let me put it that way.
It requires a fair amount of dedication.
Yes. Yeah, definitely.
Because as you develop more and more content,
I'm sure you're well aware,
sometimes you get to a point where it's like,
okay, what do I write about today?
Or what's the next thing?
You know, I guess being as involved
in different projects,
it makes it a little bit easier.
But sometimes, you know, figuring out what the next thing is can sometimes be challenging.
I made the choice not to do the training classes.
I was invited a few times after my book came out to give the week-long teach everybody embedded style class. But those take a lot of preparation.
I mean, an insane amount of preparation. And I mean, I would rather code than teach, which is
my personal preference. Did you fall into it? Did you go after it? How did you get into teaching?
As part of just my interest in writing and sharing knowledge, when I develop code and do things, I discover dumb things that I've done or something.
And I'm like, oh, man, I can't believe I did that.
Well, that's a good thing I want to share with someone so that they don't run into that problem again.
And this is something that's gone back since, you know, to college days.
I mean, I think when I was a sophomore in college and undergrad, I started doing tutoring
for, you know, physics and, you know, just being able to help people, you know, avoid
the mistakes that I'm making.
I wanted that to be kind of part of my consulting.
So as I started, you know, almost immediately when I started saying, okay, I'm going to be a consultant,
I pretty much said, okay, I got to write and I want to teach classes to share knowledge.
So right from the beginning, that was something that I wanted to pursue.
I prefer the hands-on mentoring of a one-on-one relationship, but then you don't get to affect nearly as many people.
Yeah. Yeah. So it's kind of a mix, which is why with my consulting,
I have my courses, but I also do advisory, technical advisory work as well, where sometimes
I'm brought in to work one-on-one with people as well. And a lot of times the people in my classes,
I mean, I keep it pretty open.
And right now we do a lot of stuff online and I try to have office hours
or tell people if you run into issues,
we'll get on a Zoom session.
And so going back to the time-consuming piece,
it certainly can be very time-consuming,
but it's rewarding, I think, as well.
So the book that came out last year,
MicroPython Projects,
a do-it-yourself guide for embedded developers
to build a range of applications using Python.
That's a bit of a mouthful,
although MicroPython Projects
is probably what it's usually known as.
Why'd you write this one?
Yeah, so I wrote this one.
This is one where, honestly, I don't know that I would have written this one.
I actually had a publisher approach me and say, hey, would you write this MicroPython projects book?
And I had been doing a fair amount with MicroPython.
It was, uh, you know, it's, I think it's very interesting.
It's very intriguing.
I, to some degree, think it's one of those things that could potentially be the future for embedded developers.
Just because so many people know Python and the MicroPython organization has done such a great job of improving the robustness and really making it work really well.
And I thought, oh, this will be, this will be fun to do.
And, you know, generally I like to avoid doing hands-on books.
So I figured I should try one before I said I'll never do one again.
But it was, it was a lot of fun, but very time consuming to put it lightly. But it, you know, it is one of those things too,
where I took the approach of, you know, as a professional, does MicroPython really make sense?
And if I were to try to use it, you know, what do I need to know? And I tried to kind of spin
that into the book, um, probably a little bit to the publisher's chagrin, but, um, I think it ended up working well enough anyways in the book.
So it's not just like for hobbyists.
I mean, a professional who says, could I really leverage MicroPython?
I kind of point out what was useful, what was not, what to watch out for and those types of things.
Because MicroPython has found its way into lots of products. I mean, if you look at some of the, who is it?
I think it's Digi that uses MicroPython on their radios for you to interact with and drive it.
I've worked on a couple of space systems where it used MicroPython instead of an operating system to drive the whole thing. And, you know, it can be very useful because even someone who's not trained in
software can, you know, muddle the way through with Python because Python is so simple to use.
That can make it dangerous, but it makes it very useful.
MicroPython and Adafruit CircuitPython are pretty different. Have you tried them both?
I have played with them both. Yeah, it's, um, it's kind of interesting.
I've used CircuitPython probably a lot less, uh, partially because when I've used MicroPython, I'm going in and, um, I'm also partially modifying, like, the kernel and stuff like that for my own purposes.
But, um, so yeah, CircuitPython seems to be where they really lightened it up for, uh, for your maker, you know, for someone who's a maker and maybe isn't thinking of developing a product.
Whereas, you know, MicroPython is a little bit stricter from that sense.
But CircuitPython is a fork from MicroPython.
And if I recall, they actually kind of collaborate or coordinate, you know, some of the development together.
So it's really
interesting. Adafruit's just
done so many awesome things.
It's really a cool
company.
And what they've done with CircuitPython
I think is another fantastic thing.
And intending
something to be user-friendly,
it's a different goal.
MicroPython is more focused on being efficient.
Yes, absolutely.
The efficiency, that nice fine-level control where CircuitPython is trying to really create all the abstractions so that someone who doesn't maybe understand the underlying hardware and all of the software engineering side of it
can actually still make things happen.
So at least my understanding.
Feel free to let me know
if I'm going in the wrong direction for sure.
Makes sense to me.
But you had a different book before, less hands-on.
What was your previous book?
Yeah, so the previous one was Reusable Firmware Development,
A Practical Approach to APIs, HALs, and Drivers.
That sounds like the answer to many questions we get.
I'm going to start pointing to that book.
Yeah, it's one of those problems that just, I feel,
plagues us embedded software developers.
And so I kind of took a look at it from a,
you know, it can be applied to application code really easily,
but it can also be applied to, you know, driver development.
And, you know, within that, I tried to share my experiences
and show, you know, again, kind of how you use configuration tables
and how you can decouple your code
and, you know, how you can really create real nice layers and software.
And, you know, really try to show that reuse aspect.
And, um, you know, I think that was a fun one.
It was the first book I wrote.
It took a couple of years to pull that one together and, uh, create the balance of how you write and get something that's interesting versus, you know, working on development work. But, uh, lots of fun, let me put it that way.
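The configuration-table idea Jacob mentions can be sketched in a few lines. This is an illustrative desktop sketch, not code from the book: in real firmware the table would be a const array of structs in C, and `PIN_TABLE`, `gpio_init`, and the fake register writer are all invented names.

```python
# Hypothetical configuration table for GPIO initialization.
# The driver loops over the table instead of hard-coding each pin,
# so porting to a new board means editing data, not driver code.

PIN_TABLE = [
    # (pin, direction, initial_level)
    ("LED0",   "output", 0),
    ("LED1",   "output", 1),
    ("BUTTON", "input",  None),
]

def gpio_init(table, write_register):
    """Initialize every pin from the table.

    `write_register` abstracts the hardware access, which is what
    decouples the reusable init loop from any particular part.
    """
    for pin, direction, level in table:
        write_register(pin, "dir", direction)
        if level is not None:
            write_register(pin, "out", level)

# A fake register map so the sketch runs on a desktop:
registers = {}
gpio_init(PIN_TABLE, lambda pin, field, val: registers.__setitem__((pin, field), val))
```

The same loop then serves every project; only the table changes per product.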
I have a question that's maybe very open-ended,
but why do you think that embedded systems
has not got to the point
where normal application development has,
where there's an application framework
that everyone uses and knows about,
and you just write your business-specific, your product-specific code.
Why do we continually have to worry about SPI drivers and I2C drivers
and file systems and network drivers,
all this stuff that's mostly done for us now, which is better,
but it's sort of not completely done for us,
and it's still not completely abstracted away. I mean, when I'm writing software for an iPhone, I don't care
about the file system. I don't care about the wireless driver. I mean, the worst thing that
ever happens is Bluetooth and that's still pretty abstracted away. Yep, absolutely. Yeah, I think
it's, you know, I think there's probably several reasons why it's been a struggle for, I think, for, you know, kind of the embedded system, the microcontroller guys.
I kind of partially want to kind of fall back and say that it's partially because originally we just worked in such a resource-constrained space.
Right.
And I think a lot of the people in that resource-constrained space were also kind of traditionally hardware developers, right?
So they didn't have computer science backgrounds.
They didn't know how to create decoupled,
reusable frameworks and all that stuff.
They just knew how to go into the hardware registers,
flip some bits and make it work, you know?
And, you know, for years and years and decades,
you know, that kind of solution worked.
But, you know, we've kind of reached this interesting point
where the hardware has become so sophisticated
and so powerful.
And also the need of the users
has dramatically grown in complexity
and the features they want.
And for someone to start from scratch anymore
and kind of build all that up,
it doesn't make sense.
It's too costly, too consuming of time. And,
you know, you could bankrupt a business or, you know, have it go under before it ever even
had a chance of getting off the ground. And I think a lot of, you know, as we've looked at
kind of the adoption of ARM as a hardware platform, certainly there's other cores out
there, but it's probably the most prevalent. And then you have, you know, them kind of pushing on
CMSIS
and developing some of these layers
and trying to really push that to the industry.
Their ecosystem is starting to catch on
and adopt more and more of that.
So I mean, I kind of look at it as,
I was actually just having an argument about that on Twitter,
not argument, but kind of teasing back and forth this week
where someone was like,
the HAL drivers for STM32 parts are horrible.
And it was like, well, they might be,
but they work, right?
So do you want to spend all that time
rewriting all those drivers
or do you want to just work on your business logic
and be done with it?
So I think it's going to get there.
It's certainly not where it needs to be,
but I think another five, ten years.
And the microcontroller group,
embedded software developers,
we're going to be developing
kind of like your iPhone developers
and stuff like that.
So at least that's my prediction.
I don't know.
I had an interesting bug this week.
I have a system that has to do
over-the-air firmware updates via BLE.
And we have a SPI flash.
And the system worked.
I mean, eventually, you know.
Yeah.
But I got it to work.
And I got it to work securely.
And I was thrilled.
But I'm pretty good at testing. So I was testing two images going
back and forth with only the version as the change. And I noticed that I could definitely
always load the image once. And of course, I was confused. What if there wasn't just a version change? Maybe version one
had a different something than version two. But no, it was really just a version change.
And the problem was that I was getting a watchdog reset during the flash erase. And when I first programmed the code, it pre-erased all of the flash.
So my bug had to do with the physical reality of what was out there and the timing necessary to work with the physical reality.
That sort of thing is really hard to mock up.
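The failure mode here, a flash erase blocking long enough to trip the watchdog, can at least be modeled off-target. A toy sketch, with all class names and timing numbers invented for illustration:

```python
# Toy model of the bug: a multi-sector erase that blocks longer than
# the watchdog timeout. The fix modeled here is erasing sector by
# sector and feeding the watchdog in between. Numbers are invented.

WATCHDOG_TIMEOUT_MS = 100
SECTOR_ERASE_MS = 40

class Watchdog:
    def __init__(self):
        self.elapsed = 0
        self.fired = False

    def tick(self, ms):
        # Time passes without a feed; past the timeout, we reset.
        self.elapsed += ms
        if self.elapsed > WATCHDOG_TIMEOUT_MS:
            self.fired = True

    def feed(self):
        self.elapsed = 0

def erase(sectors, wdt, feed_between_sectors):
    for _ in range(sectors):
        wdt.tick(SECTOR_ERASE_MS)   # the erase blocks the CPU
        if feed_between_sectors:
            wdt.feed()

wdt = Watchdog()
erase(8, wdt, feed_between_sectors=False)
print(wdt.fired)  # True: one long uninterrupted erase trips the watchdog

wdt = Watchdog()
erase(8, wdt, feed_between_sectors=True)
print(wdt.fired)  # False: feeding between sectors keeps it alive
```

It is only a model of the timing, not of the flash part, but it captures why the first (pre-erased) load succeeded and later loads reset.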
Oh, absolutely.
Absolutely.
And that's where there's kind of the,
it's double-edged to where it feels like,
you know, the traditional bad software developer
is going away,
but there always has to be
some of us traditional people
who understand the hardware
and what is happening behind the scenes.
You know, kind of almost like when you think about a normal PC,
there are those guys who know how to go in and write that BIOS
and develop the stuff to make the motherboard actually boot up.
But after that, there's everybody else who can just put their business stuff on top.
But it's definitely an issue for sure.
I completely see your point for sure.
You actually kind of answered this, but I have a listener question from John, who's
a student who's been doing embedded systems for a while.
And John wanted to know if it's a good idea to use HAL libraries on microcontrollers.
If he's in a job interview, would it be bad to talk about a project using the STM32 GPIO, I2C, or SPI library?
And he was really worried about if he should be writing all the drivers from scratch so he knew how everything worked.
Yeah, so, I mean, what I would do, I mean, personally, if I was being interviewed, I would still show that yes, I'm using, you know, the STM32 HALs, for example.
They can do most of what you need to. You know, let me put it this way: they can do 80% of what the average developer needs to do.
So you can show in an interview
that, hey, look, I understand how to use these
HALs. Maybe you add an
example where you extend
the hardware abstraction layer and
add some custom feature to the driver or something
to kind of show
that you're not just using the API
but that you understand what's happening behind the scenes.
But I think for a lot of projects,
it's perfectly acceptable to use the HALs
that are provided by the silicon vendors.
I would just recommend that if you are using them,
you look at the code and understand their limitations, right?
Because there will be limitations like with anything.
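Jacob's suggestion, extending the HAL with a custom feature rather than rewriting it, can be sketched like this. Python here for readability; `VendorGpio` is a stand-in for a silicon vendor's driver, and every name is invented:

```python
# Hypothetical sketch of "extending the HAL": wrap the vendor driver
# behind your own thin layer, then add one custom feature (here, a
# toggle with a call counter) without touching vendor code.

class VendorGpio:
    """Stand-in for a silicon vendor's HAL GPIO driver."""
    def __init__(self):
        self.level = 0
    def write(self, level):
        self.level = level
    def read(self):
        return self.level

class Gpio:
    """Application-owned layer over the vendor HAL."""
    def __init__(self, hal):
        self._hal = hal
        self.toggle_count = 0   # custom feature the vendor HAL lacks
    def write(self, level):
        self._hal.write(level)
    def toggle(self):
        # Built from the vendor primitives, so it ports with the wrapper.
        self._hal.write(self._hal.read() ^ 1)
        self.toggle_count += 1

pin = Gpio(VendorGpio())
pin.toggle()
pin.toggle()
print(pin.toggle_count)  # 2
```

In an interview, the wrapper is the talking point: it shows you understand what is behind the API, while the vendor code still does the 80%.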
Yeah, they're not necessarily fast with the GPIO changes, STM.
Yeah, exactly. Yeah, if you look at what your end application is. And, you know, let's say it's taking 10 microseconds to flip, or 100 microseconds, but I could write a driver that does it in one. You know, for my application, is it okay or not?
And that's what should be
probably that driving factor
is what's the actual requirement
on the system
and does this meet the requirement?
Because I don't know about you,
I'm almost a perfectionist with my
code sometimes in my real-time systems. I've had to train myself to say, okay, I'd rather do it
this way, but is option B good enough? And will it get the product out the door in a reasonable
time and at a reasonable cost? It's how do I balance, for my clients, the use of their money. And optimizing the GPIO drivers on STM32 isn't usually the right use of their money.
The thing I keep coming back to is ask yourself what your job is. Is it my job to write cool drivers, or is it to make what this company wants?
Yes. Yeah. And the company didn't say, yes, that's a fine SPI driver, and it's great that you can transfer stuff 25% faster than the off-the-shelf driver. But also, it's a thermostat.
Yeah, right. So do they really need that 25% improvement?
Yeah. Yeah, absolutely. Yeah. You have to be a good steward of the end customer's time and money, for sure. And I think to some degree, that's where some of these discussions on whether a HAL is good enough or not, it really comes back to that. And for John, that's where that could be his argument of, hey, I understand these. I could rewrite this to do it better, but it wouldn't be a good use of my time.
And that might be something that catches an interviewer's attention, the fact that he's paying attention to his time and how he's using the company's money, essentially.
Maddy C. had a question.
In your excellent book, Reusable Firmware Development, there's very little mention of interrupts.
Understanding that interrupts are device and architecture specific, is there anything you can recommend that can help with reuse in that realm?
Yeah, so I think in the book, I covered just a very small piece of that.
And kind of the core thing to look about for interrupt reuse is to create callbacks.
And in the book, I believe I showed, you know, as part of the APIs you develop for reuse,
it was so little because it was like, hey, create a callback, make it so that you can register the callbacks with the
HAL, and then make sure that when you're using a callback that you
keep in mind these are interrupts, so you must follow best practices for an interrupt.
And that's why it's not covered that much, because there was maybe a page or two to
really explain that, show what a callback is, and that's it.
But if you look at Microchip's Harmony, you look at the STM32 tools as well, some of the other ones out there, lots of people have
their frameworks nowadays, callbacks is kind of the core way that we interact with reusable
interrupts. And a callback for me usually frees a semaphore or sets a variable.
It shouldn't do much.
I mean, just because it's in a callback and in your file doesn't mean you're not still in an interrupt, so you shouldn't be doing printf. So the callbacks have to be sensible and short
and not do everything.
Absolutely.
Yep, I completely agree.
That's exactly what a callback should look like.
So it's real simple, quick to the point,
maybe give a semaphore,
maybe store data in a buffer or something,
and then get back to the main application
and let it decide on the non-real-time pieces on processing it when it gets a chance.
When do you think we're going to get kind of built-in async capabilities in some of the
HALs and the RTOSs to where people don't have to worry about this stuff so much,
where an interrupt comes in and you have a callback, but the callback is scheduled by
something? You mean DMA?
No.
Pretty close?
No, so you're kind of talking about the sophistication where the OS is handling the interrupt processing versus today.
So yes, most RTOSs today, the model is us developers are still responsible for the interrupts, and the RTOS is responsible for scheduling.
But a sophisticated one will allow you to register those interrupts and let the RTOS handle it.
So it's a great question.
I know there are RTOSs out there that do it now.
Those are usually kind of the commercial paid-for, more sophisticated ones.
I think it's going to be a little while, to be honest, probably another five years or so,
at least in, like, the free RTOSes
and those types that are freely available to us.
Alicia seems confused.
What I'm talking about is instead of,
so you're talking, when you, your example,
the callbacks that you provide,
set an event flag and then somewhere else
handles that eventually.
Instead, I'm talking about, you provide the OS with a callback,
and the callback doesn't happen at interrupt level.
It deals with all of the I got to send a message to user level to have this happen.
And then that gets scheduled in such a way that it's fast enough for most uses,
but you don't have to handle all that stuff.
You just get a notification when the callback happens and
when the interrupt happens and
you don't have to make all that
machinery in between.
You don't get notified when the interrupt
happens. You get notified when the interrupt
has been handled and you are
in regular code again.
Yeah, you get notified that
this happened and now you can take some action
that isn't necessarily at interrupt level like you normally would, right?
Okay.
But there's other, I mean, using the word async was probably a little bit loaded, but there's other kind of uses like that where you can put something on a dispatch queue and have it happen.
Anyway.
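Christopher's idea, handing the OS a callback that runs at task level rather than interrupt level, can be modeled with a worker thread standing in for the scheduler. This is a sketch of the concept, not any real RTOS API; `DeferredIrq` and everything in it is invented.

```python
import queue
import threading

# Model of OS-scheduled interrupt callbacks: the "ISR" only posts an
# event, and a scheduler thread runs the registered callback later,
# at "task" level. In the real proposal all this machinery would be
# hidden inside the RTOS/HAL.

class DeferredIrq:
    def __init__(self, callback):
        self._events = queue.Queue()
        self._callback = callback
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def isr(self, event):
        """What the real interrupt handler would do: post and return."""
        self._events.put(event)

    def stop(self):
        self._events.put(None)
        self._worker.join()

    def _run(self):
        while True:
            event = self._events.get()
            if event is None:
                break
            self._callback(event)   # runs outside interrupt context

results = []
irq = DeferredIrq(results.append)
for n in range(3):
    irq.isr(n)
irq.stop()
print(results)  # [0, 1, 2]
```

The application only supplies `results.append`; it never writes the queue-and-signal plumbing itself, which is the convenience being asked for.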
Wait a minute, now you're all getting into Swift.
No, no, that's happening in embedded.
Fine, I'll be quiet now.
Oh, no, I mean, I hear circular buffers and all of that going on, yes.
Okay, one more listener question from Steve.
For embedded hardware and software architectures, when does it make sense to separate functionality into different devices rather than use a more complex
and expensive IC that combines functions? Is it only when forced, or are there good reasons to consider it outside of physical necessities?
Yeah. So from my side, there's certainly several times when it can make sense to break code up into multiple devices.
Sometimes it can be if you a lot of times look at costs, for example.
Sometimes if you're developing a product, it'll make sense to buy two of something versus buying one of something that's more complex from a microcontroller standpoint.
So it might make sense to actually put two smaller chips on board
than go and buy the next family size up that maybe is more expensive.
So from that standpoint, you can use that as a decision maker based on just cost and volumes
because buying a part in larger volumes will potentially decrease the overall cost of the product.
You can, nowadays, sometimes it isn't even so much looking at using multiple parts
as it is maybe looking at going to multi-core.
There's a lot of microcontrollers now on the high end sometimes and sometimes not.
There's things like the ESP32 or the STM32H7 that have multiple cores,
or the PSoC 64, for example.
You can go in and,
for a reasonable cost,
get two cores that you can assign
individual things to
and let them have at it,
so to say.
So I guess you just have to look
at the end application,
look at the costs,
look at the software complexity and decide, hey, is it easier for me to just separate these?
Is it easier to maintain two separate projects or is it easier to just do it in one?
So hopefully that answers his question.
We can certainly have follow up on that, I guess.
I mean, there are a few other things. If you have something that's very time sensitive and then an application that
isn't, like when you're collecting data from a laser system and you need to get a whole bunch
of data really fast and then process it after you've gotten the chunk, there's a good reason
not to have the processing with the data collection, because the processing may take time and may interrupt or delay the time-critical piece of data collection. There's also having a really small, cheap chip to monitor a few things and then have it wake up the big chip when the user actually wants to do something. Watches, um, that sort of thing. What else? I mean, motor controllers are often separate because, again, that has a real-time component that shouldn't be mixed in with an application.
Yes. Yep, absolutely. Yeah. And some of the parts will actually have a,
you mentioned TI earlier,
they have like their C2000 family,
which actually has a control law accelerator.
It's almost a separate core
that's designed to help with like motor control applications.
So I completely agree.
We need to separate different execution domains.
As an example, machine learning and real-time processing.
You wouldn't want to do those two things on the same part.
I mean, maybe, but be careful.
Yeah, and a lot of the wireless ICs, BLE and ZigBee and all those things are getting dual cores because they got tired of us using the chip to do other things when they have strict BLE deadlines.
Yeah, exactly.
Let's see. Are you working on another book now or do you have another article coming out soon?
Yeah, so at the moment I have not started on the next book. That is going to be this summer.
I have three or four topics; I haven't decided yet if I want to go down the route of, you know,
RTOS application design or tackle security or just target, you know, kind of general best practices. But it's going to be something probably along those lines.
So I'll be starting up on that here this summer.
I try to put a book out maybe every couple of years.
It seems to be a pretty good cadence.
As far as articles go,
every week I'm publishing an article to my blog,
usually pushing things up to Design News
once a month
or more.
I guess
some of the things I'm tackling
right now is how you develop
proper diagnostics for
RTOS-based applications and
how you do error handling and things like that.
There'll be some articles along those lines coming out in the next couple weeks.
Security is a tough thing in Embedded.
It's really easy to do wrong.
What kind of thoughts do you have on that that you might put in a book?
Yeah, security is tough,
especially for embedded developers.
There's a lot of good knowledge out there
that's kind of being pushed
to embedded software developers.
One of the things I'm finding, unfortunately,
is that a lot of embedded developers
or products or companies
seem to not care about security like they should.
So the interest is a little bit more lackluster than I would expect it to be, given how many of
us are building internet-connected devices. And doing it right is just hard, and it needs to be
done up front. You have to look at it up front, and you can't tack it on at the back end, because you have
to make sure you select the right parts, that you've got the right software frameworks in place, and, you know, developing a root of trust from beginning to end for the whole thing.
It's tough.
Um, and I think that's probably why we ignore it, because it can be costly and time-consuming, but.
And frankly,
some of it's kind of new,
I mean, using
small values of new, but a lot of the things
like the
security features in micros have only been around
for, I want to say,
six to ten years, right?
I mean, that's fine if you're just
coming up in the world and you're learning that, but
for people who've been developing a long time, it's something
new to kind of absorb and
add to their expertise.
Absolutely.
I mean, when you look at it, I mean, for example, TrustZone, you know, with the Armv8-M architecture, figuring out how on earth do you use that properly in, you know, the microcontroller domain. And this is where some of the vendors
are moving to multi-core parts
just so you can have a security processor
and they can lock it down
so that us developers can't trip over our own feet.
And then they give us a separate core
to do our own application code.
So it's interesting and tough.
And it's not something that's been around
or that we've had to really deal with a lot.
But it's something in the future.
Going forward, we have to be very careful with.
I know in one of your books, you recommended the SWEBOK, the Computer Society's Software Engineering Body of Knowledge.
Do you still recommend it?
Are there newer versions?
What is it about?
Yeah, so that's a great guide
that the IEEE Computer Society put out.
And they periodically update it.
I think it's currently at version three
the last time I checked.
So unless there's been an update
in the last couple months.
But I still recommend it
because I think it provides a really good overview of best practices from a general
software engineering standpoint for developers to follow. And, you know, it's free. All you have to
do is, you know, like anything nowadays, put in your contact information, they send you a free
PDF, but there's a lot of good references. I mean, they had dozens or maybe a hundred different people put input into it.
It's just a great resource, I think, for developers.
Some of this stuff, obviously, you can overlook because it probably doesn't apply to embedded,
but kind of the general software engineering best practices are fantastic.
And it's best practices like everything from working in a group to ethics to design, construction, testing.
Yep, requirements, testing, yeah, the whole gamut, which is why I think it's so great, because it's a good overview of kind of all the major pieces of software engineering.
I mean, it's a little intimidating,
but it is also really impressive in its,
if I did all of this,
I would be a better software engineer.
Yeah, I would too.
So it's, yeah, it'd be difficult to probably do all in,
you know, I don't know, individually,
but it definitely gives you something to shoot for of, oh, I could probably be doing this and, oh, I could be doing that differently.
Or, oh, you know, it gives a new perspective on some things.
And sometimes it's just a good reminder, you know, as we all develop software and evolve
and kind of change our processes, sometimes it's good to go back and review what other
people are doing too and say, oh, yeah, I forgot about that.
I probably should be doing that.
And I do need reminding.
I mean, I will happily have like trace, for example.
I've used it before, but, you know, I haven't used it in my current project because,
well, because we didn't need it.
And yet, as you were talking, I did look up what the trace would be for this
processor and, hey, look, I already have that software. Stuff like that, if you don't use it,
there's a good potential that you'll lose it. And so it's good to just look a little bit.
You also recommend something from Renesas,
the Synergy Software Quality Handbook.
Oh, yes.
Yeah, that is a, their quality handbook,
I was actually really happy when they published that
because I think it shows a really good set of metrics
and a simple process that, maybe it's not a simple process, but something for, I think, developers to target and try to do themselves.
It kind of shows a nice, sophisticated, well-thought-out process of how they develop software, how they test it, the metrics that they use to show that, yeah, this meets quality. They show an example of how they do their automated tests and continuous integration
and the reports that they generate on kind of the other side of that to show whether
the tests are passing and failing and how they rate that.
And to me, it was just fascinating.
I thought they did a fantastic job with that.
And the fact that they shared it with everyone, it's like, hey, you know, I'm not saying let's steal it, but let's look at that as a really good example of some practices that, you know, we can be leveraging ourselves.
And you don't have to do, like they have nine different reviews from market requirements to code to managerial.
You don't have to do all of them.
Just recognize that, you know,
maybe if you were building a satellite or a medical device,
those would be useful.
And which parts of it can you use in your current code?
Exactly.
It's a perfect thing to look at and say,
what can I leverage from this?
And you look at some of the, you know, the overall process. For them, they have so many stakeholders, you know. But, like I said, if you look at
their metric section or the reports that they output when they test it or how they interface
their static analyzers to do hardware in-loop testing and stuff like that, there's some
interesting things in there that you can probably look at and say, well, maybe I don't need to go that far, but where can I start and maybe improve my own design using some of these types of
capabilities? I was thinking about stakeholder meetings recently or stakeholder sign-offs
when one of my systems is going to start production, very first production line.
And we found out that the schematic revision in production was not the last schematic revision that had been given to firmware.
And so we didn't know what had changed.
All we knew was that it had a version change. And there was a little bit of freaking out because
what are you doing changing the hardware at this late date without telling us? And I was just like,
this is how we end up with meetings where everybody has to sign off before anything can happen.
Absolutely. It's one of those scary things for sure.
It's been really nice to talk to you, Jacob.
I know you have a coupon for the conference.
Can you share that with us?
Yeah.
So if anyone's interested in the Embedded Online Conference, they could just go to
embeddedonlineconference.com.
And in the registration area, they can just put embedded.fm, and that should give them
a nice discount. You guys can register for
$60 versus the
end of this month, I think
it goes up to $190
registration. So you can get in for $60
with embedded.fm.
The end of this month, is that March or February? Just checking.
It's going up at the
end of February to the 190.
But the coupon code embedded.fm will let listeners register at $60 through March.
Thank you.
Yeah.
Yeah.
Thanks for asking.
I wasn't very clear there.
Well, this is going up in March.
So there's confusion.
Okay.
Timey-wimey.
Gotcha. And I don't like promotional stuff anyway, so I talk really fast around it.
And this is happening May 18th, 19th, 20th.
And what time zone is it?
Yeah, so it's going to kick off. What we're doing is the European talks will actually start pretty early.
I think we have our first talk at seven or eight Eastern time.
But the European speakers will talk first, and their recordings are available immediately.
So for people on the West Coast,
they don't have to necessarily worry
that they're going to miss it.
They'll be able to get back on in the recording
and things like that.
So it kind of starts a little early,
but it goes all day
to four in the afternoon.
And after each talk,
we'll have a little Q&A session
with the speakers
and try to get everyone together
periodically throughout the day
in case they did miss a Q&A,
they can still ask questions.
Four o'clock in the afternoon,
Eastern time.
Yes, Eastern time.
Yep.
Jacob, do you have any thoughts you'd like to
leave us with? Yeah. I, I think the thought is, you know, when it comes to software development,
I think it's great for people to, you know, try to look at what other people are doing,
try to always be expanding your knowledge. You know, kind of, I guess a recommendation is,
you know, I like to use one of my lunch periods every week to kind of make sure I look around to see what the latest and greatest
stuff is.
So my recommendation, I guess, is, you know, don't stand still.
Make sure that you're constantly evolving the way you do things.
Otherwise, you could get left behind.
So our guest has been Jacob Beningo, an independent consultant and lecturer who specializes in the design of
embedded software. He has authored three books. His most recent is MicroPython Projects.
Thanks, Jacob.
Thanks, guys. Thanks for the invitation. It was a pleasure speaking with you.
Thank you to Christopher for producing and co-hosting. Thank you to our Patreon supporter
Slack group for some excellent questions.
And thank you for listening. You can always contact us at show at embedded.fm
or hit the contact link on Embedded FM. And now a quote to leave you with from Admiral Grace Hopper.
Humans are allergic to change. They love to say, we've always done it this way.
I try to fight that. That's why I have a clock on my wall
that runs counterclockwise.