Embedded - 266: Drive off the End of the Universe
Episode Date: November 1, 2018

Chris (@stoneymonster) and Elecia (@logicalelegance) talk about conferences, simulations, and future episodes.

Simulation/Emulation: QEMU and Renode. Chris also noted there were QEMU for STM32 instances such as this one from beckus.

For conferences, we named several but had no particularly useful advice. We did recommend classes such as James Grenning's training on TDD in Embedded Systems and Jack Ganssle's Better Firmware Faster.

There are several (free) machine learning courses available from Udacity, including Intro to Machine Learning, which was part of the Self-Driving Car series that Elecia took.

The future basics episodes were grouped into:
Flow of program control (pre-RTOS)
Design patterns
RTOS information
Transcript
Welcome to Embedded.
I am Elecia White, here with Christopher White.
And this week is going to be a short one because it's just Chris and me talking to each other.
And we kind of left it a bit late.
And eventually there'll be kids who want their candy.
Yeah, I thought about telling you scary stories during the show, but then I realized it wouldn't still be Halloween.
Oh, version control.
Kids!
So, yeah, that's what we're doing today. Get it?
All right, so it's mostly just me today.
And Christopher apparently is not really here.
Oh, I'm here.
Still no?
I was trying to think of something else to say spookily, but I should have prepped first.
Yeah, isn't that the thing that happens?
Do you realize this is episode 266?
266.
Is that a special number or is it just a high number?
It's a high number.
Usually we say something about it being every 50, because that is essentially
a year's worth of episodes.
And somebody asked me something, and I wasn't sure whether it happened one year or two years ago,
and then I just thought about what podcasts were around there, and I could figure it out based on
the numbers. It was kind of weird.
That is... it's been more than five years.
Well, according to Wikipedia, 266 is a sphenic number, a Harshad number, non-totient.
It's no good if you don't tell me what these things mean.
I have no idea what they mean.
A sphenic number is a positive integer that is a product of three distinct primes. I know that you come to the Embedded Podcast in order to have Christopher read Wikipedia to you.
No, no, it's for me to actually search Wikipedia first.
That's the service.
Reading is easy.
We also usually send out listener forms, feedback forms, every 50. At least that's the goal. So once a year.
But I still haven't finished the
ones from 200.
Maybe it's time to just toss them and do it again.
Well, no. I mean,
there's lots of good stuff in here.
I mean,
I asked some good questions, but
you know,
do you have a favorite episode?
And then I started collecting what were the favorite episodes to do the statistics so that I could figure out what people liked.
And every episode is mentioned, which is really cool.
I mean, that's really kind of gratifying, but it makes for really bad statistics.
Yeah, it should be some sort of distribution.
Everyone likes a different episode.
That's weird.
And then the what could we do better?
Be more clumpy audience.
Yes, be more clumpy.
That one is clumpy.
Many people suggested more cat episodes and many people suggested fewer cat episodes.
We've only had the one and that was like back before anybody was really
listening to the show. I know.
I think having less cat episodes
is
just being cheeky.
Yeah.
There was one comment that I thought was
funny for you, especially given
your intro today.
Sometimes
Chris will not make a sound for such a time that I've forgotten
he's on the show.
Me too.
Then he'll say something sudden, all of a sudden, and it's like, oh, who was that?
Oh, yeah, it was Chris.
My job.
It's from the days when Chris wasn't even a co-host.
He was a producer who occasionally chimed in.
No, I never chimed in.
Oh, no, you'd type at me and I'd have to read it.
That was really effective.
I'm not even sure I did that.
I just sat behind my console and occasionally made a mark.
Expected you to carry the whole thing.
Yeah.
Without any useless puns or comments being thrown at you.
These were hard days.
I dig the puns.
Yeah.
All right.
That's enough gloating, I guess.
I'm just shocked.
266.
Actually, yes, I should have done the whole shocked thing at 256 or 250,
but I'm running late, software schedule and everything.
So I mostly have emails for us to talk to today.
All right. All right. Emails.
Jesse was wondering if we use emulation or simulation tools to prove out things in our
day jobs. There's no replacement for actual hardware, but sometimes getting things tested
on emulated platforms could be helpful. Some companies have started using emulators in continuous testing
frameworks. Jesse mentioned
QEMU?
QEMU.
QEMU.
QEMU.
And another
project, a new one,
Renode,
which is an emulator that focuses on helping
people emulate boards and networks of devices. What are our
thoughts on emulation?
Do you want to go first? Do you want me to go first?
You go first. It's more natural. Then it's like we have a conversation. Oh, I see.
I don't think we're supposed to do the stage direction.
This is why they listen, right?
I have used emulation,
or at least been at places that have used emulation next to me
quite a bit, I think.
So one of my early...
Wow, that couldn't be an early job.
One of my middle jobs
was to actually write an emulator for some hardware that didn't exist, which was a processor.
And we absolutely needed that because we were writing a compiler
and an assembler and stuff.
And without that, you can't test those things or see what might happen.
I mean, you could if you had, I guess, the Verilog simulated thing running.
But that's often very slow and cumbersome,
and it was much easier to just write some C code that emulated the chip.
So that was my first kind of experience with emulation.
I have used QEMU at one place
where we had, I think, a group of contractors whose job
was to make pretty much the whole system work
from the micro all the way up to display
so that you could build applications for it
without having to have the hardware.
It didn't end up getting used that much, but it was impressive.
So it was definitely possible.
But it didn't end up getting used that much
because the hardware came in and it was plentiful
or it was just too cumbersome?
We weren't doing what it was useful for.
We ended up not doing what it was useful for
as often as it mattered.
If we were building hundreds of apps,
it would have been really cool.
And now Fitbit does have a simulator
for building apps that's public and part of the SDK.
So that's definitely more of a simulator, not an emulator.
It's not pretending to be the micro.
But you had to do a little bit with that because the simulator would show something,
and then if the screen couldn't reproduce it pixel for pixel, people would whine at you as graphics person.
Yeah, I mean, there's bugs that
have gotten fixed in both places,
because some of the pipeline was similar.
I think that's okay
to say. I haven't really revealed anything.
Yeah, but QEMU is cool.
My impression of it was that it's
it can do almost anything.
And as you know, things that can do almost anything,
they can be hard to set up.
I did do a search before the show for QEMU and STM32,
and there's a bunch of projects that allow you to emulate an STM32.
So you can do stuff with it, and you can build your whole system
if it's worth it.
That's kind of the trade-off.
It can be a lot of work, and then you have to maintain it and keep it up to date
with your system. So if
your hardware is plentiful, then sometimes
it doesn't make sense. Yeah, although being able to have
commit tests or continuous testing frameworks, that's where emulation
becomes really useful.
Even hardware in the loop, that's tough to make it happen all the time when you're doing
that sort of testing.
For me, I'm using a massive simulation system for a recent project.
You're simulating the entire universe, if I recall correctly.
Yes.
Incorrectly. When you drive off the end of the universe, you just tumble over and over, seeing the
bottom of the world and the sun go by over forever, just forever, just fall.
Yeah.
So, the project I'm working on is pretty complicated.
And I can send people who ask links, but I don't think I want to just send the link to everyone.
So you'll have to actually reach out if you want to know more.
But it's a robot operating system thing.
So there are a bunch of different components.
And there is the simulator for the input to the hardware.
And that goes through the ROS systems to all of the various
subsystems. And it's really, really complicated. I told somebody recently, don't use ROS, because
their system just wasn't that complicated. And I think he was surprised because he thought his
system was complicated. I'm like, no, ROS is for really complicated systems. And this one is. But ROS also has a bunch of simulation and emulation that lets you build your little worlds and lets you record and play back data later.
And so that, yeah, I'm using that all the time.
Have I used any emulation for boards?
Not lately. At ShotSpotter, we had some simulation emulation systems to help with testing, and those were incredibly valuable.
I've always been a big fan of if you break the line between hardware and software, you can test the software algorithms fairly independently.
And so I called those sandboxes and not emulators because it was where the
algorithms people went to go play.
Here's the thing we did at Procket, which was a networking company.
We were building custom hardware, lots of custom chips,
which is going to take years.
So the software people could...
We could have twiddled our thumbs for two years or started on the software. So we started on the software, and we made it so the software could build for a standard PC or for the control board of the hardware we were going to use.
One was... I think the control board was based on PowerPC, and the PC was obviously x86.
So we built the whole software stack
minus the hardware
control stuff with a
abstraction layer
that could run on a PC. And we shipped those
to customers, and they put them in their labs
and put them in their networks as, hey,
this is a router. And, you know, just put standard Ethernet,
a bunch of standard Ethernet cards. And so it was
the whole routing subsystem that was abstracted away and we could run it
on commodity hardware and get actual testing time years before stuff ever came back.
So that's another approach, if the goal of emulation is to get a jump on not having hardware,
aside from the continuous integration thing. Although that's useful for continuous integration too,
because it means you can run...
It's much easier to run on a PC
than if you have an IoT device
that you've got to put in a rack somewhere
and have some computer dedicated to talking to it.
So if you can abstract the important parts away
to something that can be run anywhere,
on a VM or on a standard PC.
That's not emulation, but it's another way around the problem.
Yeah, and it goes back to our usual design for testing,
and then you can break your system into pieces that require hardware and don't.
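As a rough illustration of that hardware/software split, here is a minimal C sketch: the application code only calls through a small table of function pointers, and a PC stub can stand in for the real board. The names here (hw_if, sim_hw_if, and the implied target_hw_if) are invented for illustration, not from any project mentioned in the show.

```c
/* Minimal sketch of "abstract the hardware away so the rest runs on a PC". */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Everything the algorithm layer needs from the hardware goes through this table. */
typedef struct {
    int  (*send)(const uint8_t *buf, size_t len);
    int  (*recv)(uint8_t *buf, size_t maxlen);
    void (*led)(int on);
} hw_if;

/* On the real board these would poke peripherals; on a PC they log or loop back. */
static int pc_send(const uint8_t *buf, size_t len) { (void)buf; return (int)len; }
static int pc_recv(uint8_t *buf, size_t maxlen)    { memset(buf, 0, maxlen); return 0; }
static void pc_led(int on)                         { printf("LED %s\n", on ? "on" : "off"); }

static const hw_if sim_hw_if = { pc_send, pc_recv, pc_led };

/* Application code only ever sees the interface, so it builds and runs on a
 * workstation (or in CI) long before the hardware exists. */
static void app_tick(const hw_if *hw)
{
    uint8_t frame[] = { 0x01, 0x02 };
    hw->led(1);
    hw->send(frame, sizeof frame);
}

int main(void)
{
    app_tick(&sim_hw_if);   /* swap in a hypothetical target_hw_if once the board arrives */
    return 0;
}
```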
Shall we go on?
Yes.
Okay. So,
Gopreet Mukar wondered, as professional embedded engineers, do we write our own library for every chip we've used in our projects? How do you go about finding a library online for a brand new chip? If a library is available for Arduino, do you try to port the library to your specific MCU, or do you write your own? What approaches, in your experience, are most efficient?
There are a lot of questions there. So, no, we don't write our own library for every chip we use in our projects. Usually the vendor provides a pretty good stack of stuff that you build on top of,
CMSIS or...
With Cortex, you get the CMSIS hardware abstraction layer.
I would not recommend starting from scratch on that stuff.
It's a lot of boilerplate and wasted time.
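For a sense of what that vendor-provided layer buys you, here is a small example built only from standard CMSIS-Core calls (SystemCoreClockUpdate, SysTick_Config, NVIC_SetPriority, NVIC_EnableIRQ, __WFI). The device header name and the EXTI0_IRQn interrupt are placeholder assumptions for an STM32F4-class part; swap in whatever header your vendor ships.

```c
#include "stm32f4xx.h"          /* assumed vendor/CMSIS device header */

volatile uint32_t g_ticks;

void SysTick_Handler(void)      /* CMSIS startup code names the handler for you */
{
    g_ticks++;
}

int main(void)
{
    SystemCoreClockUpdate();                    /* CMSIS: refresh SystemCoreClock */
    SysTick_Config(SystemCoreClock / 1000u);    /* CMSIS: 1 ms tick, no register math */

    NVIC_SetPriority(EXTI0_IRQn, 5);            /* CMSIS: NVIC helpers instead of   */
    NVIC_EnableIRQ(EXTI0_IRQn);                 /* hand-written register pokes      */

    for (;;) {
        __WFI();                                /* CMSIS intrinsic: sleep until IRQ */
    }
}
```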
Also, the provenance of libraries is kind of important professionally.
Yes.
When you're doing hobby projects, it's fine to go grab a library here or there and paste your system together.
But professionally, you have to know where it came from, who's supporting it, what the license is.
Do you trust it?
Do you have to maintain it?
Are they maintaining it?
Is it free?
How often do they release and how are you going to put it into your code base so that it stays buildable?
It's your product.
And if something goes wrong with a library
that you found off the back of a truck, that's not their fault.
It's your fault for choosing it.
And if it fell off the wrong back of the truck,
you put your entire company in danger if you use it.
Yeah.
As for the case where a library is available for Arduino,
do you try to port it?
No, no, no.
I wouldn't take anything Arduino and just put it in.
I have looked at MIT-licensed Arduino classes
for their interfaces more than anything else.
Say what you mean by that.
So, like, Adafruit has their sensor library,
and they have a number of things that go on top of their sensor library.
Some accelerometer, magnetometer integration pieces for tilt sensing and direction sensing.
And so I may not use that code, but I may go read it to see.
Okay, so I was thinking I was going to have these functions.
Oh, they have that function.
That would be very useful.
I should have something like that too.
So you look at their API, but not the implementation.
I'll look at the implementation if I think it's an area I'm going to have bugs in.
But I don't often use random
libraries off the net.
Random libraries off the net?
No, no, it's funny.
And sometimes CMSIS is
big and bulky and
There are, yeah, there's definitely other alternatives.
And sometimes I have
read the relevant
CMSIS stuff and taken it down
to the two lines I need.
But that's not doing your whole library.
That's saying, this part of CMSIS is not something I want,
or I want to reimplement or optimize or pare down
so it makes more sense for my particular application.
Well, I mean, he follows up,
does every manufacturer ever provide libraries for their chips?
So this is this thing that has come up for me quite a few times with sensor chips and with flash chips.
The flash chips are heading towards the standard, but they all, of course, have their individual timing and subsets of the standard, or supersets. It depends on the chip.
And so, how am I going to put it in my system? Maxim doesn't care. The flash vendors, they don't care.
And there sometimes exists code, but it never does exactly what I need. I tend to just write those.
And if I felt like I was writing the same one over and over again, then I would figure out how to put it into a library and reuse it, implement it on my own time and then reuse it for clients. But for all that I implement three flash chips a year,
they're all different.
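As a concrete example of the kind of per-chip driver code being described, here is a minimal sketch that reads the JEDEC ID, which most SPI NOR flash parts return for the common 0x9F command. The spi_select, spi_transfer, and spi_deselect functions are assumed stand-ins for whatever SPI layer the project already has, not a real API.

```c
#include <stdint.h>

extern void    spi_select(void);            /* assumed: assert chip select   */
extern void    spi_deselect(void);          /* assumed: release chip select  */
extern uint8_t spi_transfer(uint8_t byte);  /* assumed: full-duplex transfer */

#define FLASH_CMD_READ_JEDEC_ID 0x9Fu       /* common across most SPI NOR parts */

typedef struct {
    uint8_t manufacturer;   /* e.g. 0xEF for Winbond, 0xC2 for Macronix */
    uint8_t memory_type;
    uint8_t capacity;       /* usually log2 of the size in bytes */
} flash_id;

flash_id flash_read_id(void)
{
    flash_id id;
    spi_select();
    spi_transfer(FLASH_CMD_READ_JEDEC_ID);  /* send command, then clock out 3 ID bytes */
    id.manufacturer = spi_transfer(0x00);
    id.memory_type  = spi_transfer(0x00);
    id.capacity     = spi_transfer(0x00);
    spi_deselect();
    return id;
}
```

The ID read is about the only part that stays the same from chip to chip; the timing, sector sizes, and status-register handling are where the per-part data sheet work goes.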
For most of the peripheral chips,
my experience has been either find some sample code in the app note.
Yeah, the app note.
Definitely check the app note.
They might have it there.
Or occasionally there's some sample code,
but sample code is,
here's something to demonstrate that this does something.
And maybe it even does 75% of the functions of the chip,
but it's done in like...
It polls instead of interrupts.
It's a choose-your-own-adventure kind of writing style where,
okay, we're going to do everything,
but you can't actually take this code and put it in your code
and use it in a sensible way.
It does it at one-tenth the speed, or it uses a huge amount of memory.
It's just one giant function that just spy-pokes everything it needs to do
with hard-coded register numbers and stuff.
And then, oh yeah, see, this works.
It's like, great, okay.
I mean, that has a place.
It is useful to make sure it works.
Yeah, but...
But then when you actually want to put it in your system...
And you can refer to it when you're writing good code
to see if you're making a mistake.
That's sometimes useful.
Oh, yeah, that's very useful.
Yeah, I had a client recently who I think was a lot more excited when I said, oh, yeah, I've used that chip, it has that problem there. But even as I said that... yes, it was useful to have that information, but what they wanted to do, they were hitting that problem and I knew how to fix it, but I didn't have to go back to my old code and look.
I just knew where in the application note it pointed out the footnote that said you have to do this before that.
So I guess I don't write SPI drivers nearly as often as I used to.
Or I2C or UARTs.
Those all seem to be done most of the time.
I still do write peripheral drivers from data sheets for communication, memory, and sensors.
Yeah, and it's not clear here if he's talking MCU or peripheral libraries.
MCUs tend to come with a lot of stuff because they want you to use them.
Although I did see that STM had some code for one of their inertial parts and it was
reasonable code and then I looked at the code and it said
it can only be used on STM parts and I didn't have an STM
so I was using their STM inertial
peripheral but they wouldn't let me use their code
because I wasn't using their MCU.
And I was actually really irritated by that because, come on, it's not, why are you forcing that?
And I suspect if I'd fussed at ST, they probably would have fixed the license for me.
But instead, I just recommended a different peripheral.
But that was for other reasons too.
Do you ever talk to the manufacturer?
I avoid talking to the manufacturer, even if they have stuff I want.
It's usually pretty hard, especially if you're not a big, giant company.
It's hard to get their attention.
They don't want to talk to you.
And then they want to know everything you're doing, and I can't say,
so I'm far more likely to just tough it out and write the code.
I talked to them a couple times for some peripheral devices where it was clear there were problems.
Like, this does not work.
Your timing in this data sheet is completely wrong.
I've spent three weeks, and I can demonstrate it to you.
And then they come back and say, oh yeah,
that's the data sheet for the different package.
And we had a typo in the one for the
can
package, but not for the
DIP package.
Yeah.
We were 10 milliseconds off on the can package,
which is the one you were using.
So, just fix that.
You should have read the errata, which we updated last night.
They didn't have one.
Yeah.
I actually have been told, "You should read the errata," and then noticed the
date was between the time I pointed out the bug and went,
yeah.
Yeah.
What other times have I talked to manufacturers?
I mean,
sometimes when you're bringing up a big MCU,
you work,
you know,
your company's working with them to get their business
and they provide support.
So that could be a different situation.
Oh, yeah, that's different.
But generally, yeah, don't talk to them too much.
They don't have time for little guys like us.
Okay, should we go on?
Yes, I suppose so.
This is from Triple T, who works on medical embedded systems and whose company has recently mentioned they would be okay with paying for Triple T to attend conferences and/or classes
on embedded systems for supplemental training.
Exciting.
What advice do we have?
We don't leave our house.
I've done very little training.
I'm completely self-taught now.
Depends on what training you have.
Yeah, well, that's true.
But there's some stuff that's good to kind of cover every five years,
even if you already have it.
Yeah, James Grenning goes around the country
and does classes on his test-driven development.
I went to one of those, and it was pretty good.
It wasn't what I expected, but it was pretty good.
I keep meaning to do one of those,
but it's difficult when I'm in a real
company to say, hey, I'm going to go
off for three days and do some training
that you're not going to pay for.
But they are willing to pay for stuff like that.
Yeah, they are. Some of the big companies are.
But there's always a crisis.
Yeah.
Jack Ganssle has
a Better Firmware Faster class
that he gives.
I don't know if that's online or in person.
Um, if you want to do an online course, there's an edX course that recently got suggested in the Patreon Slack channel.
So I'll put that in the show notes.
What else?
I mean, going to a conference, we don't do it very much. I'm not a big fan of conferences lately, unless you have a very specific thing that you want.
Like, I know this vendor is here, or these five vendors are here, and I'm going to be checking out this class of thing.
And I want to talk to all of them and, you know, build up a relationship. That's cool.
I don't know. I mean, the ones I've been to in recent memory have all been... I don't know, most of the talks are
kind of producty, and I wasn't really learning anything. I mean, I guess I haven't really done the tracks
where they have, like, educational tracks before. So I can't really speak to whether those are worth it or not.
I remember going to my first Embedded Systems Conference.
And that was super useful because it was a good awareness of where things were in the industry.
And I don't remember it being as producty as the last one that I went to.
So I don't know whether it's just that I've gotten more cynical
and looking for advertising, or they've gotten worse.
Yeah, a lot of the talks, I remember one, a couple from the Arm Tech Con,
where it was like, okay, here's the architecture of this new chip,
and here's what it does, and here's how we can do security with it.
And I feel like I could have learned all that in five minutes from reading.
But let's say that Triple T has the opportunity to go to a remote conference
and spend a few days schmoozing and hanging out and networking.
Well, I can tell you that Chris Svec is going to Supercon, which is coming up real soon.
I mean, like the day after this airs.
Yeah, not helpful.
Not helpful.
But there are some other ones.
I think we're going to talk next week to someone who is organizing Bang Bang Con.
That's exclamation, exclamation point con.
And it is... Short talks, right?
Short talks, and somewhat similar to Supercon in that it's as much about the people as it is about the tech, and it's not a deep dive into any particular tech.
Arm TechCon, a listener just saw that and enjoyed it.
Sensors has a couple of conferences.
That's one that I have been thinking
about going to. That's often
product-y, but...
I think if you go in with a plan,
before you sign up for it,
see what the schedule
is like and if there's things that are interesting.
If you find things that are interesting,
then choose them and kind of plan
for it. I think it's a better experience than when we sort of just drop in and wander around hoping for somebody to say something interesting.
But there are also lots of little ones that are, I mean, some of it's because we're near the Bay Area.
But all around, there's open source conferences.
It depends on what you're interested in. And since
Triple T said something about medical embedded systems, I went and looked
and there are a bunch of conferences kind of related to that.
The more specific ones, not the giant embedded systems conference,
probably tend to be better.
And if you can go in with the idea
that you want to see half,
if it's multi-track,
if you can fill half of your schedule
with stuff you actually truly want to see,
that's a good conference.
And then the other ones,
you pick up a couple of random slots
just to expand your horizons.
Totally worth it.
I haven't been
good at... I used to like to go to conferences,
but either it's product-y or
I'm just a hermit or
I always feel like I could
have given that talk, which is not the best
mindset. They're expensive. I mean, that's
the consideration, right? I mean, if
a company's paying for it, that's
fine, but it is a
big commitment of time and money. And so that's the trade-off I always can't quite square.
Yeah, it is a pretty big commitment, especially if travel's involved too.
I mean, it's good for your career, though, to go see stuff and meet people. Yeah, I agree. I'm not saying no.
I'm just trying to balance it against taking a course,
which might cost the same amount of money.
I mean, I'm not trying to save money for the company.
Yeah.
Or the more private courses. I mean, Grenning will come to your company and do the class there with your code.
And that's probably way more useful.
And that's way more expensive,
but it's a conference for everybody
and you're all on the same page
and it's sort of team building, blah, blah, blah.
Yeah, and if one person does it
and comes back and says,
hey guys, I learned all about test-driven development,
we should totally do it.
And then it's easier for people on your team to say,
well, we'll get to that.
Yeah, someday.
What are you doing for the next sprint?
But if everybody's doing it,
then you can kind of plan on,
okay, how are we going to incorporate this, all of us,
because we've all spent the time in this
and we've all learned why this might be useful
instead of having one person have to come in
and re-proselytize to the rest of the team.
Yeah.
Listeners, are there any conferences you've gone to that you've enjoyed
that Triple T would be interested in going to
with a medical embedded systems bent?
I think some of the security conferences people really enjoy,
but I'm not sure that's where we should suggest.
Yeah, and any
courses too, beyond the
ones we mentioned.
Next one is
Sergey.
A friend is working in
the embedded field and interested
in machine learning and looking for
a job near Berkeley, California.
There are opportunities at Planet Labs,
but where else would there be opportunities
on the intersection of these two fields?
This is such a softball question.
I mean...
Everywhere?
Everywhere.
I mean, to some extent,
the machine learning is the flavor of the month
and it's probably being applied in places it shouldn't be. But I mean, it's being done everywhere. Even they're doing machine learning on the back end, right? In the databases, they're looking for patterns
or finding uses for data that can't be done through traditional methods.
So that opens it up to a lot of companies you might not think of.
Most of the companies that are doing wearables or local sensors in the home.
Health field, too.
Health field.
Some of them have big initiatives to do machine learning on the back end of devices, but also
to not send as much data through the network.
Yeah.
And so they do all of these analytics, and they figure out some machine learning stuff
that they want to do, and then they want to put it on the embedded device.
And I mean, I can't even tell you how many contracts have looked like that.
And it's getting better.
The more you know about machine learning as an embedded software engineer,
the more it helps with that.
Sometimes the analytics people will give you completely random weird things
and you have to be able to talk to them in their language in order to convince them.
There might be more efficient ways to do it on an embedded system.
Yeah.
Smart home stuff.
There's a bunch of startups doing that.
Power stuff.
Remember we talked to the grid lady.
Yep.
The obvious ones,
the self-driving car companies,
Cruise, Tesla,
they're all, of course, doing that stuff.
And it's embedded,
whether or not it's, you know,
a gigantic GPU attached to a huge SOC,
it's still embedded.
It's a car for a while.
All the mobile companies, Amazon, Apple.
I mean, we're not given a very good answer here
because we're basically saying apply to anyone you like
and you can probably find a machine learning post.
And if you're looking for instruction in machine learning,
I really like the Udacity classes.
I took a couple of different sets.
I took the first set of the self-driving car and I took AI for robotics.
They had pretty different views.
It wasn't the same material.
There was a little bit of the same material, but it wasn't that much overlap.
They have some introduction to machine learning and data science. Many of them were free
and they were quite good. So if you're just starting and you want to know more about machine
learning, I liked those. They were better than, or they were easier for me than most of the books
that I picked up, which is weird because I like books so much more than videos.
But I apparently needed the back and forth and the hand-holding that the videos, lectures,
and quizzes gave me.
Now, here's a thought.
Maybe you shouldn't be looking for a machine learning job.
Maybe we should make them less smart?
No, no.
I'm saying if you're having trouble finding a job
and you're limiting yourself to machine learning,
maybe just look for embedded jobs in general
at places that might have machine learning
that you can eventually slide over into.
Oh, yeah.
Even if it isn't on their embedded development side.
Because that's going to be pretty uncommon,
is an embedded machine learning job posting.
Because a lot of times it's the algorithms people,
and they're somewhere else, they're not embedded.
Yeah, I guess most of the people who wanted me to do both were tiny companies.
And so they couldn't afford both. And let's face it, I am not
ready to be the solo machine learning person.
I can implement your algorithms, but I'm not quite ready to create.
The data set creation is bad. And from the question, it says, interested in machine learning.
So I don't know if that's, I'd like to work on machine learning,
but I'm a beginner or I'm an expert.
So if you're an expert, then...
You should have plenty of options.
Then you'd already know.
So I'm guessing not an expert,
and you should find a job at which you can learn machine learning,
but that's not going to be a job.
That's going to be a job wreck for a machine learning embedded person,
if that makes any sense, which probably does not.
That makes sense.
It means go to an IoT company,
which is like everything.
Sorry.
That was what the internet was planned for.
That was the whole point.
Is it, really? What the internet was planned as, as of 1995?
Well, that covers my questions.
What? We have to have more questions. Didn't some come in today?
Sure, some came in today. Well, there was the person who wanted to know if they could put articles on our blog. So the answer to that is no.
Okay. Please hold. No, we talked about that.
You're right. Wow, I guess we got a lot of spam.
Malta did email back. He was the one who gave me
the good lightning fact I used recently.
Suggesting more basics episodes.
Which is tough, because the basics ones require quite a bit more prep, or at least
discussion. So, yeah,
we're going to do more of those. It's kind of a
what do we do next? And the things we've been talking about are RTOSs and flow of control.
Some of that's hard to do on a podcast, but it might be fun to try.
And, you know, when would you use an RTOS?
Well, we can talk about that a little bit.
Let's talk about, let's not do the show that we're going to do.
Let's talk about each of the options in a little bit of detail,
and then we can open it up to the listeners to say which one they'd like most.
So talk more about the first option here.
So one of the things that I would like to do for a show
is not directly about RTOSs.
It's sort of pre-RTOSs.
You can set up your code in different ways.
I think we're all kind of familiar with the Arduino setup and loop.
And what happens is setup gets called once and loop gets called every pass through a while loop.
And the while loop just goes on and on and on.
And if you dig into the Arduino internals,
the while loop is doing more stuff,
but that's fine.
And a lot of embedded systems are like that.
You have some init and then you have a while loop, and it goes on forever.
And of course you might have interrupts,
which then signal the while loop in some way or another, maybe through globals.
I mean, in an operating system, you would have semaphores and mutexes
and you pass all those things around.
But if you really just bare metal, it's usually the interrupt sets a global.
And so that complicates the system.
But it's still pretty linear.
You're still going through this while loop, which now probably is an event loop.
And when somebody pushes a button, something happens.
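A bare-bones sketch of that structure in C: init once, loop forever, and let the interrupt do nothing more than set a volatile flag. The handler and function names are invented for illustration; hook the ISRs to your part's real vectors.

```c
#include <stdbool.h>
#include <stdint.h>

static volatile bool     g_button_pressed;  /* written from interrupt context */
static volatile uint32_t g_tick_ms;         /* free-running millisecond counter */

static void hardware_init(void) { /* clocks, pins, interrupt enables for the real part */ }
static void handle_button(void) { /* the real work, done at main-loop priority */ }

void button_isr(void)       /* hook this to the real vector in your startup code */
{
    g_button_pressed = true;    /* keep the ISR short: just flag it */
}

void systick_isr(void)
{
    g_tick_ms++;
}

int main(void)
{
    hardware_init();

    for (;;) {                  /* the superloop / event loop */
        if (g_button_pressed) {
            g_button_pressed = false;
            handle_button();
        }
        /* other periodic work, keyed off g_tick_ms, goes here */
    }
}
```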
When you get multiple buttons or buttons that depend on what the previous button was,
you start building state machines and state machines can be beautiful and they
can be horrific.
Beautiful and horrific.
Yeah.
Well,
that's the thing is sometimes it's at the same time and sometimes it's not.
When they're horrific is when the code is not organized as a state machine
explicitly,
but it is a state machine. And that's where things get messy.
It's like, yes, there are states, but they're implicit
or they're handled in various different places.
And there's no one view of the state.
It's all a bunch of flags that can be ORed together,
which means you actually have, you think you have three states,
but you actually have eight, or you think you have 10 states and you actually have a thousand.
So we'll get into detail on that. But yeah, I mean, you can run into trouble there, and there are many ways to set up state machines that are better or worse.
I have always, well, since I discovered it at LeapFrog, I have loved the table method of state machine setup.
You know, a lot of people use flow charts, which are fine, but they get complicated really fast.
And with the table method, you have your states on the left side, and you have your events on the top.
And you can just say, okay, when this button is pressed and I'm in this state, I go there. And by doing this, you end up answering the question of, well, what happens if the user does something stupid, like presses this button when they shouldn't? Because you have a blank cell in the table and you have to handle all the states.
Okay, we're doing the show now.
Okay, sorry.
No, I was just going to say, that's good.
So, it's just state machines.
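A minimal sketch of the table method: states down the side, events across the top, one transition per cell. The states and events here are invented for illustration; the point is that the table forces every cell, including the "user did something stupid" ones, to be decided.

```c
#include <stdio.h>

typedef enum { STATE_IDLE, STATE_RUNNING, STATE_ERROR, NUM_STATES } state_t;
typedef enum { EVENT_START_BTN, EVENT_STOP_BTN, EVENT_FAULT, NUM_EVENTS } event_t;

/* next_state[current state][event]: every cell must be filled in, so there is
 * no forgotten combination of state and button press. */
static const state_t next_state[NUM_STATES][NUM_EVENTS] = {
    /*                 START           STOP          FAULT       */
    /* IDLE    */ { STATE_RUNNING, STATE_IDLE,    STATE_ERROR },
    /* RUNNING */ { STATE_RUNNING, STATE_IDLE,    STATE_ERROR },
    /* ERROR   */ { STATE_ERROR,   STATE_IDLE,    STATE_ERROR },
};

static state_t current = STATE_IDLE;

void handle_event(event_t e)
{
    state_t next = next_state[current][e];
    printf("state %d + event %d -> state %d\n", current, e, next);
    current = next;
}

int main(void)
{
    handle_event(EVENT_START_BTN);  /* IDLE -> RUNNING */
    handle_event(EVENT_FAULT);      /* RUNNING -> ERROR */
    handle_event(EVENT_STOP_BTN);   /* ERROR -> IDLE */
    return 0;
}
```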
And then the other one that I wanted to talk about was function pointers and callbacks. I've seen this a lot lately. And some of it's just weird callbacks.
But you register for some information.
And then when that information is available or needs to be acted upon,
your function is called.
And what happens is your whole while loop can go down to just while(1) do nothing, or while(1) idle. And instead, the whole system is based not on interrupts, but on interrupts activating functions in non-interrupt space.
I missed something. How does that work? If you provide a callback, how is it non-interrupt space?
Well, you usually have, I guess I said the while(1) didn't do anything, but the while(1) has to say, whatever function is ready to call, go ahead and call it.
Got it. All right, all right.
That could be fine.
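A rough sketch of that callback style: the interrupt only marks a slot as pending, and the while(1) loop calls whichever registered functions are ready, in non-interrupt space. All of the names here are invented for illustration.

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_CALLBACKS 8

typedef void (*callback_t)(void);

static callback_t    registered[MAX_CALLBACKS];
static volatile bool pending[MAX_CALLBACKS];

/* Application code registers interest in an event and gets back a slot id. */
int register_callback(callback_t fn)
{
    for (int i = 0; i < MAX_CALLBACKS; i++) {
        if (registered[i] == NULL) {
            registered[i] = fn;
            return i;
        }
    }
    return -1;
}

/* Called from interrupt context: just mark the slot, do the work later. */
void mark_ready_from_isr(int slot)
{
    pending[slot] = true;
}

/* The whole main loop becomes "call whatever is ready", at main-loop priority. */
void dispatch_forever(void)
{
    for (;;) {
        for (int i = 0; i < MAX_CALLBACKS; i++) {
            if (pending[i] && registered[i] != NULL) {
                pending[i] = false;
                registered[i]();
            }
        }
        /* could sleep here (e.g. wait-for-interrupt) when nothing is pending */
    }
}
```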
The trouble with function pointers gets in places
where there's security issues and things.
It can get really complicated.
Crossing system-to-app privilege levels and stuff like that.
But also, I mean, yeah, that's definitely an issue, but there is no obvious flow of control.
Yeah. Things happen asynchronously.
Yeah, that's how most applications are written now, by the way. I know, I know.
Speaking as a professional Swift developer.
Obvious flow of control goes out the window.
Well, that's the thing.
As an embedded system, you get to one of these weird callback systems.
Yeah.
And it's all just like gears turning around you and you can't see what's happening.
Yes, yes.
And it's very confusing, but that is
how most modern software is being
done these days.
Asynchronous and React
and all those things.
But, I mean, there's a place
for it. It's hard to understand, but I think
it can be understood, but I'm not sure
moving to that
for embedded systems is necessarily what we want to do.
Yeah. Okay, so there's that one: how to build your own operating system without having an operating system, basically.
Yeah, because that callback thing leads pretty directly to schedulers.
It's all kind of a hidden scheduler there. Starting that way is a great way to kind of learn how
things work in an RTOS
or to make a bunch of mistakes
and figure out how things don't work
in an RTOS.
Okay, so what's the next idea?
So another idea for
a basics episode are design
patterns and embedded systems.
Okay, so what does that mean? Because I've just
spent three weeks
reviewing actual
design patterns
as commonly referred
to. So things like observer
pattern and factory
and all
that good stuff. Is that what you're talking about?
Yeah. Oh, okay. I mean, that was
part of my book, was that I
wanted to talk about the design patterns that happen in embedded systems.
Because there are things that we do commonly,
and we have our own language for them,
but the standard software people have a different language for them.
And we all think we made them up, but it turns out they're pretty common. With Robot Operating System, which I've been using a lot, the whole thing is based on
subscribers and publishers. And so if you want to know the position of my robot, you subscribe to my robot's position. And I publish it at some frequency
with some message definition, and you receive it.
You can do whatever you want with it, and it doesn't affect me. I am my own little
node, and you are your own little node, and we
can do whatever we want. And that's the observer pattern. It is, in fact, the observer pattern.
And the observer pattern uses the words subscriber and publisher.
Okay.
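To show the shape of the pattern without ROS, here is a small sketch of the observer pattern using the same publish/subscribe vocabulary. None of this is ROS code; the pose type and function names are invented for illustration.

```c
#include <stdio.h>
#include <stddef.h>

#define MAX_SUBSCRIBERS 4

typedef struct { float x, y, heading; } robot_pose;
typedef void (*pose_subscriber)(const robot_pose *pose);

static pose_subscriber subscribers[MAX_SUBSCRIBERS];

/* Observers register interest; the publisher never needs to know who they are. */
int subscribe_pose(pose_subscriber fn)
{
    for (int i = 0; i < MAX_SUBSCRIBERS; i++) {
        if (subscribers[i] == NULL) { subscribers[i] = fn; return 0; }
    }
    return -1;
}

/* The publisher pushes updates at its own rate; subscribers do whatever they want. */
void publish_pose(const robot_pose *pose)
{
    for (int i = 0; i < MAX_SUBSCRIBERS; i++) {
        if (subscribers[i] != NULL) subscribers[i](pose);
    }
}

static void log_pose(const robot_pose *pose)
{
    printf("pose: %.2f, %.2f @ %.2f\n", pose->x, pose->y, pose->heading);
}

int main(void)
{
    subscribe_pose(log_pose);
    robot_pose p = { 1.0f, 2.0f, 0.5f };
    publish_pose(&p);
    return 0;
}
```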
But there are lots of patterns.
And it would be good to go through the patterns book and say, oh, this we use a lot.
But we don't say we use it a lot.
We don't say we use it a lot.
Well, that's why they're design patterns.
They weren't invented.
They were discovered, right?
Or they were emergent.
People are already doing this.
Let's give them a name and formalize them.
And so that everybody knows the usual errors that happen with them.
It's a formalization.
Yeah.
Okay.
And then the last one on my idea list is RTOSs, RTOS basics.
When to use an RTOS?
What's a process?
What's a thread?
What's a kernel?
Memory protection, maybe.
Maybe.
They have them on Cortex.
Well, they have like, I mean, the A's have them, but the M's have the sort of brain-dead version.
It's not that brain-dead.
It doesn't have virtual memory.
Okay.
But it allows you to prevent accesses to certain blocks of memory from other privilege levels.
So we need to talk about memory protection units and what they do and why they're important when you talk about RTOSs
and why they're important when you talk about having untrusted third-party apps.
Yes.
Or just being good about security
firewalling within your system.
You might not have an app situation,
but you might have a situation
where you have system code
and other code that interacts
with the internet
that you don't want
talking back and forth.
Can't trust that internet.
Yeah, so that one is the one I'm kind of most interested in doing,
but I could see doing the build your own super loop one first.
Yeah, but that one might be easier because it's more of a standard topic.
I mean, some of ours flags in mutexes,
and how do you debug multiple threads,
and that goes into asynchronous design.
I don't know.
I keep going back and forth.
And we have to find some poor sucker to talk to us about these things.
Scheduler.
Scheduler's not on there.
Round Robin versus...
Preemptive.
Preemptive versus cooperative.
Yeah.
Somebody was telling me about how important it was that it was time-sliced instead of interrupted.
And I was like, those aren't the words.
They are words.
Those aren't the words.
Time-sliced is interrupted.
Okay.
So, listeners, tell us what you'd like to hear next,
and then we'll find somebody to talk about it intelligently
while we make puns.
Yeah, I can do that. I'm all about puns.
Okay, that was the last topic. What are you going to be for Halloween?
I ordered a shirt so that I could be the scariest thing in embedded systems. The shirt just says "volatile" across the front.
I decided today that I'm going to go as a plant-eating man.
Which came out of him seeing a giant...
Man-eating plant.
Man-eating plant.
Yeah.
And he said he was going to be a plant-eating man like six times.
Yeah.
As we walked by this.
That was mostly because the first two times I had said it incorrectly.
Oh, I just couldn't understand it.
I just, it was like, okay, dear, that's fine.
You didn't get that.
And then I was like, well, so you're just going to wear your normal?
Yeah.
But you don't eat plants.
If I don't eat plants, I'm in real trouble because I don't eat anything else.
Yeah, that's true.
What do you have for candy?
What do I have for candy?
I have a bag of candy from Sweetwater.
Sweetwater is a musician's store?
It's an equipment online store like Amazon that sends candy with every purchase.
The same amount, no matter how much you spend, by the way.
If you buy, like, a Thunderbolt cable, they'll send you a little package of candy. If you buy, like, a custom shop Fender guitar, you get a little package of candy.
I think it should scale. I think you should get, like, a huge box of candy if you buy a guitar.
So they usually send, like, Smarties and the not-expensive version of candy.
Do you think the amount of candy should be the same?
But the quality of the candy
should go up for the larger purchases?
We could talk to him about that.
What else do I have for candy?
Nothing. That's it. That's all I've got.
Well, that's because I don't give out candy. We give out fake tattoos.
And glowy things.
And glowy things.
Also, nobody comes to our house.
Well, not now that we live in a scary house.
Our house is scary?
It is from outside.
All right.
That's the show.
That's the show.
I hope you enjoyed it.
If you're still listening, why?
No, no.
Just have a very nice day.
Happy Halloween yesterday.
Yes.
Happy November.
Welcome to November.
Yay, November.
November.
How is that possible?
But thank you for listening.
Thank Christopher for co-hosting and producing and making us sound good.
Which actually, on
the list of things that people talked about
in the
show feedback
thing, the fact that we sound good
is very important to them.
Good job.
And now I will
read Winnie the Pooh instead of
Christopher's scary, scary stories.
What?
Is that the end of the story? asked Christopher Robin.
That's the end of that one. There are others.
About Pooh and me, and Piglet and Rabbit and all of you, don't you remember?
I do remember.
And then when I try to remember, I forget.
That day when Pooh and Piglet tried to catch the heffalump,
they didn't catch it, did they?
No.
Pooh couldn't, because he hasn't any brain.
And did I catch it?
Well, that comes into the story.
Christopher Robin nodded.
I do remember, he said.
Only Pooh doesn't very well.
So that's why he likes having it told to him again.
Because then it's a real story and not just a remembering.
That's how I feel, I said.
Christopher Robin gave a deep sigh, picked his bear up by the leg, and walked off to the door,
trailing Pooh behind him. At the door, he turned and said, coming to see me have my bath? I might,
I said. It didn't hurt him when I shot him today. Not a bit. He nodded and went out.
And in a moment, I heard Winnie the Pooh bump, bump, bump, going up the stairs behind him.
Embedded is an independently produced radio show that focuses on the many aspects of engineering. It is a production of Logical Elegance, an embedded software consulting
company in California. If there are advertisements in the show, we did not put them there and do not
receive money from them. At this time, our sponsors are Logical Elegance and listeners like you.