CppCast - Teaching Embedded Development
Episode Date: February 25, 2022

Khalil Estell joins Rob and Jason. They first talk about Matt Godbolt's recent keynote at CPPP on C++'s superpower. Then they talk to Khalil about teaching C++ embedded development and some of his thoughts on embedded development, including why not to avoid runtime polymorphism.

News
- CPPP Keynote: C++'s Superpower - Matt Godbolt
- Visual Studio 2022 17.1 is now available
- Making a cross platform mobile & desktop app with Qt 6.2
- VSCode Map Preview

Links
- libembeddedhal
- San Jose State University GitHub
- Khalil's YouTube Channel

Sponsors
- Use code JetBrainsForCppCast during checkout at JetBrains.com for a 25% discount
Transcript
Episode 338 of CppCast with guest Khalil Estell, recorded February 22nd, 2022.
This episode of CppCast is sponsored by JetBrains.
JetBrains has a range of C++ IDEs to help you avoid the typical pitfalls and headaches
that are often associated with coding in C++.
Exclusively for CppCast, JetBrains is offering a 25% discount for purchasing or renewing
a yearly individual license on the C++ tool of your choice.
CLion, ReSharper C++, or AppCode.
Use the coupon code JetBrainsForCppCast during checkout at www.jetbrains.com. In this episode, we talk about C++'s superpower.
Then we talk to Khalil from Google's Wear OS team.
Khalil talks to us about teaching embedded development.
Welcome to episode 338 of CppCast, the first podcast for C++ developers by C++ developers.
I'm your host, Rob Irving, joined by my co-host, Jason Turner.
Jason, how are you doing today?
I'm okay, Rob. How are you doing?
Doing good. It's 2-22-22?
Oh yeah, it's two.
A lot of twos. Although I always write the year in two digits just because everything now,
I just use ISO date format.
So if I was signing a document today,
I would actually write 2022 slash 02 slash 22,
which I know no other human does,
but it works for us in America because the month and
the day are in the correct order. So no one in the States complains about it. And I get to feel
better about myself. It's good to have standards. Okay. Well, at the top of every episode, I like to read a piece of feedback.
I was looking through our YouTube comments, and we got these in regards to our recent episode with Brian Kernighan,
one saying Brian sounds like a really likable person on a human level, and another comment just saying "legend."
Certainly was great talking to Brian on the show a few weeks ago.
I saw a bunch of Twitter comments that were like, oh, he's a treasure.
That was a pleasure to listen to,
like all kinds of things like that.
He was fantastic to talk to.
It was great.
Well, we'd love to hear your thoughts about the show.
You can always reach out to us on Facebook,
Twitter, or email us at feedback at cppcast.com.
And don't forget to leave us a review on iTunes
or subscribe on YouTube.
Joining us today is Khalil Estell. Khalil is a software
engineer at Google's Wear OS, currently working on bootloaders, audio, and haptics. He spends his
free time as a volunteer staff member at the San Jose State University College of Engineering. He's a mentor
and sponsor of the robotics team at San Jose State University. And Khalil is also a creator and
maintainer of the libembeddedhal open source project. Khalil, welcome to the show.
Hello, nice to be here.
What is Wear OS actually?
Like, is that the watch OS?
Yeah, smartwatches, yeah.
Okay.
So like, you get like the latest Galaxy Watch 4, like that's using our operating system.
Okay, okay.
Is it directly related to Android at all?
It is Android.
Oh, it is Android, okay.
Just crunched down to fit onto a watch.
Okay.
Does it run anything else besides the watches, the Wear OS, or is it just for watches?
Just watches.
We have different kinds of variants of Android that go for different devices.
Okay, but now I'm totally curious.
What are the capabilities of a watch that Wear OS has to run on
compared to, I mean, a modern Android phone might have 6 gigs of RAM, right?
I mean, it can be ridiculous.
What's a watch like?
Yeah, crazy thing is that they're not,
there's a gigantic spreadsheet with all the different types of watches
and what their capabilities are.
For the low-end watches, you could see,
depending on which version of Android you're using,
maybe half a gig to a whole gig.
And then you can get some that have multiple gigabytes in there.
So yeah, it gets kind of crazy.
For the low-end, you can see pretty reasonable SoCs being used for it.
I don't remember any of their speeds or a lot of their characteristics.
But yeah, when it comes down to at least RAM and some storage,
kind of comparable to actual phones.
Right.
I feel like it was a day not that long ago.
Well, not that long ago to me, but now I just feel old.
Like 20 some odd years ago that you would just assume
embedded development meant like 128 bytes of RAM
on a
device clocked at one kilohertz or something. And now you're like, oh, well, embedded, I actually
have a full Linux operating system with five gigs of RAM. It just happens to be running on a watch.
Yeah, it's actually kind of amazing because like it's on there, like we were able to get
into that form factor. Like, that's crazy.
Yeah, it's really impressive.
All right. Well, Khalil, we've got a couple news articles to discuss. Feel free to comment on any of these, and we'll start talking more about the work you're doing in embedded development.
Okay, so this first one is the keynote from CPPP, which is now available on YouTube, and this is Matt Godbolt on C++'s superpower. I unfortunately made it halfway through the talk before we had to get on. It's a really good talk so far, and he's talking about what he thinks is C++'s superpower as being able to look at really old C++ code and kind of bring it up to speed incrementally.
Yeah, Matt had given this talk at my meetup as a preview a while ago. So yes, I have a meetup; it's mostly remote these days. I just had to say that out loud again. But I had actually forgotten what he talked about.
So I was rewatching it also just before this.
It is a good talk.
I actually love that talk.
I know Jason's done quite a few videos
converting C to C++.
I remember seeing a portion of the Doom one.
You didn't stick around for the 15-hour live stream?
Yeah, no. I saw a bit and was like, this is awesome. I'll watch it later. And then I remember
looking at it again and was like, that's a long stream. I was like, I'll think about it. And then I just got
other things. I love the cat also making an appearance on the
presentation. But I actually like the style of showing the
transition between old code and new code. I think it was actually
quite entertaining, kept me the entire time.
It made me also realize that maybe I should
learn a bit more about ranges. I don't
use them very much. I've had too many cases
yet where I needed them embedded, but I think
they may come up useful. And then
I didn't know about zombies, like the term zombies
for unit testing. I feel like I've done
a lot of that stuff before, but
I just never had
like an acronym to go with it. So I'm happy to know it. And if any of the viewers aren't sure what ZOMBIES is: Zero, One, Many, Boundaries, Interface definition, which I actually don't remember exactly what that means, so you can look that up, Exceptions, and Simple use cases. So I thought that was really
cool. And there's one other bit that I really wanted to talk about that he talked about, which is compiler error-driven design.
Oh, yeah.
That was cool, and we all do that.
We all let the compiler tell us what we've done wrong.
But I go one step further,
and this is from my time when I worked at Google Assistant.
I use, taking this everywhere else too,
I use CIDD, continuous integration-driven design.
So let's say I have a whole bunch of features,
and testing them takes
half an hour to an hour or something extreme like that
where you're having to build
a good quarter or half a million files.
Rather than
write your change, test your code on your machine,
I will just throw it off to the server,
have the server run
all its continuous integration steps, move to my other
branch, make my changes there,
shoot that off, and keep round robin until eventually
I get back to my original, and then check the CI
to see if it's there. And if I have more time to spare
and everything's still running, I'll go get a snack.
Yeah, this is something I've been doing for years now.
And it's maybe just abusing GitHub actions
and things like that, but it works out really well.
Yeah.
Out of curiosity, when you're doing something like that,
do you end up doing a squash merge after that, because you had a bunch of tiny commits that were just basically poking the CI?
Yeah. So in this particular case, I do this on my open source projects as well as when I was at Google Assistant. A lot of the features I try to keep separated. If they were the same feature, I'd probably just squash commit them and then send them off, because that's a big thing at Google: having a single feature in one single commit, not multiple commits. I usually just do an amend commit. Yeah, I try to do this only on
separate commits. If they're the same thing, then I test them all in one go.
The next thing we have is a post on the Visual Studio developer blog, and this is Visual Studio
2022. 17.1 is now available. This is covering all the changes in the new version of Visual Studio.
There are a couple highlights, I think, in here for C++.
Enhancements for embedded and RTOS make it easier to be productive.
So that's being able to look at embedded registers,
which I've never gone and looked at,
but seems like something that might be helpful for someone like you, Khalil.
Yeah, to this end.
So I saw that little blip.
I was like, oh, that's really neat.
That's really cool.
And I know most debuggers usually won't present that
unless you put it into some aspect of either as part of a local variable
or your memory map of some sort.
Although I don't actually use Visual Studio almost at all.
I use just normal VS Code, GDB from command line.
I use this thing called GDB dashboard.
If you or someone uses GDB on the command line,
I highly recommend looking up that Git repo.
So do you use VS Code, but with GDB from the command line?
I do not use VS Code's debug
system. I had used it before.
The reason why I don't use it a lot is
because back in the day I didn't have it, so I used gdb a lot.
Got used to that. When I finally got
it, I was able to use it. But now I'm on this
Mac, and this Mac has problems with running
GDB. I have the latest MacBook, so right now I'm just going
back to my old roots.
You use LLDB then?
Yes, yeah. So I
have been learning LLDB because I've actually
never used it until I moved over to
the Mac. The only thing I was going to say on this is that
normally I would just set mem inaccessible-by-default to off in GDB so I can
access arbitrary pointers, and just do print with the
name of the register, and then it'll pop up all the information.
So it's similar to how Visual Studio does it.
I have just a completely tangential question here.
When we're talking about Visual Studio,
and I mean this for everyone,
like Visual Studio 2022 17.1,
that doesn't even reference the compiler version, right?
So we've got the name Visual Studio 2022
and then its actual version number.
And then I think the compiler version is technically its own version number. How do we
reason about Visual Studio? Should we just call it 17.1? Is that like acceptable? Do we need to
say 2022 each time? I think of it as Visual Studio 2022. I forget what that version number, the 17, corresponds to.
That definitely, because Visual Studio 2019 was Visual Studio 16.
Right.
It's like a marketing name versus a version number.
Right. I think of, you know, the marketing name, I guess, is what I remember. But yeah,
what version of the compiler? Yeah, that is different from each of these as well. And I can't remember what the compiler version is right now. The next thing we have is a blog post
on making a cross-platform mobile and desktop app
with Qt 6.2.
And I think we talked about Qt 6 coming out a while ago.
A lot of big changes from Qt 5 to Qt 6.
And this is a nice post on setting up a project
if you want to go fully cross-platform,
both desktop and mobile.
And it's a really simple application,
doing the OpenGL, drawing a bunch of balls and objects in 3D space.
It's pretty cool to see them porting all these different operating systems
with one project.
Big article.
Yeah.
Yeah.
Looks like it's a good resource
if you want to get into
Qt development, though.
Like I was trying to read,
like, I think a portion of it
was like, this is going to be a while.
But I do appreciate the fact
that it is possible
to build one place
and have it deploy to other places.
That is my MO.
That's actually the reason why
when I first started programming,
I started with Java
because that was like their big thing.
It's like, oh, write once, it works everywhere.
Uh-huh.
So yeah, it's been with me since forever.
No, I'm totally curious now because did you then end up deciding that C++ is actually more
portable than Java? I know people who have come to that conclusion.
Yeah, yeah, yeah. So that's an interesting one. Because if I think about Java and try to put it
on any embedded system, I don't entertain that idea most of the time.
I would say that in a way,
yeah, actually it is.
I hadn't really thought about it that way.
I actually stopped using Java
just because a lot of the work I ended up doing
fit better with C++,
a lot of low-level operating systems type things.
I didn't know that I knew that,
but that's one of the other reasons
why I use more C++ than Java
is because it's in so many different locations.
And for what I'm developing,
this is exactly what it's meant for.
Okay. Well, that fits with our target audience here, so that's good.
I mean, there was a day that when Sun first released Java,
they advertised it as being able to run anywhere.
Well, one of the first Java world conferences or whatever,
they had the Java ring that was running a little JVM
that they like
gave out to attendees or something that like made like a big deal about this. But then
it seems like that notion dropped off pretty quickly. Yeah. So I had one more thing that I
put in the show notes right before we went on. So I don't know if either of you had a chance to look
at this, but I, one of my coworkers sent me this. It's a VS Code map preview.
And if you were really interested in our discussion last week
where we were talking about GDAL,
if you're into geospatial development,
this is a handy-looking VS Code extension
where you can open up different formats like GeoJSON,
which I think is one of the formats we talked about last week with Matt Butler,
open it in VS Code and be able to visualize it on a map.
Just seems like it's a really neat and easy to use
for previewing some content.
That's geospatial.
And I think anyone who's done trail mapping
would be familiar with GPX as well.
Yeah, GPX, KML is another format it supports,
which is what Google Earth exports.
It's not a really common one.
All right.
So, Khalil, let's start talking about how you got into embedded development.
You mentioned that you currently work on Wear OS.
Yeah, yeah.
Kind of rolling it back into my starting.
So I actually got started with programming at around about 13.
My uncle Alonzo got me started on this.
And right when I hit around senior year of high school,
I was writing some programs and something just kind of hit me.
I was like, how does any of this work?
Like I do System.out.println or do printf.
And I'm like, things come out on my console.
But how does that even work?
And how does my terminal work?
And I started like really digging deep into realizing
that I don't understand anything about this computer that's in front of me.
I can tell it to do things, but that's pretty much it.
So I went on to OSDev and started looking at how to make my own operating system so
I can learn it from scratch.
And I made some progress, made a small little operating system that you can work with, which
was really neat.
And a little bit of a side story that goes into this as well is I was applying for colleges.
I put in computer science.
I go to get some food for my mom.
And she's like, what did you apply for? What college did you apply for?
I said computer science.
She's like, you should go for computer engineering because your grandfather was an engineer.
You should go for engineering.
Like, you know, you should be an engineer.
Don't be that scientist.
I think, no, I'm a scientist.
I'm a biology major.
No, you should go for computer engineering.
So I was like, okay, all right, fine.
So I go back.
I didn't look it up at all.
I just kind of, oh, I go into San Jose State's admitted students day, kind of like the open house, and see the major. And like, we do embedded systems. They show us all the projects, all the things you could do, working with robots and working with, like, you know, all these things that got down to the metal. And when I went to one of their talks, it's like, yeah, when you come into our department, you'll learn about how to build a CPU from scratch, you know, how to do computer system architecture and all that stuff, as well as also learning how to make the software. I was like, this is literally what I've been asking. It's the question I've been wanting to know. So that is what kind of started me off into being interested in embedded. And then later on, I started getting into robotics, and that's pretty key into embedded as well. And that
kind of kept me rolling.
Fascinating. I never stopped to think about that before. I didn't actually ever ask that question of, like, how does this work
until probably
the year before
I started at university,
even though I had been
programming since I was a kid. I just accepted it.
I never stopped to say
how or why.
I think there's
something just kind of clicked
and I was like,
maybe someone asked me
a question about something
I didn't know.
And I was like,
wait, hold on.
How does that work?
And I realized that none of the things I've been learning thus far
had given me any insight into that. And I was like,
tell me something. Someone had to have built this.
I've got to figure it out.
The tiny operating system you wrote,
was that hand-rolled x86 assembly?
What were you doing? There was a small
little portion that was x86, and the rest is C.
I didn't even think of the idea
because I think all the manuals that were all about
doing an assembly in C. So I didn't even think
about doing C++ with it.
But now, if I was to do it again, I would totally write it in C++.
-fno-exceptions.
There are definitely limits if you want to write boot code in C++.
Yep, yep.
So you went to school, you went for the computer engineering major,
and you now spend some time as a volunteer staff member at the
university. Is that right? Yeah. So tell us more about that. You do some teaching of embedded
development to other students? Yeah. So there's a course, Computer Engineering 146. Its name is
real-time, actually, this name is kind of confusing. It's like real-time embedded
co-design or something like that. It's basically the embedded class where we teach concepts like
GPIO, what pins are,
voltage levels, SPI, serial communication protocols, RTOS, and specifically we use FreeRTOS because it's free. And honestly, every place I've worked at usually uses FreeRTOS anyway.
Along with teaching that class, I also kind of help out with some of the other coursework as
well. I helped out with updating classwork for the courses that go into FPGA development. I've
helped out with classes that have to do with computer architecture design.
I really love that class where you say, here's a CPU and here's some peripherals.
Wire them up so that they all work and do memory mapped IO and stuff like that.
Do a simulation and prove it out in an FPGA or something along those lines.
It's really cool and you get to really feel everything from the bottom all the way up.
I've done a lot of work with that kind of class and some other little things with working with some students and summer projects like that. And that's kind of like the majority
of what I do for the class. Oh, I will say that one thing that I love to talk about is just if
at the end of the class, you have six weeks, I usually have six weeks to do the final project,
not because it takes six weeks. It's because like, I want students to be able to like live
their lives and do other things for college. Cause yeah. And you build an MP3 player.
And the cool thing is that in your class
every single lab you build another driver, and I tell you, whatever driver you build here is what you're going to use for your final project. So make sure you wrote it right. Make sure you follow the guidelines that I have here, because if you don't, it'll cause issues. You're able to edit them later on, but you have to edit from where you are, and if you make a dramatic change, you have to talk to me, so I'm making sure you're not cheating, along those lines. That whole combination setup makes them so much more interested in making sure they get the lab work done and that they understand how everything works, so that when it gets to the final project, they can blow it out of the water. And I've always been super impressed with what my students have come up with.
That's just an awesome idea. Do you ever have the situation where someone gets to the end and they're like, well, you know, I was never able to actually complete X driver, therefore my project just doesn't work? And like, how do you address that?
So there's two ways. So each team has a partner. You have one person, you have a partner, and we have these people combined, so it kind of helps blend that together. But if they're in a situation in which both teammates did not work on that same driver, I will tell them, you're going to want to develop that driver yourself. And the later ones are the harder ones. So hopefully you develop those ones when you have
the time to, and not later on. I do have, like, leeway for people who can't make it to certain things or can't get the lab done at the right time, for, you know, other reasons outside of school. I've had a couple students who couldn't get a certain driver done, and they implemented it in line with the project and used it in the actual project.
Now I'm curious, too. When you say they make an MP3 player,
like what is the end result of this?
Like how much of the hardware, software
did they actually like assemble or whatever?
So I built this board for the class.
It's called the SJ2 board.
It's derived from another board called the SJ1,
which another faculty member had made before.
So you actually like designed that board?
Yeah, like everything about this board is my design, taking off from the SJ1. Oh, it's probably getting blurred right now. Yeah, you've got to watch the YouTube video, everyone.
Okay, so what is that, like, two inches square-ish? Like four square inches total, maybe?
Yeah, about two inches square.
Okay.
My board has, like, a couple of buttons, LEDs, IO expansion, Wi-Fi port, screen, a lot of other sensors on there as well.
And the students are kind of given this to develop on.
And there's a particular MP3 player module that you have to talk over SPI to.
You have to use one of your drivers that you built to talk over to it.
So it's a little hardware module.
Yes, a hardware module.
So you're kind of dictated if you do this particular project to use that hardware module.
And the only things you're really given is: I give you an SD card driver that I wrote, the file system driver
that ChaN made.
And I think
that's pretty much it. Everything else you have
either written already. Oh, and the boot up
code is the code I gave you.
But everything else is drivers that you've developed yourself.
And you just apply those to the project.
So the end result is they're going to plug in an SD
card with some MP3s and connect
a speaker to it and actually listen to it?
That's really cool.
A hundred percent, yes.
And that's RTOS on there, sorry?
Yes, and there's some rules on how you have to do multi-threading in order to get the performance just right and stuff like that.
And they're doing this in C++?
C++.
Nice.
Was there a display on there so they could see what track was being played or something?
Yeah, so there's a display on the board.
And then for them, they can also choose their own display. I've had a lot of students who decided, there's this other really cool display where, like, all you have to do is put in the SD card, it does all the reading, and, like, you send it some command that tells it to print this image in this location, and it'll do it for you. And I'm like, that's fine. I'm cool if you can find some devices that do some really cool graphics stuff. I've seen some people do some really crazy graphics work for the MP3 player, and I love it.
Has anyone done anything like, because I know I've seen, like, Arduino
projects that do this, like direct-drive a VGA display or something like that.
There haven't been any direct-driven VGA displays. I think I had one person, a coworker of mine who also was from
SJSU who would have very much done that if he was able to go back in time and take it after I
started teaching it. But no, nothing like that yet. That would have been super dope. I did write a VGA
controller for students to use for the FPGAs, so they can do it for that. But for this project,
no, no one's done that.
That's a lot of high-speed IO port pin twiddling there.
Yeah. That'd be awesome, though. Look, I got it working. It's only eight pixels because that's how fast I was able to drive the IO pins, but it works. And then really, that's all that matters sometimes, you know?
So when you're teaching these classes and you said you are programming in C++, are there any like specific techniques that you kind of wanted to share with, you know, how you teach them C++ for embedded
development? I haven't been teaching in, like, the last couple years. COVID kind of started up, and then
we had another couple faculty members who were taking over the class. And as a volunteer staff
member, they kind of have more priority, since they were, you know, paid individuals. And I
really wish I could have gone over SOLID principles. Keep in mind that, like, some students were very
afraid of C++, because we do use it in one or two classes.
But after that, a lot of classes don't really need it.
So I try to give them a nice introduction to encapsulation, making sure they understand how to design an API and what makes sense to put in the API and what makes sense not to put in the API.
So it's kind of like the rudimentary basics of software design.
And then making sure that they aren't doing something silly.
Like, for example, I had some students who would cache the value of a pin,
and then I'd ask them, well, why would you cache it?
Because you should almost always ask the actual peripheral what its state is,
because if you cache it, what are you going to return?
This cached version or the actual version?
And these are things that some people don't even think about.
They think, oh, I'm just going to cache it now,
and that'll make it easier for me to use for later.
But then they read it once, and then they never read it again.
For my students, it's really just kind of getting the idea of encapsulation
and also explaining to them why using C++ and embedded
is not going to blow up their file size.
A lot of people assume that,
but there are things you can do to make sure it doesn't do that.
Don't use iostreams, ever.
Yeah, don't even include iostream, or you've got some static globals that are instantiated.
It's 150 kilobytes added to your memory.
It's 150 kilobytes added to your memory.
Just including it?
Just including it, yeah.
That's special.
That actually raises an interesting question to me, because then something like libfmt
for actually formatting strings that you want to display on your tiny embedded display could
be useful.
Does that get used?
Yeah. Well, so I haven't seen anyone else use it, because it's still really new, and people are kind of not sure. Most people are going to be doing printf stuff, right? I have a couple of defines in libfmt, or fmtlib, just to optimize it a little bit further. So if you turn on all the flags and also remove floating-point support, you can very well get the size down to 12K, 12 kilobytes.
But if you're going to use printf,
it'll go down to, I think, like 4 kilobytes
if you have no float support.
If you add on float support, it goes up to 19 kilobytes.
So it's pick and choose on what you really want to do.
You can very well get fmtlib onto your embedded system.
I've done it before.
And since we were all just talking about Matt's talk
at the beginning of this,
and he talks about some of the issues of using sprintf and overflowing your buffer,
does that kind of conversation come up when you're teaching these students?
Oh, absolutely. 100%.
So the idea of how to design your stuff, for example,
he had that one buffer that was static to a particular function,
and if anyone else goes to try to capitalize something,
that would result in a whole bunch of shenanigans
if you have anything multi-threaded.
So I talk about, here are certain resources that you need to check,
and I ask them, does it make sense to have a mutex
guarding the capitalize function,
or should you maybe come up with a different way to implement this?
Maybe just pass it as an out parameter.
It depends on what we're doing.
A lot of times we don't use the STL or things that allocate in embedded, so I tell them
you should shy away from that. You could use std::string; it will fit, it'll work for your class. But
normally, if I was telling you how to make it efficient,
you could just pass it as an out parameter. Those are some of the options
that they have available to them, but
that is the kind of stuff that I would talk to them about
so that they don't make the same mistakes when they work on the project
also when they go out into the field.
You mentioned multithreaded and RTOS,
but are these actually multi-core processors that are being used?
No, I think the main thing is that usually,
probably capitalizing whatever string you're capitalizing,
would never hit the boundary.
But if you just started it right before the system timer hits,
and then basically swaps you to somewhere else,
then now you have problems.
Oh yeah, you still have a race condition, yeah.
Yeah.
So, I'm just thinking
back to this mp3 player project,
and you said they need multithreading
in some cases to get the performance
of that done, in that case just mostly about
task switching, like while you're waiting
on I.O., go do this other thing.
Very much so, yeah.
With the mp3 player, is this part of computer
engineering or computer science?
Computer engineering specifically.
Okay.
I think I learned, like, a month ago that they may have, because of COVID and the chip crisis, because you can't make more of these boards, demoted it to being, like, an elective, versus it was actually part of the core of what you learned as a computer engineer at San Jose State. But it is part of the computer engineering department.
See, now that's really unfortunate, because, you know, there was some contingent of our listeners
that were just thinking,
how can I order Khalil's board?
And you were getting ready to start your own side business here.
And now that's just not possible.
Yeah, unfortunate.
Yeah, the LPC chip is very popular.
I think especially in the automotive space.
Because of that, you just can't really find them anymore.
Go on Mouser or something like that, and there's maybe 20, and that's it.
And those will be gone if you refresh.
Did you use PCBWay's board-populating services, or what?
How do you actually build these things?
Oh, actually, for that one, I use...
God, I'm forgetting the name, because it's been two years since I've used them,
it's a group called Next PCB.
Don't quote me on that.
I will probably put it in the show notes later on. Yeah, I don't actually use PCBWay.
I usually use JLCPCB when I make prototypes. Okay. But when I actually want to get everything
properly made and manufactured at, like, the scale of 250 to 500 to 1,000 boards, I think I use Next
PCB. Yeah. And so it's relatively easy? I'm not, like, trying to get you to sell a particular brand
or anything, I was just curious. You do use some sort of one-off
kind of service where you
can say, I want the board with these components
and this design, and they'll do everything
and then ship you back the completed board.
Yep, pretty much. How much does it cost to prototype
something-ish? Prototyping is expensive
if you want to get everything put in. So for example,
I think when I got a set of five boards,
it costs, with everything else and all the components, I think it might have been $400, $500.
Okay.
But when I buy in bulk, when I buy, let's say, 250 or 500 boards, I remember it being somewhere
around $4,000 to $8,000.
So you can see that it doesn't scale completely.
It trails off.
So when you buy in bulk, that's why I usually try to use things like JLCPCB, the small ones
that I can just at least solder most of the parts on.
Right.
And then test things incrementally. There's some parts on here that
I cannot get to because they don't have any pins available. We have some of the things that we can
do to put this up. And I never know if it's like, is it my soldering issues or is it my design that's
wrong? So you have one of these hacked toaster ovens that I see people do on YouTube so you can
do reflow in your living room. Oh yeah, right in my living room with no ventilation.
I know friends who've done it. I have not done it yet. I've been planning on
doing one of those things for my own sake, but I haven't gotten it yet.
I feel like I should clarify here for the sake of our listeners. There's, I believe, a completely open source
project that is a toaster oven hacking module
that you put in temperature sensors and add a fan
and temperature sensor to your toaster oven and mount this thing on the outside of it. And then
you can program what you need for reflow. So if you've got the board with the solder mask on it,
you can put the components in, put it in there, put the program and hit go. And you just took
your $30 toaster oven and made it into a solder reflow machine
and then you can prototype your boards at home.
It's pretty cool.
It's really cool.
I want one, but I have literally
no use for it at all. I don't even know the last time I turned on
my soldering iron. I mean, I have a decently
good soldering iron for minor things,
but anyhow.
We'll interrupt the discussion for just a moment
to bring you a word from our sponsor.
CLion is a smart cross-platform IDE
for C and C++ by JetBrains.
It understands all the tricky parts of modern C++
and integrates with essential tools
from the C++ ecosystem,
like CMake, Clang tools, unit testing frameworks,
sanitizers, profilers, Doxygen, and many others.
CLion runs its code analysis
to detect unused and unreachable code,
dangling pointers, missing typecasts,
no matching function overloads, and many other issues.
They're detected instantly as you type
and can be fixed with a touch of a button
while the IDE correctly handles the changes throughout the project.
No matter what you're involved in,
embedded development, CUDA, or Qt,
you'll find specialized support for it.
You can run and debug your apps locally, remotely, or on a microcontroller, as well as benefit from the collaborative
development service. Download the trial version and learn more at jb.gg slash cppcast dash
CLion. Use the coupon code jetbrains for cppcast during checkout for a 25% discount off the price
of a yearly individual license.
I guess maybe we should go back to C++ code a little bit.
Do you want to tell us some more about the types of things you teach in this class?
One of the things that's kind of different between how I design my APIs
versus how other people look at it is my use of runtime polymorphism
versus static polymorphism.
And this has been an interesting topic that I've spoken with a lot of people about, and I hope you guys
enjoy this part. One thing is, I'm a big proponent of runtime polymorphism. I use virtual.
I understand that when you use virtual, you want to keep your API to the lowest number of
functions that makes sense for your particular project or particular class, because the more items you add, the bigger your vtable is, and then you
have that duplicated over every implementation inside your code. So that's one of the things I
talk about in my class: when you come to your API design, if you're going to do something that is
polymorphic, let's say you have a GPIO, a pin that can be high or low, you should not have a virtual
function for set high and another one for set low.
It should just be set, or level, or similar designs, or voltage, and then you put in a bool. Usually some
people say, you know, bools aren't the best for parameters. I say, if level is true or
false, that kind of fits the semantics of what you're doing when it comes to the output of the
pin. Yeah, I'm not going to argue with you, as long as it's one bool parameter. If it's four bool
parameters, then... Oh yeah, that's awful.
Those are the kind of things
that we kind of go over, and I talk about the impact
of what virtual kind of has.
And I say, hey, if you want this capability...
Before, I would say add a method, a class
function, that is set high, set low,
so it's like a little helper, so people
can just call that versus having to pass in
level and the value. I would now say that,
if we ever get unified call syntax put in,
you can just have your own free function that takes your interface as an input parameter,
and then you put your true/false on the other side,
and you can basically call that
as if it was a class function.
That would be great in the future.
But for now I just tell people
to have them as separate free functions,
just in case we get this feature in the future.
It allows you to extend an interface
and also allows you to not bombard a single interface
with so much detail.
Because the moment you're like,
oh, I want this other cool, useful thing,
you're going to think,
oh, I'll just add it to the interface's set of functions,
and that just gets gross,
and the number of includes you have, and so on and so forth.
So that's one part when it comes to API design.
There's another thing when it comes down to static polymorphism that rare students
who are really clever are like, well, can you just do it this way? And I'll say this.
So my biggest reason for runtime polymorphism over static, and this might sound
oxymoronic, is for size. You may think, wait, hold on, shouldn't
static be smaller, usually? Well,
you kind of run into some issues. A lot of times when
nowadays, since we have concepts now,
you could go, I have a concept of a pin
and here are the different set of functions.
Anytime you use a concept inside a function parameter,
it turns that entire function into a template.
Not too bad, that's fine.
Now let's say you called one of the
functions with one physical implementation,
another function with another implementation.
Assuming optimization is all turned off, you'll have two
different instances of that particular function.
If you have n of them, you have n instances.
And if you have multiple other functions that
take that parameter, you'll get
m times n: the number of functions you
have times the number of types you're trying to put through them.
So that starts to grow. Whereas if you were using runtime
polymorphism, you wouldn't have to worry about that size increase.
All the call conventions are exactly the same.
And I think one other thing to add in is that
when it comes to embedded systems,
you're usually compiling with -Og for debug but reduced size,
-O2 for higher performance,
or -Os for smaller size,
and you almost never see -O3 used
unless you need more performance
and you're okay with increasing the binary size,
because it'll almost always be bigger than -O2,
and much bigger than -Os. So keeping the size down is really important for a lot of projects
because you won't always have, like, half a megabyte or a megabyte of flash space. You
may have 64 kilobytes, in some cases 32, in some cases 16. It can get pretty small. And here's the monster
I want to paint for this because it gets a little complicated, but I think it's important to bring up, which is what happens when you start to scale and when you have multiple implementations of the same concepts or same types.
Let's imagine that we have some four-layer system. At the base, we have our pin. Cool. We have some driver on top that uses that pin for some reason. You have another driver on top that's maybe part of a... it's another high-level driver, not exactly a system, but it's still another high-level driver that needs that particular
other driver to work. And on the top, you have your system controller, like your big system
controller that talks to all the things below it. If you were to use static polymorphism all the way
up to the top of this, and let's say your system is meant to be modular so that you could swap
implementations for different things if you needed them. For all four of these layers, let's say you don't need just one
version of your system, you need multiple variations. If for any one of those variations, you change
the implementation for the GPIO, you now have several new implementations pop
up out of nowhere inside your code. And as you continue to add more and more of these separate
implementations, you start to grow and grow and grow. Let's assume the compiler can't inline everything for you.
You now have a lot of code, a lot of the same algorithms, but just in separate places because
there are separate instances of that. If you do do a lot of inlining, as you might know,
if you have a very large function and you try to inline it, let's say you call it five times in a
row, inlining it is going to be a lot larger in your code size,
duplicating that same code over and over again
versus simply having a single function call that you call back to.
And this is something that I actually noticed in some of my own projects
where I did a lot of CRTP before we had concepts
in order to make this thing happen.
And as I started to see this monster start to grow,
I'm like, this is becoming a problem.
But this is supposed to be faster. It is faster.
It is higher performance.
But my code size is going to get pretty big and I want to add more features.
And I hadn't hit the limit of my flash yet, but I was getting kind of worried about that.
And that's why I started making the switch over to runtime polymorphism. And then I realized that
things like people say like, oh, if you use virtual, you are forced to use RTTI. That's not
a thing. You just turn off RTTI. You're good to go. It's not necessary. If you want to do
dynamic_cast, you need it. But otherwise... I've heard people say that virtual forces exceptions.
I don't know where that comes from. I've seen some articles on that. But yeah, like I've heard
that too. And I was like, that's not a thing either. Also, just turn off your exceptions.
The RTTI one, I've never heard people say that requires exceptions. That's interesting.
I'm not arguing with you. I'm just saying that's interesting.
Yeah. So those are the kind of things that seem like myths that aren't, for the most part,
true. Then there's the things that are kind of true.
Like, for example, it's slower. That is true.
And there's the other part, which is that
it adds more space, and it will
add more space on small projects.
If everything gets inlined with the other code,
your static polymorphism version will
be smaller than if you had a vtable
plus an actual function call plus your actual
code that's calling it. But once it starts to scale up, it starts to invert. And that's what I started to notice. And
that's why I went over to runtime. But here's the crazy part. Let's say you want to have static
polymorphism performance. You can still do this with runtime polymorphism. You can still use
virtual for this. Just take that object as its type. Let's say I have a GPIO implementation for the LPC chip here.
Pass that into a function that takes it as a template and use it that way. Your compiler
will de-virtualize it almost always once you bump it up to optimization level one or above.
And once it de-virtualizes, it realizes, oh, I have all information about the function's
implementation. I'm just going to inline it for you. So to me personally, if I use runtime
polymorphism, if I use virtual, I get both the
runtime polymorphic version. And if I ever want to go over to static to get the performance up,
I can just simply do that by making the thing a template or make a concept for it, pass it in that
way. Do you use or teach final to give the compiler more hints about when it can de-virtualize
something? And when is it appropriate to use final?
Because it's hard to determine when it's appropriate as well.
Yeah, I think for the longest time,
so I would say now I'm still kind of figuring out
when is the best place to put final,
just because I've had some students like,
hey, I want to extend this particular class to do X, Y, and Z.
And I'm like, do you really need to?
Yeah, they had one particular point.
I was like, actually, that makes sense.
And hmm, okay, sure, go for it.
But in the class, I would tell people,
yeah, there's not really going to be any deeper level of driver for this.
If they needed to do anything more,
they should probably edit the driver itself and that kind of thing.
So I would actually tell them, yeah,
after you've inherited from the GPIO interface,
or the output pin interface, mark it final.
But that's the thing that I still battle with myself.
Although, one thing I just don't want to leave on the table here: you mentioned RTTI and exceptions, but I think most
people really associate dynamic polymorphism, virtual functions, with dynamic memory allocation.
Is that necessarily true? Yeah, so the answer to that is going to be no, because, as someone who
literally has all my builds check to see if malloc is ever even a symbol: it's not there. I have no problem with new. I think dynamic memory
allocation is actually fine in embedded. I just think that, like, my drivers should be the ones to
do it. Sure. If I can keep it static, it should be static, or, like, non-dynamic allocating. Right. I
remember I saw an article years ago where someone said that, and I forgot about that fact
too, and I don't remember why people thought that. Well, I think it partially comes down to, like,
generally often if you're using dynamic polymorphism,
it's because you want to shove a bunch of these things
into like a vector.
And then the only way to do that
is to have a bunch of like unique pointers
or shared pointers in the vector or whatever
that then point to the generic interface.
Yep, yep, yep.
So you create these things locally on the stack,
and then you just use polymorphism just to simplify the calling interface.
Exactly.
So you mentioned not using exceptions, not using RTTI.
How do you deal with error handling when writing these drivers and everything?
Yeah, I actually asked the guy who works on this project
if I could name drop him in this talk.
He's like, yeah, absolutely, dude, go for it. It's a free country.
So I use Boost.LEAF. And okay, not gonna lie, when it comes to error handling, my main thing was always exceptions. I actually love exceptions. I know a lot of people
hate them. I think exceptions are great just because of the fact that I don't have to muck around with
my return types or my input types for my parameters. I can use any type I like, and in the documentation
I can say, hey, this thing can throw this in this situation. I can inform people about what this
thing can throw. I know there's a lot of reasons why people think, you know, exceptions are terrible,
but I've used them before. Actually, my old development environment, my old framework that
I was building, actually used exceptions, and I was so all-in on exceptions that I would
recompile the ARM GCC compiler for all the platforms, Windows, Mac, and Linux, so that you could actually turn it on and then actually use it there.
Side note for anyone who doesn't know this, if you try to use exceptions in ARM GCC, by default, the compiler is compiled with exceptions turned off.
So it will just terminate the moment you try to throw something.
Yeah, the moment you try to throw something, it will terminate.
Now, I used other things. Like, I did the global variables, I did the return integer, like the errno solution. Yeah. And I think we
all know, like, that's just not a helpful way to get information out. I just... std::error_code.
error_code has one big fatal flaw that makes it kind of difficult for anyone who's embedded to use it.
You're talking about the one in the standard library, right? Yeah. It has a virtual message
function. One, message being virtual is fine. Its return type is std::string.
which is fine. Its return type is string.
So I can't change the allocator
for that. I don't have control over that.
I am stuck with the default new
delete allocator. Really? Yeah.
Huh. If it was templated
and I could control the allocator that
way, that could work. But it's not.
I'm sorry. You actually can't have a virtual template,
so it would never work that way. Sorry.
So because it returns a string, I'm
stuck with needing
to have memory allocation there.
That's one problem with std::error_code.
Let's say I ignore that. Let's say I'm fine with that.
There's another problem with
std expected. I think std expected is actually
still useful in a lot of cases. Just when it comes
to interface design, I had a lot of problems. Specifically, what do I make the error type?
For example, I could have an error type, a whole bunch of custom error types for GPIO, and
I learned about this problem I was going to have, which is called the composition problem, or the
implementation problem, where everything in my interface can be rebuilt using I2C.
That one communication protocol pretty much does everything.
So if you want to create a GPIO...
normally GPIOs don't fail.
When you try to set them high or low,
it's not really a failable thing.
But when you introduce I2C, that could fail.
So now I need to be able to transport that.
If my implementation uses I2C
to communicate with some chip external to my board
and wants to twiddle some bits that way,
that could fail. And then I need to report that.
How do I report that through an interface that does not allow reporting of it at the moment,
if I assume that GPIO setting high and low doesn't return an error?
So I need to return std::expected, probably void for the set, and an error type on top of that.
Now, what do I include in those error types for the enumeration?
There's an error code type for whatever enumeration I have.
I can have all the list of I2C errors and all the list of PD-ROM errors and whatever else I can put in there.
But the issue comes, well, one of them that happened to me was,
some student was like, when someone working on the code base was like,
hey, I have a problem here and your code's in a different repo,
so I need to add a new enumeration.
I need to make a PR to your repo to add another
field to your enumeration because I have an error that's outside the range of what you've already
supplied. You didn't think of every single possible error. And you can't really use the
POSIX errors because a lot of them actually do kind of work, but some of them don't really fit
embedded because they're all about like off-price system stuff. So with all of that, I ended up
realizing that you have this central place for where all this information for what type of error
can occur. You have to keep updating and updating and updating. And what I really want
is people to just be able to throw whatever errors they like. If they come up with a new error
category, I want them to just be able to express it versus needing to talk to me or to work on the
framework to say, hey, I need to make this change. And if you have control of the code, you just
notice that that particular information is expanding, expanding, expanding. They may even
see that, oh, these two actually are the same thing.
We have now two separate names for
the same type of error condition.
We have to just live with that. And people are going to ask,
which one do I choose? Whichever one you like,
whichever one you fancy, whichever is shorter.
So there were other things with std::expected that I had issues with, like
some compilation stuff that might have just been on me
and me doing some funky stuff.
So going back to Boost.LEAF. Boost.LEAF gives
you that capability, in C++11, without
exceptions. You can throw any type you like.
Well, you don't actually throw; you return a new error.
Does it ever do a dynamic allocation if
it's like a big object or something?
No, it never does a dynamic allocation. It's always
stack-based, unless you choose to dynamically allocate it.
That's a great question because
I'll get into that a bit, like how it's done, and I think it's
super clever.
But you do need to modify your return type,
so we still haven't stopped touching the return type.
It's now boost::leaf::result, and in the angle brackets
you put whatever type it is.
In this case, void or bool, whatever you're interested in.
What that type is, is just simply
the type that you specified and a bool on top.
The bool just tells you whether an error occurred;
it does not have any information about the error in particular.
Here's how you capture errors.
You have this thing called try_handle_some or try_handle_all.
You give it a lambda. The lambda is like your try scope.
Then, after that lambda,
you have all of your catch scopes,
which are just more lambdas.
For the variables you put as the parameters for those scopes,
once you put in a new handler,
Boost.LEAF will figure out what the parameters
are, then allocate those parameters
on your stack, and point
the LEAF system at those variables,
so if any calling function deeper
in your code ever throws an error, it
writes directly to your stack above. So when you
actually make your way back up to the top of your stack,
that one's already been filled out with the error information.
He has this thing called O(1)
transport of errors, because
the moment you throw it is the moment it gets written to.
And all you do is make your way up
before you can actually see what it is.
Now people can throw arbitrary things.
You can now handle arbitrary things.
There is one caveat, which is that
if you forget to catch something, it doesn't terminate
or abort or something like that.
That's one of the issues. That's kind of how Boost.LEAF works.
And because it's not allocating, it's very dynamic. You can choose however you want
to do your errors. And there are a lot of cool features with it. I found it to be one of the best
ways to couple all the embedded things, all the worries people have for embedded when it
comes to error handling, and have it work for the embedded space and be as flexible as
exceptions. It's like the author hand-rolled return value optimization ABI calling conventions, basically, because the topmost handler needs to have a
pointer, somewhat globally, so that any of the lower code can basically write stuff back. So each
thread needs to have its own instance of each one of the different types of errors that
could come up. Does that make sense? I don't know if I said that all that well. It did. I just realized
that that technique probably would have solved some of the problems I had in ChaiScript
if I were still maintaining it.
I know you
are working on a framework for
helping embedded developers. I thought
I would give you a chance to kind of talk a little
bit about that. If you're ready to release
it publicly or not.
It is not ready to be released publicly.
I'll talk about it, though. It's called libembeddedhal.
The idea, actually,
to kind of shine a shortcut through some stuff:
it's an implementation of Rust's embedded-hal.
They have this library, this crate,
that's supposed to be like an interface
for all their embedded stuff.
And I remember I kind of broke down that day I saw it
because I was like, this is what I'm building.
And they already have so many people
already working on this.
I'm like, why doesn't C++ have this?
I'm so mad that C++ doesn't have this.
And I even considered at some point,
I was like, I was reading the Rust book
and I was looking into it.
I was like, was I wrong this entire time?
Then at some point I spent some time soul searching.
I was like, you know what?
If anything, C++ should at least have this available.
I haven't seen anyone else do it
because most times they'll tune it
for a very specific architecture or very specific,
like, oh, we just work just for ARM.
We just work just for RISC-V.
We're not going to try to expand it to work with anyone else. Well, I want to support everyone.
I don't care. I'm not just for ARM. I'm not just for RISC-V. I'm not just for Arduino or AVR. I
want it to work for anything, including SBCs as well. I don't think there's any reason why
you couldn't have your code work in a Raspberry Pi. Or I did an example where I was able to blink
my caps lock on my laptop. There's no reason why you couldn't have, you know, GPIO be represented by the
caps lock LED. And the idea here is to make a general set of interfaces that can be used basically
anywhere, so people can write their algorithms however they want to, and go, oh, I'm gonna ask,
does your chip have drivers that are libembeddedhal compatible? Because if they are, I can just do
a Conan install of your thing
and then get all your libraries in
and then start using them immediately. And that's kind of
the hope is that I want to make development
for embedded as simple as what I used to do
when I worked on Node.js a lot, or Python:
pip install, npm install, get the driver in,
start using it immediately, don't have to think about anything further than that,
just start going.
When do you think it will be ready to release publicly?
Honestly, so like I take a lot of time
to make sure that I got things right.
And even the last like year,
the counter class has been one of the things
I've been fidgeting with for like the longest time.
Just a hardware counter.
And I would say like,
I'm hoping to get the first version in Conan
maybe in the next six months,
like the next half a year,
just so I can like get some of the students
at San Jose State,
the robotics team helps me out with this and they work with stuff there. Side note to anyone who's
viewing this, I think one of the most useful resources is having students look at your code
because I got people who are experts looking at my code. They'll be able to understand it,
but I give it to students. They're like, I don't get what you're doing here. I'm like,
that means that I need to make my code a little bit more expressive and make it a little easier
for people to ingest. So yeah, they'll probably be using it for their little robotics projects here and there.
And they'll be able to tell me, yeah, I tried doing this thing and it didn't work correctly.
Or I was able to get myself into this hole and this is a problem.
Like, ah, got it.
I never thought that anyone would do that, but you did it, so let me fix that.
Good point.
Now, I just want to say, if we have any hardware manufacturers or anyone listening right now,
I think that every laptop and keyboard should add
a Khalil light, so that you don't have to actually twiddle the caps lock, but it would be some little
GPIO light that just happens to exist there for people to program.
Well, Khalil, it was great having you on the show today. Thank you so much for
talking to us about embedded development.
Thanks for having me on.
Thanks so much for listening in as we chat about C++.
We'd love to hear what you think of the podcast.
Please let us know if we're discussing the stuff you're interested in,
or if you have a suggestion for a topic, we'd love to hear about that too.
You can email all your thoughts to feedback at cppcast.com.
We'd also appreciate it if you can like CppCast on Facebook and follow CppCast on Twitter.
You can also follow me at Rob W Irving and Jason at Lefticus on Twitter. We'd also like to thank
all our patrons who help support the show through Patreon. If you'd like to support us on Patreon,
you can do so at patreon.com slash CppCast. And of course, you can find all that info and the
show notes on the podcast website at cppcast.com.
Theme music for this episode was provided by podcastthemes.com.