CppCast - emBO++
Episode Date: March 2, 2017

Rob and Jason are joined by Odin Holmes to talk about the recent embedded C++ development conference emBO++.

Odin Holmes has been programming bare metal embedded systems for 15+ years and, as any honest nerd admits, most of that time was spent debugging his stupid mistakes. With the advent of the 100x speed up of template metaprogramming provided by C++11 his current mission began: teach the compiler to find his stupid mistakes at compile time so he has more free time for even more template metaprogramming. Odin Holmes is the author of the Kvasir.io library, a DSL which wraps bare metal special function register interactions allowing full static checking and a considerable efficiency gain over common practice. He is also active in building and refining the tools needed for this task such as the brigand MPL library, a replacement candidate for boost.parameter and a better public API for boost.MSM-lite.

News
- Elle, our C++ core library is now open source
- Yet Another description of C++17 features; this time presented mostly in table form
- Atomic Smart Pointers
- COMMS Library

Odin Holmes
- @odinthenerd
- Odin Holmes on GitHub
- Odin Holmes' Blog

Links
- emBO++ - Embedded C++ Conference in Bochum
- Kvasir
- Meeting C++ Lightning Talks - Odin Holmes - Modern special function register abstraction
- Brigand

Sponsors
- Backtrace
- JetBrains
Transcript
This episode of CppCast is sponsored by Backtrace, the turnkey debugging platform that helps you spend less time debugging and more time building.
Get to the root cause quickly with detailed information at your fingertips.
Start your free trial at backtrace.io/cppcast.
And by JetBrains, maker of intelligent development tools to simplify your challenging tasks and automate the routine ones.
JetBrains is offering a 25% discount for an
individual license on the C++ tool of your choice, CLion, ReSharper C++, or AppCode.
Use the coupon code JetBrainsForCppCast during checkout at JetBrains.com.
Episode 91 of CppCast with guest Odin Holmes recorded March 2nd, 2017.
In this episode, we discuss some new C++ libraries.
Then we talk to our returning guest, Odin Holmes.
Odin talks to us about the emBO++ conference. Welcome to episode 91 of CppCast, the first podcast for C++ developers by C++ developers.
I'm your host, Rob Irving, joined by my co-host, Jason Turner.
Jason, how are you doing today?
Pretty good, Rob. How are you doing?
I'm doing good.
For our listeners at home, we're actually recording these episodes out of order.
We are.
It's throwing us off a little bit.
We recorded episode 92 yesterday, which we've never done before.
Right.
Yeah.
And most appropriately, since I forgot to mention it yesterday,
the announcements for who's going to be speaking at C++ Now came out.
Oh, yeah.
And are you giving a talk?
Yeah, I'll be giving one talk on my own and one shared with Ben Deane.
Awesome.
And what are the two talks going to be?
The one with Ben Deane is called Constexpr All the Things.
That's a great title.
And the one that I am doing is called Abusing C++17.
Abusing C++17?
Abusing C++17, yes.
Very nice.
Okay.
Well, at the top of our episode, we like to read a piece of feedback.
This week, Adi Shavit wrote in on Twitter referring to episode 90 saying,
great incentive to go on my next run.
Jason, you're a runner, right?
I am.
I'm not sure if I could do a podcast while running.
I like listening to music when I go to the gym.
I always listen to podcasts while driving.
I don't like having headphones in my ears when I exercise.
Oh, so you don't listen to anything.
I don't listen to anything.
I want to listen to the birds and the cars
that might be getting ready to pull out in front of me,
that kind of thing.
Yeah, that's a good point.
I used to listen to music while biking,
but not with headphones in.
I would just let it come out of my phone.
Right.
Because I want to be able to focus on the cars and stuff too.
Anyway, Adi, thanks to you for the kind words.
I've seen him tweet at the
show several times before, so I know he's been listening for a while. And we'll set you up
with the JetBrains license, and we'd love to hear your thoughts about the show as well. You can
always reach out to us on Facebook or Twitter, or email us at feedback@cppcast.com, and don't
forget to leave us a review on iTunes. Joining us again today is Odin Holmes.
Odin has been programming bare metal embedded systems for 15 years,
and as any honest nerd admits,
most of that time was spent debugging his own stupid mistakes.
With the advent of the 100x speedup of template metaprogramming
provided by C++11,
his current mission began,
teach the compiler to find his stupid mistakes at compile time,
so he has more free time for even more template metaprogramming.
Odin is the author of Kvasir.io, a DSL which wraps bare metal special function register
interactions allowing full static checking and a considerable efficiency gain over common
practice.
He's also active in building and refining the tools needed for this task, such as the Brigand
MPL library, a replacement candidate for Boost.Parameter,
and a better public API for Boost MSM Lite.
Odin, welcome to the show.
Hi, guys. I came back quick.
I think it's been about five months or so, right?
Yeah, yeah, no, it's been a bit.
But, yeah, I'm glad to be back on.
I'm curious. Sorry, go ahead, Rob.
No, go ahead, Jason.
I'm curious how much of an increase in interest in your library you've seen since you were last on, if any.
There has been some increased interest.
Actually, recently, very much so.
It's actually my bio that's outdated.
I didn't think it would be outdated,
so I told you guys to use the same one.
But actually, Kvasir is no longer just a DSL.
We've kind of turned it into a collection of different libraries
having to do with embedded microcontroller development.
For example, there's Kvasir Toolchain, which
is basically CMake tools to get you up and running.
There's not a lot of documentation out there
for how to get microcontroller stuff working with CMake.
I have a very talented research student who put that together.
We actually have our own metaprogramming library now,
which we did way back when, but then I moved to Brigand,
and then I'm working on some new stuff,
so we decided to do Kvasir's own MPL,
which is basically optimized for speed
and speed only.
So, yeah, it's a little bit volatile.
We're not, you know, suggesting other people use it yet or anything.
But, yeah, we're hoping to put some other, you know, microcontroller-relevant stuff there,
like, you know, some of the other kinds of schedulers that don't have to do with threads.
So, yeah, there's been a lot of people coming recently with interesting ideas for libraries
and wanting to code review others and things like that.
It's snowballing way more than I had actually initially expected.
It's cool.
When you said your MPL is optimized for speed only, do you mean compile
time or run time?
Compile time, because at run time there will be nothing
left, right? Right, just
double checking. Yeah, the best
programs on Compiler Explorer
are the ones where the right-hand pane where the
assembler goes is empty.
Yes.
That's very
true.
Okay, Odin, well, we've got a couple of news articles to go through,
and then we'll start talking to you about what's been new with the embedded community, okay?
Yeah, sure.
Okay, so this first one is Elle, a C++ library which was just open-sourced.
And this came from Infinit.
I'm not sure what the proper pronunciation is, but Infinit is a storage system.
Have you guys heard of this before?
It was new to me.
It was completely new to me.
And looking into it, it's actually quite interesting.
I didn't have time to look into the whole thing.
But as opposed to last time, the news links are really relevant to what I'm doing.
So thank you, guys.
Yeah, I dug into their implementation of named parameters because that's actually something that I play around with.
And it looks like they have a pretty similar approach, and it's pretty well done. I think their compile time speed could be increased
considerably because they're kind of recursively
iterating over things.
Yeah, don't do that.
They also
implemented
positional and named arguments
as well as defaults if there is no
named argument, but not uniquely
convertible arguments, which is something that
Boost's named parameters, back in the day,
also implemented and is super dangerous in most cases,
but is also super useful in some cases.
Like if you have an allocator, for example,
chances are very low that something else is going to be convertible to your allocator,
and chances are very high that you'll be able to generically identify that something's an allocator.
So if you do uniquely convertible, you can just throw that in there anywhere.
It doesn't matter where you put the allocator.
You don't have to name it either, and the named-parameter TMP will just figure out where that goes and what that is.
So, yeah, I think that's a missing feature in my mind.
But it's actually, you know, from a library implementer standpoint, my first impression
is quite good, actually.
They know what they're doing.
So, yeah, I'm glad that I saw what that is and we'll delve into it later.
Yeah, and just to give a little more background on it,
their library is a C++ asynchronous framework based on coroutines,
and it comes with several modules for networking, serialization, cryptography,
and it's being used by their Infinit storage platform,
which I think was acquired by Docker,
so it probably has some production use.
Yeah.
And it's a very large library, to be honest.
It looks like there's a lot going on.
Yeah.
Okay.
And this next one, yet another description of C++17 features.
This one presented in table form.
It's actually Tony Tables, which I've never seen
before or heard of, but basically just small little code block tables, which I thought was,
you know, pretty readable. You can get a nice, concise example, similar to what we saw in
the last C++17 overview we talked about, which I think was from
Jason, help me with the name.
I don't remember, I'm sorry.
JF Bastien.
Okay.
Yeah, that was it.
What did you guys think about the Tony Tables?
I liked it.
Yeah, good.
Yeah, like I said,
Go ahead.
Yeah, sorry.
A lot of C++, or future standard stuff, is expressed in sort of theoretical terms
and turns out to be like a big blob of dry text,
but usually seeing the example of how to use it, it clicks way faster.
So, yeah, I really liked it.
And just kind of as a side note, if you guys aren't familiar with Tony,
this is Tony Van Eerd.
He has some talks from C++Now over the last few years that are very entertaining.
So I would check him out.
And that's the Tony who put together this.
Not the one who named Tony Tables, right?
Well, I don't.
Yes, he says that he didn't give it the name Tony Tables, though his name is Tony.
Yes.
Isn't he the guy that's a meme on the Meeting C++ t-shirt?
Maybe.
I think so.
I might be wrong.
Anyway, I thought that was funny.
Okay.
Next, we have an article on atomic smart pointers.
This one from Rainer Grimm, who we've talked about a few of his articles recently.
And this one just goes into kind of the use case of
atomic smart pointers, shared pointer and unique pointer, and why we need them, for C++20.
You know, it just strikes me as amazing that we have not yet finalized C++17, and we have all
this information about C++17, and now let's go ahead and start talking about C++20.
Yeah, the ink isn't even dry yet on C++17.
We're already looking to the future.
It's funny stuff.
But, you know, we're moving forward so much faster these days than we were five years ago.
Yeah.
One example I liked in his article is he showed, you know, how atomic smart pointers will be
better, but if you're using C++11
you can use all the atomic versions of these functions
in order to achieve the same thing
right?
You don't use shared pointers a whole lot, I'm guessing, Odin?
Oh, no, no, no.
We actually have our own kind of thing
that we've named space-optimized pointers.
Like if you have a memory pool, which is basically just array of blocks,
and you know where that memory pool is from a compile time standpoint,
like if it's a static member of a templated class,
then if you know the type of the class,
then you know where the thing is, right?
So you can kind of type alias
the address of the beginning of those blocks,
and then you also know the block size
because you know the slice size of your allocator,
and so you can just store a much smaller char or a short
depending on if you have less or more than 256 blocks in your allocator.
And then you can wrap that in a pointer interface.
And so you can make a pointer 8 bits rather than 32, and it still kind of works.
So that's more the kind of pointers that we use, rather than, you know, this bloated thing that's probably on balance like 20, 30 pointers in size, because it's got, you know, a heap portion.
You have all the heap bookkeeping wrapping that, and then you've got the stack portion, and then you've got virtual functions and all that stuff in a shared pointer.
Yeah. And I'm going to risk going off into the weeds for just a minute, we're not even through the news articles yet, but Clang can do heap elision, and I'm guessing you use Clang for at least some of your stuff?
Starting to. Their ARM Thumb, you know, Cortex cross compiler hasn't been around for a long time, so I've been mostly using GCC. But yeah, that's super interesting.
I'm just wondering if you've seen any real-world usage where heap elision can actually translate to something practical on a microcontroller.
The thing is, if you don't have a heap to begin with, heap elision is going to only be happening in release mode, so debug mode will probably still be going to the heap, and so then you would need a heap. I mean, you might even be using a version of libc where you ripped out the heap support anyway, and things like that.
So, yeah, I mean, depending on sort of the size, I mean, you know, some microcontrollers
do use a heap.
They're usually bigger or with other design constraints.
And in that case, it's probably super interesting.
The problem is, within the real-time constraints and memory constraints that you have, you
usually have to run debug mode
as well.
So if you
don't
want to use a heap because you have some
real-time constraints, and heap could take
a long time if it's fragmented or something,
then
if that doesn't run in debug mode,
that's probably a deal-breaker.
Because you need to be able to debug, you know, real world things.
And if some buffer is overflowing because there was an allocation somewhere else that starved it, then you can't really debug your program.
I mean, you can use a bigger chip with more RAM some of the time; sometimes there'll be a big sister or something of the chip that you're using.
But you can't...
Usually it's a very different architecture if you want to go
with one that's faster or
somehow deal with the real-time constraints.
I mean, there are simulators,
but they cost more than a house.
So...
I prefer that my
development tools cost less than my house.
Just personally.
A friend of mine prides himself that his oscilloscope costs more than his house.
But he's an electronic engineer in high-frequency stuff.
High-frequency stuff you would need.
Yeah, something high-end.
Wow.
That's a lot of money.
Okay, so this last article is one we've actually talked about briefly on the show before.
Jason, I think we used this in one of our recent news episodes,
and we both looked at the article and didn't have much to say about it
because neither of us are in embedded development.
Yes.
Yeah, and I think I actually remember seeing a comment on Reddit
with someone saying they wish we had talked about this one more
because they were really interested in it.
Well, then this will be a good one to bring back for Odin,
because I'm sure you're interested in this sort of thing.
This is COMMS Champion,
which is a library for embedded-development comm protocols.
Yeah, it's quite interesting,
and it's actually a perfect segue into some of the meat of the show, because we actually asked him to come talk at our conference.
But, you know, Alex Robenko, he's in Australia.
So that didn't work out.
But if you want to see a great talk by him, sponsor his ticket,
some company listening.
But yeah, I really like his stuff. I think it's got some of
the same problems that a lot of people in this domain have, that it's kind of
developed somewhat in isolation from other people, because we haven't really found each other yet.
And it's developed with a certain set of assumptions about, you know, what's happening.
But for many use cases, it's actually quite a good library, in my opinion.
I mean, I reviewed some of his stuff, and yeah, it's clean.
Assuming it works, I haven't tested it on a lot of different chips.
But yeah, I like his stuff.
He's kind of going a bit different route
than things like protobuf or whatever,
where you, you know, generate code
for sort of both ends of a communication.
And, you know, the problem with that is the generated code
may not be something you can even run on your controller, right?
So that's kind of a problem for us
if we're on the other end of that protobuf.
So, yeah, and it doesn't do things like istream and ostream, things that
don't do so well on microcontrollers. But, I mean, if that's
something that is kind of in your line of work, I would seriously suggest
looking at this guy's stuff. It's pretty mature. It's more than just
proof of concept.
Yeah. Cool.
We'll start talking about
this conference that you just went to,
Odin. It was emBO++. What's the conference about?
It's about
C++ on embedded, which is
kind of a super niche,
I guess you could say.
The embedded field is actually huge, but
not a lot of people use C++
in at least the bare metal end of it.
And
yeah, we
decided to make a conference about
just that and
figured if we get 50 people, then
we're good. And we actually did
end up getting 50 people, and we even sold out of early bird tickets, which
was surprising. I thought, oh crap, we're going to just sell early bird tickets and that'll be empty seats
and whatever. But yeah, it worked out. We actually got some sponsors: KDAB,
CppCon,
JetBrains, because they sponsor everything
that's cool.
And also
the local
city of Bochum
and the university that it was held at were sponsors.
And yeah, it was kind of somewhat born out of
me watching the CppCon 2016 Grill the Committee panel,
and some guy had this question like, yeah, why isn't C++,
I'm paraphrasing here, but why isn't C++ used more on embedded devices?
And people kind of talked in circles.
I think Michael Wong kind of had a feeling because he chairs the SG14 group
that is in charge of embedded, And he kind of had a feeling, but the rest of the committee was kind of staring him down.
The takeaway from that panel was, oh, it's a solved problem.
It's just there are a lot of efficiency myths and stuff about C++,
and so we just need to go out and evangelize the C guys that work in the embedded domain.
And I thought, well, that's not at all my opinion.
I mean, there's, you know, C++ is a great language because it allows you to use code written by experts.
You can encapsulate things.
You know, some other people can write awesome libraries and encapsulate it.
And then you as a user can use it even though you don't understand how it works. That's kind of
the awesomeness of C++ in a very, very small nutshell.
The problem is, what if there is no code out there written by
experts that you can leverage?
We actually, for various reasons, can't use a lot of the standard library
mostly because nobody from the embedded domain was there
when it was standardized.
If I want to use an interrupt service routine,
which I should probably explain what that is.
An interrupt service routine is
when an event happens, essentially you inject a function call onto the end of the stack of whatever you're doing.
And so local work registers are stored to stack and wherever you return to are stored to stack, just like a function call.
But it can happen anywhere,
and then it does some work, and then it returns.
So if you want to be super, super low latency,
then that's the way to go.
It's super fast, but it's also kind of dangerous because it's not a thread in its own right.
It doesn't have its own stack.
It doesn't have its own context and everything.
So if you have
shared data,
and one of the
writers to this shared data,
or readers for that matter, is
an interrupt service routine,
then you can't just put a lock around it
to make it thread safe.
What if that lock is locked when the interrupt service routine fires?
In the interrupt service routine, you go try and lock the lock.
Well, you can't take it yourself because it's already locked.
But you also can't switch back to whoever is holding it and let them finish, because you're on their stack.
Right? So you essentially deadlock. You're dead.
There's no way to get out of that. So we can't protect shared state with locks,
and then essentially everything falls down, because atomics, depending
on the standard library implementation, can also be locks, right?
There's only a very few standard atomic operations that are guaranteed to actually be atomic and not falling back to a lock.
Which ones, actually?
I don't know off the top of my head.
I think there's something, you know, some atomics.
Sorry, I don't know off the top of my head, because I don't use them, because the vast majority of them can be locks. Say you're receiving CAN messages over the CAN bus in an interrupt service routine,
because its FIFO buffer is only two messages deep,
so if you don't empty it within two messages,
then you're going to lose data, right?
So you have to, you know, very low latency,
go get that message out of the buffer
and do something with it, probably put it in a queue.
Well, what container would you use as a queue there?
You can't use a deque because it's not thread safe,
and you can't put a lock around it to make it thread safe.
And with a deque, you don't have guarantees about how long it could take,
because what if you're growing the queue
so it has to allocate another slice, that could
take super long time
and you have to make guarantees like
I'm going to be
back out of my interrupt service routine within
time amount X, because some other
interrupt service routine might need to fire with
a lower latency, and if I'm blocking
the whole system
by allocating,
then they're not going to meet their real-time guarantees.
So, you know, allocating randomly, well, you know, people could,
you could write your own allocator to get around that problem.
But, you know, it's a hard sell to embedded C programmers to say,
yeah, learn how allocators in C++ works, and then you're fine,
right? But we still haven't solved all the problems because, you know, we're still also
not atomic. And also the slice size, like if you're using pools rather than a heap,
you have pools of, you know, certain sizes, whatever you tune to your program, maybe you
have a pool that's got slice size of 16 bytes and then 64 bytes
and then, I don't know, a kilobyte, right?
And what if the queue wants to do 64 elements and then two pointers?
That's a little bit more than 64 bytes, right, if our elements were chars.
And so you would go up to the next slice size, you would have to use a whole kilobyte
for essentially
64 bytes plus two pointers of data.
So, yeah, it would be
nice to be able to set the slice size,
but you can't do that without ripping out
part of the standard library, so it's not flexible
enough. There actually
is a container, or proposal,
that maps pretty much exactly
to how people deal with
this in C. I mean, in C they make a fixed-size buffer, and then a head
and a tail pointer into that buffer that just chase each other around in circles. So I put
data after head, then I move head, and so, you know, as the producer, the consumer will only see that move of head after I put the data in there.
And since, you know, loads and stores of pointers are atomic and are not reordered, and there's no cache coherency issues and all that stuff on these tiny chips.
That's an atomic container. That's the poor man's atomic container.
And it actually works quite well. And, you know, Guy Davidson had a proposal for
std::ring_span, which is essentially exactly that, or, you know, a nice, safe C++ public interface
over that. And, you know, this has been going back and forth to the committee for a long, long time,
and they've watered it down to the point we can't even use it anymore in the current proposal,
because now it's not like a fixed-size container.
You give it two iterators,
and it uses that range to go around in circles.
But since that's not compile-time known,
that's going to be way less efficient, right?
Could it be, though, if you gave it fixed pointers
around a standard array object
or some sort of C-style array begin and end?
The compiler probably could make it. You know, for example, if you have it aligned to, say, a 256-byte boundary and then make it 256 bytes in size,
then making it wrap around would be a super, super cheap operation
because you'd just be clearing one bit, right?
Right.
To put the pointer that just advanced past the end back to the beginning.
The optimizer could do that, I think,
with the current version of Guy Davidson's ring_span
if the stars align and blah, blah, blah,
but only in release mode, right?
So we're kind of back to the same problem I mentioned originally
that in debug mode, your program still has to run, right?
And whatever you're talking to on the other end,
be it a robot or some other motor control unit or who knows, it can't time out
or you have to give it data
or whatever. You have to meet your real-time guarantees
in debug mode as well.
The other thing is they took all the atomic-ness
out of it, like the concurrency out of it, because there's another proposal for a concurrent queue.
And they, I guess, didn't want them competing for that title, which is stupid in my mind because – I mean, I'm sure it was well-intentioned.
Like I'm not – but essentially they're taking memory fences out of it by making it no longer concurrent.
Right. It's not like there's any any kind of atomic mechanism or locks or whatever that they're that they're saving people.
It's just some reordering rules.
So, yeah, I mean, I think that's that's a small price to pay, seeing as essentially everyone's going to use it as a concurrent structure anyway,
at least in our domain. I mean, there's literally 20, 30 billion devices out there that have this
kind of queue on them, and it's almost always written in C because of that. Yeah. But I mean,
I think the problem is, I mean, I can understand the standard committee because, you know, two people say we need this and everyone else says, what do you use that for?
Right. Because, I mean, there isn't really a standard committee representative from Bosch or Siemens or KUKA or, you know, whoever employs the vast amounts of C programmers that work on this stuff. I guess there is one from Nokia, Vicente Botet, but he's got his hands full in other language groups.
But, yeah, most of the embedded companies, like the traditionally embedded companies,
don't have anyone on the standard committee, I think much to their detriment,
because things are going to keep getting standardized and the ships keep sailing.
And they, you know, usually it's these very esoteric changes that would need to be made for it to be able to work in our domain.
Right.
Do you think that's because the companies are just already choosing to use C and maybe don't care about the direction of C++?
Or, like, why would they not go?
Well, actually, I got a lot of feedback recently from the conference
and then from the aftermath of the conference
from big companies.
And some of it's actually pretty positive.
I mean, I think there's some really interesting stuff
that Logitech, for example, is doing on that front.
I mean, it's a huge company, but there are some people that have talked to us about what they're doing.
And, you know, we're also at our conference.
And, yeah, I think, you know, the way that they're going, I mean, this is the very bottom end of the spectrum, right? At Logitech, if you save a penny, that's like a million dollars with the volumes that they produce, right?
Right.
And I think they kind of get that you can save many pennies, right?
You know, especially as far as, you know, making batteries cheaper or last longer, things like that with C++.
Because, I mean, there's kind of an optimization curve.
You start
pretty fast using C
with huge
development
costs,
but you start pretty fast, but then
if you want to make it faster, it starts to get
incomprehensibly
ugly quite quick.
So you do hit a wall at some point, whereas with C++ metaprogramming,
you can express yourself in ways that you won't recognize in the resulting assembler.
You can make it clear and then make it fast, and the one generates the other,
rather than if you want to make it fast it has to
look like what it's
actually doing.
So if you're thinking about
sorting special function
register access by
address
because that's faster because
it can load with offset rather than loading
a whole new address and then
writing to some other special function register,
that's going to result in terrible code, right?
Or if you want to make those things atomic, right?
You have some read, modify, write thing that you want to make atomic,
then the solution now is just a forest of macros,
which I guess some people can read,
but it's not my...
Yeah, and there are things that you just can't do
with macro metaprogramming
that you can do with template metaprogramming.
So if you're trying to go past the wall
that you hit at some point with C,
then you can go actually quite a bit further with
C++.
Yeah, they seem
actually pretty open to the idea. It's just
nobody's come along and
showed them how to do it from the outside.
They've actually gotten
some stuff going on the inside,
which looks good.
So yeah, I really like what they're doing.
Other companies in this domain seem super tight-lipped.
I've talked to some other companies that are kind of similar, but they don't want to be mentioned. But yeah, I think
there are very legitimate reasons why they use C now.
I wanted to interrupt this discussion for just a moment to bring you a word from our sponsors.
Backtrace is a debugging platform that improves software quality, reliability, and support by bringing deep introspection and automation throughout the software error lifecycle.
Spend less time debugging and reduce your mean time to resolution by using the first and only platform to combine symbolic debugging, error aggregation, and state analysis.
At the time of error, Backtrace jumps into action, capturing detailed dumps of application and environmental state.
Backtrace then performs automated analysis on process memory and executable code to classify errors and highlight important signals such as heap corruption, malware, and much more. This data is aggregated and archived in a centralized object store, providing your team
a single system to investigate errors across your environments. Join industry leaders like Fastly,
Message Systems, and AppNexus that use Backtrace to modernize their debugging infrastructure.
It's free to try, minutes to set up, fully featured with no commitment necessary.
Check them out at backtrace.io slash cppcast.
Okay, so at a high level, how did the conference go?
And were the talks recorded?
How many talks were given?
We had, I think, seven talks and three lightning talks, all one track, one day. I was actually super happy with how it went. The talks were recorded, but looking at some of the first ones we've been converting here, the quality wasn't particularly good. So next year we're hiring a real recording firm. Lessons learned, first conference, but that was probably pretty much the biggest hiccup. It was really nice that it was one track, because it was so much easier to start conversations, because you knew what other people had seen, right? So it wasn't like: have you seen talk XYZ by that guy? Oh no, okay, next guy. Oh well, you probably didn't see it either. It was like: did you see what he did there? So yeah, it was quite a tight group and a lot of experts, actually, and very, very interesting conversations.
I think I heard from a lot of people it was kind of a feeling of, oh, my God, we're not alone, right?
And for me, it was the same way.
It was like, oh, my God, I'm at a conference and they want me to talk about microcontrollers.
Because usually... I mean, hey, to C++Now I think I submitted nine talks, and seven of them were about microcontrollers, and the meta one got in. And that's kind of what people know me for, because on every other platform except for here I talk about metaprogramming rather than microcontrollers, mostly. But that's just because the microcontroller talks don't get accepted; it's a broad audience, I mean, I can understand. But yeah, so we have our own conference now and can talk about microcontrollers.
And a lot of sort of similar experiences. I mean, there are so many people like: oh yeah, and then I tried that, and this and this didn't work. And then it's like: well, do you know printf allocates? Oh, that's why I was corrupting my heap. Things like that, where in isolation you might spend a whole long time on that bug. I mean, there was a lot of technical conversation going on in the breaks.
And, yeah, we also had like an after party kind of a thing,
which most people also went to, which, you know,
so up until four in the morning or something,
we were talking about, you know, similar generation
and all sorts of stuff like that.
It was also nice because, like because the ratio of speakers to guests was quite good as well.
We also had an SG14 meeting, very informal in that case, but with all of the audience, because it was kind of our psychological trick to try and get more of them to actually take part in SG14. Because, I mean, there are people there from a lot of the bigger embedded companies that don't have anyone in SG14, or in C++ at all, at least that I know of. And it was just kind of like, hint, hint, this is fun.
And it was actually quite a good conversation. We got sidetracked on a lot of stuff, but yeah, that's me going off on a tangent again. Anyway.
So, yeah, I was just wondering about the audience you had there. Were they mostly embedded developers who were using C++, or were a lot of them still using C and maybe wanted to move to C++?
Well, some of them were library people, C++ people that might want to go into embedded, because if you're good at C++ and can thereby cause a two-order-of-magnitude increase in the productivity of other people, there's probably a lot of money there, right?
There were some people interested in that kind of thing.
And then there were a lot of people... I mean, I think the most typical guest profile was: I'm not allowed to use C++ at work, but I use it at home, and how can I convince my boss? That was especially some of the people from the bigger companies, where it's not only that they have to program in C, but they also have to use some 20-year-old code generator. Kris Jusiak did a state machine talk.
It was really well received, because people saw: oh, we don't have to use this arcane state machine generator that spits out C. We can just use this template metaprogramming thing with pretty much the same syntax as far as describing the state machine, and it's all in one compiler, it's all in one source control. You can see diffs on a pull request that don't look like everything's new. So yeah, that was very well received, and we hope we can get him back for next year.
We've been joking about what we should call Embo++ 2018. Actually, we should just call it ++Embo 2017.
Okay.
Yeah.
So what does Embo stand for, by the way, or does it stand for something?
Embedded in Bochum.
Oh, okay.
Yeah.
We hope to get Kris back for a training day. We're going to do a training day with two tracks and two training courses per track, so two times four hours, so that people can mix and match them however they want.
Because I think that's kind of the fastest way to get people from what they're doing now
to be able to use some advanced C++ libraries
in this domain is a training rather than talks.
I think for the experts, talks are great,
but we want to reach more than experts next year.
This year was mostly the hardcore developer guys.
Actually, I was surprised some people showed up from academia that have been working on this kind of stuff
and just hadn't found an audience for it.
It was funny.
Emil, I don't remember his last name,
but from a Norwegian university,
was working on another flavor of scheduler
that just uses interrupts, which is how a lot of people actually program microcontrollers.
It's very typical: the main loop is just a while(1) with a sleep in it. The sleep is not a thread sleep, but powering down the hardware, turning off the instruction clock and all that kind of stuff, and waiting for another interrupt, because that's super power-saving. And so actually your entire program runs in interrupt service routines.
And that's kind of all done by hand now. But if you do it as a library,
then you can do things like associate an interrupt priority or a shared resource with an interrupt priority,
as in the highest priority that can access it, and then just block up to that priority.
So you can still be interrupted by higher priority interrupts
while accessing it.
And you can model that as, like, a monitor pattern, so you can't forget to lock or anything, because the resource is actually owned by the monitor, and you just give it lambdas that can work on it.
So yeah, that way you never touch locks.
It just locks and unlocks by itself.
And you can prove things. Emil actually came from a mathematical background, I think. He was able to prove things like maximal latency or no deadlock potential from a theoretical standpoint, which is obviously way more reliability than we're used to. With a library like that, you can actually get some formal proof analysis on your code in the context of microcontrollers, which as far as I know makes him probably one of three people that have gone down that route, and the other two haven't published yet or something. So, yeah, I think that's super interesting.
I mean, it was just sensory overload, all the ideas of how we could actually make our lives better in this domain. So yeah, I'm super pumped.
I have a question about that interrupt-based scheduler idea you were talking about. I'm curious how hardware-specific a design like that is. Does his implementation require an ARM or something?
Yes, it does now, but I think it could be modified to work with other controllers. I mean, there are actually not a lot of controllers... well, I'm going to make some chip manufacturer mad, but outside of ARM, from my not-knowing-the-whole-market perspective, most other controllers usually have one or two interrupt priorities, whereas on the ARM chips you can have up to, like, 32 priorities or something.
Oh, wow.
So yeah, you can prioritize things in a much more fine-grained way. Although, I mean, it does work with just one interrupt priority. You just can't make the same kind of latency guarantees, because you don't have the same preemption ability.
Right. You won't get preempted. You'll just have to schedule for the next time.
Okay.
Yeah, but it gets into very different styles of programming, because on that kind of a kernel you can't sleep. That basically means you're programming in something coroutine-like rather than something thread-like, the normal way, whatever that's called. If you're thinking about your thread of control, you do something and then you wait for something by sleeping and then checking if it's there, right? Whereas with a coroutine you can switch off better. But I guess the C++ coroutines are more kind of aimed at every coroutine having its own stack, and then you recycle these stack slices and things like that, whereas we usually don't have enough RAM to do that. And we have fewer coroutines, because it's a smaller program, so it can be advantageous to put the local variables that would otherwise go on the stack into globals or some kind of state-local storage or other storage strategies.
That seems like coroutines are another one of those cases where the debug build might not be something that you could possibly run, but in release it's going to completely go away.
Yep, yep.
I think from a core language standpoint,
exceptions and the size
of debug builds are the problem.
Whereas
exceptions,
you can not use them, and then
everything's okay.
That's another problem with the standard
library.
It can potentially throw,
and this is not just an efficiency thing.
If you're thinking in an event-based context, stack unwinding and going back in time until you can actually deal with the exception that was thrown are not the same thing.
Right.
Normally, if you have a function that calls a function that calls a function, and that has an error, maybe it can't deal with it there. So it unwinds its stack, and then, okay, the next function can't deal with it either, so it unwinds its stack, and then you're back to the place where you started, where you can catch it and deal with it, right? So you go forwards in time and stack; an exception is thrown; then you go backwards in time and stack until you catch it. But if it's an event-based thing, maybe it was two events ago that you knew how to deal with it, right? And so, yeah, that doesn't map to the stack at all, and so it's kind of the wrong mechanism, even if it were efficient enough.
And you don't want to do, like, a catch in absolutely every single interrupt service routine, right?
Or keep track of everything that can throw.
Like, you know, throw specification got deprecated because that's impossible, right?
You can't keep track of everything that every code that you ever call,
especially in situations of generic code.
I mean, you have no idea what can throw.
So, you know, in normal desktop programming,
you assume that everything can throw,
and so you write your code so that that's not a problem, right?
But on a microcontroller with an interrupt service routine
that's maybe, I don't know, eight assembler instructions long,
if you want to add a catch around that, then that's going to seriously bloat stuff, right?
Right.
And, I mean, it might be fixable. I mean, throw is a keyword, but the implementation is that it calls some __cxa-something function that you can define yourself, and then you can kind of make it do other stuff. And actually, I haven't tried it out yet, but I wonder what would happen if you were to make one of those functions noreturn, right? Like the allocate-exception function, make that noreturn. So if an exception is thrown, it goes straight to some catastrophic-whatever handler, and maybe switches into some limp-home mode, or resets, or whatever. And then you wouldn't have to deal with any of the stack unwinding, or any of the buffers that come along with exceptions, or any of that kind of stuff, if the optimizer was smart enough. I doubt it is, because I doubt anybody's ever even tried that yet. But yeah.
What I'd like to see from the optimizer
actually is,
well, first of all, to
support
Pragma Optimize
correctly so that you can raise the level of optimization
as well as lower it.
I think none of the major compilers
support arbitrarily raising it.
Like you're in debug mode
and then you call into a library
and the library implementer can say,
oh, you don't want to step through
dereferencing of unique pointer.
I'm going to raise the optimization level
so that that gets inlined, right?
It is very rare that you want to step into a dereferencing
of a smart pointer. Yeah, or a standard function.
Those eight steps that you have to do
to get back to your own damn code.
Who steps through a sort? You're only really interested in your
lambda in that sort,
like your custom comparison, if anything, right?
So it would be nice.
I mean, obviously they can't optimize everything
because, you know, okay,
you can have part of your code using debugger iterators
and the other part of your code using normal iterators,
right, or, you know, things like that.
But they can inline small functions.
They can do some reordering to make some local variables go away,
as long as it's code that no one ever wants to see.
I would like to be able to just specify optimization level by namespace,
so I could just turn that on for everything in namespace standard.
I mean, you could turn it off again if you expect there's a problem, right?
But then all of those function-calls-a-function-calls-a-function chains, because we don't have if constexpr yet, or things like that, right? With metaprogramming, you have to make a function call a function to make a decision a lot of the time. And those functions should be inlined. Nobody ever wants to step through that code.
And even if you do, you probably don't understand what you're doing to begin with.
I mean, you know, I, for one, you know, like I have to look at that for a while before I understand, like, how the standard library is handling that tag dispatch thing or whatever.
So, yeah, it's not something you want to step through.
And if we could find a solution to just make those all go away,
that would be awesome for hard real-time or bare metal type people.
It sounds like you're kind of talking about two different things here,
like where can the standard improve and where can compiler implementations improve?
Because I don't imagine, I mean,
pragmas, generally speaking, aren't in the standard
anyhow.
Yeah, actually, that's kind of a frustrating point. I think I'm working on a lot of things that are relevant to C++ on microcontrollers, but I don't really have a lot of suggestions for SG14, because this is tooling. It's purely a tooling problem. And the other thing is, it's purely a library problem.
This was another discussion that happened at Embo.
Oh, what should we put in the standard
library? Well, why don't we build it first,
test it for a while, and then decide if we
put it in the standard library.
I think we need to build a lot of
libraries to work
with these sort of domain-specific
problems.
So you mentioned that you submitted
a bunch of talks to C++ Now.
Which ones did you get accepted? Do you want to give us a little
preview before we let you go?
Yeah, well, just one. Type-Based Metaprogramming Is Not Dead is the title, because, you know, Louis almost killed it with Hana, right? Like, everything kind of shifted towards: value-based metaprogramming is the future. The Hana style, where you don't have the pointy brackets but round ones, and everything's actually a value that may be wrapping a type.
But the problem is, from a library implementer standpoint... I mean, the user, the casual metaprogrammer, should use Louis's stuff. It's definitely a lot less cryptic, and he has a math background, so it's actually sorted well, which is not true for a lot of this kind of stuff. But if you're pushing the limit, if you want to make your entire program a state machine, so you want 1,000 states or something, that's not happening in Hana. Or if you want all of your hardware interaction to be done in a DSL, a domain-specific language, then you're going to need to be able to compile things that are huge. And I mean, it's kind of out of necessity that I've been trying to push the limits in this domain of how big your metaprograms can get, right? I mean, in Boost.MPL, the default was, I think, a list of 20, and then you had to go change some macros to make your "infinite" lists, in air quotes, longer than 20. And there are use cases where you'd like to do 4,000 types in a list.
And so, yeah, this is something that I've been pushing kind of out of necessity. And to all the Haskell programmers flaming me about the metamonad pattern: it's not a monad. I know, Haskell guys. It's kind of like a monad, but it's not a monad. Anyway, the idea is to leverage aliases more than type instantiations, because it's a lot faster.
And yeah, I think it'll be a talk
with a lot of benchmarks against HANA
and a lot of actually sort of fundamentally different ideas
rather than tricks.
I mean, a lot of metaprogramming talks are about tricks,
like, oh, yeah, I can use this weird operator
to figure this out in less time,
and I can build a giant list of pointers,
and then the thing that I'm looking for,
change that to a pointer,
and then put that in, like, a templatized function,
and then I can do, like, indexing quicker
and just really weird tricks.
But, yeah, I think this talk will be about the strategy with which I'm trying to change the paradigm of how we write metaprograms that have to be super fast. I mean, it's a little more like currying than anything else, but I'm sure other programmers are going to say, no, it's not currying. So I'm going to present it as an unnamed pattern and they can name it.
That's probably best, particularly at C++Now, because there's a lot of functional kind of...
Yes.
And I've heard that there's a lot of people
that like to voice their opinions
from the audience.
It's a very interactive conference.
Yeah, yeah.
I actually like that.
But this is the first talk I've really been nervous about because, I mean, up until now,
it's like, yeah, I got language features in there that probably nobody understands, so
I'm good.
I mean, they think I'm an expert.
But with this one, also with the feedback... I really liked the feedback from the review process, because there was a lot of negative feedback. A lot as in, I submitted a lot of talks, so in sum; it's not like the majority was negative. But I really liked it, because it was well thought out and pretty much correct.
So, I mean, negative critique that's right is
the most valuable and hardest
thing to come by. So,
thank you, thank you
C++ Now review guys.
They're anonymous. I don't know who said it.
But, yeah.
Come out of the closet and I'll buy you a beer.
Okay. Well, Odin, it's been great having you on the show again today.
What's the URL for Embo?
It's just embo.io.
Awesome.
Okay, and where can people find you online?
Well, odinthenerd.blogspot.com, or @odinthenerd on Twitter.
I figure that's kind of the shortest description of me that you could get to, right?
As opposed to Odin the God or something like that.
Yeah, no, I'm definitely not him.
Yeah, no, no, definitely less like big, scary awesomeness points
and a lot more sort of I sit in the dark
in front of a computer screen
and maybe do some cool stuff.
As we all do.
Yeah.
Okay, well, it's been great having you on the show again today.
Yeah, it was fun.
Thanks for joining us.
Yep.
Thanks so much for listening in as we chat about C++.
I'd love to hear what you think of the podcast.
Please let me know if we're discussing the stuff you're interested in. Or if you have a suggestion for a topic,
I'd love to hear about that too. You can email all your thoughts to feedback at cppcast.com.
I'd also appreciate if you like CppCast on Facebook and follow CppCast on Twitter. You
can also follow me @robwirving and Jason @lefticus on Twitter. And of course, you can find
all that info and the show notes on the podcast website at cppcast.com. Theme music for this episode is
provided by podcastthemes.com.