CppCast - Boost DI and SML
Episode Date: January 17, 2019

Rob and Jason are joined by Kris Jusiak to discuss the [Boost].DI and [Boost].SML libraries. Kris is a C++ software engineer who currently lives a couple of doors down from CppCon 2019. He has worked in different industries over the years, including telecommunications, games, and most recently finance for Quantlab Financial. He has an interest in modern C++ development with a focus on performance and quality. He is an open source enthusiast with multiple open source libraries where he uses template metaprogramming techniques to support the C++ rule "don't pay for what you don't use" whilst trying to be as declarative as possible with the help of domain-specific languages. Kris is also a keen advocate of extreme programming techniques, test/behaviour-driven development, and truly believes that "the only way to go fast is to go well!"

News: Meeting C++ 2018 Playlist; C++Now Submission Deadline Jan 23; If constexpr isn't broken
Kris Jusiak: @krisjusiak; Kris Jusiak's GitHub; Kris Jusiak's Website
Links: [Boost].DI; [Boost].SML; CppCon 2018: Kris Jusiak "State Machines Battlefield - Naive vs STL vs Boost"; CppCon 2018: Kris Jusiak "[Boost].DI - Inject all the things!"; C++Now 2016: Kris Jusiak: A C++14 Dependency Injection Library; Concepts driven design - Kris Jusiak - Meeting C++ 2017
Sponsors: Download PVS-Studio; Technologies used in the PVS-Studio code analyzer for finding bugs and potential vulnerabilities
Hosts: @robwirving, @lefticus
Transcript
Episode 183 of CppCast with guest Kris Jusiak, recorded January 16th, 2019.
Today's sponsor of CppCast is PVS-Studio.
PVS-Studio is a tool for bug detection in the source code of programs written in C, C++, and C#.
The PVS-Studio team will also soon release a version that supports analysis of programs written in Java.
In this episode,
we talk about the next big thing.
Then we talk to Kris Jusiak. Kris talks to us about his DI and SML libraries that have been proposed for Boost.
Welcome to episode 183 of CppCast, the first podcast for C++ developers by C++ developers.
I'm your host, Rob Irving, joined by my co-host, Jason Turner.
Jason, how are you doing today?
Pretty good, Rob.
Getting ready for C++ on Sea coming up, making sure all that's ready.
That's coming up in what?
Two, three weeks now?
Yeah, the conference is in three weeks
I'm looking forward to hearing how this
Conference goes for its first time
I'm sure it'll be good
Phil Nash does a good job with everything he does
So I'm sure it'll be a good conference. Yeah, and talking to Phil, it looks like things are going well.
The conference is going to be pretty impressive for the first year.
And I've got a fair number of students signed up for my class.
It's looking pretty good.
Awesome.
Well, at the top of our episode, I'd like to read a piece of feedback.
This week, we got
this tweet from Titus Winters
that he sent a day or two ago.
And he's referencing
our episode last week with Arthur
O'Dwyer, saying: "In this week's CppCast episode, Arthur made an offhand comment about C++ needing to move to fewer releases in order to ensure things are baked properly. That's backwards: you get proper baking when you take away release pressure. Small, frequent releases."
And yeah, I definitely like the current three-year cycles. You know, I'd be sad if we didn't have something to look forward to every three years. At least that's my perspective on things. And if nothing else, it keeps those of us doing training employed.
Yeah, keeps us making content, gives us lots of stuff to talk about.
Yeah, could you imagine if there was only a new standard like every 12 years? What would we be talking about on the podcast?
I really have no idea. I don't think we could do this show if it was 12 years away. And that's, you know, maybe another thing that is good about the three-year cycle. It keeps the community more active.
That's almost certainly true.
The conference is more interesting.
I mean, before 2011, for all intents, well, at least from my perspective, it seemed like there wasn't a C++ community.
Yeah, I agree.
There's like a conference, pretty much.
Yeah, and there was like the Boost conference, of course.
That's where maybe most of the activity was happening. And that's the one I'm referring to; that's the one I went to, what, 2009, right? Something like that.
Well, yeah, so I guess we're in agreement that we like the three-year cycles.
Yeah, but we're biased. We are biased.
Well, we'd love to hear your thoughts about the show as well. You can always reach out to us on Facebook, Twitter, or email us at feedback@cppcast.com.
And don't forget to leave us a review on iTunes.
Joining us today is Kris Jusiak.
Kris is a C++ software engineer who currently lives a couple doors down from CppCon 2019.
He has worked in different industries over the years, including telecommunications, games, and most recently finance for QuantLab Financial.
He has an interest in modern C++ development with a focus on performance and quality.
He's an open source enthusiast with multiple open source libraries where he uses template
metaprogramming techniques to support the C++ rule, don't pay for what you don't use.
Whilst trying to be as declarative as possible with the help of domain-specific languages,
Kris is also a keen advocate of extreme programming techniques, test/behaviour-driven development, and truly believes that the only way to go fast is to go well. Kris, welcome to the show.
Yeah, hello guys. Thanks for having me.
So you said you live a couple of doors down from CppCon 2019. I guess this is the part of the disclosure where we point out that you are also one of the attendees of the meetup here in Denver. So you'll be, at this point, like the fourth person I think that we've had on who goes to my meetup. Well, if I can call it my meetup; it's really a group effort. But it's great to have you on, Kris.
Yeah, thank you.
Well, Kris, we got a couple
news articles to discuss. Feel free to comment on any of these and then we'll start talking about
some of the libraries you're working on. Okay.
So this first one is Meeting C++ 2018 has started posting videos.
So far, I think just the keynotes, or three keynotes, and this one other lightning talk are the first ones to go up. And a deleted video.
I don't know if that's sad.
Yeah, that was the keynote, right?
I'm sorry?
Yeah, that was Andrei's keynote, which was deleted at some point.
Oh, okay. Andrei's is still there.
Yeah, it was re-uploaded.
Oh, okay, okay. Yeah, that's
difficult in YouTube to manage.
Like, if you have to correct a video
or something, there's no clean way
to do it. Yeah.
And you can't just remove it from the playlist, I guess.
You should be able to remove it from the playlist. Yeah, you should be able to do that.
Well, I did watch Andrei's keynote so far, and there are also keynotes from Lisa Lippincott and Nicolai Josuttis, so I plan on watching those soon. But do you guys have a chance to watch any of these yet?
I haven't yet. Have you, Kris?
I did watch all of them.
They're pretty good. Yeah, I
need to get back around to this.
It's kind of funny that there's only one lightning talk
up, but yeah. I mean, just looking at
the times, hour and a half, hour and a half,
five minutes. Well, Jason, you'll probably find Andrei's keynote to be interesting. He talked a lot about if constexpr, which we'll talk about again in a moment with our other news article, but you'll probably find that interesting.
Yeah. And I should watch it. Yeah. And we'll talk about it, like you said. So I only know what other
people are saying about the talk at the moment, which will be an interesting way into going into
it. And also for our listeners who don't know, at least some of the keynotes at Meeting C++ have 120-minute time slots.
So Andrei's here is an hour and 54 minutes.
Yeah.
That is a really long keynote.
I think it—I don't remember exactly how long the keynote presentation itself was.
I think it might have been closer to an hour and a half with, like, 20, 30 minutes of questions.
Right.
Yeah, that's probably the smart way to do it as a presenter, plan for an hour and a half.
Yeah.
Well, since we just referenced this article,
let's talk about if constexpr isn't broken,
which is a response to Andrei's "The Next Big Thing" keynote.
And, yeah, so he talked about, what was the word, DbI.
Do you remember what he called it?
Static if? No.
Well, he was talking about the
defects per line of code,
right? Yeah, but he
kept talking about the use of
static if or if constexpr
as DbI or something-based introspection, I can't remember. Or design-based introspection? Design by introspection, yeah.
Yeah. So he was talking a lot about that and showing that, I guess, in D, using static if,
you can do all this great introspection. And he thinks if constexpr should work the same way,
but according to the proposal, it doesn't. But this is an interesting post pointing out how,
you know, according to Andrei, you can't do all these things with if constexpr that you could do with static if in D. But this author is pointing out that there are other ways to achieve pretty much everything he does in his D library that makes heavy use of static if.
Well, I think Andrei's point is to point out that in D, you can do design by introspection, which just requires a few things: the introspection, verifying the methods, and after that, also the generation. You can do all of those things in C++ as well. However, I think the main point he's making is the fact that the number of bugs per line of code is constant regardless of the language. It's something like between 15 and 50 per 1,000 lines of code. So the less code you have to write, the better. And with static if, obviously, you can achieve those things with fewer lines of code than in C++, in which you can do all of those things, as the article says, besides the generation, which we don't have at the moment.
That's true.
What do you think about this, Jason, as someone who hasn't watched Andrei's talk yet?
I feel like this is the conversation about the ranges example that we had two weeks ago and the week before, right, where we're all like, well, ultimately the conclusion is this is a terrible example. Now, I read this article and I was looking through these examples, and his just the direct comparison, because like we said, I haven't seen Andrei's talk, but him saying, well, we can actually do most of the things that we want to do with if constexpr. And I'm looking at this and I'm like, the number of times that I've had to write code with this complexity in C++, where I have this many different ways that I need to control the behavior of a function, has been like three in like 20 years.
Right. I think that's a really valid point. Sorry for interrupting you, Jason.
No, that's fine.
I think that's a really valid point you made. However, what Andrei is proposing is, like, he made policy-based design famous before, and he's actually proposing design by introspection as, you know, a new way of developing software. So it's kind of a new approach in which you have to do the static if kind of things to achieve it.
So if I hear you, you're saying, well, if we come to fully understand this new approach, then it might just change the way in general that we write software?
Yeah, basically, yeah. So for example, at QuantLab, we are using design by introspection quite a lot, besides the generation part.
So, you know, we verify the existence of members and stuff like that so that we can have a kind of a generic interface, but at compile time.
So it's a bit different than, for example, if you use the virtual interface and things like that.
But, yeah, there are pros and cons.
Right. Yeah, that's, I mean, yeah. And the more we can do at compile time in general, the better.
Right. I agree. Okay. And then the last article we have
here is there's the C++ Now submission deadline,
which is coming up on January 23rd, I think.
So that's one week from today that we're recording.
So you'll have a couple days left after you hear this episode to submit your talk to C++ Now.
Yes, and try to not be late.
I know like 50% of the submissions come in like the day after the final date or something ridiculous.
I mean, that's just a given with conferences.
I know enough conference organizers.
Don't do that to our conference organizers.
Get the submissions in,
please.
Very true. They work very hard.
Be nice to them.
The other dates to keep in mind are
proposals and decisions are going to be sent out
February 25th, and the program
will be online March 17th.
I assume you all are planning to go, Kris, and by you all I mean your crew at QuantLab, who we've had Lenny on recently.
Yes, definitely. It's a big conference to be in, and it's really close, so yeah, definitely we will be there.
It is very close, and off-topic, but have you made the drive from Denver up to Aspen yet on any of your previous trips?
Yes, I did.
Okay, that is definitely a drive worth doing.
And for anyone else who's listening, a lot of people, well, some people make the decision to fly to Denver, rent a car, and drive to Aspen just to experience driving through Colorado. And it's something to think about if you're going to come to the conference, for sure.
Yeah, definitely.
How long is the drive?
Three hours or something.
But very scenic.
Oh, quite.
You see everything that Colorado has to offer,
from the plains to the high desert and the Rocky Mountains,
and then a forest in between, and a couple of rivers.
Okay, so Kris, do you want to tell us a little bit about the two libraries you're currently maintaining?
Sure. I actually maintain four libraries at the moment.
Oh, four?
Yeah. It's a lot of work and hard to set up as well.
But the ones which we want to cover are the dependency injection library and the state machine library.
So, yeah.
Well, then what are the other two?
Yeah. So, there are two. One is for type erasure. So, yeah, we have quite a few of them, like Boost.TypeErasure, Dyno from Louis Dionne, and others. So I wrote my own as well, because why not? Obviously, I claim it's better in some ways.
And the other one is GUnit, which is a wrapper for Google Test, which allows you to do things more like in a Catch2 style. And also, I added support for Gherkin. If you follow BDD, you might find it useful as well.
Right, right.
So since you mentioned Boost, you do refer to both these libraries,
or the two, dependency injection and the state machine library, as Boost libraries,
but they're not officially Boost libraries, right?
Right.
I actually do not refer exactly to Boost; I'm trying to put quotes or brackets around Boost. But, yeah, that's a valid point. As a disclaimer, they are not in an official Boost release, but they were, you know, created with the intention of being accepted at some point in the future, so they follow all the Boost guidelines. And that's why, you know, I call them [Boost], because there's no staging kind of approach in Boost before libraries get merged.
Do they also require Boost?
No, actually, none of my libraries require Boost, nor the STL. It's usually just one header, which is generated. Just copy the header and you can use the library without the STL or Boost. You just need a C++14 compiler. That's all you need.
So I have a
question I just thought of before we move
into actually discussing the specifics of any of
these libraries. With four open
source libraries, how much
difficulty do you have in
keeping them up to
date as compilers are growing and changing?
And going back, I'm assuming, if it's anything like any of my libraries,
that you have workarounds for certain compilers and whatever and some of the code,
and then you realize, okay, now that GCC 8 just came out, I can go and remove some workarounds or something.
Do you manage to find time to keep all that kind of thing up to date?
Yeah, I'm trying, especially for DI and SML, to keep on top of that. And as you pointed out, there's a lot of workarounds which have to be, you know, handled at some point and fixed afterwards. So it's a lot of work. I also support Visual Studio, which adds 10 times the work as well on top of that. So, yeah, it's quite difficult, and it takes a lot of time. I would say making the Travis pass for all the compilers is like 50% of the time you spend on the library when you actually maintain it after developing. So, yeah, it's a lot of work.
How easily, or how, I don't know how to phrase this question: do you have to drop support for older compilers at some point? And Visual Studio, honestly, with as fast as it's changing, it's the easiest one to drop; be like, never mind, you have to have at least Visual Studio 15.4 or whatever the latest release is. Do you do that? Do you keep dropping off old compilers?
Yes, yes. I'm trying to keep it, you know, quite modern, but also, I'm trying to keep the C++14 support for all of them, which means that it's like Visual Studio 2015, Clang 3.8 or 3.6, and 5 upwards.
Okay, that's relatively old compilers to be maintaining at this point.
So one more question about Boost before we move on.
Have you actually submitted them to Boost for acceptance?
Well, they were on the review schedule.
However, I never found a review manager for them.
So maybe a
shameless plug. If you
see that and you would like any of
those, after what I will
be talking about in a sec,
feel free to contact me, and
we can arrange the review manager
part for it.
So you are interested in getting them accepted
if it goes down that road? Yes, I am.
I still am.
Although, maybe it's a bit controversial, but in this day and age, a Boost release is not as important, since we have GitHub and GitLab, and it's much easier to plug in libraries using Conan and other package managers. But I still feel that Boost is important and helps a lot of companies. So, yeah, I would love them to be part of it, officially.
Okay.
So let's start off by talking about the dependency injection library.
And maybe we could start by just having you explain exactly what dependency injection is.
Right.
Yeah, that's a good question, which I, you know, hear a lot of times.
Because when I developed the library, it's like a lot of people trying to figure out why you'd even care, what's that?
So, DI is actually a kind of mysterious concept, which grew a lot over the years, but it's
very simple.
It's all about the construction.
It comes from the object-oriented patterns, but it's kind of useful in all paradigms.
And when I say it's all about the construction, it's basically the idea that if you use the constructors,
you probably use some form of the DI already.
It's also sometimes referred to as the Hollywood Principle. So it means, like: don't call us, we'll call you.
And it's like, depending on what you actually pass by the constructors,
it depends whether you use the DI correctly or not.
So for example, there is like this anti-pattern,
which is called service locator,
which means that you pass in only one thing,
one constructor parameter to all your objects,
which will be like referred as a God object,
and you pass it around.
So that's not really a DI,
although you can say, oh, I'm using constructors,
so it's like, I'm using DI.
No, that's not really a good DI.
That's not the idea about DI.
That's like one giant global state.
Yeah, or like, you know, you pass the service locator,
you can resolve your dependencies,
but you do it afterwards.
And then it's like you really couple to the service locator
and you don't want to be coupled.
Like DI is not about coupling.
It's actually the opposite.
The other problem you may have with DI is when you carry dependencies. It means that you pass things to the constructor which you don't use immediately. So, for example, you have a service which you pass and you don't use it, but you have another object which you depend on, and you pass it through as well to that object. So that's bad as well; that's not really proper DI. And that's actually the Law of Demeter: it's a design guideline in which you should only talk to your immediate friends. You can see that you don't follow that rule when you have a lot of dots in your code. So, for example, if you have a class object in which you say .get().get().get().value: if you have a lot of dots, it means that you carry dependencies somehow, and that's not a good pattern either.
The other way you can screw up DI is by using singletons. Singletons on their own are maybe a bit controversial; not that terrible, as long as you inject them. But if you don't inject them and you use, you know, them directly in your code, that's really bad from the testing perspective, because you can't really test them. It's really hard to fake them because they're coupled to your code. So you don't want to do that either.
One more thing about DI which you don't want to use as well is unnamed parameters. That's talked about quite often recently. So, for example, if you have a rectangle and you have width and height as two ints, you don't want to use two ints for them, because the name actually doesn't matter in C++. If you do int width, int height, you can have different names in the header and in the definition file, and if you swap them around, it doesn't really matter. So you may have, with API changes, subtle bugs because of that, right? So it's much better, and it's much better for DI as well, because you can directly inject named things, like strong typedefs. So that's really, really good for DI. And the last thing is
dependency inversion, which is a concept in which you don't want to depend on concrete implementations; you want to depend on abstractions.
Because if I have to say there's one rule about this,
there's nothing certain in software development except for bugs
and constantly changing requirements, I'd say.
So it's obvious that you'll at some point have to change the classes or objects you are injecting.
So if you depend on the concrete implementations, you couple yourself so that you won't be able to easily change them. It's important to keep that in mind, so that you depend on the abstractions instead.
And the final thing I wanted to mention about DI, since I started this a bit long, but I'm trying to point out the things which are important from my perspective, is the composition root. The composition root is the unique place in an app where modules are combined together. You know that you use and apply DI correctly if you have the main and you say, you know, create me an app, that's my app, and you don't have to, you know, do anything else to create any other objects. So, for example, if you have classes which create objects themselves, if you see make_unique within your classes, then it means that you didn't follow DI and you don't have the composition root. And that obviously makes the testing difficult, and it's not really loosely coupled code. So that's bad from the DI perspective in a sense.
And DI has a lot of
benefits, but
the main ones are
the fact that it produces a loosely coupled
code so that you can easily test it
in isolation.
You can separate the modules and
develop them separately.
And it separates the business logic and object creation.
And also the testability, which is the main part, which DI gives you.
It's like if you follow TDD or BDD.
With TDD, you can easily inject the mocks for DI.
With BDD, you can have different configurations for the production and for testing environment.
So, yeah, those are the main benefits of DI.
Okay, so the first thing that, I mean, hearing everything you said, and you said singletons are terrible basically unless you pass them in as a dependency, because of testability and everything else, the first thing that comes to mind is a logger. Because it's the thing that everyone wants to make a singleton. Does that apply to the dependency injection mindset, and how?
Yeah, I think it applies to the DI mindset, as you pointed out. So if you use the framework, which we'll get to in a sec, right, then you don't really have to think about it, because you just say in your constructor that I need a logger, and then you can use the logger within your functions, because
it will be injected for you. You don't have to
really think about it, but
that will give you the idea
or the ability to easily test it
and fake it. I'm not sure whether
testing the logger output is a
good idea.
But having a singleton and trying to fake it, it's quite difficult. That's one of the examples where you may do both. I would still use DI for that; I would pass it via constructor. But I wouldn't be a strong opponent if you don't really want to do that, if it comes to the logger, because usually you have a macro or something like that either way. So, yeah, it depends. But yeah, a logger is a good example where you may not want to use DI directly, but I would still encourage it, because it gives you the ability to fake it and possibly test it or do whatever you want with it.
Right, because if you're not using DI with something like a Logger,
then first of all, you have to create a concrete logger singleton
before running a unit test,
because if that unit test hits the logger
and doesn't have a singleton, it's going to have a problem.
Yeah, right.
You'll have problems if you use singletons either way.
Right.
Unless you inject them; then you may have fewer problems.
But yeah, singletons are bad in general,
but there are use cases for them.
And I would encourage everyone to inject them so that your unit tests or other tests, your integration tests, will be clean.
So do you want to tell us a little bit about your DI library?
Sure.
So DI is quite a nice concept, in my opinion, because it gives you a lot of flexibility with your design and produces a
loosely coupled code. But there's like an issue with it, because when you approach the manual DI,
which means that, you know, you have a lot of classes because you follow the solid principles
and mainly the single responsibility principle, because you want to have, you know, small classes
which you will inject, you don't want to service locators.
So that means that you will have a lot of them.
And if you have a lot of them, you will have to create a lot of them in the main,
let's say, if you follow the composition root and pass them through.
And obviously, that will cause a lot of problems for you
because the order will be important.
If you change shared pointer to unique pointer, you will have to change that.
It's quite a lot of work just to maintain that. And when I'm saying quite a lot of work, you can see projects which have hundreds of thousands of lines of code of that wiring. And then, you know, developers are lazy by default, I'd say, and they don't want to, you know, over-design and stuff like that. So often you will break the single responsibility principle by having a hack, because "I will just extend it a bit because I'm in a rush, I have this fix to make," because, like, it's difficult, because I have to maintain all this wiring mess. So, you know, and that's where actually the framework comes in handy.
And the framework I developed is called Boost with quotes, DI,
as we pointed out before.
And it is a C++ take on popular frameworks in other languages,
like Java and C#.
DI is way more popular in those languages than in C++,
and I think there are a few reasons for that.
One is that they're more object-oriented only kind of languages.
C++ is more about multiple paradigms.
And just to mention that DI might be applied for functional programming as well.
You can have functional DI where you inject functions,
not necessarily objects, but the concept is the same.
And the framework itself
follows the rule that don't pay
for what you don't use, so I'm trying
to make it as complete as
possible. And the main
idea for DI
framework in general is to automate
the wiring mess
and do more things. But it's like this wiring mess, which I mentioned a few minutes ago about the manual DI, when you have in main, when you follow the composition root,
and you have to instantiate all the classes,
pass them through the constructors to one object,
and after that create the other object,
pass it through all the constructors.
And I said it's like there might be quite a lot of them.
So DI, the idea behind the framework is that you say,
create me the app, and DI will figure out all the constructor parameters,
all the templates, everything for you, and will just create an app for you.
So you don't have to deal with that wiring mess.
It will be totally automated for you.
So that's the main principle of the framework on its own. And on top of that, there are a few benefits of using it. The main one would be the performance: by default, there's no runtime overhead with the Boost.DI library, because it generates the same code as you would generate
by writing it by hand.
Okay.
You know, like make unique or, you know, having a class
and just, you know, moving objects.
But in most cases, actually, it can generate better code
because it knows all the objects which have to be created, right?
So it can, you know, create a different layout, combine objects together which should be together,
allocate the memory up front for objects and stuff like that.
So all of that might be analyzed by DI at compile time
because that's the idea behind Boost.DI.
And it gives you better performance.
What is actually funny about it is I made a lot of benchmarks regarding the compilation times, because I was really scared that C++ would do it the C++ way; it will compile the C++ way, meaning forever. But actually, if you compare Boost.DI to a different library in Java, which is called Dagger 2, it compiles faster than the Java one. Because Java's Dagger 2 is actually generated: you know, they have these annotations, and they have a parser which goes through the code, tries to find the annotations, and generates code out of that, and then you recompile that code. So it's like a step before the proper compilation.
Right.
Like preprocessor kind of thing.
And that actually compiles slower
than Boost.DI
with the same example, so that's quite handy
in my opinion. So
to sum up a bit of the
library, the main benefits of using it is the fact that
it reduces the boilerplate because you don't have to do all this wiring yourself. You don't have to
write it yourself. And you can easily refactor the code as well. So if you change the constructor parameters, if you do the manual DI, you would have to change the wiring mess again: you know, find which object, what order it has to be passed through; maybe the order of the dependencies changed, so you have to go through all of that. And that's, as I pointed out before, a lot of work. With DI, that will be automated and you don't have to think about it at all.
And on top of that,
it's non-intrusive,
which is something
I'm really actually proud of. It's an idea
how to deduce the constructor
parameters out of the
classes.
So obviously, in C++,
a constructor is not a function,
so you can't just do function traits
on it. But there are ways
in which you use some
TMP tricks, like an implicit templated conversion
operator, and you
do
small magic with
is_constructible stuff.
You can actually verify the constructor parameters and that's important because you don't want the
framework to be... you don't want to rely on the framework. It's often
the case with DI frameworks that you depend on the framework itself
by using annotations or macros.
It's like, you know, instead of having a normal constructor,
you have to add, you know, an inject macro or annotate it,
like with @Inject in Java.
And that's not ideal because you couple yourself to the DI framework,
and DI is all about loosely coupled code.
So why would you actually couple yourself to the DI framework?
It doesn't make sense, right?
Because you can't create, for example, the objects
without having those annotations or macros.
So yeah, with Boost.DI, you don't have to change your code at all
in 99% of cases. You just say, create me an app,
and it will just do it for you.
So you say, create me an app, and you pass in the type
that you want it to create as a template?
Yeah, and it will create the object graph, you know, without any problems. However, if you have polymorphism of some sort, like templates, concepts, inheritance, variant, or type erasure,
you would have to add the binding, the wiring, what that kind of an interface concept has to be bound to.
So, for example, let's say you have an interface, ILogger —
not the best example, but let's say you have ILogger
and you would like to bind it to, you know, FileLogger.
You would have to say, bind ILogger to FileLogger, for example.
Or if you have a concept or template
which you can use DI with as well,
you just say that, like, the class Logger,
concept Logger binds to
FileLogger or something like that. But yeah, that's all you have to
actually write down explicitly. You don't have to write down that this class takes
these constructor parameters. You just say these polymorphic types have to be
bound to that, and it's fine — just create it somehow by
going through all the
object graph for that given
type, find out all the constructor
parameters, create all of them,
and pass
the required polymorphic
types into them as well.
Okay. Are there any
performance implications
to looking up all your DI objects
with this object graph?
Right.
So I kind of mentioned that already a bit.
But no, there are actually no negative
performance implications of using it.
Actually, there are positive implications of using it
because you
can have a better cache layout, or, you know, better memory layout, and combine types
together, and you might get better performance by using it. But it obviously depends on
the application, because a lot of programs actually do not require DI to be that fast,
because it only happens at startup, for example.
So you usually create objects at startup and then you don't care, right?
Right.
But it's sometimes important when you want to create some objects afterwards.
So you
want to always be as fast as possible. But it also depends on the application.
So with Boost.DI you can have both. You can have no runtime overhead, or a runtime
overhead if you want something totally runtime-based — if you want
to bind the dependencies at runtime. So, for example, kind of a Java style:
you have an XML file and you want to wire
things according to what you have in that file at runtime. You can do that with Boost.DI as well.
But then, you know, if it can't create them, it'll throw an exception or do whatever you want with that,
which is a runtime error, which Boost.DI is not about.
Boost.DI is about a compile-time guarantee of object creation.
So the idea is that if it compiles, it means that it can create the objects.
So, for example, if you miss a binding for a polymorphic type, Boost.DI won't compile it.
And it will give you, hopefully, a nice
error that, well, did you forget
to bind this interface or
concept because
it's abstract and I can't really create that.
So, it will give you a
compile-time error.
Well, I hate to ask, but is it a readable
compile-time error?
Yes, I would say yes.
I mean, it's C++.
I'm trying to achieve the best error possible.
So in most cases, it will give you a three-line kind of error
where the call was made for the create.
And after that, this type cannot be created because it's abstract.
And it gives you a hint that did you consider binding an interface to an implementation.
And to achieve that, I'm using kind of a trick
with the linking part of the application.
Because by default,
if you do a static assert,
unless you do the static assert
on the kind of caller side,
you will get the whole call stack.
And you don't want that.
So the way I'm doing it,
I'm having a deprecated create method,
which will give you the first call site, just the first one,
saying that this method is deprecated — which is the create —
and I also put in, like, the constraint is not satisfied,
so it kind of imitates the concept part.
And after that, it goes through the whole program,
and when it cannot create the type,
because it's not bound or it's not creatable for some reason,
I have an inline constexpr static function without a definition,
which, instead of a static assert, says, like: error,
this type cannot be bound or cannot be created.
And that actually gives you a warning, which I turn into an error via diagnostics,
and then I get the second line of the error.
So it's like the call site via the deprecated attribute,
and after that the error on its own via the inline function with the missing definition.
So that's the idea behind it, and in most cases, it gives you two, three lines of
error. However, it's more complicated when you bind templates. But yeah, I won't go into that,
because it's not as good. But by default, if you use standard things like variant or, you know,
type erasure, you will get a nice error message. It might be the first time on the program, I think, Rob,
that the deprecated attribute was mentioned,
which was added in C++14, I believe.
Right.
And most people don't talk about it.
Yeah.
It's an interesting use of it.
Yeah.
So basically, you're pretending like it's going to work
by providing a deprecated function that will then cause a linker error later.
Right.
So I do check it up front so I can check at compile time whether this type will be creatable.
Right.
If it is creatable, then obviously I don't mark the function deprecated.
I have two overloads.
And if it's not creatable, I mark it deprecated.
And then somewhere in the call stack,
I'll get the other error message.
So just using regular overloading,
if you happen to call the deprecated version,
then we get the diagnostic, basically.
Yeah, and you happen to call the deprecated version
if the type is not creatable, which we just check at compile time.
So out of curiosity, have you given any thought to
or proposed anything to the standard for something that would make this process easier and help you give better error messages?
Well, concept's supposed to help that, right?
It's supposed to, but I don't know. People argue about that.
So the thing with concepts, I'd say, is the fact that you can easily get the error message up front,
like with the deprecated trick, because that's what I kind of imitate.
But the why — why the constraint is not satisfied — might actually be a bit more difficult to achieve.
So right now, Clang and GCC — the parts of them which support concepts —
tell you kind of exactly from the first call stack why,
but it's quite difficult to tell it:
ten layers below, I have this error because of something,
so propagate that up front and tell me about it.
So yeah, it's not as good, but I think you can make it work as well.
It just takes a bit more effort.
And the other proposal is like the trace for constexpr,
which kind of is basically the same as static assert,
but without the call stack, right?
And you can put the types on that.
So that would be useful as well, like static printf kind of thing.
Debugging a compile time failure of a constexpr call can be difficult.
I agree.
So some tools to make that easier would be,
I mean, at the moment, personally,
I just convert it to a runtime call
and then use the debugger to figure out what happened
because getting any real diagnostic at compile time is hard.
Yeah, I totally agree.
Usually you have to, you know,
I follow, like, TDD and try to make
as small changes as possible
so that I know what's going on.
But yeah, I do it at runtime.
I'm not trying to do that at compile time.
Right.
Was there anything else you want to talk about with DI
before we move on to talking about the state machine library?
Sure.
Just a small thing.
I also wanted to point out that with Boost.DI,
you can have also other benefits.
It's a small core.
It's like 2,000 lines of code.
So it's really easy to add extensions.
For example, a runtime injection is just an extension which you can
write on top of the static compile-time version, which is quite neat.
And also you can use policies. Because you own the type-creation part
of your application, you can, for example, imagine a situation where you would like to restrict the types which can be injected.
For example, say I have a policy in the company
that does not allow raw pointers to be passed through constructors,
as a good practice, let's say, or something like that.
Then you can do that with DI, because you own that, and you can easily write a policy for it.
And the last part is that you can also create the mocks automatically.
So you can imagine that if you have the testing part of it, for TDD,
you can say, create me these units, and for all the polymorphic types, inject the mocks and create them —
you know, using, like, FakeIt or GUnit. You can do that as well.
You can create mocks without using the MOCK_METHOD part of Google Test,
Google Mock.
And you can have different configurations for production and testing.
So yeah, just wanted to point out that there are a lot of benefits of using it.
Sounds like a powerful library.
I wanted to interrupt the discussion for just a moment to bring you a word from our sponsors. The PVS-Studio team invites listeners to get acquainted with the article, Technologies Used in the PVS-Studio Code Analyzer for Finding Bugs and Potential Vulnerabilities, a link to which
will be given in the podcast description.
The article describes the analyzer's internal design principles and reveals the magic that
allows detecting some types of bugs.
So do you want to tell us about SML?
Sure.
So SML is a state machine library. If you don't know what a state machine is, it's usually a model of computation which kind of prevents spaghetti code — lots of conditions like, if A and B and not C and D. If you have code like that,
you probably would benefit from using a state machine library, because that kind of makes it more declarative
and you express what and not how.
So that's the main benefit of using a state machine kind of part.
And it's modeled on the Unified Modeling Language, UML.
So it's a standard for writing state machines, and currently
it's at, like, version 2.5 — it's kind of like the C++ standard. And it has multiple features
on top of just the simple transitions, like defer, history, or other things which
are not important by default, but it's good to know that you can do more stuff with it.
So yeah, that's about the state machine itself.
And yeah, I actually wrote the Boost, again, with the quotes,
SML library, which kind of tries to implement that state machine standard.
I did not know that there was a standard for specifying state machines.
It's kind of
the standard for specifying the diagrams
for UML.
So when you have the UML
you usually would like to
have a standard way of
defining the diagrams and
one of the diagrams is the state diagram
which kind of translates to
the state machine.
And SML is kind of the declarative way of writing the same thing as the diagram.
So you can have translation one-to-one.
So, for example, if you have a PlantUML state diagram, the code which you have to write for Boost.SML is pretty much the same.
So that's quite neat, and you can easily translate from PlantUML to Boost.SML and vice versa.
Now, if I recall, you gave a talk on your SML library at CppCon 2018, is that correct?
Yes, I gave a talk — you know, it was called State Machine Battlefield.
So I kind of compared different approaches to writing state machines,
including, like, if/switch, and using the Boost libraries as well,
coroutines, variant — which is quite popular recently as well.
And the motivating part of that is, you may ask why we need another
Boost state machine library, because we have two already in Boost. One is called Statechart and
one is Boost.MSM. So Statechart is kind of oldish — I don't want to be offensive here, but it's
not as fancy as Boost.MSM, so I won't be talking about it — because it's runtime,
it uses virtual interfaces, dynamic allocations.
It's not really as neat as Boost.MSM,
which I find extremely, extremely nice.
However, it has a few problems.
The main one, it's extremely slow to compile.
It's like really slow because it's still using Boost.MPL — like C++03, MPL
vector kind of thing. It has tons of macros, and it generates only a jump table, and its
error messages are terrible as well because of the MPL vector. And the binary size is
not ideal either. So I was using Boost.MSM before.
I was, like, really happy with it until, you know,
I had to actually write the production state machine,
which was a bit bigger than just the simple examples,
and I wasn't able to do that.
But I still wanted all the goodies out of it,
which is the declarative style which follows the UML.
So, yeah, I decided to write SML, which actually compiles
like 60 times faster than Boost.MSM, because it's just using modern C++ — there's nothing
that magic about it. It's just modern C++: variadic templates, you know, constexpr if,
all the fold expression stuff. It's just much faster than MPL.
And also libraries like Fusion or MP11 as well,
they're much faster than they used to be before.
And one thing which I didn't like about MSM was the fact that,
because I'm in the finance industry, so I care about latency — I wasn't able to change
the
generated code out of MSM,
because there was no policy
to do that. However,
MSM has a really good design.
There are different front-ends
and different back-ends, so I actually
kept that idea as well. However, the back-end
does it at compile time,
so you don't pay for what you don't use.
If you don't use a UML feature in your transition table,
which is part of the definition of your state machine,
then it won't be even compiled in.
So that's pretty handy, like don't pay for what you don't use.
We all love that.
But it only generates the jump table for you,
which sometimes is good, but it's not always the best.
Therefore, I added a few policies to SML
instead of just being coupled to one.
And that's what I was actually referring to in the talk at CppCon,
that you have different ways of implementing state machines.
You can do switch, if-else, fold expressions,
coroutines, computed goto, jump tables.
You have tons of ideas how to do that.
And that's, for example, a variant as well.
It's just runtime dispatching on events, right?
So you can approach it many different ways.
And that's what I wanted to add to the SML.
So SML has different backend policies,
which allow
you to change them and experiment with them.
And that gives a lot of
performance benefits
for you because you can easily
switch how you want to
tackle that.
Are any of these libraries
used in any released applications
or projects?
Yeah. They're used in production.
It's actually quite often that I'm asked whether they can be used,
and yeah, they are.
So, a lot of latency shops — I can't really talk about the names that much,
but a few big names are using them,
and yeah, they're used in production because of the latency profile,
as well as because they improve
expressing
the what, not the how.
So they're quite handy.
But there are different profiles
and different companies which are
using them.
As I pointed out, there are a lot of latency profile companies
which are interested in them because of the performance.
However, in the same sentence,
there are companies which do not care that much about the performance
but do care about the declarative part of it.
But also they would like the ability to, for example, for DI,
add the bindings
at runtime from
config or something like that.
So yeah, there are different use cases
for the libraries. And there are pros and cons
of using them.
It always depends on your use case,
I guess.
Before we let you go, Chris, is there anything else you wanted to share?
Do you want to give us your Twitter handle or blog or anything like that?
Sure.
So, you know, kind of a last word of wisdom, I'd say.
Sure.
Leverage zero-cost libraries — that's an idea, you know, to consider
for anyone who is willing
to use libraries.
We have this, you know,
idea in C++,
don't pay for what you don't use,
but it's often forgotten
that we can develop libraries
which we won't have to pay
for anything as well.
So if you can leverage them,
I think that's a really good idea
for everyone, because we don't want to repeat the code a lot of times. And yeah, if you want to contact me,
go for it. I have a Twitter. As I pointed out, if you want to be a review manager for any of
those libraries, feel free to contact me too. My Twitter is @krisjusiak.
I guess, yeah.
It's Kris with a K.
And my GitHub is pretty much
the same, so
it will be really easy to find.
And all the libraries are on
GitHub. They're open source,
obviously, on Boost
Experimental. And
And I wanted to point out that Experimental is not really experimental,
which means that those libraries are not experimental in the ABI or anything.
It's just called experimental because they're not part of Boost.
Ah, okay.
So maybe the better name would be staging or something like that.
But yeah, that's the GitHub part of it.
Okay.
Well, it's been great having you on the show today, Chris.
Yeah.
Thank you for having me.
Yeah.
Thanks for joining us.
Okay.
All right.
Thanks so much for listening in as we chat about C++.
We'd love to hear what you think of the podcast.
Please let us know if we're discussing the stuff you're interested in,
or if you have a suggestion for a topic, we'd love to hear about that too.
You can email all your thoughts to feedback at cppcast.com.
We'd also appreciate if you can like
CppCast on Facebook and follow
CppCast on Twitter.
You can also follow me at RobWIrving
and Jason at Lefticus on Twitter.
We'd also like to thank all our patrons
who help support the show through Patreon.
If you'd like to support us on Patreon,
you can do so at patreon.com
slash cppcast.
And of course, you can find all that info and the show notes on the podcast website at cppcast.com.
Theme music for this episode was provided by podcastthemes.com.