CppCast - Synchronization Primitives
Episode Date: August 10, 2017

Rob and Jason are joined by Samy Bahra from Backtrace to talk about lesser-known synchronization primitives and his work on Concurrency Kit.

Samy Al Bahra is the cofounder of Backtrace, where he is helping build a modern debugging platform for today's complex applications. Prior to Backtrace, Samy was a principal engineer at AppNexus, where he played a lead role in the architecture and development of many mission-critical components of the ecosystem. His work at AppNexus was instrumental in scaling the system to 18 billion impressions with orders of magnitude in efficiency improvements. Prior to AppNexus, Samy was behind major performance improvements to the core technology at Message Systems. At the George Washington University High Performance Computing Laboratory, Samy worked on the UPC programming language, heterogeneous computing, and multicore synchronization. Samy is also the founder of the Concurrency Kit project, which several leading technology companies rely on for scalability and performance. Samy serves on the ACM Queue Editorial Board.

News: ReactiveX; Beast accepted to Boost; A summary of the metaclasses proposal for C++; C++17 in details: Filesystem; CppCon 2017 Schedule

Samy Bahra: @0xF390

Links: C++Now 2017: Samy Bahra "Multicore Synchronization: The Lesser-Known Primitives"; "Multicore Synchronization: The Lesser-Known Primitives" Slides; Concurrency Kit

Sponsors: Backtrace

Hosts: @robwirving @lefticus
Transcript
This episode of CppCast is sponsored by Backtrace, the turnkey debugging platform that helps you spend less time debugging and more time building.
Get to the root cause quickly with detailed information at your fingertips.
Start your free trial at backtrace.io slash cppcast.
CppCast is also sponsored by CppCon, the annual week-long face-to-face gathering for the entire C++ community.
Get your ticket today.
Episode 113 of CppCast with guest Samy Bahra, recorded August 9th, 2017.
In this episode, we talk about the CppCon 2017 schedule.
Then we talk to Samy Bahra from Backtrace.
Samy talks to us about lesser-known synchronization primitives and Concurrency Kit.
Welcome to episode 113 of CppCast, the only podcast for C++ developers by C++ developers.
I'm your host, Rob Irving, joined by my co-host, Jason Turner.
Jason, how are you doing today?
Doing good, Rob.
Just a little bit of a disclaimer today that I am using a different setup,
so if we have any audio problems or anything, I can be blamed.
Yes, but we're hoping it fixes some audio issues we occasionally have with you.
Yes, we will see what happens.
Yeah.
So how was your DC trip?
It was pretty good, interesting.
Got to see some old friends.
And I don't know if we've talked about that trip at all publicly here, but I gave a short talk at Northrop Grumman and had a good time.
Awesome.
Okay, well, at the top of every episode, I'd like to read a piece of feedback.
This week, we got an email from Vladimir, and he writes in,
Hi, thanks for the show. I really enjoyed listening to it during my commute.
I wanted to make a comment on coroutines versus futures that you discussed on the last episode.
The discussion was under the assumption that Gor compared coroutines to futures directly and said futures had no future because of coroutines, but I don't think that's what he said. He compared futures to reactive streams, which can be thought of as a generalization of futures, where a future is just one type of stream that has just one element in it. Reactive streams have been getting a lot of traction across the industry in many different languages, and he shared a link to reactivex.io.
And it's really interesting
because ReactiveX shows
that it's this framework
that runs on multiple different languages,
including C++.
Interesting. Yeah, and it looks like the main contributor to that for C++ has given a talk at CppCon, so maybe we will try to get him on and dig into reactive streams a bit more. And I believe, you can correct me if I'm wrong, that we do have some updates on some reactive stream links that we could also add, that we received in the past week.
Yeah, yeah, definitely.
Okay.
Well, we'd love to hear your thoughts about the show as well.
You can always reach out to us on Facebook, Twitter,
or email us at feedback at cppcast.com. And don't forget to leave us a review on iTunes.
Joining us today is Samy Bahra.
Samy is the co-founder of Backtrace,
where he is helping build a modern debugging platform
for today's complex applications.
Prior to Backtrace, Samy was a principal engineer at AppNexus,
where he played a lead role in the architecture and development
of many mission-critical components of the ecosystem.
His work at AppNexus was instrumental in scaling the system
to 18 billion impressions with orders of magnitude in efficiency improvements.
Prior to AppNexus, Samy was behind major performance improvements
to the core technology at Message Systems.
At the George Washington University High Performance Computing Laboratory, Samy worked on the UPC programming language, heterogeneous computing, and multi-core
synchronization. Samy is also the founder of the Concurrency Kit project, which several leading
technology companies rely on for scalability and performance. Samy serves on the ACM Queue Editorial
Board. Samy, welcome to the show. Yep, it's my pleasure to be here.
What is the UPC programming language, if I might ask?
Sure. So UPC is Unified Parallel C. There is, in fact, a UPC++ that is also being worked on.
As far as UPC is concerned, it is a PGAS programming language that extends C99.
More specifically, sort of the core extension there is this notion of a shared qualifier,
which allows a variable to be accessed by a distributed system.
So it's the same program, multiple data.
You could have the same instance of the program running on different machines,
each accessing a variable with a shared qualifier.
And by PGAS, what that means in this particular context is there is an
explicit notion of locality.
So it's not some oblivious global address space with no notion of locality. It actually has strict semantics around locality,
which allows for software developers to make design decisions
and performance design decisions around locality.
So I feel like generally in my world, if I'm going to talk about locality, I'm thinking
like cache locality, like how close things are in physical system memory.
But I'm getting the impression you're talking about like locality as in these distributed
systems are geographically near each other.
Yes.
So this is locality in all senses.
So it's memory locality on a single node and then memory locality across nodes.
So these could be located, typically they'll be located somewhere else on a rack or another rack entirely. For most high performance computing applications, you
tend to use interconnects with much higher bandwidth and lower latency guarantees than
most commodity hardware. So typically over there, you're at least in the same rack, per
se, depending on the hardware that you're on. But regardless, that's obviously all
significantly more expensive than local memory access.
Right.
But in all cases, we're talking about
reducing latency effectively.
Reducing latency or improving bandwidth,
depending on the problem that you're dealing with.
Okay.
Well, Sammy, it's great to have you on.
We have a couple news articles to talk about,
and then we'll start digging into your work at Backtrace
and your recent talk at C++ Now, okay?
Great.
Okay, so this first one is that Beast has been accepted into the Boost libraries.
We've talked about Beast quite a few times on the show and had the author on a while ago, Vinny Falco.
So it's great to see that he was able to get it accepted.
Yeah, that's pretty big news.
Yeah.
There was one comment on the
Reddit thread here which I thought was interesting.
People were asking him if he's going to keep the name
Beast since it's kind of
nondescriptive
instead of using something like
Boost.Http or Boost.Socket.
And he actually pointed
to a question on
the FAQ page about
the decision to stick with Beast. I'm not sure if I completely agree,
but I thought it was interesting that he has thought ahead of that. I would
at least maybe theorize that if it ever goes the standardization
route, the name may change. Yeah, I don't see standard
Beast, really. That doesn't sound quite right.
Yeah, maybe. Maybe not. We'll see, but it is exciting news for Vinnie.
I'm very curious, after everything else that we saw recently about the Boost standardization process, where we're talking extremely tiny but controversial things, this is a much bigger and less controversial thing. It would be interesting to get a window into how that process went.
Yeah, maybe we should get him on again.
Yeah.
Do you have any thoughts on this one, Sammy?
No, I do not.
I'm a
neckbeard through and through.
Okay, this next article is a summary
of the Metaclass proposal
and this is coming from Jonathan Boccara's blog, which is always really great.
And he actually had this reviewed by Herb Sutter, and it provided just a really good summary of all the work Herb Sutter has been proposing for Metaclasses.
Yeah, as always, well-written article on Jonathan's blog. Yeah, so if you hadn't had a chance to watch Herb's video
or read the actual proposal paper,
I would highly recommend reading this summary of the Metaclasses proposal
by Jonathan on Fluent C++.
I personally can't help but be left with the impression that
Metaclasses is going to be like learning a new programming language.
Yeah. Of course, how many people are really going to have to work with metaclasses? That's what I'm also wondering. You know, things start out with crazy verbose syntax, they're esoteric, and only the experts use them, and then after a few years or a decade or whatever,
the average programmer is throwing around templates when they need to.
I mean, you know, it's...
That's true.
It's specifically the constexpr blocks that make me go,
I have a difficult time reading this more than other parts.
So we'll see though.
Okay, next up
is C++17
in Details File System, and this
is on Bartek's coding blog.
And it's just a good
in-depth summary of
the file system proposal that has been
accepted in C++17.
If you haven't read too much about
the file system proposal
or if you haven't used Boost File System,
this is a pretty good overview of how it works
and what you'll be able to do with it.
I really like how you put that, an in-depth summary.
Maybe not too in-depth.
Yeah, it's a good summary.
It's definitely a great primer.
Yeah.
Okay, and then the last exciting bit of news is the CppCon 2017 schedule is out.
And obviously you're in here a couple times, Jason.
Samy, is anyone from Backtrace going to CppCon this year?
Yeah.
So Abel will be giving a talk at one of the open sessions. Abel is our CEO and co-founder.
He will be focusing on debugging technology, unsurprisingly.
So if folks are interested in debugging technology, I do highly recommend it.
There is another talk that also stood out to us that seems very interesting.
It's from a bunch of folks from Microsoft that are giving a talk on debugging
large-scale commercial applications.
Okay.
So Abel's given a couple of talks on GDB, right,
that have been recorded and such.
More on debugging technology.
Oh, debugging.
We have talks on internals of debuggers,
how they work, when they don't work,
why they don't work, and what you can do about them,
and then debugging at large.
So if you're in a distributed system where you may have either thousands of servers
running your code or hundreds of thousands of users
or millions of users running your code and you have crashes,
how are you actually able to effectively triage,
prioritize those issues, and act on them?
And there's a lot of interesting technology
behind solving that problem.
Okay, right.
One of the things I wanted to call out in the CppCon schedule
is Sarah Chipps, who we had on a while ago
talking about Jewelbots, is going to be doing a talk on Friday. And not only is she doing this talk, which is titled Building for the Best of Us: Design and Development with Kids in Mind, but she's going to put on a workshop so kids can get their hands on Jewelbots. So if you're going to the conference and you live in the Seattle area, then you could bring your kid along. And I think, you know, you don't need to pay a ticket entry for your kid. You can just bring them. Or if you make the CppCon conference into like a family trip, you could bring your kids with you. But that's something I think worth mentioning too.
Well, I mean, we haven't seen a lot of kids at CppCon yet, but it'll be neat if we do. But also, just on the topic of some of these more wider-appeal sessions, the open sessions specifically, like Samy was talking about: none of them have been announced yet, because there's no official schedule for them yet, I don't think.
The open sessions, yeah, they say TBA on the schedule.
Anyone can come to them. You don't have to be a registered attendee of the conference. So the open sessions tend to be in the morning
and at lunchtime, and they're not recorded, but they're more kind of general content that people
can propose later in the schedule. And yeah, anyone in the community who happens to be by can
visit them or can submit their own talks.
We had that happen last year.
Someone gave a talk who wasn't even attending the conference.
Okay.
Well, Samy, let's get started with just talking about what exactly your role is at Backtrace.
Sure.
So I'm the CTO and co-founder, primarily focused on product, engineering, and the technical roadmap of the company.
So I make sure that our core technology is able to meet the demands of backers and our customers,
make sure that the engineering organization is able to get its job done effectively.
So a facilitator of sorts.
How long have you guys been in business now?
Just over three years at this point.
Okay.
And you gave a talk at C++ Now 2017 this year,
and you talked about lesser-known synchronization primitives.
Can you give us a bit of an overview of that talk?
Yeah, sure. And you all again have to excuse me. I'm currently in the middle of a terrible cough.
So I will likely cough a few times. So the talk was fairly broad. It covered things from basic atomic instructions, hardware support for concurrency, and touched
on topics around memory management and advanced data structures.
All these primitives either help with performance, reliability, or reducing program complexity.
So as far as the primitives are concerned, these are things that are not exposed by the
standard library or by the language standard itself. For atomic operations and hardware
support, for example, there is the notion of a decrement-and-test-if-zero operation, which
is a very common thing for reference counting.
You decrement a reference count,
and you want to check whether it has reached zero.
There is a fetch and add operation,
but even if you're simply checking for a value of one,
this will always compile down
to a suboptimal instruction sequence with
all C++ compilers that I have tested.
So this does have implications on latency if you have some heavy reference counting
code and it does have implications on the instruction cache.
Then of course there's also all sorts of other interesting extensions.
So for example, Intel recently released an instruction that allows read for ownership.
So it allows finer grain control over cache coherency.
So a modern processor today obviously has a cache coherency protocol which ensures consistency
of memory on the system.
And the cache coherency protocol itself can be a significant bottleneck for a lot of classes of applications.
And if you're not careful with regards to how your parallel program is reading and writing to memory,
you will have suboptimal performance. So a very common pattern in modern programs, for example, is to
read a region of memory and then subsequently write to it.
This does end up generating cache coherency traffic, which can be fairly expensive. So now you have the ability to
actually read a region of memory but indicate the intent of writing to it. So rather than having two cache coherency cycles,
you will only have one.
And then of course there are other very interesting things
like Intel's recent support
for restricted transactional memory,
which allows you to do all sorts of interesting things
for concurrent data structures that have data parallelism.
So that was sort of the first segment.
Happy to sort of dive into that further.
I have a couple of questions, yeah,
before we move on, I think.
So, atomics are kind of essential to lock-free programming, I guess.
But there is still...
They're not free, right?
If something is an atomic operation,
it's still going to have to cause all of the cores
to at least agree that, you know, something is being operated on in an atomic way, right?
Yeah, I mean, this depends on the underlying memory model and sort of the concurrency requirements of the algorithm being developed.
And then the patterns of memory accesses that are being made. So if you're using a true read-modify-write atomic operation on x86, for example,
yes, that is extremely expensive in the sense that you are
effectively serializing the pipeline.
So there is a real cost even in the absence of shared memory accesses.
Now, if there are shared memory accesses, things obviously get even more expensive because you have all this cache coherency traffic.
You have all this chatter that's going on between processors.
And that in itself will have other impacts.
Not only do you have increased latency because now you're going off the core that you're on or the memory controller that you're on,
but you also have the potential for increased latency due to congestion over the interconnect.
Now, if you do constrain the complexity of your algorithms, in many cases you can get away with using
loads and stores
which are atomic on x86.
So if you don't have
shared memory accesses,
or you don't have actively mutated shared memory,
then those loads and stores are practically free.
Interesting.
So we keep bringing this around to x86, but a fair number of people are using ARM for,
well, I mean, lots of work these days.
Do you have any experience with how these things impact a multi-core ARM?
Yeah, I mean, it's all the same principles.
Okay. So ARM has a much more relaxed memory ordering model.
So for the purposes of correctness, the developer has to explicitly emit fences
or ensure that the appropriate acquire/release qualifiers are added to the atomic operations,
and those end up serializing things.
Okay.
So you did briefly start to mention transactional memory,
and I'm curious if you have at all followed the transactional memory standardization stuff that's been discussed with C++.
Yes, so I have looked at that.
I did not have any real thoughts around this. I think the extensions seem sane.
Okay.
Yeah, no real thoughts beyond that.
There was nothing really novel about it.
It was very logical, I thought, and well thought out.
And, you know, one thing I really did like was, you know,
there was this emphasis on, how can I say,
exposing composability of data structures, which are suitable for restricted transactional memory via the type system.
And I thought that was a clever way of dealing with that problem.
You know, the more general question I have is, you know, how relevant will RTM, restricted transactional memory, be outside of, so far,
Intel and Power architectures? So I had seen a couple of comments from people that were
something along the lines of, like, transactional memory, I thought that went out in the 90s,
and we gave up on that. And now to hear that Intel is adding instructions to support it and we're talking about standardization and C++, it's not a topic I'm terribly familiar with in the first place.
Yeah. Yeah, I think transactional memory went out of fashion in the context of hard transactional memory.
So transactional memory where you have guarantees of forward progress.
And there's a lot of tradeoffs there,
and there are a lot of issues just around feasibility.
Actually, what performance advantages are there to that model,
and what are the advantages to simplicity when you don't necessarily always have,
you know, transparency around things like contention. Restricted transactional memory, on Intel and Power, does not guarantee forward progress.
So it is best effort.
You essentially mark a region of executable code as executing in the context of a transaction.
And you write to memory, you can read from memory, and then you commit.
If there are conflicts to any of the regions of memory you interacted with,
for example, reads or writes from other cores,
that transaction will abort and you're forced to retry.
And because of the absence of those forward progress guarantees,
you simply do have to ensure that you have a fallback path
that's built on blocking synchronization.
And that's sort of the driving principle
behind lock elision, which leverages transactional memory.
So lock elision will essentially
elide lock acquisitions,
as if the lock and unlock operations were no-ops,
and treat the critical sections as transactional memory sections.
And as long as there are no conflicts,
it'll go through without really introducing any contention
to other lock holders.
But if there are conflicts and you retry,
then you actually have to acquire the lock.
Okay.
Yeah.
I do touch on this in more depth in the slides, so check that out.
There's also a whole bunch of other stuff I talked about.
So another one is synchronization primitives.
So these are alternatives to traditional condition
variables, mutexes, read write locks.
So you have mutexes that essentially provide
perfect scalability.
The notion of a scalable mutex might sound like a misnomer,
but what I mean by a scalable mutex is under load you want to saturate the
system.
You don't want to degrade performance.
So a typical lock that you'd see in libraries today is in one region of memory and then
you have a bunch of threads essentially spinning and or adaptively blocking on that lock.
So you can imagine if you have, you know, eight threads or multiple cores,
as you continue scaling that, you end up introducing a lot more cache coherency traffic.
And what ends up happening there is you end up degrading application performance,
and in some cases degrading system-wide performance as that load increases, right,
as that lock becomes overloaded and you have contention.
But you have locks out there, for example,
which will ensure that the system is just saturated
in that situation rather than degrading the performance.
So these have been around for a very long time
since the early 90s, but
they haven't been put into active use outside of Java until recently.
So Linux recently adopted a lock known as the MCS lock.
And then you also have alternatives for condition variables, read write locks.
You have read write locks that provide perfect scalability.
And then you have all sorts of
ways to handle biasing for these locks.
Most locks that a practitioner deals with today tend to be write-biased
locks, meaning those locks in the presence of contention will prefer writers over readers.
There are implications to this for a lot of applications.
So you may want fairness.
You may want to ensure that, you know,
reads and incoming lock acquisition attempts
are serviced in a fair manner,
maybe in the order in which those requests are received,
or fair with respect to readers versus writers.
So the beauty of a lot of these primitives is they're plug-and-play.
You can replace an existing lock and you'll get performance benefits for free.
So it's very practical.
While still having the same semantics.
If you get your shared writer mutex lock,
then you know you're safe to write to this memory or whatever.
Yes, yes.
There may be tradeoffs with some of them with regards to fast path latency or memory usage.
But then you do have also alternatives that, you know,
where there are essentially minimal tradeoffs in those regards.
And then, you know, other, I'd say last two things that I touched on here, one is the topic
of memory management.
So when building a concurrent and or parallel system
involving dynamic data structures, a very common thing is to decouple the liveness and reachability of objects
and rely on things like reference counting in order to reduce contention.
However, reference counting itself tends to be extremely error-prone.
It's very complicated and there's a cost on the fast path
because you're implementing all these reference counts. So even a
read-mostly workload, which is great for a parallel system, ends up becoming a
write-mostly workload if you're constantly incrementing and decrementing reference counts.
Interesting.
And if you're utilizing more advanced data structures like lock-free data structures,
traditional reference counting is not a good fit.
So for a long time now, there's been a lot of work around safe memory reclamation schemes that are essentially alternatives to reference counting
and that also work for things like dynamic lock-free data structures.
So probably the most well-known implementations there are read-copy-update, or RCU, from Paul McKenney, and EBR from Keir Fraser.
EBR is used in Concurrency Kit and Rust. So the big thing with these techniques
is for read mostly workloads.
They perform extremely well, they scale.
And then the thing that I really like
and I think is underappreciated
is they're actually very simple.
So in a lot of the stacks that I've worked on, we heavily make use of these, and the whole
job of memory management is handled by the core subsystems of the software platforms that I'm
working on rather than the developer. So for example, at Backtrace, we have a columnar database. If a developer is adding something like an HTTP endpoint, they don't have to worry about anything around memory management with respect to accessing concurrent data structures.
They can just access any pointer, dereference any pointer, and, you know, all the reference counting is managed by the core
engine. And so I do highly recommend folks take a look at those, not just for performance but
also for simplicity. You don't have the same issues as reference counting.
Do you want to tell us a little bit more about the Concurrency Kit project that we mentioned in your bio?
Yeah, sure.
So that project was kicked off several years ago.
I was at the GW High Performance Computing Laboratory.
And it's designed to allow for the design and implementation of high performance concurrent systems.
It is released under the BSD license.
And the purpose of that project, beyond enabling practitioners to make use of these techniques,
is to support freestanding environments.
So, for example, it is actually part of the FreeBSD kernel at this point, but also enable practitioners and academics to collaborate.
What I found during my time at the High Performance Computing Laboratory is you have all this great literature on multi-core synchronization,
but A, you don't actually have production quality implementations for a lot of these algorithms,
or B, it turns out a lot of these algorithms just don't make sense in the real world.
So part of the goal of that project is also to serve as sort of a central repository for these algorithms and the reference implementations.
So you have a whole bunch of lock implementations, read-write locks, advanced data structures, et cetera.
So just concurrencykit.org is the homepage of the project.
And, again, it's liberally licensed.
It sounds kind of like a proving ground, I guess, for these techniques.
That's how it started out in the first month.
And then what ended up happening is I needed concurrency kit for every single job that
I've had and every real world system that I've built.
And so did others.
So now we have a very active community around it and it's heavily used.
You know, you've probably all either seen an advertisement online
or received a packet or an email
that has gone through Concurrency Kit today.
Interesting.
Do you know if there's any plans,
either by you or someone else involved with Concurrency Kit,
to propose them to the standard for C++?
I personally do not have plans.
However, I do know that a lot of other folks
have been investing there.
So Paul McKenney, who invented RCU,
Maged Michael, who invented the lock-free FIFO
and hazard pointers,
as well as a whole bunch of others,
have been working hard to modernize a set of concurrency primitives available to C++.
So they have been involved in transactional memory extensions and incorporating RCU into the standard.
Interesting. Yeah, I was just thinking, if you said some of these primitives are used in the FreeBSD kernel,
I think you said FreeBSD, then it's probably not written in C++ currently.
Correct.
So concurrency kit is in C99.
However, it is used in C++ software,
so there are large portions of it that are compatible for incorporation into C++.
Okay.
Okay.
Do you have any advice for programmers who want to get working with lock-free,
multi-threaded programming?
Yeah, sure.
So you'll find some great introductory resources at ACM Queue.
So a while ago I collaborated on an issue with Paul McKenney of RCU,
Maged Michael, and Mathieu Desnoyers.
If you Google for non-blocking synchronization ACM Queue,
you'll find a great introductory guide and references to all these articles.
If you're interested in reading resources,
two great resources are Paul McKenney's book, with the very long title Is Parallel Programming Hard, And, If So, What Can You Do About It?
Something along those lines.
Great book. And another one is Nir Shavit, Maurice Herlihy, a bunch of folks who are sort of, how can I say,
the founding fathers of and/or heavily involved in advanced synchronization and multi-core systems,
wrote a book called The Art of Multiprocessor Programming, which is very accessible to practitioners.
So it covers both theory and practice.
So these would be I think these are
great starting points.
And last but not least, you also
have code out there.
So Concurrency Kit is
written to be readable. You have a lot
of comments. I do recommend checking it out.
What an interesting
concept.
Comments, yeah. And a style guide, you know, there's a style guide too. And then you
have URCU as well. So both are fairly accessible and you'll have references to papers, documentation, etc.
So a bit off topic, if you don't mind, from actual concurrent programming,
but you did just mention that there's a style guide.
And that's something that's kind of come up a little bit on our show in previous episodes.
And I'm curious if you do anything to enforce the style guide on that project?
So in previous organizations,
we have integrated the Clang formatter in there,
and you can essentially have a pre-commit hook that rejects something that either doesn't respect the style guide
or just automatically stylizes anything prior to check-in.
At Backtrace and for Concurrency Kit, we actually adopted a variant, sort of a
mix between the FreeBSD kernel style guide and the Solaris kernel style guide.
Unfortunately, the portions we adopted from the FreeBSD kernel style guide cannot be handled by existing formatters.
So what ends up happening is we have everyone is a style Nazi.
So in any code review, we'll essentially reject the review if style does not pass. How much flexibility do you leave for pushing the rules of the style guide
so that the code is more readable in a particular situation?
Yeah, that is on a case-by-case basis.
So there are cases where the style guide does not make sense,
and these are pointed out during the review.
Interesting.
Yeah, I was curious about that.
Thank you.
I was looking at the Backtrace blog before we got you on,
and I saw that you spoke recently as a keynoter, I think, at GlueCon.
And I had not heard of that conference.
I was wondering if you could tell us a little bit about it.
Sure.
So I wasn't a keynoter there.
No.
I talked about debugger technology,
so debugger internals, how they work.
GlueCon was a great conference.
It's in Broomfield, Colorado,
a beautiful little town.
That's a few miles from you, right?
I have no idea what you're talking about.
I was pleasantly surprised.
We had high-quality speakers there.
There wasn't one theme to it, I would say.
It was focused on technologies involved in microservices,
but it really had a nice balance between operations, software development,
et cetera.
So you had something like my talk discussing assembly and low-level debugger internals
to a talk revolving around deployment infrastructure to talks on best practices for microservices in Java.
So there's a lot of variety there.
I think it's a great conference if you want to exchange ideas
and or be exposed to different disciplines within our field.
I will make a note of it.
It looks like next year it may conflict with C++Now.
I'm not positive, but...
Yeah, they were back-to-back.
So I was at C++Now, and then the following week it was GlueCon.
Okay.
Yes, I think that's your...
Sorry, what was that, Jason?
It'll be the exact same situation next year.
It's like the day after.
So I know you have a hard stop in a few minutes,
but before we let you go, any updates on
Backtrace that you wanted to share with us?
Yeah, a whole bunch.
So for those that aren't familiar,
Backtrace is a plug-and-play
crash reporting and crash management platform for natively compiled software, such as C++.
So a major focus for us lately, we started off in the Unix server world.
Now we've made a lot of investments for software running on Windows, macOS, mobile. Backtrace is now running on
things ranging from tablets to video games. So we've done a lot of great work on easing
integration there, and we've made a lot of investments to improve support for popular
frameworks such as the Chromium Embedded Framework,
eventing libraries used both in server-side and client-side,
game engines, et cetera.
Recently, we exposed a new facet of the product called the Query Builder,
which takes full advantage of an embedded columnar engine
that we've built for the purposes of better understanding the impact of crashes for triage and prioritization,
as well as extracting patterns for better root cause investigation.
So you can do things like, hey, give me a linear histogram of process uptime for all unique crashes.
Give me the distribution of faulting memory addresses or sum by user impact or concurrent events,
and let me triage and prioritize according to that.
So it's pretty interesting stuff.
We will be shooting out a blog post discussing the internals of our embedded
columnar database and why we built one to begin with as well.
Other than that, if you're writing C++, I do highly recommend checking this out.
It should be easy to integrate.
In my previous life, I was a mobile app developer, and I worked on iOS and Android.
I know we looked at, I think it was Raygun.io, which was a similar crash reporting service,
but at least at the time, I don't think they had any support for C++
so you're now there to fill that need in the mobile space?
Correct.
So we have first class support for C++
on pretty much any target.
That's great.
Okay, anything else you want to share with us before we let you go?
Where can people find you online?
Oh sure.
I have a horrible Twitter handle.
It's @0xF390.
Or just search my name and you'll find me.
And my website is repnop.org.
And if you want to learn more about Backtrace, it's backtrace.io.
You'll find my contact information there as well.
Okay, great.
It's been great having you on today, Samy.
Yeah, thank you all for your time.
Thanks for joining us.
Take care.
Thanks so much for listening in as we chat about C++.
I'd love to hear what you think of the podcast.
Please let me know if we're discussing
the stuff you're interested in,
or if you have a suggestion for a topic.
I'd love to hear about that too.
You can email all your thoughts to feedback@cppcast.com.
I'd also appreciate it if you like
CppCast on Facebook
and follow CppCast on Twitter.
You can also follow me at @robwirving and Jason at @lefticus on Twitter. Thank you.