CppCast - PLF List
Episode Date: October 5, 2017

Rob and Jason are joined by Matt Bentley to talk about his work on plf::list and discuss some updates from the SG14 Working Group.

Matt Bentley was born in 1978 and never recovered from the experience. He started programming in 1986, completing a BSc in Computer Science in 1999, before spending three years working for a legal publishing firm, getting chronic fatigue syndrome, quitting, building a music studio, recovering, getting interested in programming again, building a game engine, and stumbling across some generalized solutions to some old problems.

News: From Algorithms to Coroutines in C++ · A Beginner's Guide to CppCon 2017 · CppCon 2017 videos online
Matt Bentley: @xolvenz · Matt Bentley on GitHub
Links: PLF C++ Library
Sponsors: Backtrace · JetBrains
Hosts: @robwirving · @lefticus
Transcript
This episode of CppCast is sponsored by Backtrace, the turnkey debugging platform that helps you spend less time debugging and more time building.
Get to the root cause quickly with detailed information at your fingertips.
Start your free trial at backtrace.io/cppcast.
And by JetBrains, maker of intelligent development tools to simplify your challenging tasks and automate the routine ones.
JetBrains is offering a 25% discount for an
individual license on the C++ tool of your choice: CLion, ReSharper C++, or AppCode.
Use the coupon code JetBrainsForCppCast during checkout at JetBrains.com.
Episode 121 of CppCast with guest Matt Bentley, recorded October 4th, 2017. In this episode, we talk about recorded sessions from CppCon 2017.
Then we talked to Matt Bentley.
Matt talks to us about making a better linked list and shares some updates from SG14. Welcome to episode 121 of CppCast, the only podcast for C++ developers by C++ developers.
I'm your host, Rob Irving, joined by my co-host, Jason Turner.
Jason, welcome back from CppCon.
Hey, thanks, Rob.
It's a good trip, good conference.
Everyone should go.
Absolutely.
I hope I can make it there next year.
I'm really sad I had to miss it this year.
And you're going to be going to even more conferences soon, right?
Yeah, I'll be going to Pacific++ and meeting C++.
Okay, so just a minor programming note,
because Jason's going to be away at these conferences,
we might be pre-recording some episodes.
We might miss a week or two here and there.
I might try to bring on kind of a guest co-host with me
while he's out at those conferences.
But that's somewhat up in the air right now, so just
be aware that
our schedule
might be a little hectic over the next few weeks.
Okay? Yeah.
Well, at the top of our episode, I'd like to
read a piece of feedback.
This week, Jason, we got these
tweets from HoodieD,
and he was commenting
on how he just found a new Rust language podcast
and then mentioned how he had just been listening to us on CppCast
without realizing that this Rust language podcast is hosted by your cousin.
Yes.
Yeah, that's pretty cool.
And, you know, people might just think it's some weird coincidence
that two different Turners host, um, C++... excuse me, podcasts for languages, but yes, I am related to
Jon. Yeah, and he's been a Rust advocate for a while now, and I guess he just started the podcast
recently, because it looks like it was just the second episode. Yep, episode number two aired today.
Pretty cool. Okay, well, we'd love to hear your thoughts about the show.
You can always reach out to us on Facebook, Twitter, or email us at feedback@cppcast.com.
And don't forget to leave us a review on iTunes.
Joining us again today is Matt Bentley
Matt was born in 1978 and never recovered from the experience
He started programming in 1986, completing a bachelor's in computer science in 1999,
before spending three years working for a legal publishing firm, getting chronic fatigue
syndrome, quitting, building a music studio, recovering, getting interested in programming
again, building a game engine, and stumbling across some generalized solutions to some
old problems.
Matt, welcome to the show.
G'day.
How are you going?
Doing great. You know, Matt, I think in another bio of yours that I saw, you comment that your keyboard also doubles as a self-defense
weapon. Oh yeah, yeah, the IBM Model M, that's fantastic. It's got this huge metal plate in the
back of it, which means that it actually has a lot of weight, and the plastic is not the soft kind of
plastic that they use for keyboards nowadays. I think they've actually done a test, like somebody's
done a test online with a hardish watermelon, which is meant to be, I don't know how accurate
this is meant to be, about the density of a human skull or thereabouts. You're actually able to, you know, smash a watermelon
with one of these things, like smash it in half. So, you know, if I ever come across a burglar, and
they happen to have a skull with the approximate density of a watermelon, then I'm totally saved.
plus the extension cord on it is really super long, so you could probably defend yourself without having to unplug it from the computer.
Well, that's definitely handy.
That would have been one of my biggest concerns, actually,
whether or not I would tear the cord out or something.
Did you see that they're actually...
You don't want to damage the keyboard
while you're defending yourself.
Right, yeah.
Priorities.
Definitely.
Did you see that a Kickstarter or something, I think, is
actually, uh, recreating the Model M? No, but it doesn't surprise me. I think there's
been several sort of Model M-esque keyboards that have come out from various manufacturers to
meet that need. But, I mean, that sounds like they're going a bit further, I guess. Yeah, this guy's
trying to recreate it, like, using the original patents for the switches, and going back and
using the exact right metal and getting the right texture, and lots of traveling back and forth to
Asia to get it all sorted out. I don't remember where it last left off; I saw it a couple of weeks ago.
That's crazy.
I actually fixed my one recently.
Basically the space bar stopped working properly.
And eventually I discovered that I had to take the entire thing apart and do what's called a bolt mod,
which takes like several days, and do all of that and order in about $50 worth of bolts and nuts
and this sort of thing, just to get it apart and be able to put it back together again to fix this
one thing. It's absolutely insane. But luckily I had a spare Model M that I'd found lying in a gutter somewhere at some
point, that I was able to grab some parts out of. Because, you know, people are heathens, they just
throw out this stuff like it's not gold. Terrible. Absolutely. I mean, what are we coming to? That's it. Right.
Okay, well, we've got a couple of news articles to discuss, Matt,
and then we'll start talking to you more about your upcoming talk at Pacific++ and some other things, okay?
Sweet.
Okay, so this first article is from a guest of the show,
former guest of the show, Kenny Kerr,
who wrote an article in MSDN Magazine,
"From Algorithms to Coroutines in C++." And it seems like it's a really good introduction to
coroutines. It's a very long article. I'll admit I didn't get through the whole thing,
but it looks like it's a really great intro to coroutines and writing your own generators.
Yeah. Well, and I would say, you know,
if you cheat and skip to the bottom to the very last example,
you kind of go, ah.
Although you do have to, like, it does rely on stuff
from many pages before that.
You want to spoil it for us?
No.
No, it's, you know, definitely, you definitely
have to see all the pieces, but the very
bottom of it is making a limitless generator, which I thought was an
interesting idea. It's all about making generators. So it's all
about making something that's easier to use than IOTA
basically. Nice. Okay.
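For anyone who wants the flavor without reading the whole article: it walks through writing your own generator type on top of the raw coroutine machinery, and the end result behaves roughly like the sketch below. This uses C++23's std::generator, which didn't exist at the time, so treat it as an approximation of the idea (an endless, lazily produced sequence that's easier to use than iota) rather than Kenny Kerr's actual code.

```cpp
#include <generator>   // C++23
#include <print>       // C++23

// An endless integer sequence starting at 'first' -- "limitless", unlike
// std::iota, which fills an existing, fixed-size range.
std::generator<int> sequence(int first)
{
    for (int value = first;; ++value)
        co_yield value;
}

int main()
{
    int taken = 0;
    for (int i : sequence(10))
    {
        std::println("{}", i);
        if (++taken == 5)   // consume only as many values as we need
            break;
    }
}
```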
And then the next article we have,
and I'm sure we're going to see a whole bunch of these,
but this is a CppCon 2017 trip report.
But this one has a bit of a different perspective.
It's a beginner's guide to CppCon 2017,
and it's from this programmer who wasn't originally planning to actually go to the conference
but heard about the free lightning talks and the last day of the conference,
which was free, and decided to attend that
and was just really happy about how the talk went.
And this was a female programmer
and she talked about how CppCon was a very welcoming environment for her.
I'm glad she had a great experience with it.
Yeah, did you meet her at the conference, Jason?
I did.
I see it mentioned that she went out to dinner with a bunch of speakers.
I thought maybe you had met her.
Oh, yeah.
I went out.
She was, well, the last dinner on Friday night, yes, we were both there,
but she was like on the opposite end of a table with like 14 people there.
But I did get the chance to chat with her a little bit before that and earlier in
the week. Yeah, well, I'm glad to hear that someone so new to CppCon, and who's kind of a
self-described beginner at C++, is able to, you know, find so much great content out of the
conference. That's really, really good. Yeah, and then the next thing, also CppCon 2017 related: they're starting to upload more and more videos from CppCon 2017.
So far it looks like they have all five of the keynotes or plenaries.
And it looks like about six more talks from day one, I'm guessing.
Yeah, we're up to 12 now.
12 talks total, yeah.
So, yeah, I have a lot to watch. I've so far only watched two of the keynotes, but I need to try to catch up with this as quickly as I
can. Yeah, I'm kind of bummed that I missed the whole thing, but, again, I'm trying to catch
up with them. I watched Bjarne's one, and I'm about to get on to Titus Winters' one, but, you know, it takes a lot
of time. Yeah. Yeah, and by the time they're all up, you'll only have about 100 hours of video to watch.
Yeah, I know.
Yeah, so far I've only watched Titus's and Bjarne's talks as well. Jason, any particular
ones you think we should be watching as soon as possible? I mean, obviously, well, I gave two. You can watch them
if you want to. I was just in the middle of watching Patrice's. It was one of the ones that I missed and
really wanted to watch. Well, actually, I think he gave two, but one of his is up right now,
Patrice's, uh, "Which Machine Am I Coding To?"
Yeah.
That looked interesting.
I watched a couple of minutes of that as well.
But yeah, again, didn't have time to get all the way through it,
but I want to.
And Matt's end note, as they called it, was very good also.
I'll have to watch that one soon.
Okay.
Well, Matt, as we mentioned,
you're going to be speaking at the very first Pacific++ conference in October.
Can you give us a little preview as to what your talk is going to be about?
Yeah.
So basically, I'll go back a little bit.
Sometime like late last year, somebody got in touch with me.
I should probably find out their name since I will be mentioning it during my speech as well.
But basically, you know, last time I was on here, I was talking about Colony, which is my other container, the unordered one. And he was using, or trying to use, Colony to kind of build a vaguely contiguous linked list. Because
Colony is all segmented, rather than individual allocations, he figured that
would be a better way to go to make a linked list that performed better. And I thought, you know, that's not a bad idea. It's
quite an inefficient way of doing it, though. So I started thinking about that,
and thought about it for about three months, after which time I'd kind of come up with, in my head, what would probably be the most performant way of building some kind of
contiguous linked list, where we're not individually allocating and deallocating every single element. And I succeeded eventually. It took like six months to do it properly, because there's a
lot of ground to cover in linked lists, there's a lot of built-in functions, the splicing
and the merging and stuff.
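As a rough illustration of what "segmented rather than individual allocations" means for a linked list, here's a minimal sketch of the general shape such a container can take. This is not plf::list's actual layout (the real thing uses raw storage, per-block metadata, and so on); it just shows nodes being carved out of larger blocks so that an insertion doesn't usually hit the allocator.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical node and block types, purely for illustration.
template <typename T>
struct node
{
    node* next;
    node* prev;
    T     element;
};

template <typename T>
class node_block
{
    std::vector<node<T>> storage_;   // one allocation serves many nodes

public:
    explicit node_block(std::size_t capacity) { storage_.reserve(capacity); }

    bool full() const { return storage_.size() == storage_.capacity(); }

    // Hand out the next unused node. Because capacity was reserved up front,
    // push_back never reallocates, so pointers to earlier nodes in this block
    // stay valid. (A real container would use raw storage instead of
    // default-constructing T here.)
    node<T>* allocate_node()
    {
        storage_.push_back(node<T>{nullptr, nullptr, T{}});
        return &storage_.back();
    }
};
```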
So basically I'm going to be talking about that and where linked lists are actually useful, because there's been a lot said
about how bad linked lists are in general,
and for the most part that's true,
but there's a lot of areas
where they're actually quite useful,
and some areas where they're actually the best performing
out of all the standard containers in given scenarios,
and that actually includes standard list.
But the point of PLF list is basically to improve
on the generalized performance of standard list
and try and get it so that we're sort of expanding
the potential use cases for it.
So that's it in a nutshell, anyway. So it sounds like this list fully
implements the standard list interface, and you could see it as a drop-in
replacement for standard list? Yeah, with one exception: it doesn't do partial splices.
So you can fully splice one list into another list at any given point in the list.
What you can't do is splice part of one list into another list.
And the reason for that is basically because if you've got multiple memory blocks,
and some of the elements that you want to splice are in one memory block and
some of them are in another, you know, you can't have two linked lists both owning these memory
blocks, and you need to be able to transfer these elements without invalidating pointers.
So basically the only way you can do a proper splice in this context, if you've got a contiguous back end, is to transfer all of
the memory blocks from one list into the other one, and you can't do that if you're only splicing
some of the elements. It has to be all of them.
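To make the splicing distinction concrete, here's a sketch using the familiar std::list overloads. Going by Matt's description, plf::list supports the equivalent of the whole-list form but omits the partial form; the exact signatures in the PLF library may differ, so this is only illustrative.

```cpp
#include <iterator>
#include <list>

int main()
{
    std::list<int> a{1, 2, 3};
    std::list<int> b{10, 20, 30};

    // Whole-list splice: every node (and, in a block-based list, every memory
    // block) is handed over from b to a.
    a.splice(a.end(), b);

    std::list<int> c{100, 200, 300};

    // Partial splice: move only [begin, begin + 1) from c into a. std::list
    // can do this because each node is its own allocation; a block-based list
    // can't hand over part of a block without invalidating pointers, which is
    // why plf::list leaves this out.
    a.splice(a.end(), c, c.begin(), std::next(c.begin()));
}
```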
But other than that, yeah, it covers everything that's in standard list: merging and sorting and blah-di-blah. So why the interest in revisiting linked lists? It seems like the community
has really rallied around the idea of just using vector for almost everything these days.
Yeah, I mean, it depends what facet of the community you talk to. Sure. I mean, obviously,
if you talk to Mike Acton, you'll get a very different answer about that.
But, you know, in games development in general, yes, there is a place for vectors, but it's not generally a huge place.
Okay.
And, you know, not all but a lot of the high-performance fields are similar.
Having said that, not many of the high-performance fields are going to be using linked lists, but they might be using things like intrusive lists. But that's kind of a different use case
entirely. The main reason for revisiting it was kind of because it was there. Like, I saw that
this was the way it was, and it could be improved, so I decided to do it.
And in doing so, I have expanded the use cases. I found a way of sorting for
linked lists, or for any bidirectional-iterator container, that's a lot faster, like 81% faster, than
standard list sorting. Because one of the kind of misconceptions around linked
lists, probably not amongst too many people nowadays, but a lot of people
think, oh, they must be good for sorting, because all you have to do is rearrange
the pointers rather than reallocating the elements. But it doesn't actually work out that way. So I found a way of hacking around it so that we can essentially,
internally, use standard sort, or some other kind of quicksort algorithm, with a linked list,
or any bidirectional container, and get, you know, a significant speed-up in performance.
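Matt doesn't spell out the mechanism on the show, and this may not match what plf::list actually does internally, but one well-known way to get a quicksort-class algorithm working on a linked list is to sort an array of node pointers with std::sort and then re-link the nodes in a single pass:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical doubly linked list node, for illustration only.
struct node { node* next; node* prev; int value; };

// Sort the list starting at 'head': collect node pointers, sort the pointers
// (random access, no element copies), then rewire next/prev in sorted order.
node* sort_list(node* head)
{
    std::vector<node*> ptrs;
    for (node* n = head; n != nullptr; n = n->next)
        ptrs.push_back(n);
    if (ptrs.empty())
        return nullptr;

    std::sort(ptrs.begin(), ptrs.end(),
              [](const node* a, const node* b) { return a->value < b->value; });

    for (std::size_t i = 0; i < ptrs.size(); ++i)
    {
        ptrs[i]->prev = (i == 0) ? nullptr : ptrs[i - 1];
        ptrs[i]->next = (i + 1 == ptrs.size()) ? nullptr : ptrs[i + 1];
    }
    return ptrs.front();   // new head of the list
}
```

The elements themselves never move, so existing pointers to them stay valid; only the links change.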
But, I mean, there are some use cases, which I'll go into in my speech in a bit more detail. But
basically, if you've got an ordered scenario, so you've got a
scenario where you've got a lot of data and you're doing a lot of ordered
insertion and a lot of ordered erasure, once you get beyond a certain threshold
of modification to iteration, so essentially once you're inserting
and erasing about 1% of your elements, or more than that, for every iteration that you do, then at that point, because the insertion and erasure characteristics of
list are so good, it ends up being the fastest container for that scenario.
There's a few other places where they're quite useful as well, basically because none of the
operations reallocate. So you merge, you splice, you insert stuff, you remove stuff,
and nothing invalidates your pointers. So if you've got a highly modular or object-oriented
system that you're working with, and you're doing a lot of ordered work,
then you can save a lot of time by using a linked list of some sort, because you can just point directly at things,
rather than using a vector and then having to update indexes
or find a workaround for it, and this sort of thing.
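As a small illustration of that stability guarantee, here's std::list, which provides the same property plf::list is preserving: pointers and iterators to an element survive insertions and erasures elsewhere in the list, which a vector can't promise once it shifts or reallocates its elements.

```cpp
#include <cassert>
#include <iterator>
#include <list>

int main()
{
    std::list<int> values{1, 2, 3};

    int* p  = &values.back();              // point directly at an element
    auto it = std::prev(values.end());     // or hold an iterator to it

    values.push_front(0);                  // insertions elsewhere...
    values.insert(values.begin(), -1);
    values.erase(values.begin());          // ...and erasure of another element

    assert(*p == 3);                       // the pointer is still valid
    assert(*it == 3);                      // and so is the iterator
}
```

Doing the same with a pointer into a std::vector is not guaranteed to work once an insertion forces a reallocation.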
I'm curious...
Yeah, sorry, go ahead.
No, go ahead, finish.
I was just going to say, I have gone into,
like in my benchmarks and stuff,
I have gone into some of the workarounds for vector and deque,
such as using a vector of indexes
that points to a vector of elements,
and then you update the vector of indexes,
and you don't invalidate your indexes into the vector of elements, and this sort of thing.
So I have explored some of those workarounds. I can't say I've explored all of them, because
there's got to be a whole bunch of them that I haven't thought of. But anyway.
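Matt only names that workaround, so the following is one hedged reading of it rather than code from his benchmarks: callers hold stable indexes into an element vector that is never reordered, while a separate vector of indexes carries the ordering, so ordered insertion and erasure only touch the small index vector.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// A sketch of the "vector of indexes that points to a vector of elements" idea.
template <typename T>
struct indexed_vector
{
    std::vector<T>           elements;  // element storage; slots never reused here
    std::vector<std::size_t> order;     // iteration order, as indexes into 'elements'

    std::size_t push_back(T value)      // returns a stable handle to the element
    {
        elements.push_back(std::move(value));
        order.push_back(elements.size() - 1);
        return elements.size() - 1;
    }

    // Ordered insertion/erasure shuffles small integers, not the elements
    // themselves, so previously handed-out handles remain meaningful even if
    // 'elements' later reallocates (indexes survive reallocation; pointers don't).
    void insert_at(std::size_t position, std::size_t handle)
    {
        order.insert(order.begin() + position, handle);
    }

    void erase_at(std::size_t position)
    {
        order.erase(order.begin() + position);   // the element slot just goes unused
    }

    T&       get(std::size_t handle)       { return elements[handle]; }
    const T& get(std::size_t handle) const { return elements[handle]; }
};
```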
I was curious if you've noticed any difference in how the container performs,
because I'm thinking about the cost of moving and copying objects around,
like you were saying.
So it seems like the size of the object and whether or not it is trivially
copyable or trivially movable would have some bearing on vector versus your
list versus regular list performance.
Sure. So your question is essentially, right, yeah, whether in those sort of highly ordered, high-modification scenarios,
changing the type from a smaller type to a larger type, or to something that's not trivially copyable, makes a difference?
Yeah, certainly that will make a difference, probably in list's favor more than anything.
Okay. I've been testing with just straight, you know, either scalar types or small structs, where,
you know, it's trivially copyable and that sort of thing. Okay. So in that context, you would think that vector would have more of an advantage,
because if it's trivially copyable, then there's less to do when it reallocates.
But yeah, testing, at least my testing shows that that's not the case.
I have had some third-party benchmarks come back to me from one developer that showed some different results
for PLF list, but some of it was, I don't want to say flaky, but some of it seemed quite
inconsistent, and the guy didn't get back to me about, you know, what compiler he was using, how he
was doing this stuff. So I haven't had a chance to process that as yet. But at least in terms of
the benchmarks that I've been running so far, it's pretty positive. So you may have just
answered a question I had. Did you already open-source your linked list as part of your PLF library?
Yeah.
Yeah, it's all open source.
Do you have any plans to try to get that into the standard as part of the SG14 proposals?
I have had a thought around that. I mean, basically, it certainly wouldn't go in as,
you know, a proposal for a different kind of container. The only proposal that
would go ahead would be a proposal to make partial splicing optional for
standard list, because that's the only thing you need to change
in order to make something like PLF list be a viable standard list implementation.
But of course the problem with that is you run into the possibility of breaking old code
and it comes down to people measuring
up whether the performance advantage is worth breaking this small thing given that people
aren't using linked lists much anymore because of the performance problems.
So I don't know.
I'm kind of undecided about that, whether I should even bother. Because, I don't know, if you have put through a proposal at some point, you'll know that it's quite a wearying process of back and forth, and getting feedback, and trying to figure out what it means, and whether people actually understood what you were saying, or all this sort of jazz.
Right.
Yeah, I might talk to SG14 about that and see whether
they think it's a good idea to put forward. Because the advantages, as far as I can benchmark so far,
and again, you know, I probably want to have some third-party results come back before I go too
far with it, but at least under my benchmarks: in terms of single insertion, it's about 333% faster than standard list,
and that's simply because you're allocating in chunks rather than individually.
Erasure is about 81% faster, and iteration is about 16% faster,
and that's just because you've got a bit more memory contiguousness. It
doesn't have as much of an effect as you would think it would, and that's simply because
you're just following pointers all the time, which gets in the way of prefetching and a lot of
other stuff. So I have a question that's maybe a tad, like, I'm ignorant on the exact issues, but I know
that when we talk to people about game development, they talk about things like memory fragmentation.
And I'm imagining your list being allocated in chunks because you want that cache friendliness
and the contiguousness. Yeah. And then if you have to insert something in
the middle of one of these chunks, do you end up with your objects being now fragmented
throughout your chunks, I guess? Right. What you're saying, I think, is,
you know, do you end up with a non-linear progression through memory as you're following
the next pointers? It seems like you'd have to, right? Yeah, yeah, you do. Okay. So part of the design of plf::list is, you know, you can't mitigate
for that, because you can't break up these memory blocks. You know, if you've allocated these memory chunks
and you've filled them up with elements, you can't, you know, then go and insert into the middle of that and then reallocate
everything, because one of the properties that we like about linked lists is that the pointers
stay valid.
So you can't have any reallocation of elements.
So what you end up having to do is if everything's full up, you just insert to the back and then
you link to that.
So in that case you do end up with sort of a bit more of a stride happening in terms
of your memory accesses. But the way that I've tried to mitigate for that is by
basically trying to find the optimal solution for refilling erased nodes. So you erase an element,
and then that node becomes available for reuse.
So part of all the work that I did around PLF list
was actually spending about two months
trying to find the optimal way
from any given insertion position to find the nearest
free erased node. And if you do that, then, assuming you're doing a certain amount of erasure
and a certain amount of insertion, you can end up with your strides between the different elements being much lower.
I tried a whole bunch of different methods and basically did it all by benchmarking and
the end result is something that I'm pretty happy with.
Probably beyond the scope of our talk to go into too much detail on it, but basically it
ends up having per-memory-chunk free lists for the erased nodes, and then a whole heap of other
garbage on top of that. Lots of fun.
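As a rough sketch of that "per-memory-chunk free lists for the erased nodes" idea (again, not plf::list's actual implementation), each chunk can thread a free list through its own erased slots, so an insertion can refill a hole in a nearby chunk before falling back to appending at the back block:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical slot and chunk types, for illustration only.
struct slot
{
    slot* next;        // list links, used while the slot holds a live element
    slot* prev;
    int   value;
    slot* next_free;   // free-list link, only meaningful while erased (a real
                       // implementation would reuse the node's own pointers
                       // rather than spend an extra field)
};

struct chunk
{
    std::vector<slot> slots;                // contiguous storage for this chunk
    slot*             free_head = nullptr;  // head of this chunk's erased slots

    explicit chunk(std::size_t capacity) { slots.reserve(capacity); }

    slot* allocate()                        // append a brand-new slot if there's room
    {
        if (slots.size() == slots.capacity())
            return nullptr;                 // chunk is full; caller moves on
        slots.push_back(slot{});
        return &slots.back();
    }

    void release(slot* s)                   // called when an element is erased
    {
        s->next_free = free_head;
        free_head = s;
    }

    slot* reuse()                           // called on insert if a hole exists here
    {
        if (free_head == nullptr)
            return nullptr;
        slot* s   = free_head;
        free_head = s->next_free;
        return s;
    }
};
```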
Yeah, it sounds a lot like the kinds of algorithms
that you had to come up with to decide where to fill in the next free chunk. It sounds
a lot like you implemented basically
an in-memory file system, in a way:
looking for the next
free node, figuring out where the
next chunk of data should go,
linking things together like that.
I haven't really
worked with file systems in a great way,
so I can't really
answer that with any clarity, but probably.
Okay, so I wanted to interrupt this discussion for just a moment to bring you a
word from our sponsors. Backtrace is a debugging platform that improves software quality, reliability,
and support by bringing deep introspection and automation throughout the software error lifecycle. Spend less time debugging and reduce your mean time to resolution
by using the first and only platform to combine symbolic debugging, error aggregation, and state
analysis. At the time of error, Backtrace jumps into action, capturing detailed dumps of application
and environmental state. Backtrace then performs automated analysis on process memory and
executable code to classify
errors and highlight important signals such as heap corruption, malware, and much more.
This data is aggregated and archived in a centralized object store, providing your team
a single system to investigate errors across your environments.
Join industry leaders like Fastly, Message Systems, and AppNexus that use Backtrace to
modernize their debugging infrastructure.
It's free to try, minutes to set up, fully featured with no commitment necessary.
Check them out at backtrace.io slash cppcast.
So when we first had you on about 60 episodes ago, we talked about SG14, which is the game
development and high-performance working group, and also a container proposal you worked on, plf::colony.
Has there been any progress or any big updates with your proposals?
Yeah.
I mean, the container itself has progressed quite a bit.
It now supports splicing as well, and has a proper sort algorithm now.
It's gone through quite a few revisions.
I think the biggest problem with it is partially the name
because people just seem to get hung up on that.
I don't really know why,
and that's probably because I'm too close to it
to really see the wood for the trees.
But yeah, people kind of look at the name and for some reason kind of go,
oh, that's weird, I don't understand that.
And then they try and describe the whole thing in terms of other containers
and it doesn't really work out because it's not similar enough to other containers
to call them different names and it kind of goes on and on.
But yeah, I've had some good feedback and there's been a lot of revisions based on that.
I do find it a bit wearying because you kind of, people ask for stuff and then you put more stuff
in, and then it goes for the next round of feedback, and people get confused because there's too much stuff, but there's not the stuff that they
specifically want, and then they ask for that, and so you end up putting that
earlier in the document, and, you know, it's a bit of a ridiculous process.
but I think the biggest problem is that the container itself is fairly complex and not easily described in a couple of sentences.
So yeah, we'll see how things go.
Any other news updates relating to SG14?
Not in a huge way, just sort of things ticking along and some proposals popping up and other
ones sort of dying out a bit.
I know that there is ongoing feedback for some of the other ideas
that are coming through.
Unfortunately, because of the timeframe,
the monthly kind of online talks for SG14 are all at four o'clock in the morning my time, which means,
you know, I'm not really copacetic at that point. So I haven't really grabbed one
as yet, but I mean to do so this month if I can. There was talk of building an SG14 library at one point. I kind of poo-pooed
that idea. I wasn't so keen on it, because I think it distributes the focus of the group too
much, and you get a lot of kind of noise coming out from that sort of thing. But
I understand where people were coming
from. And it's still something that may happen at some point. I think part of it comes out
of frustration from people, you know, kind of putting these proposals forward and not seeing
much progress, and wanting to get them out there in some way, shape, and form, which, you know, I think is a good thing.
So it might happen at some point.
Sounds like an interesting scenario.
If there was a semi-official library
that came out of one of the study groups
from the C++ Standards Committee,
that people would take it seriously,
like it's part of the
standardization, and maybe that's not necessarily what you want up front, or maybe it is what you
want. I don't know, I assume it'd be a little bit confusing at that point. Yeah, I have no idea,
honestly, but it seems like it would have the potential to. I was just imagining maybe that's
some of the discussion that you all have had within the group. Yeah, I think there definitely wasn't an idea to make it more
like an official thing. I think it would have been more like EA's Standard Template Library
implementation sort of thing, where it's just, you know, something that people can use if they want to,
and people know that it has a particular focus,
which in this case is around performance specifically.
Okay.
Yeah.
The next standards meeting, I think, is going to be in Albuquerque in a month.
Do you have any plans to try to make it to that?
No.
I mean, I'm very much going to be caught up with Pacific++, and then after that I'm just
going to crash and burn for a while. Because, yeah, I've kind of been going hard on PLF
since the beginning of the year, and, you know, initially I wasn't sure if it could actually be done well, or if I could do it well,
but I managed to do it well. I'm really happy about that. And once the conference is over, it'll
be a good opportunity to just kind of take a rest from that. But, yeah, maybe next year
I might be able to make it to one of these things. Patrice Roy and...
Oh, gosh.
I've forgotten his name off the top of my head,
and that's a really bad thing.
Guy Davidson?
There's Sean...
Oh, okay.
Yeah, yeah.
Guy Davidson and also Sean...
Middleditch?
Yeah. Yeah.
Okay.
All of them have at various points offered to help out with presenting the proposal at the various places.
So I've been quite grateful for that.
Yeah, but it is problematic not being there to actually be able to explain stuff to people,
given that some of the knowledge is kind of fairly deep into it
as to why this, that, and the other.
So I think it is suffering a little because of that.
So if I have the opportunity to do so, I'll definitely do that.
So it sounds like, though, from the way you're
talking, that these SG14 meetings don't happen like the C++ standards meetings. You said there's
like an online thing that happens, and I know they just had an SG14 meeting at CppCon. Yeah, that pretty much seems to be the way it works. There's
an online kind of group phone call, chat, etc., and that happens every month.
Yeah, I'm not actually that familiar myself with how the other standards groups kind of work things out.
Okay. Yeah.
So, what, in your opinion, is the next major thing that SG14 really needs to tackle, to
be able to get pushed into the committee, to address the needs of the game programming and
high-performance community? I'd really like to see some progress on some kind of flat map.
There's been multiple proposals for that,
one from Sean Middleditch and at least two others.
And I can't even remember whether one of those is inside or outside SG14 or
whatever, but it seems like that's definitely a big thing that needs to happen, like basically
a map whose performance isn't bad for the majority of cases that we're using them for.
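For readers who haven't met the idea: a flat map is essentially a sorted, contiguous array of key/value pairs with binary-search lookup, trading per-node allocations and pointer chasing for cache-friendly storage. A bare-bones sketch, which is not any of the actual proposals (those cover the full associative-container interface), looks something like this:

```cpp
#include <algorithm>
#include <optional>
#include <utility>
#include <vector>

template <typename Key, typename Value>
class flat_map_sketch
{
    std::vector<std::pair<Key, Value>> data_;   // kept sorted by key

    static bool key_less(const std::pair<Key, Value>& entry, const Key& k)
    {
        return entry.first < k;
    }

public:
    void insert_or_assign(const Key& key, Value value)
    {
        auto it = std::lower_bound(data_.begin(), data_.end(), key, key_less);
        if (it != data_.end() && it->first == key)
            it->second = std::move(value);              // overwrite existing key
        else
            data_.insert(it, {key, std::move(value)});  // O(n) shift, but contiguous
    }

    std::optional<Value> find(const Key& key) const
    {
        auto it = std::lower_bound(data_.begin(), data_.end(), key, key_less);
        if (it != data_.end() && it->first == key)
            return it->second;
        return std::nullopt;
    }
};
```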
And there really needs to be some kind of consolidation there
in terms of looking at the various different proposals
and kind of, I guess, people coming together
and settling on, you know, what is the best way to do it.
And I don't know if that's really happening at the moment,
but, you know, there's the potential there
for that to just kind of die off due to there
being too much diversification in terms of needs, but I guess that's always the case
in terms of proposals.
There's always some people who need something different, particularly when it comes to containers.
At this point, I should note that I'm not even necessarily
in favor of standardizing everything.
Last time I talked, on CppChat with Jon Kalb,
I think he was talking about how, I think it was in Go,
you know, they basically have the core language, and then they just have
lots of different libraries, and there's not so much of this attempt to standardize
everything. And in a way, I feel that could be more healthy. Because by the time you've gotten something standardized,
you know, it takes such a long time,
and then it takes even longer for it to come into the language.
Sure.
You know, to actually get implemented in compilers.
And by that time, you know, you can have five to ten years have passed,
and the CPUs are all different,
and the way that we're doing stuff is different.
And, you know, it's a bit
ridiculous. And I've seen it happen with Colony to an extent. Initially, when I was
programming it, I was doing it all on a Core 2, so a lot of my assumptions were based on
that processor, its performance, this sort of thing.
But more recently I finally got the paper for the jump counting skip field pattern,
which is kind of one of the core components.
Got that paper published in a relatively small journal, I think under Springer.
And more recently I've been trying to get the full paper published, because there's quite a bit more; I had to truncate it a bit.
Anyway, what I ended up doing for the full paper was including the benchmarks
for both Core 2 processors and Haswell.
And I was amazed at how much of a difference there was in terms of the
characteristics. Like, Core 2 has a really small branch decision history table, so Boolean skip
fields work really poorly on it, like really, really poorly. So the jump-counting skip field always has an
iteration advantage there. But when you go on to Haswell,
it's got, I'm not sure of the exact number, but from the benchmarks it looks like a branch decision
history table that can store about 2,500 branch decisions. Wow. So until you get up to beyond
2,500 elements in your container, the Boolean skip field works just as well as a
jump-counting skip field, and only after that point do you get the performance advantage from,
you know, a jump-counting skip field.
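Very roughly, and leaving out all the bookkeeping the published jump-counting pattern does on erasure and reinsertion, the iteration difference being benchmarked looks like this: the Boolean version branches on every slot it passes over, while the jump-counting version hops an entire erased run with arithmetic.

```cpp
#include <cstddef>
#include <vector>

// Boolean skip field: one flag per slot; how fast this runs depends on how
// well the per-slot branch predicts, hence the branch-history-table effect.
std::size_t next_bool(const std::vector<bool>& skipped, std::size_t i)
{
    do {
        ++i;
    } while (i < skipped.size() && skipped[i]);
    return i;
}

// Simplified jump-counting skip field: skipfield[i] == 0 for an occupied slot;
// the first entry of a run of erased slots stores the run's length, so
// iteration can jump the whole run without branching per erased slot.
std::size_t next_jump(const std::vector<std::size_t>& skipfield, std::size_t i)
{
    ++i;
    if (i < skipfield.size())
        i += skipfield[i];   // 0 if occupied, run length if erased
    return i;
}
```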
So things can change so much in ten years, which is why, you know,
I'm not necessarily in with that crowd sort of thing,
but I have a lot of sympathy for kind of
the people who are talking about more data-driven design
and this sort of thing, because it's really hard to generalize.
And it's even hard to generalize within the scope of 10 years.
So if we're standardizing stuff now, we're trying to standardize it,
you know, we've already seen what's happened
with linked lists over,
albeit over the course of like 20 years or so.
But, you know, what are things going to be like in 10 years?
We don't specifically know.
That's the problem.
All right.
End rant.
So if you're not necessarily trying to get some of your containers
to become part of the standard,
what do you think are some of the major things SG14 is working on
that they are trying to get into the standard?
Sorry, I missed that.
You're saying you're not necessarily trying to get like Colony
or PLF list into the standard.
I'm just wondering, what do you see SG14's goals as with standardization?
Yeah.
I mean, those are the goals and those are my goals at present.
I guess I'm just pointing out the problematic aspects of standardizing everything.
I think it's good to have that stuff in there if you can,
but it's just that the process of standardization
leaves things fairly inflexible to change long-term.
So there are a lot of things that people use on a very frequent basis,
like what's the one that Guy Davidson is working on?
I'm trying to get it into my head.
Like a loop of elements.
Feeling really dumb at the moment.
Didn't have a lot of sleep last night.
Oh, a ring buffer?
Yes, a ring.
Man, where am I? Yes, we talked to Guy about that, I think, even. Yeah. Right. Yeah, so, you know, that's
something that people in embedded use as well. So it's sort of one of these oversights, as to
why isn't this in there, when there are these other things that are, you know, not actually used as much, in there.
So it does make sense to have stuff in there and, you know, if you can,
but I'm also sort of in favor of saying, well,
maybe not everything needs to be standardized.
Maybe, you know, we can actually say, let's do things this way, and
somehow provide a more flexible way for people to be able to get libraries and use them without
some of the difficulties that come with that. And I think, as far as I understand it, that's part of what Modules is trying to address.
Is that what you're understanding?
I can say I am definitely not an expert on Modules.
Maybe we'll leave that then.
Yeah.
Cool.
Well, anything else you're looking forward to
with the upcoming Pacific++ conference?
Yeah, Akaroa. Akaroa is a little French town, well, was French, out on the
peninsula, about a few hours' drive from Christchurch. It's the only place in New
Zealand that was colonized by the French, and the English had to
work really hard to make sure that the French didn't actually claim it for
their country. So they found out in advance, but by only a small amount, that
the French were planning to land there and put their flag down there, and so
they raced ahead and, you know, got there in a few days, and I think they
got there maybe only a day before the French arrived, and put the English flag in there. But
it's this beautiful, quaint little French town, kind of situated in between big mountains and
next to the sea and all this sort of thing. There's whale watching, that sort of thing. So
that's going to be pretty cool. I haven't been there for about 10 years, so I'm quite looking forward
to that. And that's where the conference is being held? No, not at all. That's just where I'm going
after. But, no, I'm looking forward to the conference itself, obviously. And
Christchurch is a lovely town, you know, earthquakes aside, it's a lovely town.
It's been quite a while since we've had an earthquake there,
so, you know, I think you're fine.
If you're worrying about it, don't worry about it.
I have not been worried about it.
Sweet.
But, yeah, it's just a lovely place to be in general.
And, yeah, it'll just be good to kind of, well, see yourself
and see all the other people who are more in-depth in the C++ community
and kind of hang out a bit more.
It's fairly auspicious, in a sense,
being the first kind of Australasian C++ conference. So I kind of hope
it goes well, you know, for Phil's sake. He's the guy who's organizing the whole thing, because,
yeah, he's really put his own butt on the line there, in terms of, you know, doing the funding
side of it and all that sort of thing. So if it does well, then hopefully, you know, next time he can get more sponsorship and do a bigger
event, and that sort of thing would be good. And I'm definitely really looking forward to the
opportunity to meet people that I've only chatted with online, both there and at Meeting C++.
And, of course, New Zealand is legendary for its beauty,
so looking forward to that too.
Yeah.
Yeah, I think you were saying you've got a week before or afterwards
to do some sightseeing?
Yeah, we'll be doing some touring around the island,
so we should be able to see a good bit
of it, I think. Yeah, I mean, you kind of need a decent month for the South Island to really go
everywhere, but, yeah, regardless, whatever amount of time you have, you'll enjoy it. So that's a
good plan, right? Okay, well, thank you so much for coming on again today, Matt. Thank you. Yeah, thanks
for joining us. Thanks so much for listening in as we chat about C++. I'd love to hear what you
think of the podcast. Please let me know if we're discussing the stuff you're interested in, or if
you have a suggestion for a topic, I'd love to hear about that too. You can email all your thoughts to feedback@cppcast.com. I'd also appreciate it if you like CppCast on Facebook and follow CppCast on Twitter.
You can also follow me at @robwirving and Jason at @lefticus on Twitter. And of course,
you can find all that info and the show notes on the podcast website at cppcast.com.
Theme music for this episode is provided by podcastthemes.com.