Algorithms + Data Structures = Programs - Episode 182: C++ Variadic Templates, Swift and More with Doug Gregor
Episode Date: May 17, 2024
In this episode, Conor and Bryce chat with Doug Gregor from Apple about C++11 Variadic Templates, C++11 std::tuple, C++17 std::variant, Swift and more!
Link to Episode 182 on Website
Discuss this episode, leave a comment, or ask a question (on GitHub)
Twitter: ADSP: The Podcast, Conor Hoekstra, Bryce Adelstein Lelbach
About the Guest: Douglas Gregor is a Distinguished Engineer at Apple working on the Swift programming language, compiler, and related libraries and tools. He is code owner emeritus of the Clang compiler (part of the LLVM project), a former member of the ISO C++ committee, and a co-author of the second edition of C++ Templates: The Complete Guide. He holds a Ph.D. in computer science from Rensselaer Polytechnic Institute.
Show Notes
Date Recorded: 2024-04-29
Date Released: 2024-05-17
C++11 Variadic Templates / Parameter Packs / Expansion
C++26 Pack Indexing
C++11 std::tuple
C++17 std::variant
C++14 Digit Separators
Swift Programming Language
HPX (High Performance ParalleX)
Intro Song Info: Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons — Attribution 3.0 Unported — CC BY 3.0
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library https://youtu.be/iYYxnasvfx8
Transcript
So when Chris started this off, what was his original vision and intent?
Was it to create a general purpose systems programming language?
I see a nod.
I am nodding, yes.
The overall goal was general purpose systems programming language.
Would you say that's what Swift has become?
Yes. So I think
that is what Swift has become, but that's not necessarily what the perception of Swift is.
Welcome to ADSP: The Podcast, episode 182, recorded on April 29th, 2024.
My name is Conor, and today with my co-host Bryce, we continue our interview with Doug Gregor,
and in this episode, chat about C++ variadic templates, std::tuple, std::variant, the Swift programming language, and more. But actually, this is perhaps a good transition point because I don't want to talk
solely with Doug about concepts. I actually want to bring up something that you said to me
last year at PLDI, and I'm not sure if you'll remember this, but we were talking about a feature
that you worked on that did end up going into C++11, which was
variadic templates. And I believe you said something akin to variadic templates was like
a perfect feature. There were no defects. It's just a perfect, pure thing. And I'm very happy with it. And I felt a little bit differently
because while variadic templates are a wonderful feature
and anybody who does not feel that way
should look at how Tuple was implemented
pre-variadic templates.
You can go find it in the old Boost libraries
or the Thrust tuple,
a thing that I have had to deal with
for many more years than I would like.
But my objection, my issue with variadic templates
is this head-tail pattern
that you can't index into them.
Although, interestingly, last committee meeting,
we voted in parameter pack indexing. But explain yourself, sir. Why are variadic templates such a
great feature? I should have known it wasn't safe to talk to you about variadic templates.
So the reason I was so happy with variadic templates, I still like the feature, is it's a pretty tiny extension.
And for a tiny extension, it enables a whole lot of really interesting abstractions.
So at its core, you have one, I guess you have two new things.
It only introduces two things into the C++ language.
What did you just do? You put the peace sign up. For the listener, you know, Doug, Doug was, uh,
he put his two fingers up and then a bunch of balloons appeared, you know, floating up around his head.
I'm not sure if it's some birthday sign or something like that.
Maybe they thought you were...
It's not going to happen on our systems, Conor,
because we are both connecting from non-Apple systems.
And I'm pretty sure that this is an Apple thing.
Yeah.
Look at that.
That's amazing that it's an Apple feature in a Microsoft product.
Well, my understanding of how it works is that it's like an injection.
So it's something that Apple layers on top of the camera feed and then feeds through to the apps.
So it's transparent.
Yeah.
That's pretty cool, actually.
It's pretty impressive.
But the net effect is that we got a party with balloons while we're talking about
variadic templates. I'm pretty happy with that.
This is true. This is true.
Okay. The two things that variadic templates added are parameter packs, introduced by the
ellipsis, that abstract over any number of parameters. Same idea, whether it's template arguments or function parameters.
And pack expansion, which lets you take a pack and expand it out into multiple arguments.
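A minimal sketch (not from the episode) of those two pieces, a parameter pack and its expansion; the fold expression is C++17 sugar, in C++11 you would expand the pack into a recursive call or an initializer list:

    #include <cstddef>
    #include <iostream>

    // 'Ts' / 'args' is a parameter pack: it abstracts over any number of arguments.
    // 'args...' is a pack expansion.
    template <typename... Ts>
    void print_all(const Ts&... args) {
        ((std::cout << args << ' '), ...);   // C++17 fold expression over the pack
        std::cout << '\n';
    }

    // The same idea works for template arguments; sizeof...() reports the pack's length.
    template <typename... Ts>
    constexpr std::size_t arity() { return sizeof...(Ts); }

    int main() {
        print_all(1, 2.5, "three");                     // any number and mix of arguments
        static_assert(arity<int, double, char>() == 3, "three template arguments");
    }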
As a feature, it's really tiny.
And it wasn't terribly hard to implement.
So the initial implementation took about a week or two in GCC,
including porting over tuple, function, and bind libraries
and getting them completely working.
And so it's fairly easy to teach.
It's a fairly small extension.
And yet it let us get rid of this horrible limitation on the number of template parameters
you could have in a way that I was fairly proud of. Pack indexing is nice,
I agree. So why didn't we get pack indexing in the first version? Why was it like this
head-tail sort of pattern to unpack them? So we didn't have pack indexing in the first version,
mostly because we couldn't come
up with a syntax, which sounds really silly. But, you know, we tried several different things and
you couldn't really subscript. That wouldn't work. And I think we tried subscript with the
leading dot, like x dot subscript, you know, open square bracket something. And that was hitting
problems with the preprocessor. And essentially, we sort of got tired of looking for a syntax for a feature that wasn't strictly
necessary, right? Because you can implement this directly.
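A minimal sketch of what "implementing it directly" looks like, i.e. the head/tail recursion Bryce objects to (assumes C++11; the names are illustrative):

    #include <cstddef>
    #include <type_traits>

    // The pre-C++26 workaround: peel off the head of the pack recursively
    // until the index reaches zero.
    template <std::size_t I, typename Head, typename... Tail>
    struct nth_type : nth_type<I - 1, Tail...> {};

    template <typename Head, typename... Tail>
    struct nth_type<0, Head, Tail...> { using type = Head; };

    static_assert(std::is_same<typename nth_type<2, int, double, char>::type, char>::value,
                  "the element at index 2 is char");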
Why couldn't you subscript?
Well, because X subscript with an I, that is completely reasonable if your pack X
had a bunch of values in it that were all like vectors or something or arrays.
Right, right.
The syntax that we settled on
that we have in C++26 is dot, dot, dot subscript.
Okay, sure.
Yeah.
That will work.
That may have provoked antibodies
had we tried to add it to the proposal at that point
in the C++0x timeframe.
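For reference, the adopted C++26 pack indexing looks roughly like this (compiler support is still rolling out; the names here are illustrative):

    #include <type_traits>

    // C++26 pack indexing: the pack name, '...', then a subscript.
    template <typename... Ts>
    using first_t = Ts...[0];                         // index into a pack of types

    template <typename... Ts>
    constexpr auto last(Ts... args) {
        return args...[sizeof...(args) - 1];          // index into a pack of values
    }

    static_assert(std::is_same_v<first_t<int, double, char>, int>);
    static_assert(last(1, 2, 3) == 3);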
To be fair, it would have been almost impossible to have foreseen at that time how widely used
parameter packs would have been and how important the indexing would end up being. I think the same can be said of constexpr.
Constexpr is like the C++ case study in incremental evolution,
where we shipped a very minimal,
restricted form of constexpr in C++11,
and it's slowly grown over the past decade
with every release,
us allowing more things in constexpr,
making more things constexpr.
It's at a point now where you can have constexpr vectors and strings.
And that constexpr today is a very different beast
than it was back then.
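A small illustration of how far that evolution has come, assuming a C++20 compiler (a std::vector can be used during constant evaluation as long as the allocation doesn't escape to run time):

    #include <vector>

    // C++11 constexpr functions were limited to a single return statement; by C++20
    // you can run loops and even allocate with std::vector inside constant evaluation.
    constexpr int sum_of_squares(int n) {
        std::vector<int> v;
        for (int i = 1; i <= n; ++i) v.push_back(i * i);
        int total = 0;
        for (int x : v) total += x;
        return total;
    }

    static_assert(sum_of_squares(4) == 30);           // evaluated entirely at compile time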
And it's used, you know,
a whole new form of metaprogramming
has arisen around constexpr,
replacing type-based metaprogramming
in ways that I don't think the people
who originally worked on constexpr
could have ever really imagined. And as you say, variadic templates, a very important and
signature feature of C++11, at the time, it may not have seemed like that because it was this very
easy and simple thing to do. You said it was pretty straightforward to put into the language,
it's pretty straightforward to implement.
And the full impact of it may not have been known or realized at the time.
No.
And our goals when we did it were very much around these libraries.
So std function was mine.
I was working with Jaakko Järvi, who did std::tuple.
And we were both really frustrated by this dumb limit that we had: you could only have 10 or 20 values, or whatever the implementation-defined limit was. And so we wanted
this feature to make our libraries better, and that was kind of the scope. We imagined that,
you know, only library designers doing this kind of thing would end up using variadic templates. So keeping it sort of smaller was a benefit.
So tuple actually is an interesting one
because I, for a long time,
have been a proponent of exposing things
as struct and class abstractions.
And the way I like to explain this to people
is like, C arrays in C++ are weird
because they don't really behave like other things.
There's a whole bunch of caveats when you have an array object.
You can't return them from functions.
They have all these, like, when you pass them as parameters to functions, there's all these pointer decay things that happen.
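A small sketch of those caveats, next to the std::array contrast Bryce draws next (illustrative, not from the episode):

    #include <array>
    #include <cstddef>
    #include <iostream>

    // A raw array decays to a pointer when passed to a function: the size is lost,
    // so it has to travel separately.
    void takes_c_array(const int* a, std::size_t n) {
        std::cout << a[0] << " is the first of " << n << " elements\n";
    }

    // A raw array also cannot be returned from a function:
    //   int make_four()[4];   // ill-formed
    // std::array is an ordinary struct, so it copies, returns, and compares like one.
    std::array<int, 4> make_four() { return {1, 2, 3, 4}; }

    int main() {
        int raw[4] = {1, 2, 3, 4};
        std::cout << sizeof(raw) << '\n';             // 16: still a real array here
        takes_c_array(raw, 4);                        // 'raw' has decayed to const int*

        auto a = make_four();                         // returned by value, like any struct
        auto b = a;                                   // copied; a and b are independent
        std::cout << (a == b) << '\n';                // 1: it even compares like a value
    }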
And like std array is better because it's a normal thing.
It is a, you know, it's a struct. So therefore it follows
all the basic rules of what a struct is. And that's a good thing. We like things that sort of
like behave like the other things. And that's one of the reasons why in sort of modern C++ design,
we like to expose things as even things that are language features, we like to expose them through the
library. Even things like, you know, reflection, like the metadata type is exposed as a library
type. And things like, you know, tuple and variant get exposed as library types,
because then they behave like all other class and struct types. With that said, just so many hours and years of people's time
have been spent on making tuple work or making variant work.
And tuple, aside from std::get being a little bit ugly, tuple's not so bad.
Variant, I think, is less pleasant to work with.
The real place where tuple's painful is more in the implementation side of things.
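A toy version of the kind of recursive machinery a tuple implementation is built on, just to show where that implementation pain lives; the real std::tuple additionally handles references, empty-base optimization, conversions, and much more (assumes C++17):

    #include <cstddef>
    #include <iostream>

    // A toy tuple: one element stored per level of a recursive inheritance chain.
    template <typename... Ts> struct toy_tuple;
    template <> struct toy_tuple<> {};                          // empty tuple: the base case

    template <typename Head, typename... Tail>
    struct toy_tuple<Head, Tail...> : toy_tuple<Tail...> {
        Head value;
        toy_tuple(Head h, Tail... t) : toy_tuple<Tail...>(t...), value(h) {}
    };

    // get<I> walks the inheritance chain at compile time.
    template <std::size_t I, typename Head, typename... Tail>
    auto get(const toy_tuple<Head, Tail...>& t) {
        if constexpr (I == 0) return t.value;
        else                  return get<I - 1>(static_cast<const toy_tuple<Tail...>&>(t));
    }

    int main() {
        toy_tuple<int, double, const char*> t{1, 2.5, "three"};
        std::cout << get<2>(t) << '\n';                          // prints "three"
    }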
And I just wonder, would we have been better off with a language tuple?
Was that something that was considered at the time?
I don't remember it being seriously considered at the time.
Now, tuple has a pretty long history. So, it was a Boost library.
Yeah.
And, you know, it and function were the first two papers that were voted into the C++ technical report on library extensions.
Which is like the first thing the committee did after C++98.
Yeah, the TR.
The TR, right?
The good old TR.
It eventually came into C++0x.
And there wasn't a whole lot of appetite for language extensions at that time.
So I don't know that people would have really considered it as a language feature.
And part of the thing is what you said.
There's, you know, Tuple as a library works pretty well.
If we get a nice pattern matching facility, then it'll work pretty well, probably quite
well in C++.
Variant is a completely different story, though.
I do not love using variant.
I tend to hand code my own discriminated union types.
This is one of those things where in Swift, we got it right. And when I get to write the Swift form of, you know, what I think variant should have been, it is wonderful. And then I have to
port it to C++ for some reason. And it's, you know, to fit in the C++ compiler or whatever. I have to express that idea. It is really painful
because discriminated unions is actually a fundamental data type. Like it should be a
fundamental data type. We have structs, right? What do structs do? They let you put a bunch of
things together. What does a discriminated union do? It lets you express a choice between a couple of different things.
And variant makes that choice really painful.
You have to basically embed each of your choices in another type that you
create just for its name; like, there's no way to name these things. Every operation on a variant
is either this like unsafe thing where you want to go and check,
do you have this? And then use it. And then you have big ifs. Or you have to write a giant
type switch visitor thing. It can be so much nicer.
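A sketch of the ceremony being described: every alternative gets wrapped in a type created mostly for its name, and using the variant means either ask-then-use checks or a visitor (assumes C++17; the overloaded helper is a common idiom, not part of the standard library):

    #include <iostream>
    #include <string>
    #include <variant>

    // Each "case" of the choice is a type created mostly so that it has a name.
    struct Loading {};
    struct Loaded  { std::string body; };
    struct Failed  { int code; };

    using PageState = std::variant<Loading, Loaded, Failed>;

    // Common idiom for building a visitor out of lambdas.
    template <typename... Fs> struct overloaded : Fs... { using Fs::operator()...; };
    template <typename... Fs> overloaded(Fs...) -> overloaded<Fs...>;

    void report(const PageState& s) {
        // Option 1: the ask-then-use style.
        if (auto* failed = std::get_if<Failed>(&s)) {
            std::cout << "failed with " << failed->code << '\n';
            return;
        }
        // Option 2: the "giant type switch visitor thing".
        std::visit(overloaded{
            [](const Loading&)  { std::cout << "loading\n"; },
            [](const Loaded& l) { std::cout << l.body.size() << " bytes\n"; },
            [](const Failed& f) { std::cout << "failed with " << f.code << '\n'; },
        }, s);
    }

    int main() {
        report(Loaded{"<html></html>"});
        report(Failed{404});
    }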
Yeah. Yep. I completely agree. I think at the very least, we should have done variant as a
language thing. But if we were going to do variant as a language thing, it probably didn't make sense
to have a language variant and not a language tuple, so maybe we should have just done both
of those language things in C++17. So, before... I do want to chat with you about Swift, but
before that, there's one last C++ topic, and this is like bar stories that I was told many years ago.
And so I'm not sure if I'm going to remember the details or this is something that you're going to want to talk about.
Oh, no.
Do you recall playing a role in the history of digit separators in C++?
Digit separators in C++.
There was a story.
Did we end up getting digit separators in C++?
I literally cannot remember.
I remember a lot of discussions over what it could be,
and there was a serious problem because of user-defined literals.
So you couldn't have nice things.
You couldn't have the obvious underscore syntax.
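For the record, digit separators did land in C++14, spelled with an apostrophe precisely because the obvious underscore collides with user-defined literal suffixes, which may begin with an underscore. A small sketch:

    // C++14 digit separators use the apostrophe, because the underscore would be
    // ambiguous with a user-defined literal suffix (which may begin with '_').
    constexpr long long population = 8'100'000'000;   // separators are ignored by the compiler

    constexpr long long operator""_thousand(unsigned long long n) { return n * 1'000; }
    constexpr auto budget = 5_thousand;               // with '_' as a separator, 1_000 could
                                                      // have meant "apply the UDL _000 to 1"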
So what I've been told, so the way the committee used to work was there used to be a plenary on
Friday. And that was the fun plenary where we would, and the way that the closing plenary
has worked has changed like three or four times; there were a few different evolutions of this way of doing plenary, but I'll just describe them. It used to be
that we had two plenaries, one on Friday, and that was the fun one where discussion was allowed.
And then one on Saturday morning where we weren't supposed to have any discussion,
it was supposed to be just procedural and we'd vote. And there would only really be discussion if new information had been uncovered overnight. And these days, we don't even, we just have one plenary. We don't
even like have technical discussion during the plenary because we're supposed to like front load
all of it. But we used to do it this way. And those Friday plenaries could go long.
And the story that I had heard was that it was very rare that information changed overnight.
But what I seem to recall is that in one case involving digit separators, on Friday, there was the plenary and we were going to vote out digit separators.
And then everybody went off to the bar.
And then some group of people found some new issue.
And then some folks went around and told everybody about this issue.
And then between the close of plenary and Saturday morning, the room had changed and
the vote on digit separators failed.
And some stakeholders had
decided to fly out on Saturday morning because they were so confident that everything was good.
And then there were some surprises. Do you recall any of this?
This sounds vaguely familiar. I mean, an interesting problem with this approach, of course, is that you only learn
in the Friday plenary what some of the other working groups have been up to.
Right, right. We've gotten better about that.
If you're a compiler implementer, it may very well be that you were in another room when some
of these things were decided, and that might be a problem. So I have a vague recollection of this happening.
I do not even remember why digit separators were so problematic.
Oh, I will find the meeting minutes if I can and send them around in the show notes.
But yeah, I had a vague recollection that you were, you were one of the people involved.
I won't say the culprit because that would be, that would be unfair.
That sounds deeply unfair.
I will say that I won't say the culprit.
So the implication exists. I think we should talk about Swift because you were, you were there at the beginning.
And, and Chris will come on our podcast.
So how else are we going to learn about how Swift started?
I see.
So I'm your second choice.
Okay.
I'll remember that.
Thanks, Bryce.
Before we ask any questions, I'm curious, Bryce.
We're going to test Bryce's knowledge.
Just, you know, simple question.
What version do you think Swift, the latest Swift is on? Just. Oh, I'm going to guess the last time I knew what
Swift version was, was around 14 or 15. Because I asked Chris something about when an async feature
was going to go in. And he said, that's not planned until like 15 or something.
And that must have been five or six years ago
because I would have been working at NVIDIA.
So let's assume it's five years.
So let's assume that Swift revs like twice a year at least.
So it's got to be at least in the 20s, um, but maybe the early 30s. So I'm gonna say 24, 25.
Wow. The purpose of this question is everything I hoped it would be, folks.
Uh, Doug, would you like to tell Bryce what version Swift is currently on?
So the most recently shipped Swift version is Swift 5.10.
Oh, okay.
Well, then what do I know?
The big push right now is for Swift 6, which is actually a big deal. So we treat the major version as the only time that we can sort of make a real change that is source incompatible with the prior versions. So we did it much more often in the early days as Swift was still growing and learning its space. Swift 5 has been Swift 5 since early 2019.
So actually, basically half of its life has been Swift 5.
That's the point at which we declared ABI stability.
So then I must have been thinking of... Chris must have told me 4 and 5,
and I must have in my head mixed that up to 14 and 15. Okay, that's my theory.
Could have been. Swift 4 would have been around 2017, I guess. I may be off by one.
But all joking aside, I'm actually very excited to have Doug here on the podcast, because I do not get starstruck by many people. There are people that I was exposed to early in my career who I, like, looked up to, and I sort of get that little starstruckness with them.
Doug's definitely one of them.
I would probably say Dave is another one.
Sean Parent's actually not, just because I spent so much time at bars at C++Now with Sean; it's like he just became a friend.
And yeah, but Doug's one of those people who is like, we have Doug Gregor on the podcast.
Wow.
So you are in no way a second choice.
But Chris, if you want to come on the podcast, you're always welcome, buddy.
So, okay.
So you're at Apple.
You've been tasked with working on Clang.
And then at some point, Swift happens.
Like what happens in between you working on Clang and Swift?
Sure.
So we worked on Clang.
We made super fast progress early on.
So I joined in late 2008.
By 2010, we had a C++ compiler that could compile Boost.
So things moved really quick.
And we were rolling it out.
It became the preferred compiler on Apple platforms by 2010, 2011, and sunsetted GCC.
It's worth asking: why? Why build a new compiler?
We needed a compiler that was more flexible.
GCC was kind of hard to work with.
I implemented a lot in GCC.
So I did variadic templates.
I did ConceptGCC.
I think I also did rvalue references and so on, as well as some type system surgery.
It was just a hard compiler to work with.
It was an old C code base.
I think they've moved to C++ since then. But we felt that starting from scratch, knowing C++ as of, like, the C++11 standard, which was, you know, mostly done at that point, we figured we could do a much better implementation that, you know, leveraged the LLVM backend and leveraged the philosophy of the library-based design that LLVM had, and build a compiler that was just both better for users, but also more extensible to build better tooling on,
make it easier to implement new language features, and so on. And so it went very well.
We did that. We started adding some features. I mean, this is at the point where if someone came
up with a feature in the C++ committee, myself or later Richard Smith could go ahead and try to hack
it up in Clang. It's very possible that your digit separator story was this exactly, that we went and
tried to hack it up in Clang and found some ambiguity or something, even though I don't remember it. And so, but as this was happening, we were also adding more support to Clang for
other things like for Objective-C, which is the other language Apple, you know, supports
in the C family. We added automatic reference counting to try to improve the memory safety
of the language. But we really realized that C++ and just the C family of
languages had some fundamental limitations that we could never fix. Some of those are around
memory safety. I don't necessarily want to go down that rabbit hole, but our view was that
these are unsolvable problems in the language as we have it, or that solving them would change the language so much
that it wouldn't still be the same. And the other problem is that you can't really remove anything
from C++. And so you're stuck with a lot of decisions that we would not have made early on.
And so we decided to, it was time to go and build a new modern native language
based on all we had learned about language design and compiler design.
And so Chris Lattner started it totally in secret.
And then after a bit of hacking, he looped in several of us to go and join him.
So I'm committer number two to the repository for Swift
and have been working on it ever since.
The generics system was initially my design.
I've also worked on the concurrency system, the macro system,
and most other aspects of the language over the years.
So when Chris started this off, what was his original vision and intent? Was it to create
a general purpose systems programming language? I see a nod.
I am nodding, yes. The overall goal was general purpose systems programming language.
Would you say that's what Swift has become?
Yes.
So I think that is what Swift has become, but that's not necessarily what the perception of Swift is.
Yes, I would tend to agree.
I think people tend to think of Swift as that language that you write your Apple, your iOS apps in, right? Yes. But so, okay. So
maybe you can, you can tell us a little bit more about Swift and Swift's design and specifically
what makes it a general purpose language and not just that app language.
Sure.
So, I mean, what does systems programming language actually mean?
Because you can define it a lot of different ways, and so it's tough. So the things that we look at are, as a language,
it needs to have abstractions so that you can build good libraries
because any system of any size is architected as a set of libraries.
It needs to have a compilation model that admits efficient code.
So in our view, this meant it had to compile to native code.
We couldn't accept requiring a JIT behind the scenes. And the design has to be such that you can scale your program.
So I've had a hard time articulating exactly what this means.
But, you know, if your language depends on having to see every bit of source code everywhere
and compiling it all together, you aren't going to be able to scale up to a larger system. You have to be able to ship separate software components with stable interfaces
that can evolve over time. And so our view is those are the main ingredients.
And the last piece that we cared about a lot is related to memory safety, but it's not memory safety. So it was really, we wanted to
make it such that it was easy to get to correct code. So the compiler and its type system should
support you in getting your code right. Things like undefined behavior shouldn't happen. And
the type system should encourage good patterns that help you write
correct code and deal with erroneous cases and so on in a way that it's obvious when you look
at the code what it's doing and that's what it actually is doing under the hood. And so that's
kind of the overall dream of Swift. And you can see this play out in how the language is actually designed.
So things you're probably familiar with, for example, in Swift,
we push value semantics everywhere.
We love value semantics.
And value semantics have this wonderful property that if you have a value
and you make a copy of it, the original value and its copy are completely independent.
Nothing you can do
to one will change the other. This is so good for local reasoning because you can just reason about
the code that's on your screen. You don't have to think about, well, someone else might have
an alias to my data structure over here that's going to create some spooky action at a distance
and affect my code. And when you have that basis of value semantics, which runs throughout Swift,
then you can make immutability that really works and say, okay, well, most of the time you're not changing things. So let's make it easy to do immutability in the language and make that sort
of the default view that things should be immutable unless you really, really want to
make it mutable. And then those mutations will be local. When did automatic reference counting become the memory management story of choice for Swift?
Or does that just come from the Objective-C background? Or was it a principled choice?
No, it was actually a principled choice. So it was certainly true that Objective-C had a convention-based manual reference counting
scheme that we then codified in Clang to make it automatic. When we started Swift,
we looked at the options. I mean, the main options there were traditional garbage collection
or something like automatic reference counting.
And they're interesting and have different trade-offs. I think the easy answer probably
would have been, ah, let's do a traditional garbage collector. But there's a couple things
that worked against that. One, we'd had sort of a poor experience with trying to do garbage
collection for Objective-C. There was an attempt. It didn't really work out very well. But there's a reason for that, which is when you're interoperating
with a C stack, you have some of your software written in C and some of it written in another
language. You can't trust C to behave the way you want it to from the perspective of a garbage
collector, right? And you see how hard this is in any language that has garbage collection
and also has a native interface to C. And with Swift, we knew that one of our key features had
to be really good interoperability with C so that we could start like incrementally moving the world
over. Having garbage collector would have made that really hard. That's one of the points. The other point that I think is
really important is that the great thing about reference counting is you can locally optimize
it away when you have more information. It becomes a compiler optimization problem. You set up your
calling conventions so that you know where the reference counts are supposed to be maintained.
But if you have some performance
critical code where you absolutely cannot handle the cost of reference counting,
you can force the optimizer to look harder, add more annotations to the language, whatever it
takes to eliminate the reference counting just in that place without affecting anything else
in the system. So it has this really nice property that you can do local
optimizations to improve performance without throwing out the model. Finally, you don't need
a runtime except for like we have retain and release. That's it, right? A couple of instructions.
It's an atomic basically. And so you need very little runtime support to make this model work.
And so that lets you scale your language down to run it in environments where you would never consider having a garbage collector running.
Yeah.
Right.
And so that gave us the option to actually say, well, you know what?
This is actually reasonable to use down in a kernel or down in firmware somewhere where you don't, you could never do a GC and you would have to
essentially switch the model. We don't have to change the model. We just have to make sure it's
optimized well in those places. And it also gives you determinism, right? You know, you don't,
yeah. Yes. That is the other thing I should have mentioned. It gives you determinism
of, you know, at the end of your scope or when you stop using a value that the reference count is going to drop and it will go away.
So you can reason about it better than you can with a garbage collector.
One of the reasons I've always been a fan of Swift has been ARC.
Because I, for many years, worked on the HPX runtime, which is an HPC, parallel programming runtime. And HPX
relies entirely upon reference counting and has this distributed reference counting mechanism
where it's based on, you know, you get a number of reference count tokens, and then
as the thing moves, you can split them. And so you don't have to have that much reference counting traffic.
And that's what I'd spent a lot of my early work on. And, you know, one of the things that I came
to appreciate in that time was, in particular, when you're working on a concurrent system,
you need to have some form of managed lifetimes because you cannot solely rely upon new and delete,
and you don't have this scoping. So you can't just rely on something like unique pointer.
You need some way of managing the lifetimes of objects. And I really believe that reference
counting is the best option.
It's deterministic. I think it gives you the best performance. It's straightforward to reason about.
The only thing you really have to concern yourself with is the potential risk of cycles.
But in my years of dealing with reference counting systems, it's never been a major issue.
I've run into it a couple of times.
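For C++ readers, the closest analogue is std::shared_ptr; a minimal sketch of the cycle concern and the weak-reference escape hatch (this is C++'s reference counting, not Swift's ARC):

    #include <memory>

    struct Child;

    struct Parent {
        std::shared_ptr<Child> child;     // strong reference: keeps the child alive
    };

    struct Child {
        // A strong back-pointer here would form a cycle and both objects would leak.
        std::weak_ptr<Parent> parent;     // weak reference: observes without owning
    };

    int main() {
        auto p = std::make_shared<Parent>();
        p->child = std::make_shared<Child>();
        p->child->parent = p;             // the back edge doesn't add to the count
    }                                     // counts hit zero here, deterministically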
But I think automatic reference counting
is something that almost makes Swift unique
because I think most of the languages
that are in the space of a language like Swift either have no automatic memory
management story or they do a GC. A reference counting thing has just not been very popular
over the years. And it's one of the things I really like about Swift, I have to say.
Yeah, it's really interesting because, you know, there's been an enormous amount of effort put into making GCs, like, way better than they were before, right?
Ten years ago, I probably would have said, you know, no GC pauses when you use reference counting.
But GCs have gotten so much better that, you know, that can be an issue, but it's pretty rare.
The main place it affects users is with reference cycles. And, you know, we do see it, right? We do see users that
end up with a reference cycle. And we were very deliberate: when we picked reference counting,
we said, there shall not be a cycle collector. We consider the presence of a cycle to be a
programmer error. The memory will leak, you will find it with your
tools, and we'll provide the tools that make it obvious where that is, as well as providing the
language tools where you can have a weak reference, for example. Yeah, I was just about to ask. So,
is there in Swift some sort of weak reference mechanism? Yeah, right. So, the way it works is
you have, you know, if you just have a variable that refers to an object, a value of class type, then that is a strong reference.
It'll keep it alive.
You can describe that variable as a weak variable, spelled like that.
The type of that variable is always an optional of that class type.
The reason is that optional will implicitly become nil if the object ever goes away.
Because a weak pointer does not keep it alive.
It has to tell you.
Now, optionals are actually deeply ingrained into the Swift language.
And there's a lot of syntactic sugar to make them easy to use. Part of the reason we can
do this thing with weak is that there are no null pointers in Swift. This was a massively
controversial decision when we first did it, when Swift was introduced. But basically,
if you have a value of type person where person is a class, there's always an object there and it's kept alive.
If you want a null pointer, that's fine, but it shall now be of type person question mark, meaning an optional person.
And to access it, you always have to go and look like, is it nil or is it not nil?
And we provide nice syntax to make it easy to go in and do that, um, consistently, uh, to make your code more
robust. And it's one of those places where it aids correctness. Like, the first time, if
you're coming from C++, you're like, I know how to deal with null pointers, I know what I'm doing, why
are you making me do all this ceremony? You kind of get angry with us. We had that. But after a while, you realize you stop thinking about nil ever.
And the only time it ever comes up is when you have an optional and the type system is showing you what's going on and the type checker is helping you deal with it in a reasonable way.
And you realize you no longer have this background "I'm worried about this stuff" feeling.
Because the language has got a better
model that reinforces correct code. And this is something we could totally have done weak
pointers with the GC behind. That's pretty common in there, but it fits so nicely together using the
reference counting model with all of these benefits of determinism and optimizability and small runtime overhead for that feature.
But I think that the determinism and the performance advantages to me
allow languages with reference counting to thrive in spaces
that they otherwise would not be acceptable in,
which is one of the appeals to me.
Be sure to check these show notes either in your podcast app or at ADSP the podcast dot com for links to anything we mentioned in today's episode, as well as a link to a GitHub discussion where you
can leave thoughts, comments and questions. Thanks for listening. We hope you enjoyed and have a
great day. Low quality, high quality. That is the tagline of our podcast. It's not the tagline; our tagline is
chaos with sprinkles of information.