CppCast - Software development in a world of AI
Episode Date: May 2, 2025
Daisy Hollman joins Phil and Anastasia. Daisy talks to us about the current state of the art in using LLM-based AI agents to help with software development, as well as where that is going in the future, and what impacts it is having (good and bad).
News: Clang 20 released; Boost 1.88 released; JSON for Modern C++ 3.12.0
Conferences: Pure Virtual C++ 2025 (full schedule); C++Now 2025; C++ on Sea 2025 (speakers); C++ under the Sea 2025
Links: "Not Your Grandparents' C++" (Phil's talk); "Robots Are After Your Job: Exploring Generative AI for C++" (Andrei Alexandrescu's closing CppCon 2023 keynote)
Transcript
Episode 397 of CppCast with guest Daisy Hollman, recorded 28th April 2025.
In this episode we talk about new releases of Clang, Boost and Niels Lohmann's JSON, upcoming
conferences, and then we talk to Daisy Hollman about using generative AI as a tool to help
with software development. Welcome to episode 397 of CppCast, the first podcast for C++ developers by C++ developers.
I'm your host, Phil Nash, joined by my guest co-host Anastasia Kazakova.
Anastasia, how are you doing today?
Good. How are you?
Ah, yeah, not too bad. Just recovering from the ACCU conference, which was a few weeks ago now.
And now I'm off to Paris tomorrow to speak at the C++ meetup there. So
travelling never seems to end here. How about yourself?
I'm good. I quickly recovered from ACCU because I had further travel, but we luckily held a nice
meetup here in Amsterdam, talking about Bazel for C++ developers. A little bit more C++, as usual.
Right. Yeah. You can't get away from C++.
All right. But we'll get more into C++ in a bit. But at the top of every episode,
we do like to read a piece of feedback. This one is from an email from Abe Mishler. So Abe says: I'm really enjoying the CppCast podcast.
I listen to it while I drive to and from work as a software developer. I would like it if you could
talk on the topic of G++ and Glibc versions that come with different versions of Linux,
and specifically their ABI incompatibilities.
Now that is a big topic. We haven't really covered ABI stability for a while. Maybe it's
time to do that one again. It's an evergreen topic, I think. He does go on to compare some
different ABI stability issues between different GCC and Linux versions and compares that with
Windows, which I presume means Visual Studio, Visual C++.
Interestingly, I think that whole situation used to be reversed years ago.
It used to be Visual C++ always broke compatibility, but yeah, big topic, as I say.
Yeah, it would be interesting to cover that.
Yeah.
I remember you had an episode, actually, there was an ABI compatibility topic on CppCast
some time ago. I remember swimming and listening to that episode; he was talking about
driving to work and listening to episodes. And it was quite a longish one, so I
remember I had a long swim because of that, but it was super interesting.
Yes. With ABI, it's definitely more than a single swim. So Abe's email is actually a two-parter, because
he goes on to say, and also related to compiler versions, I've enjoyed listening to you discuss
new C++26 features and proposals. So that's nice to hear because most of the feedback
we get is you're talking too much about upcoming C++, we want some real-world stuff.
So at least somebody's listening.
But he says it seems to take a long time for these features to make their way into GCC.
Not sure about Clang or others.
After so many years of stagnation in the language,
what caused all of the recent and frequent developments in the C++ language?
Especially since compiler writers do take so long to implement them.
Another big topic which maybe we could do a whole episode on at some point but
funny enough I've done a talk on this recently called Not Your Grandparents' C++. I'll put a link
to that in the show notes, but I think the short answer is: there was that period between 1998 and 2011, C++98 to C++11, where there was no new
standard, if you ignore C++03, which is really like a patch release. And so people got used to the
idea that C++ was pretty much unchanging. And then we switched to that train model where we have
a new release every three years, so everybody's trying to get their proposals in for the next
version. And that's really been quite a change in the dynamic. Mostly for good, but not all
for good, of course. But yeah, maybe we'll pick up that topic as well in the future. So thank you, Abe.
We would like to hear your thoughts about the show. You can always email us at feedback at cppcast.com.
That is our preferred way now but
we can still be reached on X, Mastodon or LinkedIn; we just don't check them quite as often these days.
So joining us today is Daisy Hollman. Dr Daisy S Hollman is a programming language expert who recently
joined Anthropic after a decade of influential work on the C++ standards committee,
where she made significant contributions to features like mdspan, atomic_ref, and the foundations of std::execution.
With a PhD in quantum chemistry and expertise spanning generic programming, metaprogramming, concurrency and multidimensional arrays,
Dr. Hollman has dedicated her career to making complex programming abstractions
more accessible. At Anthropic, she now applies her extensive C++ knowledge to exploring how
programming languages like C++ and Rust will evolve alongside AI systems, driven by her
belief that better programming languages and AI systems can increase economic mobility
and create more equitable technology. Daisy, welcome
to the show.
Thank you for having me. Yeah, I had my mic turned off because I was worried about interrupting.
There was so much exciting stuff in the intro. I was like, I'm going to accidentally jump
in if I don't turn my mic off. It's nice to be on the show again.
It's great to have you. And yeah, congrats on ticking off the first square of podcast bingo:
forgetting to turn on the mic.
I did it a few times too. Actually,
I was wondering, you have so much impressive stuff in your bio, and how does it
feel there in the future? I mean, you live three or four months ahead of us all. So how does it feel there? Is it good? Yeah, I mean, I think it's constantly scary,
but it's also exciting and everything is evolving so quickly around me. And I'm just incredibly privileged to have gotten to be at Anthropic right at the
heart of all of this at this time.
I always tell the story that I missed two different opportunities to jump into AI world.
One right after my PhD.
My PhD is actually in an area that is much more closely related to machine learning than
you would think.
The math is very similar in quantum chemistry.
And I had that opportunity, and I was like, oh, AI is a fad, it's going to pass.
It's fine.
And then even with the multi-dimensional arrays work, I had a lot of opportunities to really
focus back in on the parts of multi-dimensional arrays that were really important to machine
learning and to AI. And I thought, oh, this is a passing fad. And then I got my third
opportunity and I didn't miss it. So I'm, I have never prepared so much for an interview
in my life. I've never wanted a job so much in my life. This is so incredible. And I'm
just really lucky to be a part of it.
Well, third time's the charm. So I'm going to break the rules. I've got a comment on
your bio as well, because I've got a confession to make, which is that I took the bio you
had for the ACCU conference, which was about three paragraphs long, which I thought was
a bit too much to fit into this read. In fact, it was still quite long anyway. And then I gave that to Claude and asked it to summarize it for the podcast for me. And
this is what it actually produced. So this is a Claude generated bio. Yeah. The funny thing about
that is that, you know, my bio was originally generated by Claude too. I
actually was playing around with our new web search feature, which is now public and has been public for a while.
And had Claude search for information on me
and for what matters to me.
And I crafted a prompt that was very careful and everything.
I mean, prompting is a big part of all of this.
We'll get into that.
But yeah, and I said, can you write something up on me?
And I liked it so much.
It was very flattering.
Claude is a sweetie. But yeah, so it's really this full circle thing of like Claude summarizing
itself. It's really cute. So we've addressed one of the concerns people have already, which is what
happens when you feed AI-generated stuff to an AI, and we can still get some decent results.
But yeah, well, we'll see how that holds up. We'll get into that in the main topic. So
before we do that, we do have a few news articles to talk about, so feel free to comment
on any of these. First of all, we've got a few releases to catch
up on. And I say catch up because it's been a few weeks again since the last episode.
So some of these are a little bit older.
Like the Clang 20 release.
And that was a few weeks ago now.
I think we just missed it last time we recorded.
So as usual, the first public release of a new Clang major version is a .1, so this is 20.1.
And I put all the release notes on the show notes.
But there's a few highlights I picked out. One is that they've got lifetime extension
in range-based for loops. That was the fix that Nicolai Josuttis put into C++23, but I think
it's been retrofitted back to previous versions as a bug fix.
So under certain situations, lifetimes are now extended when you create a temporary as
part of a range-based for loop where previously they wouldn't have been, which was quite surprising
behavior.
So that's really nice.
Seems like a little thing, but actually can make a big difference.
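For a concrete picture, here is a minimal sketch of the kind of code that used to dangle; the Data type and make_data function are just illustrations, not from the show:

    #include <string>
    #include <vector>

    struct Data {
        std::vector<std::string> items;
        const std::vector<std::string>& get() const { return items; }
    };

    Data make_data();  // returns a temporary

    void use() {
        // Previously, the Data temporary was destroyed as soon as get()
        // returned, so the loop iterated over a dangling vector. With the
        // retrofitted fix, every temporary in the range expression lives
        // for the whole loop.
        for (const auto& s : make_data().get()) {
            // safe with the fix; undefined behaviour without it
        }
    }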
Next up, they've implemented module-level lookup for C++20 modules,
which seems like something that just got missed previously. Good to see module support creeping
closer and closer to actually being usable. So another step along the way. And then something
that originally went into Clang 18, which was the explicit object parameter. That's the deducing this proposal in C++23.
So that got into Clang 18, but what they didn't put in was the feature macro for it,
so you couldn't actually detect it.
That's now in Clang 20.
So a nice little rounding out.
A lot of rounding outs, I think, in Clang 20.
So that's good to see.
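For reference, the detection that the newly added macro enables looks something like this:

    // Clang 18 and 19 implemented deducing this but didn't define the
    // feature-test macro, so this portable check only passes from Clang 20:
    #if defined(__cpp_explicit_this_parameter)
        // explicit object parameters ("deducing this") are available
    #endif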
Looking forward to the next things coming in Clang.
I've been messing around, and some of you may have seen me on Twitch,
messing around in between
jobs with Clang, learning how all of that world works and trying to
implement some of the reflection stuff.
And that is a whole other journey, getting involved in Clang.
But I think those are all good features to be seeing.
A feature macro...
I'm surprised they didn't just backport the feature macro,
but I guess you can't break people.
I don't know.
Yeah, deducing this is great.
I don't know if you've gotten a chance to play around
with it in real code, but it simplifies so many things.
Even if you still use CRTP for certain reasons,
deducing this is pretty amazing.
Yeah.
It's one of those features that has a number of applications
beyond the original use case.
That's always a good sign.
It makes things so much easier in generic coding,
generic programming, and anything
that I've done with ranges. It's just been fantastic with it.
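As a minimal sketch of the simplification being described here (the mixin and names are illustrative, not from the show):

    #include <iostream>

    // Pre-C++23, this mixin would need CRTP: template <class Derived>
    // struct printable { ... static_cast<const Derived&>(*this) ... };
    // With deducing this, the explicit object parameter deduces the
    // most-derived type directly:
    struct printable {
        void print(this const auto& self) {  // self is the real type
            std::cout << self.name() << '\n';
        }
    };

    struct widget : printable {
        const char* name() const { return "widget"; }
    };

    int main() {
        widget{}.print();  // prints "widget"
    }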
So the next release is Boost 1.88.
That's got two new libraries in it.
Hash2, an extensible hashing framework, which I haven't actually tried myself,
but I took a look at the docs and some of the Reddit comments on it as well.
And it seems like a pretty high quality framework for creating and composing hash
codes, which is an important but maybe slightly boring part of normal coding; important to
get right, I think. And very easy to get wrong, should we say. So nice to see that in Boost.
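To give a flavour of the composition idea, here is a rough sketch based on the library's documentation; the header paths, the default_flavor name, and the exact hash_append signature are assumptions taken from the docs rather than verified against the shipped release:

    #include <boost/hash2/fnv1a.hpp>
    #include <boost/hash2/hash_append.hpp>

    #include <cstdint>
    #include <string>

    // One generic hash_append() entry point feeds any hasher, so user
    // types compose the same way regardless of the algorithm chosen.
    std::uint32_t hash_point(const std::string& name, int x, int y) {
        boost::hash2::fnv1a_32 h;        // swappable hash algorithm
        boost::hash2::default_flavor f;  // assumed flavor type (see docs)
        boost::hash2::hash_append(h, f, name);
        boost::hash2::hash_append(h, f, x);
        boost::hash2::hash_append(h, f, y);
        return h.result();               // 32-bit digest
    }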
And also MQTT5, the client library built on top of Boost ASIO, I think.
And that's for IoT messaging, which is not something I'm really in the world of, so I
can't really say more about it.
But nice to see, you know, quite a big useful library getting to Boost.
Still active development going on
over there.
Yeah, that all sounds great.
I haven't used Boost in years.
I know that this community is good.
I'm going to get a lot of pushback for that, but certainly good to see developments still
happening on top of ASIO.
Yeah, and hashing.
Oh, the other comment I had was about hashing. I know
Abseil had a big push to fix their hashing stuff. Not their hash tables being weird, but
some of their hashes were not optimal. And this is one of those things that really gives you kind
of step-function performance improvements and you don't even know about it. It's very, very specialized.
Yeah, exactly. That's why you really want the right people working on that stuff.
Talking of having the right people working on things, there's a new version of JSON for Modern
C++. That's Niels Lohmann's library. We had him on the show a few months ago. So 3.12 is out.
Again I'll put all the release notes up on the show notes. A couple of highlights.
Diagnostic byte positions. So basically the byte position through the parse is now part of the diagnostic
if there's an error. Really useful, because when you're just parsing strings
and you get an error it's really hard to pinpoint exactly where that happened. So a nice little addition.
Support for conversions to and from std::optional. So always nice. And one for you,
Daisy: multi-dimensional arrays, now supported in a JSON context. I don't know what that looks like
on the JSON side, but it's supported in the library. So there you go.
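A small sketch of the std::optional side of that, going by the release notes; treat the exact semantics as the library docs' word rather than ours:

    #include <optional>
    #include <nlohmann/json.hpp>

    void demo() {
        using nlohmann::json;

        json j = std::optional<int>{42};        // stores 42
        json k = std::optional<int>{};          // nullopt maps to JSON null

        auto a = j.get<std::optional<int>>();   // back to an engaged optional
        auto b = k.get<std::optional<int>>();   // back to std::nullopt
    }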
Is that mdspan or is that just T** or something?
Yeah. I took a very, very small look and it just seemed to be like a C-style multi-dimensional
array. Maybe it works with mdspan.
I haven't looked closely enough, but I think it tends to track a
standard or two behind, so maybe not.
Ah, so they wouldn't have mdspan.
Yeah.
Yeah.
So that was the releases roundup. Now a brief roundup of conferences:
there's a 2025 edition of the Pure Virtual C++ conference.
That's all online.
One day, or half a day really, on the 30th of April.
Still completely free.
There's five live talks going on.
And then there's some pre-conference recordings going out as well.
So I think they're considered part of the content.
So you can watch those at any time.
They're going to be going out from potentially now, I think.
And then five talks on the day.
So definitely take part in that because why not?
As we speak right now, in fact, maybe in an hour or two, I'm not sure.
I haven't checked the time zones, but I think C++Now should just be starting.
So you're probably too late to get there because it'll be finished by the time this airs. I mention it just because it'd be rude not to.
Obviously a great conference. The videos I'm sure will be out in a few months.
The next conference you can actually get to would be C++ on Sea, which, for those of you that don't
know (full disclosure), is a conference that I run in Folkestone
in the UK, 23rd to the 25th of June. We did mention it last time so I won't talk too much
about it this time, just to say that the full speaker list has now been announced and the
full schedule will be out soon. If it gets out in time, I'll put that in the show notes
as well, otherwise just the speaker list so you can see who is speaking.
And then another C++ with Sea: C++ under the Sea. If you like it on the Sea, you can go under the Sea, which is here in the Netherlands, for sure.
And yeah, I keep saying thank you, Phil, for allowing us to use this kind of interesting name.
To be honest,
Breda, where we're holding the conference,
is actually not under the sea,
but we can go down in the building,
and so you feel the true experience here in the Netherlands.
But yeah, that's the conference we started last year,
and I'm super happy it ran very successfully.
And so we're finally going to have two days this year. I'm
super grateful, because I'm the conference chair and program chair,
and you have to pack, like we had 60 submissions last year, into two tracks in one day. That's a
challenge. So I asked our organizers to never do that to me again. So we have two days. Yeah. And the call for papers is
actually running. So the conference is in October. The call for papers is open till mid-June. So you
still can use your chance to submit. I encourage everyone to do so. And we have announced one
keynote speaker is Sean Parent. I think you all know him, but we also have another keynote speaker.
It's, he's not on the site yet,
but I think I can tease already
because we got the official yes,
and it's Dr. Walter Brown.
So I think that should be an amazing event.
And more keynote speakers are coming.
We also have a day of workshops.
So yeah, please submit and come under the Sea.
Thank you for the exclusive.
I'm excited about both of those.
I was going to say, I think C++ on Sea is probably at the top of my... either C++ on Sea or
NDC TechTown are at the top of my list of conferences I have not spoken at and want to.
Probably C++ on Sea is above it. And super excited
to do that sometime. This year is all crazy. But I am doing, I am hopefully doing a lot more speaking.
Anthropic was I think a little bit like, well, we don't know about this talking to C++ people.
And then they saw the response from ACCU and they're like, oh yeah, let's do a lot more of this.
So that's, I'm very excited about that.
I'm very excited about all the opportunities to interface with all of you and work on all
of these things.
And I am trying, I am actually trying to talk them into letting me submit for C++ under
the Sea.
I'm a little late for C++ on Sea, but yeah, I mean, it's a small conference to fly all the way across the
Atlantic for, but if I can pair it up with some other things, then maybe so.
Hope to see you at all the conferences.
Yeah, so excited.
So let's move on into our topic for today then. Now we have Daisy here. So obviously you mentioned Anthropic;
Claude.ai is the main service. But interestingly, Anastasia, our guest co-host today,
you're obviously not working at an AI company, but JetBrains has been embracing AI tooling recently
as well. And both of you gave keynotes at the ACCU conference
a few weeks ago on the subject of AI in software development.
And interestingly, the response was actually quite polarizing.
I mean, mostly very good.
Great feedback, actually.
I had one memory of being in the hallway just
after your keynote, Daisy. And I was talking to someone
and they looked like they were just in a bit of a daze. And they just said that they were
still processing what it all meant. It really had a profound impact on them and I had some
other experiences a bit like that as well. So definitely very impactful. But then I also had
some concerns from a couple of places: that since you both actually work
for companies that either provide the LLM-based AI services, in your case, Daisy, or use them
as part of your products, in your case, Anastasia, there's some sort of bias at play
there; that, you know, you're just selling your products.
So let's get that out of the way right at the start. Can you address those concerns?
Yeah. I mean, well, I'll start with saying that Anastasia is absolutely not. She was
selling a competitor's products. The only demo she did was with Windsurf, which is a direct competitor
to JetBrains. So I mean, let's get that out of the way. There's no profit motive there.
I happened to use Claude Code for a lot of my demos because I have
free credits there, because I work there, and I wasn't going to spend a lot of money
on some other service just to be unbiased. Maybe I should have, but no, this is not about,
I mean, it's certainly not about promoting my own company's product. It's about promoting
a shift in the industry. I think the first thing I would
say is, yeah, I mean, don't trust me. Like, this is not, I would view any of this with
a healthy amount of skepticism, but there's no, there's also no like secret cabal plot.
Those of you who know me personally know I am honest to a fault, like to a significant fault. And I don't think that I would be doing
this. I mean, I went to Anthropic because I saw this
kind of tide shifting. I submitted this, Phil was on the
thread, we were talking about this topic as a keynote
topic. I'm not sure if I actually
sent it before I started at Anthropic, but it certainly was in my plans well before I even
started interviewing at Anthropic. And this was something I've been wanting to keynote on.
It's something I've talked about extensively with Andrei Alexandrescu
since his last keynote at CppCon on this.
Yeah, that was amazing.
Yeah, we wanted to update that. And I think, look, I'm not going to lie, Anthropic being ahead of everyone right now in coding benchmarks
and me talking about AI at a coding conference looks suspicious.
So like, I'm not going to deny that.
But it is also true that I think that it's only going to get in your way if that is your focus here.
I'm not part of some conspiracy.
I know, I'm not trying to say that it's a conspiracy to think that profit motives
are real.
But yeah, I don't know.
I'll let Anastasia say what she's going to say too. I think, yeah,
she's absolutely promoting a competitor's product. So she's completely above board on
that.
Before we head over to Anastasia, I think you may have even mentioned this in your keynote,
but the cause and effect is the right way around. You know, you're working at Anthropic because
of your interest in AI rather than the other way around. So, absolutely. Interest comes first.
Yeah.
Yeah.
And I think what really happened to go one step further on that is I was skeptical of
working on AI because I was skeptical of the societal good that could be done and really
diving into where Anthropic stands on a lot of that.
It's not perfect. And I continue to kind of push back on things that are not perfect, but it is really, really
focused on societal good and societal impact. The ethics team at Anthropic really kind of has
overwhelming veto power, to the tune of millions and millions of dollars of lost profits sometimes. So I know the story is out there officially that both Google and
Anthropic had very similar ChatGPT-like models working internally before ChatGPT was released.
And there is still kind of a lot of, well, what if we had done the thing
that we didn't think was right at the time, maybe then we'd be able to do the right
thing this time and have a stronger voice.
But yeah, I mean, the ethics team blocked the release of what would have been the revolution that everyone saw
ChatGPT to be.
Yeah, I think we'll touch a bit on some of the ethics part of it as we go through our main topics
in a moment. But let's just hear from Anastasia, who we move on.
Yeah, I really...
Although you've sort of been covered.
I really love what Daisy said about taking it all with a grain of salt.
I think that's what everyone should do when they talk about GenAI and dev tools.
And like, you know, at JetBrains, we've been doing the dev tools business for 25 years.
And I came here also because I felt that I'm passionate about that.
I'm passionate about tooling. So I quite often do speak about tooling. Sorry for that. Just
because I love speaking about what I'm passionate about. I mean, I haven't been working with C++
in production for many years now. I stopped before JetBrains. So I really can't speak about, you know, some in-depth language things. I
just don't feel that I'm that expert. So for sure, my passion is tooling. So I just speak
about tooling and AI. GenAI actually is disrupting our business a lot, in a good way. And we were
struggling with that. We were challenged by that, you know, all the stages, like rejecting it, then accepting it. And honestly, speaking about GenAI and dev tooling in the C++ community is quite
a challenging thing, because it's much easier to go to, I don't know, the JavaScript, TypeScript, Python
community now and speak about GenAI. It would be very easy. But when you speak about that in C++,
it gets questions. Yeah, and people start thinking about it.
They came to me talking about all the things that there are questions around that, and
that's actually very good.
And I was really happy that during the keynote, Jason asked me quite a lot of very good questions,
actually, about the quality of the C++ code generated and all that
stuff. And quite recently, he actually posted on LinkedIn, I think about a week ago,
where he was comparing Claude and ChatGPT and asking them to program C++ in the style
of Jason Turner. I really love that. I really love that passion to try and learn more from people
who are questioning GenAI. And I mean, that was the goal for me. I was there not so much to promote the tools.
I mean, when I'm wearing a JetBrains t-shirt, I think I'm already promoting that, so I can't
escape from that. Matt Godbolt was quite surprised when he saw me in regular clothes.
So I mean, I'm doing that by design. And that's it. And unfortunately,
maybe you just have to accept that. But talking about GenAI and C++ was a challenge about,
yes, let's raise all these questions. Like just talk about them. Let's indeed think about how
challenging it is for GenAI to generate something decent and for you to get it out of the GenAI.
And a lot of conversation, actually, there were questions, there were concerns.
And I was happy that the people came and asked me a lot about that.
So, yeah, I've read some feedback from people who didn't love it.
And I think that's fine.
I'm always not always a big fan of all the keynotes I'm listening to.
I have to be honest.
So it's good that people do share their opinion.
I read them very carefully, actually, to see if there are any points I could
address, and just listen to people. And their feedback was really, really good.
So yeah, I'm fine with that.
And let's keep talking about GenAI.
It's not dead and it's growing.
So I don't think we should just keep silent.
Yeah. Well, thank you both. I think it was worth taking that bit of time to cover that
mostly from a non-technical perspective, because it has been a really polarizing topic in the
community. And there are some people that won't go near it with a barge pole, as they say; generative AI, using AI for coding, rubs some people the wrong way. And that's
fine. Maybe the show won't be for you. But on the other hand, maybe it's worth listening and see
whether it's something that can change your mind and get you thinking along new lines. So let's
actually start with some more sort of philosophical questions on the topic, starting
with the sort of the obvious one that the people have been asking either directly or
perhaps indirectly, which is, you know, is GenAI, whether it's Claude or ChatGPT or whatever's
built into JetBrains tools, is it going to put us out of work as software developers?
Very unlikely. I mean, I think, look, this is the
largest step function that we've had in software engineering, probably since programming languages.
And I understand that the software engineering ecosystem was a little
different in the world where programming languages were invented. But every time we have seen
even just non-step function improvements, look at how IDEs are now compared to 2000.
What we've just seen is that there's more demand. There's so much more software that can be written
and it can be done.
Yes, maybe we will end up in a world where software engineers are not paid three times
any other profession.
I think that that is possible.
I think that that is not a terrible outcome.
As someone who does make those kinds of salaries, I recognize that I am not necessarily more
valuable to society than a doctor or a nurse that I make
two or three times as much as. That is potentially going to change, right? Because there is a limited
amount of money that can be made off of software, period, but there's so much more that can be built,
right? Like if you had said in the 1950s and 60s, like this improvement in computing and growth
is like there's only so much market for this, et cetera, et cetera.
And then you didn't see like this entire video games industry coming, right?
Like the entire video games industry is not something that you ever could have seen coming
when we were having improvements in computing in the 60s, 70s, and even early 80s.
That has, yeah, I mean, there's a lot of people employed by that.
There's going to be a lot of spaces that software isn't used in currently that it's going to
be used in more.
And that AI isn't used in currently that is going to be used in more.
There's a lot of need for infrastructure.
I mean, the really good news for C++ developers is that the majority of the AI
industry does not know how to read bytecode, does not know how to read the
assembly, does not know how to vectorize a loop, does not know that Python
can't vectorize a loop. These are things that the listeners of this show are very good at.
And all three of the big AI giants are hiring like crazy for that. So if you are good at making programs fast, reach out. Seriously, because everyone
is capacity bound right now. Right. Everyone is just trying to get more and more instructions
through and their whole stacks are written in Python. Right. Like they're they're like,
where could we find a few extra cycles? And you're like, well, have you looked at your entire software
stack?
So, yeah, there's a huge opportunity for C++ people who are willing to embrace AI.
AI agents are going to have to be your teammate in this work because otherwise you're going
to fall behind people who are willing to use this as a teammate. And this is just
going to be, I think, part of it. But if you're willing to do that, like you have a huge opportunity
right now.
So if you're an experienced C++ developer, you've got nothing to worry about then. But
what if you're a junior developer just starting out? Isn't the sort of thing that these AI tools can do right now going to make it much
harder for them to get their foot on the ladder?
Yeah. I think yes and no. I think on the one hand, onboarding with an AI agent alongside
you, if you know how to use it, is just way easier.
Onboarding into a large code base
and understanding where things are,
if you are a junior developer being given a task right now
and you have access to an AI tool through your work,
I would suggest starting with just copying
and pasting the task into the agent
and see what it says for recommendations.
That is something that was never available to me.
Don't just have it do it because it's not going to do as good of a job as if you were to listen to it,
reason with it and work through it as you go.
But like, yeah, I think it is a scary time to be a junior developer.
But also, I think if you're just starting out in your career right now and you can learn to prompt from the start, right, you can learn to think about
software in terms of prompting.
I think that you have a big leg up on people who are going to be slower to adopt that,
right?
I guess kind of what I'm trying to say is that for me, at
least, approaching this entire experience with some humility about the human condition
has been really productive. I kind of got forced into that by getting laid off at Google,
which was an incredibly humbling experience.
And I was in this spot in my career where I was kind of open to being humble about things
and then kind of got connected to AI in a way that I was just in the perfect mindset
for it.
And I know that if you're not willing to approach this with humility about the human condition,
me saying it's maybe not going to change your mind.
But if you're somewhere in the middle, maybe thinking about what we're good at and what
machines are good at and trying to figure out how to maximize the things you're good at.
And that's going to be large-scale design. That's going to be ideas. That's going to be
thinking about how other humans work and what they want from your software,
thinking about how your collaborators work and how they interact with the code that you write.
All of those things are things you have to add on top of AI
and those are things that junior developers
as kind of, especially people who are in undergrad right now
or in high school right now,
you have the opportunity to be part of the first kind of
AI native developer generation.
And that could give you a big leg up
on people who have been doing this for 30 years
because they're restarting from scratch too, to some extent.
I actually want to confirm that with an example.
I recently read about a hackathon at MIT,
and it was actually seen there
that the much younger people are more bold, because they
think that they can do much more with the AI and sometimes they really can.
And so they don't have this kind of artificial limitation that that's not going to work.
And yeah, it might sound like they're just young, greenish, and can't understand things,
but on the other side of the spectrum, it makes them bring a much better result sometimes,
because they're bolder and they're not afraid
of taking on all this technical debt.
It's nothing to them.
They just go to AI and they actually build cool things.
And it was a hackathon with GenAI,
and they just proved that they can build cool things
in much shorter time.
So that's already happening actually.
So the starting points are changing, but so are the approaches. And it just means that those people that don't have that baggage of what we've done before can get further ahead faster. So
actually, in a way, maybe even more opportunity, I think is what you're saying. So that's quite a positive
way to look at it. So I like that. Then let's think about how the tools that we have available
now today, because we know tomorrow it's going to be different, maybe even next week from
what we were saying before the show. But right now, how can we use tools like
Claude, ChatGPT or any of the others to help us in software development, as someone that maybe
is a bit more experienced perhaps? Yeah, I would say agents are probably very much likely to be
the future. I think this whole code-complete mode is a passing phase.
I also want to look maybe, you know, a few months into the future,
which in the AI world is like years. It looks like, potentially, I think, vibe coding, I know we're going to talk about
vibe coding more specifically later, but I think this mode of vibe coding is maybe a
somewhat passing phase also. I think that, yeah, I think this idea that you write this
really crappy code that doesn't live for very long, and you
cordon it off into some part of your code that you're not going to touch, and then you just
embrace the vibes because it works... I think that agents are going to get much, much better
at sustainable code. Agents are going to get much, much better about thinking long-term.
I think these are the big things. You know,
part of the problem is that everything has been driven so
far by... I'm realizing I'm tangenting and not even answering
the question. But everything so far has been
driven by these benchmarks, right? Like, you sell LLMs based
on how well you do on SWE-bench. And that's a huge fraction of it. And so it's really
driven by which AIs are the best at just getting something working. And I think we're going
to very quickly realize that that is not a sustainable way to do software. But I would get used to a tool.
So here's the thing that I did when I got to Anthropic that has really, really helped
me, especially if you're at a company that has one of these tools available.
Be stubborn.
Be stubborn for a week.
Go one week without writing code.
This sounds crazy, but go one week without writing code by
yourself. Just be stubborn for that week. Make yourself, make the AI generate things.
There's this whole proverb about teaching a person to fish. And there's kind of another step there. It's like teaching a person to
teach. And then, you know, all of the people that they can teach to fish will benefit from that,
right? And what you're really learning by doing that exercise, by being stubborn about saying,
I will not write a line of code for a week. Now, maybe you don't have that time. I totally
understand.
Like, software engineering is...
I'm not saying that I have that time either.
I'm saying that I worked longer than I should for that week.
But like learning how to teach something to an AI agent,
learning how to tell an agent to do something
when you're not allowed to touch the code yourself for whatever reason
will actually improve your efficiency a lot.
And so I really would suggest trying that exercise or maybe some smaller version of
that.
Maybe you say, you know, between three and five PM every day for a week, I will not write
a line of code.
I will only use code that I am able
to generate through prompting.
It's naturally like practicing your prompt engineering skills.
Exactly. Exactly. And it's very frustrating, I will tell you right away. It's very frustrating.
And then it gets to be easier, right? The first day you're just going to be like, I just want to write this code. And then by the end it does get easier; you get used to the kinds of mistakes
it makes. You get used to the things that you forget to say. So we put out a blog
post on agentic coding best practices recently, my team at Anthropic.
And I think all of those apply to any agent you wanna use,
whether it's Claude Code,
whether it's this Codex thing
that OpenAI has recently put out,
or Cursor, which is another,
primarily agentic-first thing.
There's quite a lot of other ones out there.
And like there's definitely a technique to this.
There are best practices to all of this.
Get it to ask you questions.
Tell it to say, don't write any code
before you've asked me three clarifying questions. Right?
You will be surprised at the things that it didn't understand.
If you tell it to jump back in, right?
The whole notion of extended thinking is basically just telling the AI to think, right?
And that's basically it. It's like, you know, a prompt engineering
thing: telling the AI to think. And you can extend that idea to, like, tell
the AI to ask you questions, tell the AI to ask another AI for permission to make changes,
right? Like play around with these things. There's a lot of ways you can put fail-safes
into the system. I will say if you're coding by yourself, this gets very expensive very
quickly. And I wish I had better answers for that, right? It bugs me constantly.
But this is way cheaper at a company, or way easier, more accessible, if you already
have a job than if you don't.
And we can talk about the ethics of that and my thoughts on that later. But yeah, using
an agent is not going to be efficient if you're trying to minimize the number of tokens you
use. You're going to use a lot more time. So, what you were saying sounds like the idea of coding katas, where you do some sort of
exercise in a self-contained way just to learn how to do it, or learn some skill along
the way.
And that skill could be how you talk to an LLM to get the work done.
Because in my experience, one of the obstacles is that it does feel
like you're talking to a human, but the way that they respond is not like a human in that
you can give them a one sentence summary of the problem. And they seem to have it all
sussed out right at the start. But as you say, there's these big holes in their assumptions
that you need to probe at to get to. And it's almost like the reverse of talking to a human, where you'd hit those
assumptions, those holes, straight away, and then get to an
understanding a bit later. Here you sort of have to work backwards to it. At least that's been my experience. So yes,
spending a bit of time feeling out how that works is what will get you the way you want to go faster.
I think.
Yeah.
And that also leads people to the major concern regarding the vibe coding.
And maybe you can address that because it's like they see the vibe coding
as a disruption to their passion, but also as a disruption to the quality.
Because who knows what is generated there when you're vibe coding, right?
Because you don't
look into the code. What do you think about that?
Well, I mean, I'll first start by saying I don't work in security critical areas, right?
So I get that someone would be very worried if their bank's security code is being written in a Vibe Coding approach.
The other thing about Vibe Coding that I will say is that we've been doing it for a while,
right? If you're a senior engineer, you absolutely know how to minimize the damage that a junior engineer can do.
I don't wanna say it,
that sounds like a really stuck up way to say it,
but if you're a senior engineer listening to me here,
you're probably nodding, right?
And I'm not saying that all junior engineers,
if you're a junior engineer listening to this podcast,
you're probably not one of those people, right?
Like if you're listening to a podcast in your free time as a junior engineer, you're probably not
the person that they're trying to minimize the damage of. But software engineering as a practice
has been structured over many years to create self-contained units, self-contained modules
that can minimize the damage they can do to other modules. That is not changing.
People are so surprised. They're like, oh, this is going to mess up our entire code
base.
No, you are still in charge of creating the structures that minimize the damage of something
going wrong in one place.
And that's software engineering, that's not programming, right?
Software engineers are going to keep doing software engineering.
They may not do much programming.
And that's a really important distinction to make here, right?
You are still going to be in charge of structuring your program, of designing abstractions that can separate concerns in ways that the
damage from something going wrong is minimized.
And that has been the past 50 years of software engineering.
We have had people who can write bad code for as long as this profession has been around.
And that's going to continue.
It's just that it's happening faster.
And so that your ability to isolate those things
is even more important than it's ever been.
And because of that, people who can leverage that ability
are really going to thrive.
And people who cannot are not really going to have
vibe coding accessible to them.
And so they're gonna to slow down a lot.
Right. That's really what's happening is that vibe coding is going to create a lot of bad code
and people who can't create structures that contain that are going to fall behind people who can.
So your skills are still there. Your skills are still extremely valuable. And the secret is that this has always been the case, right? It's just happening
a lot faster. And so it's amplified, right? The inability to contain this bad code
into units that are very well testable, that are unlikely to affect the rest of your code base
is amplified because it's happening so quickly.
And there's so much of it happening.
So you have 10 years of, you know,
kind of bad or poorly written code
getting added to your code base in a month.
And people are like, oh my gosh, the world is falling apart.
And yeah, if you didn't have structures that were going to work over that 10 year
period, then they also won't work over that month period.
It's just that in that 10 year period, you could have a lot more time for people
to find those bugs, find those, those shortcomings.
And I think people with really well-structured code are seeing kind of exponential benefits
already.
And people with poorly structured code are seeing spaghetti already.
Yes.
So, um...
We want more lasagna, less spaghetti.
Yes, exactly. How can we get better at that? I mean, people were coming
to me and asking at ACCU a lot, and I realized that maybe I don't have a good answer here. So quite
experienced people, those who are training people to write better C++ code, they had a very reasonable
question: how can we make LLMs write better
C++ code? How can we contribute? You've been working at Anthropic. You're probably the proper
person to ask. So how can people who can write good code teach those agents and
LLMs to write better C++ code, at least to some extent?
Yeah. One of my friends at Anthropic is in charge of the reinforcement learning team, the
code reinforcement learning team, the reinforcement learning specifically
focused on coding abilities.
And she and I have had a lot of talks about this.
And yeah, one thing we have to do in the industry is get rid of
living benchmark to benchmark.
This has got to stop.
If you're selling your product based on benchmarks, then you're going to get benchmark-like code.
You can only make software projects so big in RL before it becomes impractical to tune on them. You can't have a project that takes
the AI a week to figure out and then iterate on that. You can't run for one week and then be
like, okay, it did slightly better than the previous one,
so we're going to take a step
in the direction of that weight change.
Like that's not how these things work, right?
You do have to be able to do
some of these things very quickly.
But I do think that we have to start
doing reinforcement learning specifically for code quality.
I would love to see a DRY benchmark,
which measures how DRY (don't repeat yourself) the code generated by AI agents is.
I'd love to see those kinds of things actually happening in the industry
where we actually measure code quality by various metrics and put a benchmark out for
that. I think that is challenging with the way the economics of the industry is going
right now. But I think it's something that we need to look very seriously at as
an industry, especially in systems languages. If you're making a website and you're making
the front end and it's going to be relatively well contained to a single webpage or a series
of web pages, whatever. But if you're editing the Linux kernel, like this is a different beast altogether,
right? And so I think code quality... you know, one of the things
that I was using an agent for the other day, I was doing a code review of its output. And rather
than trying to figure out how to put the two pieces of code in the same
place, it just duplicated the entire main function, which was, you know, 200 lines or something
like that.
And I asked it why and it was like, Oh, yeah, that was a really bad idea.
Let me go fix it.
And then it went and fixed it.
Right?
Like it wasn't a big deal.
But it got me thinking to like,
where was the AI ever incentivized
in the reinforcement learning process
to make those pieces of code match?
And I don't know that it really was.
I think I have to be careful what I say
about reinforcement learning because a lot of that
is very trade secrety and stuff like that.
And, you know, I don't
actually even know enough to know what I'm not allowed to say and what I'm allowed to
say. So I'm going to be very careful about that. But I will say, I can say this, like,
if you've been frustrated with the quality of C++ it generates, none of these models
have been fine-tuned on C++ code very heavily.
The entire industry is focused on Python.
The one benchmark that matters by far the most, SWE-bench, is 12 Python projects.
So everything is fine-tuned around Python.
In fact, a lot of models are kind of fine-tuned to do badly on C++ currently.
Let me explain. One of the ways that you interact with an LLM's output,
and get it to make sections of its response that are
parsable in a reliable way, is that you teach it how to use
XML very well, right? You teach it that if I start
you completing after an XML tag, you will
put everything that goes into that XML tag
within that XML tag, and then you will close it.
Now, and we do a lot of reinforcement learning
on this, a lot of other companies do,
this is not a secret, right?
It's really a very easy kind of customizable
open-close structure,
and it's very easy to parse out by the rest of the software
that interacts with the LLM.
And what does an open XML tag look like?
It looks like a template.
Like, LLMs are being conditioned to write very bad C++,
and yet you're getting the results you're seeing.
Now imagine a world in which we've done better than that.
When I learned that, I just laughed out loud.
I was like, wow, how much better could these things be
at C++ if we start actually training them on it
and not counter-training them?
Every time it sees std::vector<int>,
it has to go against its training to figure out that it is not writing a summary
of something related to int that it will then close off with a </int> XML tag.
It goes through all of those gymnastics and then starts trying to continue writing the
code.
And it's doing as well as it's doing now.
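To spell out the collision being described, the same characters belong to two different grammars in the training data:

    #include <vector>

    // In XML-formatted training data, "<int>" opens a tagged section that
    // is expected to end with a matching "</int>", for example:
    //
    //     <int>some summary about int</int>
    //
    // In C++, the very same characters are a template argument list, and
    // no closing tag is ever coming:
    std::vector<int> counts;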
So I mean, I think I don't really buy that AI has plateaued.
That's one of those anecdotes that just, like...
I mean, I am a bit of a technophile.
I definitely am a bandwagon jumper. I am definitely drinking
the Kool-Aid at Anthropic. I fully admit all of those things. But when I see things
like that, I'm like, there is no way that we are at the point where we've plateaued.
These things are so easy to fix. It's just that no one asked a C++ developer whether angle brackets were
a problematic syntax to use for something else.
Yeah, I always knew that XML had done a lot of damage, but I had no idea it was that bad.
It's very interesting.
You touched on a topic that I've seen a lot of, and I know Anastasia has commented on this
before as well, but there seem to be like these two modes that it will operate in.
One, when you're actually getting it to write some code, it can knock out some code,
but it's not following any of these best practices or design principles. When you point
that out, it knows exactly what the problem is and it can go and fix it, as you say. It's
like there's two different personalities and you have to pit them against each other. Is
there sort of any, maybe this is something you can't talk about, but is there any work
being done to try to merge those so you don't have to teach it about itself? So there's, I mean, there's a few things.
One is that we haven't done a lot of constitutional... constitutional AI is something that Anthropic
more or less invented, but it's being used everywhere now, which is basically where,
in your reinforcement learning, you give it a document of best practices in a particular
area, and then you have another AI judge the training AI's completion for
adherence to that constitution, right?
And then you let the one that's being trained evolve based on whether or not it adhered to that constitution.
So this is really important in things like ethics, where say you want to defend against
someone trying to build a chemical weapon based on knowledge they get out of the AI.
If you give it a constitution that says you are not allowed to say this,
this is much more effective than a system prompt. System prompts are very hackable.
We know this at this point. You can tell it to ignore everything you've seen already.
And we're a long ways away from that being the way around it, but there's still lots
of other ways around it. And so it's really effective in that area, but
I don't think we've really explored it in the area of coding best practices where you
have kind of a constitutional AI set up with a coding best practices document, and you
reward or punish the AI based on how well it adheres to that constitution.
So I think we're going to see a lot more of that over the next six months.
And I think that... I mean, look, I still think the incentives have to be in the right
place.
Right.
This industry has to change how they're thinking about incentives. And it can't just be benchmarks.
And I say this as the company that is clearly leading in benchmarks; like, we are profiting off
of these benchmarks, and I do not think that it is a good thing. Right. Yeah. I think that we need to
be... and I'm not the only one to say that at Anthropic, by the way; there are plenty of people who will very openly say that, very important people. And so I do think
we need to think about where our incentives are. But then I think if the incentives are
there, we have plenty of ways to do this. I think another thing is that we need to get
better at agentic setups that involve multiple agents. And I've played around with this a little bit in my work.
As we build out agents, you can imagine that you would have,
instead of one agent interacting with a human,
you have one agent interacting with a human
and another agent.
And the other agent has been instructed to go ask about best practices
every time before coming back to the human. So the loop goes something like the worker agent
will actually start writing all of the code and everything and then maybe anytime that it wants to
make an edit to the code,
it has to ask the other agent first, and then it can ask the human, you know, if the other agent approves of its edit.
Or if you're doing a kind of more autonomous loop, you say,
every time the edit has to be approved by the other agent.
And then before even coming back to the user,
the other agent asks it these questions, right?
Very specific questions about: hey, lines 293 through 475,
is that really the DRYest code you could write?
Or could it be the same as
what's done in lines 23 through whatever?
Those kinds of things are... these are not giant leaps.
This is low-hanging fruit in the industry, right?
Hooking two AIs up to each other is not some revolutionary idea.
It's like one of the first things you would think to try.
And we just haven't done that in a systematic way yet
because there's too many other things advancing
and everything is changing too quickly.
Now imagine, you know, this mixture of experts
is a big thing that has come up in the past.
I almost said year, three months, four months.
When was January?
I don't know, four months ago?
Years ago. Years ago.
Years ago.
A mixture of experts has been a big deal in the industry lately. And if you really were
to have a subset of that where you have an expert on software design, an expert on coding
practices, observing an expert on generating code.
Right?
That's kind of a much more specialized, much more granular version of
mixture of experts than most people are talking about these days.
But it's not entirely unreasonable in the next six months.
Yeah.
So it sounds like pair programming is valuable even for AI agents.
Absolutely. Maybe even mob programming. And we see this even with group programming.
I often tell it to ask three sub-agents.
This is something you can do with
Claude Code; I don't know about any of the others, but you can specifically ask it:
use three sub-agents to check your work. Make sure all three of them agree before you move on.
And if they disagree, then ask a fourth one to resolve the disagreement. I do that quite
frequently. It sounds like a cheat code. You're going to use a lot of tokens doing that. And if you're paying by the token personally, then I'm sorry.
That sucks.
And this industry needs to really think about how it treats the open source world.
So we can talk about that at some point.
But if you are working with this in your day-to-day job, and your company is paying for
your tokens, that is actually a really good way to get better code out of your time.
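As a sketch of how that three-checkers-plus-tie-breaker recipe could be orchestrated by hand, again assuming a user-supplied complete(system, prompt) callable and a hypothetical PASS/FAIL reply convention (in Claude Code itself you would simply phrase all of this as a prompt):

```python
from typing import Callable

CHECKER_SYSTEM = "You verify a piece of work. Reply 'PASS' or 'FAIL: <reason>'."

def consensus_check(work: str, complete: Callable[[str, str], str]) -> bool:
    """Three sub-agents check the work; if they disagree, a fourth agent
    sees all three verdicts and makes the final call."""
    verdicts = [complete(CHECKER_SYSTEM, work) for _ in range(3)]
    passes = [v.startswith("PASS") for v in verdicts]
    if all(passes) or not any(passes):
        return all(passes)            # unanimous verdict, no tie-breaker needed
    summary = "\n".join(verdicts)     # disagreement: escalate to a fourth agent
    final = complete(CHECKER_SYSTEM,
                     f"{work}\n\nThree prior verdicts disagreed:\n{summary}")
    return final.startswith("PASS")
```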
Well, we're just getting going here, scratching the surface, but we're already running over time.
We should probably experiment with the serialization approach, but I know you both have
other things to do as well, so we'll leave it there this time. So we're going to start wrapping up, but I do want to get back to one maybe more philosophical, or maybe more ethical, question, which is that all this use of LLMs obviously comes at an energy cost.
And that's been talked about a lot. So is that really something we should be worried about?
Is it going to stay that way long term?
So I will start with this.
A lot of the evolution of AI efficiency is coming from people writing better code and faster code for actually doing inference, for actually running the AI.
There's this term in the industry called a compute multiplier,
which is the most trade secret
of trade secrets, right alongside our weights, right?
It's how we are able to speed up
from one model to another. You know, the amount of compute used is not growing with the size of the models.
It is actually sub-linear, because, you know, there is a lot of money put into making these things more efficient.
I definitely struggle with the carbon costs
of these models in the short term. I think one of the things that we can really do is
to make sure that these things are really good at innovating in clean energy. I think
that being deliberate about the way we train them, so they have the right skill sets
to accelerate those areas of research, is a very responsible thing to be doing.
I think there is the discussion of, oh, all of this carbon's being used and it's getting
out of control, and I completely agree with those things.
But I also think that there's not enough discussion
being had about how much this could accelerate
our search for green energy,
how much this could accelerate our search
for sustainable power.
I think that we need to be reasonable as an AI company about this. Maybe
this is the time when I should say my opinions are entirely my own and do not represent my employer.
Maybe cut that out and put it at the beginning, I don't know. But I think that we should probably be granting discounts to green energy research labs.
I think we should be thinking about ways in which our model could accelerate research
and putting money behind that, right?
We have said a lot about this. Dario,
our CEO at Anthropic, has an essay on his view
of the future around AI.
And it's a really kind of positive take, an optimistic view.
It's a really good essay, Machines of Loving Grace.
I recommend anyone read it. It's one of the things that convinced me to come to Anthropic.
But he talks about this, right? He talks about how, yes, there's a lot of carbon being used in
this pursuit, but we have to also account for the potential gains in clean energy.
And that sounds like an excuse.
I absolutely understand it sounds like an excuse.
If I were on the other side of this, I'd be like, that's an excuse.
I don't think we're going to stop it just by saying that it's generating a lot of carbon.
There are enough people who don't care about the amount of carbon it's generating who will get ahead of the rest of us.
Then we just won't have a seat at the table anymore. But I think as a company that controls
one of the best models for reasoning and science, I think that we could be better about singling out wind and solar energy research
companies.
I have my own personal opinions on nuclear, and I think that we should be spinning up
more nuclear plants right now to bridge the gap to when we actually have sustainable wind
and solar.
But I understand that's a controversial opinion among leftists
and I am counting myself there. But I think that very careful nuclear, I think that fusion
research really, really needs to be kicked forward by all of this. We've never had more of a need for energy. And, you know, it bugs me. It does
bug me, for sure, the amount of power being used. I think there are some simple things we
can do, like maybe reminding people of their carbon footprint when they finish
using Claude Code. That is a feature request we've had. It's something that I've reached out to our environmental impact team on.
And hopefully we'll add it at some point.
There's not a real easy way to do that right now.
So we're trying to figure out a way to do it.
But I think that Anthropic is one of the companies that's willing to do this,
even at the expense of profit.
If reminding people of their carbon footprint means they use Claude Code less, I think we're okay with doing that. That's why I'm there. I wouldn't be there otherwise.
Anyone who has sat down over drinks with me and talked politics knows that that is absolutely the case.
But I think having people who are really bothered by this working on this is very important.
I think you brought that back around to a positive note to finish on. So thank you for that.
I was wondering if we were going to end on a down note.
But we got there. That's great. And as I say, there's so much more we could talk about,
but we are at risk of burning too much carbon ourselves right now. So we will stop here.
Maybe we'll set up a follow-up at some point. But thank you, thank you so much for giving your time and coming on the show.
Thank you so much for having me.
It was amazing.
You're very welcome.
I think we'll have to skip our usual closing question about what else in the world of C++
excites me.
We'll save that for next time.
I'm sorry.
I talked too much.
Not at all.
It's a good topic.
I'm not very good at generating stop tokens.
But is there anything else that you want to tell people or
somewhere that people can find you if they want to reach out to find out more?
I mean, if I am allowed a shameless plug, we are hiring like crazy. Anyone who is good at instruction counting
and understanding performance, anyone who is good with accelerators, especially. We
had a direct call from our CEO last week. Was that only last week? Wow. Yeah, last week.
Feels like months ago. A call to reach out to anyone we know who's good at performance
tuning. And I was like, oh, that's my whole community. How do I narrow that
down? But if anything that I said appeals to you, reach out. For those of you
that have my contact information, reach out to me directly, or reach out
to me on, I guess I'm still on Twitter, or X I should say. I'm not there very often. Bluesky too. I think
my email is on my website or something like that, which I haven't updated in two years.
But yeah, I think I'm probably talking to 15 people right now from our community about
trying to get just better expertise in fast computing into the AI world.
And I think there's a lot of benefits to be had there.
And I really do believe that we're the company
that's doing it the right way.
I know that sounds cheesy and it sounds very bandwagony,
but yeah, reach out. And I just really appreciate
the opportunity to talk again.
This is really cool and thank you.
Oh, thank you for coming on. Thank you for all of that. Thank you, Anastasia,
for being an excellent co-host as well. It was a pleasure. Thank you. Thank you, Phil.
We'll see you all next time. See you.
Thanks so much for listening in as we chat about C++. We'd love to hear what you think of the
podcast.
Please let us know if we're discussing the stuff that you're interested in.
Or if you have a suggestion for a topic we'd love to hear about that too.
You can email all your thoughts to feedback at cppcast.com.
We'd also appreciate it if you can follow CppCast at @cppcast on X, or at @mastodon@cppcast.com on
Mastodon, and give us a review on iTunes.
You can find all of that info and the show notes on the podcast website at cppcast.com.
The theme music for this episode was provided by podcastthemes.com.