Algorithms + Data Structures = Programs - Episode 246: Not High on AI?
Episode Date: August 8, 2025
In this episode, Conor gets Ben's thoughts on AI!
Link to Episode 246 on Website
Discuss this episode, leave a comment, or ask a question (on GitHub)
Socials
ADSP: The Podcast: Twitter
Conor Hoekstra: Twitter | BlueSky | Mastodon
Ben Deane: Twitter | BlueSky
Show Notes
Date Recorded: 2025-08-05
Date Released: 2025-08-08
2025 Stack Overflow Developer Survey
ADSP Episode 244: High on AI (Part 1) Discussion
Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity
Software Unscripted Episode 109: GPU Programming and Language Design with Chris Lattner
Declarative Style Evolved - Declarative Structure - Ben Deane - C++Now 2025
Intro Song Info
Miss You by Sarah Jansen https://soundcloud.com/sarahjansenmusic
Creative Commons — Attribution 3.0 Unported — CC BY 3.0
Free Download / Stream: http://bit.ly/l-miss-you
Music promoted by Audio Library https://youtu.be/iYYxnasvfx8
Transcript
I think why not both?
You know, people should be experts in as many things as they can.
You know, your brain is not a bucket to be filled where if you put too much things in,
it pushes other things out.
No, your brain is a tree where if you grow a new branch, you have more places to hang things.
Welcome to ADSP, the podcast, episode 246, recorded on August 5th, 2025.
My name is Conor, and today I get my co-host, Ben's, thoughts on AI.
I'm curious to get your thoughts on AI.
coding tools. I'm not sure if you listened or skipped to the Bryce episodes. I did. I did listen
to those last couple of episodes. I'm not an adherent of AI assisted coding tools. I want to work
on things that AI can't do, frankly. And I don't think, I'm wondering whether AI is good for the
world. You know, it doesn't seem to be doing much good for the world. This is important, right?
It is important.
So either, you know, either I'm going to come across as an old fuddy-duddy and you can reject
what I say, or I'm going to come across as someone wise and you can listen to what I say.
Either way, I have opinions.
Well, I would like to hear your opinions, and I'm sure the listener, I mean, there was,
I think it is the Stack Overflow survey, which admittedly now that I think about it, I forgot
had come out, but I heard about it on a Python podcast.
The reason I'm bringing it up now is they asked a bunch of AI tooling and AI questions, and one of them was like, what's your general experience? And I think it was 67 or 70% of folks said they were frustrated by them, where they felt like they weren't meeting their needs for what they were doing. So I think that potentially Bryce and I are in the minority, and actually most folks, at least according to the Stack Overflow survey, are not on the hype train. So anyway, over to you. I'm happy to hear your opinions, and I'm sure everyone else is as well.
Yeah, I think, well, I don't know about that.
Well, I think one thing we can say is that from that survey, the average working developer
has less trust in AI this year than they did last year.
That was one of the data points I think came out of that survey.
Am I remembering that right?
Well, I haven't read the survey, so we'll take your word for it.
I think one of the points was, you know, after having tried it, now we trust it less as a community
of developers.
Not, not, we don't, that number didn't go down a whole lot.
It went down by like, I'm going to say five to ten percent.
I can't remember the exact figures.
But yeah, like many people who work in any kind of corporate enterprise, there's been a lot
of fuss over AI, right?
And every time I have, I'm going to say, tried using AI, been subjected to using AI is maybe another term I could use,
it hasn't really gone well. You know, I've had AI do code reviews on my code. It turned out
to be very good at making work for people and saving no time at all. In fact, the opposite.
I've seen AI leave so many comments on pull requests that I basically couldn't load the pull
request in the browser anymore. That pull request had to die because it could not be reviewed
because the AI had left so many comments, it had basically killed the experience. It just wouldn't
load. Is this on GitHub? GitLab? Yeah, on GitHub. Wow. I've seen AI leave a dozen comments on one of
my pull requests. And here's the thing. The suggestions always sound good on the surface. But when you
dig into them and it takes you a half an hour to go through each one and think, okay, you know,
and it suggests writing this test or writing that test or doing this or doing that. And in one
particular case, in this case, I'm thinking about all of them turned out to be wrong. It was
zero for 12 on its suggestions. But it took, it took 45 minutes to figure that out, right?
As a human, right? It took, it was, it's like dealing with, I've heard it described as being,
it's like dealing with a brand new developer who you have to train, except they never learn.
Right. Right. You always have to tell them what to do. They never learn. And, you know,
I don't really have time for that. I have a lot of time for teaching people who will learn,
but not for teaching AIs who won't. Are you, I'm not sure if you're allowed to say what,
because I know it sounds like you're using, like, GitHub's agent or CodeRabbit or one of those tools?
I don't actually know.
Okay.
There are internal tools that we use.
We do use GitHub.
I mean, we use GitHub for our regular development.
But as far as the AI specifics go, I can't tell you, because I actually don't know.
Okay.
But anyways, it was some kind of code review tool that was leaving automated comments.
This particular one, yeah.
Yeah.
And have, is that the main experience?
And maybe, and, well, you know, I have colleagues who use AI and who find it useful.
You know, it's horses for courses.
The kind of work I do.
Horses for courses.
What does that mean?
It means different things are suited to different jobs.
Okay, okay.
I guess.
Sorry, it's a Britishism.
You know, for some kinds of work, like, like, if you do work that's like summarizing documents or, you know,
I don't know because I don't do that kind of work,
but I'm sure there are many areas
where AI can be useful to pull things out.
But if you're doing work programming,
in particular, if you're programming
and your aim is to provide lasting value
in the code base,
then I don't think AI currently helps you.
And bonus, it's environmentally extremely unfriendly as well.
So not only does it not help,
it's very bad for the environment.
I mean, we can definitely agree that it's bad for the environment.
I mean, most of the AI people, that are pro-AI, I should say, they say that, you know, oh, we get to a certain point and then we can fix all our problems with AI. That is predicated on getting to that point.
Well, yeah, it's predicated on getting to that point.
In the meantime, you know, data centers are projected to use as much energy as the fifth largest country in the world. And, you know, according to many reports, the cost of a simple web search has gone up by 5x in terms of the environmental cost.
Oh, right, because, yeah, Google is prefixing all your questions with a little AI summary
answer. So, you know, maybe I'm old and curmudgeonly. Maybe I'm old and wise. You can decide.
But, you know, I don't think, I don't want to, I don't want to do that.
Also, to my previous point, if our job as developers is to add lasting value, right?
I have not yet seen a case where AI really helps with that.
I'm sure AI is great at producing code, right?
It's great at solving problems if you want to throw away the solution.
But I don't think it's very great at solving problems over time, solving software engineering problems.
So let me ask you your experience there.
I mean, I think you might be right in the moment, but I'm also not sure.
I definitely can say anecdotally for myself, I find these tools like crazy force multipliers,
even for like non-coding tasks now.
Like there's so many times now I reach for these tools when I just need to solve some problem
that is sometimes just in the real world.
Like, for instance, over the weekend, Shima and I were at a different place, not our own, and we were trying to watch Andor season two, fantastic show if you haven't seen it, and season one. And I had it on my laptop, and we were trying to, with an HDMI cable, display it on the TV that was in this place we were staying at. And it was 4K, so, like, the rates were super choppy, and I don't know how to fix that. So I went and asked it, and actually I just opened Cursor, because now, like, I'm on my laptop, and I just say, hey, I don't actually need you to write any code for me. I just need help. This 4K TV is choppy. Can you fix it? And a minute later, it basically has run a couple commands using this tool called xrandr, which I've never heard of before. And it basically did some xrandr list where it found the sources, it found the 4K TV, and it said, okay, let's just switch this to, I think it was 60 frames per second instead of 120 or something, and it completely solved the issue.
I didn't even know this tool existed.
I'm sure I could have using Google searches, like gotten to the solution eventually,
but it definitely would have taken way longer than 60 seconds.
And that is like a concrete thing where, you know, it has actually solved a real-life problem
for me, like over the weekend.
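For the curious, a reconstruction of the kind of xrandr session being described. The exact commands from the episode weren't shown; the output name `HDMI-1` and the mode are assumptions that vary per machine.

```shell
# List connected outputs and the modes/refresh rates each one supports.
xrandr --query

# Drop the (assumed) HDMI output to 60 Hz at 4K; the output name "HDMI-1"
# is machine-specific -- take it from the --query listing above.
xrandr --output HDMI-1 --mode 3840x2160 --rate 60
```

This only applies to X11 sessions; Wayland desktops expose the same setting through their display-settings UI instead.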
Right.
So I definitely think that these tools can be useful.
Are they creating lasting value?
I guess that's TBD.
I mean, I know a lot of these companies that are writing these tools are kind of like dog fooding,
and there are, I don't know what the numbers are of the number of developers that are using these tools,
but to a certain extent a lot of folks seem to be getting value out of this stuff.
I will say that I have a developing view that there are certain types of work, potentially all the code that you're writing, you know, template metaprogramming, variadic C++, you know, on embedded software, that it's terrible for. And you can throw, you know, writing CUDA kernels in that category of stuff. It's just, it's not going to help you when, for a certain set, like, define some parameters on, you know, very complicated code. It's not good at doing that.
But if you need help spinning up some kind of, like, single page application that, you know, is just statically serving you some information. Like, Shima now is using these tools to, like, build, like, basically personal websites that are helping her with her job. In terms of, I mean, Bryce and I talked about the OHIP codes,
but she started building another one for, basically, medical prescription translations. If you want to get, you know, someone with, you know, a certain medical thing, on one drug, but you want to taper them off onto some other drug, there's, like, some complicated, like, math that goes into doing that. And right now, when you're an addiction doctor, you basically just do that personally, like, by hand. But it's very, like, you're doing the same thing every single time, but there's, like, a huge matrix of, like, are they taking pills, is it IV, I don't know all the details, right? But it's, like, a very complex matrix of stuff. And she said it would be amazing. And there are actually websites, she mentioned MDCalc, which is this kind of website that doctors do utilize, but it's only for a certain set of these calculations. Um, and she said, like, if I could have one specifically for my thing, this would save me, like, over my career, potentially, like, weeks and weeks of work. Because it's just, it's stuff that you could do with some kind of medical calculator. It just doesn't exist, right? Someone's
not going to go and build a one-off thing for that bespoke thing. I would love to, I would love
to hear Shima's opinions. And I think you and she should, if she's willing, you should record
with her. Because, you know, either way, this podcast is in the genre of two guys talking. And sometimes it's you and Bryce, and sometimes it's you and me.
But either way, the genre of two guys talking is not what the world needs, I think, sometimes.
So I'd love to hear Shima's opinion, you know, as a medical professional, as a woman, as someone who's not in the same demographic.
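The taper tool discussed above could be sketched in a few lines. Nothing here comes from MDCalc or from Shima's actual app: the linear schedule, doses, and step counts are invented purely to illustrate the "same calculation, every patient, by hand" point.

```python
# Hypothetical sketch of a dose-taper calculator; the linear schedule is an
# assumption -- real tapers depend on the drug, route (pills vs. IV), and
# cross-drug conversion factors, none of which are modeled here.
def taper_schedule(start_dose, end_dose, steps):
    """Evenly spaced dose reductions from start_dose down to end_dose (mg)."""
    if steps < 2:
        raise ValueError("need at least a start step and an end step")
    decrement = (start_dose - end_dose) / (steps - 1)
    return [round(start_dose - i * decrement, 2) for i in range(steps)]


# e.g. taper 60 mg down to 0 mg across 5 visits
print(taper_schedule(60, 0, 5))  # [60.0, 45.0, 30.0, 15.0, 0.0]
```

The "huge matrix" Conor mentions would layer lookup tables on top of a core like this, which is exactly the kind of scaffolding the episode suggests these tools are good at generating.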
Yeah, I mean, we, well, if you want to be on, she'd be happy to come on.
In fact, we were talking about it back in June, and we were going to record.
But then I said, oh, really, we should, we should try and get Ben, because,
I live a little bit more in her world,
so probably I'm not the best person to ask, like, questions.
Better to be, have some third party, aka you.
Sure.
I'd be happy to do that.
Because the interesting overlap here is that, you know,
the other big problem with AI is, for me,
is AI cannot take accountability for code, right?
There's a legal problem, right?
If I'm an engineer writing some code,
I need to be able to take accountability for that.
If I'm using AI, LLMs to write it, where's the accountability?
Like, this is why I say, it might be okay for like throwaway things, right?
But I am not going to have an LLM write my code, and then I have to stand behind that code.
Right.
And so if an LLM writes it, I still have to write it, right?
I still have to read every part of it.
I still have to understand every part of it.
Like the typing of the code is not the slow part there.
In fact, the more code produced,
the less I'm likely to understand it.
Right.
There is definitely a dichotomy
between the vibe coding stuff
that's like a personal thing
versus stuff that you're shipping to production.
And I will say, like,
I have used the code-assisted tools
to help with writing libraries,
but that is much different
because I'm actually reviewing all the code
that's being written there.
It's still, of course, a force multiplier,
but it's not the same as,
oh, I have this dashboard
that I want to build.
and I can just spin it up.
And I really don't care about the JavaScript that it generates.
I just know what I want it to look like,
and I can verify what it looks like at the end of the day
to know that it's correct, you know.
And in the case where you have to verify everything,
you know, there is one data point,
one study that's been done that I'm aware of,
which I think someone mentioned on the GitHub
for one of the last episodes.
There is one study so far,
and it showed that developers think they are 20 to
25% faster. The data shows they are 20 to 25% slower. But even when confronted with the data,
of course, the classic thing is they still believe they're 20 to 25% faster. I have to say,
Connor, your response to that was perfect. You just said, well, I flat out don't believe the study.
I'm going to call you out on that. You know, so I think there is this, maybe it does reflect
this dichotomy of like vibe coding. You can produce a lot of code. But coding when it matters,
which frankly is the code that I want to do professionally, right? I don't.
want to be vibe coding on stuff that doesn't matter. When it matters, it's not speeding
us up. I mean, that's where I, I mean, like I said, I do think that my developing view is that
there are types of coding that it helps you for and other types that it is much less helpful for.
That, I mean, that being said, I'll read it, because Ben is not mistaken. He did, he did read my reply, and the first sentence is "anything." And so, this is, well, let's see if we can get their name. Their handle is M. Heyman, which is for Michael Heyman. So this was on episode 244. They posted,
so there's this measuring the impact of early 2025 AI on experienced open source developer
productivity that just came out saying what Ben just mentioned. And we'll skip over the rest of
their post. I replied saying anything that says Gen AI tools are slowing people down,
I just don't believe. And then I'll skip the next thing I said.
But I pulled out two things.
I didn't read the study in full, to be fair, but they were using Claude 3.5 and 3.7.
So I understand that when you do a study, you're doing the latest and greatest, and probably
that was the latest and greatest at the time.
But my first remark is, do this study again with Claude 4 and see what they say.
And the other thing is, I wanted to see, like, who are these folks?
Like, what is experienced?
Because it's kind of like, the analogy I had in my head is if, like, if you're going to go
to some race car track or some F1 circuit, and then you get some, like, moderately experienced,
which is what they say in the paper.
They say it's 16 developers with moderate AI experience
and you put them in some car
that doesn't really show like what you can do with the car.
But if you get, you know, the best drivers in the world
or whatever, the people that are the power users of these tools
and see what they can do,
that's actually measuring like what the car can do.
So it might not be a perfect analogy,
but I kind of thought, you know,
the lack of detail on how much experience do these folks
like have with the AI tools,
it doesn't, it's not super fair to say that, like, oh, these people. And if your point is, like, well, what if we just unleash these tools on the world, like, what impact is it going to have? Okay, maybe that's fair.
Yeah, I mean, I don't think you can dismiss that point, because they're not power users. If they're representative of your regular working programmer, then it's not just, you know, what's the cost of unleashing these tools on the world. The question is, like, are the economics even there, right, regardless of the other externalities?
Economics meaning...
Like, is it worth it for companies to adopt these things, which they are trying to do in droves?
Right, I see. You're saying that companies don't employ just the power users of whatever tools they're using.
There's a distribution of folks.
Right, and everything's on the power curve.
So your power users are like 2, 3% of your users, right?
Maybe. I'm ballparking.
Right, right.
But like, yeah, I guess that's fair.
I mean, I'm just saying, like, it's fair to say, like, this is not my personal experience.
But I don't think we get to reject the finding.
Yeah, maybe that's true.
I mean, I still, in my heart of hearts, find it hard to believe.
I mean, Richard Feldman, who has his own podcast, Software Unscripted, he just had Chris Lattner on for the second time, talking about Mojo primarily.
But obviously, they're talking about AI as well.
And at one point, Richard said that he just didn't believe, like, when people are throwing
out, like, 10x and 100x productivity gains, he had some quote where it was like, you know,
you can see 100x from space.
Like, there's no way anyone's having that kind of, like, productivity gain.
Well, I think that my biggest pet peeve with the hype is when people start throwing around
numbers like 10x, 100x.
Like, you know, I appreciate that at the beginning of the conversation,
you were very quick to be like, hey, we're not saying we're 10x faster than Rust, right?
But like with AI hype, it's like the opposite.
Someone's like, we're 100x as productive.
And I'm just like, wait a minute, wait a minute.
Like, you can see the outputs of 100x productivity from space.
Like, if you're actually 100x productive, you don't need to tell anyone about that.
You just have people walking by with their jaws open at like what you're producing.
And they're like, how are you possibly outputting this much, which of course is not happening.
Correct.
100%.
Yeah.
Yeah.
And in my head, in my head I was screaming, like, well, it's, I think it's like for certain people like Shima, who admittedly, at an internship or at some point in her life, actually did do a little bit of MATLAB in some research position back when she was, I think, in her undergrad, so many years ago.
So it's not like she has zero programming experience, but she definitely would not call herself a programmer.
But the kind of apps that...
But she's, you know, she's a doctor.
She is well qualified in a technical field, right? I'm sure she's done plenty of math in her...
Yes, yes. Yeah, at one point, yeah, she considered pursuing a career in mathematics. Um, but the point being is, the apps that she is able to build with these tools is something that, like, without going and, you know, developing a whole different vertical skill set, she is incapable of doing. So that's, like, it's zero to, like, impossible to possible. So, like, if you're measuring, like, productivity improvements, it's like, well, that's something that she
would have to go spend days, weeks, like, however long getting up to speed where she could build
that.
So, like, in that sense, like, isn't it greater than 100x?
Like, it's, and I feel like people, like, in this study, too, the, the thing that they're
primarily measuring is open issues on open source, like GitHub repositories.
Yeah.
Well, they tried to measure, like, a regular day job, right?
I think they tried to hire regular programmers doing a representative thing they would do during their day, right?
And that's different from Shima, right?
Shima's main job is not writing code.
It's a very, very small part of her job.
Now, if she can amplify that, then it's great.
But in the end, what does that account for?
2%, 5% of her job?
I don't know.
But it's not, you know, it's not the focus of her job.
Yeah, I guess that's true.
I'm guessing. I should say I'm guessing.
Well, the tools that she's building are just to help her.
It's not, it's, yeah, maybe not indicative of the majority.
I guess that's a question is, what is the pie chart of software development and software
engineering and code writing that happens in the world?
Because that affects, I guess, how people are going to have their productivity amplified.
The cases where I think it's like the best is where you're going from zero to 100 of
some tool that you want or some website that you want, and it's going to do all the scaffolding,
all the setup.
Like just this morning, I wrote a tool that I called STT, which is speech to text, because
I'm like, I'm spending so much time dictating to these models that like, or typing, I should
say, at the moment.
I just basically want to speak to it.
And I looked on Linux.
There were a couple different options, but Speech Note, the number one recommended one, didn't work for some reason, and then the second option was just to use OpenAI's Whisper.
Interesting.
Anyway, so I just built, I just asked it, like, hey, you wrote me this script to run this tool,
throw it in a little GUI.
But the main thing I want is just like enter to record, enter to stop, and then when it's
done transcribing, copy it to the clipboard so that I can just immediately Ctrl+V it
into the agent window.
And within like 10 minutes, I was done.
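A rough sketch of what such a tool might look like: the real 350-line script wasn't shown, so the `whisper` CLI call, the elided audio capture, and every name below are assumptions purely for illustration.

```python
# Hypothetical Enter-to-record, Enter-to-stop transcription helper.
# Assumes (not shown in the episode) a `whisper` CLI on PATH that writes a
# sidecar .txt, and some audio-capture mechanism, which is elided here.
import subprocess


def next_state(state):
    """Tiny pure state machine: Enter flips between idle and recording."""
    return {"idle": "recording", "recording": "idle"}[state]


def transcribe(wav_path):
    """Shell out to the (assumed) whisper CLI and read back the .txt it writes."""
    subprocess.run(["whisper", wav_path, "--output_format", "txt"], check=True)
    txt_path = wav_path.rsplit(".", 1)[0] + ".txt"
    with open(txt_path) as f:
        return f.read().strip()


def build_app():
    # Imported lazily so the pure helpers above work without a desktop session.
    import tkinter as tk

    root = tk.Tk()
    root.title("stt")
    label = tk.Label(root, text="Press Enter to record")
    label.pack(padx=20, pady=20)
    state = ["idle"]

    def on_enter(_event):
        state[0] = next_state(state[0])
        if state[0] == "recording":
            label.config(text="Recording... press Enter to stop")
            # (start capturing audio to capture.wav here -- elided)
        else:
            # (stop the capture here -- elided)
            text = transcribe("capture.wav")
            root.clipboard_clear()
            root.clipboard_append(text)  # ready to Ctrl+V into the agent window
            label.config(text="Copied! Press Enter to record")

    root.bind("<Return>", on_enter)
    return root


# build_app().mainloop()  # uncomment to launch the little window
```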
I mean, it did have to figure out some Tkinter, Tcl/Tk, there was some issue, but that was just, like, on my desktop, and it fixed it. But, like, it's, like, I don't know, 350 lines of Python code. I don't know how long that would have taken me, but I guarantee you I saved at least, like, 10x. Like, there's no way I'm going to write a 350-line Python script that's making use of Tkinter and Whisper and a bunch of stuff that, like, sure, I've never messed around with. Like, it's going to take me at least a day, right? And that's, it's also predicated on the
idea that you can speak comfortably faster than you can type comfortably for extended periods, if you're doing speech to text. And I think for some folks that's true. Maybe for most folks it's a great accessibility gain, I would say. I think for some folks that proposition is a marginal, marginal win at best. You know, some folks can type a lot faster than they can speak. I, you know, I have a friend who uses a stenography keyboard.
Oh, yeah, you, yeah, we chatted about this once.
Yeah, yeah, yeah. So it's a dying art, but it's sort of coming back through, you know, the niche, the niche internet. Um, so it's a thing, if you know. And if you're an experienced stenographer, you can type at 250 words a minute, and you can type all day long with no RSI, right?
Right.
So, but that again is a niche thing. I think for most people it's true, speech to text is a productivity gain, an accessibility gain, let's say, as well.
it also means like
I only use it when
when she was not around
because I'd probably drive her nuts
just talking to myself
Yeah there are some
Yeah exactly
And you know
We move at some point
Then I have my own room
Then okay
I could close the door
And I won't irritate her
And hopefully she'll be
At a different
Different corner
of whatever the place we're at
But this all to be said
Is that like
This is an example
Of a tool that like
definitely like it took me like grand total like 10 or 15 minutes and I don't know if it's a day I don't know if it's half a week but like the product like my ability to make that tool is like at least 10x faster this is the type of thing where like I said I'm going from zero to 100 if you're working on an existing very like old legacy or not even legacy code base maybe it was built in the last year but you know it's tens of thousands if not hundreds of thousands heaven forbid millions of lines of code and you're working on you
you just have a bunch of open issues,
potentially the productivity you're going to gain,
you're going to get there.
Like if all the,
you know,
like I know your workflow,
you have all the bells and whistles already set up.
You've got your testing frameworks
and your clang tides and formats.
Yeah.
And if all that stuff is already done,
and the only thing you need help with
is basically solving these open issues,
whether they're simple or gnarly,
you're probably, uh, what is it, your mileage may vary. Like, the mileage that you're going to get out of these tools is definitely way less than the zero to 100 use case.
Well, and let's face it, these AI tools are great at spinning up boilerplate,
like spinning up a new project.
But let's actually be clear here, it's pretty sad that we need them to do that.
Like spinning up a new project, even in C++, even with dependencies,
ought to be, I can do it in a couple of minutes.
I don't need to use a tool, right?
And if you have a certain workflow, and if you have things set up like I do for my professional environment, it is that way already.
Yeah
I mean, that is, I 100% agree,
and I made a comment when I was chatting with Bryce
that I said maybe it was because of the
painfulness of the C++
experience that I
I like these tools and there's a ton of people
that are not
thrilled with it and I mentioned
Rust and Ruby because those are
you know, off the top of my head, two languages that I know that really, really have tried to make sure that the tooling and the ecosystem experience of those languages is really good. So potentially, if you're in Rust, when all you have to do is rustup, you know, I can't remember what the commands are, but it's just basically a couple to install the latest Rust, and then cargo init, I think, and they even initialize, like, a Hello World. And then you're off to the races. And you've got unit tests built in to basically, like, the language, and documentation, all that stuff.
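From memory, the zero-to-Hello-World flow being gestured at looks roughly like this; the install one-liner is an assumption, so check rustup.rs rather than trusting it verbatim.

```shell
# Install rustup plus the latest stable toolchain (one-liner from rustup.rs).
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Scaffold a package: Cargo.toml plus a src/main.rs that already prints
# "Hello, world!" (cargo init does the same in an existing directory).
cargo new hello && cd hello

cargo run    # builds and runs the Hello World
cargo test   # the unit test harness is built in, no setup needed
```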
So potentially the value that you get from, you know,
these models if you're a Rust developer is a lot less than if you're in a language like
C++ that has a lot more tooling and ecosystem problems.
And don't get me wrong, C++ has made massive improvements over the last decade.
But there's still a long way to go, I think.
Well, and it's not just language specifics either.
This is where I get to put on my really old curmudgeon hat. And, you know, bring up people who use command line, people who use Emacs, people who use
VIM. Generally, you find, as a whole, among those kind of folks, they are much more efficient
at doing things on the, you know, well, I say doing things on the command line. Obviously, that's
a tautology. But they're more efficient at using the tools than, you know, folks who haven't
learned that and are used to using the more GUI-oriented tools.
And even, you know, even modern editors like VS code, they have a lot of bells and whistles.
They do have a lot of things in there.
But they don't have the kind of focus on editing that has been going on for 40 years in
environments like VIM and Emacs, right?
Right.
And, you know, I know I'm sounding like, I know how I'm sounding.
I mean, there's something to be said, you know, people always say, learn your tools and it's going to serve you for a lifetime, right?
Yeah, yeah.
If you really get to know your editor, well, the point about Emacs is it's more than an editor, right?
It's like an operating system.
It's an environment.
What makes it so great is you can look at any part of it.
You can ask it about itself.
You don't know how something is happening.
You can ask Emacs, right?
The whole environment is open for you to do that.
And you can make it do literally anything they want.
Right.
And so if you put that together with some command line nous, right? So you know Git, you know CMake, you know grep, xargs, find, all those other small POSIX utilities. You know, and it's not just Emacs. It's, you know, Emacs is a lot easier to program than Vim. But, you know, Vim's great strength is its language for manipulating text, the movements and such, right?
And that can be implemented inside of Emacs.
In fact, many people use evil mode.
But I think among the camp of people who really sort of mold their environment around
themselves, they already have a super nice workflow.
You know, I've worked with people who are just wizards of that kind of thing.
You know, you come to their desk, you ask them a question.
Before you've even finished asking the question, they've been typing to find the answer, right, and not asking Google, but just doing some command line query, stringing together some Unix tools, and before you finish the question, literally they have the answer, right? You're asking about some, you know, some question about a data set that the company's working with. They can literally work at the speed of thought.
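A made-up example of that speed-of-thought style: the data file and the question are invented here, but the shape, stringing together a few small POSIX tools, is the point.

```shell
# Hypothetical question: "how many orders did each region place?"
# Fabricate a tiny CSV so the pipeline has something to chew on.
printf 'region,amount\neast,10\nwest,5\neast,7\n' > sales.csv

# Drop the header, grab the region column, then count per region,
# largest first (here: 2 east, 1 west).
tail -n +2 sales.csv | cut -d, -f1 | sort | uniq -c | sort -rn
```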
Yeah, I mean, my thought, this is going to sound, I don't, I don't mean this, this is just the thought that popped into my head, is that, is there an analogy? Because I definitely agree that learning your tools, it serves you for life. Like, I completely agree with that sentiment and personally have experienced it.
Like, I remember when I did my first kind of real co-op. I worked at my university a couple summers, but the first time I had a real internship during university was at an insurance company in Canada called Manulife. Technically, it actually was at the subsidiary John Hancock, which is an American company. And actuaries use a lot of Excel. And I remember standing behind my boss at one point, who was a young guy at the time, so he was only a few years older than me.
And he was, like, moving around Excel like I had never seen before.
Right.
And I was like, what are you doing?
And he was like, what do you mean?
I was like, you just like went down to the bottom of the data in that column.
And he was like, uh, yeah.
And I was like, how did you do that?
And he was like, what are you talking about?
Like, this is, this is like 101 Excel.
Like, everyone knows this.
But I was in school.
I'd never seen anyone do this.
And he's like, oh, you just hit Ctrl and use the arrow keys. And if you want to select, you hit Ctrl+Shift.
It blew my mind.
It was like opening up Pandora's box.
Anyways, the point being is over the next four months of that co-op term, I became like an Excel
guru.
And I remember in the interview process, they asked me on a scale of zero to ten, how are you in
Excel?
And I think I said nine or something, because I was like, ah, there's always room for improvement.
I was a zero.
I think I've told this story probably on the podcast at some point.
I was a negative one.
You know, I knew how to make a bar chart, and that was it.
And I thought that's what Excel was for.
But Excel is this massively powerful tool.
The point being, I totally agree with the sentiment. Emacs, Vim, the folks that are, you know, that 1%, it's like magic watching people, people work, you know, they never use their mouse.
Yeah.
It's fantastic.
That being said, I do wonder, the thought that popped in my head is, the folks that are, you know, doing the magic things, is it the equivalent of the person that was, like, the best at the abacus right before, you know, calculators came out? And, like, calculators were the democratization of the ability to do those types of calculations. You don't need to be an expert to use a calculator.
And, you know, it did mean that folks that were experts at the abacus, did it, did it obsolesce? Is that a verb? Can you? What is the verb of, uh, yeah, I think that's right. Yeah, to obsolesce that skill set.
Anyways, that is, I don't mean it like, oh, if you've become an expert in this stuff,
it's not important anymore.
But at the same time, should it be that we're encouraging folks to, you know, become an expert
in this vertical, instead of saying, hey, there's this tool where, you know, you don't need to become an expert in it. And if you watch the thinking or the reasoning of these
models, they're doing all that stuff. They're grepping. They're using sed. They're using awk. Well, I think, I think, why not both? You know, people should be experts in as many things
as they can. You know, your brain is not a bucket to be filled where if you put too much
things in, it pushes other things out. No, your brain is a tree where if you grow a new branch,
you have more places to hang things.
Oh, did you just come up with that on the spot?
Or is that a quote from someone?
That was brilliant.
I don't know.
I don't think it's an original thought.
I wasn't quoting anyone.
But that's how to look at it, right?
Right.
Because that's how knowledge is.
It's associative in nature.
We don't forget one thing because we learn a new thing.
No, we relate new things to the things we already know.
and our whole knowledge tree grows.
Yeah, I mean, I don't,
that's a fantastic sentiment.
And I don't disagree.
Yeah, I'm curious to see what will happen.
Like I said, my evolving view is that there's a set of things,
and I think what's going to end up happening, though,
is that for the non-experts,
they are going to veer more towards this kind of AI-assisted code writing,
and then there's going to be, not a bifurcation necessarily, maybe it'll always be some kind of sliding scale. But for the folks like you, and maybe this is a good transition point to your C++Now talk that is not public. Are we allowed to link it, to the listeners, if they're listening?
It's fine. Well, it's unlisted at the moment.
It's unlisted, but guess what? If you're a listener, we'll link it in the show notes. But yes, I mean, you are showing, I would say, some pretty sophisticated
stuff in your talk, especially at the end there, where you had the whole, the bonus slides,
where you had the whole, that was just the, yeah, that's just something silly.
But the point being is that I do think there's going to be kind of like this bucket of
experts where expertise is really required.
That being said, there will be a percentage or fraction of the work that is definitely outsourced
to these LLMs.
Be sure to check these show notes in your podcast app or at ADSPthePodcast.com for links to anything we mentioned in today's episode, as well as a link to a GitHub discussion where you can leave thoughts, comments, and questions. Thanks for listening. We hope you enjoyed, and have a great day. I am the anti-Bryce.