Two's Complement - Pair Programming with HAL?
Episode Date: March 16, 2025. Matt and Ben explore the new world of AI-assisted coding: is it like pairing with a junior developer? Matt gets the recording working the second time; Ben worries about what happens when your business depends on code you don't understand.
Transcript
I'm Matt Godbolt.
And I'm Ben Rady.
And this is Two's Complement, a programming podcast.
Hey Ben!
Hey Matt.
Let's try and make this sound really natural, because we can pretend that we haven't just been recording for 20 minutes,
only to discover that the audio wasn't working, and so we're starting again. Yeah.
So this is gonna sound really incredible. How is that? How are you doing, my friend? It's been a while since we've talked.
It's been... I haven't seen you in so long. So many minutes. Oh.
Boy, everyone gets very much a warts-and-all view of our podcast. You know,
you and I have always said this is low effort for us, because otherwise we'd stop doing
it. It's just we don't want to, like, spend too much time on it. But we care enough to do it
somewhat properly. I think we've got a good system. I think so. That's the key. It's working pretty well anyway. So a few minutes ago we were just talking, and I was saying to you how much,
uh, when I ask you how you're doing, it's more of a genuine question because these
days I don't hang out with you half as much because I legitimately have not seen
you in a while other than 10 minutes ago when we recorded this the first time.
We recorded the introduction that didn't work.
Yeah.
Oh God.
But, um, the TL;DR... or the TL;DR where the R of TL;DR is "record", as in "too long, didn't record"... is that I have moved on from the company that we both worked at together, and
I am currently on a short break between jobs, enjoying the good life and working on
Compiler Explorer.
And it's no fault of my previous employer. I still think they're fantastic.
And I know that you're happy there and there's a lot of cool things going on
there, but it was an opportunity for me to do something new and different.
And I took it and here we are, but we're still hanging out and we're still
recording a podcast and we still have important and interesting things to
discuss.
And if anything, this means that we're less likely to accidentally leak company secrets.
Right, right, right. Yeah, actually. Yeah.
A few people actually who knew that you had moved to a new firm had asked me,
like, you guys going to keep doing the podcast?
And I was like, absolutely.
Like this is my chance to talk to Matt every week because now I don't have the built-in one of, you know, riding the train home or
whatever it might be. Or yeah, getting coffee or something. No, well, which we obviously can do,
still. But yeah, I've moved on. I am excited to do something new and different. And if you care
about such things, you can follow me on Bluesky or Hachyderm or one of those things. And you can see
what I'm up to there, but I'm not going to go into it here because this is not what this
podcast is about. This podcast is about you and me chatting about code and our experiences coding
and the things we like about coding. And, or today... again, it's so difficult to get excited
about something we've just talked about and try to pretend it's new and fresh.
Or today, when I was teaching a computer to program for me, or asking a computer to program
for me, because I'm too lazy to program.
I'm too old now.
I've forgotten everything.
I just ask it to do it.
It does it now.
So today's topic is AI.
What do we think about AI and programming and do we use it and whatnot?
So you know, I'll start.
Yeah, go on.
GPT, please explore the compilers.
That's right.
That's pretty much it. Actually, you'd be surprised: I haven't actually used it on
Compiler Explorer very much.
Occasionally I do.
So, a few minutes ago, we were discussing whether pairing with an AI is sort of like pairing
with a junior programmer...
that was one way of putting it.
Yeah, that was an observation that an ex colleague of mine made once, which I thought was good.
You then said something which was more amusing.
Yeah, I think it's more like... that's not completely wrong, but I think maybe a
better way to think about it is a very senior programmer who is currently taking LSD. Right.
Because you get a lot of like very interesting and cool things mixed with like that doesn't
work that has never worked.
And actually now that I think about it, it could not possibly work.
Yes.
That happens too.
Yeah, that does.
The junior thing, I think, brings it home for me a little bit more. I mean, I have obviously seen it make up APIs that don't exist before now, but usually
a gentle nudge in the conversation window to say, like, hey, I tried that and it didn't work...
It's very apologetic and then goes, oh, sorry about that.
Yeah, that's not the right thing at all.
And you're like, okay, well, why did you tell me that? But it seems like a junior
programmer who has incredibly voraciously learned everything
there can possibly be to learn about all possible topics. So it's
like one of those very keen people who has had no
experience implementing anything. So it's kind of like,
the knowledge is amazing, but putting it into practice,
the executive part of it, is difficult. And that's how I found
it. But yeah, we were saying that, like, you know, um, if you've got like a
knowledge deficit, like I have... so the things that I've used it for on Compiler
Explorer: I've got Copilot integrated into one of my IDEs, and I'll say,
oh, I don't know how to phrase this in TypeScript, because frankly, I don't know
TypeScript very well. And it's very good at saying, oh yeah, this is how you
explain this type to the system. And then you try it, it doesn't work. And you say,
well, that didn't work because of this. And it goes, oh yeah, silly me. And
eventually it gets there. That has been very useful for me. What have your experiences been
with using AI? How do you use it? If you use it? Yeah, I mean, I have not done as much of,
like I think there's three basic styles
that I'm aware of, of let's call it AI assisted coding.
Okay.
The most basic form is chat.
So you just open a separate chat window
from whatever tool that, you know,
IDE or editor you're using, and you just ask questions and it gives you answers.
Right. And there's a lot of, you know,
maybe copying and pasting code depending on exactly how you.
I'm just going to pause you there, because one thing I think I only became
recently aware of is how much extra context,
if you're opening up, say, your IDE,
how much extra context your IDE's plugin or whatever is sharing with the AI.
So, like, I have been very surprised that it can look at my clipboard, even my X clipboard, and kind of realize
the stuff I have selected in another application.
And it's like, oh yeah, yeah, yeah.
You probably want to paste that stuff in.
I'm like, oh, that's, that's creepy.
So that's, I didn't realize that, but it's obviously
valuable to the system.
Here are the last three files I've been looking at.
Here's the thing my mouse pointer's
hovering over, a little bit of extra context.
And then it gives it a little bit more mind reading ability
than I perhaps was expecting when I'm just
chatting with it.
Separate to if you open up a chat GPT window somewhere else
in a web browser and then start asking it code questions.
Yeah, yeah.
And I think that the model there you're describing
is sort of the second one that I would say, which is,
you have the tool integrated into your IDE
and you're chatting with it and you're saying like,
can you please do this or can you change that?
And then the third model that I've seen is like
sort of AI enhanced code completion,
where it is like, as you're typing,
it's like filling out, graying out code
that it kind of wants to type and say,
here's the thing that you could do here, right?
And I think those are three rather distinct modes, right?
That's true.
I would add to that, there's a fourth one
that I've seen recently, which I suppose is like two
plus, right? With the two, that is, an IDE-based editor,
um, you're generally driving the show. And, you know, things that I would do is highlight a function and say, can you simplify this,
please, or can you make it more, you know, TypeScript-y? And then it will give me,
Hey, what about this?
And then I go, yes.
And I can click a button and then replace the code I've highlighted with whatever
the, the, the AI has come up with.
But then there are more, like... this is a new word I learned the other day, agentic, which
is to say agent-like, where the AI has agency of its own.
And I was observing somebody, um, a friend of mine, driving the IDE by using a plugin that
actually can itself drive the IDE.
So he'd asked it to say, can you add new files?
Can you refactor this?
And it would do all of the things in one go and it would run the test and they
would fail and it would feed back the failure of the test.
And it would actually iterate itself until it had got to a stage where it said,
Hey, I did the things you asked me.
Are you happy?
And then you click yes, or you go, no, roll it back and we'll try again.
And that was, you know... that seems to be very much more like
the second case, but with its own agency.
But I wonder if it's distinct at all.
You know, I would call that like a fourth mode, where it's like,
you're not really the programmer anymore.
You are the executive programmer.
And well, bringing this back to a pairing analogy, right? Right. Like
you always had the driver and the navigator, right? And in sort of
traditional, you know, XP style pairing, the whole idea of that was
that the driver doesn't make any technology decisions. The purpose of
the navigator was to say,
like to communicate their ideas to the driver
and the driver would type what they think they heard.
And that was supposed to be a feedback loop
to make sure that you really had expressed an idea
that was like cohesive.
Got it. Yeah. Yeah. Yeah.
And so, like, to me, that model sounds like I'm pair programming, but I'm the navigator, right? And the AI is the driver.
That does seem more like it. Yeah. Yeah. Yeah. I mean, in this particular instance, this is a friend, actually a mutual friend of ours who I'm currently sharing an office with.
We rented a temporary office space so that I don't drive my poor long suffering wife insane by me being around all the time at the moment.
Um, but, uh, he's a lapsed programmer.
He used to be a programmer, but isn't anymore; he's more of a manager-type person.
And he has been working on some code by driving the IDE.
And it's amazing to see how much he's been able to do without literally
writing a line of code. Now, he can code,
but he hasn't done it for a while.
And it's, yeah, you know, he's got this sort of web application with like a game
and these pieces moving around, and he's like, yeah, I would like the mouse,
the middle mouse wheel, to zoom in and out of the map, and then the middle mouse button
to pan. And it goes, gotcha.
And it just makes all these changes.
He looks at it and he goes, yeah, it seems to work, and commits it, and then just
moves on.
And I'm like, wow.
I mean, it is fairly unambiguous when you say that, but it's like, the AI has
understood the flow of his current program well enough to be able to
then say, I can see how you would add panning and zooming into this application,
which is one of those things that is a pain in the bum to get right.
If you try and do it yourself and that's kind of, that's cool.
Yeah.
Um, 2D matrix transformations in there and a bunch of other things that you
really don't want to have to think about too hard.
That's right.
I mean, yeah, sure.
I mean, you'd get there in the end, but it was fascinating to see.
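As an aside for anyone wondering why pan-and-zoom is fiddly to get right by hand, here is a minimal sketch in C++ of the usual bookkeeping involved. The application being discussed is a web app, so this is only the underlying arithmetic, not its actual code, and the names are invented:

struct View {
    // World-to-screen mapping: screen = world * scale + offset.
    double scale = 1.0;
    double offsetX = 0.0, offsetY = 0.0;

    // Zoom while keeping the point under the mouse cursor fixed. Re-solving for
    // the offset after changing the scale is the part that is easy to get wrong.
    void zoomAt(double mouseX, double mouseY, double factor) {
        const double worldX = (mouseX - offsetX) / scale;
        const double worldY = (mouseY - offsetY) / scale;
        scale *= factor;
        offsetX = mouseX - worldX * scale;
        offsetY = mouseY - worldY * scale;
    }

    // Panning is just shifting the offset by the mouse delta.
    void pan(double dx, double dy) {
        offsetX += dx;
        offsetY += dy;
    }
};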
And watching it work is like creepy.
It's like, you know, the IDE is driving itself.
All these things are popping up and down.
Things are moving around, files are being edited, and then, you know, it's
explaining what it's doing in its little window.
I mean, that's, that's cool.
But you know, there is obviously potential for a darker side. And without going into,
like, some of the... you know, obviously there are IP concerns, but there's also
the "our computers are taking away our jobs" side to this too.
And so far I'm like, not in a meaningful way, because, uh, that was toil
that my friend needed to do, and could possibly have done with a lot of work himself, but he could just ask the AI to do it. But he's still making executive decisions
about like, yeah, I don't like the way it did this, or this class is more complicated
and, and iterating with it.
So there's still some programming knowledge that's required and needed.
But yeah, I can see the beginnings of something where maybe
there is a category of programming work that is made accessible to more people
in a way that doesn't require specialist knowledge.
That's probably the most positive way I can spin it. And I wonder what do you think about this?
What do you think? We both have kids that are at or approaching college age. My oldest is going into a computer science program at
a prestigious school nearby. And they have asked me this question. And I have told them...
well, I've told them a bunch of things, which I'm probably about to recount on this podcast, but if I were to summarize it in one
sort of paragraph, it's like, what kind of effect do you think that AI will have
on human society over the next, let's say, 50 years? Do you think it will make computers more
important to us or less important to us?
And I think the answer is pretty obvious when you phrase it like that.
I think what's happening right now, and what may happen for a little bit, is there's a
little bit of what I would maybe call an economic dislocation in that there's probably not an infinite amount of demand for software engineers.
Right? There is certainly an amount of demand that up until now has gone
unfulfilled because the cost is just too high. Right? Like this gets back to a
little bit of like, you know,
an economics 101 class about minimum wage, right?
Like you raise minimum wage, you raise the standard of living,
but you also create inflation
and you also drive businesses out of business.
So like, you know, a practical example for us,
living in Chicago is that Chicago
raised its minimum wage to $15 an hour.
I don't know if that has fully taken effect yet.
I think they've been rolling it out in stages.
It might have actually fully taken effect.
And there are, when you go down to Ogilvie Station,
which is the train station that we would ride into every day,
you see a lot of storefronts,
and those storefronts are empty,
and they have been empty since COVID, right?
And I wonder to myself when I walk past those storefronts if Chicago had not raised the minimum
wage, how many of these stores would now be open? And I would be very surprised if the answer is it
would be exactly the same, right? There has to be one or two stores that were kind of right on the
cusp of being profitable.
And it was the sort of final straw for them, yeah.
Yeah, yeah. And then it's like, ah, you know what, with a $15 minimum wage, this store
just doesn't make sense to have open, right? There has to be at least one, right? And that
is, I mean, political... you know, obviously, I think you and I agree on most political sides of things, you
know, I don't necessarily think that's a wrong thing. I think that's just economics,
right? That store was not economically viable and shouldn't have been anymore. But
yeah, from a purely dispassionate economic point of view, I agree, I think that
makes sense. And yeah, so are you sort of making the argument that what we're doing now is we've reduced the minimum wage, in inverted commas, of programmers? Because now, if I'm an estate agent... sorry, realtor, I'll translate for you...
And I need to change my website. I don't have to call up the website company anymore because I can load up my website and tell an AI, can
you just change some things about this for me, please? Or can you make it so this happens?
That's not exactly what I'm saying. I'm saying like the real estate agent would never have
even had a website. And now they can, because it wouldn't have made economic sense for them
to have a web development company, because it was too expensive. Right. So they just wouldn't. And I took that example
purely because one of my friends back home is exactly this: he paid like 200 quid once
to get his website, and he's got like a zip file and that's his website, and it's
been his website for like 20 years now. And now it's like, well, I've never changed it
from that, because I can't afford to keep paying someone to change it. Right. But now maybe I can get the computer to make changes
for me and I can say, can you update this house or whatever? And I don't really know what's going
on. So yeah, and in that way, I suppose what we're saying is AI is a tool, and tools,
in the right hands, are useful. Maybe, you know, like a wrench... a spanner.
Again, I'm doing it.
I don't know why I'm code-switching between English and
American here. Freedom speech.
Oh boy.
So much wrong with that at the moment.
Anyway, but you know, like, there's a number of things that you can't,
uh, I can't... like, I was just literally
replacing the battery of my sump pump downstairs. And I'm like, I can't get
the wing nuts off of the terminals. Am I going to call someone in? No, I've got a tool to do this for
me. I can use a wrench. Now suddenly I've got enough leverage for it, and that's fine. And so
you're giving leverage to people, and maybe AI is programmer leverage. That's how I personally use it, but maybe it's leverage for non-programmers.
I was about to say muggles then, but that's probably not the right word anymore. Normies.
Yeah, no.
So, I mean, as I was getting to before, I don't think there's an infinite amount of
demand for programmers.
There is a lot of demand for programmers, and it is also possible that having AI will actually
increase the total net demand for programming, because, as my previous statement kind of alluded
to, computers are going to get more important to humanity over the next 50 years, not less important.
And so the question is, at what rate will that increase?
And it might increase very quickly because of this.
But certainly in the short term,
what this is doing is reducing the unfulfilled demand
for programmers
by dramatically increasing the sort of total supply
of programmers... ish.
Right?
You can either be an expert programmer
who is even now more productive.
And I certainly think you and I can speak to that
of like the way that we use this tool
is to get things done way faster than we would have before.
And we should talk about some specific examples of that,
but also everyone else who is attempting to write code,
whether they be a realtor who's just trying to update their website or
somebody who used to be in programming and hasn't done it for 20 years,
or somebody who is just learning or somebody who is just treating it like their day job and just like, I just need to get more
stuff done. All of those people, all of those people are now going to be more productive
because of these tools, right? And so that has the effect of sort of filling in gaps in the demand
that were there before. And, you know, unfortunately, for us, perhaps reducing the total amount of compensation that we can expect
to get because our skills are so limited, and therefore, in so
much demand, I would argue that from a purely economic
perspective, that is nothing but good. From a sort of selfish
perspective, it's like, maybe not the best, but I'm willing to
take the trade because it makes me as an individual more
productive. And for somebody who kind of getting back to, you
know, this this hypothetical person who might be going into
a computer science degree and considering this as a vocation,
I would say, if you're in it because you want to make a lot
of money, you can probably do better.
But if you're in it because you love programming, that's not going away.
Right. And certainly I think that advice landed with your eldest, and they
were very... that is why they're into it.
Right.
Is it?
Yeah, right.
They love it.
So that makes sense in that sense.
Yeah, I think that's
good advice anyway: if you can, do something you love... I mean, certainly at university, do
something you love, because you can stick to it, and maybe it demonstrates the ability to learn.
But that's a whole other conversation, and we don't have to get into the weeds with
that now. But yeah, it's interesting. I suppose, again, we were coming at
this because, you know, we are considerably closer to retirement
than we are to the start of our careers, right? Uh huh. Sad to
say out loud. Yeah. And so maybe, like, a third of the way... Oh,
you are younger than me, in fairness. Yeah.
Well, no, I'm just going to program until I drop.
Well, I think that's, that's another thing, right? You know, you and I, as everyone who's listening to this podcast knows, we love this.
We would do this anyway.
I am doing this anyway.
I'm between jobs and I'm doing it because I love it,
Not because I'm being specifically paid to do it.
Yeah.
And so it's, it's very... and like,
literally, while we're talking, Discord notifications popped up where friends of
mine are working on, like, Amiga demos, still, you know, for a computer that has
been obsolete for 30 years. That's because people love this. We're
very lucky that at the moment we still get paid reasonably well to do what we
enjoy. Um, the economics may shift as AI changes things, but, you know,
who makes the AI? I suppose there becomes a point, maybe an inflection point,
at which AI becomes determining of its own future, designs itself, updates itself, under maybe some
human direction. But at that point I guess that's the singularity type of deal,
or maybe that's a different thing. But, like, I can see a world in which it takes
off and then we're all in trouble, or we can all retire and live in
the world of, you know, Star Trek, which would be nice.
Um,
yeah, I don't have too much hope for that. Um,
there's a certain amount of political background that indicates that the
opposite of that might be happening.
But that's... I mean, the, uh, the sort of need-free society, I'm not,
I'm not holding my breath on that one.
No, sure.
Um, but I do think the recent developments with DeepSeek
are a very interesting and important step
in this whole conversation.
I think we would be remiss to not talk about that.
Absolutely.
Cause it really seems like... cause one of my biggest concerns
with sort of the long term of this has been essentially, like, AI vendor lock-in, right?
Like if these are going to be tools that we rely on to do things, I don't want to be beholden.
I mean, this is like the bad old days of Microsoft and like the, you know, 90s where it's like,
yeah, you know, you need to pay the Windows tax whenever you buy a new computer and like,
you know, everything was just sort of locked up by them.
And that's just terrible for so many reasons.
And I think that a lot of what, at least in my career,
I sort of gravitated to was open source,
not only because you had access to the source code,
but also because of the sort of free
as-in-speech aspects of it that allowed you
to do what you needed to do
and never get boxed into a situation.
And like, you know, the scenario I was painting
to a friend of mine the other day about this was,
so imagine that you are one of these people
who's not really a programmer,
but you sort of figure out how to do things using AI, right?
And you build a little tool or something
that is useful to you,
and then you change it a little bit
and it works really well.
And you're like, you know, maybe I could sell this, right?
And so you continue to use AI
to build like a website or something,
or you get on some payment portal
and you set something up and you start selling this thing, and it does pretty well, right? And it sort of becomes like
your side gig, and you do this for a while, and then it starts to become like your main source of income,
and you're like you know what I could just quit my day job and I could just do this right and you
keep making some improvements to it and you sort of do this sort of pair programming style thing
with the AI, where you say: make changes to do this, and change it to do that, and change
it to this, right? And then, you know, you've done this for a
couple years now and you fully quit your job and you this is
how you pay your rent. And then one day you go to make some
change to your thing because you know, there's like some updates
somewhere or something's broken or there's news. I got to go
fix this. And you ask the AI to do it and it doesn't. And you
have and then and then you ask again and that doesn't. And you have and then and then you ask again and nothing doesn't. Yeah. And you're like, what's going on
here? I don't know what to do. Yeah. And then your thing stops
working. And then people stop paying for it. And then you
don't pay your rent. I mean, to an extent... yeah,
you're talking about, obviously, vendor lock-in, which is true in many other
parts of the world too.
I mean, you think about... but this isn't just the vendor lock-in.
No, there is something more insidious happening,
right?
Like you build a thing that you don't understand.
Yeah.
And you have no way, no recourse at all to fix it.
Yeah. That's a really interesting...
because what I was gonna...
the thing that was coming to mind was
folks who make a living from being YouTube people
on Patreon, whatever, it's like, yeah,
but YouTube could say, no, you can't do this anymore,
or it could stop working or whatever,
or Patreon could say like, yeah, we're shutting down
or whatever, and you got like,
but they still understand what they've got.
Maybe they don't own their uploads, but maybe they do still have it somewhat.
You know, you could imagine them setting up shop.
Yeah, it's not as if it's built on nothing. The foundations are still there, right?
The foundations are still there, right?
If you are a social media influencer and you do interesting stuff, then that stuff is still
interesting to somebody somewhere, presumably.
And some new site will come up and you can be like, OK, I've got my things.
I'll move over to this new site.
Whereas with this, yeah, there are no foundations.
The foundation is a black box.
I suppose it's not totally, because you could say, all right, if it's worth it,
you could hire some software engineers.
Hopefully somebody still knows how to program by then.
Right. Like, that's the thing... I think this is why there will
always be a demand.
No, you're right.
For people who understand computers and programming.
I think that already is a little bit insidious. And I think I
have more of a problem with this than most people
do, but it's the reproducibility aspect of it. Right.
You know, already today there's randomness inside the way that they work, and
I kind of get that. But, like, you could imagine them just not making it work.
I mean, you didn't even suggest that it was going away.
Actually, you just said it didn't do it.
Right.
So this is perhaps the same thing.
I'm just having the same realization that, yeah, you go to say, can you make a thing
change in this way?
And it does it in a way that doesn't make any sense anymore.
And you can't change it. You know, it doesn't have to just
go away.
Yeah.
This is your point is like, it can just change.
You've got no recourse.
Whereas in the open source model, you're like, I still have the code on my computer.
I don't necessarily understand it, but I have it here.
I have a copy of it.
Whereas I suppose to an extent, you know, things like AWS is another
vendor, which I'm very much locked into.
And tomorrow they could tell me that they're deprecating something
that I absolutely rely on and then I'm stuffed.
So this already exists in different ways, and I'm stuffed in the same way.
Yeah.
I'm going around in circles in my head here, but I do think there is a
material difference between "I know what I'm doing and I'm relying on things that
do that for me", um, in AWS, you know, a load balancer and whatever
and a storage system or whatever,
and it would be a pain in the bum to change it, but I can... versus "I've built
an empire on something I don't understand".
Right.
Right.
Or I've built a critical tool on top of something I don't understand.
Yeah.
Although I wonder how much that's common in business full stop, right?
How many CEOs?
Sure, sure, sure.
And I'm not saying this is fatal.
I'm not saying like, and this is why AI won't work.
It's just introducing a risk, and I think it is a pretty significant risk that you don't
have when you have people who
actually understand the code that is being written and how it works.
They can, for example, leave the company.
That seems like a thing that might be top of mind for some of us.
Right.
And so there's an interesting aspect to it there as well, but it seems,
it's more in your control.
You can, as an employer, as somebody who's managing,
take it into account. We've talked about crypto risk, I think, as we've called it,
which is bus risk to everybody else, on this podcast before. You manage it. Whereas
you can't manage it when it's a magical mystical genie in a bottle that someone else will rent
to you.
Yes, exactly.
Exactly.
On a day by day basis.
Right.
And I think DeepSeek again, bringing it back to that,
makes that easier in that you can go and you can get a model
and just be like, all right, I'm just not going to change this.
This does what I need it to do.
And there are kind of two layers to this, by the way.
There's the system that you build that is built by an AI,
but fundamentally what it produces is just code that you run. And
then there is a system that relies on an AI for its
operation, right? And that has a much greater risk of just one day
not working and you don't understand why. Right. So
there's two different modes there. But I think it's kind of
the same thing of, like, you're exposing yourself to this vendor lock-in, this thing that, you know, back in the
90s, we were very acutely aware existed.
And it's a slightly different dimension on it, of course, and there's two different
things there.
But it's a risk. Like, there's a reason that open source got so popular, right?
And one of the reasons is it mitigated this risk significantly.
Yeah. I mean, obviously, yeah.
So I never thought of it that way.
So I think DeepSeek can also have that effect.
Yeah.
The thing about DeepSeek is, and maybe this is again slightly different than open source...
So, you know, I could use Microsoft SQL Server and I have all the problems of lock-in that we've just talked about.
I could use SQLite or some other, you know, MySQL or whatever it's called these days,
and I can have the source code. It won't change from underneath me and everything,
but I still don't really understand it unless I'm super into databases as a user of databases.
I couldn't, you know... if, for example, they suddenly changed the licensing rules, or...
actually, I can't imagine what sequence of events could make an open source thing go
away.
But if they stop maintaining it, and I can't maintain it, or I can't afford to maintain it,
or don't understand it, I have a similar problem.
And then DeepSeek has another issue, which is like, they can give you an open source
model, but
can you train it yourself?
Right.
And so far they're like, no, no, no, that's magical stuff that happens behind the scenes
and requires more money than you have access to, even though it's considerably less than
perhaps some of the other models.
And that seems interesting to me: it's a slightly, subtly different
open than other opens.
And unless I'm missing something, am I misunderstanding how that works?
I mean, I think you're absolutely right. Like if you want to train your own model, it's,
it's going to be like millions of dollars, right? Isn't that how that works? So it's
like, that's a pretty significant barrier. But I think the analogy in
that particular case... like, let's say that you built a system. It's not that you built the system with DeepSeek;
right, you built a system where you wrote the code and it uses DeepSeek. So it's not all that different from, like, hey, I've got this database that I don't understand. It's very similar,
because you understand how to change that code. And if that component, whatever it may be, stops working,
you have a reasonable expectation as,
all right, this is going to be a pain in the butt,
but like I can reach, I can fix this,
and I can redo this, and I can do that.
And like, you know, you can find ways
to at least patch around it,
if not just get it working again.
Whereas if your entire ability to change the software at all
is reliant on a tool that
you fundamentally don't understand how it works and you don't even really understand
what it produces, then you can easily get into a state where you just can't make changes
at all.
And the one thing that I have seen in my career is software that can't change dies.
It has to be able to change because the world changes
around you and if you don't change with it,
the software stops being used, right?
Yeah.
So that's sort of like the greater risk for me
is like a world where we have not only the things
that we do in our day jobs,
we maybe have like critical pieces of infrastructure,
you know, the sort of mythical... what happens when the load-bearing internet person is just, you know, DeepSeek?
Yeah. Right. And like, yeah, there's this critical piece of infrastructure that everyone else depends
on and nobody understands how it works anymore because it turns out the maintainer died and his
AI took over. Right. Yeah, it's very... I see the difference in kind between those things,
you know. And the open source model is different to, yeah, "I built something with a tool
that I don't understand, and I can't understand what it built either."
That's really, yeah.
Without getting... I have a bit of a change of direction here.
It's a bit of a gramophone needle scratch kind of moment.
But I feel like, you know, as we're coming up on time,
I'm thinking it's at least
worth talking a little bit about some of the productivity
boosts that, you know, we alluded to earlier.
You mentioned that, you know... and so I don't want to be
all sort of semi doom and gloom
about this kind of stuff, because I think, you know, there are a lot
of positive things. We started out being positive.
And so some of the success
stories, I think I've already said, for me were
like, you know, getting it to do: uh, I don't understand TypeScript.
Can you please do this thing or, you know, can you refactor this code or can
you make suggestions, even. And those are very, like, pair-programmy things.
And sometimes it's really, really good at that.
So the thing that I've been doing recently, which should be a surprise
to nobody who's listened to this for any length of time, is I'm actually working on yet another emulator: I'm working
on a Z80 emulator for a conference talk. And I'm experimenting with different ways of writing
a CPU emulator to see how they come out. And I'm also experimenting with different types
of C++ functionality. But anyway, the Z80 obviously is an incredibly well known, well
understood CPU, and there's obviously
a lot of information about it on the internet.
But I've been remarkably impressed with how it's handled it. My particular take on it currently, at least
the current incarnation, is more like a traditional decoder.
So I'm switching on the opcode, and then I'm returning a structure that describes what
that opcode does.
And one of the things in that structure is, like, the actual
string that is the opcode, and then there's a bunch of other flags, like how many more bytes am I expecting after this byte?
What is the source? What is the destination? And, you know, these are all enums.
So, you know, each thing is like: case hex 22, return, quote,
LD HL comma BC, quote, comma,
one (the size of the opcode), comma, uh,
Register::HL, comma, Register::BC, comma, and there's some
other nonsense, but it's just that, right.
Okay.
So it's fairly straightforward, for some definitions of straightforward.
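For anyone trying to picture that structure, here is a minimal sketch in C++ of the kind of decode table being described. The type names, fields, and the particular opcodes chosen are illustrative assumptions, not lifted from the actual emulator:

#include <cstdint>
#include <string>

// Hypothetical names, for illustration only.
enum class Register { None, A, BC, DE, HL, SP };

struct DecodedOp {
    std::string mnemonic;  // e.g. "LD HL, nn"
    int operandBytes;      // how many more bytes follow this opcode byte
    Register source;
    Register destination;
};

// A fragment of the big switch over the first opcode byte.
DecodedOp decode(std::uint8_t opcode) {
    switch (opcode) {
        case 0x21: return {"LD HL, nn", 2, Register::None, Register::HL}; // load a 16-bit immediate into HL
        case 0x33: return {"INC SP", 0, Register::SP, Register::SP};      // increment the stack pointer
        default:   return {"???", 0, Register::None, Register::None};     // anything not yet handled
    }
}

The appeal of a data-driven table like this is that every entry has the same regular shape, which is also exactly what makes it easy for a completion model to predict the next one.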
Um, but I was absolutely shocked that I would write the next case statement and
the AI would correctly fill in
the completion for that. That is, I'm just literally in a big switch statement, the top of it
is like huge switch opcode and I do case hex 33 colon and it says return and it's filled in.
So it's like worked out that this is a Z80 and then therefore hex 33 is this opcode and it can actually determine
what my program means or needs from the opcode that that was just blew me away.
That was like, and now I want to do is just auto complete the whole lot of them
because there are hundreds of those stupid things, but obviously testing it
and checking it is the thing it's.
But that is amazing to me. And a similar story: in my 6502 emulator, I had a test which had a
static, uh, const array of bytes
that was a routine in, uh, just assembly code, in machine code.
Right.
And I just asked it, I asked the chat thing, can you comment this for me?
And it worked out that it was in fact encoded 6502, and it broke it onto multiple
lines, and for each, like, hex 33, it had the comment of, like, this
is LDA with 27.
I'm like, again, bonkers level sort of beginnings of reasoning about what I'm
doing, not just like predicting.
I don't know.
It felt good.
It felt really clever.
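For readers trying to picture the kind of output being described, here is a small invented example of an annotated byte array. The routine itself is made up for illustration, although the encodings shown are genuine 6502 opcodes:

#include <cstdint>

// Before: an opaque blob of machine code sitting in a test.
// static const std::uint8_t routine[] = {0xa9, 0x1b, 0x8d, 0x00, 0x02, 0x60};

// After asking "can you comment this for me?": the same bytes, broken onto one
// line per instruction, each with a disassembly comment.
static const std::uint8_t routine[] = {
    0xa9, 0x1b,        // LDA #$1B   ; load the accumulator with 27
    0x8d, 0x00, 0x02,  // STA $0200  ; store it at address $0200
    0x60,              // RTS        ; return from subroutine
};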
So that's amazing when that happens.
Yeah.
So what... have you got any cool stories?
I mean, I think you kind of hit on it there, which is, you know, my
perspective on these things is the older something is the better AI is
going to be at dealing with it.
Right.
So, like, really well-worn public APIs and tools, it is so good at, right?
I've got little Python tools that interact with the YouTube API.
So I asked it to write it.
It just wrote the whole thing from scratch.
I never had to understand it.
All of those APIs that have been around
for a reasonable period of time
and are very well worn.
It is so good at that.
And it has gotten to the point
where it's like, you know,
rarely will I do things like
look up the documentation first.
Usually I'm looking up the documentation
only to double-check what it is that the, you know, LLM in question has done. Yeah. Or to just make
sure that I understand it, right? Yeah. But, like, anything that is older like that,
it's so good at. And it's funny, because I've thought a few times...
and this is me dipping slightly back into the doom and gloom
for just one second, because it's not really doom and gloom.
I think this is actually kind of a good thing.
But one of my concerns with these tools has been like,
are we just gonna turn into like human QA, right?
Are we gonna like, okay, the AI is gonna, you know,
dump out all the terrible code
and then our jobs are gonna be to fix it, right?
And I actually think a probably more realistic version
of that is it's gonna be our jobs to learn the new stuff
and leave the old stuff to the computers
because the computers will have the sort
of encyclopedic knowledge of like all of the APIs
that have been out there for years and years and years
and the best way to use them.
And what our job is going to be is the, like, you know: hey, I built my tool with this LLM,
and I based my whole business around it, and it stopped working, and I don't understand why. And it's
like, oh yeah, because they released a new version of the API that you use, and the LLM hasn't picked
it up yet. And so my job as a programmer is
to be like, hey, somebody built a new thing. I need to learn how to use the new thing because
the AIs haven't figured it out yet. Yeah. And I think there's going to be a lot of that where
it's like, yes... in some ways, it makes me a little sad, because sort of like the old
graybeard thing of, yeah, I know every corner of this API, and I know exactly how it works, and
I know where all the bodies are buried...
like, maybe that's not as useful as it used to be anymore.
But maybe what that means is we're going to be working on new
stuff, and, you know, I like working on new stuff.
It's fun.
It's interesting.
It's cool.
But yeah, to bring it back to the original topic here: yeah, all of those
sort of public, well-worn APIs and tools that
have been around for a while, it is so good at knowing those things. It writes a lot of
the same code that I would write. Obviously, I have very particular things around
testability that I do. And I have tried to talk about that sometime. Yeah, maybe once
or twice. And I have tried to, like... you know, you can do the things, especially with some
of these tools where you tell it to remember things.
And I've tried to like teach it like,
oh, this is the kind of stuff that I like.
This is the kind of stuff that I don't like.
This works for this and this works for this.
And I think it's maybe getting better at it.
I'm not really sure.
So I wind up refactoring that code a lot
or not even refactoring it because the code
that it spits out isn't really tested or testable.
But I like wind up, you know, turning it into something that I can then write tests
around and then sort of doing the reverse driving of those tests where I like break
the code and make sure that the tests fail.
Right.
That is actually a nice one, yeah.
I mean, so yeah, a landmark moment in my understanding of AI was
realizing that I could be much more discursive about saying what I liked and
what I didn't like as kind of part of a founding document. That was a huge, like, oh gosh, yeah, that
makes sense.
You know, tell it: don't do this.
I don't like it.
Yeah, right.
That's a useful thing, right?
Yeah.
Now, otherwise it's just trying to write generic, um, code from the
internet.
Um, and then the other thing would be, yeah, like, writing tests: it's very good at that.
So, this was back when I was working with you at the same
company, um, I had a very warty, complicated piece of code that was very thready.
I think we probably talked about these things, and I asked the AI to try and
poke holes in it, and it did.
And I had to argue with it, to prove to it that there wasn't
a window of opportunity when, uh, you know, a threading bug would happen.
And that was valuable too, because, you know, it was like having a very smart
person to argue with, and I didn't have to expose myself as being not sure about it
to anybody else. Although now I'm just talking about it in public.
But, um, also autocompleting. You know, I would point it at my tests and say,
what am I missing?
Can you tell me what kinds of things are missing?
And it goes, oh, have you thought about these?
Are the edge cases covered here?
And then very often it would autocomplete the test.
You know, you could just write the test description, you know, should, uh,
should return zero for negative values.
And it's like, boom, there you go.
You just hit tab and the whole thing auto completes.
That's a valuable thing, I think, or it's a useful thing.
You know, it takes some of the toil out of what we're doing without
necessarily taking away some of the expressibility and the sort of freedom
to do something cool.
It's making us all more productive, which is net a good thing and may create some sort
of weird ripples in the short term. But I think in general, it's good. The whole thing
kind of reminds me, I don't know if you were of this era.
I think you probably were.
Did you ever have a boss that was like skeptical of Googling things?
Did you ever have that?
No, no, no.
Like the very first job out of school,
I had a boss who was a lot older than me.
And he was like, you know, you got to, you got to read the documentation.
You got to get the docs on your computer and you got to read through them and you got to figure out
what the software can do from that. I'm like, or I could just Google it. And he's like, no,
you got to read the docs. And this sort of seems like the same sort of thing to me, where it's sort
of like, you know, I could see someone making the argument.
It's like, oh yeah,
you know, if you just have the AI write all the code for you, you don't understand it.
It's like, well, I mean, that's not strictly true.
You can, you can understand code that other things have written.
That's a thing programmers can do.
Some might say it's the most important thing programmers can do actually.
Right, right. So, like, you know, if the way you choose to work is
to have a tool generate some code and then you read that code and you understand
what it does and you fix it, if it has any problems and you write tests for it,
if the tests are incomplete or whatever.
Like that's a perfectly valid way to work.
Um, it's not unlike, you know: Google the thing, look at Stack Overflow, copy
and paste into your code and then repeat.
Because that's what the AI is really doing behind the scenes.
Anyway, we haven't talked about the ethics of where it got all of
its information from, but like, you know, yeah, yeah, yeah, yeah.
Which is probably way outside the scope of what we have time for, but yeah, that's
effectively it: you've just automated that part of it.
Right.
Right.
Or, going back to this old boss that I had: it's not exactly like looking up the
example in the documentation, or the sample code that came with the library, and
then pasting that into your code and then editing it as appropriate, but it is not
totally dissimilar from that process.
Right.
That's a very interesting take.
Yeah.
No, I'd never had a boss that was skeptical of Google.
Thankfully.
I think he mostly came around eventually, but it was the
first, like, six months I was there,
he was like, oh, you got to read the docs.
You got to read the docs.
I understand it.
Maybe that's just 'cause we're getting old.
Yeah.
So I kind of slightly sympathize with that.
Yeah.
Uh, that point of view.
Well, we've been talking for a good old while on this a lot more than I was
expecting and it went in a completely different direction the second time from where we
started the first time.
So, uh, maybe we should, we should put a pin in it for now.
We should maybe come back to it, uh, if anyone has thoughts on this episode,
'cause I imagine this is one that people might have strong opinions on, one way or
the other. You know, certainly in conversation with friends, uh, they've
been like, "oh my God, you use AI?" kind of fear, which, like, surprised me:
that people could have such a visceral reaction to something which I perceive as just a tool.
But I also understand that maybe there's a concept of, like, uh, "it's stolen all the
Stack Overflow things from me", I don't know, and other things. So I'd be interested if anybody
has opinions on this. Um, and maybe we could cover them, if there are any, on a future episode. But, um, I think we're going to have to leave it there for today. So.
Yeah. Uh, lovely to catch up with you, my friend. I'll see you next week.
Yeah, I'll see you. Yeah, I guess so. Hard to say. Oh, actually, no, uh, a bit of
inside baseball for those listening: now, we're recording this in February, and next week will be Valentine's
Day.
So maybe we won't want to be recording on Valentine's Day. I don't know. It's gone very poorly for me if that happens. Well,
yeah, we'll see. So it might be two weeks or something, or we'll do it in a week.
But we'll get this episode out probably in... oh, I'm gonna say it, this is cursing it... probably in March.
We never reveal when these things are recorded.
But never mind; this time, a special, what, a special, uh, dispensation.
Anyway, it's been fun.
Until next time.
You've been listening to Two's Complement, a programming podcast by Ben Rady and Matt
Godbolt.
Find the show transcripts and notes at www.twoscomplement.org.
Contact us on Mastodon.
We are @twoscomplement@hachyderm.io.
Our theme music is by Inverse Phase.
Find out more at inversephase.com.