Two's Complement - Vibe Coding and Robot Teammates
Episode Date: November 13, 2025
Ben worries replacing juniors with LLMs creates a future hiring crisis - who'll train the robot-wranglers? Matt blames COVID brain fog, then proves it by botching NP-completeness. Capitalism is bad at... escaping local minima.
Transcript
I'm Matt Godbolt.
And I'm Ben Rady.
And this is Two's Complement, a programming podcast.
Good afternoon, Ben.
Hello, Matt. How are you?
Wow!
Yeah, you've confused the heck out of half of our listeners now.
Not only did I not say, "Hi, Ben," I said, "Good afternoon." You also asked me how I'm doing.
Really great, thank you very much. I am still recovering from a bout of COVID that I caught from
far too many people, seeing far too many people at a conference, which is unfortunately
the occupational hazard of such a thing, but I'm doing fine other than just a bit of a cough.
How about you? I'm doing all right. I have no real complaints. I only have zero real-world
complaints, imaginary complaints. If there's no real component to it,
they're truly imaginary, whereas if there's a mix, it's a complex complaint, 90 degrees out from any
sort of real complaint. So yeah, you've completely derailed me immediately, one minute in, and already...
what was it we were going to talk about? And I'm going to blame the COVID for that, because
that's what you're allowed to do. You've got the fog brain. Brain fog.
I feel like someone posted something on Twitter to the effect of
something like cognitive hypochondria, or something like that. And I'm like, maybe it is a bit
of that, but you know, even if it's psychosomatic, it seems real enough anyway, right? But we
weren't going to talk about that today. We're going to talk about, not my inability to think,
but perhaps thinking, or getting something else to think for us. Yes, outsourcing your
thinking to another entity, a thing that can think or maybe can't think. Let's not get too
caught up in the specifics, the semantics, but something which looks like it might think.
And that may be an intern, right?
That's... I suppose I can tell myself they can think, but I've got no proof, in my solipsistic worldview.
Nope.
Everyone else is a figment of my imagination.
So that seems to, no, no.
But realistically, no.
You had an idea before we pressed the record button.
I did.
I did.
So I think there's sort of an interesting phenomenon going on with, you know, the development of software engineers.
It's not the development of software, but of the engineers that make it.
And AI and sort of learning things and the way that people think about, you know, costs within it.
I don't know. There's a whole bunch of interrelated things that I think are going on right now.
And it might make for a good podcast.
Well, we can certainly sort of explore it, you and I, without really any plan as per usual.
Yeah, as per usual.
So I think the way, first I'm going to back up a level and be like, what is programming?
Just to set some really basic foundational layers for this conversation.
Are we talking about software development, like the whole process of it, or programming very specifically?
It's really software development.
Okay, okay.
So yeah, tell me what you think.
There's a dimension, an aspect of this, that I think is important for this conversation, which is that a lot of what software development is, in my opinion at least, is taking the sort of messy, inconsistent, poorly specified real world,
not the imaginary world, the real world, and mapping it into the completely uncompromising world of
computers, where everything has to be specified exactly. There is no room for vagaries. And if you
are unaware of the vagaries that you're creating, the computer will create them for you,
much to your dismay. Yes, right. Very apt. Yeah. You know, let's talk about
undefined behavior, right? Exactly that. Yeah. So, when you're
engaging in this activity of mapping the real world to the
sort of digital world, like inevitably one of the things that you have to be able to do is report
back to whatever it is that is telling you to like, hey, map this thing into a computer for me,
please. And be like, um, actually the thing that you asked me to do is physically impossible. And I can't
actually do that, because it is impossible to do that, or logically impossible.
And so that's sort of like a feedback loop that is essential to the act of programming.
I think it's naive to think that anyone, really, even an experienced software engineer,
can just say, go build a thing and then the thing appears 100% of the time, right?
because there is a very good chance that unless you have actually done the work of taking your
ideas and mapping them into the digital world, there's a very good chance that they don't actually
map and that compromises will need to be made, questions will need to be answered, things will
need to be clarified.
Well, if I can interrupt, even with a well-specified problem where you say, solve X,
I need something that will do X.
There is an almost infinite number of valid solutions to that problem, and some of them
will be more maintainable than the others, some of them, which may or may not be a goal,
some of them may be more performant than others, which may or may not be a goal, that so many
dimensions feed into what is a good X that aren't often specified up front.
like and you know if you work for a high frequency trading company then probably it's a given
that when you're writing something to go in the trading system it should be as fast as possible
and do no extra work. But if you're writing a framework for, you know, wider consumption,
for anyone to come and use, then you probably want to be
as general as possible without tying yourself in knots.
And so there's a huge amount of unspoken requirements that also come.
And as an experienced software developer, you like to think you can make an intelligent guess about it.
But also, you can be so wrong, so often.
And we are.
So that's not even taking into account the fact that what you were asked may not, as you say,
be possible because it involves solving the halting problem or achieving something that is
going faster than the speed of light, or something.
Things like this. Right. Yeah. So it's a back and forth process where you say, maybe even internally: given two equally possible solutions, should I do X or Y? What are the tradeoffs between them, and all that stuff? And at that point, you may turn around and go back to your stakeholder and say, hey, do you want it fast or do you want it general, or whatever? That may not be the right axis, but something like that. Yeah, right. There's a tradeoff to be made. And it's not clear what the right one
is. And there are many, many, many tradeoffs to be made. Yeah. Right. So the question is who... and this is my belief, by the way. This, doing this mapping from the physical world into the digital world, is a thing that people have been trying to solve for a very long time. When I was in school, the big thing was... are you going to say UML? Yes, exactly right. You know what I'm going to say.
Yeah, I'm a bit older than you, but it was still around.
Right.
UML, Rational Rose.
Oh, my God.
All of these tools that were like, you know, we're going to have all these things that
solve this problem for you.
And they didn't work, of course.
But it is my current... 4GLs and things like that,
yeah, came along and said, hey, you know, you'll never need to write another line of code.
You just draw a picture and it'll all just work.
Right, right.
So I have come to believe over the sort of
decades of this, that the least specific definition of what you need to create in order
to actually solve a problem on a computer is the source code. Anything more abstract than
that is not going to work because of this mapping problem. Because you're going to have things
that are unspecified, whether you know they're unspecified or not. You're going to have things
that are unspecified that need to be specified in order for it to work.
And so human beings have kind of invented programming languages as this incredible,
fairly objective thing, like the most objective thing that we've maybe ever made,
besides maybe mathematics, for how you actually solve a problem.
And you can't be any less abstract than that and actually solve
the problem.
Right.
So fast forward to today, and we have a new attempt, which is turning out to
be a much better attempt to solve this mapping problem, which is AI, and specifically LLM coding
assistants.
Specifically LLMs and maybe more specifically agentic coding, right?
Although there are other things.
Yeah, yeah.
Yeah.
Like it is conceivable, and I know that you have actually done this, to completely offload
this mapping problem to an agent and have it solve a problem in a way where it creates source code,
but you have not even read it, let alone understood it.
Correct, right? And that is not, and I'm going to reference what you were kind of saying a little bit before the podcast here, that is not completely different from what we've been doing for decades, where a manager says, hey, go build this thing. And the developer goes and writes some code and then comes back to the manager, and the manager doesn't read the code at all, just approves the pull request, whatever. And then they try it. And they're like, yeah, that doesn't work. Do the thing again. It's like we're doing the exact same thing. What was the phrase that you used for this?
I'll find it now. This is... I'm going to have to say thank you to my friend Jeremy, our mutual
friend Jeremy, in fact. Yeah. He says, managers have been vibe coding forever. Right. Yes.
And it's the loop of what the vibe coding looks like, and at the end, the manager says,
good job, but be faster next time, or worse. Right, right, right. So I think there was a
perception, I don't know, a year-ish ago, maybe a couple years ago when these tools started
coming out. And I know you and I maybe even have some conversations about this, of like,
you know, is software engineering as a profession going to go away because of this? Is this going
to just be, you know, non-technical people or maybe just like junior people who are right out of
school, who are not managers yet, using these tools to build software? And especially if you
look at some of the job numbers, like the, you know, numbers of openings and people who
are looking for jobs, it definitely seems like what has happened in the last year or two is
that it's the junior engineers who are not getting hired because what people are doing is
they're just putting these tools in the hands of very senior software engineers who are now
able to get way more leverage than they may be used to or at least the same amount of leverage but
with many fewer people, right? Like you don't have the thing of like, okay, we're going to give you
two or three, you know, less skilled junior engineers, yeah, whatever.
And then you can, you have this team and you just do this thing of like,
you sort of farm the work out to them and you do basically this like vibe manager
coding. Yeah. Yeah. And it's like, okay, don't do that. Just have like two or
three different Claude agents running on your computer at the same time. And there's your team,
essentially. Yeah. And you just do that. Right. And so that makes sense. That's like
a very rational thing for businesses to do. But, to go
back to your point. I think, you know, the point we were making earlier on was that the senior
developers would be the ones that would be being cut because they're expensive and their experience
and skill level could be replaced by a sufficiently intelligent AI system. Yeah. And then it just
takes guidance from either a junior dev or no dev at all because, you know, you just have the manager,
the non-technical stakeholder, just talking into a window saying, no, no, make it
bluer, no, no, no, now make it do this when you click, that kind of thing, or more generally, right?
Yeah, yeah, yeah.
But the surprise here is at least, fast forward a year, that does not seem to be the case.
In fact, the opposite.
It's more the senior folks like you and I who have gone, hey, this seems valuable.
And we can perhaps get some use out of it as a tool.
But the junior folks have not been able to get the same amount of leverage as we do,
but their role is being potentially usurped by the fact that, yeah, you don't need a
team of people anymore. You need one guy and the patience of Job to talk to so many forgetful,
stupid, you know, ADHD robots. But nonetheless. And obviously, I don't think we're there yet.
You know, you and I both work at companies where there still are junior folks around. But that was
the interesting point: that seems to be what is kind of happening, as best that I can tell.
And you know, I'm far from an economics expert here. And also, we only
have, yeah, our own vantage point from where we are, and we're in a particularly niche
industry perhaps as well.
Also very true.
You know, where technical knowledge and domain-specific knowledge is quite
important to what we do. And maybe in a more general world, where AI has been
trained on more publicly available information,
I mean, maybe there it's easier to replace folks.
I mean, certainly in my limited experience of the last
few months, where I've been less employed, fewer, fewer employed.
Fewer employed.
I've been using AI to help me on my open source projects, and that's all in
TypeScript and JavaScript and web.
And there's like thousands of websites that have good examples that presumably it's
been trained on.
When I go to conferences and talk to people, they have more equivocal results with more
technical things like the minutia of C++, especially some of the new things that
coming down the pipeline because there exists no training data.
And that's also true of me, right?
If someone were to come to me and say, can you write me a coroutine,
I'd be like, give me a few months to go and learn how to do that first, and then I'll
be able to.
But maybe it's our particular industry, and the intersection of the technology we use and the
specialized things we do, that makes it more likely that we still hire and use people.
I don't know.
I'm just spitballing here, as indeed this entire podcast is.
That's the whole point of the podcast.
But yeah, that's a very important
point: it's not like I've analyzed a bunch of economic data to come to this
conclusion, right? Like, this is all mostly from just my personal experience and the things that I've
seen in the industry. So, you know, take this with however many grains of salt you believe are
appropriate. Right. But the kind of result of this now is you have senior people. And I think
the explanation that sort of makes sense to me is that LLMs still require a decent
amount of babysitting, and you need to be able to know when they're hallucinating.
You need to be able to know when they're going off in a weird direction to correct them.
Otherwise, it's the blind leading the blind.
And that sort of explains why you get the most leverage from having one experienced software
engineer using all these various tools to build the things that you need, right?
And yeah, I mean, I was going to just interject there. I think that knowing about the
kind of hallucinations, or, I don't know if that's even such a loaded term these days, but you
know, like when mistakes are made and assumptions are made about, oh, there must be something
that does X, and then it says, hey, call X, and you're like, that's never existed. It would be great if
it did, right? Yeah. But identifying that, and keeping it on track, requires kind of slightly
different techniques or different skills than we've had to use before. Because perhaps, as an
experienced programmer who has led and mentored junior developers, you know, there is definitely a set
of mistakes that I have seen folks fall into, and sometimes I let them, and, you know,
you let them and then you guide them through the process of saying, why do you think that didn't
work and, you know, you use it as a teaching point, which is a valuable thing, I think, for everybody
to learn from. But those are very human mistakes and also the process of doing it that way
allows them to grow.
And LLMs will make those kinds of mistakes as well.
And so I think that if you have led a team before,
that being able to control a suite of LLMs,
some of those skills do overlap and you will go, yeah, yeah, please don't do this.
Whatever you do, don't comment out the test to make everything green
or don't check in without running things, which you see junior folks do sometimes,
you know, like, hey, that's there for a reason.
Don't just disable it because it didn't make sense to you.
You know, always ask me, all that kind of stuff. But the thing that's frustrating is that the
LLM will not learn in the same way as your junior dev. You know, the third or fourth time you say, please,
look, if I have to do this again, we're going to have to have a serious talk about this,
then, you know, the LLM is like, nope, you've told me a hundred times before, and I forget every single
time. I mean, like a happy puppy. But yeah, so I think there are some new skills, in my view.
Yeah, right. There are definitely some new skills for leading LLMs, and I think it's like
probably a 75% overlap with the kind of skills that help you with
a team of real humans. But there's definitely a 25% that's new for LLMs.
And there's also probably, I mean, I'm probably understating it,
a ton of much more important, you know, interpersonal skills and didactic techniques
that you can use when you're trying to help an individual learn.
And I forgot where I was going with this, but I interjected nonetheless.
You were in the middle of something, and I put my hand up and interrupted you.
Oh, well, no, I mean, it's just that, you know, the phenomenon now of having these sort
of senior engineers leveraged using these tools is great from an economic standpoint,
like it makes a lot of sense.
Right.
But the thing that I wonder about is if we are, you know, as an industry, setting ourselves
up for very future pain, because if you do this for too long and the senior engineers
are too successful.
So the problem with companies in general is that if you
have very talented people and you don't pay them enough, they'll leave. And if you pay them too
much, they'll leave because they're rich and they don't need to work anymore, right? So you have
to sort of, you know, strike this middle ground so they don't quit either way.
You're saying the only reason they don't pay me more money is because I might quit.
Well, let me ask you this. I mean, if you had, you know, hundreds of millions of dollars in
the bank, would you continue to work? Probably. But, yeah, probably not for you. Maybe for
yourself. Yeah, yeah. I'd never stop doing... I mean, we're the same. Right. Like,
yeah, I'm never going to stop programming because it's fun, right? But all right.
Yeah. Your point, this is my point, is you've got to pay people the right amount.
So eventually, though, they are going to make enough money or they're just going to get to the
point where they want to retire and they're going to retire. And the question is, is if I've just
been teaching robots for the last 10 years, who's going to take over my job? Who's going to start
being the person commanding the robots? Right. And this is a.
problem that capitalism is exceptionally bad at solving, right, of these sort of like long-term
systemic problems. And so I think what may happen is companies will start to discover
that as the junior engineers who are currently looking for jobs sort of diffuse into other
industries because they don't find any of the opportunities that are there, that they're going
to start to have a really hard time finding the people who can lead the robots.
And what it might be is kind of like this pinch point of like, okay, well, we got all these
productivity gains from structuring things this way.
And it made complete sense for this quarter and the next quarter.
But 10 years on, now we can't find anybody that understands any of this or how to make any of it
work because they don't work in this industry anymore.
Or those people maybe have just gotten very expensive because there aren't that many of them
that sort of survived the filter. Right, right. They will be the COBOL programmers of their day.
Yeah, yeah. Like, it'll just be an even harder thing than it was,
you know, in 2021, when there were ridiculous salaries being handed out because
you couldn't find enough programmers, right? On the flip side of this, just thinking of it from, like,
the sort of trader perspective, it probably also
means that in the next five to ten years, there are going to be some really, really good, quote-unquote, junior programmers out there who could run circles around anyone else in your organization with 10 years of experience or 20 years of experience. And if you can find them, they will be worth their weight in gold.
That's an interesting observation, to turn it that way around. Yeah. No, it's interesting you say that this is a problem with capitalism. This is not a politics podcast, obviously, so we won't go too much into the other choices that
are there. But I see it as almost like a local minimum. We're heading towards a local
minimum, and nobody is going to do the sort of global optimization of, like, well,
then we just have to do the simulated annealing step of throwing some randomness into it, just to
see, because we just need to get out of this potential local minimum. But to sort of counter
that, obviously the more AI proponent or, I don't even know what the right term would
be, but someone who is more confident in the role of AI will just say, well, AI will get
better.
And actually, even the senior engineers won't have to be around anymore.
We can just, anyone can ask that question.
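[Editor's aside: for listeners who want the annealing metaphor made concrete, simulated annealing is an optimization trick that occasionally accepts a *worse* candidate so the search can hop out of a local minimum. A minimal sketch; the bumpy objective function and all parameter values here are illustrative choices, not anything from the episode.]

```python
import math
import random

def simulated_annealing(f, x0, steps=20000, t0=2.0):
    """Minimise f, sometimes accepting uphill moves so the search
    can escape local minima; the acceptance chance shrinks as the
    'temperature' cools."""
    x = best = x0
    for i in range(steps):
        temp = t0 * (1.0 - i / steps) + 1e-9       # linear cooling schedule
        candidate = x + random.uniform(-0.5, 0.5)  # small random move
        delta = f(candidate) - f(x)
        # Always accept improvements; accept regressions with
        # probability exp(-delta / temp): the "throw in randomness" step.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        if f(x) < f(best):
            best = x
    return best

# A bumpy objective: many local dips, global minimum at x = 0.
def bumpy(x):
    return x * x + 10.0 * math.sin(x) ** 2
```

A purely greedy descent started at, say, x = 5 can stall in a nearby dip; the occasional accepted uphill move is what lets the search keep exploring, which is exactly the "escaping local minima" behavior the hosts are joking that capitalism lacks.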
And I think obviously that gets to the heart of the original thing that you brought up,
which is that there are a ton of unknown unknowns that something or someone has to
either make a call on explicitly and say, look, based on my years of experience,
this is a database thing, we should probably use blah,
versus this is a trading application.
We should probably do something different.
Or you have to ask the person who's asking a question,
should we optimize for throughput or latency, at which point they're going to say, the what now?
And then so, yeah, but where will that line be?
Right. I mean, the premise here is that although AI is great, it still hasn't actually solved the mapping problem, where the mapping problem is: map the physical world into the digital world in a reliable, sustainable, continuous way. Obviously, you can have somebody sit down who knows very little about programming and interact with Claude and build a simple app, right? Like, those things are absolutely possible. It remains to be seen whether you can build more complicated things that way. It remains to be seen whether
you can change things that have been built that way and revise things that have been built
that way and grow and scale things that have been built that way, it's possible that maybe one
day, you know, Claude version 75 will be so good at all of this that it really has,
like, sort of fundamentally solved the mapping problem, where you can have a completely non-technical
person, somebody who does not really understand how computers work at all, sit down and build
a horizontally scalable web app or a trading system or, you know, a, you know, microcontroller
for something that's going to go in the space shuttle.
And all of that stuff will just work.
It's entirely possible.
It's possible.
Yeah, it's hard to see.
I could also see a world where that problem is so difficult that it doesn't actually
get solved.
Well, I mean, that problem is so difficult that it often doesn't get solved even with experienced
programmers and experienced, you know... this is a difficult thing to get
right. But, I mean, so you mentioned earlier the fact that I had had some limited
experience of like this whole closed loop vibe codey type thing where I haven't even looked at the
code of something. And that is, that is true. But uniquely in the situation I was in, I had a very
clear goal and a very clear output that I was looking for. And I didn't really care how we
achieved that goal, it was unimportant to me. And that was a surprise to me, because normally I have
strong opinions about how the software should be developed, and I actually care about that more
than the outcome of the software. And I think we've talked about this before. I think there are
some people for whom the goal and the result is the important aspect. And then really the difference
between, you know, somebody who is, I don't know, a quant or a physicist or whatever. And a programmer
is the programmer is like, yeah, I want to do a particle system, but it's going to be a really
nice piece of code. And I'm going to enjoy that process. And that's what I, that's what floats
my boat. And that's definitely 100% me. But in this particular case, I just didn't care. I'm like,
do a thing, make it happen. You know, yeah. I've done it a couple of times. Another one was like,
you know, hey, I've got some YouTube videos. Can you just write a thing that tells me when there's a
new comment on them, please? Because I don't own the videos, so I can't subscribe to them myself and get
notifications. But if I run a Python script once a day, it can keep a .txt file of the ones it's seen
before and just tell me. And that was, like, again, I don't care how that works. And it seems to
work. That use case probably covers a lot of the need for technology at companies that aren't
technology-focused. Right. Your HR company that just says, I need a thing that goes through all the resumes
and gives me a list of everybody's phone numbers or whatever. Yeah, yeah, those kinds of things.
And so, for those, I can see they don't care how maintainable it is, or maybe they don't even care
if it's reproducible at all. It's just a one-off, and then they're done with it, right? And that's
fine. Write-once, read-never software, right? And that's kind of, yeah, it's sort of
anathema to everything that you and I stand for. But also it is pragmatic.
It's very pragmatic. And honestly, I've been doing some more of that myself lately, and I try to be
very intentional about it is like, I am going to have Claude generate this Python script
for me, and I am basically going to test it empirically. I'm going to run it. I'm going to
see if it produces the result that I want. And if it does, I don't really care how it works.
I might read it. I might not. Right. And for certain tasks, that makes a ton of sense.
It's incredibly fast. And I can get enough confidence that it does what I wanted to do by running it,
right? But then there's a sort of tradeoff, where that
one-off tool becomes more and more useful, you start relying on it, and then eventually
you have to turn it into a proper tool. And now you're like, oh, gosh, do I treat it as a
prototype? Do I redo it myself? It's interesting. But then, you know, again, I think we've
talked about this before: I do a lot of open source software, or I have in the last,
you know, few months. And to some extent, all of the code that I maintain is like that,
because it was written by other people that I don't even have the hope of contacting anymore,
necessarily. So they might as well have been an agent that fired up, built up its idea,
submitted a small change to the code and then disappeared never to be able to be contacted
again. Because I have this weird language that somebody sent for Compiler Explorer, and I
haven't got a clue what it is. Now, I have reviewed the code, but it's still a feeling of, like...
There is a sort of sense in which, as team size grows, either open source or closed source,
you end up looking at code that you don't recognize. It's just whether
or not it fits your style, whether you've got some overarching idea about how the whole should
be put together, which maybe we talked about before. I think, yeah, we have. I'm sure
we've talked about the fact that, like, setting a project up well for success as a senior engineer,
with all the things that come with it, can help everyone, including AIs coming in on it. But, yeah.
Yeah.
But yeah, I don't know.
I just think that there's a thing that is likely to happen, that I think very well could
happen.
And again, you know, I'd kind of say, you know, capitalism is famously bad at solving these
types of problems, where it's like it's stuck in a local minimum.
It makes a complete ton of sense why you would be in that local minimum.
But there will come a day when you try to...
And your shareholders will ask questions if you say, yeah, we just hired a bunch of people
to do work that we can do with the computer.
And why have you done that?
Well, because we need to bring them up to be the next middle tier.
And that's a harder thing to convince people to spend money on.
Right, right.
Like you might be able to have something in much larger companies where you have a little bit,
but certainly like medium and smallish companies in particular.
And really like you say, you know, if you have that sort of like shareholder question of like,
why are you guys hiring all of these expensive, you know, MIT grads when you could just take
your existing engineers and have them be just as productive, if not more, by just, you know,
giving them a $100-a-month, you know, Claude license or whatever.
But then, I mean, again, taking the other side of that, again, you know, in terms of R&D,
companies have always, you know, forward thinking companies have always had like,
hey, we've got a department of people who just sit around and tinker with things and the hope
that we find something cool and new.
Yeah.
The trick with that, I suppose, now I think about it out loud, is that the results of that,
whatever they are, positive or negative, are owned unambiguously by the company.
You're creating intellectual property for the company, whereas when you're investing in the
brain trust of new grads, you are investing in that person, which is wonderful and thoughtful
and helps the world, but does not specifically help you unless you can retain that person,
which is not necessarily possible.
Right.
And so you are letting, you're giving somebody a gift, as it were, that your competitor could
benefit from. And then, yeah, back to your point about capitalism. Right, right. Yeah, I have had
some discussions with people who may be in school in a computer science program, and I think
we both know who they are. I have no idea. Right. And one of the thoughts that I had in the
course of those discussions is, when I was graduating, and certainly throughout my career, one of
the most important things as a new grad, as a, as a junior person, was that you demonstrate that
you can learn quickly, right? Like, that's always sort of the thing is like, we're not expecting
you to be an expert in any of this. That would be silly. We just want to find people who can
learn quickly, right? And I think that's always going to be true to an extent. I mean, no one
wants to hire people who learn slowly, right? Like, so it's a little silly. But there
may be a little bit of a shift in emphasis, I think, that is created by this local minimum, where
it's like, yeah, yeah, okay, you can learn fast. Everybody can learn fast. What can you do? Because we're
going to give you, you know, your Claude licenses and a development environment, and it's cool if you can
learn stuff, but what we want you to do is build this thing. And it has to be right. And it has to be
good. Yeah. And so I kind of wonder if the thing, the thing to do is to say, you know,
if you are a person who's in school, if you're a person who's trying to get a job. And this was
always kind of true, but I wonder if it's becoming more true is the thing to do is demonstrate
that you can get things done, that you can build stuff, right? The old Joel Spolsky test of,
you know, smart and gets things done, right? Yeah. But then,
there's still the sense that just getting things done does not give you the opportunity to learn the skills that will allow you to get better, necessarily.
Maybe you just have to balance those things.
Yeah.
I mean, I certainly know that talking to some junior folks that like some of them avoid using any kind of AI because they perceive that they won't learn from it, which is valid, absolutely valid.
I mean, there's nothing like doing things to learn things.
but balancing that with the sort of the changing winds of our industry that this is an enabling
technology and finding the right balance.
And I wish I had better advice for them when they come and say, like, hey, no, I just
won't use ChatGPT.
I won't use Claude.
And you're like, well, you know, everyone is.
I think, yeah.
I was actually chatting with a friend of mine who is a theoretical physicist at a university.
And we were talking about AI usage in general.
He said something like, I have to kind of essentially ban my students from using it, even though I use it myself
daily. It's an invaluable tool for doing certain things. And he and I actually
mooted some ideas. I don't know if I've already said this before. And I realize
we're getting on in time here, so I have to be a bit thoughtful about not starting yet another new topic,
despite the fact that we've just said, hey, we should probably look at the end of each episode
and see if there are topics we could continue. But no, I wonder if there is a sweet spot for AI,
a bit like NP-hard versus NP-complete, or one of those things, anyway.
I forget which one's which, right?
But my understanding, limited as it is about these things, is that there are some problems
that are NP-hard, which means, I think, that there is no known polynomial way of solving them.
And for a subset of those, there is a polynomial way of checking that the answer is right.
So, for example, if I said to you, find me a route through America going through every single
city that is less than 5,000 miles total, then it's NP, you know, it's stupendously difficult
to search that space. But once you've found a solution, I can check it trivially, just by following
it along going, oh, it's less than 5,000. Prime factorization is another
one of those, right? Perfect example. Yeah, you just multiply it all together. And if you get the
thing, then yeah, that's right. Exactly. Perfect. Exactly. But there is another family,
I think this is the NP-complete one, but again, mathematicians who are listening, I'm so sorry
if I'm getting this wrong, where not only is solving it NP in terms of time complexity,
but checking the answer is too. So like the classical traveling salesman problem,
which is to find the shortest route. In order to show that it's the shortest, you have to essentially
show that no other path is shorter, so every other path is longer. And so where does the analogy break down
with AI? AI, I think, can be very useful for the kind of problems, whichever one that is, that
can be polynomial-time checked. That is, yeah, easy to verify. Right. That's where I'm going with
this. Exactly. Thank you for saying it much more succinctly than I did. But yeah, so again,
physicist friend is like, yeah, I can set it on certain things. And as long as it gives me
the citations of papers, which I can click on afterwards and then read the ones it's found,
there's obviously a risk of errors of omission, where it didn't read things that might be
important. But the ones it did find or the ones that it summarized in a particular way,
I can read through myself and go, yes, this seems reasonable. And it was much quicker than me
reading them. But then there are the other problems where you know, you just have to kind of go,
I guess it's right. And I suppose to an extent, actually, if you're a non-programmer and you asked
AI to make you a Python program and it's generated 3,000 lines of Python, you run it and it appears
to work, I mean, you know, maybe that is the first category, because you can
sort of check that it's right, but you still don't really know that it works. I would, yeah, I would,
I would argue that if you really generated a 3,000 line Python program, unless there are, there's
no conditional logic in it whatsoever, and it is just straight through 3,000 lines executed
the exact same way every time, there's no way that you can actually know that that's correct
in all cases.
Right.
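The easy-to-verify idea from a few minutes back can be sketched in a few lines of Python. This is just a toy illustration of checking candidate answers cheaply, not anything rigorous about complexity classes; the numbers and helper names are made up for the example:

```python
# Checking a proposed answer is cheap, even when finding one is hard.

def verify_factorization(n, factors):
    # Multiply the claimed factors back together and compare.
    # (We take the primality of each factor on trust in this toy.)
    product = 1
    for f in factors:
        product *= f
    return product == n

def verify_route(leg_distances, limit_miles):
    # A claimed route is acceptable if its total length is under the limit.
    return sum(leg_distances) < limit_miles

print(verify_factorization(15, [3, 5]))        # True
print(verify_factorization(15, [3, 7]))        # False
print(verify_route([1200, 900, 1500], 5000))   # True
```

Finding the factors or the route might take an enormous search, but these checks are linear in the size of the answer, which is the asymmetry the conversation is leaning on.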
You know, to your point earlier, you said you just ran it a few times and empirically
it's okay, and that kind of verification seems almost okay to me.
And I mean, usually what I, to be honest, what I'm usually doing there is looking through
the conditionals being like, can I delete this?
Oh, that's an interesting way of doing it.
No, I mean, this is what I did with bash scripts forever.
I would, I intentionally try to write bash scripts as, well, I never found a good
unit testing framework for bash. I think we talked about this. We have, actually. Yeah, that was
what I was thinking of. Yeah. And so my de facto method for writing bash scripts, up until the, you know,
creation of Claude, which I now just use, was: take an empty file, make it executable,
put it in watch, and have it run over and over and over again,
and then try to solve the problem by writing as few conditionals as I possibly can, right? Just have it go
straight through, there's only one code path through this, and I can just tell empirically that
it works because it prints out seven at the end. Right. Yeah. And that was
my method, because it's like, okay, it's dumb, but there's only one possible way to run this
program, and the one way works empirically. So that's good enough for now. Right. Right. It's when you have
the, oh, it handles this case, and it handles that case, and a combination of these two cases,
that's when you start introducing bugs without realizing it. And that's when you actually need to
understand all of the possible paths through the code, and hopefully structure them in a way
where there's not, like, two to the N of them, right? Because otherwise, you're just going to
build unreliable software, right? So I completely, like, I think that, you know, this sort of vibe
model of coding, especially when you have non-technical people doing it, is really like
a rising tide lifting all boats. It's making people
much more productive. But if we start to try to build complicated systems with this,
I'm not really sure that that actually works. Because at the end of the day, the context window
of the AI can only be so big and it can only remember so many things. Yes. And if somebody
asked the question, why does this not work? There's not going to be anyone that can provide an
answer. That is more to the point. But yeah, I think the context window thing is a whole other thing.
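Ben's point about conditionals a moment ago can be made concrete with a tiny Python sketch, assuming (as a toy model) that every branch is independent:

```python
from itertools import product

# Each independent conditional doubles the number of distinct paths
# through the code: N branches means 2**N paths to reason about.
def path_count(num_branches):
    return 2 ** num_branches

# Enumerating every outcome of 10 independent branches:
outcomes = list(product([True, False], repeat=10))
print(path_count(10), len(outcomes))  # 1024 1024

# A straight-line script has exactly one path, which is why
# "run it in watch and check it prints seven" is a workable test.
print(path_count(0))  # 1
```

Empirically running such a program a few times exercises a handful of those paths at best; the rest go untested, which is where the unnoticed bugs live.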
There are obviously technical reasons they're upping those and stuff. And then, you know, the fact
that my own context window as a human is just getting, as I'm getting older, much smaller.
And you rely on prior brain frog.
Right.
Yeah.
Brain frog.
I don't know.
There could be a frog in there as well.
That would explain a lot.
That's a whole other symptom of COVID.
And trust me, you don't want that one.
Oh, man.
Yeah.
No, I think that those issues will start to be addressed.
That's where the innovations are happening, as far as I can tell, in the AI world now.
But just one final thing as we're hitting towards the end of one of our longer episodes here.
In, I think, 1986, Fred Brooks of Mythical Man-Month fame wrote an essay called No Silver Bullet.
And I've been ruminating on that a lot recently.
As I've gone through the arc of my own LLM usage, from: oh my gosh, this is mind-blowing and I think it's going to change the universe. To: this is terrible.
This is worse.
I'm wasting my time on this. To,
maybe, maybe not. To now, reasonably: it's another useful tool in my arsenal. Sometimes I can fire
it at a couple of low-hanging bugs and I can then PR it, and I'm not sure if that's faster or not, I don't know
yet. Sometimes it can help me by doing things that I'd never think of doing; other times it gets in my way. I don't
know. It's a tool. It's like grep, it's like sed, it's a bit more involved. But to your point earlier,
I think it's got the biggest chance of being a silver bullet out of all the things that we've seen so
far. But, you know, Fred's point was everything has come and gone, and nothing has
ever fundamentally changed the problems of software engineering. That is, we need software engineers
to do software engineering practices. And I think it's probably worth, yeah, a reread.
Yeah. There's always going to be somebody who is ultimately responsible for making sure the
technology works. And that person is going to become a technologist, whether they want to or not.
Because they'll just, there'll be situations where they're like, why doesn't this work?
It's my job to figure it out.
I guess I need to figure that out.
Yeah.
And that's just, there's no avoiding that.
They'll work with an AI to, like, kind of say, why is this not working, and have that sort of
debugging experience, pair programming experience, but they are still being,
yeah,
brought along for the ride on the technology journey, and there's no evading it.
And maybe that's hope for us all.
Yeah.
We'll see.
All right.
I think we, uh,
We have reached a good enough stopping point for now.
Yeah, that works.
And I will see you next time, my friend.
Until next time.
You've been listening to Two's Complement, a programming podcast by Ben Rady and Matt Godbolt.
Find the show transcript and notes at www.twoscomplement.org.
Contact us on Mastodon.
We are @twoscomplement@hachyderm.io.
Our theme music is by Inverse Phase.
Find out more at inversephase.com.
Sorry, I was laughing at myself so much.
I didn't actually press the record button.
I do love to give myself work.
