PurePerformance - AI-Ready Codebases: Engineering Discipline for Agentic AI with Adam Tornhill
Episode Date: March 30, 2026
In this episode, Andi and Brian welcome back Adam Tornhill, founder of CodeScene and author of Your Code as a Crime Scene, to explore how agentic AI is reshaping software engineering. Adam shares his personal journey from 40 years of hands-on coding to orchestrating AI-generated code, and what this shift really means for development teams.
Together, they dive into new research on the hidden risks of AI-assisted coding, why low-quality or legacy code slows AI down, and how to measure the "AI-readiness" of a codebase. Adam breaks down practical strategies from his latest work on agentic AI coding, including guardrails, refactoring patterns, enforced processes, and why test coverage has become a surprising cornerstone for safe, fast AI iteration.
Whether you're experimenting with AI coding tools or planning enterprise-scale adoption, this episode delivers actionable guidance rooted in data, engineering discipline, and real-world experience.
Links:
https://codescene.com/blog/agentic-ai-coding-best-practice-patterns-for-speed-with-quality
https://codescene.com/blog/strengthening-the-inner-developer-loop-turn-ai-into-a-reliable-engineering-partner
Transcript
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello everybody and welcome to another episode of Pure Performance.
My name is Brian Wilson.
And as always, I have with me my amazing, wonderful co-host, Andy Grabner.
How are you doing today, Andy?
Very good.
It seems like we haven't done this in a while.
It has, yeah.
What the listeners don't know is that you just needed a second attempt for the introduction.
Right, it's been so long.
It's been so long, yeah.
And which also tells me that you are not an AI because an AI would have not made that mistake.
Right, right.
But before we go to AI, because I am excited about today's show,
I have to let you know I'm disappointed in you.
So any eagle-eyed listeners who look at the thumbnails might notice,
I have new colors on my microphone,
And I've had this for a few weeks now and nobody that I interact with that work has noticed it.
And I figured, well, Andy'll notice, he always, he sees me on the mic all the time, he'll notice it.
And you didn't.
So I got some nice little overlay things for my mic.
It looks like some lemon-lime sherbet or something, but I figured I'd get something for me, Andy, but you let me down once again.
Well, I'm really sorry for that, but you know, that's...
But you know who won't let me down.
Yeah.
Our guest.
Good code from A... I.
Good code from the AI, exactly.
And for this, this is kind of like, for me, it's amazing.
It's been six and a half years since we recorded the first episode with our guest.
Back then, the session was called Code as a Crime Scene: Diving into Code Forensics,
Hotspot and Risk Analysis with Adam Tornhill.
Adam, thank you so much for being back on the show.
Six and a half years later, I'm pretty sure.
A lot of things have happened.
We will talk about this.
But first of all, for everybody out there, who is Adam Tornhill?
Well, first of all, thank you very much for having me back on the show.
So I'm Adam Tornhill.
I'm a programmer based in Sweden.
Been doing software development for a long, long time.
I'm still enjoying it, perhaps more than ever.
I'm also an author.
And the best known book is probably Your Code as a Crime Scene
that came out in a second edition a couple of years ago.
I'm also the founder of CodeScene, where I work a lot with code analysis and automating the stuff from Your Code as a Crime Scene.
So that's like my life.
And when I'm not coding, I'm super interested in retro computing and music and history in general.
That's pretty much me.
Now you need to fill me in on retro computing.
That means you have old hardware lying around and you play with it?
Do you have a Sinclair?
No, not Sinclair.
That's too modern.
I go to the 1970s, like the Atari 2600,
Intellivision, that kind of stuff.
I really love to play with these old machines.
Well, so is this, I mean, unfortunately,
I'm completely not into this type of history in IT.
My first computer was a Commodore Amiga 500 from the 80s.
I'm just curious, how do you still get spare parts for these things?
Obviously, you have to repair everything, or is there still an industry around this?
Can you still buy these things?
You can pretty much still buy the original hardware if you want to. I'm cheating a lot.
I'm using a lot of emulators because I'm mostly interested in the software.
Okay, cool. Awesome. Well, I learned something new about you. I didn't know that.
Hey, Adam, I remember the reason why we had the first podcast is because
we met each other at a conference.
And I don't remember, but it was probably in a conference in Europe,
a conference related to code quality, to software engineering quality,
because that's kind of my background, your background, Brian's background.
And for those that didn't yet listen to the Code as a Crime Scene,
or haven't read your book or haven't read any of your material on that,
can you quickly give a quick overview of Code as a Crime Scene
and what you have done back then
that kind of now also resulted in a second edition of the book?
Yeah, I'm going to make an attempt.
So Your Code as a Crime Scene is very much about the idea that the bottleneck in programming is not writing code,
it's understanding existing code; that's where we spend most of our time.
And Your Code as a Crime Scene is a set of techniques that help you pick up a potentially large system,
a large code base, and very quickly identify where the main bottlenecks are.
And those bottlenecks, I call them hotspots because they are development hotspots.
So these are overly complicated pieces of code that we need to work with often.
So what I do is I call this behavioral code analysis,
is that I combine code quality information with behavioral data from our version control history.
And that enables a bunch of interesting use cases.
And the book Your Code as a Crime Scene was written based on the patterns and techniques
I had developed during my years as a consultant.
And after writing the first edition, which was in, what was it, 2014,
after writing that, that's the reason I founded CodeScene,
because I wanted these techniques to become accessible and mainstream.
So there had to be good tooling behind it.
So that's pretty much it.
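The behavioral code analysis Adam describes combines a complexity measure with change frequency mined from version control. A rough, hypothetical sketch of that idea, not CodeScene's actual algorithm (the file names, and lines of code as a crude complexity proxy, are made up for illustration):

```python
from collections import Counter

def hotspots(commits, complexity):
    """Rank files by change frequency times a complexity proxy.

    commits: list of lists, each inner list = files touched by one commit.
    complexity: dict mapping file -> lines of code (a crude proxy).
    """
    # Churn = how often each file shows up in the version control history.
    churn = Counter(f for commit in commits for f in commit)
    # A hotspot is code that is both complicated and changed often.
    scores = {f: churn[f] * complexity.get(f, 0) for f in churn}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: billing.py is large and changes constantly, so it tops the list.
commits = [["billing.py", "api.py"], ["billing.py"],
           ["billing.py", "util.py"], ["api.py"]]
complexity = {"billing.py": 900, "api.py": 200, "util.py": 50}
print(hotspots(commits, complexity)[0])  # ('billing.py', 2700)
```

In practice the commit lists would come from something like `git log --name-only`, but the ranking idea is the same: effort concentrates where churn and complexity overlap.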
Now, for me, this sounds, and I'm going a little bit into a direction,
because I'm interested in this now, I want to understand,
if you think about, you said, understanding existing code is a very hard thing.
And from what I know about the topic of today, you know, AI coding tools,
it feels like this is actually a big strength of AI coding tools: understanding existing code, or at least trying to understand existing code bases, and then giving suggestions on how to improve them,
based on your work, on your research.
Is this true?
Can AI understand especially very old code bases,
code bases that have been lingering around?
Sometimes we call them legacy code bases,
because I'm sure you have a lot of experience in this as well.
Or is there somewhere a line where you say
AI is not as good as we hope because there's not enough material,
it has not been trained enough.
Just curious to hear from you.
Yeah, sure, sure.
And I'm going to try to keep the explanation brief.
But basically, if I take a step back and look at us humans, human developers writing code,
code quality has always been important, whether we have acknowledged it or not.
And there is previous research.
Me and my team did a piece of research that was called Code Red,
The Business Impact of Code Quality, a couple of years ago.
And what we showed there was that code quality has a massive impact on development time and defects.
And this is of course unsurprising to anyone that has worked on legacy systems.
But what we found now was that code quality is even more important to an AI than to a human.
So we have done a new piece of research that is called Code for Machines, Not Just Humans,
where we study the actual performance of AI
as a function of code health, of code quality.
Wow.
So many new questions.
So because one of the things that I,
and obviously I only know a fraction of what you know on that topic,
so excuse me if I ask stupid questions.
But obviously the machines, the AIs, the models are trained
based on available data out there.
Do we have any idea how much of the code that was used to train is actually high-quality code versus code that is not the quality that we want, and therefore resulting actually in bad quality again?
Do we have any idea?
Yeah, I can give you a pretty good idea on that because we've been benchmarking a lot of large-scale code bases over the years.
So the metric we used to assess code quality, we call it code health.
It's an actual metric.
It's a composite metric based on 25 code smells that we try to identify in the code,
mostly concerning structure and design.
And what we find is that the maximum is 10, of course.
That's the perfectly readable code.
And then it goes all the way down to one, which is code that you never ever want to see.
And the average in the industry is 5.15.
So roughly in the middle of that scale.
And what we consistently find in our research is that if you want
to keep AI-induced defects low,
if you want to prevent AI bugs,
then you need to have a code health of at least 9.5
in order to be able to fully accelerate with agentic AI.
So I would claim that there is definitely a massive disconnect
between what the typical enterprise codebase looks like
versus where an AI can do a really, really good job.
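The shape of a composite code health score, a 1-to-10 scale degraded by detected code smells, can be sketched like this. To be clear, the smells and weights below are invented for illustration; CodeScene's actual 25-smell metric is not public in this form:

```python
def code_health(smell_counts, weights, worst=1.0, best=10.0):
    """Toy composite score: start at 10 (perfectly readable code),
    subtract weighted smell penalties, clamp to the 1..10 scale."""
    penalty = sum(weights.get(s, 0.0) * n for s, n in smell_counts.items())
    return max(worst, best - penalty)

# Hypothetical smells and weights -- not CodeScene's actual smell catalog.
weights = {"long_function": 1.5, "deep_nesting": 1.0, "duplication": 2.0}

# Two long functions and one duplicated block drag a file to the
# industry-average neighborhood of the scale.
print(code_health({"long_function": 2, "duplication": 1}, weights))  # 5.0
```

The point of the clamp is the floor Adam mentions: no matter how many smells pile up, the scale bottoms out at 1, "code that you never ever want to see."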
Now, do you know if the big companies that are currently offering coding agents,
whether it's Anthropic,
whether it's OpenAI, whoever,
do you know what type of material they use?
Have they ever reached out to you?
Because I think they will be very important, right?
I mean, these models, they need to be trained on something that has a 9.5,
as you said.
Yeah, that's a very good point.
Because I think that's one of the missing pieces for these AI coding tools.
What we also find in our research, which is very interesting,
is that once you start to give the AI additional context information on why isn't this code a perfect ten, what's missing.
As soon as they get this objective measure, they can form a goal and they can build a plan,
and they do a really, really good job at remediating potential code health problems and issues they have introduced themselves.
So this contextual information, it's really, really key.
So, yes, I think that would be a really, really interesting use case to get closer to these
AI coding tools and agents.
You know, I think it's interesting to point out
just for, you know, the average person, right, who thinks, oh,
you know, AI is intelligent.
You know, I keep making the case that it's not.
It just does what it's, you know, trained on and all.
But to your point, right, it's only going to code as good as the code it learns from.
It can't analyze the code to say this is good code or bad
unless you, as you said, give it that further learning, those further prompts to teach it that this is why this is bad, don't make these mistakes.
But again, I have this argument with my wife and kid all the time.
Like, no, it's not intelligence.
It's just doing what it learned, you know.
If it were to turn around on its own and say, this code is bad, to me, that's where the intelligence is.
But it's really, you know, it's really good to hear that, like, if you can prompt it to say, this is why it's bad, this is what's bad, it can pick up
on that. And then, I mean, it makes sense that it can do that. But you know, I think these are
probably things people who are using AI for coding are
either not aware of or aren't thinking much about yet. Which is why, again, you know, looking at the
notes for the show, I think this is a lot of fascinating topics. So I'm going to shut up so we
can continue getting on to them, because I think there's a lot of great stuff for you to share
with the audience today.
Adam, obviously you're looking at your own personal history.
Just like you, Brian and I, we've started in a time where there was no AI coding available.
So we obviously had to learn the craft from the bottom up.
I think I consider myself very fortunate that I learned software engineering already in high school
and then kind of took it from there and learned more and more.
And obviously for you, the same thing.
we've been around for quite a while.
Since when have you been kind of exposed to AI coding tools
and maybe some of the lessons learned
or some of the things you have seen from somebody like you and I and Brian
where we have a big background in software engineering,
how do those people use maybe AI tools differently
than somebody that now sees an opportunity to say,
hey, we can now finally bring our dreams to life
because we can now use all these coding tools,
even though there's no real background in software engineering.
Do you have any insights here?
Yeah, I mean, I can start at a personal level.
So I have to say, I was quite an early adopter of AI coding tools
because I saw some promise in AI to solve a really, really hard problem
I've been struggling with for years.
So with the Code as a Crime Scene techniques and with CodeScene,
we have always been quite good at identifying bad code that is hurting the business.
But that's really only the first part of the problem.
The next part is, of course, that now someone needs to act upon this.
And that means improving and refactoring the code.
And refactoring really is an expert skill.
I mean, it's a learnable skill, but at least for me,
it took like 10 years before I got any good at it.
So it's a really, really hard skill.
And this is where I had a big hope for AI
because I thought if we refocus AI
instead of focusing AI on generating more code faster,
what if you used AI to uplift existing code
and take away this bottleneck?
So we did some early research on that in 2024,
published a paper more than two years ago,
using AI for refactoring.
Back then, it wasn't really a viable option.
But I've been following along closely.
And starting five or six months ago, I went over and started to write all my code using coding agents.
So after almost 40 years of coding and studying programming,
I haven't manually written a single line of code in months now.
So this was quite a big shift for me.
However, what I find is also, and yeah, this might be my experience,
but what I find is that I could never have adopted agentic AI the way I do
if I didn't have all these decades of experience behind me,
because those decades kind of taught me the high-level patterns,
how do I reason about architecture and design,
how do I modularize stuff.
And it also taught me to quickly read a lot of code,
because I find that even if AI speeds up the coding process,
you still have, you know, the major bottleneck becomes validating whatever the AI built.
So there I'm really, really grateful that I've been
learning all their good engineering and delivery skills.
Just before this recording,
I had a solo recording because it was a little too early for Brian.
Again, sorry, Brian,
that you couldn't be part of that previous session with Wolfgang and Benedict.
But we also talked about what is a good capability,
but what is a good attribute for a good software or AI engineer to have?
And I then came up with the analogy.
I'm not sure if it's the right word, but I remember we talked about T-shaped people,
T-shaped engineers, meaning you go broad, but you don't have to go deep on everything.
Broad meaning, ideally, you fully understand
everything it takes in software engineering: understanding the business problem,
crafting the requirements,
writing the code, writing the proper tests, knowing how to deploy, how to observe, how to secure and operate, right?
Ideally, you know everything.
And then you don't need to go deep in every single aspect because you may have your human colleagues,
or now your AI agents, that do a much better job when talking with you, when prompting
with you, to then do a certain task.
but you still need somebody that understands everything end-to-end.
Does that actually make sense?
Yeah, Andy, sorry, continue.
Yeah, this is why I think I wanted to ask you, Adam.
I think we have the benefit of having been in the industry for so long
that we have seen good and bad architectures,
good and bad examples of software that solves,
or doesn't solve, a certain problem,
and we have a good understanding of the end-to-end software
development lifecycle. But we don't need to be a deep expert in the latest runtime of Java, in the latest cloud native technology, in the latest observability trend, because this is where we can then use our agents.
And I just wanted to confirm a little bit if this makes sense.
To me, it makes a lot of sense because that's what I'm finding that, to me, being fully agentic in my
coding, it doesn't fundamentally change my experience.
It feels very much like it always did.
The big difference is that I can take on larger and larger iterations, right?
So the feeling is still very much there.
And I think that's the big benefit of AI, that, you know,
I can start thinking in bigger patterns, bigger building blocks.
So for me, it's a little bit similar to the feeling.
You know, when I was starting out as a junior, just learning to code,
you might focus a lot on syntax, like how do I write the for
loop, right? And, you know, becoming more senior, you quickly start thinking in
algorithms or maybe complete patterns. And to me, AI is a very natural evolution of that.
But I agree, everything that happens, I mean, coding is really just a small, small part of
software development. And we need to maybe not be experts, but at least have a good understanding
of what makes a robust software delivery pipeline. How do I guarantee that the code is secure,
that I can continue to develop it and change it a year from now.
And I don't think there are any shortcuts.
I think we need to live through this, right?
Live with a system for an extended period of time to learn those skills.
You know, it's interesting, Andy.
When we were talking with Jeff Blankenberg a little while back,
he had the same concept, right, of still have to check the code, right?
But it sounds like we, you know,
you know, we're going to be more development designers or designers instead of the actual,
like if you think about even like in fashion or architecture, right now might be getting some
of this analogy a little bit wrong because I don't know what goes deep into it, right?
But if you're going to design an outfit, right, you may or may not, you're probably not
going to be crafting or putting that outfit together, or at least beyond the prototype.
But you have to know what's possible.
you have to know if you're going to have some kind of fancy structure going on
and you need to know can that be done, how would it get done,
but you're not going to be doing the actual construction of it.
In this case, it's going to be the AI that's going to be doing the code.
But as you, Adam, said in your notes and probably already on the show,
as Jeff has said, and I'm sure as Wolfgang and the others on the
previous recording, you still have to go back and check and verify what AI is delivering to you.
Right. And to me, that solves the next problem that comes up, right?
If we talk about the newer generation of what I'll call designers right now, if they don't have this background experience, how do they know what's going on in there?
If they're not spending years in the trenches, writing the code and all that, right?
They're not getting that experience that the previous generation is built off of.
But I think that's taken care of, as, you know, Jeff mentioned: if you give the juniors the code to review, to look at, to make sure AI is doing the right thing, that's when they're then learning what this all does.
So, at least in my opinion, the way to continue the cycle of having newer people coming up and understanding the fundamentals is for them to spend the time in the trenches doing the review of what AI did.
And as they're understanding what that code is doing, I think that's going to then help them understand the bigger picture so they can be a good designer later on.
Am I way off on that, Adam, in your opinion? I guess the big question is, like, how do we continue to have newer people coming up, understanding the fundamentals, so that they can make sure AI is delivering it in the way that they want it to be delivered, if they haven't gone through that process like we all have?
Yeah, it's a super interesting topic, and I really hope that you're correct.
Personally, I'm a bit worried for the future of our field, because I think with AI,
it becomes so easy to shortcut our learning cycles, our feedback cycles.
And as you might know, I have a background in psychology.
I have my second degree in psychology.
I'm super interested in how we learn and solve problems.
And I do know that true learning comes with effort.
It has to be effortful.
We have to struggle a little bit in order to internalize and truly learn.
And I think the temptation is just going to be there
to kind of shortcut that learning experience.
So I think this is a big challenge for us in the industry to solve,
to kind of grow the next generation of software engineers
because we're going to need them.
I'm very convinced.
In the previous podcast that we recorded, there was a nice statement, kind of concluding the new role of software engineering, maybe.
I think it was Wolfgang.
He said, in the end, nobody will charge you on how you code it, but you will be charged on the value that you provide.
And I think now we're opening up this field of creating software to solve a
certain problem, to provide value to many more people.
And in the end, nobody asks, was it implemented in Java, .NET, or C#.
But I think what will matter, however, for everybody that is using it and operating it, is obviously the qualitative aspect: whether it's secure, whether it has
the right quality, whether it's resilient, and all these things.
But I like the statement, right?
We will be measured against the value that we provide and not the individual lines of code that were created.
I would like to ask you a question now on this "it has to hurt in order to learn," because I remember when I started my software engineering career, I was a trained software engineer from high school, but in my first job I was put into the QA department for the first six months to really learn the craft of good software quality, or learn it from the other side, meaning testing.
That means I was a tester and had to test the product that later on I had to develop, which was really great. I basically gave feedback also to the engineers, what worked, what didn't work, and so on. So this helped me a lot to appreciate quality.
Now, I wonder, and do you see this: if I have coding agents, do the coding agents have a sparring
partner or a counterpart agent, a quality agent, that validates
what the coding agent is doing, giving proactive feedback so that the coding agent can learn
from all the mistakes? Basically, mimicking what we are doing in real life, where we had
different roles for coding and for quality, and they were obviously, hopefully, exchanging
information, learning from mistakes. Is this happening already? Do we see these different
agents, the coding agent and a quality agent, kind of collaborating?
So it's kind of interesting because I had a very similar journey almost by coincidence.
I was, of course, also trained as a developer, but my first job was actually QA.
And I think it's very useful because in QA you get that critical thinking naturally.
You're so used to software breaking, so you kind of learn what to look out for.
And I think I learned lessons that are still serving me well to this day.
So what I find with AI interesting enough is that, yes, I do meet people.
I mean, we have customers that implement dedicated agents to verify quality.
I personally don't think we are quite there yet, because what I find during my agentic coding sessions is that the worst code is always the test code.
And I simply think that that reflects the training data for LLMs.
Because, yeah, let's face it, that's our software community.
We haven't really done a stellar job at developing solid test automation code, right?
That's always where the worst technical debt is,
and we might not even know what patterns do we need to develop good test code.
So that's where I find that I have to coach my agents the most on the test code.
That's where I find most of my review comments,
and that's where I have most of the feedback.
So I think this is a massive, massive room for improvement
over the next few months.
But then I ask you, coming back to both of our histories, right?
We are both trained software engineers, but we had to start in quality.
If we now adopt this with AI, shouldn't we force the coding agent to do the review for a while
to learn about good and bad software quality?
Would this work?
or is it just completely stupid thinking from my side?
Completely stupid.
No, I'm definitely convinced that it would work.
And part of the quality problem,
like the internal quality, I think is already a solved problem.
So here's what we do in order to make agentic coding fly:
we took all our code health knowledge and we encoded it in our code health MCP.
So that's what we put into our agents.
And that works really, really well because I constantly see how the agent checks the code health, gets feedback, self-corrects, and stays on track.
So that works really, really well.
And also see a lot of promise in, you know, I complained earlier that test code isn't of good enough quality.
It often misses edge cases.
It tends to duplicate a lot of stuff, which makes it really hard to verify that the right tests have been written.
But what I found is that once you manage to establish good patterns, then the AI is also pretty
good at mimicking those patterns.
So that's where I think there is some hope to make even more progress in that area.
So you mentioned the MCP.
That's interesting.
So that means the MCP gives, so yeah, I assume you have your agent instructions that basically
say: after you code and you build, ask the MCP to validate your code smells, your
code health, and then based on that make adjustments, right?
So that's kind of like the closest feedback loop that you actually have here.
Yeah, yeah, it's a very, very tight inner developer loop.
The best thing is that you as a human, you don't really have to care.
You just have to ensure that AI knows that MCP is there, that the tools are there.
And then AI will call out and do the code of review and automatically correct.
And that saves you a lot of time, because once you get to a human review, if you do that,
then the code is already healthy, right? Which cuts code review times in
half on average. So it is a big saving. The other big use case, of course, is refactoring and uplift,
because there is a lot of research coming out now on refactoring and AI. And what we see in that
research is that an AI actually does more refactoring than human developers. However, these AI
refactorings tend to be very shallow, simple stuff like rename variable, right, that doesn't
really move the needle on code quality. So once you get this structural
feedback in, then the AI will take that in and do more impactful refactorings,
which I think is really, really important.
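The tight inner loop described here, where the agent generates code, has its health checked, and self-corrects until the score passes, can be sketched as a plain control loop. The `generate` and `check_health` callables and the 9.5 threshold below are placeholders standing in for the coding agent and a code health tool such as the MCP, not an actual API:

```python
def agentic_loop(generate, check_health, threshold=9.5, max_rounds=3):
    """Generate code, validate it via a health check, and feed the
    findings back to the generator until the score passes."""
    feedback = None
    for _ in range(max_rounds):
        code = generate(feedback)
        score, findings = check_health(code)
        if score >= threshold:
            return code, score   # healthy before any human review
        feedback = findings      # closed loop: the AI self-corrects
    return code, score           # still unhealthy: escalate to a human

# Stub generator/checker to show the control flow: the first attempt
# fails the health check, the second passes after feedback.
attempts = iter([(8.0, ["long function"]), (9.7, [])])
code, score = agentic_loop(lambda fb: "generated code",
                           lambda c: next(attempts))
print(score)  # 9.7
```

The human only has to ensure the tool is wired in; by the time code reaches review, it has already passed the check, which is where the review-time savings come from.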
Just a thought on the MCP server again, and the feedback to the code quality,
we also try to educate our users on doing the same thing with observability.
So if you're coding and you're making changes to your instrumentation,
you're creating new features that are creating new logs,
then you immediately can get feedback after the deployment,
do the logs meet our criteria, right?
Do they have all the proper metadata on them?
Have we introduced anything where we are logging, let's say, PII data,
personally identifiable information?
And that's also a closed loop.
I always say this is information
we need to have in a closed loop back to the human and the AI,
because more and more code obviously is created by the AI.
So we need to make sure that these feedback loops
not only go back to the human, but also to the AI.
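That kind of log feedback can be sketched as a small scanner. The patterns below are hypothetical and nowhere near a complete PII check; they are just enough to show findings flowing back to both the human and the coding agent:

```python
import re

# Hypothetical patterns -- a real PII scanner would be far more thorough.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def pii_findings(log_lines):
    """Return (line_number, pii_kind) pairs so the same feedback can
    go back to the human reviewer and to the coding agent."""
    hits = []
    for i, line in enumerate(log_lines, 1):
        for kind, pattern in PII_PATTERNS.items():
            if pattern.search(line):
                hits.append((i, kind))
    return hits

logs = ["user logged in", "reset sent to jane@example.com"]
print(pii_findings(logs))  # [(2, 'email')]
```

Run after a deployment, an empty result closes the loop silently; any hit becomes feedback for the next agent iteration, the same pattern as the code health check.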
I have a very strange thought now.
So we are talking about software languages that we grew up with, right?
I mean, all the software languages we're currently dealing with,
we are training AIs on the languages that humans have created for humans to create software.
Do we think we are going to see a new world where AIs create their
own languages to make AI more efficient in coding?
Because why do we constrain ourselves by teaching AIs languages that we as humans came up with?
Is there any trend going in that direction?
Yeah, that's an interesting thought.
So what I know from research is that programming language matters.
It matters to an AI.
So there are like some general trends, like if you have a statically type language,
then there's a bit more context, a bit more
constraints, and the LLMs tend to perform a bit better.
And then there are other things where a language, you know, take C++ as a good example.
LLMs tend to perform quite poorly on C++, simply because C++ is not one language.
It's like a multi-paradigm language where C++ code can look basically any way you want.
It's a really, really complex language without any clear established patterns.
I'm obviously overgeneralizing now.
All C++ developers will be angry at me, but that's the way it is.
So the thing is that the language really, really matters.
And to me, it means it would be very, very natural to start to develop AI-friendly languages
that are simply a bit more constrained, right, and have the right type of verbosity.
And perhaps, I think the constraints are really key: there are certain things that you maybe shouldn't allow in that language,
that can take away some common problems and common bottlenecks.
So I think there is an opportunity here.
I do think that the language will still have to be human-friendly too,
because for the foreseeable future, I really see a hybrid model,
where you always want a fail-safe,
where you want a human to be able to go in, correct, or validate, right?
So it's not something I've seen in the short term,
but I would definitely think that's where we're going.
Based on all of your research and the analysis of all of these large code bases,
do you see a preference of language that is used by AIs
or by developers that are using AI generators or like AI coding tools?
Is there like a shift towards certain languages and certain runtimes?
I think it's still, I mean, I definitely can guess what the trend is.
I think it's still quite early stage.
So that's also the thing.
It's so easy to get carried away, at least for me.
I think the whole word is agentic.
But when you look at these large enterprises,
they are really just at the very early stages of exploring AI coding.
So I think the big shift is yet to happen.
But what I do see is that the teams that seem to be really, really successful,
they typically use languages like Golang, Rust seems to be super popular.
And our team internally, we do a lot of Python, which is interesting because it's dynamically typed,
but there's so much Python out there.
And the language has a lot of good stuff going for it that seems to serve it well with the LLMs.
So meaning with Python, there's a lot of training material out there,
and that's why the models are so well trained.
Yeah, and Python also.
I mean, if you compare it to say C++ that I mentioned earlier,
Python has had like the opposite philosophy, right,
that there should be one obvious way of doing things, right?
And that, I think, has served it well
because that's reflected in its training data to some extent.
You know, I've been wondering during this conversation,
me as an outsider, right,
I'm not doing development at this point,
talk about AI in the podcast and with colleagues,
and here's some stories of how people use it.
From where I stand,
before talking to people like you and Jeff and all that,
there was this idea of, oh, yeah, we just tell the AI what we want.
It gives us some code back.
Maybe we look at it, maybe not, right?
But what's coming out in these conversations
is that there is much more depth to it, right?
Much more intervention required, much more review required.
What is your take, or understanding from what you've seen, on the awareness of developers using AI to code
of these additional steps that are required to get good and reliable code, versus, let me just get
it quick and dirty, AI-slop style, you know, throw it in there and I'm done, right? Is the
community that's adopting it at large aware of the
level of effort that should be going into using AI,
as opposed to, let me just toss it there, take what comes back,
and pop it in, as sort of the extremes?
Yeah, I think my experience is that it's very much a mixed bag.
And there are a couple of things happening right now
that I've never seen during my 30 years in software.
And one of them is that AI coding, after all,
these are developer tools.
It's the first time ever I see a developer tool being pushed
from the very top. We talk about boards and owners of companies that are pushing a developer
tool at an organization, which is really, really interesting. And of course, that push comes
with a massive expectation that, yes, we're going to be, I don't know, 55% faster or whatever we think.
And I think this hype is really, really dangerous because AI to me is so promising. We're seeing
very real benefits in our own development. Yes, we are faster and our quality is better. We have
that measured. But the thing is that it won't happen on its own.
Succeeding with AI, as we've talked about, requires more discipline than doing things
the old-school way, simply because everything moves so much faster and the impact is so much
larger. So I think a lot of organizations aren't super well-prepared to deal with this.
And the big risk is, of course, that, yeah, we tried AI. It didn't work, and you throw it out,
and you lose speed, and you lose time.
So there needs to be that awareness of...
You know, almost like, you know, kind of going back, I look at this point in computing history as somewhat parallel to cloud migration, right, where people said, hey, we're just going to toss our stuff in the cloud, or, yeah, we're going to turn it into microservices.
And there was this idea, like, you need to know what you're doing before you do that.
You need to know how to break down these microservices.
You need to know if it makes sense to move this to the cloud, right?
And so many early cloud efforts failed because there wasn't that awareness of how to properly do that, which is
again another parallel: if you just say, we're going to use AI and it's going to be magic,
it probably won't be, right? We'll probably see the rise of
AI practitioners going around, sounds like a great opportunity, right, going around and helping educate
people: if you want to be successful in moving to AI, these are the considerations.
And, you know, I know you've mentioned something about this
in the notes, and we spoke about it earlier, like
the starting code base that the AI is working with and understanding that.
And the thing is you can actually pull a lot of that risk forward.
So one thing we do with a lot of our customers is that we measure the code health in their
code base, and we visualize that.
And what's so interesting is that inside the same system, inside the same code base,
you can be in very different states of AI readiness,
because you can have, you know, components that are perfectly healthy.
So those are the parts we can start to adopt AI.
That's where I would do my AI pilots if I was in an adoption phase.
Then you have other parts of the code that are simply very,
very far removed from being AI-friendly.
So there I would instead, you know, target an uplift:
focus on improving, strengthening with tests around it, that kind of stuff.
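The "strengthen with tests around it" idea can be sketched as a characterization test: pin down the legacy code's current behavior before an AI agent is allowed to touch it. This is a minimal, hypothetical sketch; the `legacy_pricing` function and its discount rule are invented for illustration, not taken from any real codebase.

```python
# Characterization (golden master) test sketch: before letting an AI agent
# refactor legacy code, record its current behavior so any change that
# alters output is caught immediately.

def legacy_pricing(quantity, unit_price):
    """Hypothetical convoluted legacy logic we don't dare change blindly."""
    total = quantity * unit_price
    if quantity > 10:
        total *= 0.9  # bulk discount buried in the code
    return round(total, 2)

def test_characterize_legacy_pricing():
    # Recorded outputs of the current implementation, warts and all.
    assert legacy_pricing(1, 9.99) == 9.99
    assert legacy_pricing(12, 9.99) == 107.89  # 119.88 * 0.9, rounded
    assert legacy_pricing(0, 9.99) == 0

test_characterize_legacy_pricing()
print("characterization tests pass")
```

With such a safety net in place, a refactoring that changes any recorded output fails fast, which is exactly the kind of close feedback loop discussed here.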
So having this situation awareness, I think, is a necessity.
And I like the parallel you had with
the microservices, because I do remember that, like, 10 years ago,
everyone had a transformation project going to microservices.
Right.
These days, people are having another transformation project,
going back to modular monoliths.
And I think the difference with AI is that it speeds up the whole timeline.
So a microservice transition today is something that you can do within a week.
Right?
It moves so quickly.
So making the wrong decisions there and continuing to build on them,
it amplifies the consequences.
You know, this also brings up another thought that just occurred during this bit of the conversation.
You know, we found that when moving to microservices or moving to different cloud technologies,
whether you're going to Kubernetes, Lambda, or you're just going to lift and shift,
the best approach was to have a predefined goal.
What is it that you're trying to accomplish?
Do we feel that when moving to AI, there should be more of a goal than just to move to AI?
Are there more specific goals that people
should be setting, saying, we're going to move to AI
with the expectation that this is our
outcome? If we set that expectation,
we can have a better transformation.
And if so, what are some of those
goals that might be starting places?
Yeah, that to me is really key.
And I've been asking myself that question, too,
if it's so hard to succeed with AI, why bother?
And, I mean, for me,
I'm a startup founder.
So for me, AI is all about
being able to do more faster.
So I would be very interested in measuring things like throughput.
And I would also like to measure quality, like the external quality, right?
And ideally, what I expect from AI is that not only are we going to be able to speed up our roadmap execution,
we're also going to maintain or ideally improve quality, right,
which means fewer production defects.
So this is the type of stuff I'm measuring.
And I think that's really, really important.
And that's why I'm so skeptical about measuring things like, you know,
lines of code generated by AI, because lines of code have no intrinsic value whatsoever, right?
It's all about outcomes.
So that's what I would highly recommend to focus on.
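The outcome metrics Adam suggests, throughput and production defects rather than lines of code, could be tracked with something as simple as the sketch below. The numbers and metric names are illustrative, not from the episode or from CodeScene.

```python
# Outcome metrics sketch: compare before/after AI adoption on outcomes,
# not on lines of code generated. All figures are made-up examples.

def throughput(items_done, weeks):
    """Completed work items per week (roadmap execution speed)."""
    return items_done / weeks

def defect_rate(production_defects, items_done):
    """Production defects per shipped work item (quality)."""
    return production_defects / items_done

before = {"throughput": throughput(24, 8), "defects": defect_rate(6, 24)}
after = {"throughput": throughput(40, 8), "defects": defect_rate(5, 40)}

# AI adoption only counts as a win if speed goes up while quality holds.
win = (after["throughput"] > before["throughput"]
       and after["defects"] <= before["defects"])
print(before, after, win)
```

The point of the sketch is the comparison itself: if throughput rises but the defect rate rises with it, the "speedup" is borrowed from quality.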
That's great advice, yeah, because in the end, as you said,
it doesn't matter what you did to get to that result;
the real thing is
what the impact is that you have with this.
And that's kind of the value.
Coming back to that statement
from the previous podcast:
show the value,
and nobody asks about how you got there.
I would like to move on, because time is flying.
I know we have a lot of cool blog posts from you.
Folks, if you're listening in
and if you want to learn more from Adam,
from Code Scene,
everything that Adam and team has done.
There's a couple of links in the description of the podcast.
I just have two more questions.
I think the first one you already kind of answered a little bit,
but you talked about how you're working with a lot of enterprise customers
and they're still a little bit hesitant, or they're getting ready.
Can you give us a piece of advice
for enterprises to get readier, to get their codebase AI-ready?
What are some of the practical tips that you can tell them?
So I always view it as a three-step process.
So the first would be to, you know, and I'm focusing mostly on code quality here,
so I'm going to assume that there's an equally strong focus on things like build pipelines and delivery pipelines.
But from a code quality perspective, what I always recommend as number one is: create situational awareness. Measure code health,
see which parts of the code are good and where it's lacking,
and then map that to your roadmap so you see, okay,
where is the real friction.
And of course, techniques like a hotspot analysis
really help to point out where the potential
bottlenecks are when adopting AI.
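Hotspot analysis, as described in Your Code as a Crime Scene, ranks code by combining change frequency with a complexity proxy. This toy sketch is only an illustration: the file names are invented, lines of code stands in as a crude complexity measure, and in practice the change records would come from version-control history (e.g. `git log --name-only`) rather than a hard-coded list; CodeScene's real analysis is far richer.

```python
# Hotspot analysis sketch: a hotspot is code that is both complex and
# frequently changed, so that's where friction (and AI risk) concentrates.
from collections import Counter

def hotspots(changed_files, loc_by_file, top=3):
    """Rank files by change frequency * size as a naive hotspot score."""
    freq = Counter(changed_files)
    scores = {f: freq[f] * loc_by_file.get(f, 0) for f in freq}
    return sorted(scores, key=scores.get, reverse=True)[:top]

# One entry per commit that touched the file (normally mined from git).
changes = ["core/engine.py", "core/engine.py", "util/log.py",
           "core/engine.py", "api/routes.py", "api/routes.py"]
loc = {"core/engine.py": 1200, "util/log.py": 90, "api/routes.py": 400}

print(hotspots(changes, loc))  # engine.py ranks first: changed often and large
```

Even this naive version makes the point: a small, rarely touched file is a poor place to start an AI uplift, while the top-ranked hotspot is where guardrails and tests pay off first.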
Then for the parts that are good,
what I always recommend, and this might be the most important
one, is that that's where you can start to accelerate,
perhaps even going agentic, but you do need safeguards.
So that's where we recommend:
yeah, pull in the CodeScene MCP server, integrate that, and have the AI self-correct to make sure that it doesn't spiral out of control.
And for the parts that aren't AI-friendly but are critical for your company, for your roadmap, that's where we need to invest in an uplift.
An interesting thing is that AI can help with that uplift, right?
So just because the code isn't AI-friendly doesn't mean you can't use AI.
What differs is the level of agency you can give.
So the worse the code health, the more you as a human developer need to be involved to
establish patterns and guide and drive the AI.
But I've seen that work really, really well, but it obviously requires a lot of discipline.
But the payoffs are huge.
Awesome.
I love the term AI readiness and getting it, getting it AI-ready.
So: focus on code health; accelerate where we're good, but use guardrails; invest in an
uplift where we're not so good, but it is business critical.
I have one last question that I also asked the previous two guests that we had.
Well, let's say it's a situation that I've observed and I want to get your opinion on.
What I've observed within our own organization, within our customer base, also with my
friends who started coding, vibe coding, all of a sudden: everybody solves the same problem
their own way because now they can.
The same problem is solved with five different tools.
The same problem is solved with 10 different apps.
The same problem is solved with 20 new websites that do the same thing.
So while this is great to experiment, to explore, to understand this technology,
I have a fear that within an organization you end up with a lot of code that needs to be maintained,
that might be running business critical processes,
but that is duplicated because it solves the same problem.
So it's overhead.
I also see this from an open source perspective.
I know a lot of open source projects grew really strong,
not because the same problem was solved 100 times
and 100 individuals were coding,
but because these 100 people came together
and together built something that solved a bigger global problem.
So I'm just wondering,
A, whether you see this as a trend as well,
and B, whether you see it as a good thing or a bad thing.
What is your perspective on that?
It has been interesting for me.
So, yes, I think it's definitely a trend.
And because the barrier to create something is simply so low now, right?
It's so fast.
And in general, I think it's dangerous.
And I think we lack the type of governance
because we aren't prepared for this speed yet.
But I think we need to have some governance there.
And the reason for that is because creating code has always been relatively cheap.
I mean, look at any software project, any software product, and 95% of the costs are after the first version is released, right?
And that's not going to change with AI.
Even if we no longer count human hours, in the best-case scenario maybe we just count tokens,
it still means that 95% of token consumption is after the first version is built.
So when you have these duplicates, right,
where you always have the risk of introducing security vulnerabilities,
having outdated business processes,
I think it's really, really dangerous.
And it all comes back to, you know, the increased importance
we need to put on software engineering and software architecture.
I think it's going to be more important than ever, and maybe it already is.
So I'm not sure if that's an answer,
but it's an observation, at least.
That's great.
I mean, perfect answer.
I love it that you brought up that very good point, right?
It's very cheap to create the first version of something,
but it's as expensive as it is in traditional software engineering
to get it from the first version to something that sustains over the lifetime of this thing
and creates business value, because you have to maintain it, operate it, fix bugs.
I think that's awesome.
Thank you so much.
Brian.
Yes.
Yes.
It's the end,
it's the end,
it's the end of software engineering
as we know it.
It is, as we know it.
Doesn't mean it's the end of it,
but as we know it, exactly.
it feels like
everyone's becoming first time parents
by trying to raise these AI
coders but the difference
is there's nobody who did it before us
you know for us if we have a kid
it's like oh we look at our parents
or other people and you have this whole history
of it. It's like, oh, we're raising this new AI child to code, but there's, it's never
been done before. So we have to figure it out. It's going to be interesting. I don't know if I'll
be alive, but it'll be interesting in the future when people look at back to the beginnings
of this and what the perspective will be in what was going on at this time. But I think,
you know, if we take advice from, you know, people like Adam and others we've talked to
who are really looking at this from a pragmatic point of view to do it correctly or as correctly
as we know at this point to do it, and to do it with care, hopefully there'll be some really
good reflections like, yeah, this set of people over here was really thinking the right
way when this all got kicked off. Because it is, it's way too tempting
to just say, hey, do this for me, and I'm going to move on.
So hopefully we get the word out there to not, yes, it's going to help you.
Yes, it's going to save you time.
But don't be careless with it.
I think the important takeaway for me is just that we're getting faster,
but we must not forget the guiding principles of software engineering,
which is good automation, good quality checks, close feedback loops,
whether it's to the human or to the AI.
And the AI will help us obviously get faster,
but we need to have these guardrails.
Adam, thank you so much.
Did we forget anything, any final words, any advice for anybody out there before we close
this session?
Yeah, well, thanks a lot for having me.
And my advice is: with AI, doing it
right will give you a speedup, but it requires more quality, more skills, more feedback loops,
right? It's arguably harder than doing things the manual way. So be prepared for that and do the right thing.
And deliver value. I think there's an interesting future topic in here. Potentially, Adam,
I'll throw this out at you. As you're going through more of this, I know you've mentioned
the profile, not the profiling, but the hotspots, which again ties back to
Your Code as a Crime Scene. Very nice
connection there too. But I think
it'd be interesting to see,
come back and have you on in the future
to take a look at what are some of the patterns
we're seeing in
that not ready for AI
code.
And what those shifts and changes
have to be to
make it ready for AI.
So as you gather more
or see more on that, I think it'd be awesome to have you
back to see what those anti-patterns are
for AI preparedness.
Sure. I'd love that. Thanks.
Thank you. Thanks for everyone for listening.
We hope this was very educational for you, as it was for us.
And we'll see you on the next episode.
Thank you. Bye-bye.