PurePerformance - 10 Fundamentals to get Vibe Coding right with Jeff Blankenburg
Episode Date: January 19, 2026

If you are still treating your AI coding agent like a chatbot and not like a development team, then this is one more reason to tune into this episode. In his blog post series 31 Days of Vibe Coding, Jeff Blankenburg walks us through all the lessons learned when bringing an idea to life just with vibe coding. His idea was building a website for collectors of baseball cards. With now more than 950k cards from almost 10k players, he has proven that vibe coding, when done right, can truly boost the output of software engineers. Tune in and learn how to effectively use GitHub Issues as the backlog for your AI, the importance of going through different phases in your conversation with the AI, and why it is important to ask the AI the question: "Do you have any questions for me?"

Links we discussed:
LinkedIn Profile: https://www.linkedin.com/in/jeffblankenburg/
31 Days of Vibe Coding: https://31daysofvibecoding.com/
Collect Your Cards: https://collectyourcards.com/
Claude: https://claude.ai/
Transcript
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello everybody and welcome to another episode of Pure Performance.
My name is Brian Wilson and you're not.
Sorry, that was a dumb Chevy Chase reference, Andy.
I don't know if you know Chevy Chase.
I do, I do, yeah.
But I don't know the reference.
What was the reference?
Oh, on Saturday Night Live, he used to do the news.
He'd be like, I'm Chevy Chase and you're not.
And I guess he's in my head.
some new documentary came out about him about what a difficult man he is and problematic, but really funny.
Anyhow, that's where my mind went, and I don't know why I did it. Anyway, you're my co-host, Andy Grabner.
Yeah, but I do know that I'm not Brian Wilson, obviously. But I am Andy Grabner, but you are not.
So I basically have learned something new, a new opening line.
Yes. Yes. It's a new year.
It's a new year, yeah, and it's hard to believe it's a new year, but when people listening in, I think this one airs on.
January 19th at the time of the recording
it's January 7th
it's actually the day after the Three Kings Day
I'm not sure do you have
celebrated Three Kings
some people here do
a lot of the more Hispanic cultures do
in the United States
but not the
not so much the European
descendants
but you know to keep it
I love it
so basically what they did yesterday
in my local radio station
they basically played the whole
day, the kings and queens of rock, meaning I think people that were admitted to the
Rock and Roll Hall of Fame. And they basically played a lot of great rock songs.
I hope they played Queen, because that's the Queen of Rock, and they were in the thing.
Exactly, they did, yeah. But you know what? It's not just the two of us, and I think it's
finally time to also give a little bit of airtime to our guest. And he will have much more later.
But, Jeff, thank you so much for being here on the call, Jeff Blankenburg.
I would have actually said Blankenburg because that would be the German way to pronounce your name,
but I don't want to butcher your name more than I should.
Jeff, thank you so much for being here.
Can you quickly introduce yourself?
Who are you?
What do you do?
What's your passion?
Sure.
Well, thank you.
Yeah, Blankenberg is generally the way I pronounce it.
but in Germany you're absolutely right. In fact, there's a town in the mountains and there's a castle there, in fact, Castle Blankenburg. And I feel like, I'm the first born of my family, and that goes back many generations, I've always been the first, so I feel like if I just visit the town and show them my ID, I probably am heir to the castle. Or at least I hope so. That's exactly how it works, right? Yeah, exactly right. Anyway, to introduce myself: I'm relatively new to Dynatrace. I joined the company in August
and I'm on the developer relations team
doing a lot of fun stuff
with vibe coding and AI generated code
and previously I've spent a lot of time
at Microsoft. I was the
chief evangelist at Alexa at Amazon
for a number of years. So
really anytime you think
about building stuff with AI or communicating
with AI or doing things that have
an agent kind of working with
or on your behalf, that's kind of where
I've been chasing my passion.
Yeah. And obviously thanks for
your time at Alexa.
I think I told you that, kind of, the way I see it, and I've seen it for many years now:
we have a little Alexa Dot at home, and every day in the morning, I always ask, what's the weather,
what's the news?
Because I want to know what's happening in the world and whether I can take the bike to work,
whether I take the bus or the car.
And so for me, this helped me a lot.
It was a nice digital assistant.
But things have changed, right?
Things have changed.
And now, Jeff, the reason why we, I mean,
There's many reasons why we should have you on the podcast, but the main reason today is 31daysofvibecoding.com.
It's a project that you have initiated.
On the day of the recording, we are at day seven, obviously with a couple more days ahead.
When this airs, we will be at day 19.
So, folks, if you're listening in and you're opening up 31daysofvibecoding.com,
you will also find the link in the description.
You will be, I assume, blown away,
as I have been.
I've only made it
for the first couple of blogs,
but I took so many notes,
and I just want to highlight
two or three things,
and then I want to jump into the topic.
The first quote that I wrote is
I was treating AI like a chatbot
instead of a development team.
And you're explaining things in the beginning.
I think you call it on day number two.
You call it the different phases
where you talk about
stage one
asking for a function
like create that function for me
then asking for a feature
and then you talk about
spec driven development
Jeff I want to stop talking now
but I want to pass it over to you
can you walk us a little bit through that project
what you've actually built
what was your motivation
and what have you learned
and, you know, you know best
how to teach us.
Sure, sure. So this started,
actually this whole project started a few
years ago. So like a lot of folks, I needed something to do during COVID. And my wife recommended,
hey, you've got your childhood sports card collection down in the basement. Why don't you go through
that? And I thought, that's a great idea. I can just dig through piles of cardboard and spend
my time because we can't go anywhere. We can't do anything. Can't see people. And so I started doing that.
I started organizing stuff and putting sets together and making sure that I knew what I had.
And so I started with an Excel spreadsheet, like most collectors do.
And they're like, oh, this is what I have.
Here's how I'm keeping track of it.
But I found pretty quickly that if, as a software developer,
I found that if I wanted to know anything about my collection,
Excel wasn't a really great way to do that.
And so then I tried it, other iterations.
I built a SQL database.
I went through all these things.
And eventually, I was like, I've got to build my own software.
And this, thankfully, was right about the time that things like Claude and ChatGPT
and all of these things were coming out of the woodwork.
And people were singing the praises about how it could
write code for you, and my experience had been it wasn't very good. But it seemed like it was something
that it could do. And so I decided to just jump in with two feet. Let me see what I can really do
if I start from scratch and have AI build something for me. And so I built a website called
collectyourcards.com that is specifically designed to track and manage your collection, right?
And I'm not here to talk about that website, but that was the project that kind of drove a lot of
the learnings that I had in this series that we're talking about. And so what I quickly realized is that
as I think about like you described, I would ask it for a function.
I was trying to still be like a true software developer where I'm putting all the code
into the IDE and it's just giving me some things that I can use.
And what I found is I was even slower.
Now it was like having somebody write code for me and then I have to cut and paste and copy that
into what I'm trying to build.
And it wouldn't work, of course, because it didn't have any of the context around what
was sitting in that file.
And so I just started from scratch and I said, okay, this is what we're going to build.
Let's think about how this is going to work.
And as you said, I started treating it like a chatbot.
Okay, now build this.
Okay, now build this.
Oh, I don't like this.
Can you fix A, B, and C?
And then we'd go fix A and B, and I'd forget about C.
And then we'd keep moving.
And later, I'd be like, why didn't we fix that thing?
It's like, oh, yeah, it doesn't remember.
It doesn't, it's not a person, right?
It doesn't do a lot of those things.
And so I started spending a lot more time thinking about, how can I be good at this?
Because it felt like, yeah, I'm generating some code.
I'm building a website.
But it doesn't feel like I'm being very efficient.
and it doesn't feel like I spend a lot of my time just correcting it.
Hey, this is wrong.
Hey, we've built that kind of button or that kind of modal a hundred times before.
Why does this one look so different?
And so I started doing a lot of reading and research and paying attention to what other people
were talking about and certainly listening to the vendors.
I primarily use Anthropic's Claude as my AI agent, but the lessons in my series really
translate to all of them.
And so I started thinking about things like, how do I make sure that when I tell it to do something that I don't lose it, like that C example I gave a second ago.
And then I started thinking, well, there's got to be a way to externalize this.
I had it write markdown files.
So I would say, hey, here's what I want you to do.
Write a markdown file with your thoughts on what to do with this.
And that was good until I realized I had dozens and dozens of markdown files just all sitting in my project.
And there was no organization, no prioritization for me as to
which one we should work on next.
And I saw, I was at a conference called All Things Open earlier last year,
and I saw someone talk about how they were using GitHub issues as their backlog for AI.
And it fundamentally changed how I thought about all of this.
Now instead of creating a markdown file, it's creating an actual issue in my GitHub repository.
It writes all the instructions and thoughts it had at the moment, and then that's preserved.
And so later, when I say, hey, okay, now we're going to work on that customer
detail page or whatever it is. That's issue 74. Let's go work on that. That unlocked everything
for me because now I'm not losing any of the features or I'm acting like a real product manager,
right? And so that taught me things like thinking about how to prompt effectively, how to how to think
about managing your backlog, how to break features into small phases. Context management is also a big one.
We could spend the next hour talking just about context management. But one of the ones that really
stood out to me, and you know, this ties nicely into the work that we're doing at Dynatrace,
is that when I build software myself, I have that mental model of how it works. I wrote every
line. I know what it does and what it doesn't do and what kinds of behaviors I should expect
out of my code. But when I don't write it, I don't really have that kind of view into what's
happening. And I joke in the article about observability that we all have the best intentions
of reviewing every line of code that AI writes,
but it slows us way down.
And eventually, you learn to trust a little bit.
You say, oh, that will be fine.
I'm going to keep moving.
It works the way I expected it to.
And eventually you get to a point where I am now
where, like, I test my software,
and we have unit tests,
and we do all sorts of things to be responsible,
but I'm not reviewing most of the code
that my agent is writing.
And I'm using observability
as the way to understand what's happening.
I'm instrumenting my code.
I'm making sure that I understand what things happen, when they happen, how frequently they happen.
And by using observability as my mechanism for understanding what my code does,
I still have the peace of mind that I know what's happening in my software.
So I've been talking for a while, but that's kind of the idea behind this series.
Yeah.
I got to say, I mean, literally half an hour ago, I read exactly that.
I think it was part of day one, and I took some notes.
And I know I'm paraphrasing now,
I'm repeating what I just said, but you wrote:
you can't review a thousand lines of
code that the AI generates,
you run the tests, you validate the features
you look at the observability data and verify
what's happening
for me
everything you've just explained
really means we need to rethink
on how we
do software engineering
I mean not that the fundamentals change
well maybe the way
we I'm just wondering what does
this mean for the next generation of
software architects and software engineers?
Because a lot of people are very afraid out there
and say, hey, this is going to take away
my job. I think we had a discussion
earlier this morning where you
met somebody, or I'm not sure who, maybe
it was Henrik who said, he met somebody
that had 30 years of experience and now
they're afraid that all of this experience goes away.
I think it was you, right? Yeah. Yeah, yeah.
So I was reading about a guy
who was very concerned, like visibly upset
that
everything he had spent his career, learning,
and doing, becoming a master craftsman at software. He spent 30 years of his career being really good
at this. And he spent the holidays with Claude and realized he doesn't need to know any of that stuff
anymore. And I think a lot of the people that came to comment on and reply to his message were very
focused on the idea that he's 100% wrong. It's not like that at all. If I were to hand a tool like
Claude to someone that had no experience at all in the software world. Let's give it to a high school
senior or a freshman in college. They're going to say, build me a website, do this, add this thing.
But the moment they roll that out to the public, the moment that all of the malicious bots that
live on the internet can get access to their software, it's going to get destroyed.
They're not going to know it doesn't scale because that's not a thing that occurs to them.
they're not going to think about the fact that security is important and locking down your data and making sure that it's not destroyed or added to in a malicious way.
There's so many things that can go wrong in software.
But his 30 years allowed him to use Claude in a way that was effective, because he knew what to think about, what to do.
And I actually saw someone write this morning that it's almost as if getting a computer science degree is really important again.
Because for a while, you could be a software developer without a college degree, right?
you could sling some code, you could build some HTML, and you could have a pretty successful
career. But now, if you're building everything with AI, knowing the syntax for CSS doesn't
matter anymore. And what does matter is understanding why that needs to be there, what the purpose is. Like,
why do we create a universal CSS file for a website? Well, it saves on caching. It saves me on load time.
It allows me to translate styles universally across my website. Like, there's all sorts of reasons
why you would use that. But if you don't know any of those lessons, if you don't
understand how the things under the covers work, these tools aren't going to be very useful
to you because they need guidance, they need steering.
I don't imagine that they're going to be able to steer themselves too terribly soon.
It's an interesting, interesting what do you bring up there?
Because as you were talking, my thoughts are going along the lines of, you know,
we learned all this stuff through experience from, you know, in 30 years having gone through
everything, you know what to do.
but what you're talking about is the principles and concepts of these things, right, which can be taught.
You might not be as aware of them if you didn't experience the pain of it, but it still could be taught.
But it goes back to the simple idea that my physics teacher brought to us in high school,
where for every test, he would write the formula on the board, all the formulas we needed for the test on the board.
His point was formulas don't matter if you don't understand when and where to use them.
exactly right
which seems to be
exactly this case
if you understand
the principles and all
who cares about the formula
you know
use the right formula
or in this case
the AI
would use the right
component to build
in what you're telling
it to use
because you know
what to
implement here
because you understand
the principles
and that kind of
concept
I love your physics teacher
for that by the way
but I feel like
I feel like that
concept has gone on
for a long time
right we've
we forever
I mean
we are all of a vintage
that suggests
we've been around before cell phones and having access to the internet.
And I think that you think about when you used to sit around at a bar or a restaurant with your friends
and you talk about stuff and someone would say something like,
did you know that that actor did this thing?
And people would say, no, there's no way that he wasn't in that movie or whatever it was.
And you would just have to agree to disagree.
That's not a phrase you hear very often anymore.
But when you think about sitting around in a restaurant now and having that same argument,
no, there's no way he was in that movie.
someone pulls their phone out.
Someone has to know the thing
and has to put some conclusion
on whatever that conversation is.
And I think that's a demonstration of the idea
that like anything that you just needed to memorize,
whether that was the syntax for how to
underline text in CSS or
what a physics equation was,
I think all of those things aren't as important anymore.
Memorization isn't learning.
Memorization is just holding data that you'll need later.
And so I think that when we think
about tools like AI, it's making it easier and easier for us to just do the big thinking and not
focus as much on the nuance. I mean, a good example of this is I'm integrating with the eBay APIs.
When someone buys a card on eBay, I wanted it to automatically show up in their collection.
And there's a lot of nuance and complication to that, but for the most part, it would take me a week
to really understand and connect and work with the eBay APIs alone, regardless of the logic
that I wanted to take on.
And instead, I got it done in an hour.
That's magical, right?
There's no way I'm going to just go learn the eBay APIs
and all of the OAuth and all the other pieces
that fit into that in an hour.
And not only do I have it done in an hour,
I also have all the integrations done.
So it's a new world.
You just have to be willing to embrace the fact
that the things you memorized aren't important anymore,
but the strategy and the thought and the architecture around it
is still all your responsibility.
I also have, kind of related to what you just said, a quote from day one.
And as I said earlier, I tried to read through as many pages,
but then I took so many notes.
Coming back to the same example, you said,
I'm not someone who claims to be a 10x developer.
I'm a regular developer who learned to work effectively with AI
because AI is not magic.
But when you stop fighting the tools and start working with them,
you can build faster.
than you ever could alone.
And the examples you brought earlier
fighting with a new API
because there's a lot of trial
and error we typically do, right?
Try to get an example.
Oh, the example doesn't work anymore
because you had an outdated version,
copy and paste, run, fix things
or if you switch between different programming languages, right?
Why, instead of Googling,
how does an if-statement
or a for-loop work in this particular language,
this is not what makes me a good software developer or not.
they're just holding me back
I heard a friend say recently
that he prided himself
on being like a really strong Python developer
and he goes
you know it's interesting now I can just call myself
a software developer it doesn't matter of what language I use
if you throw me into a .NET project
or a JavaScript or whatever it happens to be
I still know the fundamentals of what I wanted to do
now I just I'll let it write the language
it doesn't really matter what the language is
yeah
Hey, I got a question for you.
So you explained, first of all, you know, how you have to be good at explaining what you really want.
I think one craft we all need to learn and get better at is good requirements engineering.
What is it really that you want?
What should it do?
What should it not do?
What are some of I think you call it the edge and corner cases?
This is where you really have to start engaging in the discussion with the AI and explain what you want and what it shouldn't do and then kind of giving clear guidance.
When you started with your project, right, collectyourcards.com,
did you give instructions on architectural patterns? Did you say what type of database you want
in the background, how does this work? Or does it just come up with recommendations and then
you argue? Or do you already really define your requirements, what the rough architecture should
look like, giving clear guidance on what type of software components you want or don't want to use?
So I definitely had some requirements.
As an example, because I had gone through a few iterations, right, I mentioned that I started with an Excel spreadsheet, then I moved to Airtable, then I moved to Retool, then I decided to build a SQL database myself.
All of those things happened before I ever started talking to AI.
So I already had a really robust data model around everything that I wanted this system to use.
And so I was able to give it my schema, my data schema, and say, here's what I have.
I am very comfortable in React and Node and all of that.
So let's live in a JavaScript world as far as our architecture goes.
I intend to host this in Azure as a web application.
So keep that in mind.
Like I already had some decisions made about what I wanted to do.
And it was really the Azure decision was really entirely because that's already where my SQL server was.
As I mentioned, I came from AWS.
I spent seven years there.
So I'm pretty agnostic when it comes to cloud.
But those were kind of the initial instructions.
I want a tool that allows people to collect.
And the first feature I sat down and kind of worked out with the system, with the AI,
was what I call universal search.
One of the things, there are a few tools out there that will allow people to collect their cards.
None of them are terribly good.
And most of them were built a long time ago.
But search is the most important feature of a system like this because I either have a card in my hands and I want to figure out what it is or I have a card in my hands and I want to add it to my collection really quickly.
And so I needed a search box that could respond to anything.
You're typing a card number or a player name or a team name or whatever you can find on the card, right?
A lot of the cards also have like a unique identifier that is what kind of card it is.
But there's no database or catalog of those numbers anywhere.
So I started cataloging those things too.
But it really started with, hey, I want to build universal search.
You can see all the things that are in my database.
How do we search all of these?
I use terms like I want to be the Google of sports card searching.
I often use a lot of similes.
I want to be like this.
I want another good example.
If anybody's ever sold anything on eBay, you know when you upload images to add images to your listing on eBay,
it defines the first picture you upload is your main image.
And then they give this little grid of all these secondary images that you may have.
And you can drag any one of them to be the main image.
And it just drags it and reassigns it and becomes the main image instead.
And so as I was thinking about image uploading for cards,
I knew people would take multiple pictures of their cards,
but they'd want one to be the main image.
It was a very similar idea.
And so I just said, hey, I want to do this the way eBay does
where you can just basically upload a bunch of pictures
and then you can reorder them however you want just by dragging them.
And it was like, okay, cool, here's how I would do all of this.
Do you want me to build this?
Yeah, absolutely go.
But I find that having examples like that helps, and you'll see this in the thing I talked about with a style guide,
I think that was day two or three, where I had it build a bunch of examples for me.
Here's what a table looks like.
Here's what a page header looks like.
Here's what all of these elements that I'm going to use throughout all these pages.
so that I was really comfortable with what they looked like, how they functioned,
and then I can just say, oh, I want to use a card table on this page.
And it already knows what that is, right?
It's already built that once.
And I don't have to go through the exercise of like, no, no, no, I want the columns to be resizable.
No, I want them to all be sortable.
Like, if you just ask it for another table, that's what it's going to do.
It's just going to give you a basic table and it's not going to look like the rest of things you've built.
So having those kinds of examples, whether they're examples you find online on the web,
or things that you've had it built itself,
using those as a reference
as you continue to have your conversations
goes a long way.
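As a rough illustration of the kind of concrete, reusable example Jeff describes putting in a style guide, here is a minimal sketch of a sortable "card table" React component. The component name, props, and CSS class are illustrative, not taken from the actual Collect Your Cards codebase; the point is that once something like this exists, "use a CardTable on this page" carries all the agreed behavior without re-describing it.

```javascript
// Hypothetical style-guide component: a sortable "card table" the agent can be
// told to reuse instead of inventing a new table every time.
import React, { useMemo, useState } from "react";

export function CardTable({ columns, rows }) {
  const [sortKey, setSortKey] = useState(columns[0]?.key);
  const [ascending, setAscending] = useState(true);

  // Sort rows by the currently selected column, re-sorting only when inputs change.
  const sorted = useMemo(() => {
    const copy = [...rows];
    copy.sort((a, b) => (a[sortKey] > b[sortKey] ? 1 : -1) * (ascending ? 1 : -1));
    return copy;
  }, [rows, sortKey, ascending]);

  return (
    <table className="card-table">
      <thead>
        <tr>
          {columns.map((col) => (
            <th
              key={col.key}
              onClick={() => {
                setAscending(col.key === sortKey ? !ascending : true);
                setSortKey(col.key);
              }}
            >
              {col.label}
            </th>
          ))}
        </tr>
      </thead>
      <tbody>
        {sorted.map((row) => (
          <tr key={row.id}>
            {columns.map((col) => (
              <td key={col.key}>{row[col.key]}</td>
            ))}
          </tr>
        ))}
      </tbody>
    </table>
  );
}
```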
Cool.
Now, is this,
is the quality of these AI agents
good enough to also understand
that if you keep building
and using similar components all over,
that it automatically starts
like creating a component library,
does it automatically componentize your code
so you can reuse?
Does it do this?
Or is this where your expertise
as a good software
engineer has to come in and you have to give it the instructions?
That's a fantastic question.
So I think with things like style sheets, it does a pretty good job.
I actually found that I got to a point where I segmented all of my style sheets, which is
not normally an architectural practice you would take on.
But what I found is that I had one universal style sheet for the site.
And then I would go to a page and I'd say, oh, this table actually has this extra column or
this extra thing or whatever we need or I need this extra button.
and it would just go and make changes to the universal style sheet,
but then all of my other pages would break or change or have a behavior that I wasn't expecting.
And I felt like it wasn't thinking about the fact that,
oh, a bunch of other pages used this file.
It was just focused on how do I make this thing right now do what you're asking.
And so I actually ended up splitting apart a lot of my styles
so that each page basically gets its own style sheets so that I don't have that.
Now there's duplication of code.
it probably adds a little bit of weight to all of my page loads.
But it was such a battle that I was,
it was either like throw this all away and start over
or I've got to find a way to break this all up.
The other question you had, though, was like,
oh, I've got this standard component that I'm using on all of these pages.
It is not, my experience has been that it won't just automatically do that.
It doesn't learn from its experience and say,
oh, like a developer would, right?
Okay, I've built this the first time.
okay I've built this the second time
all right I built this the third time
we got to make this a component right
that's a good practice as a developer
it doesn't think that way
it will just continue to pump out
the same thing
a variation on the idea
unless you have a solid concrete example
and you say I want to just use this again
it's going to reinvent the wheel every time
it's going to say oh you want another table with data
cool I can do that
but it's going to imagine what that should be
every single time. It probably won't even
look like the previous table.
Yeah.
I was going to say it's interesting.
For those of us who've been around a long time,
it sounds like the early days of code outsourcing,
where oftentimes an outsource team would be hired to write code,
and they would deliver exactly what was written.
Without context of anything else, I need a table.
Like you said, you had the card table, right?
You need a new card table to be put on.
Next time you say, I need a new table.
Well, you didn't specify card table.
So we're just going to give you a generic table because that was what was on the paper.
And I think being aware, as you point out, that the AI is not learning, oh, he probably wants a card table.
There's no, at this point, there's no assumptions that AI is making based on the past and the history.
So you do have to be explicit.
But if you are explicit, it sounds like it's going to give you what you want.
But it is going back to that style of like, oh, we learned in the early days of outsourcing that,
You got to be very, very specific about what you want, how the things tie together.
Otherwise, you're just, because you're always just going to get what you ask for.
Yep.
And that's where a lot of the context comes together, right?
As I think about all of the issues that I've created that I'm still waiting to work on, right, as I find more time.
Those issues are the context.
And one of the things I think that people forget is I'll show them my backlog of issues.
There's 50 or something issues, and they're pages and pages long.
And they say, you wrote all this?
And I'm like, no, I didn't write any of this.
I sat down and I had a conversation about how do we create an issue?
What are all the things that we need to think through?
It wrote the issues for me, and now we're going to revisit those later.
Now, did I read it?
Of course.
But I think what you're describing is the same way that I think one of the posts I talk about, you know, working with a junior developer.
I trust a junior developer to be able to go and sling some code and build some stuff.
But sometimes they need direction around how or why or when.
And I have to provide them that context.
and then after they provide the code back to me,
I need to verify that it does what it's supposed to do.
And that usually comes in the form of unit tests
or some kind of integration testing or CI/CD, whatever it is.
But you need something that lets you say,
yeah, they accomplished what I was asking them to do.
With a junior developer, you may have the time to be able to read through
what they've written.
It may just be a small module or something like that.
But when we're talking about AI,
it could write a thousand lines in 30 seconds.
You can't possibly read all of that.
You just need to trust that it's going to do what it's supposed to do.
For every listener, right, really check out the 31 Days of Vibe Coding, because also
your GitHub repository for collectyourcards.com is public.
It's really fascinating.
So I actually have one of these, this Jira, or not Jira, GitHub issues open, sorry, I'm sure
this also works for Jira.
So if you're an Atlassian fanboy.
But one of the things that you've listed, when I walked through the issues, was:
add the ability to view cards in collection by set,
issue number 77.
So you're telling me that everything,
the structure of that issue
and everything that is written there
was actually not written by you,
but it was created by, in your case, Claude,
because you had the conversation.
Yeah.
So we were working on that module,
adding something else,
and I realized that I needed that feature.
I was like, hold on, let's stop for a second.
We're also going to need to do this later.
I don't want to forget it.
So can you create an issue for me that covers X, Y, and Z?
And, you know, for me, it was a couple of sentences.
For it, as you see, it wrote paragraphs.
It has a bulleted list of things we need to think through.
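For readers who want to see what "create an issue for me" boils down to, here is a minimal sketch of persisting a plan as a GitHub issue from a Node script using Octokit. The repository name, title, body, and label are hypothetical placeholders; in Jeff's workflow the agent writes the issue content itself, the sketch just shows that the backlog lives in the repository rather than in the chat history.

```javascript
// Minimal sketch: turning a short instruction into a persistent GitHub issue.
// Assumes a GITHUB_TOKEN with repo scope; owner/repo/title/body are placeholders.
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

async function createBacklogIssue() {
  const { data: issue } = await octokit.rest.issues.create({
    owner: "example-user",
    repo: "collect-your-cards",
    title: "Add ability to view cards in a collection by set",
    body: [
      "## Goal",
      "Let a collector browse their collection grouped by set.",
      "",
      "## Notes from the planning conversation",
      "- New route: /collection/sets/:setId",
      "- Reuse the existing card table component from the style guide",
      "- Edge case: sets with zero owned cards should still render",
    ].join("\n"),
    labels: ["ai-backlog"],
  });
  console.log(`Created issue #${issue.number}: ${issue.html_url}`);
}

createBacklogIssue().catch(console.error);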
And one of the other tips that I have for everybody, that I learned only recently, is that every time I ask it to do something, especially in the chat interface, I always end with: do you have any other questions for me?
In the same way that you would ask a junior developer, right?
Is there anything else you need to know to go be successful on this task?
And the number of times that it will come back to me and say, oh, yeah, actually, I'd love some clarification on this, or how do you want me to handle this, or what about these edge cases, or whatever it happens to be.
It has been a delight to be able to ask that question.
Do you have any questions for me?
And have it really thoughtfully ask the things that I didn't specify.
I didn't clarify on those things.
Thank you for asking those.
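If you drive the agent from a script rather than the chat UI, the same habit is easy to bake into the prompt. A minimal sketch using the Anthropic Node SDK; the model id and prompt text are illustrative, not prescriptive.

```javascript
// Sketch: always end a task prompt by inviting clarifying questions.
// Assumes ANTHROPIC_API_KEY is set in the environment; the model id is illustrative.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

const task = `
Plan GitHub issue #77: add the ability to view cards in a collection by set.
List the files you expect to touch and any schema changes.
Do you have any questions for me before we start?`;

const response = await client.messages.create({
  model: "claude-sonnet-4-20250514", // illustrative model id
  max_tokens: 1024,
  messages: [{ role: "user", content: task }],
});

// The reply often contains clarifying questions about edge cases you left out.
console.log(response.content[0].text);
```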
Where does it know what to ask?
How does it know?
I mean, I guess we don't know many things of what's actually happening.
This is fantastic, right?
Yeah.
I mean, I think it probably has some vision of what the thing it wants to build looks like.
And there, again, I'm drawing a picture in my head, but I imagine there are some things that are question marks.
I'm going to have to make a decision about how to solve that, or I'm going to have to make a decision about how to do this thing.
And it probably weights those.
Again, I'm thinking in that probability sense of an AI agent, but it probably says, oh, these are heavy variability answers.
I should probably get some clarification on
this one from him. And that's how I picture it works. I don't actually know how it decides what the
questions are. You know, we often talk about AI as just being that probability engine. It's really
good at deciding what the next word should be. So is it, is it thinking or is it just trying to assess like,
here's a question I think I should ask him? Because they always feel very thoughtful and well-tuned to
exactly what I'm looking for. But at the end of the day, it's just producing words for me. Yeah.
Yeah, you know, the whole question thing I heard about recently, like actually just a few weeks ago, which is like, yeah, ask AI, what questions do you have for me?
Or you could even prompt it with, give me five questions you need in order to complete this task or something, right?
And I don't know how widespread that is.
I'm sure it's pretty well known at this point, but I think that's a very, very important one that people have to remember.
because, I mean, if I'm even thinking about how it knows what questions, right,
if there's a table, like, create me a table, you know, create a table for me,
I'm sure it's looking at, okay, there's a bunch of variables in that table,
I can ask a question around this to find out what might be more specific to it,
right, or something else, right? So just in terms of how it's doing it, I, you know, I don't
think AI's thinking at this point, but it's probably looking at what are places where
there are a lot of options I can pick from, and let me try to narrow those down, you know.
Exactly right, yeah, that's kind of how I picture it in
my head. I would like to go into the observability topic, but before that I have one more question. Day two, you have a section that's called the workflow in practice. And you talk about planning, reviewing the proposed plan, implementation, verification, shipping, and repeat. My question is, you explain that, you know, you obviously have a discussion, you explain what you want. You have it plan what it should do. What does this plan really look like? Is this,
Is the plan, so you create an issue, you define what you want, right?
This is a new feature and new capability.
Is this already the plan, or what does a plan look like that you review?
Does it really, say, spit out, I would like to create this and this and this?
Or is this just a mock version of what?
What does the plan look like that you review before you say, now you go and now you implement it?
Yeah, it's interesting.
When I wrote this article, and this was probably early November when I wrote this,
because this was early, early.
At that time, I was either handwriting or having Claude write a lot of my issues in short form.
Let's just capture the idea.
And then I would say, okay, now, describe to me how you're going to solve this problem.
And that's kind of where this planning piece came from.
Today, that's less relevant, at least for my workflow, only because when I have it plan and when I have it create that issue, the plan's in there.
It talks about what files it needs to change.
It talks about data fields that we need to add to a database or stored procedures or indexes
or anything else that it may think of for data.
It's already got a lot of those pieces and parts in the plan, and so I can review that before we
even start.
But it's been interesting to me because, like we talked about, I have 50 or so issues.
I don't remember how many there are right now.
When I get started, they're, for the most part, all equal to me.
I haven't spent the time to think about, like, okay, what's the thing I want to get done
today. So I ask Claude, like, hey, we've got all these issues. What's, what's something we can
knock off today? Or what's something that has the highest value to customers? And he'll come back
and say, oh, based on that, I think it's these three. Let's, which one do you want? And I'll say,
let's talk about issue two. And it'll kind of lay out for me, okay, here's what I'm going to do.
It'll go through that process of giving me a list of issues and suggesting ones that we should work on,
lay out for me what the plan's going to be. And then what's really useful with Claude.
And again, I think all of them, all the agents treat this a little differently. But as
it's going, as it's dropping chunks of code
into files, it's putting those in the
CLI as well. It's explaining
with sentences, okay, now I'm going to do this thing.
This is for X.
And so I can read along as it's going.
And the thing that
I found, and this may not be true for
everybody, but when you're building something
you really care about,
and you can watch an agent just
explain itself as it goes. And then it goes,
okay, hey, I'm done. Here's some things you might want to test.
This
has become like dopamine to me. I can't tell you the number of times where, you know,
I get done with work, I go have dinner with the family. I'm like, hey, I'm going to go get a
couple of things done in my office. And then the next time I blink, it's one in the morning.
Uh, because I didn't realize I was going to be in there for five more hours, but like,
you just kind of get on this roll. And like, I think that's where the term vibe coding comes
from, is like, you're just locked in and going. And it's such an enjoyable
experience to just give it some instructions and now you have features, I can't even express
how amazing it is every time.
I think you actually had a really nice explanation, again a quote that I wrote down, on day two.
Folks, check out the blog post. You started with:
Yesterday I defined vibe coding as staying in the flow while AI builds the features for you.
Kind of like, I remember, I think, Brian, we had a session years ago, and I don't remember who the guest was, where we talked about flow metrics, and like, you know, how many minutes it takes an engineer to get into the flow, where it feels like, magically, code flows out of your fingers. And if you get an interruption, because you get pinged on Slack or something, it puts you out of the flow, and then it takes you again 10, 20 minutes to get back into the flow. And it feels like, if we really know how to use these tools correctly, you can stay in the flow and achieve a lot of output. Like you said, you spent five hours and you didn't even notice, and you probably have built many different features, or you gave the right instructions and you did the right reviews to have these features being planned and then implemented and tested for you.
That's fantastic. And don't
get me wrong. In those five hours, I'm sure there was
some time where I was just battling.
Like, come on, will you please
just understand what I want you to do? And
some of the things that I've learned later in this series
about context management and clearing out the context that I have.
A great example is I'm working on a feature.
We build a bunch of stuff together.
We get it done.
And then I just naturally move on to the next feature.
Okay, now let's go do this thing.
But I still am carrying all of that context
that I just worked on it with,
that it doesn't need with me on every single pass
as we continue to build.
And it starts to get confused.
And it's just trying to hold on to too much information.
And so what I found is going in,
at least with Claude, they have a command called Compact.
And it just basically auto-compacts everything that we've already done
into a very small amount of context.
And then I can start fresh again.
And now let's go work on issue 57.
That has changed a lot about how quickly it gets confused,
how much it doesn't understand me.
Today's post, the one on the seventh,
talks a lot about this and how, you know,
if you have more than 30 messages,
that you send in, expect that it's going to start to degrade after that. That's kind of been my
experience. And so if you have more than 30 back and forth, whatever you talked about at the
beginning, don't assume it's still there, which is, I think it's really hard for humans to do.
If you and I talk about something today, I can come back to you six months from now. Remember when
we were talking about that thing? You'd be like, of course I remember. Yeah, we talked about this and
this and this. AI does not behave that way at all. And so from day to day, you're like,
just like the thing we did yesterday.
I don't know what we worked on yesterday.
Yeah.
Yeah.
I got to ask a snarky question here.
Yeah.
So you have your day job,
you have your cards,
your card website that you're creating,
and you're writing 31 blog posts during this month.
So are you using AI to assist you
in helping write the blog posts?
Well, initially early on,
I had it help me with an outline.
Okay.
But no, I'm not, I'm not just like, hey, go write this article.
No, I don't mean that, but I mean, like, putting some inputs in and having it format it for you.
I've had it, I've had it review articles, suggest changes.
But the initial outline was the biggest help, because starting from 80% is way easier than starting from zero.
Yeah.
And so those kinds of things.
Yeah, I think that there's a part of AI for almost everything that you do.
I think about just simple email, important emails that you need to send or a document you need to write.
I would rather, in a few paragraphs, explain what I'm trying to accomplish and have AI lay out the framework for me.
I had a conversation with my son, actually, about this last night.
He's applying to colleges.
And he was interested in getting a couple of these small local scholarships.
And I said, you know, you should really think about using AI for this.
He goes, Dad, I can't have AI write my essay.
And I was like, no, no, I'm not saying that.
But you know what you want to say.
Have it help you figure out a structure that is appropriate for what you would want to write your essay about and then use that structure to write your essay.
that's not writing your essay for you,
but it's helping you frame your thoughts
and gather things appropriately.
So I would be lying to you
if my instinct isn't,
I wonder how AI could help me with everything.
I also made the mistake of buying a 3D printer
in November.
And so the number of times
that I've been like,
I wonder if AI could design
a 3D file for me that would solve this problem.
And it's not very good at that yet,
but it does really
good designs, like it could come up with an idea, and there are tools people have built that
will take an image that you have and turn it into a 3D model. And again, it's layers of progression.
It gets worse and worse as we go, but it's interesting to think about how people are trying
to solve all of these problems, because 3D printing isn't hard, but coming up with exactly
the object or the shape or whatever it is you're trying to build. That requires some real skill,
and there isn't a lot of training data for AI to have learned from, unfortunately.
Hey, Jeff, in about three weeks, the two of us will be standing on stage in Vegas at Dynatrace Perform.
That's going to be great.
We're not going to use this podcast to promote too much of what we do on a regular,
like in our kind of paid job, even though, obviously, we get paid for some of the stuff that we're doing here.
But the topic there will obviously be also around observability, right?
and we will talk on stage about how AI agents change the day in the life of a software engineer,
and you've perfectly laid this out here in a blog post.
Coming back to observability, because you also said earlier:
you cannot review hundreds or thousands of lines of code that the AI generates,
but observability allows you to validate that what's happening in that code that gets generated
actually does the right things.
I have two questions for you.
The first question is,
by default, the code that got generated,
does it already, does Claude already know how and where to
probably maybe add logs or create metrics or even spans?
Does it automatically instrument,
or is this something that it doesn't do by default?
That would be my first question.
I'll let you answer that and then have another one.
Okay, so my initial answer to that is no,
it does not natively think about this.
In fact, the primary output it will create
is simple console logs.
It will write to your terminal,
but that's really the extent of it.
Now, when I said,
hey, I want to do full observability instrumentation
with OpenTelemetry,
it knew exactly what I was saying.
And it went through,
and it added all sorts of interesting telemetry moments
in the places that it made sense.
So it definitely understands the concepts,
but it doesn't, by default, do any of that.
And one of the things that I write about in my blog post for day four, which is about observability, is a console log is really just some basic text.
But if we think about most of the spans or some of the information that we send into observability services like Dynatrace, it's a much richer data set.
There's so much more that we could provide.
And so it was really eye-opening to me to see how we could really hydrate that huge object of telemetry data in a way that was incredibly useful.
Now I see things that I hadn't seen before, right? And one of the examples I have in the
blog post, not only am I creating events for errors and issues, but also I can see login success.
And log in success on its own is great, but if I could see who the user was in my telemetry,
that would be super useful. Who's actually logging in? How often and what issues they may be running
into? And so as it started to do those pieces, it started including a lot of that information,
which went a long way in helping me
think about what I'm seeing in my dashboards.
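As a rough sketch of what that richer telemetry can look like in a Node service, here is a login handler that emits a span with user context instead of a bare console log, using the OpenTelemetry JavaScript API. The tracer name, attribute keys, and the authenticate helper are illustrative assumptions, not the actual Collect Your Cards code.

```javascript
// Sketch: emitting a rich span for a login attempt instead of a plain console.log.
// Assumes the OpenTelemetry SDK has already been initialized elsewhere in the app,
// and that authenticate() is an app-specific function.
import { trace, SpanStatusCode } from "@opentelemetry/api";

const tracer = trace.getTracer("collectyourcards");

async function handleLogin(req, res) {
  await tracer.startActiveSpan("user.login", async (span) => {
    try {
      const user = await authenticate(req.body.email, req.body.password);
      // Attributes make the event queryable: who logged in and how.
      span.setAttribute("app.user.id", user.id);
      span.setAttribute("app.login.method", "password");
      span.addEvent("login.success");
      res.status(200).json({ ok: true });
    } catch (err) {
      span.recordException(err);
      span.setStatus({ code: SpanStatusCode.ERROR, message: "login failed" });
      res.status(401).json({ ok: false });
    } finally {
      span.end();
    }
  });
}
```

Note the sketch records a user id rather than an email address, which ties into the PII discussion that follows.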
And I guess can...
Go on, go on, Andy.
No, because the login example
just triggered something because in the very beginning,
you said, well, we talked about this
engineer that has been around for 30 years.
This engineer, and also you and Brian and I,
we know that there are some considerations
about what type of information you log.
Logging who is logging in,
maybe with their full personal details, might be something you don't want. Or, I think you
said you have integrations with eBay and other systems, maybe also with
payment providers. I hope that these AIs generate code that doesn't immediately get
you into jail by logging confidential information or PII data. But I guess, this is
again coming back to it, this is where it really pays off for the people that use these
tools to have experience in their job, so that they give the right instructions and say,
hey, this is a test environment.
Here, it's okay to capture more.
In production, this is PII data, or be aware that PII data should not be stored anywhere
where somebody who shouldn't see it could access it.
That's one of those things you can put in your agent files that's kind of like,
I have a file called claude.md that is my rules.
These are things we will not do.
One example in there is we will never use a JavaScript alert box.
It loves to use them.
But they're not anything you'd ever want to put in front of a customer, so stop doing it.
Just create a nice dialog box that lets people say yes or cancel or whatever.
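A minimal sketch of what such a standing-rules file can look like; the entries below are illustrative examples in the spirit Jeff describes (the alert-box and PII rules come straight from this conversation), not his actual claude.md.

```markdown
# claude.md — standing rules for this project (illustrative sketch)

## Never do
- Never use a JavaScript `alert()`; use the shared confirmation dialog component instead.
- Never write PII (email addresses, names, payment details) into logs, telemetry, or GitHub issues.

## Always do
- Reuse components from the style guide (e.g. the card table, the page header) before creating new ones.
- Create a GitHub issue for any follow-up work we agree to defer.
```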
But where you're going with that, I do have an example of where PII was a problem on my website.
And so, on the bottom right corner of collectyourcards.com, there's a little feedback tab.
And you can kick that open.
And it's basically a way, especially in the early days, for my users to tell me, hey, this card set is missing, or this card has incorrect data, or I like this page or whatever.
They can just provide any kind of feedback they want.
But they're logged in to be able to provide this data.
And so what it's doing behind the scenes is it built an engine that actually creates issues for me in GitHub.
So when someone types into those form fields, it generates a GitHub issue for me.
And in the issue is the user's email address.
Oh, yeah.
Which is super useful, but also now I'm posting their email address to a GitHub, public GitHub issue, which is not good.
And so, thankfully, a friend of mine was the first one to use this.
And he goes, hey, I just saw my email address was posted to your GitHub.
Can you please remove that?
Yes, happy to do it.
And then I went back and talked to the AI, and I said, look, we can't continue to push PII anywhere.
And so that got introduced into my claude.md file.
I was curious
when we were talking about OpenTelemetry
and Andy obviously this is not a topic for today
but I'm curious if this is a topic
we explore further with Jeff or other people
but when you talk about using OpenTelemetry
in conjunction with this
obviously we have logs and metrics
but when it comes to spans right
one of the promises of OpenTelemetry
that we don't see used much, except for some of the AI
code itself, was
built-in span telemetry, right?
So having not just your entry and exit point,
or maybe a get connection kind of thing instrumented,
but some of the key methods in your code being instrumented,
because those are the ones that you would have to look at in between.
I guess the question being, is AI smart enough to,
well, first of all, it's awesome that you could potentially say,
add OpenTelemetry spans in here and do that work for me, right?
But if you tell it to instrument the key methods, right?
Yes, we'll call it your code, right?
Because it's not like the foundational code that you need to run anything,
even though the AI is writing it.
Does it know which ones are the important ones to instrument?
Does it over-instrument?
Do you have to tell it to un-instrument?
Like, how much finesse do you put into that at this point?
And, yeah, I guess really that's what it is.
What's the finesse there?
It definitely knows, like if I was to ask a developer,
what are the important functions in your application?
A developer would know.
And I think AI has a really good gauge on that as well.
Now, they're probably using something like cyclomatic complexity
or something else that really, like, gives them some insights
into like, how often is this function called?
It seems like this thing is being used on every single page.
That must be important, right?
Because it doesn't have, it doesn't understand the value
the way a human might necessarily understand value.
But you started to make a point there about the idea that it's your code written by AI.
And I think this is a really important thing for everybody to walk away with is that anything that you're doing where you're building software and you're having AI do some or all of it, it is still your responsibility.
And so as you think about what you build, it's very easy for you to say, oh, AI wrote all this for me.
my bad, but the moment you do something improper with someone's data or a lawyer comes calling,
they're not going to take the AI to court. So I think it's important to have that kind of
responsibility and awareness of what you're building and how you're doing it in a way that
you have to take ownership of what's happening here. And while I don't think it's necessary
to read or write every line of code, I do think it's important for you to have a very solid
understanding of where your data is going and how.
Final question for you.
You already mentioned that you're using
observability to monitor
for what I call kind of business
KPIs, how many people are successfully
logging in. Do we
have, let's say, users
that are logging in multiple times, like your top
users. But has observability
helped you to also identify
any technical challenges,
performance problems, resiliency issues?
Have you found anything in observability,
or was observability
useful, that was built in by the AI, to identify any other issues you ran into?
Yes. I have two, in fact. One of the things that I had done initially was
instrument when I'm making a database connection, something simple and easy and obvious.
What I found was on one page, it was making 86 database connections. It wasn't pooling anything.
It wasn't doing all the things that I would normally expect a database call to do.
For every single place that we needed data on the page, it made its own connection.
that was revealed to me by observability.
Because I was like, wait, why is this happening all of a sudden?
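A minimal sketch of the usual fix for that pattern, assuming the mssql Node driver since the site runs against SQL Server on Azure; the connection details, table, and column names are placeholders, not the actual Collect Your Cards schema.

```javascript
// Sketch: one shared connection pool for the whole app instead of a new
// connection per query. Config values are placeholders.
import sql from "mssql";

// Created once at startup and reused by every request handler.
const poolPromise = new sql.ConnectionPool({
  server: process.env.DB_SERVER,
  database: process.env.DB_NAME,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  options: { encrypt: true },   // required for Azure SQL
  pool: { max: 10, min: 1 },
}).connect();

export async function getCardsForSet(setId) {
  const pool = await poolPromise; // same pool every time, no new connection
  const result = await pool
    .request()
    .input("setId", sql.Int, setId)
    .query("SELECT card_id, card_number, player_name FROM cards WHERE set_id = @setId");
  return result.recordset;
}
```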
The other one, which I actually have an issue for, but haven't solved yet, is there, you know, the website is pages and pages of lists of cards.
A set might have hundreds or a thousand cards in it.
And in some cases, someone may already have the whole complete set.
So they want to go in and they just want to add all those cards to their collection in the system.
And so let's say they're adding 700 cards to their collection.
Well, they go through and they check all the boxes and they hit add,
and then they add some metadata about where it's stored or how much it costs or whatever.
And then they hit go.
And I mean, that's going to take some time to add 700 records to a database and whatever.
But what I found was it was making 700 calls.
Instead of batching all this stuff together,
because it's really just a set of IDs that we need to insert into another table,
it was making 700 insert calls.
And that's slow.
compared to what it could be, right?
It could be one call with one large insert statement.
So those are the kinds of things that pop up to me immediately
when I think about the things that I wouldn't otherwise have noticed.
It works. It does what it's supposed to do.
But man, is it slow for some reason?
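To make that anti-pattern concrete, a sketch of the two shapes side by side, again assuming the mssql driver and the pool from the previous sketch; the table and column names are illustrative.

```javascript
// Assumes the shared pool from the earlier sketch; table/column names are placeholders.
import sql from "mssql";

// Anti-pattern: one INSERT per card — 700 round trips for 700 cards.
async function addCardsSlow(pool, userId, cardIds) {
  for (const cardId of cardIds) {
    await pool
      .request()
      .input("userId", sql.Int, userId)
      .input("cardId", sql.Int, cardId)
      .query("INSERT INTO user_cards (user_id, card_id) VALUES (@userId, @cardId)");
  }
}

// Better: one bulk insert — a single round trip for the whole batch.
async function addCardsBatched(pool, userId, cardIds) {
  const table = new sql.Table("user_cards");
  table.columns.add("user_id", sql.Int, { nullable: false });
  table.columns.add("card_id", sql.Int, { nullable: false });
  for (const cardId of cardIds) {
    table.rows.add(userId, cardId);
  }
  await pool.request().bulk(table); // uses SQL Server's bulk insert under the hood
}
```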
Hey, Brian, I hope you noticed that I didn't say it first.
He brought it up.
Yeah, I think we have to have Claude go back
and listen to the first year of Pure Performance
for performance anti-patterns.
Yeah, it's our classical N plus one query problem.
Yeah.
It just doesn't go away.
It doesn't go away.
Well, but maybe we should get the transcripts of all of our old episodes
and then feed it into Claude.
Yeah, there you go.
Exactly, yeah, that's what it is.
Here we go.
Cool.
Hey, Jeff, thank you so much for sharing what
you learned. Obviously, it was great
that you found a great project,
something that you are passionate about, collectyourcards.com.
Thank you for creating
these 31 days of vibe coding blog posts.
I know at the time of the recording, we're at day 7,
at the time of the publishing, we're at day 19 folks.
Go to their website.
I think people can also subscribe
to your newsletter and things like that.
So check it out.
And I'm very happy to be with you on stage in Vegas.
That's going to be great.
We'll also talk a little bit about this topic.
And yeah, awesome.
And I'm pretty sure we will invite you back
because it feels like the AI journey for all of us
is just starting.
And there's so much to learn and so much changing.
Yeah, just in the time it took me to write this series,
things have changed enough where I had to rethink a couple of my articles.
It's moving fast.
Well, it was great to meet you, Jeff.
Really appreciate your being on.
Wish I could say see you at Perform,
but I'm not going again this year.
Yeah, it's tough to go there.
I'm sure Andy could tell you.
It's a lot, a lot of work.
Hopefully they'll keep you busy, but it's a lot of fun, too.
Tons and tons of fun.
And I hope you get to meet some of our great customers there.
Really, thanks for being on here again today.
And I hope this podcast has inspired some of our listeners.
Please go check out 31 Days of Vibe Coding, which will be at day 19 when this comes out, right?
and, you know, try using it, you know.
So that's all we can say.
Yeah, all links in the description.
It is 31daysofvibecoding.com, but all the links are in the description.
And, yeah, talk to you soon.
Bye-bye.
Bye.
