PurePerformance - DX Core 4 Applied - Measuring Developer Productivity with Dušan Katona
Episode Date: June 23, 2025"How do you measure the impact you have with your platform engineering initiative?" is a question you should be able to answer. To show improvement you must first need to know what the status quo is. ...And this is where frameworks such as DX Core 4 come in. Never heard about it? Then tune into this episode where we have Dušan Katona, Sr Director of Platform Engineering at Ataccama, who is a big fan of the DX Core Four Metrics and who has just applied it in his current role to optimize developer experience.Dušan explains the details behind those 4 Core metrics: Speed, Effectiveness, Quality and Impact. He also shares how improving those metrics by a single point results in the equivalent of 10 hours saved per developer per year.And here the relevant links we discussed todayDusan's LinkedIn Profile: https://www.linkedin.com/in/dusankatona/DX Core 4 Blog: https://getdx.com/research/measuring-developer-productivity-with-the-dx-core-4/Marian's JIRA Analytics Open Source Project: https://github.com/marian-kamenistak/jira-lead-cycle-time-duration-extractor
Transcript
It's time for Pure Performance!
Get your stopwatches ready! It's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello everybody and welcome to another episode of Pure Performance. My name is Brian Wilson and as always I have my really annoying co-host who I really can't
stand but today is the opposite day so none of that is true.
Andy Grabner, how are you doing today?
As you can see I'm smiling and I'm enjoying this.
Very good.
Summer is back.
Here we had very strange weather patterns in Europe, but it feels like,
after a very rainy couple of days, we're back in the heat wave.
We got tons of rain, tons of cool weather. We actually had a spring this year, so it's been nice,
and today's the last day of school for all my kids. They're all done.
So I'm officially going through them.
I'm going to take summer off too.
Yeah, cool.
That means you will have a great upcoming experience
of your personal life, whatever you're doing.
My personal development.
Well, yeah, I'll be developing my kids to be better people.
Yeah.
And how do you measure this?
Can you, do you have actually any way to measure
the effect of like the summer break that is
coming up and how it will put, positively impact you and how can you measure the impact
at the end of the summer?
Well, I usually count how many times a day I have to yell at them.
Okay.
That's an interesting metric.
Yeah.
Yeah.
That's a real good one.
Yeah.
But I don't have any other, I wish there was something. I wish there was something that could help me with these development metrics, but nothing
that I'm aware of.
Maybe we should learn something.
Maybe we should ask our guests to finally make sure that people know that this is not
just a podcast between Brian and myself.
Dušan is our guest today.
And before I let you introduce yourself, Dušan,
and give a little background:
Dušan and I met at ELC,
that's the Engineering Leadership Conference in Prague.
That was organized by Marian,
who we had on the podcast last week,
or last time, the previous episode.
That's the best way to explain it.
And I always call this now my new Prague connection.
And I was really happy, Dušan, to meet you
at the conference
and then a couple of weeks later also at the meetup, where you were part of the panel where
you talked about platform engineering in the wild. Dušan, you are a platform engineer at Ataccama,
and you're driving growth and engineering efficiency, and I think you can do a much
better job of giving us a little bit more background on where you come from and what you do.
So please, Dušan, go ahead, and thank you so much for being here.
Hello, everybody.
First of all, thank you for having me here, Andy and Brian.
The world indeed is a small place, with Marian and you and me meeting in Prague.
So in terms of my background, I started my career as a developer,
but that was like 15 years ago.
Then I moved on to managerial positions,
like managing a team, then managing multiple teams,
mostly in scale-ups.
So that's where my niche is as well,
where I focus more, as you said,
on developer experience,
improving developer efficiency, because I think that could be a
great part of a company brand. And you know, I love when
developers can ship features efficiently, as well. So now I'm
Senior Director of Platform Engineering at Ataccama,
which is a data trust platform company basically,
doing more things about data quality,
master data management and things like this.
And I was brought here by my former colleague
on a mission to improve developer experience
because they had several hurdles.
So I can talk more about that experience. I've been in the company for half a year.
Yeah, that's awesome. That's exactly what I would love to talk about.
Because I've been talking about platform engineering for the last two or three years now.
And the question that I always typically bring up when people invite me to talk to them,
I say, do you have any way to measure the impact
that you want to have
with your platform engineering initiative?
If you are forming a platform engineering team,
what is the mission?
What is the goal?
How do you in the end justify
the investment of money and time?
And this is kind of like the first question
that I have to you.
What are good ways
to measure the impact? What have you found as being an effective method also to sell kind of
that idea of platform engineering to your leadership? Any insights that you can give
from your personal experience would be great.
You know, this is the second place where I'm
doing platform engineering. The previous one was a startup scale-up that got bought by a bigger company. By the way, the
company was called Jamf, which is on the stock market. And when they
were doing due diligence, they also said that
we ended up in the top 5% of, like, operational excellence. The
engineering was working pretty well, so that was a testament that what I was
doing there was really good. But that was back in, I don't know, 2019. Since then the situation, also the macroeconomic situation, changed. Everybody wants to do more with less,
and everybody's also more focused on metrics.
And as you said, selling the platform
engineering initiatives to the execs,
because in the end it's about money, right?
Like, you are pouring something into your platform engineering
and you are expecting something out of it. Otherwise, no one would build platform engineering organizations.
So back to my experience here at Ataccama. I'm a big fan of the DX Core 4 metrics.
There are multiple frameworks, like DORA, which is a bit of an old-timer, or SPACE, or DevEx. But the creators of SPACE, multiple people, multiple PhD people, created this DX Core 4
framework, which I can very briefly go through, and then discuss how I crafted the survey
here and things like those. So the DX Core 4, the 4 means there are four main metrics. The
first one is speed, or it's about speed, and that's diffs per engineer. So basically the number of pull requests or merge requests
merged during the week. This might sound controversial, because one might think: are we
starting to measure people on the number of commits or so? And they are explicit that no, this
shouldn't be looked at on an individual level; it should rather be looked at on the team or departmental or organizational level.
Because, you know, I was a developer as well. When I was
feeling efficient, I usually produced more pull requests than when I
had to scramble to test on a local development
stack, for example.
So that's the speed metric.
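(To make that concrete, here is a minimal sketch of how diffs per engineer per week could be computed from merged pull requests. The data and team roster are invented for illustration; DX doesn't prescribe a particular implementation, and in practice the records would come from the GitHub or GitLab API.)

```python
from datetime import date

# Hypothetical merged-PR records: (author, merge date). In practice these
# would be pulled from the GitHub or GitLab API.
merged_prs = [
    ("alice", date(2025, 6, 2)),
    ("alice", date(2025, 6, 4)),
    ("bob", date(2025, 6, 3)),
    ("carol", date(2025, 6, 5)),
]

team = {"alice", "bob", "carol", "dave"}  # everyone on the team
week = (2025, 23)                         # ISO year and week number

# Count PRs merged in the given week, then divide by team size.
# Reported at team level only, never per individual.
prs_in_week = sum(
    1 for author, merged in merged_prs
    if author in team and merged.isocalendar()[:2] == week
)
print(f"Diffs per engineer, week {week[1]}: {prs_in_week / len(team):.2f}")
```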
Can I just interject one thought here?
And the reason why I always get pushback on these metrics
is because some of these can be gamified.
And the way people gamify it is by making smaller and smaller pull
requests and smaller and smaller changes. And therefore it seems like they're speeding up.
And so this is why I'm glad that you mentioned this is not necessarily on an individual level,
but it's more on a team or an organizational level so that you can actually see if things
are improving. But do you also then kind of bucketize the change scope?
Because you can probably not put every single pull request into the same bucket because
a pull request on a text change on a UI might be different than implementing a new backend
service.
Is this something where you bucketize and then look at it?
That's true. On the other hand, as you said, I'm emphasizing the size of the pull request, so it's not like too...
And to your point on gamification, the DX Core 4 metrics are crafted in a way that, you know, they work together.
So if you game or if you improve or have more pull requests,
perhaps your quality goes down because you are not implementing tests
in your pull requests.
I just wanted to validate and just...
Yeah, I think so. Speed is the first metric
of the DX Core, and the number of pull requests is the primary metric. There are
several secondary metrics, like lead time, also from DORA, but what I focused on
in the survey were the primary metrics. So now moving on to the second one,
which is effectiveness, which is a purely qualitative metric.
Whereas the speed can be quantitative.
If you have GitHub, you can probably very easily
get the number automatically.
Whereas effectiveness is basically computed from 10 questions, ranging from documentation:
how is documentation impacting your productivity?
The second question is focus time, whether you have enough focus time.
So you see, it's not only platform related, it's more about, like, really your productivity, which platform teams don't always have an impact on, right? Like, if somebody doesn't have enough focus time because there are multiple meetings or so, platform teams usually don't have an impact on that.
That's one thing I like about the DX Core:
it could derive improvements in the whole organization,
not only from the platform engineering.
We can get to that later on.
So the third question is about change confidence.
For example, whether you are confident in releasing,
how satisfied you are with build or CI processes, with incident management, alerts and
things like this. So there are 10 questions, which, by the way, DX provides you with. There is an example survey,
which I used also to get these questions.
And I just very slightly altered them
to have more context for people at Ataccama.
So asking these 10 questions,
there is a range from one to five,
whether you are either dissatisfied or
fully satisfied with the area, five being the most satisfied. And then you compute something that
is called the top-two-box score, or, as they call it, the developer experience index. That's basically
the sum of all responses of four and five, meaning the people that are satisfied, divided by all responses. So in
a sense, it's kind of an NPS on those 10 questions. And what
they provide as well are benchmarks. They are saying they have a set of 500 or more companies
in the benchmark, and they will tell you,
at a percentile of 50, 75 or 95, what the value should be.
So our value is around, like, 54, 55, meaning 55% of respondents are happy with
the areas, whereas the P50 from the benchmark for a tech company with our size of engineering
is around 60. And what they tell you as well is that a one-point
increase in this index means 10 hours saved per year
per developer.
So that's where the magic happens.
That's actually the first time I was able to
compute how many hours I'm missing
on developer efficiency.
And you can just like take,
basically take the average salary
you are paying your engineers,
and then you can easily turn that into dollars,
which usually the execs listen to, right?
More than an arbitrary number like developer experience index.
So that's what I did.
And I calculated, I can't tell you
precisely the number, but roughly 15% of engineering time
is wasted or, you know, non-productive.
And that's, if you calculate the
number, the dollar number, just a number on developer
salaries. But you can also account for the fact that, you know, during
that time, during those hours that you could have saved if your developer
experience were better, you
could have built more features that could have brought you some revenue.
So there is also this missed opportunity cost.
So that's the developer experience index.
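(As a rough sketch of the math being described here: the top-two-box index plus DX's "one point ≈ 10 hours saved per developer per year" rule of thumb, turned into dollars. All numbers below are invented for illustration, not Ataccama's actual figures.)

```python
# Hypothetical 1-5 survey responses, flattened across all 10 questions.
responses = [4, 5, 3, 2, 5, 4, 4, 1, 5, 3, 2]

# Top-two-box score / developer experience index:
# share of responses that are a 4 or a 5, in points out of 100.
dxi = 100 * sum(1 for r in responses if r >= 4) / len(responses)

benchmark_p50 = 60   # illustrative P50 from the DX benchmark
developers = 110
hourly_cost = 60     # assumed fully loaded cost per engineering hour, USD

# DX rule of thumb: one index point ~= 10 hours saved per developer per year.
gap_points = benchmark_p50 - dxi
hours_per_year = gap_points * 10 * developers
print(f"DXI: {dxi:.0f} points, gap to P50: {gap_points:.1f} points")
print(f"~{hours_per_year:,.0f} developer hours/year, "
      f"~${hours_per_year * hourly_cost:,.0f}/year on salaries alone")
```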
Fascinating.
And I'll just reiterate, because I think this is something that everybody
needs to remember: a one-point increase of that index means 10 hours per year per developer,
and you can obviously translate that into the dollar amount of more efficiency, more hours that you actually have. The other thing, though, the opportunity
cost, I think as you called it: is there any way that you found that you can also make
this more tangible as well? I think you said, obviously, you know, increased quality, improved
output. Have you found a way to also calculate any impact there on the opportunity cost?
That's a bit harder, to be honest. I was talking with Marian,
your previous podcast guest, about it,
because he's, well, the host of the ELC as well.
Like, if you don't invest into this now,
what are you missing?
But yeah, I haven't found any way so far
to really compute the opportunity cost. If you have any
ideas or if you encounter it, I'm really curious.
Yeah, no, it is really hard. Typically, the numbers that I bring, I mean, you've seen my presentation at the ELC, I always
bring up the engineering productivity, or, like, the lack of productivity,
based on the State of DevOps report.
It's a very high number:
around 60% of engineering time is not being used
for effective value generation.
But for me, the second number that I always bring up
is almost more relevant,
because if you're providing bad developer experience,
36% of the developers that have been surveyed
are considering leaving an organization,
which means in the current world that we live in,
we cannot afford losing the best and brightest engineers.
And I was at Red Hat Summit two weeks ago in Boston
and one of the keynote speakers,
they also referenced another study that by 2027, 90% of engineering organizations will face an IT skill shortage, especially
as we are building these new cloud-native, AI-driven systems.
And so if you have an engineering organization and you're ineffective, inefficient, it costs you more money,
but you're also potentially losing the best and brightest, which means you're falling further behind on having engineers that actually can give you this competitive edge.
I think that, for me, was also mind-blowing: how fast you can lose people.
Yeah, I can relate to that. If you are not productive and you are a senior,
then after some time you are frustrated and you leave, right? And then the company needs to
hire somebody else, invest into onboarding. That's a lot of money as well. And for sure,
developer experience has an impact on retention. The company that I was mentioning at the beginning, the one ending up in the top 5%:
my experience there was that we also published some blog posts about what the developer experience is like at that company,
and I used them during interviews. I even had people coming to interviews saying, oh, I read that blog post,
I see your SDLC is great here, tell me more about it.
So it can also become a part of a company brand, not only increase retention.
For our listeners, I just want to quickly also highlight,
you talked about these DX Core 4 metrics.
To get more information, check out
the description of the podcast.
We put a link to the getdx.com website.
That's basically, I think, the public website
from the research and all of these methodologies.
And they have a great
blog post that also explains what the DX Core 4 metrics are. And yeah, I just wanted to read through it,
because in the beginning we just talked about these four metrics. Folks, as you know, we always
have this stuff in the description. Now, so you've done the survey. Can you give me
a rough idea, at Ataccama, how many people have filled out that survey?
So the department I'm working in has around 110 developers,
but unfortunately the response rate was about 52 or 54%.
And that's partly because of the previous survey.
You know, we already had one survey, which was focused more
on tools, like how satisfied are you with GitLab, rather than
on capabilities, which is more what the DX Core is about, you
know, asking about change confidence and things like
this, which was the new survey.
And unfortunately, there weren't immediate actions
out of the previous survey, or at least there wasn't
a clear relation between what we did and the survey.
And I joined after that.
So people were saying, like, you
know, I filled in one survey, nothing happened. But it's a new guy here, so I will try it. But I know I
definitely need to have a clear action plan right now and
communicate it at our R&D all-hands, which is about to happen.
I think that's always the thing, right? You need to follow through with action items.
Otherwise, at some point, people will say: yet another version
of the same survey we've done, and what will come out of it?
And it's great that you are obviously now following up
and really putting the collected data into action.
What else can we learn from you?
What else have you learned?
Yeah.
I can finish off the overview of the last two metrics, and then I can talk about what
I did with the results and so on.
So the second one was the effectiveness, the DXI.
Now moving on to the third one, which is quality. As I said previously, all these play together.
So sometimes you can increase the throughput, the speed, but quality might go down. And the
primary metric from the DX Core for quality is change
failure rate, which is, like, an old-timer as well. It's from DORA, so I
guess I don't need to spend too much time on it. It's basically
how many changes cause an incident or need to be fixed in production.
There, by the way, we ended up under P50 as well, so we need to make more investments into quality.
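(For completeness, change failure rate is just a ratio; a tiny sketch with invented numbers:)

```python
# Change failure rate: share of changes that caused an incident
# or needed a fix in production. Numbers invented for illustration.
changes_deployed = 240
changes_failed = 18

print(f"Change failure rate: {changes_failed / changes_deployed:.1%}")  # 7.5%
```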
into quality. What I didn't mention at the beginning, as I was starting new survey, some of the items I could have calculated from GitLab or so,
but I wanted to give people an option to self-report, just to give them, like, to say that I trust them, like, will they report,
so all these, like, metrics were self-reported. Then if I go in the next half a year or so,
I will be doing a third round of the survey.
I will probably automate some of the things.
So that's the third one, the quality.
And the fourth one, that's a really interesting one
as well, that's impact, meaning how much time you spend on your roadmap or delivering
new capabilities. And again, in the survey it's in percentages: how much time you spend on new
capabilities, how much time you spend on keeping the lights on, you know, bugs, maintenance and so on,
and there is another bucket as well, if there is something else.
In good organizations, and that's my experience
from the past as well, investment into your roadmap,
which should contain both product things
and engineering initiatives, should be around 60%.
Good organizations have it
around 65; 67 is the P95 benchmark from the DX Core.
Why this matters is that if you have a lower score than
those 60%, it means that you are not investing your time
into your most important or revenue-bringing initiatives,
because, presumably, you crafted your roadmap around the most important items.
Temporarily, it can go lower than those 60% because if your customers churn because of stability or bugs,
it wouldn't be wise to just deliver new features and not focus on fixing bugs.
So for some periods of time, or depending on the product or whatever the company is as well,
it makes sense to invest less into new features and more
into keeping the lights on. Sometimes those initiatives to
improve the quality end up on a roadmap as well. So it depends
on how you measure it. But this is also an important
metric.
I think for me, what was very important here:
initially, when I hear roadmap,
I typically think about,
and people think about, new features,
new features, new features.
But you also said engineering work.
So investing in, I don't know,
better processes, better tooling, an improved platform.
So it means everything that supports
the engineers as well.
How did you typically measure that time?
Do you just look at tickets and do you see how many tickets and how much time was booked
on the tickets for let's say bug fixing maintenance versus investing in technical debt or investing
in new features?
This was self-reported for now, but in previous organizations, how I calculated it is:
we were using Jira, as many people do, and I calculated it from lead time. I had
different issue types, right, like stories, bugs, tasks and so on. Many of those were linked to
epics, which were linked to roadmap items, so I had all these
parent-child relationships, so I could tell whether that task or that story contributes to a
roadmap item or not. And then basically I calculated the lead time from the first "in progress" to "done", excluded time when a ticket was blocked, and then calculated
the ratio between the sum of those lead times for roadmap items and the sum of the lead
times for everything else.
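(A minimal sketch of that ratio calculation, with a made-up ticket structure; resolving the parent-child chain against real Jira epics is the part a tool has to do for you:)

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    lead_time_days: float  # first "In Progress" to "Done", in working days
    blocked_days: float    # time the ticket sat in a blocked state
    on_roadmap: bool       # parent epic resolves to a roadmap item

# Invented tickets for illustration.
tickets = [
    Ticket(5.0, 1.0, True),
    Ticket(3.0, 0.0, True),
    Ticket(8.0, 2.0, False),  # keep-the-lights-on / maintenance work
    Ticket(2.0, 0.0, False),
]

def effective(t: Ticket) -> float:
    # Exclude blocked time from the lead time, as described above.
    return t.lead_time_days - t.blocked_days

roadmap = sum(effective(t) for t in tickets if t.on_roadmap)
total = sum(effective(t) for t in tickets)
print(f"Share of engineering time on roadmap items: {roadmap / total:.0%}")
```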
By the way, Marian has a handy tool, which he open-sourced, that can connect to Jira and calculate all of this.
We should make sure to link this tool as well in the podcast.
Let me just make a note quickly here.
Links for show notes.
There you go.
And we'll talk with Marian.
Yeah, or I can send it to you.
Yeah, that would be nice. So now,
I think you said, you did the survey, you did the analytics, you presented back the results.
What do you think? And I know you can only tell so much about what you do internally and what
the action items will be, but can you tell me some of the action items? What will you do to improve the
situation, and what can you attribute to, quote-unquote, platform engineering?
So as you said, I put together the results. It's on one Notion page that I shared with a few people. It was quite eye-opening, also for the CTO. I usually
don't share just the results without actions, especially to execs, but our CTO was so curious
that he wanted to see the results. So I went through the results with him. And yeah, it
was eye-opening that we really have a gap
and we need to do something about it.
So right now I'm in a situation
that I'm preparing a pitch, like what to do with it.
There are several options, ranging from investing
some portion of my teams' time
into improving developer experience.
By the way, from those categories, from those questions, the worst was documentation, that people can't find things.
The second was cross-team collaboration, as we have a system pretty tightly coupled,
and many features can be delivered only when two or three teams work together.
And the third one was local development experience.
That's partly why I was brought here,
because there were really struggles with local development experience,
meaning the whole stack was running on laptops
and you can't even run Zoom, for example,
because it ate all your memory and CPU.
So there is more of a push to remote development
environments here.
So there are these low points that are specific to Ataccama.
But right now, as I said,
I'm doing more of a product discovery
with product manager here as well,
like where we can invest with documentation.
As I spoke with a few people at the conference
I was attending last week in Budapest,
it's more about AI, right?
Like, we are living in an AI age.
It's not about having clear documentation;
many organizations are doing it in a way
that they build an AI bot that is the first to respond to
people's questions on the platform engineering support channel.
So there are these areas that I need to
spend a bit more time on to know what we can deliver. But overall,
I want to pitch these initiatives, perhaps as
part of the roadmap. The company is also driven through OKRs, or goals, so we have goals per quarter.
So I was talking with the CTO, and we will probably put an improving-developer-experience
goal as a target for the next quarter. That helps with alignment as well with all
the stakeholders. And what I was mentioning before as well, like with the focus time, for example,
it's not necessarily only on platform engineering.
So I'm going to share it with all the VPs and basically derive some improvements also locally in feature teams.
Because some improvements could be done there, some improvements are more on the enablement side from the platform engineering.
But together, I believe, if you couple these two, it could be a good improvement. And when I run the developer survey again, I will
be hoping for an increase in some of those metrics, so I could say: yes, these initiatives
really made a dent in developer experience, and we improved. If you take it to the numbers extreme, let's call it that way, you can easily calculate:
yes, I'm going to spend, let's say, 30% of our roadmap. I will calculate how much money
that is for half a year, then run the survey and again calculate the gain, right? Or expect some, let's say, 10-point
improvement in the developer experience index, and that is some amount of money in developer salary. So
in the end, it's only a numbers game if you look at it through this prism. Obviously there are other aspects, as you said before,
retention, people happiness, and things like this, which are not always reflected in the
numbers.
I like metrics, as you can tell, but I like to look at them with context as well.
Yeah, I think that's the challenge with a lot of this too. If you look big picture,
my team itself is facing similar challenges, but not on a developer level,
where you have anecdotal stories about things that are working, things that are not working,
but they're just stories. How do you collect data?
How do you collect metrics that support this?
And then how do you take and present that back?
Because when you're, say, taking it to the CTO, and you keep mentioning investment:
investment could be time, it could be money, and even money could be part of the time we need to make these improvements. Same thing we saw with DevOps transitions back years ago:
we need to slow things down
so that we can make these improvements
so that we can speed up later, right?
And no matter what you're going to the higher level with,
you need to be able to show before and after.
You need to have that data to illustrate
that if we make these changes, this is what we're
targeting to improve, and we'll be able to measure it. Because you could do all the surveys
that you want,
you can collect all the metrics that you want, but unless you have the buy-in from that top
level that says, yeah, we see this, we can verify the evidence that you're presenting to us,
and we can understand the improvement we'll get
out of making these investments,
they're not gonna go for it, right?
I can imagine plenty of companies will be like,
yeah, we know there's a problem,
but, you know, tough it out, right?
Or we don't have the time or money, right?
So it's clear that, you know, data is key here.
And it's cool that, what is it? DX? What's the name of this
metrics set? DX Core 4 are the metrics, but the company is called DX, I believe.
It's so important because there is a framework to prove this stuff out and to show this. I'm
actually going to be looking into it,
to see if any of it applies to us, but I don't know if it will.
But it's a challenge, right?
You have people on your team,
especially if you're a people leader,
you hear their, you know, what's working,
what's not working,
but how do you collect the information
to take action on it, right?
So it's really enlightening stuff.
And it's, you know, really glad to hear that
people are looking into it and taking action on it.
Yeah, there is definitely a point in what you are saying: really sell it to executives with stories and what they are getting from the investment. I
would add, sometimes it is also about selling it to your teams
that you are managing. Because what I experienced is, you know, I have a team where a big portion of the roadmap is focused
internally: you know, we need to upgrade GitLab, or we need
to do something with the hardware, which is
fair, right? Like, I don't want a team that spends a lot of time on keeping the lights on with these tools,
but equally, it's also about putting on the roadmap items that help the other 100 developers that are working with the platform.
So sometimes it's also about these
tough prioritization choices.
And last point: the survey that I ran
was not only about these DX Core 4 metrics.
I included some of the questions
that in the end confirmed the numbers that I got from DX Core 4. So for example, I added a
question: how much time a week do you spend on troubleshooting the platform or platform services?
There were obviously ranges that people picked, and I then calculated again time wasted and money, and it was, interestingly, very close to what I got
from the DX Core 4. So that confirmed for me that the calculation, or rather the company behind the DX Core saying that one point
equals 10 hours saved, is probably right. Another question, which got me more of the stories that you
were mentioning, was: what are the top three things the platform should focus on to boost your productivity?
So that was another, more qualitative or free-form question that I asked.
And out of those, there are two main things.
One was about our local development experience.
Another was about our SDKs or libraries that people are using.
But that's specific to Ataccama. There were already some big topics that I need to focus on
in platform engineering that people were mentioning, for example, in that productivity question.
I would like to recap something quickly, because you mentioned earlier that your folks
have given you feedback that the hardware they get is not sufficient to actually be
productive, and whether you give them better hardware or whether you offload
the bulk of the work to a centralized remote development environment are different ways to solve the problem.
But these are very, I would say, common things, and very, quote-unquote, simple
things to solve.
But the thing is, you need to know about it.
And if you don't know about it, if you don't have, like Brian put it earlier,
the statistical evidence, if you only have anecdotal evidence, it's hard to take
measures on it, and especially to prove that you actually improved it.
So there was one thing.
And the second thing,
I believe this is also one of the reasons why Backstage
at Spotify has had so much success.
You talked about documentation, right?
Finding where do I start?
How do I do things in my company, right?
And I think, especially as you're growing,
there are so many different
places where you find some information, but all of a sudden, if there's no central place,
then you have some information here, some information here, some might be outdated.
So you're losing a lot of time to find the right information. And correct me if I'm wrong,
but I believe that one of the reasons why Backstage was created was actually to solve this
problem, because it gets you a consistent overview
of all of the repositories that are out there,
then also provides documentation, easy access,
and then also these self-service templating-based guides,
right, the golden paths.
And yeah, it's just phenomenal.
And the thing is, right, you mentioned earlier
that there were a lot of aha moments when you analyzed
the results, but every organization faces the same challenges.
You're not better or worse than others.
You are just like everybody is challenged with the same things.
It's very similar also to my previous experience at that previous company.
It was also about, you know, developers using one development environment, which became
a choke point. So then you need to invest into development environments per team or per
developer so they can be efficient, or work on the SDLC, on the pipeline,
so they are safe and you can move on to continuous delivery
or even continuous deployment to production
and things like those.
I agree, these patterns are very much the same,
or the struggles are very much the same.
Just, if I compare my previous experience to this, I have to say
we now live in a different era, right? I don't want to sound like a broken record or, you know,
like I'm jumping on a bandwagon, but really, AI is changing things. So if something worked in my previous company, I'm now trying to look at things through AI
glasses, or an AI prism. Like, perhaps, as I said, solving the documentation problem
with AI. Because instead of focusing on having a single place where the documentation is being put: many times
it's repositories, Notion or Confluence; some technical designs are there; some
comments in Jira are precious goods, right? And if you feed all of
this to an AI model or build an AI assistant, it could be a great way
to solve the documentation problem.
I haven't done so yet, so I can't tell you from my personal experience, but I talked with a few
great guys at the conference I was at last week, and they are solving it pretty much right now.
Yeah, by the way, it's so funny. I was in Budapest as well last week.
You were at Craft Conf, or what? No, I was at a different conference, I was at the Observability Days
on Wednesday. But had I known that you were in Budapest, we should have met up for a drink,
that would have been nice. Yeah, I was there on Thursday and Friday. I met one guy, Patrick Debois, who is the father of DevOps.
That was terrific.
Yeah, I stumbled upon him just by accident, just sitting next to him on a
couch when I was having lunch, and we just got into a conversation. He's now focusing on adopting AI, or helping companies with AI. Another great
guy was a group PM for DevEx and GenAI at Booking.com,
who I asked about the documentation, and he gave me tips about two tools. They have an AI assistant that
is the first to respond to requests from people in a Slack channel.
Yeah, I mean, I've also seen this now at different conferences. I remember a KubeCon in London where
I think it was, was it eBay or somebody else that were on stage?
It was an observability related topic and they basically said,
you know, not everybody is an observability expert.
So what they have done is, basically, you know, use MCP,
the Model Context Protocol, to connect to the backend systems,
and then they're using AI agents to give developers answers to common questions
that can be answered by looking at observability data, because the AI models know how to look at this
data. And this is also, Brian, stuff that we have put into our solutions. And I also agree with you:
collecting everything and putting everything in one single place will not work, because there are different places in an organization where certain information makes sense.
Like, Brian, we internally use Slack heavily, where there's a lot of discussion,
I think a lot of tribal knowledge, in Slack.
On the other side, we have Stack Overflow, then we have Confluence,
and I'm sure we have many other platforms.
And ideally, I can have an AI agent that I ask a question, and that AI
agent goes out and finds the relevant, up-to-date information and presents it to
me. Do we have that coming? I just saw plenty of examples this week where it
was like, oh yeah, I looked here, and then I actually found it on Stack Overflow.
That's exactly it. Yeah, I mean, that's a fantastic point. And I did hear you, Dušan, earlier when you talked about the AI component bringing these things together.
That alone is a fantastic idea.
Cool, yeah. So AI, and thanks for the reminder, and also thanks for the explanation. AI is going to change the game, because now
we can solve certain problems in a different way, right? Because the AI can do the hard job, the hard
job meaning fetching all this data from different sources and providing it to us as humans, so that
we don't have to spend the time to find five different sources and then make sense out of it.
Another example: I used one product from one startup
in my previous work.
And the founder founded a new company,
which basically connects to your GitLab,
to your observability tools and so on, and is on a
mission to basically auto-resolve incidents from that.
So if one of the questions in the DX Core 4 is about incident management, think how your developer experience can improve if an AI provides you with a possible root
cause of an incident as well.
So that's a tremendous impact.
Yeah.
I mean, that specific use case is, I guess, right where Brian and I have been
for the last years, because that's one of the problems we've tried to solve in our company
with our observability platform: to provide not only impact but also root cause
information. And you're right, that is one of the game changers, and so many more
things are now possible with the new agentic AI systems.
Dušan, we are unfortunately almost at the end of our time budget that we have here, and I'm keeping track
of that metric. Any final words, any final thoughts, anything you have not mentioned
yet?
Yeah, I would just like to say: take a look at those DX Core 4 metrics. As I said, for the first time it helped me calculate the waste, or rather,
put a number on developers not being productive, which can open the door to execs or to interesting
discussions. But the metric itself is not enough. You need to have a story, as Brian said before.
Yeah.
Cool.
Yeah.
We will add all the links to the description of the podcast, as we said. And if you could
follow up with me with Marian's tool, that would be awesome.
Otherwise, I'll ask him, but if you have it handy, that would be great.
Okay.
Brian, any final words from you?
What I'm obsessing over now is an AI to bring all of our knowledge base together.
Totally missing the point, but no, it's part of the efficiency, right?
And that experience, you know, now I think this is great.
As I said before, I think, you know, having that ability to measure this stuff. And you mentioned
early on, Dušan, that, oh, it's another survey.
We didn't really see much coming out of it the first time.
So being able to turn things around and show your team
that, no, we are going to take action.
In this case, you're presenting to the CTO and making sure to communicate to the team,
I'm not just doing this for myself, I'm bringing this up to other people, we're going to present,
we're going to try to tackle this stuff, we're going to try to make this stuff better. So keeping transparency, you know, in both directions from that side, is key.
But, you know, I think it's a really good trend because
too often it's just keep working and get the stuff done
as opposed to we got to keep working through this, but we care about you
and we're trying to make improvements, and this is all the stuff that's gonna help us
make improvements to make work better,
to make it more enjoyable.
And obviously, anybody with a head on their shoulders knows
if your workers are happy,
they're gonna be more productive,
they're gonna be more efficient,
they're gonna be getting better things out, right?
And then the company will benefit as a whole from that.
So yeah, I'm totally on board with this kind of stuff.
And thank you for sharing it and sharing it with our listeners.
Exactly.
Thank you for having me.
Thank you.
And with my new Prague connection, I'm pretty sure we'll keep this connection open, which
means I'm sure our paths will cross. Whenever you make it through Linz,
Let me know.
You're going to have to go to Prague more, Andy.
I would go to Prague more, yeah.
Both sides will work.
Cool.
Thank you.
Thank you so much.
Thanks everyone for listening.
Bye bye.
Bye bye.
Bye bye.