Postgres FM - pg_stat_statements track_planning
Episode Date: November 29, 2024

Nikolay and Michael discuss the track_planning parameter of pg_stat_statements — what it is, how it affects performance, and when or whether you should switch it on.

Here are some links to things they mentioned:

pg_stat_statements.track_planning https://www.postgresql.org/docs/current/pgstatstatements.html#id-1.11.7.40.9.2.4.1.3
Our episode about pg_stat_statements https://postgres.fm/episodes/pg_stat_statements
PostgreSQL 13.0 release notes https://www.postgresql.org/docs/release/13.0/
track_planning causing performance regression (thread on hackers during v13 beta) https://www.postgresql.org/message-id/flat/2895b53b033c47ccb22972b589050dd9%40EX13D05UWC001.ant.amazon.com
Our episode on 4 million TPS https://postgres.fm/episodes/four-million-tps
Observer effect in pg_stat_statements and pg_stat_kcache (Postgres TV hacking session with Andrey and Kirk) https://www.youtube.com/live/wHMNX-fHb2A?si=DPgmrPaSpPF6DxuS

~~~

What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!

~~~

Postgres FM is produced by:
Michael Christofides, founder of pgMustard
Nikolay Samokhvalov, founder of Postgres.ai

With special thanks to:
Jessie Draws for the elephant artwork
Transcript
Hello and welcome to Postgres FM, a weekly show about all things PostgreSQL.
I am Michael, founder of pgMustard, and this is my co-host Nikolay, founder of Postgres.ai.
Hey Nikolay, what are we talking about today?
Hi Michael, let's talk about performance cliffs, and one case in particular: track_planning.
The reason I brought this to our attention is that over the last few years I've observed several
strong cases in various production systems where the lack of planning-phase tracking
caused huge effort to be invested in troubleshooting. And, you know, our best example — we've talked about this many times.
During planning, Postgres locks all tables and all indexes involved with AccessShareLock,
even those indexes which will not be used.
It still locks them, right?
And we talked a lot about lightweight lock contention issues in the lock manager, right?
And it happens during planning.
And if we don't have track planning,
pg_stat_statements.track_planning, enabled,
it's not visible to our top-down query analysis.
We cannot see which queries spend a lot of time in planning and a lot
of buffers maybe, right? And so on.
Yes. And crucially, not only is pg_stat_statements not
on by default — although a lot of people have it turned on —
but pg_stat_statements.track_planning is off by default,
even when you enable pg_stat_statements.
So most folks that enable pg_stat_statements
don't enable track_planning,
including most cloud providers.
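As a rough sketch of what turning it on involves (assuming pg_stat_statements is already in shared_preload_libraries and you have the rights to change settings; on managed services this is a console parameter, where it is exposed at all):

ALTER SYSTEM SET pg_stat_statements.track_planning = on;
SELECT pg_reload_conf();  -- this parameter does not require a restart
SHOW pg_stat_statements.track_planning;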
So let's zoom out, actually.
This is a good point.
By default, Postgres presents only poor metrics for query analysis.
If you have track_io_timing enabled,
in pg_stat_database you see I/O timing, right?
You have a number of transactions.
You can understand how many transactions per second,
not queries per second.
It's like Postgres doesn't track it.
And that's it.
So you don't understand throughput.
You don't understand average latency in general.
It's super hard, right?
And even at the highest level, like my whole database, how is it doing?
How many TPS?
How many TPS is fine?
How many QPS, queries per second, and what about latencies?
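To illustrate how little is there out of the box, here is a hedged sketch of the database-wide view (column names from pg_stat_database; the I/O timing columns stay at zero unless track_io_timing is on):

SELECT datname,
       xact_commit + xact_rollback AS transactions,  -- only transactions, not queries
       blk_read_time, blk_write_time                 -- populated only with track_io_timing = on
FROM pg_stat_database
WHERE datname = current_database();
-- Sampling this twice and dividing the delta by the elapsed seconds gives a rough TPS;
-- there is no per-query breakdown and no QPS at this level.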
But if you want to go deeper and understand which groups of queries,
which parts of the workload, are responsible — like, where do we spend time, right?
Or, for example, do some work with the buffer pool.
To see that, you don't have anything
by default. You need to install
the pg_stat_statements extension.
Right? And this
is actually what
most people I know do.
This lesson has been learned, and
pg_stat_statements
is maybe the most popular extension
because of that, right?
Definitely.
And we did have a couple of episodes about pg_stat_statements.
It's hard to say that this extension is not good, right?
It's really valuable.
And there are opinions that it should be already...
It's time for it to go to the core engine
because it sucks that we install this extension
and Postgres needs to parse,
analyze query once again,
just for metrics, right?
So it should be part of core.
Yeah.
But it's already a non-default situation
when you have good query analysis.
And it has problems, as we discussed many times.
It doesn't track failing queries and so on.
It's another story.
But next step, okay, we install pg_stat_statements.
We see actually kind of all queries.
Well, okay, the top 5,000 by default, right? It has the pg_stat_statements.max parameter, which controls how many normalized —
aggregated, or in official terms, normalized —
queries, without parameters, are tracked.
But then, at some point... and actually I think
many folks don't understand this. Honestly, I
didn't feel it as deeply as I do now, until the last couple of years, maybe.
We track only part of Postgres's work.
We track only execution.
We don't track planning.
And planning can be a huge headache if the workload is heavy and the machine is huge, and so on. So in heavily loaded projects,
not tracking planning means — very roughly — you can be in trouble in
50% of production cases when you need to investigate what's happening, or what's causing
high CPU consumption, for example.
But pg_stat_statements doesn't know; it doesn't see it.
It doesn't see the planning phase at all,
only the execution phase.
So it feels very weird.
It's like, okay, we install pg_stat_statements,
we tell all people who have OLTP to install it,
we know the overhead is quite low,
and we will discuss it in more depth in a moment.
But then we say, okay, actually,
the pg_stat_statements extension you installed,
it's like half of it, half of an extension,
because it's only about execution.
But planning is a super important phase,
and we don't have it.
So all folks who have pg_stat_statements — most of them, all folks who haven't changed
its settings — have only part of the solution.
Yeah, I would probably make the case
that it's a bit more than half. I think planning in general accounts for a much lower proportion of performance issues than execution does.
In my experience.
But I do take your point that it's not 0% planning related.
So even if it's 80-20 or 90-10, there's still a whole category of issues that we're not spotting by having this
off. And I think there are a couple of extra things. I don't think it's just heavily loaded
systems. I've seen planning-dominated queries that are analytical in nature, where it's just
somebody trying to diagnose why a single query is slow, and that has been hidden as well, if you're
just looking at things like... This is just a just-in-time comment, actually.
Well, there's just-in-time compilation as well.
But, yeah, good.
No, but sticking to planning, I think there are a few confusing things.
And I see this mistake made by all sorts of quite expert people as well,
when they might be looking at a single query plan,
and they say this query plan only takes 100 milliseconds, but there's 10 milliseconds of planning as well. They're
only looking at the execution time; they don't realize that the planning is in addition to the
execution time. They should be saying it's 110, or however they should be summing those two.
And the same is true in pg_stat_statements: we should be summing the two, and we can't sum the two if it's off. Or we can — it just
mistakenly tells us that there's zero planning time, where it's actually just not being tracked.
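For a single statement, the same distinction shows up in EXPLAIN output; a small illustration with a hypothetical table:

EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM orders WHERE id = 42;
-- ...
-- Planning Time: 10.123 ms
-- Execution Time: 100.456 ms
-- The total cost seen by the client is roughly the sum of the two, not just Execution Time.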
And pg_stat_statements had only metrics called, like, total_time and so on before Postgres 13. And in 13, they were renamed. They were renamed,
right? It never tracked planning, right?
There was no planning. It was only execution, but the naming made it clearer.
Yeah.
And in 13?
In 13, they were renamed to total_exec_time, mean_exec_time, and so on.
And there is another group of metrics: total_plan_time,
mean_plan_time, and so on.
And it means that, okay, we now have the setting, but it's not on by default.
Yeah, exactly.
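With the parameter on, the planning columns sit right next to the execution ones; a minimal sketch of comparing them (Postgres 13+ column names):

SELECT queryid,
       left(query, 60)                    AS query,
       calls, plans,
       round(total_exec_time::numeric, 1) AS total_exec_ms,
       round(total_plan_time::numeric, 1) AS total_plan_ms,
       round(mean_plan_time::numeric, 3)  AS mean_plan_ms
FROM pg_stat_statements
ORDER BY total_plan_time DESC
LIMIT 10;
-- With track_planning off, plans and all *_plan_time columns simply stay at zero,
-- which is easy to misread as planning being free.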
And it means we have a second lesson to learn.
Should we enable it?
We say install pg_stat_statements.
Don't go without it.
It's the number one recommendation in terms of query analysis.
Always have pg_stat_statements.
I remember when it appeared, some rumors said,
okay, it adds like 7% of overhead, but it's worth it. Well,
as we can see now, it's actually a very small overhead under normal circumstances. But we
had an episode — remember, four million TPS — showing that in an edge case the overhead can be
drastic and can drop performance. It's like a performance cliff.
But I doubt normal projects reach that point. Everyone should be aware of that
overhead, but again, it's a cliff. It doesn't come immediately; it comes only when
you reach a lot of queries per second for a particular query ID, right?
Yes, and I think this suffers.
Tell me if you think I'm wrong.
I think the problem here is that the easiest way
to stress test Postgres,
or the easiest way to do a quick benchmark
is using pgbench, which has this exact pathological...
Let's not give away all the secrets immediately.
Okay, okay.
Let's talk about, like, the problem.
pg_stat_statements is the number one recommendation for query analysis — installed. Everyone agrees, good.
There is some overhead; we don't see it, like, it's not huge. Okay. Well, we saw it can be huge,
but only with some pathological workloads, right? And we will discuss it.
Now, question of this episode, basically.
Should everyone start saying "enable track_planning", right?
First there was the default situation, when Postgres doesn't have anything,
and then we install the extension — okay, everyone learned this lesson.
It's kind of solved.
Should we consider
the second lesson similarly and tell everyone to enable track_planning? Because this gives you
the full power of pg_stat_statements now. What do you think?
Yeah. My current opinion — and I'll be happy to adjust this as the episode goes on — is I would enable this very early in a
project these days. And doing research for this, I did come across some interesting things I
didn't realize, including why and when the default was made off. But yeah, I would say:
when you've not got high volume, high load, turn it on while you've not got high load,
and then reassess if you end up with a pathological workload.
I don't think... I understand this approach, and usually it's good. For example, if we had some
overhead which would grow, like, monotonically with... I know.
Maybe linearly, or somehow with our workload.
I would understand this approach, because you take these weights
and go to the gym with additional weights all the time,
so you get used to it.
But in this case, it's very different.
We don't have any noticeable overhead,
as we saw from benchmarks. We don't have it,
and it happens only in very, very, very extreme situations, right? Let's talk about
history. I just learned about what you found right before our recording, so I was super surprised.
Tell us more about it. Very interesting. Yeah, so one thing — an easy thing — I do in preparation
for most episodes is just check when the feature was introduced, what was the commit, like, why,
what was the discussion around it. And I noticed it was in version 13 that it was introduced, and
I went to the version 13 release notes to see what was said, kind of as a high-level summary.
And the release notes now have these awesome little links to the
commits, even for old versions. Well, 13 had it. I knew they did it for the latest version, and I was
surprised to see them in 13, but very pleasantly surprised. So thank you to whoever did that. That's
great.
Yeah, yeah. So for every item in the release notes, we can quickly trace the discussion, commits, and so on, right?
That's great.
Yes.
Or you can go to the commits, and then from the commits, you can go to the discussions.
In most cases.
Yeah, in most cases.
But often new features, especially simpler ones like this, like a new parameter, they'll only have one commit.
This one had two, which piqued my interest immediately,
so I opened them both. And the first one was pretty normal — it made sense that this was
added. And then the second one — that was when I realized, oh, it was during the beta, it was
during the beta phase for 13. Somebody had reported a performance issue with having track_planning on,
and turning it off made the performance go back to exactly how it was in 12. So basically they had a
regression according to a synthetic benchmark, and then asked: could we turn it off? And it
was pretty unanimous. I think it got a few replies, and all the replies
were in favor of turning it off for the version 13 release. And, as far as I could tell,
it's not been revisited since.
Yeah. So I now understand much better what's happening here.
Let me unwrap it. So what happened: it would obviously be good if it was enabled for everyone by default, right?
But then it was learned that there is performance degradation, 45%.
Okay, it doesn't matter, actually.
It can be 90% in some cases.
It depends on your machine and so on.
And it was related to spinlock contention in pg_stat_statements. And this is exactly what we have recently observed
in our benchmarks using our AI workflow,
which we have also discussed several times.
I wanted to say how I see our benchmark workflow
for kind of synthetic benchmarks, pgbench and so on.
We built a lot,
and we collect like 80 artifacts for each run;
a very comprehensive configuration for everything is stored,
we can iterate, and I see LLMs as just a kind of oil.
So eventually we should have a well-oiled machine, right?
Engine. But the main part is not in the LLM. It just makes it easier to
iterate. So we wanted to check the overhead. And we checked it. So pgbench, which was also used in
this research mentioned in the mailing list — pgbench by default is not limiting TPS.
But under normal circumstances in production,
of course, we don't run like that.
So this is not just a load test.
It's the kind of load test which is called a stress test.
So we are on the edge.
We are checking what's happening on the edge.
And in production, we don't normally have that.
You don't want to run at 100% CPU in OLTP, usually, because you will experience
various interesting phenomena — phenomena like this, basically, right? So, spinlocks. And we checked it, and we took very big machines — I think 192 cores, with
fifth generation in Google Cloud — still spending Google Cloud credits, right —
fifth generation of Intel Xeon Scalable, almost 200 cores, a lot of memory. It doesn't matter, because we also took a small
scale, scale 100, which means only 10 million entries in pgbench_accounts. And we started — first we started
with not select-only: no difference. If you use a read-write workload, you don't see a difference. But
once you switch to the select-only workload, you quickly
observe it. So, what we did — and in our show notes we will have links to
all the details, with all numbers, reproducible, with all pictures. So we start with just one client, and 10, 20, and so on, 100, and so on, until 200 clients.
And we expect that we should basically grow in terms of TPS, right?
Because more clients, more TPS.
And Postgres these days scales to many cores quite well.
But we quickly saw that, with it enabled on this machine —
quite a powerful machine, right?
We enabled pg_stat_statements.track_planning —
the peak is reached between 30 and 40 clients, very early.
And then TPS goes down.
And without track_planning, it's a similar picture actually, but it comes later, at like 70-80 clients — two times to the right, two times more clients, and TPS is also higher.
So first the lines are together, but once the peak is reached and we go down, we see roughly a two-times difference:
the peak is reached two times sooner, and TPS is two times lower. Interesting, right?
But looking at this, I was thinking: okay, why is the peak, even without track_planning, reached much sooner than at 196 cores — vCPUs, I think?
Because normally it should be there.
How many cores we have, this is our maximum,
most optimal point of load.
There are nuances, because we run pgbench on the same machine,
so we limit the number of threads to 25% of the cores,
so we couldn't have all CPU consumed by clients, basically, right?
So anyway, then I recalled our February tests — we had a podcast episode about it,
the one about 4 million TPS.
We needed to remove pg_stat_statements to reach the maximum workload.
And I remember when we removed it, peak shifted to normal position,
closer to number of VCPUs we have.
So what does it tell me?
Okay, I'm thinking, oh, it looks like pg_stat_statements has some significant overhead,
but when we enable track planning,
this overhead doubles.
And then, since our automation collects flame graphs, and pg_wait_sampling analysis and so on, we quickly identified that there are indeed spinlocks.
But the same spinlocks are present when you just use pg_stat_statements. This is the
most interesting part, right? So on flame graphs we see — and I think I should
publish the report, if I have it already. Please, let's have it in the show notes, linked to the report. So in flame graphs, we see that,
without track_planning enabled, for this particular workload,
we have spinlock contention, and there is
a very wide s_lock function
inside pgss_store. pgss_store is the function
which saves the metrics.
So what's happening?
To pg_stat_statements — that's the pgss, right?
Yeah, yeah.
And I would expect, remember in February,
I expected, I wanted to quickly reach 1 million TPS
and go further.
And I know it was achieved by Alexander Korotkov
in 2016, like eight years ago.
So I was very surprised I couldn't reach it easily.
And only when I removed pg_stat_statements did I reach it.
So what's happening here is, with pg_stat_statements used —
not track_planning, just pg_stat_statements —
if you have such a weird workload that it's just a single query ID,
and it runs at hundreds of thousands of TPS —
QPS in this case, queries per second. For a particular machine, it can be lower. Actually,
we reproduced the problem on eight-core machines. This is a super interesting point as well.
Yeah. Yeah. So this performance cliff can come to you on small machines as well.
But wait, wait.
How many concurrent connections?
I guess it comes sooner.
Yeah, well, it comes sooner.
I don't remember off the top of my head,
but I know this problem can happen sooner.
So the idea is, if we have limited resources, and transactions —
these queries are so fast, I think it was like 15
microseconds or so, which is super fast, a primary key lookup — they all fight to update metrics
in a single pg_stat_statements record. And a spinlock is required for that. That's why we see spinlock
contention, right? So it's just the observer effect as it is, right? A pure observer effect.
And pg_stat_statements can have an observer effect,
but you need to reach a lot of queries per second.
And they have to be really fast queries.
Well, if it's not fast, you won't reach a lot of queries per second.
Good point.
Because of limited resources, right?
Yeah.
So, yeah.
And this means that in reality, it's very unlikely you will see it.
Maybe, maybe, but unlikely.
It should be super fast.
Index only scan, for example.
And frequency is so high, right?
But when you look at flame graphs with track_planning enabled,
you see exactly two areas of similar width.
Both are s_lock inside pgss_store.
One pgss_store call is in the execution phase, the other pgss_store call is in the planning phase.
So metrics are saved separately.
And if you enable it, they are saved two times.
That fully explains what's happening here: if we enable it, we just move the performance cliff two times closer to us.
Well, if we sit at point (0, 0), right. If we've already shifted, it's... Yeah, so it explains why, in your synthetic benchmarks, you got saturation twice as fast.
Yes, yes.
All the pieces of the puzzle are like...
This took quite a lot.
We had four sessions of various kinds of research, starting in February with pg_stat_statements, and the idea:
let's squeeze a lot of TPS from our big machines.
But this is interesting, right? Let's think about it. So we have this pgss_store. Obviously,
I think it should be possible to find a way to store it just once. I mean, to save it once.
Well, I remember you suggested that when we were discussing privately. And since then, having read the discussions,
it's a deliberate design decision to save them separately.
I didn't know about that.
So you mentioned a while back,
we don't even track queries that fail, for example.
Well, now, in a way, we are.
Because the number of times a query was planned
doesn't have to equal the number of times it was executed anymore
in pg_stat_statements.
Once you've got track planning on,
you can see the number of times it was planned
versus the number of times it was executed.
Are you saying that if we have a failed query,
but it was planned, the planning time would be saved?
Yes.
That's my understanding from reading the,
I haven't tested this,
but that's my understanding from reading the discussions
around the design.
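A hedged way to see that divergence once track_planning is on (plans and calls are independent counters, so a gap between them can hint at statements that were planned but did not complete execution — though note that a reused cached plan can also make calls exceed plans):

SELECT queryid, left(query, 60) AS query, plans, calls
FROM pg_stat_statements
WHERE plans <> calls
ORDER BY abs(plans - calls) DESC
LIMIT 10;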
Still, I think it should be possible to optimize.
Yeah, I understand the design decision,
but if we think about performance...
Either the design would need to change or the,
yeah, exactly, but it's a trade-off.
And that's, like, I found that interesting.
I remember cases, terrible cases, which were very hard,
extremely hard to diagnose because pg_stat_statements didn't show it.
For example, merge joins — not the merge join itself; maybe a hash join was used in the end — but just considering a merge join,
the planner spent many seconds, like 5 to 10 seconds, on it.
Because sometimes it needs to check the table's actual data and see, like, min or max values for some columns, which is unexpected.
But the planner sometimes does it.
And if the planning phase takes too long, even if the merge join is not chosen eventually,
we don't see it, right?
Lock manager contention, right?
Sometimes with JSONB values, we have something like planning is long
and consuming a lot of CPU, but we should see it.
We should see it, right?
And pg_stat_kcache, which tracks physical metrics —
people who are, I don't know, on
self-managed Postgres can use it, and it's great.
It also has track planning, actually.
The same-
Oh, cool.
Same parameter, right?
Oh no, off by default.
It's off by default, yes.
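If you do run pg_stat_kcache, the analogous setting (assuming a version of the extension recent enough to support it) can be flipped the same way:

ALTER SYSTEM SET pg_stat_kcache.track_planning = on;
SELECT pg_reload_conf();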
So now we understand the design decision
and we understand that some benchmark
checked a very, very... edge case — I would say maybe even a corner case,
because you have unlimited TPS, and you have select-only with a single query ID.
It's a corner case, not just an edge.
We have two edges coming together here.
It's a corner case.
So the likelihood of having it in production is extremely low.
And the decision, in my opinion, was wrong. What do you think?
Yeah, I think I agree. I can see why it was made. I think
it's quite a startling number if you look at a 45% drop — that was the number reported
and shown. I can see why people were scared of that,
especially when it was in the beta phase. But in hindsight, especially with what I know now
based on what you've said, it seems like it would be much better off having it on
for everybody.
45% is roughly 50%, right? It's like 2x. It sounds scary though, right?
I mean, it's reasonable.
If you are sitting on the edge,
you are already suffering from the penalty
pg_stat_statements gives you.
You just double it,
because pgss_store is called twice.
That's why it's roughly 50%.
But the conclusion here should not be
"let's not enable it by default".
Let's put a note in the documentation that pg_stat_statements is sensitive to cases
where it's a single query ID with a lot of high-frequency queries.
I think there is that note, yeah.
If there is, it's good.
Then track_planning just doubles this penalty, and that's it. But it's
not happening under normal circumstances. So, yeah. Good.
It's a really good point that track_planning isn't the thing causing this; track_planning makes it twice
as bad, but twice as bad is quite a small multiplication factor when you're talking
about such an extreme case. Yeah, it would be like, if
we imagine we had twice as... you know that the transaction ID wraparound is a couple of billion —
or a couple of billion in the positive direction. If it was four billion in the positive, that would make a bit
of difference, but not a huge difference. In these kinds of extreme cases, twice isn't that bad.
So I have two separate questions here. Can we have a single pgss_store call instead of two?
And the second question: should we recommend enabling it to everyone, while understanding that it
doubles a problem of pg_stat_statements which is already present? It's there already, but you just don't feel it.
You don't feel it, for example, on that machine, almost 200 VCPUs.
You don't feel it unless a single query ID has like 200,000 calls per second.
200,000.
It's insane, right?
This is the edge. It's super far away. Yeah.
And without track_planning, okay, it's four hundred thousand. So by not enabling it, you're just
saying, okay, instead of two hundred thousand we will have four hundred thousand. But
looking at your statistics, you see, okay, our maximum calls per second is what? 1,000, 5,000, 10,000.
It's already a lot, right?
And usually we have different kinds of problems with such queries,
which are exactly present during planning time
because of locking, right?
It's already a problem.
Yeah, such a good point.
And I wonder, if we have a high-frequency query —
a single query ID at like 10,000 calls per second,
a primary key lookup —
in many cases, maybe in most cases,
we say, okay, the lock manager
can come after you, right?
Let's cache it.
I don't know, like prepared statements,
or maybe a PL/pgSQL function and
indirect caching, and so on. Let's avoid
planning.
Like, planning will be okay, right?
I haven't checked, but that would be an interesting test.
I actually don't know.
Yeah, you could do the same test again, but with prepared statements.
It would be interesting to see if track planning is updated at all.
Presumably, it's not doing planning, so it shouldn't do that.
That's a great point.
I'm going to check it because, as I said,
with our almost well-oiled machine,
I will just ask it to repeat the experiment
with -M prepared,
and we will see.
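As a hedged illustration of why prepared statements might sidestep this (the table name is hypothetical, and the behavior depends on the plan cache deciding to reuse a generic plan): once a cached plan is reused, no planning happens for that execution, so presumably nothing would be added to the plan-time counters.

PREPARE lookup (bigint) AS SELECT * FROM accounts WHERE id = $1;
EXECUTE lookup(1);                          -- planned and executed (custom plan)
SET plan_cache_mode = force_generic_plan;   -- force plan reuse for the experiment
EXECUTE lookup(2);                          -- builds the generic plan once
EXECUTE lookup(3);                          -- reuses it; no planning phase for this execution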
And basically, it should have an effect like we disabled it,
but we keep it enabled, right?
With a pooler maybe, with some way of keeping the sessions — remember last time it was doing the...
would it need to be that or not? I actually can't remember. Maybe that would be
fine in this case. In this case it will be fine, because the problem is the other way around: pgbench maintains the connection here.
What we discussed before is that in reality, if the connection is not maintained, we connect again and then we don't have the cache.
So reconnection, okay, I will double check this area.
It's a good point.
And additional research to make the picture complete.
But my gut tells me it should be enabled in many cases,
but we should be aware of spin lock contention,
which can happen if we have high frequency.
Honestly, I'm thinking of trying to find some rule, maybe in monitoring.
For example, we know this machine has this number of vCPUs,
and we can quickly check how many calls per second — even if track_planning is not enabled, we can check how many calls per second we have for
the most frequent queries. Yeah, very high frequency — order by calls descending. Order by calls, exactly. Yeah.
So the top N, like top 10 by calls, and how big is that? This number of cores, this
level
of QPS — just
roughly estimate how far
we are from the performance cliff.
Maybe it's possible
to do this for modern...
Of course it depends on many things.
It doesn't depend on the plan, I think.
Because saving
happens only once, I think.
It depends not on duration, but on frequency.
And, of course, on resources — the type of processor you are using.
So I think it's possible to have some kind of prediction of how far we are from this.
And if we are really far, let's enable track planning.
This is my point.
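A rough sketch of such a check (an assumed approach, not something from the episode verbatim): diff two snapshots of pg_stat_statements and compare the hottest query's rate against the number of vCPUs.

CREATE TEMP TABLE pgss_snap AS
SELECT now() AS ts, queryid, calls FROM pg_stat_statements;

SELECT pg_sleep(60);

SELECT p.queryid,
       (p.calls - s.calls) / extract(epoch FROM now() - s.ts) AS calls_per_second
FROM pg_stat_statements p
JOIN pgss_snap s USING (queryid)
ORDER BY calls_per_second DESC
LIMIT 10;
-- If the top query does tens of thousands of calls per second on a machine with a
-- modest number of vCPUs, you are getting closer to the cliff described above.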
Meanwhile, let's think maybe it's possible
to save it just once
and reconsider decision-making.
I'm really thankful you did this research.
It's a piece in my puzzle.
I'm still collecting pieces of it.
Cool.
So enable it, but it will double the overhead, so
be aware of the overhead. This is the bottom line for pg_stat_statements. Yeah. If you
ran a managed Postgres service, would you enable it for
everybody by default at the beginning? Well, this is another piece of great info you found.
We almost forgot to discuss it.
You found that many...
Well, it's not a surprise to me.
You sent me the list of managed service providers
and what's possible, what's not there.
It's not a surprise for me that most have it off by default.
The surprise for me was that Google Cloud
doesn't even allow changing it.
Not just Google Cloud,
quite a few of the providers.
Crunchy, Crunchy Bridge, right?
Well, yeah, it's definitely not a full list
and I'm guilty of testing more of the ones that make it easy
to spin up new instances. But let's drop some names here. RDS: off by default.
Should we go the other way around — the other direction being the ones that do make
it configurable? Yeah. RDS is, I think, still by far the most popular. Off by default, but configurable.
Bear in mind, pg_stat_statements in RDS is on by default.
So they do change some defaults around this, for example.
Most of these provide...
In fact, I can't remember the last time I checked a provider
that didn't have pg_stat_statements on by default.
You mean on by default?
You mean it's in shared_preload_libraries,
or it's created in, like, the template database?
Yeah, it starts tracking queries without you doing anything.
You don't have to create extension or whatever the normal thing is to do.
I don't remember.
Almost all providers — because they rely on it for their monitoring — present you the output of pg_stat_statements, for most of these.
So it's on by default, but track planning isn't.
That's true for almost all of the ones that I tested on.
But I think Timescale...
Timescale is champion here, yeah.
Well, it's configurable, but it's also on by default,
which was, I think, the only one that I found
that had it on by default.
There probably are others.
I didn't check them all.
It's quite time-consuming.
That's great.
I love this decision.
Yeah, it's great.
But a few of the others made it configurable, which is nice.
I was just surprised that at least one of the major ones doesn't even allow it.
Yeah, Google Cloud SQL doesn't even allow changing it.
And you gave up trying to check Azure.
Yeah, I had limited time, and the UI... I don't know.
I need to talk to somebody there to teach me how to use it or something
because I was really struggling.
Yeah, I understand that.
So, yeah, our recommendation,
for those who are listening from those teams —
Cloud SQL, Crunchy, and Supabase — guys, make it configurable.
It's not normal that it's not.
I would say, consider making it on by default.
Yeah.
But at the bare minimum, it should be: let users decide, right?
I think so. And I think normally when these
providers change a default, they'll change it for new instances only, where I think there's just
such low risk. So I like that approach of, you know, if people create new clusters or new instances,
have it on by default. It will help people, when they come across planning-time-related issues,
diagnose those issues much quicker, much easier,
whether that's on their own
or once they pull people in.
Yeah, yeah, yeah.
That's for sure.
Yeah.
So good.
I hope we brought some food for thought
to those who are listening to us.
So, yeah.
Thank you.
Nice one, Nikolai.
Yeah.
Thank you.
Take care.
Have a good week.
Bye.
You too.
Bye-bye.