Postgres FM - Why isn't Postgres using my index?
Episode Date: February 23, 2024

Nikolay and Michael discuss a common question — why Postgres isn't using an index, and what you can do about it!

Here are some links to things they mentioned:

Why isn't Postgres using my index? (blog post by Michael) https://www.pgmustard.com/blog/why-isnt-postgres-using-my-index
Why isn't Postgres using my functional index? (Stack Exchange question from Brent Ozar) https://dba.stackexchange.com/questions/336019/why-isnt-postgres-using-my-functional-index
enable_seqscan (and similar parameters) https://www.postgresql.org/docs/current/runtime-config-query.html
Crunchy Bridge changed random_page_cost to 1.1 https://docs.crunchybridge.com/changelog#postgres_random_page_cost_1_1
Make indexes invisible (trick from Haki Benita) https://hakibenita.com/sql-tricks-application-dba#make-indexes-invisible
ANALYZE https://www.postgresql.org/docs/current/sql-analyze.html
Statistics used by the planner https://www.postgresql.org/docs/current/planner-stats.html
Our episode on query hints https://postgres.fm/episodes/query-hints
transaction_timeout (commit for Postgres 17) https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=51efe38cb92f4b15b68811bcce9ab878fbc71ea5
What's new in the Postgres 16 query planner / optimizer (blog post by David Rowley) https://www.citusdata.com/blog/2024/02/08/whats-new-in-postgres-16-query-planner-optimizer/

What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!

Postgres FM is brought to you by:
Nikolay Samokhvalov, founder of Postgres.ai
Michael Christofides, founder of pgMustard
With special thanks to:
Jessie Draws for the amazing artwork
Transcript
Hello, hello, this is PostgresFM episode, I don't remember the number, and today I'm Nikolai, and this is Michael. Hi, Michael.
Hello, Nikolai. I think 85.
Well, it's honestly, it's already at this point, it doesn't matter already.
When we get to 100, it will matter.
Yeah, wake me up, yeah, at 100.
So today I'm in the passenger seat, and Michael is going to talk about, I will join of course,
but from the passenger seat, about why the planner behaves not as expected, not choosing the index
I have prepared and how to troubleshoot that, right?
Yeah, yeah.
This was my suggestion.
So it's something I see a bunch and I got reminded again yesterday. I
happened to be on LinkedIn of all social networks. It's surprisingly one of the less, well, one of
the slightly nicer social networks to be on these days, which I would have been shocked by a few
years ago. But I saw a post from Brent Ozar, who I remember from my Microsoft SQL server days,
and has been recently getting into
Postgres things, which is cool. I think it'd be a great addition to the community. And he posted
what I thought was going to be a blog post, and he cheekily said, why isn't Postgres using my
functional index? And it turned out it was actually a Stack Exchange question that he had posted, and he had included links to an Aurora database,
which is not Postgres, but Postgres compatible, which is fun. And he was genuinely asking,
why isn't it using my index? And I'd noticed it was only posted, like, sometimes on LinkedIn,
you see posts from like three weeks ago, but it was only posted about 20, 25 minutes ago or
something. I thought, oh, you know, there are no answers to this, this is something I generally know about, I've written a whole
blog post on, so maybe I can help here. I had a look and it just reminded me that this is a
remarkably common issue for beginners, for a bunch of reasons, but even some quite expert database users sometimes get confused as
to why they've got an index that Postgres should be using, or they think they do. And for some
reason, Postgres is either, either can't use it, and they're expecting it to be able to, or
doesn't think it will be faster and is choosing a different plan. So yeah, I've seen this a bunch of times.
I feel like there's probably about 10 different reasons it could be,
which is, I think, slightly,
people don't realize quite how many reasons there could be for this.
So I thought it might make an interesting discussion.
Yeah, well, yeah, I remember this struggle.
Now I know how to troubleshoot it.
But I remember this struggle I had many times in the past.
It's not easy sometimes to understand why a planner behaves in one way,
not as you expected.
And sometimes it upsets you a lot and you start thinking,
oh, it's stupid.
I saw also people started blaming Postgres a lot after such events.
They think like, oh, I want control.
Also, this is where you probably want to start wanting this hinting which Postgres lacks, right?
Well, it has it, but it's not like standard.
Yeah, I think we probably will get to that.
You said you now know how to solve it.
Out of interest, what's the first thing you'll do?
Well, first thing I'll do, I'll try to understand, okay,
this is the moment.
Usually I don't pay attention a lot,
but this is the exact moment when I start paying attention
to cost metrics in the EXPLAIN (ANALYZE, BUFFERS) plan.
So the idea is I try to understand,
is it a really huge number or like it's relatively
small or tiny? And then I think, okay, based on my experience, and here actually it helps if you
have some experience, right? Based on my experience, if my index was used, what would be this cost,
right? And then like, basically I already explained my algorithm.
So, okay, I see some cost. I wanted to say actual cost, but the word actual we must reserve
for execution, not for planning. But this is the cost you see in the root of this tree of
query execution. Actually, we don't need analyze here, right?
We can just say EXPLAIN and it's enough.
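For readers following along, here is a minimal sketch of what that looks like; the table, the query, and the numbers are made up for illustration:

```sql
-- Plain EXPLAIN shows the planner's estimates without executing the query.
EXPLAIN SELECT * FROM orders WHERE customer_id = 123;

--                          QUERY PLAN
-- ---------------------------------------------------------------
--  Seq Scan on orders  (cost=0.00..25000.00 rows=48000 width=72)
--    Filter: (customer_id = 123)
--
-- cost=0.00..25000.00 reads as "startup cost .. total cost"; the total
-- cost on the root node is the number being compared here.
```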
For the total cost that you're talking about,
I guess you're talking about,
they're called startup cost and total cost.
I guess you're talking about the total cost on the root node.
In the root, yeah, yeah.
We just like, we have the plan.
We see our index we hoped would be used.
It's not present there.
And we see a different index or maybe a sequential scan, and we think, oh, what's happening here? And so I see the total
cost. I'll probably also check the scan node where data access is happening
and the cost there as well. It's also important. And then I think, oh, if my index was used,
what would be the cost? And since I'm experienced, I'm sometimes already understanding what's happening.
But if I have doubts,
I just start playing with these rough knobs,
enable_seqscan, enable_indexscan,
enable blah, blah,
to put penalty on specific access methods, right? So to specific steps.
Yeah, scan types.
Right, scan types.
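A rough sketch of those knobs, reusing the same hypothetical query; the settings are real planner parameters and only affect the current session:

```sql
SET enable_seqscan = off;   -- doesn't forbid seq scans, just makes them look extremely expensive
EXPLAIN SELECT * FROM orders WHERE customer_id = 123;
-- Compare the new total cost with the one from the original plan.
RESET enable_seqscan;

-- Other knobs in the same family: enable_indexscan, enable_bitmapscan,
-- enable_indexonlyscan, enable_nestloop, enable_mergejoin, enable_hashjoin.
```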
And in this case, very often I start seeing what I expected
and I see, oh, this would be the cost.
And I see the difference.
And sometimes it's very small.
So we were very close to choose
my plan, but it was not chosen just because
and then
super often, very often
it's new system and
I ask people, what random_page_cost
do you have?
Right? Because
sequential scan, our index is not used
and random_page_cost is four.
Recently, some Amazon guys, they got consultation with me,
and this is exactly what happened.
I asked what random_page_cost they had.
They said four.
Remember Crunchy Bridge, after listening to us, changed it to 1.1,
and they also published good benchmarks and it's still
my to-do to revisit those benchmarks
because I had tendency
to set random page cost to 1
but they said
1.1 is better actually. According to our
benchmarks and they had some methodology
interesting but it's slightly off topic.
So for those listeners
who don't know what random page
cost is is how the planner thinks about random access to data.
And by default, it expects you to be using magnetic disks, rotational disks.
And this is not normal in 2024, definitely. And I, like... I'm not... I don't know if Aurora and
RDS still have four. That cluster may have been created long ago, but if they still do have this, this is
another question to them. We had the question about CPU and the green color in Performance Insights last
time, but today we have a new question we need to check. I haven't created
clusters for a long time.
Yeah, last time I checked, most of them... there were a couple of good
exceptions, like ScaleGrid and now Crunchy Bridge, but there aren't many that have tuned it, last time
I checked, which was admittedly quite a while ago.
This is super strange. And also, Postgres community, it's time to revisit this. And
we need this mostly for new clusters. You shouldn't think too much about old clusters, they
already have postgresql.conf in place and everything there. So change it.
And I would say, why are we
optimizing for the few people that are still running on magnetic disks instead of the vast majority who are running
on SSDs? It's time
to change, I think, for the default.
I might
be, again, taking too much
time for a passenger, but let me
just feel this, that
maybe some people don't understand what we are talking
about. I will be very short.
There is seq_page_cost and random_page_cost. seq_page_cost means
the cost of sequential data access. And it's one. If you change it, you change the scale. I have cases
when people change it. But it's one. It's our baseline. It's one. And random_page_cost by default in Postgres is four. It expects that random data
access is four times more expensive than sequential page access, which is not so if you, for example,
have all data fitting in memory, cached, or if it doesn't fit but you have non-rotational disks, SSD, NVMe SSD,
something like modern disks.
And most of new databases
have modern disks, of course.
And Aurora, I think,
they have very good storage.
So obviously it's not rotational.
I think maybe I'm wrong, actually, right?
It should not be rotational.
It's very slow.
I mean, throughput is terrible, right?
Okay, so why does it matter
here? Because if
you have one and four default settings,
one sequential and four
random access, it's four times more
expensive. The planner
quite often thinks, oh, sequential
scan is not that bad. I would
prefer sequential scan over
index access if I need to fetch
a lot of rows, right?
Because it seems to be cheaper for me.
Once you shift it to normal, 1.1 is normal these days.
This is common understanding.
And CrunchyBridge have good benchmarks.
Let's attach the link.
So in this case, you tell the planner, actually, sequential access,
random access, they're almost the same.
And index scan
starts winning.
Yeah, or at least it's only a
10% penalty instead of a
300% penalty, which is
a huge difference once you start returning
a bunch of rows.
For a single, yeah.
Which happens quite often, there's quite a lot.
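A sketch of checking and experimenting with this setting; 1.1 is the value discussed here, not a universal rule, and the query is the same hypothetical one as above:

```sql
SHOW seq_page_cost;      -- default 1, the baseline
SHOW random_page_cost;   -- default 4, tuned for rotational disks

SET random_page_cost = 1.1;                            -- session-level experiment
EXPLAIN SELECT * FROM orders WHERE customer_id = 123;  -- re-check which plan wins

-- If the new plans look right, persist it for the cluster (or per database):
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();
```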
Anyway, I'm very happy
for you to do a bunch of talking here. It's a pretty common topic. I like that we've gone this
deep so quickly, but I do want to go back... I think it's worth going back and saying this is a great avenue
for, like you mentioned, looking at the cost. But I think there's an easy... like even for
non-experts, you can use that to your advantage, right? You can, if you have access.
I didn't when I was looking into Brent's problem.
It was a read-only user, so I couldn't do SET enable_seqscan = off,
which is the easiest way of, if you're getting a sequential scan,
flipping it to, well, basically telling the planner that sequential access
is tons more expensive than it really is.
And therefore, if it has any ability to use the index, no matter how expensive it should be, it will
then use it. If it can't, though... enable_seqscan off doesn't actually prevent the planner using
seq scans, it just penalizes them, it makes them look more expensive. So if you cannot use the index,
which a bunch of these cases are, Postgres can't use the index for some reason, or you've forgotten that you're in an environment that doesn't have the index.
Like, that is surprisingly common.
Or an invalid index.
Yeah.
Well, yeah.
I haven't seen that one as often.
Like I haven't seen it come up as often, but quite often people are like in the middle of debugging something and someone's dropped that index, or they're on staging and it doesn't have it, whereas when they went on production they
do have it. So it is worth double checking the index exists. But by using enable_seqscan off,
you can quickly check: is it a costing issue? Now it's trickier with the case you
mentioned, where it's using one index and not the one you're expecting. You can't use that.
You can disable different scan types, but you can't tell it not to use a specific index.
I have seen a trick for that, though, which I wasn't familiar with, which was: begin a transaction, drop the index, explain the query, and then roll back, which means you can try and see, if that index didn't exist, would it pick my index? Which is quite a nice trick as well.
So trying to find out, are you in the case where Postgres can't use the index, or are you in the case where it's choosing not to?
The enable parameters are a really nice way of finding that out quite quickly normally.
So yeah, cool that you start there as well, even if you're eyeballing the costs.
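A sketch of that trick, with a hypothetical index name; note that DROP INDEX takes an exclusive lock on the table until the rollback, so it's best tried on a clone or test environment rather than a busy production server:

```sql
BEGIN;
DROP INDEX some_index_the_planner_keeps_choosing;      -- hypothetical index name
EXPLAIN SELECT * FROM orders WHERE customer_id = 123;  -- which index does it pick now?
ROLLBACK;                                               -- nothing is actually dropped
```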
The one thing I would add on that is I think explain analyze can be helpful.
So if you've run that already, or if your query actually runs and doesn't time out,
then the explain analyze part is helpful because you get the rows returned.
And I think when the cost estimates are off, sometimes it's random_page_cost,
but quite often it's about the number of rows Postgres expected to be returned
and the number of rows that are actually returned.
So in fact, maybe actually you don't need to analyze because if you know the query,
you know the data, you know the query you're running,
if you're expecting only a few rows to be returned
and you see hundreds of thousands in the estimate,
you've got a pretty good clue right there
that you're in a case where it's overestimating the cost of that.
Or underestimating maybe.
Underestimating wouldn't lead to index not being used,
but anyway it can lead to wrong plans.
Yeah, that's a good point.
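A made-up sketch of the estimate-versus-actual gap being described (the plan output is invented for illustration):

```sql
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 123;

--  Seq Scan on orders  (cost=0.00..25000.00 rows=480000 width=72)
--                      (actual time=0.02..95.31 rows=180 loops=1)
--    Filter: (customer_id = 123)
--    Rows Removed by Filter: 999820
--
-- rows=480000 is the planner's estimate, rows=180 is what actually came back;
-- a gap like that usually points at stale, missing, or too-coarse statistics.
```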
So this is reason number one, let's say. First reason is like costs are slightly different, but this is like already I'm talking about the most difficult case probably. There are easier
cases. I just excluded them from my mind because they are easy, right? If cost is off and indeed we check the planned rows and actual rows, if the mismatch
is huge, either it's outdated stats, probably somebody is blocking auto-vacuum or something
we need to check, or maybe lack of stats. Sometimes we don't have stats at all. It happens.
For example, if you created a functional index,
but you didn't run analyze at all yet.
That was the case yesterday.
In this case, explain analyze buffers also.
It's good.
I agree with you.
But sometimes you have the query which lasts hours.
So in this case, we need to downgrade to just regular explain.
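A sketch of that earlier point about a freshly created functional index, with hypothetical names: the expression gets its own statistics, but only once ANALYZE (manual or autovacuum-triggered) has run after the index was created.

```sql
CREATE INDEX ON orders (lower(email));   -- expression ("functional") index
ANALYZE orders;                          -- collects stats, including for lower(email)
EXPLAIN SELECT * FROM orders WHERE lower(email) = 'someone@example.com';
```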
Well, this is one of those, like, I'm obviously a big fan of explain analyze buffers like you,
but this is one of those cases where I don't think you can know, like maybe you can never know, but
I don't think you can know from a single execution plan which case you're in. We often need at least
two. We need at least two.
We need like the before and the after we've changed one of these parameters.
The first one is not going to tell us that much.
It tells us it's not using the index,
but it doesn't tell us why it's not using the index.
We get some clues like row estimates being off or like the cost number, but they're only clues.
It's only by changing something and running it again
that we can see the difference of when it is using the index.
That's why database branching and experimentation with iterations
matters so much because if you already calculate the statistics,
but then you start having questions,
what if I did something different?
You want to reset and go a different route, a different path, right? In this case, being able
to iterate, like reset in a few seconds, run it again, check this idea, check that idea. That's why
branching and fast thin clones matter so much, right?
Exactly. This is, yeah... because otherwise it's a one-way ticket.
So you already calculated statistics and that's it.
Yeah, true. Actually, you can't go back. You can
solve your problem, but you can't go back, right? And sometimes, okay, you solve the problem,
but you cannot explain in detail what happened. But this doesn't build
confidence in your team if you
cannot explain, right? And to
explain, you need to go back.
Yeah.
I like that you're using explain in a different
context now. Okay.
But it's related, right? It's related.
It is. It really is. And then you can
analyze what you're going to do next.
Right. Analyze what is overused.
Yeah.
Well, that's exactly what happened yesterday as well.
Well, there's an interesting thing here.
I jumped to conclusions a little bit.
I did spot the, it was probably an estimation error,
but I thought it might also be a data type casting issue,
which is the kind of thing that we didn't cover that in depth
Actually, we covered... the query, right? Let's call it a mismatched query, maybe. Yeah. So that's basically
if your query can't be answered by the index you have. So the typical example of this
is a function. So for example, if you're running the expression lower() on a...
yeah, an expression on a field, but you have only the plain field index, and you might think logically that
it could use the index. Like if, for example, your date... uh huh, you cast from timestamp to date,
your reduction from timestamp to date, for example. It's very common: you convert timestamp to date
and expect that an index on timestamp will
work. It won't, right? Because it's for timestamp, not for this kind of... not for your expression.
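A sketch of that mismatch, with a hypothetical orders table; it assumes created_at is timestamp without time zone (with timestamptz, the cast to date is not immutable, so only the range rewrite below would help the index):

```sql
CREATE INDEX ON orders (created_at);

-- The predicate is on an expression over the column, so the plain index
-- likely won't be used:
EXPLAIN SELECT * FROM orders WHERE created_at::date = '2024-02-23';

-- Fix 1: a matching expression index (then ANALYZE for its statistics).
CREATE INDEX ON orders ((created_at::date));
ANALYZE orders;

-- Fix 2: rewrite the predicate as a range the original index can serve.
EXPLAIN SELECT * FROM orders
WHERE created_at >= '2024-02-23' AND created_at < '2024-02-24';
```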
So yeah, in some cases Postgres handles some of these data type conversions, like text to varchar,
which I wasn't sure about yesterday, but in some cases it doesn't. So it depends whether it supports
that. But in general,
that's the kind of thing that can prevent the use of an index that you're expecting
to be used. Or, like, the operator not being supported. There's
other cases where the index type... so let's say you've used a B-tree, because that's what most of
us are using most of the time. One example is, on a text field, using ILIKE.
So like, again, a case-insensitive search operator
is not going to use the B-tree index.
Indexes only support certain operators.
And if you're using an operator that it doesn't support,
like a greater than on a hash index, for example,
or anything other than the equality on a hash index,
it won't use index, kind of obviously.
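A sketch of the operator side of this, with hypothetical names; pg_trgm is the usual workaround for ILIKE and leading-wildcard searches:

```sql
CREATE INDEX ON users (email);   -- plain B-tree

-- ILIKE (and '%substring%' patterns) can't use that B-tree, so expect a Seq Scan:
EXPLAIN SELECT * FROM users WHERE email ILIKE '%@example.com';

-- A trigram GIN index does support these operators:
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX ON users USING gin (email gin_trgm_ops);
```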
So I think there's a few cases that are quite simple
for why it can't use the index.
And then there's a bunch that are a bit more complicated
along the lines you were talking about,
which is what I've just classified as Postgres doesn't think it will be faster.
So yeah, we've said expensive and cheap a few times,
but those costs, while they're in an
arbitrary unit, the idea is for them to estimate how fast the query will be, or like how slow it
will be. The higher the cost, the slower it would be. That's the idea. And it is not the only measure
that Postgres could have tried to optimize for. Like, it could have tried to optimize for IO. And that's correlated, but it's not the same thing.
It optimizes for speed, which is interesting
and kind of leads us to what could have gone wrong.
So I actually think we didn't cover a whole case though,
like a whole simple case of,
it doesn't think it'd be faster and it's correct.
That's a case that catches
people out a bunch, like you're trying to force it to use an index but it's doing the faster route
already. For example, if you read 95% of your table, just a sequential scan might be much faster,
on the fly filtering out those five percent that are not needed, than walking the B-tree, right?
It will be probably...
Even if you balance random_page_cost and seq_page_cost,
it still might be better to use sequential scan in reality.
Yeah, for sure.
And because of how sequential scans work,
because of how efficient they are and the
fact they don't need to read data from multiple places, and if your data fits really tightly on
like very few pages, it could be a lot faster. And not even just at percentages as high as
95 percent. For small tables, with like not very wide rows,
I've seen it be as low as 30% or so and that's still faster as a sequential scan.
So I suspect you've come up with a contrived example of it being even lower.
Right.
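A reproducible little demo of that point; all names and numbers are made up, and exact plans will vary:

```sql
CREATE TABLE events (id bigint GENERATED ALWAYS AS IDENTITY, status int);
INSERT INTO events (status)
SELECT (random() * 99)::int FROM generate_series(1, 1000000);
CREATE INDEX ON events (status);
ANALYZE events;

EXPLAIN SELECT * FROM events WHERE status > 4;    -- ~95% of rows: likely a Seq Scan
EXPLAIN SELECT * FROM events WHERE status = 42;   -- ~1% of rows: likely a bitmap/index scan
```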
So in this case, I have a very simple, very basic question.
And I think in many cases, we just should start from this question when
we consider performance of a query: we should think, how much underlying data do we have,
right? And next, very high level, how many rows are returned? Then we can
unfold this and say, okay, if it's just one row, was it an
aggregate? So we need to
analyze a lot of rows for real
or just we return...
Last week I had a case.
People complained about very
slow, like very bad
behavior and they used explain analyze buffers.
They saw like some gigabytes
of data and so on.
And they knew ORM or GraphQL was involved.
I don't remember,
but they knew that they need only like not a lot of rows.
And then we just saw that like,
oh, it's bad, it's bad, it's bad.
And the question, how many rows were actually returned?
And they also like, we created all indexes.
The indexes are here, but somehow it's still sequential scans. But how many rows? And we thought like, oh, actually it
was like 10,000 rows or so, maybe 100,000 or 2,500 rows. I don't know, a lot of rows returned.
Do you really need it, or did you just forget the limit? So the limit is applied on the client side maybe, right? It's terrible. And then I said immediately, like, I'm a big fan of using, again,
maybe off topic, but I'm a big fan of using in larger projects, not in tiny
projects when you care about everything in your pet.
So queries for me, like, you know, this concept pets versus cattle, right?
Yeah.
Yeah, so for virtual machines or for real machines,
for big fleet of infrastructure.
In the case of workload, if the workload is complex,
I'm a big fan of dealing with queries like with cattle as well.
So when I saw this, it was a new client.
When I saw this, that they had this problem, returning too many rows unexpectedly, I immediately said, stop here.
Like, it's just one example of the problem.
And let's return to high-level, top-down analysis using pg_stat_statements because it has a rows metric.
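As a rough sketch of what that top-down look can be (this assumes the pg_stat_statements extension is installed; the columns come from its standard view):

```sql
SELECT queryid,
       calls,
       rows,
       rows / calls AS avg_rows_per_call,
       left(query, 60) AS query_preview
FROM pg_stat_statements
WHERE calls > 0
ORDER BY rows DESC
LIMIT 20;
```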
And let's see the whole picture: how many queries behave similarly and return too many rows. But this is exactly when it happens, when the index is not used, because you request too
many rows here, right? And selectivity is not good, it's very weak.
Yeah, and Postgres is going to do
exactly what you asked it to do if you want all of the rows it's not going to and Postgres is going to do exactly what you asked it to do
if you want all of the rows
it's not going to say let me give you the first 25
and oh, that's enough. It's going to say, nope, here you go,
here's all of them
unless it's pgVector which is slightly
uncertain right
good point
I haven't actually considered
the new index types
Yeah.
Anyway.
No more certainty in the SQL world.
Yeah.
But yeah, there's another case as well.
A big one is if you're selecting a high proportion of the table,
chances are it's actually faster to do a sequential scan.
Possibly a different index might be serving your query better. I haven't seen that one as often, but another case I've seen really
often is small tables. So even if you're only selecting one row in a table that's like fewer
than 100 rows, which might not be that common in production, I've seen quite a few like reference
tables though that are not very many rows.
Or more commonly, developer databases on people's local machines
where they just have a tiny bit of data
for a new feature.
Even if you have the perfect index,
Postgres won't choose to use it
when you don't have much data
because all the data is on one page
and it can just simply look it up
very, very quickly, very, very easily.
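A tiny made-up example of the dev-box situation; the plan output is illustrative:

```sql
CREATE TABLE countries (code text PRIMARY KEY, name text);
INSERT INTO countries VALUES ('gb', 'United Kingdom'), ('nl', 'Netherlands'), ('us', 'United States');
ANALYZE countries;

-- Everything fits on a single page, so the primary key index is likely ignored:
EXPLAIN SELECT * FROM countries WHERE code = 'nl';
--  Seq Scan on countries  (cost=0.00..1.04 rows=1 width=64)
--    Filter: (code = 'nl')
```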
So small tables are the other exception that I see more often because of dev boxes.
You know exactly how easily I can
be impatient here, trying to interrupt you, right? This is my favorite topic, and like probably I
will not add anything here, and listeners who follow us for long already know
what I would say here, right?
Because you should always
try to deal with
full-size databases.
Yeah.
Or at least like
large datasets, yeah.
If you're going to have to, yeah.
But it trips people up.
Unless you're a big fan
of hypothetical approach,
HypoPG, hypothetical indexes and partitioning.
So it also would work, but in a limited way.
Would it, though?
Because if you don't have any data, I don't think...
Ah, no, sorry.
It was only discussed, not implemented.
And there was another project,
which lets you export statistics from production,
import it to lower environment, and pretend we have a lot of data.
That's a cool idea.
Yes, it's a cool idea.
Maybe already.
HypoPG has been developed.
So definitely partitioning was covered.
But about statistics, maybe no.
But there was another project
developed in Japan I guess
and it's not super
popular but idea is cool
I think it's really good
but I expect some limitations
of this idea obviously
if you can afford testing on
full size databases this is the best way
yeah
you just need to make it cheap
and then here I stop.
Yeah, nice. Okay.
But yeah, so this is the more interesting case though, right? Like once you're past those two,
of it being, you know, actually faster... if it's wrong, if Postgres is not estimating correctly,
that's when we get into the more interesting parts of, like, you mentioned stale statistics
or not even having statistics relevant or needed.
So analyze is your friend here on the tables in question.
We've also got a couple of other tools.
We can increase the sample that Postgres will do
if you've got like an oddly distributed column.
Default statistics target.
Yeah, some people. Not even...
For one column.
Yeah, exactly. You can do it globally, but
per column...
Default 100 is also quite a lot.
It might be not enough if you have
high insert rate and constantly
working on the edge of new data
and the distribution
of values in new data
is very different compared to the archive data.
So there are things there, definitely.
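A sketch of the per-column version just discussed, with hypothetical names; 1000 is just an example value:

```sql
-- Raise the sample size for one skewed column instead of changing
-- default_statistics_target globally, then refresh the stats:
ALTER TABLE orders ALTER COLUMN status SET STATISTICS 1000;
ANALYZE orders;
```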
That's one of the defaults I'm happiest with, actually.
I think it's a pretty good sweet spot
because if you increase it globally,
you increase the time for analyze to run globally.
And I think that has knock-on effects for things like your downtime for doing certain types of major upgrades and things.
So I can see zero downtime.
And if you jumped from 100 to 1,000, yes, it will probably increase analyze time maybe two or three times, but not 10 times.
It's not linear.
Sure. But I haven't seen it cause huge problems globally. But yeah, if you've got a column that's
not in a skewed distribution, increasing it for that column can really help. And then the
final one I've got on my list was the multi-column.
Which can only be for columns inside one table.
Yeah.
It cannot be, unfortunately, for two tables,
two different columns in two tables.
It would be interesting to have maybe as well.
Sometimes one table fully depends on another, right?
I don't know.
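For reference, a sketch of the multi-column (extended) statistics being discussed, with hypothetical column names; as mentioned, they only work for columns within one table:

```sql
CREATE STATISTICS orders_city_country_stats (dependencies, ndistinct)
  ON city, country FROM orders;
ANALYZE orders;   -- extended statistics are also gathered by ANALYZE
```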
We do have a whole episode on hints,
but I think this is where the hints discussion
really comes into play. It's like,
what are your options when
we're out of
tools within Postgres to
give the planner as much
information as we can about
the distribution and what stats?
Bugs also happen.
So bugs might happen.
I mean, planner bugs or some not developed things,
not yet developed things,
some easy stuff which is not yet there.
Postgres planner is not super powerful.
It's quite powerful,
but compared to SQL Server, for example,
the code base is much smaller
and the amount of,
the number of engineering hours
invested into the SQL Server planner
are much bigger than into Postgres.
So, of course, it's evolving,
but still some things might be not yet developed,
and sometimes people see quite simple things.
To me, sometimes you think,
oh, it's so obvious.
Why it's not here?
And as an example, it's very unrelated, but it's still Postgres.
Last week, transaction_timeout was committed by Alexander Korotkov.
Many thanks to him.
And transaction timeout was my idea.
And when I came to this idea, I was thinking, am I stupid?
Am I missing something?
The whole world still didn't raise this.
I searched.
I don't see discussion.
Why transaction timeout is not present in Postgres still?
It's so many dozens of years of development.
And then a few folks validated it's a good idea, actually.
And then it got support. So if you spend time with Postgres enough, with Planner,
you might indeed see cases which are not yet well-developed
or just simply bugs. Sometimes just bugs happen as well.
Yeah.
I've spoken to several customers that have long histories with DB2, Oracle,
and they're surprised by some things
but they also talk about some issues, especially on the Oracle side, about kind of
patchwork solutions. So sometimes more code in these optimizers is not necessarily better, because
it's like patch fix on patch fix on patch fix, and it leads to quite nasty, weird behaviors.
So I do admire the elegance of the simplicity, but equally I think you're right. People I know
that know these things well, that have used SQL Server or DB2 in the past, definitely say
nice things about those planners.
Well, my opinion is also based not on my experience. My last experience outside Postgres was like very
long ago, I mean real experience, when you do develop something for long. It was more than 15
years ago. So I'm just translating other folks' opinions, but they seem reasonable, these
opinions and they are experienced guys who actually came to Postgres and said, you know, let's just admit the planner can be much, much better.
We need to continue developing things, improving things.
And in this case, let's advertise a recent blog post from David Rowley.
Yeah.
So I've noticed many items.
It was his contribution, right?
Yeah, but also many from other people too.
Yeah, yeah, yeah.
So like the post is what's new in the planner in Postgres 16.
Let's like unfold it and so on.
And it's great.
It's also in my to-do to inspect deeper and maybe do some tests.
So it's a good post, very good.
And I think what I loved most about it was it inspiring people to upgrade. Postgres 16 came out a few
months ago now, we had some minor releases, at least one, I think two now. Yeah, February. I already know
at least one company that has upgraded as a direct result of that blog post coming
out, just to see if some of their queries
have been treated as a result.
So yeah, thank you
to David. This is important to share
by the way, because other people think
like, maybe it's still too early
and so on. And you know, like
I recently was approached and just
the question, do you know anything bad
about Postgres 16?
And I said, no, actually.
It's not Postgres 9.3, no.
9.3 was terrible.
I remember it.
I think I've got a new answer for you that I think is more on brand.
I think you should say yes, but they're also true in 15, 14, 13, 12, 11, and 10.
Okay.
Yeah.
Anyway, I know what you mean.
Did you mean like you don't know anything bad in 16 that's new in 16?
Nobody bumped into some problems like complaining that it's bad.
A few companies upgraded.
I mean, maybe not huge installations, but still.
It has a good reputation so far.
And, of course, the risks are still there,
and tomorrow we might see some problem,
but these risks already go down, down, and down over time, right?
And, I mean, it's important to share.
So, like, we upgraded, everything is good in our case.
It doesn't mean in any case it will be good,
but it builds up the level of confidence over time,
and more people will be upgrading.
Nice one.
Okay, good.
Did you have anything else you wanted to add?
I always have a lot of things, but I think we're out of time.
Let's not consume a lot of time.
Of course, it's good if you run more or walk your dog more,
listening to us longer.
But still, there are other things to do.
So thank you so much.
And thank you for support.
I didn't follow really well, but we see comments on YouTube, for example.
It's good.
Please share your experience as well.
And other people also see it.
And we can highlight some comments, not only about Red Bull,
right. And, uh, or beer. It's a free advert, I'm teasing. Yeah. Um, yeah, about that, like I think
over a thousand people have listened to the last few episodes, which is pretty cool. Yeah, so thank
you all. Good. Okay, cheers. Bye-bye. Bye.