Postgres FM - Minor releases
Episode Date: May 17, 2024
Nikolay and Michael discuss Postgres minor releases: how the schedule works, options for upgrading to them, and the importance of reading the release notes.
Here are some links to things they mentioned:
PostgreSQL 16.3, 15.7, 14.12, 13.15, and 12.19 released (announcement) https://www.postgresql.org/about/news/postgresql-163-157-1412-1315-and-1219-released-2858/
PostgreSQL versioning policy https://www.postgresql.org/support/versioning/
PostgreSQL 14.4 release notes (most recent minor release not on the usual schedule) https://www.postgresql.org/docs/release/14.4/
Minor release roadmap https://www.postgresql.org/developer/roadmap/
Our last episode on upgrades (major and minor) https://postgres.fm/episodes/upgrades
All versions of Postgres https://bucardo.org/postgres_all_versions.html
Why upgrade? (Useful tool by depesz) https://why-upgrade.depesz.com/
Stop and start Postgres faster https://postgres.fm/episodes/stop-and-start-postgres-faster
WAL and checkpoint tuning https://postgres.fm/episodes/wal-and-checkpoint-tuning
Postgres CVE-2024-4317 and how to fix the system views (5 mins of Postgres by Lukas Fittl) https://youtube.com/watch?v=fLwVvJ3fKdA
Our episode on NULL https://postgres.fm/episodes/nulls-the-good-the-bad-the-ugly-and-the-unknown
What should we do for episode 100? https://www.reddit.com/r/PostgreSQL/comments/1cn8ajh/what_should_we_do_for_episode_100_of_postgres_fm/
What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!
Postgres FM is brought to you by: Nikolay Samokhvalov, founder of Postgres.ai, and Michael Christofides, founder of pgMustard, with special thanks to Jessie Draws for the amazing artwork.
Transcript
Hello and welcome to Postgres FM, a weekly show about all things PostgreSQL.
I am Michael, founder of pgMustard, and this is my co-host Nikolay, founder of Postgres.ai.
Hey Nikolay, what are we talking about today?
Hi Michael, about upgrades again, but it will be some small talk.
Just minor upgrades, easy, right?
Yeah, minor episode about minor upgrades.
Replace binaries, restart, bye-bye. That's it, episode done. This is what the documentation says, right? Replace binaries, restart. Easy peasy.
Yeah. So, well, this was my choice, and I wanted to pick it for a few reasons. One is that we just had new minor releases announced: a few days ago as of recording, about a week ago as of the time this will go out.
A planned one, right? Every two months?
Yes, yes, a planned one, not like an urgent one.
Yes, exactly.
So I thought it was a good time to bring it up. I thought there were a couple of interesting things in it, but also it was a good reminder to me that we hadn't talked about it in a while. I did look back, and the last time we spoke about upgrades it was also my idea, and it was the time when I was suggesting boring topic after boring topic. So thank you for... thank you for stopping having a go at me when I bring up boring topics. And partly because I see a lot of people, especially when they use managed service providers, on really quite old versions. And I don't mind so much if it's an old major version. I can see there are reasons, like, I encourage them to upgrade, but I do understand that there are reasons. But when they're on a supported major version and lagging really far behind on minor versions, it's difficult to justify.
There are people on version 14.3, for example, which was a bad one. Do you remember?
Yeah. It was corruption. REINDEX CONCURRENTLY corruption.
Last time we mentioned a couple of things about the schedule,
but I didn't have the details and I've looked it up this time.
And I think that's a good reminder that there's normally a schedule for these minor releases. I keep calling them minor versions, but the docs don't call them that: they're called minor releases in the documentation, which I found interesting. Major versions and minor releases. And I think there's just some inconsistency, a small one maybe, but it felt deliberate. It felt very... anyway. As you said, this recent one last week was a planned release, and they in general are. It says, unless otherwise stated, on the second Thursday of February, May, August and November.
Yeah, I was wrong: every three months, not two. There is a wiki page with a table with the schedule of when major releases become end of life, and also this policy for planned releases. But sometimes unplanned releases happen as well.
I also saw some statistics somewhere. Ah, it was on why-upgrade, why-upgrade.depesz.com. We should mention it a few times in this episode because it's a super convenient tool to see the differences in a better form.
You can find everything in the release notes
or just selecting, I don't know, like selecting some things in system catalogs,
but it's much easier just to go to why-upgrade.depesz.com
and then just see the differences choosing between a couple of minor versions
or also major versions.
It's also supported.
So, yeah, long list and security-related stuff is highlighted.
So there I saw how many changes happened in each major version and how many minor releases happened.
And can you imagine, I think 7.4, more than 30 minor releases happened.
It was 7.4.30.
Oh, wow.
Yeah, it was popular
major version, I guess.
So what would you expect? About five years with four updates per year, maybe 19 or 20? 9.4, 9.6, I think, were like 25, maybe 24 releases. But now it's shorter, like 20 maybe. I don't remember the details.
Right now, yeah.
So there are many minor versions. And sometimes we saw it happen: a minor version planned release, and then in a week or two a new version released, because some problem was just identified, right? So a minor version can bring not only fixes but problems, because it's a regular thing in software development, right? Despite all the measures to control quality, and regression tests and performance testing, everything, it still happens. So downgrades, or fix it forward, like, let's wait until the next minor release... it still happens.
I was going to say, though,
I think it's been happening a lot less recently.
So I think the only one I remember,
and this is why I brought up the schedule, is because you mentioned 14.3.
And 14.4 was the last unscheduled one.
I actually don't know if they've got a name for it.
I guess it is just a minor release.
But it was not on the schedule we mentioned.
It came out in June.
So a month after, about a month after.
There are more. I don't remember, actually.
More recent ones?
Maybe. I don't remember.
But let's discuss the process.
And from there, probably we'll return to difficulties, right?
So as I said, the documentation says the process is super simple.
Replace binaries and restart.
But in the cloud, probably you should just provision a new replica and perform a switchover, right?
Well, what do you mean in cloud?
Do you mean using a managed provider?
I mean when it's easy to take another VM.
Like in general, right?
It can be your own cloud or something.
If it's easier for you just to bring up a new VM than to deal with dependencies and conflicts of binaries, I don't know, like some packages. You know, you can provision a new node, and when you install from apt or rpm (apt install, apt-get install, or yum install, however you pronounce it), then you always get the latest minor version. You cannot specify a minor version. We will discuss it later, I guess, right?
Right, right.
It's a big problem, actually, but we will touch on it later.
So in this case, on the new node, you will already have updated Postgres. It's a replica. The lag is close to zero, an asynchronous replica, for example. Then you just perform a controlled switchover. A controlled failover, also known as a switchover, right? In this case, all good, right?
Well, I do think that process would work for most releases, but I do think there are other parts that you shouldn't forget, which are, like, read the release notes. There might be additional steps you need to take in order to...
No, no, no, no. Let's... I'm not forgetting anything here, okay? I'm just talking about the technical process of the upgrade. Of course, not only do you need to read all the notes, you also need to properly test, and almost nobody does it properly because it's a lot of work.
I mean, I'm sure clouds do this.
That's why they lag many months usually.
They do a lot of testing, right?
That's why usually they lag behind official minor versions,
sometimes skipping some of them actually.
But I'm just talking about two big approaches to upgrade.
Replace binary and restart, as the documentation says.
By the way, does the documentation already mention how to restart faster?
It doesn't mention checkpoints.
Exactly.
It should mention it.
You did a great whole episode on how to do it faster.
So I'll link that up in the show notes for anybody
who wasn't a listener back then.
Yeah, super easy. If you want to restart faster, you need to remember about the shutdown checkpoint. The shutdown checkpoint will take long if you have max_wal_size and checkpoint_timeout tuned, and you should have them tuned. We had another episode about it. The problem is that during the shutdown checkpoint, Postgres doesn't accept new queries to execute at all. And that's a big problem. Actually, it could be improved in Postgres itself, because it could do two checkpoints. Why not? Like a pre-shutdown checkpoint and the actual shutdown checkpoint, which is super fast. So our current recipe for all Postgres versions to restart faster: you perform an explicit checkpoint, which is not blocking anyone, queries are executed, and then, immediately after it's done, you restart or you shut down. A restart consists of a shutdown and a start, right? In this case, the shutdown checkpoint has almost nothing to do, and it's fast. This saves a lot of time.
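A minimal sketch of that trick, assuming superuser (or, since Postgres 15, pg_checkpoint) privileges and that the restart is issued immediately afterwards:
    -- Flush dirty buffers ahead of time; regular queries keep running during this.
    CHECKPOINT;
    -- Then restart right away (outside SQL, e.g. via your service manager or pg_ctl);
    -- the shutdown checkpoint now has very little left to write, so it finishes fast.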
Yeah.
Last time you mentioned there was some discussion in your team
as to whether even explicitly doing two checkpoints might make sense
because if the first one, if you've done,
I don't know if that discussion went anywhere,
if you did some further testing there.
In total, it becomes already three checkpoints.
Three, yeah.
Two explicit ones and one shutdown one. Yeah, anyway, you can make restarts a lot faster in a heavily loaded system if you know this simple trick. And this simple trick, we code it everywhere when we automate some things: major upgrades, minor upgrades, various things.
But the good thing, back to my point that we have two approaches,
two recipes.
One recipe is this, replace binary and restart.
And another recipe is what maybe in other database worlds,
not Postgres, but maybe Oracle, for example,
or SQL Server, what they call rolling upgrade, right?
When you upgrade one replica, another replica, and so on,
and then you perform switchover maybe multiple times, I don't know. The good thing is that current Postgres versions and current Patroni,
I think since version 12, right, or whenever, a restart is not needed when you promote. Because before that, to reconfigure... remember, recovery.conf was in a separate file, not in postgresql.conf, and to reconfigure primary_conninfo or restore_command you needed to restart your replicas. So if the primary changed, it meant all replicas needed to be restarted.
But right now, no more.
Everything is good.
Promotion can be done.
And promotion and reconfiguration of your primary on all replicas can be done without restart.
And Patroni does it.
So it means that it's faster.
And since restart is not needed, even no tricks with checkpoints, right?
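For illustration, a hedged sketch of what reconfiguration without a restart looks like at the SQL level (host and user names are made up; Patroni does the equivalent for you):
    -- Point a replica at a new primary; since Postgres 13, primary_conninfo is reloadable,
    -- so a configuration reload is enough, no restart:
    ALTER SYSTEM SET primary_conninfo = 'host=new-primary port=5432 user=replicator';
    SELECT pg_reload_conf();
    -- And promotion itself has been a simple function call since Postgres 12:
    -- SELECT pg_promote();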
Yeah.
I think this is how some of the cloud providers are doing it behind the scenes.
Yeah, yeah.
The managed services, yeah.
But good point.
Yeah, let's discuss subtopics here.
So you mentioned you need to check... for sure, you need to check the release notes, because the release notes might say you must re-index some types of indexes, for example. I don't know, like...
Well, even let's look at the latest one, 16.3. I looked at the release notes, and shout out to Lukas Fittl, who did an episode of his 5 minutes of Postgres on this as well, so we'll link that up from last week. He reported a security issue in the appropriate way, very well done. And it got fixed in 16.3. And you can only fix it... like, so there are detailed instructions in the release notes on how to go about fixing it. Just applying the minor release in the usual way, in either of the ways that you described, I think would not actually fix it. Actually, it depends how you did the replication thing. I think if you spun up a new replica, that might actually be okay. But if it was a replica you already had, and you did the minor release upgrade there and then failed over to it, you wouldn't get the fix for that security issue.
Right.
So people in many cases don't do it, and it's bad.
Well, I think so, especially on managed service.
Imagine your managed service is doing it.
Maybe you've scheduled for some of these.
Somebody already did it.
And maybe they are running the SQL script. The fix in this case is... there are SQL scripts. After a situation with major upgrades with one of our customers, who was on some managed service which was running on the Zalando operator, where major upgrades are fully automated, and they used it... after that we saw corruption related to the glibc version change. And we know Zalando: the automation was implemented there, but it's not enabled by default. It means that you need to specify a special parameter to enable the automation that mitigates the glibc version change when you perform an upgrade of your Spilo. And Spilo, this is part of the Zalando operator; it's like an image ready to run Postgres and Patroni in clouds, in AWS first of all. So if you don't specify that parameter, you get corruption of indexes. And the fact that it was not enabled by this managed Postgres provider means that even they didn't read the release notes, because it was specified there.
I knew about that automation.
I was curious why it didn't work in this case.
And I immediately saw in release notes that there is some parameter that's not enabled by default.
So even managed Postgres providers
sometimes skip reading release notes.
It's a big problem, but it's a regular problem.
We buy things and we don't read manuals, right?
We just try to use them right away.
Same things here.
It's like psychology.
So I don't know.
Of course, let's be like those kinds of guys: always read the manual, right? Always read the release notes. And why-upgrade.depesz.com is a very convenient tool to read the differences.
Well, and we have this gift that the release notes are good. They include full details of, but like when I say full details,
it's kind of an abridged version of every issue that was fixed.
It's like one or two paragraphs.
It's very, very simple, very easy to understand,
even if you don't use the feature.
It's not that long.
It doesn't, I did, I read the 16.3 ones quite quickly.
Maybe it took me five, 10 minutes just to look through them before the episode,
just because I wanted to see what had changed.
In fact, there's a couple in there that I think you'll like, which is nice.
And yeah, it doesn't take ages.
They're easy to find.
And I think we've just been a little bit,
well, I get frustrated when I use iOS, I use an Apple phone, and every single app update I get,
if I look at the release notes, it just says bug fixes and performance improvements. That's all I
ever get. And it's so annoying. And we get so much more than that in Postgres. Anyway, it feels like
such a gift that they're giving us.
You know what I lack, what I'd like in release notes? Links to commits. I always have... yeah, many minutes spent trying to link the proper release note entries. It's mostly related to major releases, though, but minor ones as well sometimes. I try to find the exact commit and Commitfest entry, just to understand what has changed. And it would be great to have links right in the release notes. Some software has it in the release notes: links to commits and pull requests and so on.
So the bottom line about release notes
is you should read them
even if you're on managed service, managed Postgres.
You should read them because who knows what your provider has missed.
Maybe they were good a couple of years ago,
but then some good guys were fired.
It happens today. And this year
probably they are not so good already.
So it happens. And unfortunately here, I also wish managed service providers could provide more transparency: what they tested, how they tested, blah, blah, blah. Sometimes maybe I would read that instead of the official release notes, like results of testing, for example, published or something. It would be great.
But I know it sometimes takes a few months for them to incorporate a release.
Yeah, I would love that.
Imagine getting an email from your provider saying
your scheduled maintenance is... like, your database is due to be upgraded at your scheduled time. Here's what we're doing. We've already done the release. Here are the results of tests. Details.
Yeah, that would be amazing. And then all you have to do is read the email and, you know, you trust them to get on with it.
I would read it a few times, but then I'd know the detail. I would be impressed, for example, with the level of detail, and next time probably I would already rely on that. This process would be great. But not only do you need to read and perform actions... some versions require you to do some actions, like rebuilding indexes to mitigate possible corruption you had.
Or something like security-wise.
Not only this.
It happens in minor versions as well.
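Just as a hedged illustration of the kind of follow-up action a minor release can ask for (the index name here is hypothetical; which objects are affected depends on the specific release notes):
    -- Rebuild an index the release notes flag as potentially corrupted:
    REINDEX INDEX CONCURRENTLY some_affected_index;
    -- Or check a B-tree index against its table first, using the amcheck extension:
    CREATE EXTENSION IF NOT EXISTS amcheck;
    SELECT bt_index_check('some_affected_index', true);  -- heapallindexed check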
But also you must test it.
Two more things.
You must test it and you should not forget about extensions.
Great.
So how do you recommend testing?
Well, regular testing.
Testing means, like, you should do the same types of actions. If you follow this official recipe with a restart, replacing binaries and restarting, you should do it on some lower environment a few weeks before the production upgrade, maybe a week, at least one week.
In a lot of the non-production systems I've seen,
they're not under heavy load.
And they wouldn't be like long-running queries
and things that could actually trip you up.
You know, I don't think we do need like load testing
for minor releases.
It's super expensive to conduct, usually.
You need separate environments, separate machines,
and a lot of actions.
I would do it only, like, ideally, I would skip it, maybe,
unless you're a provider.
In this case, you have a lot of databases.
It's better to do it.
But if I'm an organization with just a single cluster, or a few clusters, or dozens of clusters, then I would think about testing it only if I see that potential changes can affect performance. Maybe, if you have a good workflow for testing, very good automation, it's good to have. But what to test?
Simple synthetic benchmarks? That's already tested on these load farms, performance farms, right? It's already happening on various types of operating systems and so on. So maybe it's not interesting. You need to do it... load testing is a complex topic. But at least try to check that packaging works as expected.
For example, preparing for this episode, I quickly asked our bot to extract experience from issues with minor upgrades discussed in the mailing lists. And the bot found a case for Postgres 9.6.1 upgrading to 9.6.2 when Postgres could not start. It was related to some problems with a corrupted pg_hba.conf: packaging put some placeholders there.
Wait, which versions?
9.6.1 to 9.6.2.
Okay, wow.
That's six years ago.
So not six even.
Probably seven or eight, but it's still not that long ago.
Yeah, it doesn't matter, but packaging is another layer that can introduce some problems: you upgrade and something goes wrong. Postgres itself was well tested, but the apt package, for example, was not. And if you don't test it in your environment and then encounter problems when upgrading, it's bad, right?
So you should do it.
And then you should run Postgres for some time just to see that it's working
and all your code is working and so on.
Well, and I think I probably should have asked you about extensions first
because it feels like testing is also about testing your extensions.
And the way extensions work, the core developers aren't testing them, right? And your combination of extensions... even if you're mostly using common extensions, even if you don't have any private ones, or you're using relatively few, your combination might not have been tested by anybody yet, the fact that you've got these four or five extensions.
So, I do think everyone is talking about these problems with combinations, but I personally didn't see problems with combinations.
I know this is a quite popular topic,
but maybe I'm using too few extensions usually
or seeing them being used like less than 10 or less than 20,
for example.
I don't know.
What I know is, first of all, the minor version of an extension is not changing unless you do it explicitly with ALTER EXTENSION, right? ALTER EXTENSION ... UPDATE TO some version, or something like that, right? Or just update to the latest available version. And what I usually do is check, and our checkup tool checks, pg_available_extensions. It has the currently installed version and the available version. And if we see a mismatch, it means that the update didn't happen.
And it happens all the time.
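A hedged sketch of that kind of check (the columns come from pg_available_extensions; pg_stat_statements is only an example target, and you should read the extension's release notes first):
    -- List extensions whose installed version lags behind the packaged default version:
    SELECT name, installed_version, default_version
    FROM pg_available_extensions
    WHERE installed_version IS NOT NULL
      AND installed_version <> default_version;
    -- Then, per extension:
    -- ALTER EXTENSION pg_stat_statements UPDATE;  -- updates to the default version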
People don't upgrade them.
They skip it all the time.
And packages don't upgrade them.
But maybe it's good.
I don't know, actually.
The problem is...
Do you then schedule them
at the same time as minor updates
for Postgres? Good question.
Maybe I would...
Yeah. We know this dilemma, right?
Like if we...
Change one thing at a time.
Change one thing at a time. Overhead is huge.
It's like doing things in separate transactions,
transaction overhead.
So you need to plan it, coordinate it.
If you have some bureaucracy,
and you should have some bureaucracy,
like approvals and then actual planning and approvals,
description of what to do, how to roll back, and so on,
and how to downgrade. We will touch on this very soon, I promise. But if you combine everything in one shot and something goes wrong, you don't know what it is, right? I don't know, maybe I would plan... usually we plan it separately. But maybe it's not perfect, honestly.
Like extensions usually lack love, I would say.
Like DBAs don't go there usually.
Not usually, but DBAs are like: oh, we upgraded this, upgraded that, good, done. Extensions? Oh, it's like backend engineers need them, right? So unless it's a DBA kind of extension, like pageinspect or pg_buffercache or something... do we even need to upgrade them? Even pg_stat_statements is lagging sometimes. And I'll tell you a story. I remember pg_stat_kcache was installed on a very heavily loaded system, and it got upgraded silently. It was rpm, it was CentOS, I think, and the rpm was upgraded automatically with various other stuff when some SREs, not DBAs, upgraded operating system packages. We had exclusions for all Postgres packages, but not pg_stat_kcache. It was not in the block list. And it got upgraded. And then every server started crashing with segfaults and so on. It was a bug introduced in pg_stat_kcache, not well tested, not noticed in lower environments, because nobody properly tested upgrades of operating system components there. It's like... testing should be done properly, but it requires a lot of effort, coordination and so on, right? So minor upgrades of extensions might introduce problems. Even if you don't use them explicitly, if they are loaded via shared_preload_libraries, it can still be dangerous. It can crash your server if there is some bug. So I don't know. Maybe it should be in one shot: the minor version plus all related extensions. But contrib modules have the same release cycle, while third-party extensions like pg_stat_kcache have their own cycle, and you need to follow all of them and check the release notes of all of them. Honestly, I like the extensibility, but I also like it when everything is monolithic and comes well tested.
We can't have it both ways, can we?
Especially if we see the progress pgvector has been making so rapidly, because they can just release multiple new features per year. It's hard to argue against that.
But cloud providers lag a lot with upgrading it. It's a super hot topic, but look at AWS: they brought pgvector to 0.7.0 only a few days ago, though it was already released... yeah, I don't remember. I saw some lags, huge lags of months again. Everyone needs it right now. Things are moving so fast, but I guess it's a lot of testing and maybe adjustments and so on. And if you check Cloud SQL, they lag even more, I think.
Well, yeah, I think you might be thinking of the wrong provider
because I've always thought AWS has been really hot on pgvector updates especially. And also, I was going to give them a shout-out, because as a big company, I'm a bit surprised they're able to ship minor versions so quickly. I think they shipped these latest minor version releases within a day of the announcement.
0.7.0, with a small lag.
Not pgvector, sorry, I meant the PostgreSQL ones, like 16.3. And, well, there were only two providers I saw that shipped it that quickly, and I haven't seen any since. And that was AWS and Crunchy Data.
That's cool.
Yeah. Got the minor releases out really quickly, which sadly, as you say, isn't that common.
And I'm growing more and more aware of how much people lag.
Yeah. Cloud SQL are lagging, unfortunately.
Last time I checked, and I checked a few times,
they don't lag a lot.
But what you see in the documentation... it's just a problem of the documentation.
Got it.
So if you provision a machine, you see it's quite up to date.
Yeah, I didn't provision new ones on Cloud SQL, I did just check their docs.
Yeah, there are some SLAs in terms of version lag, which they usually define and try to follow.
I don't think Cloud SQL even supports version 16 of PostgreSQL though, do they?
Yeah, good point. Probably
not yet. But major
version is a different story.
It can lag like half a year easily.
But it's already more than half a year, right?
Quick question.
How do you feel about...
I had never thought about it this way,
but I think if I was picking a new provider today,
this is one of the things I would look at: how much they lag on minor version updates.
Yeah, because it feels dangerous. If I need some bug to be fixed soon and they tell me it will take not even months but weeks, it's frustrating. So yeah.
Sorry, did you want to talk about downgrades?
Downgrades is a topic which you must have covered if you're a serious organization, but in Postgres it's not enterprise-ready at all. I mean, if you follow this approach and the official packages, the official approach, again, is replace binaries and restart. But both the apt and rpm packages support only the latest minor version. So how am I supposed to downgrade? The answer is: it's not supported, right?
Or do you have to, like, store them on your side?
You can probably download packages and deal with all the dependencies.
I always end up screwing myself up completely and starting from scratch.
And this is exactly where I like having the ability to provision a new VM. If I can provision a new VM, we have a different approach: I upgrade a replica, I test it, probably switch over, probably keep one replica on the previous version just in case you need to go back. And probably I can do cloning and so on. In managed services, I guess it's easy
to downgrade, right? You can just choose the version which you want. Or no? I'm not a big user of RDS or Cloud SQL for this. I usually have, like, minor upgrades... I was never involved, honestly. Because I think it's supported, right? It's a little bit illogical, but not for, like...
No, no, no. It should be possible to downgrade, no?
If we don't support downgrades, like, maybe it's not a problem. Maybe nobody needs it.
Postgres never requires you to downgrade.
It's so well tested, right?
I've never seen someone downgrade a minor version, though.
It is a good point.
Yeah, I don't know.
Let's provision some nodes and check. I will have an answer and a follow-up comment, probably.
Nice.
Yeah, but in my vision, downgrades are needed.
Otherwise, you don't know what to put in your plan. In the plan, management usually requires: what if things go south? And you should have that point: this is our rollback plan, or downgrade plan, or something. A reverse plan, right? And if downgrades are not supported... but again, if you have multiple machines, you can play with it and include in the plan the idea that you can move back to the previous version by just performing a switchover backwards, right?
Yeah. There is a nice kind of quote from the docs that I pulled out before the episode. It says, I think in the versioning policy,
the community considers performing minor upgrades to be less risky than continuing to run an old minor version.
We recommend that users always run the current minor release associated with their major version.
So, you know, it might even answer the question of whether they consider downgrading.
It's a very interesting point.
Yeah, well, this is a good point.
We didn't discuss it,
but in general, you should upgrade quite fast.
Maintain, like, the lag of versions should be very small.
Otherwise, you skip a lot of optimizations and bug fixes.
Sometimes good optimizations happen in minor versions as well.
Really?
Yes.
I don't remember.
Yeah, well, if
something was completely wrong,
it's considered a kind of
bug, right? In this case,
it can be faster.
Sometimes things work not as expected.
Like, it was planned
to be fast, but it was not fast because of some problem in code.
And if this problem is fixed, it becomes fast as expected.
This happens.
Oh, interesting. Fair enough.
Cool.
This is it, right?
What else about minor upgrades?
I've only got one more thing, which is kind of a fun one.
I found reading the release notes quite fun. There was one in particular that made me smile and made me think of you, actually. I thought you'd quite like it, and that was a fix by David Rowley. I'm not sure how to pronounce it... Rowley?
Rowley? You ask me? Yeah, I don't know.
One of the fixes he did in 16.3... let me find it quickly... I think I remember it, actually. It was a partitioning bug around NULLs. So remember how much we talked about feeling like NULLs still catch us out? Well, it catches Postgres developers out too. If you partition on a boolean column, so like true, false or NULL, you have three partitions: one that data goes into when it's true, one when it's false, and one when it's NULL. And then you query WHERE x IS NOT FALSE. The partition pruning... the planner would prune out the NULL partition, even though it should be in there. It's not false, but it was pruned out, so you'd get incorrect results.
Because it's unknown.
Yeah, it's three-valued logic, my favorite topic in SQL.
Yeah.
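A hedged sketch of the scenario (table and column names are made up; on patched versions the NULL row comes back as expected):
    CREATE TABLE events (flag boolean, note text) PARTITION BY LIST (flag);
    CREATE TABLE events_true  PARTITION OF events FOR VALUES IN (true);
    CREATE TABLE events_false PARTITION OF events FOR VALUES IN (false);
    CREATE TABLE events_null  PARTITION OF events FOR VALUES IN (NULL);
    INSERT INTO events VALUES (true, 'a'), (false, 'b'), (NULL, 'c');
    -- NULL IS NOT FALSE evaluates to true, so the NULL row should be returned;
    -- on affected versions the planner wrongly pruned events_null here.
    SELECT * FROM events WHERE flag IS NOT FALSE;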
So I thought it would make you feel less bad
because it catches even Postgres developers.
Three-valued logic should be a central topic in all courses, educational courses, when people study SQL. My daughter has studied SQL multiple times: in high school, and currently at university, and right now she has yet another SQL course. And they don't cover three-valued logic at UCSD.
This is ridiculous.
Absolutely ridiculous. Because this is a source of so many troubles.
Yet another one.
My favorite, right?
We discussed it also.
We had an episode about nulls.
Yeah, I'll link that up as well.
If you touch null, you should expect unknown.
Unknown is not true at all, right?
And not false as well.
Right.
Last thing, we have episode 100 coming up.
I put a post on Reddit asking for ideas on what we should do.
Yeah.
I will link it up in case anybody wants to send us any more ideas.
Anything you wanted to ask people, Nikolai?
Yeah, if someone is running Postgres 100 plus terabytes
under good load, at least dozens of transactions per second,
reach out to us.
Let's discuss the complexities.
Maybe we will have you as a guest.
Yeah, that'd be great.
By the way, I just checked Cloud SQL and RDS. I didn't find, maybe it's there,
but I didn't find how to choose minor
version in Cloud SQL, but
in RDS, it's easy, and they
already have all these
minor versions released last Thursday.
Yeah, on the day.
That's the day.
As you said, yeah. 16.3, 15.7 and... but I can choose 16.1, for example. Interesting that I cannot choose 14.3.
Good.
Oh, there's like a weird quirk. I think AWS fixed the issue and called it 14.3, but that's like an old...
I remember something weird around that,
but yeah, I'm not sure if that's relevant.
They also have revisions, I guess,
because it's hyphen R1, R2,
means like I think revision 1, revision 2
for the same minor version,
some internal versioning additionally.
Interesting.
Because it's not Postgres, right?
It's modified Postgres.
Yeah, yeah, true.
Things that they've added or changed.
Yeah.
Right, right.
Well, good.
So in general, it means that downgrades are possible. If you run managed Postgres like this, like RDS, you can have... you've already upgraded the whole cluster, but then, if you realize anything has gone wrong, you can provision a replica on a lower version, the previous version, for example, and switch over to it.
But if you cannot
do it, you need to keep
old nodes,
unupgraded nodes, and then
be ready to go back to them. Also
possible, but maybe requires more
resources to be spent. You need to run
them for some time.
So, easy topic, right? Simple.
Well, I'm glad we covered it in a bit more detail.
Maybe this packaging minor version problem,
I know people try to solve it.
Someone from Percona, I remember, tried to solve it.
Oh.
Yeah, at least for containers.
Images... it should be possible to, like, you need to specify... Is it possible for the so-called official images to specify a minor version?
I'm going to check it.
I'm curious.
Can we specify 16.3 right now?
docker run postgres:16.3... docker run postgres:16.2. Let's try .2 first.
Let's see.
Unable to find image.
Yeah, it's pulling, so it's possible.
This is good.
We have different minor versions there,
so we can test at least something on containers.
What about 16.3?
And downgrade.
Yeah, yeah, yeah.
16.3 is also there. 16.4 is not available because it doesn't exist yet. So, good.
Containers are good. I already forgot... I used it multiple times, I just forgot.
Oh, cool. Good.
Okay, thanks so much, Nikolay.
Thank you, Michael. See you next time.
Catch you next week. Bye.