Postgres FM - Comments and metadata

Episode Date: February 13, 2026

Nik and Michael discuss query-level comments, object-level comments, and another way of adding object-level metadata. Here are some links to things they mentioned:

Object comments: https://www.postgresql.org/docs/current/sql-comment.html
Query comment syntax (from an old version of the docs): https://www.postgresql.org/docs/7.0/syntax519.htm
SQL Comments, Please! (post by Markus Winand): https://modern-sql.com/caniuse/comments
"While C-style block comments are passed to the server for processing and removal, SQL-standard comments are removed by psql." https://www.postgresql.org/docs/current/app-psql.html
marginalia: https://github.com/basecamp/marginalia
track_activity_query_size: https://www.postgresql.org/docs/current/runtime-config-statistics.html#GUC-TRACK-ACTIVITY-QUERY-SIZE
Custom Properties for Database Objects Using SECURITY LABELS (post by Andrei Lepikhov): https://www.pgedge.com/blog/custom-properties-for-postgresql-database-objects-without-core-patches

~~~

What did you like or not like? What should we discuss next time? Let us know via a YouTube comment, on social media, or by commenting on our Google doc!

~~~

Postgres FM is produced by:
Michael Christofides, founder of pgMustard
Nikolay Samokhvalov, founder of Postgres.ai

With credit to:
Jessie Draws for the elephant artwork

Transcript
Starting point is 00:00:00 Hello and welcome to Postgres FM, a show about all things PostgreSQL. I am Michael, founder of pgMustard, and I'm joined as usual by Nik, founder of Postgres AI. Hey, Nik. Hi, Michael, how are you? I am doing okay, thank you. How are you? Recovering from some bad flu, but all good. Yeah, good to have you back. And yeah, so what are we talking about today?
Starting point is 00:00:25 Let's talk about metadata in its various meanings, including comments, maybe more, right? Comments on database objects, comments inside queries, inside PL/pgSQL stored procedures and functions. Just broadly: what kind of metadata makes sense to store, and how, pros and cons, I don't know. Yeah, I think so, and also some side effects of having it in there, or use cases.
Starting point is 00:00:55 I was actually quite surprised we hadn't talked about it already. I saw a recent blog post by Markus Winand on Modern SQL. It was called SQL Comments, but it was mostly about query-level comments, I think. It doesn't have a date, and by default I assume all the posts there are quite old. Somehow I think so. But yeah, maybe it's new stuff.
Starting point is 00:01:21 But it's a very small post, right? Yeah, it's small, but it's also one of those where, on Modern SQL, he has these nice visualizations of which databases have supported which syntax from which dates. And I always enjoy those. I also enjoy them, but practically, I rarely leave the ecosystem of Postgres. So it's just: okay, good to know, Postgres is good here as well, and here, and here. And that's it. Occasionally, you come across cases where Postgres doesn't support some syntax that another database does. It's not common, but I have been
Starting point is 00:02:00 caught out a few times thinking, oh, wow, that's, or not caught out, but surprised thinking, oh, it's interesting there are somewhere Oracle or SQL server, have some new syntax that's even in the standard that we don't yet have. So it isn't often, and it is really nice to see especially the dark green ticks. It's like fully compliant tick, no no little subtext on how it deviates from the standard or anything else. Yeah, but that was about query level comments so single line and multi-line comments yeah and in the main like SQL runtime usually there is a standard minus or comment yeah and it's interesting to see that my SQL and Maria DB they have issues there if you there's a requirement to have white space after
Starting point is 00:02:47 two hyphens yeah okay but yeah postgis supports that because it's standard and also it supports C-style comments. Exactly. Caster is slash. Yeah. Which crucially can be, yeah, they're good for multi-line comments, aren't they? So you can have the first slash star. Yeah.
Starting point is 00:03:09 And then also start writing. I try to use them all the time because they are predictable. If you use SQL standard comments, you might have issues when, for example, in some cases your query line endings are stripped from your query. In this case, it's quite messed up situation because everything becomes a comment. Yeah, that's a really good point. And I wasn't going to bring this up until later because it felt like a really minor detail. But I was relatively surprised to read that PSQL strips out the single line,
Starting point is 00:03:44 the standard comments before even sending them to the service, whereas it doesn't for multi-lines. Yeah, exactly, client-side. So when they're used, for example, if you want that information, server side for some reason, it needs to be one of the C-style comments. Yeah, and we do have situations where we appreciate comments. For example, coming to PG-Stat statements. Let's start with P-G-Stat activity, first of all.
Starting point is 00:04:11 Comments for queries can be super useful to pass some, I don't know, trace ID, origin, even a URL sometimes. You can indicate which part of your application generated this query, or participated in generating it, right? There are some libraries for different languages. I remember one for Ruby on Rails. It's called marginalia.
Starting point is 00:04:36 It'll be margin because it's from the, I think the, yeah, from the margins. Like in a book, if you write some little notes about it. Yeah, so it's very useful. It can bring automatically generated comments. to your queries which are coming from RubenR's ORM and it's helpful to trace to analyze and quickly find where this query is coming from. And also how like we obviously can see them in PugetCAT activity.
Starting point is 00:05:07 The downside, obviously, is that it increases the size of the query, and by default in pg_stat_activity the query column has only 1,024 characters. There is the track_activity_query_size setting; we usually recommend bumping it, like 10x, if we have memory for it. Let's do it. Because queries tend to get bigger and bigger
Starting point is 00:05:30 over years, right? So 1,000 it's not enough. And of course, this comment is put in front of the query and you might not be able to see, unfortunately how SQL is written. It starts
Starting point is 00:05:46 with select. And some Sometimes ORMs, they put a lot of columns there. And with comment plus column list, you might see that it's truncated and you don't see the from close at all. To see the from close, it's super essential. There are opinions that SQL like sell it's the wrong way around. Yeah.
Starting point is 00:06:11 Because the FROM clause is where things start to be executed. But it is what it is. Yeah, so if you have a huge, helpful comment, it might bite you here, because your query might get trimmed sooner and you don't see it. But it is what it is, right? So my recommendation is: comments are super helpful here, from this library, or you can write your own, or something, just to trace the origin of a query and so on. And you just need to bump your track_activity_query_size to have a bigger one.
Starting point is 00:06:45 Unfortunately, it requires restart. That's the downside of that change. I actually don't know for sure, but I was looking at the marginalia documentation briefly just before this and noticed that the comments were going at the end of the query. So it might be that they've deliberately done that. In this case, you don't see it in traditional activity, maybe, right? Yeah, it's a good point.
Starting point is 00:07:06 Maybe it's even worse, yeah. Yeah, and what would be great, actually... I don't know, this is strange, but I recently implemented something in our monitoring. I had heard it's difficult, but then I just took Claude Code and implemented it, and it was quite successful. So I implemented an approach I saw in other systems. We have a mode now in our dashboards in Grafana, basically a switch: you can see the whole query, or you can see the query with the less important parts stripped. And I consider the column list a less important part, and I just replace it, right?
Starting point is 00:07:41 And I consider the column list as less important part and I just replace it with, right? not dot dot but a single symbol for there is unicode triple dot. And in this case you have more useful information and comments I think I also strip in this case right but yeah. Makes sense. Depending on situation comments can be helpful for various like observability activities to connect some dots and interesting how comments in queries for example coming in front of oh by the way big downside of having comment in front of query is also that In our old checkup, we had analysis so-called mid-level. So we had high-level.
Starting point is 00:08:23 it's just the whole workload, all metrics for the whole workload, according to pg_stat_statements, which of course is not the whole thing; usually it's only 5,000 normalized queries by default. And the lowest level is an individual normalized query, right? Mid-level is what we call first-word analysis. We just ask: which word is the first? Okay, SELECT. Is it UPDATE?
Starting point is 00:08:48 There is a trick with, because width can combine multiple things in one query. But usually it's quite helpful to understand how many of queries in terms of calls or overall timing, some metric, how many of them are selects versus updates and deletes. You can get stats for rights from TAPL statistics, but to analyze at query level, it's not like straightforward. So this is what we did. And then I remember having comments destroyed this analysis. Of course, it's a method. It's easy to fix, right?
Starting point is 00:09:25 You can just ignore comments. Yeah, but this is interesting: by bringing in some observability helpers, you might sometimes break other observability tools. Yeah. The other thing I wanted to make sure we mentioned in this area was how pg_stat_statements works, because of how it handles comments during normalization.
Starting point is 00:09:43 So, yeah, comments. I think it's quite a good trade-off, actually. I think I quite like the decision they've made. So queries get normalized: if you have the same query but with two different comments, they will count as the same query. You'll get the same query ID. They'll be grouped together.
Starting point is 00:10:00 But only the first one gets stored under that query ID. It's not wrong, just unchanged: the stored text is not modified, with its comment kept as is. Exactly. For example, imagine we have a simple SELECT with blah, blah, blah, WHERE, I don't know, like
Starting point is 00:10:15 Email equals some value or lower of email equals some value. And you decide to put this email as a comment. This is how your PII leaks to PG start statements, but only the first occurrence. That's weird. Yeah, I don't know if I've seen PII in comments. It's an imaginary situation. It's possible, right? Sure.
Starting point is 00:10:36 Yeah. It's definitely possible. For example, we can say, okay, this is user with email, this is acting here. And we have, it's leaked two PGR start statements. Everything else is stripped. So it's normalized. We don't see parameters. But comments, we see only first occurrence.
Starting point is 00:10:55 This is weird. It's weird, but I quite like it. Imagine... I was thinking, what's the alternative? Either they don't show comments at all, or they have to store loads of copies. Yeah, there are pros and cons here. And, yeah, what I would like to see, and it's an unresolved problem still: I would like to have the ability to pass comments as maybe key-value pairs, comma-separated or whitespace-separated. For example, you say: okay, application ID this, or application component ID this,
Starting point is 00:11:30 like correlation ID, many things, right, URL. And then to be able to have aggregated metrics based on those dimensions. So for example, I know my application consists of various components. I pass this component ID to comments. And then I want to say, I want to see how many calls overall for this query, particularly, how many calls are coming from that component ID versus different component ID. And this type of analysis would be super powerful, basically custom dimensions for PGIS statements. Yeah.
Starting point is 00:12:11 I know this was discussed for pg_stat_kcache. And the consensus was that it should be in pg_stat_statements. It was many years ago and I don't know how it ended. But definitely there is a desire to have some kind of analysis like this. Though I imagine it can be quite expensive if implemented in a bad way. But yeah.
Starting point is 00:12:35 They come with questions like, okay, how to identify. We have like many parts of, we have more. one of course in terms of code. Yeah. We have many teams working on different parts. How to identify which part is most expensive in terms of CPU usage, for example. Yeah. Or time spent by database to process this.
Starting point is 00:12:59 And here is where this kind of analysis would be super helpful. Do you see people doing it? Like, you could, for example, have them connect with different roles, and that would then be separated. Yes. Yes. This is one way. This is indirect. Like, you could downgrade: I just pictured a very good, flexible approach where you
Starting point is 00:13:21 could do many dimensions but you can downgrade and let your different parts of applications speak using different users and use the fact that PGSatements has user ID yeah downside of this approach would be you need to think how to manage pools in PG Bouncer for example because different users means you need to set different quotas, pool sizes, right? This can be quite unflixable. If you want to have a single quota for all users, how? Maybe it's a question to Pidjabouncer. Maybe it's actually possible, maybe no. It's another question, right? So once I like to, this is, I think, good practice to separate your workload to different segments and each segment works under different debuts. But there is also management overhead
Starting point is 00:14:14 for maintaining various limits and so on. Yeah. I think it's all about, I forgot how. So it's an interesting, maybe some of our listeners has a clear picture what best practice would be here and please leave a comment somewhere. Yeah, or even just what people are doing, what you're doing in practice, it'd be good to hear what solutions people have come up with. What's possible right now, I think is if you, for example, have high, quite high track activity query size, like 10K, for example, I see people even go further, even more like 30K. And you use comments like from marginale or something. And you have already, you started to appreciate performance insights or weight event analysis. We talked about it a lot. In this case, you can start, so you can recognize
Starting point is 00:15:05 different weight events and how many active sessions and segment them by weight event type and weight event. In this case you can bring this knowledge about dimensions to this analysis and start saying, okay, we have usually like this amount of sessions spending on IO and among them like 90% is coming from that part of our application according to comments. This is quite powerful. this, you don't need to do anything except like just to, you don't need to change PIGA statements or how POSGIS works. It's possible right now already.
Starting point is 00:15:43 Yeah, as long as two parts of your application aren't doing the same query. Yeah, that would be... They can do the same query, but with different comments. Yeah, but that's pg_stat_statements, not pg_stat_activity. No, no, I'm talking about wait event analysis. Sorry. Yeah, and I'm actually talking about what we call the lazy approach: sampling of pg_stat_activity. It cannot be super frequent, because there is overhead.
Starting point is 00:16:07 So, for example, every second or every five seconds you sample pg_stat_activity, and you have the raw query from there, including comments as is. Yeah. If you start using pg_wait_sampling, which is great in terms of sampling rate (it samples every 10 milliseconds by default), you lose this. You lose the raw query and comments, and you have the same problem as pg_stat_statements. These dimensions become unavailable
Starting point is 00:16:36 So anyway, this is a super interesting observability topic, I think. And what do you think about comments which, I don't know, which are put inside PLPG scale functions, for example? What for is it to describe behavior, like, almost like code comments? Yeah, it can be an explanation of this, what this function does and every piece. Like, if you, for example, look at PostGos code, it's very well commented. Yes. comments are very thorough, right?
Starting point is 00:17:06 They can be huge. Sometimes you open some C file. Dot C file and a huge comment in the beginning, explaining what's happening here. Is that a good idea to put this to function bodies? What's the downs? I think I generally err on the side of commenting things. I like comments.
Starting point is 00:17:27 Although, you brought up AI already: I do find some of the LLM commenting excessive at times. Or maybe not excessive in the sense of large comments; it's more that there are just too many comments, comments at too many stages. I do like the Postgres style, where they tend to be huge comment blocks describing a whole area, then loads of code, rather than comments on each line describing what that line is doing. So I like that style, but I tend to find that, at least in the databases I've seen over the years, they aren't commented as well as people's applications. I see application
Starting point is 00:18:08 code commented better on average than database code. Maybe I'm looking at the wrong projects, but personally, before knowing the downsides, would think it's a good idea. Are there downsides that I don't know about though? First of all, I agree with you. If comment just explains what next line does, it's like instead of four ways, it's quite silly. Like, stupid how to say better? Yeah, it's a low value comment, right? But if it explains some knowledge and decision making, how it was made, some tradeoffs which were made, this is super valuable.
Starting point is 00:18:47 And right now, I think comments make even more sense, because sometimes when we engineer something and involve AI, we have some roadmap and some intention, and maybe the first version is not the final implementation of everything. So having a TODO comment, right? TODO, FIXME, right? FIXME is a meme comment, right?
Starting point is 00:19:15 But these days I think it's maybe, I just feel the shift here because it makes sense to comment some future intentions more often because next time we will revisit this with AI as well. We probably improve in the same. direction we wanted originally. So preserving context now is using comments. Comments makes a lot of sense. It's not always worth putting it as a comment right inside function body because we might end up having huge, like the plan inside function body. And this doesn't feel right and it will consume a lot of bytes stored, right? So maybe some big comments should go as a separate document adjacent to the
Starting point is 00:20:00 like in the same place where we store function in Git for example maybe it's better documented separately right but when you do something and you say okay we do this but we plan to extend it to this and this I like this to do comments because they are in the same place and next time a I or you reading this like you you understand okay this is what we planned here and so to do fix me style makes more sense now. Because we have paid off, we explain it, and we plan to fix it later, why not?
Starting point is 00:20:36 But that was always true. If you work in a team, if you work in a style that's iterative, any kind of agile process, any kind of extreme programming, that kind of let's-do-the-minimum-version-and-then-iterate, that's always been true, hasn't it? But it has also always been true that there's a lot of dead code and a lot of such comments which have very little chance of actually being improved. So you say TODO, FIXME, you leave this comment, but you never return to it, because of capacity.
Starting point is 00:21:11 Now it's much easier to return and actually fix it, because we have AI. Ah, okay, I see what you mean now. Capacity changed, right? And you think: okay, actually, let's explain all the things in the comment right here. And we know that we will revisit it, if this code survives and if we don't drop it fully because of some different understanding of the product or something. And we actually will improve it.
Starting point is 00:21:37 I start believing into this, right? Unlike pre-AI era when I knew nobody will have capacity to work on this because everyone is busy too much everything and so on. And this is great actually. Right. So comments are good. If you don't leave comment, some weird decision made, code is hard to understand like why.
Starting point is 00:21:59 is so. Then we are in trouble. And inline comment is great because the AI won't miss it. It's reading this part. The comment is here all clear. But again, if it's some long document, it's better to offload it to some different part. And we slowly move to the topic we definitely wanted to discuss is database object level comments. Oh, before we do, can I do one more for query level comments? Yeah. I just thought it was, I'd forgotten about this. until recently that this is how PG Hint Plan puts hints in. And I think it's true for other databases too, not just Postgres, how hints.
Starting point is 00:22:40 until recently, that this is how pg_hint_plan puts hints in. And I think it's true for other databases too, not just Postgres, how hints work. Yeah, it's fascinating to me that that's the method that was chosen. It makes sense, right? If we don't have hints at the database level, how else could we get them at the query level, other than putting them in a structured format inside a comment? And until reading the psql thing, I didn't know for sure why it was in a multi-line comment, other than for readability.
Starting point is 00:23:02 But yeah, I found it really interesting that it uses that syntax, probably because it gets stripped less often. So, yeah, that seems to be another big use case for query-level comments. psql doesn't strip C-style comments? It doesn't strip the C-style ones. That's interesting. Yeah. So you can use pg_hint_plan with psql without issues. Yeah.
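A pg_hint_plan hint rides in exactly that C-style syntax, with a leading plus sign, placed at the head of the query (the table and index names here are hypothetical):

```sql
/*+ IndexScan(users users_email_idx) */
SELECT * FROM users WHERE email = 'nik@example.com';
```

Because psql passes block comments through to the server, the extension sees the hint; a double-hyphen comment would never arrive.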
Starting point is 00:23:24 So back to functions. my approach is to have good comments, explaining intentions, context, plans maybe, but if it's a huge document, it should be offloaded. But we also have, for each function, we can create a separate metadata piece. We can say comment on function name, comment on function name. Have you seen how many variations comment on a statement has in Posg? Yeah, so I didn't realize. And I think I might start using this more.
Starting point is 00:23:58 Well... I didn't realize you could add comments to indexes. Yeah. That's really cool, in the sense that sometimes someone shows you they've got these 16 indexes, but they don't know why certain ones were added. And wouldn't it be cool if you could just check the comments as to why, you know? On constraints, on sequences.
Starting point is 00:24:21 Isn't it fascinating? I knew tables, I knew columns. I knew general objects you could put comments on, but I didn't know that there were so many options. That's already hacking. It's already too much, too deep. Yeah, so yeah, it's cool. 44 lines there.
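A few of the variants, as a sketch (object names invented):

```sql
COMMENT ON TABLE orders IS 'One row per checkout attempt, including failed ones';
COMMENT ON COLUMN orders.net_amount IS 'In cents; excludes tax and shipping';
COMMENT ON INDEX orders_created_at_idx IS 'Added 2024-05 for the nightly export query';
COMMENT ON FUNCTION refresh_totals() IS 'TODO: replace full refresh with incremental version';

-- read them back with \d+ in psql, or via obj_description() / col_description()
```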
Starting point is 00:24:41 A funny thing, in 2005 or six, when we created the first social network, it was PostGar's plus PHP. And I, some time ago, not far ago, not long ago, I stumbled upon an email first review of my code from someone with experience, actually. And big criticism was like a lack of comments at database object level. Can you imagine 20 plus years ago? And I remember actually how I was like protective and defensive.
Starting point is 00:25:15 Oh wow. Yeah. Yeah. But it's a good thing. And my point right now is that it has always been a good thing to have some approach in your project and use comments. Though sometimes they become not so valuable, right? You can have a comment on a table, but also for each column. Yes. And I remember I tried to enforce this in multiple teams I had in different projects. I tried to
Starting point is 00:25:48 enforce this rule let's do it let's do it after that review because I eventually I actually agree that it's a good thing to have. This is just a lot of metadata. But I remember also seeing, okay, column ID, this is our ID. What comment can you put there on this column? I don't know. Sometimes there is no extra meaning, right? Super simple column. But right now I think it's valuable to think about, and this is engineering level. This is what humans should think. Maybe brainstorming with AI, but what should we really document? at database object comments, in database object comments. And right now it's so easy, right? When we do something, there is no more excuse not to write tests for CI because this is what AI does quite well. You just need to control it. And not just coverage, you should go deeper and say, okay, we need, we coverage is like
Starting point is 00:26:47 it's super simple thing. Okay, we have 80 plus percent. But what does it really mean? We should cover edge cases. corner cases, really test things, right? And the same, it's the same documentation. And comments is our documentation. This is a part of project documentation.
Starting point is 00:27:04 Database tables should have some comments, functions should have comments, columns as well. But of course, if there's nothing to say about some simple column, okay, it can be skipped. But there should be some rule, and the AI should be helping to maintain good comments. So later, when you try to add more features or do refactoring, there is great context. And also when you work with the database via all those MCP servers, APIs: if you work with a database and it can describe itself, it's great for the tools you have, right? Instead of guessing the meaning of a column
Starting point is 00:27:43 just on column name, you have a comment. That's great. So now I think there's no excuse to avoid this powerful tool. Yeah. And have everything documented. I think you raise some interesting things. I do think that being strict about it on ID columns, you make it a perfect point. There's no point. But the place I've seen is super useful is reporting queries. So I think sometimes it can be quite complex to make sure you are, like, summing the right columns. Or the column means what you think it means.
Starting point is 00:28:19 Does this number, this revenue number, what does it include? What doesn't it include? And that's super relevant when you're trying to report stuff. And maybe sometimes just the data person knows that. But if they can put it in comments on the schema, then in future, once they hire a team, those people can know it. And nowadays, if an LLM is writing reporting queries, it's got a better chance of getting that right, rather than...
Starting point is 00:28:46 Sure mistake. Misans. Exactly, exactly. And the ideal schema, two columns, ID, column ID and it's of type UUID and we should have a comment never put UUID version 4 here
Starting point is 00:29:01 always UID version 7 and second column data JSON B it's a joke just in case Yeah yeah Sounds like Mongo Yeah and then the comment which you extend all the time extending schema of that JSONB right
Starting point is 00:29:17 explaining what's inside. Yeah. Anyway. One interesting case we discussed recently: there was some project which was originally a monolith, but they split the database into several pieces. When you do this, you need to abandon some foreign keys, because you cannot have foreign keys between two clusters, between two primaries, right? Yeah. And I remember we discussed that maybe we should maintain some fake foreign keys, imaginary foreign keys, and define them in the comments.
Starting point is 00:29:48 it's just it was just an idea right because who will be enforcing the rule that nobody will ever write those comments. I don't know. But it's possible. So you have a column in one cluster saying that it should, values here should match values of that column in that cluster. And periodically application or some additional tooling checks this. Anyway, tests and comments are super cheap to write these days.
Starting point is 00:30:15 There should be some rule to enforce in every project to make them rich. Yeah, not only are that you. cheaper though. I think there's, they've always been valuable. I'm a big fan of tests. Like I was, I love making changes to things and knowing that we haven't introduced any regressions. Like, all the previous bugs that now have tests that mean that we can't reintroduce them or reintroduce something similar. So I'm a big fan of tests. I'm a big fan of comments. But I think their value might even be going up in this new world. Like, I think it's not just that they just as valuable and cheaper to add, but I think they might be even more valuable. Like, I would,
Starting point is 00:30:53 be super scared letting an AI make changes to an application that doesn't have good tests coverage these days. Like, just the value of tests for me is going up even higher because I have less trust that people have properly reviewed things. So it's, do you see where I'm coming from that actually these things might be even more valuable, like comments? Quality. Yeah.
Starting point is 00:31:13 I think so. And yeah, obviously, getting AI to add these things is one thing, but then you need to check that there is a reasonable comment, that it does document what you think that column is or does. Yeah, I can see that. The one thing I was going to ask you, though, is: what do you think about index comments? Do you use them?
Starting point is 00:31:35 I don't remember ever using them. But it makes sense to document why we created an index, I think so. I even think, maybe, because we reindex, right, sometimes for bloat maintenance... I was even thinking: when we added it, who added it, what for? There might be some interesting metadata.
Starting point is 00:31:56 That's interesting. That's interesting. And not only index, I think. Yeah, I remember cases when I thought, oh, damn, I wish we could establish proper. Like, we could figure out when something was really created in Postgres. Untable or index. Function. Yeah.
Starting point is 00:32:15 If you, for example, establish a rule that when you create something, or recreate it, rebuild an index, you document it in a comment, why not? It's an interesting idea, actually. Probably I should borrow it for our pg_index_pilot project, which rebuilds indexes automatically. Yeah, so if a lot of people are using ORMs, they'll have this in source control, right? Like, they can look up when it was first created, and hopefully that comes with a commit message.
Starting point is 00:32:39 Like they can look up when was this first created, and hopefully that comes with a commitment. Or looking at logs in logs, which is like not easy usually. Looking at logs. Logs, if you document DDL, Yeah, but how long do people store those for? If it's a serious project, usually we have something like elastic and store it for quite long. Not forever, I agree.
Starting point is 00:33:03 It's a lot. But indexes could easily have been created years ago and people... Yeah, now remember, actually this is also interesting. Remember, in Pidge Index pilot, we of course have a couple of tables where we store such metadata and all the history of rebuilding and so on. Nice. Yeah, and this is interesting. this is like to think pros and cons of storing some metadata in a comment versus you have specific
Starting point is 00:33:27 table and store it there. Pros and cons are not obvious to me because of course comment is closer right. It's easier than to consume. But you don't have history for example. Only yeah of current additional table you need to maintain it and so on. Pros of storing some comments separately also permissions. Sometimes you want to store some data which you don't want regular users to observe, for example. It's very specific nuance for like your goals, right? But yeah. Yeah. Last thing we wanted to mention is this blog post from Andrea Lepiehikov. Yeah. An interesting idea to use security labels as metadata storage. It's quite elegant, I think, and we discussed before. So the idea is that we need some metadata storage, but instead of creating a table and write it there, in that case,
Starting point is 00:34:24 it was I think it was PGA Edge, so it was related to probably a multi-master solution and logical replication, bidirectional logical replication, something. So the idea was let's use these security labels coming from integration with S E Linux security stuff and benefit from the fact that you can put anything there. And for different users, unlike, for example, comment, which is single comment for database object, there you can have multiple metadata pieces belonging to specifically, like, to each user. So it's one to many relationships.
Starting point is 00:35:02 So it's interesting. And putting there some custom data, why not actually? Yeah. Yeah, so it's called security labels, but I guess you could just think of them as labels. Yeah. So it's interesting. I never thought about this and maybe there are different use cases
Starting point is 00:35:21 where you can benefit from this. If you need specific comments for specific users for this particular database subject. Yeah, many options. Yeah. Anyway, comments should be used more in the AI era.
Starting point is 00:35:36 Like table level, index level, I like it a lot. Never use, but I'm going to think. Yeah. And even if you're somewhere that isn't using AI stuff all the time. I don't know how many of them there are these days. But just, I think this is useful anyway,
Starting point is 00:35:53 even for teams that are collaborating. Like, this comments are good for communication generally. Good. All right. Nice one, Nikolai. Thanks so much for this and catch you next time. Have a good week. Bye-bye.
Starting point is 00:36:07 You too. Bye.
