Postgres FM - Auditing

Episode Date: January 20, 2023

Here are links to a few things we mentioned:

- Crunchy Data PostgreSQL Security Technical Implementation Guide (STIG)
- pgAudit (site)
- pgAudit (repo)
- noset (extension)
- Splunk
- Kibana
- Timescale
- CREATE TRIGGER docs
- Transition table triggers (blog post by David Fetter)
- Table Audit (blog post by Lorenzo Alberton)
- Row change auditing options (blog post by CYBERTEC)
- Hydra founders interview (on Postgres TV)
- max_slot_wal_keep_size
- eBPF
- Building a perf-like tool for PostgreSQL (talk by Ronan Dunklau)
- Party tricks for PostgreSQL: perf, ftrace and bpftrace (talk by Dmitry Dolgov)

What did you like or not like? What should we discuss next time? Let us know by tweeting us on @samokhvalov / @michristofides / @PostgresFM, or by commenting on our Google doc. If you would like to share this episode, here's a good link (and thank you!)

Postgres FM is brought to you by:
- Nikolay Samokhvalov, founder of Postgres.ai
- Michael Christofides, founder of pgMustard

With special thanks to:
- Jessie Draws for the amazing artwork

Transcript
Starting point is 00:00:00 Hello and welcome to Postgres FM, a weekly show about all things PostgreSQL. I'm Michael, founder of pgMustard, and this is my co-host Nikolay, founder of Postgres.ai. Hey Nikolay, what are we talking about today? Hi Michael, your turn, you tell me. But don't choose boring topics, please. Guilty. So today I have chosen auditing. So, I decided — you claimed the last topic I chose was boring,
Starting point is 00:00:22 so I thought I'd go for the absolute most boring topic I could imagine. But on the flip side, I actually think this is quite commonly needed. Not for everybody, not in all cases, but it comes up often enough. And we've had two listener requests for this now. So it's pretty cool to see these starting to kind of cluster into things people are interested in. So yeah, auditing. And by that we mean, well, I guess it can cover a few different use cases but specifically the things people seem to be interested in are the different options they have for seeing who's changed what
Starting point is 00:00:57 and when. And these fall into a few different categories, but we have a couple of really common options in Postgres and a couple of interesting options. Before we discuss options, let me tell you that I consider this topic even more boring than transaction ID wraparound two weeks ago, because it's related to security. Due to my professional activities, I must deal with a lot of security, auditing and compliance all the time, but I don't like it — I like to build something. But, well, we can consider building an auditing system, right? So it is entertaining to consider options, but in general security is not my favorite topic, so my opinions might not be advanced, because I sometimes try to avoid it and use other people's help. So don't listen to me — I can be wrong, and so on. Well, and I think that probably transitions into the options. I think a lot of the development in this
Starting point is 00:02:00 area has been out of necessity — people needing this — not out of passion, not out of, you know, wanting to contribute to open source or anything like that. Requirements, requirements. If you are a big company and you're thinking about an IPO, you already might have requirements from external auditors, or various compliance processes. Yeah, if you're in a regulated industry, perhaps — there's all of those things. But people also want to use these same solutions sometimes for feature-based things, and they tend to be on the less interesting side, so I can see why there aren't maybe loads of projects around this from hobby developers. Right. So I think if
Starting point is 00:02:53 you are a smaller company, a smaller project, but you think some requirements may arise in the future — so not yet — or you already have them, I recommend checking the Crunchy Data document they prepared with the United States Defense Information Systems Agency. It's a Security Technical Implementation Guide, STIG. It's a huge list of various items; basically, each requirement has a severity level, so you can start with the most important, critical ones. And it's impressive — if you want to reach a good level, it's a good document to check and maybe to use. But in general, I agree with you: sometimes we implement this as a feature. For example, if you implement trigger-based auditing,
Starting point is 00:03:47 it can also be your way to restore data, right? So if you save the old value when someone deletes something, it's a way to restore it if needed — manually restore some wrongly deleted parts of data. But if we switch to options, my recommendation is to draw a matrix: three columns, three rows — three by three, nine cells overall.
Starting point is 00:04:13 And it will help to understand the use case and the options, and choose. First of all, there are three big types of events that can be logged or remembered for auditing purposes. DDL changes, of course — schema changes. Then DML, in terms of modification of data: UPDATE, INSERT, DELETE, and COPY. By the way, COPY is usually not reading — it can be reading to stdout or to a file, but sometimes it's a massive data upload to the database.
Starting point is 00:04:51 And the final, third one is access to data, using SELECT, COPY, or a WITH statement — it's also like a SELECT. So these cases are already interesting. And the three basic, big options are: using logs — you write to Postgres logs — and then two options I would call logical-level: either triggers, or logical decoding to send events to some place. And for selects, we can quickly exclude the second and third options, because we cannot see selects in triggers
Starting point is 00:05:29 and we cannot see selects in the logical decoding stream, in the WAL. We don't have selects in the WAL. Well, there are selects that can trigger some WAL writes, but it's a different story; in general, we should not think that that's happening. So for selects, the only option is to use something like pgAudit, which is quite a popular extension, right? Yeah, so on that note — the document you mentioned from Crunchy Data is incredibly impressive, but it's also incredibly long.
Starting point is 00:06:00 I think it's 130 pages or something, and when they announced the public version they gave a shout-out to pgAudit in it. I did a quick search on the document, and pgAudit is mentioned about 140 times in a 135-page document. That goes to show how important it is — as an extension, not as something that's part of Postgres core — to running Postgres, in their opinion, in a very secure manner. So it's incredibly important, incredibly robust, and has a lot of development history behind it, by the team at 2ndQuadrant originally, who are now part of EDB, and the team at Crunchy Data as well, or at least one person there. So it does have real Postgres experts behind it, but it's not part of Postgres core, which is interesting. Right. But the document is huge — although you can
Starting point is 00:06:57 check only the tip of the iceberg first, take the most important items, and then go down to the main body and so on — take it in steps, right? That's possible because all items, as I said, have a severity or criticality property. So, it's a big question: do we need to take care of access to data, or do we just need to track data changes? Because in both requests from our listeners, the discussion was about data changes. If we need to log access to data, we should probably choose pgAudit or a similar thing — we cannot do it with triggers. Yeah, exactly. The more common use case I see is that you do only need to track the changes
Starting point is 00:07:55 rather than access. But I don't see why — accessing data is one of the big security incidents these days, right? Somebody who shouldn't be reading a lot of data, reading a lot of data, is a big issue. So I suspect that is why pgAudit is so popular. So yeah, just to recap on that: if somebody shouldn't read some data, just revoke access to those parts of data for this user. That's it. And distinguish users well, and check regularly that permissions are properly configured. Right, yeah. So you don't need to log access to data which cannot be read by some database user, because it's simply not possible.
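As a rough sketch of that idea — the support_role and customers table here are hypothetical, purely for illustration:

```sql
-- Give a support role only the columns it genuinely needs, nothing more
REVOKE ALL ON customers FROM support_role;
GRANT SELECT (id, email, plan) ON customers TO support_role;

-- Periodically review what the role is actually allowed to read
SELECT table_name, column_name, privilege_type
FROM information_schema.role_column_grants
WHERE grantee = 'support_role';
```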
Starting point is 00:08:36 Yeah, but even if they should have access to it — what if they're reading an inordinate amount, an unusual amount? Say it's a support person who should be checking customer accounts. This is, by the way, interesting.
Starting point is 00:08:57 I wish we had a very simple way to log massive reads. Sometimes I think we should, for example, have alerts if massive reads are happening for some user — but how to do it? Maybe even from pg_stat_statements, because it tracks selects as well, right? And it tracks them for a particular user as well,
Starting point is 00:09:23 and so on and so on. And we have the number of blocks read there, so we can notice that some COPY, for example, happened. And if we distinguish users, we can quickly see: okay, we have plus one in terms of calls for this query, for this user, and it touched a lot of blocks. Or we can do it in logs, for example, even with auto_explain, with buffers enabled.
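A rough sketch of that pg_stat_statements idea (assuming the extension is installed; the join and the limit are just illustrative):

```sql
-- Which (user, query) pairs have touched the most data?
SELECT r.rolname,
       s.calls,
       s.shared_blks_hit + s.shared_blks_read AS blocks_touched,
       left(s.query, 60) AS query_preview
FROM pg_stat_statements AS s
JOIN pg_roles AS r ON r.oid = s.userid
ORDER BY blocks_touched DESC
LIMIT 20;
```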
Starting point is 00:09:49 Well, it's something which seems to me a quite common task, but not well solved. Or maybe I just don't know — as I said, my disclaimer in the beginning: I'm not an expert here at all. Yeah. Well, let's go back to the options — that's a very interesting topic, but probably not one either of us necessarily knows how to solve. But pgAudit is
Starting point is 00:10:11 very much in that log category, right? The only way it does things is via logging. It's highly customizable, but by default it does log a lot, so it basically needs to be customized. The other thing — I see people using triggers quite a lot for different use cases, not necessarily just the security ones, but I don't see much of the third option you mentioned, so I'll be interested to hear more about that, the logical one. Well, let's — I don't want to forget to mention that with pgAudit we have an issue, by the way. Maybe it's not common, but I see it as an obvious issue, and I've had this issue come up in my practice. So pgAudit is logging everything to the Postgres log — everything that is configured.
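For reference, a rough sketch of how pgAudit is typically enabled and tuned — the exact values are only an example:

```sql
-- pgAudit has to be preloaded, then enabled per database
ALTER SYSTEM SET shared_preload_libraries = 'pgaudit';
-- (restart the server, then:)
CREATE EXTENSION pgaudit;

-- Log data modifications and DDL, but not every read
ALTER SYSTEM SET pgaudit.log = 'write, ddl';
ALTER SYSTEM SET pgaudit.log_relation = on;   -- include relation names in entries
SELECT pg_reload_conf();
```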
Starting point is 00:11:01 And only a superuser can change its settings, of course, right? So regular users cannot change settings. But what if we want to audit all actions of our DBAs, for example, which is quite a common task? DBAs usually have superuser — they need it — and they can override settings. Yeah. Well, this is another thing for security, right? Like, if you're doing this for
Starting point is 00:11:26 security, what are the loopholes? What's your exposure with each of these? Yeah, it's a good question. I don't know the answer. You might have alerts configured for each SET that happened — this is one way. So if someone decided to override it, it's very visible to other people. Or you might want... Yes. Or there's an interesting small extension from OnGres called noset, and it disables changing particular settings — you cannot change them without changing the configuration.
Starting point is 00:12:04 So it's also an interesting approach. I know about this extension — I never used it, but I think maybe it's something that Postgres should have. For example, prohibiting changes of statement_timeout — it's a different story, but still — or prohibiting changes of pgAudit settings completely. It would be a good and nice feature in the engine, in the core.
Starting point is 00:12:24 But it has to be changeable by somebody, right? You have to be able to configure it. Well, change it in the config, send a SIGHUP signal, or restart the server. If it's something that you don't want to let... For example,
Starting point is 00:12:40 in some cases — one hour ago I reviewed an Ansible playbook for a major upgrade of Postgres, based on first switching from physical to logical replication and then upgrading Postgres. And we want to set the old cluster to a read-only state. So there is a GUC setting — default_transaction_read_only, something like that, I always forget setting names — and you can set it in the configuration on the primary.
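A minimal sketch of what's being described — and of the loophole that comes up next:

```sql
-- Make the old cluster reject writes by default after the switchover
ALTER SYSTEM SET default_transaction_read_only = on;
SELECT pg_reload_conf();

-- ...but this is a regular user-settable GUC, so any session can still do:
SET default_transaction_read_only = off;  -- and then write to the old cluster
```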
Starting point is 00:13:12 So it won't accept read-write queries at all. But unfortunately, any user can change it and still insert something into the old cluster, and that's not good. I would have noset there and prohibit it, so you can only change it if you have access to the files. I don't know — maybe it's an additional topic here, right? But I see some problem here with pgAudit as well. Right? Well, a similar problem with triggers, right? Like, if
Starting point is 00:13:41 people can remove them or disable them for a while... Well, in this case we should have the DDL logged. But yeah, someone can first set log_statement to none — though that should be alerted on: who decided to keep silence and do some bad things, right? So, before we switch to triggers, one thing to mention about logging overall: in general, I don't like dealing with logs at all, because to do it properly, if you have many servers, you need centralized storage with a lot of power.
Starting point is 00:14:21 And the best system I was dealing with is Splunk, but you need a lot of money, because it's commercial software. So it's a huge system. Usually people use Kibana — not a perfect interface. So, many things, and you need to think about a lot of stuff — PII also, and so on and so on. But in general, if you consider pgAudit, you need to think about what you will do with a lot of logs. And also, of course, in the current Postgres implementation it may be a bottleneck in terms of performance, especially if you forget to have a good disk where you store logs — sometimes it's some magnetic disk, which is good for sequential writes, actually, but in terms of IOPS you may hit some ceilings. So dealing with logs is not
Starting point is 00:15:15 big fun. And I think the advice I've seen, which seems sensible, is: only log what you have to. I think some people are tempted to turn it on and log everything. That feels really bad. Genuinely, while researching for this, I've seen several blog posts that say it's fine, and I just don't agree.
Starting point is 00:15:37 No. You can test it easily. Just create a pgbench database with a hundred million rows or something, and then run at unlimited TPS — the regular setting, don't use -R (capital R) — and compare it with log_statement = all. You will see a huge drop in TPS, a very big drop.
Starting point is 00:15:58 And then try to configure the logging collector and compare again — also an interesting thing. So if logs go through syslog, journald, or something, it can be a very big limiting factor. I don't know current systems, but I explored this four or five years ago and found that if you use syslog,
Starting point is 00:16:19 performance may be limited in syslog itself. So if you enable a lot of logging, you probably should configure the logging collector, check throughput, and ensure that everything is fine and you will have capacity for it. So capacity planning is needed here, definitely. Yeah.
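A rough sketch of the logging-collector setup being described — the directory and rotation values are just placeholders:

```sql
-- Route logs through Postgres' own logging collector instead of syslog
ALTER SYSTEM SET logging_collector = on;              -- requires a restart
ALTER SYSTEM SET log_destination = 'csvlog';
ALTER SYSTEM SET log_directory = '/fast_disk/pg_log'; -- hypothetical path on a fast disk
ALTER SYSTEM SET log_rotation_size = '1GB';
```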
Starting point is 00:16:41 Whichever option you go with, definitely — even on the trigger side as well, right? I saw some advice that put some numbers to it, but if you have a write-heavy table, yeah, write amplification happens if you go with triggers. I like triggers because they give you flexibility, and also you deal with SQL all the time, so you can check what you have inside your database — you don't need to go outside. Because dealing with logs means dealing with either Kibana or a lot of shell scripting,
Starting point is 00:17:16 awk, and so on — a lot of such stuff. But dealing with SQL is good. A lot of data? Maybe we have Timescale there — it's partitioned, we insert and store a lot. Great. But of course, it has the write
Starting point is 00:17:29 amplification, because if, for example, before you wrote one kilobyte, now you probably write a couple of kilobytes instead of one, right? Yeah. And when you delete, vacuum then cleans up those dead tuples,
Starting point is 00:17:46 but if you need to keep old data, it's kept in database, so database only grows. But as I've said, the good thing here is that it gives you an option to restore. And old Postgres guys like to recall that originally Postgres had time travel in core. It was removed. So MVCC was implemented using some time travel features. So when you deleted, the data was present, and you could jump to the past, to a point in the past.
Starting point is 00:18:18 And it was removed for performance reasons, I believe. Right, right. But it's so easy to implement. A few days ago we had a discussion with Hannu Krosing and Peter Zaitsev. Peter Zaitsev raised this topic, mentioning that it should be easy to set up this shadow table, probably, which keeps all deleted data, or maybe also old versions if you update data. And it's actually like 10 lines of code, because you can use JSON to convert everything to one value — you don't think about the schema. So easy. I don't know, maybe I'm wrong. I think I saw the conversation, and I think Peter's suggestion seemed to be
Starting point is 00:19:02 for UX reasons — you know, as a user, I'd like to be able to specify this, and maybe for a time-limited period. I think it was like: can we keep the history for 30 days, for example? It's rare that you... That's interesting, to have some cleanup process. Exactly — if you don't have to keep it forever, which I think was the original design, then I imagine some of those trade-offs become less bad. But yeah, probably a different topic — we've definitely had a request for that one as well, actually, temporal tables. Yeah. Well, yeah, that's an interesting topic, but if we don't think about retention and some complex capabilities here, we're talking about a couple of lines versus 10 or 20 lines of code. It's
Starting point is 00:19:41 not a big deal. It's quite a good question — a good question for a DBA interview, right? A trigger which can be attached to any table, and it stores data which was deleted, so we can always know when it was deleted and who deleted it — which user; that's also possible. And if needed, we have some procedure to restore it with a couple of actions, so it should be easy to restore. So, not a big problem. And we can use the same approach for auditing: triggers write old data, new data, who did it, and when it happened.
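A minimal sketch of that kind of trigger — the users table is hypothetical, the audit table uses JSONB so the same function works for any table:

```sql
-- One generic audit table for everything
CREATE TABLE audit_log (
    logged_at  timestamptz NOT NULL DEFAULT now(),
    logged_by  text        NOT NULL DEFAULT current_user,
    relation   text        NOT NULL,
    operation  text        NOT NULL,
    old_row    jsonb,
    new_row    jsonb
);

CREATE OR REPLACE FUNCTION audit_row_change() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    IF TG_OP = 'DELETE' THEN
        INSERT INTO audit_log (relation, operation, old_row)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD));
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO audit_log (relation, operation, old_row, new_row)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(OLD), to_jsonb(NEW));
    ELSE  -- INSERT
        INSERT INTO audit_log (relation, operation, new_row)
        VALUES (TG_TABLE_NAME, TG_OP, to_jsonb(NEW));
    END IF;
    RETURN NULL;  -- return value of an AFTER trigger is ignored
END;
$$;

-- Attach it to any table you want audited
CREATE TRIGGER users_audit
    AFTER INSERT OR UPDATE OR DELETE ON users
    FOR EACH ROW
    EXECUTE FUNCTION audit_row_change();  -- EXECUTE PROCEDURE on Postgres 10
```

That's the row-level flavor; the per-statement alternative comes up next.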
Starting point is 00:20:17 But one thing here in terms of performance: most articles I saw talk about triggers FOR EACH ROW. But for this task it's probably better to work FOR EACH STATEMENT, and since we have transition tables, we can access the old data and new data there, so we can save in terms of overhead — it can be easier for Postgres to save everything in one additional query from the trigger.
Starting point is 00:20:51 And based on the blog post you sent me, I think that was added in version 10, so everyone should have that now, because everybody's on a supported version of Postgres. It's a very old feature already. Time flies. Yeah. Well, some of these blog posts were even older — I think one of them was 2017, right? Five years ago, roughly. And 2007,
Starting point is 00:21:08 I saw, for one of the trigger-based solutions — so that one must have been from well before all of this. This is probably around the time when I first wrote it myself for some project. Yeah.
Starting point is 00:21:21 But that's a good, interesting point about implementation. Another big thing that seemed to come up time and again: a lot of people implement this in the simplest way they could, so trying to do a single audit table for all tables. Now, there are trade-offs, obviously, depending on exactly what you're trying to do, but it is interesting how simple this could be — you can roll your own extremely easily. And there are some people bundling it up: I saw Supabase bundled
Starting point is 00:21:57 it as an extension recently — supa_audit, I think they called it — so that's really cool. It's freely licensed, as everything they do is. Oh, and worth noting, pgAudit is free — it's PostgreSQL-licensed — so that's really cool, that's available to everybody. Well, I don't see big problems with the single-table approach. I know we will attach the beautiful article from CYBERTEC with a comparison table, and it lists write amplification in the cons column for a single audit table, and also that audit search queries might need some JSON skills. Yes, but I don't see the difference with multiple tables here, because if you, for example, have an additional column in this audit table, you can put the relation name there, right, and have an index on it, so
Starting point is 00:22:46 you may not see a difference between dealing with multiple tables or a single table. And of course this table, in my opinion, should be partitioned for large systems. Yeah. And if you have Timescale, you can control what to do with old partitions — old chunks, right? So, a lot of things to do. So I think this is actually a perfect use case for Timescale. I wonder if they already have a blog post about it. I didn't see one, but yeah, it would make sense.
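Sketching the partitioning idea with plain declarative partitioning, reusing the hypothetical audit_log table from above (Timescale hypertables would automate the chunk management):

```sql
-- Same audit table, but range-partitioned by time so old data is cheap to drop
CREATE TABLE audit_log (
    logged_at  timestamptz NOT NULL DEFAULT now(),
    logged_by  text        NOT NULL DEFAULT current_user,
    relation   text        NOT NULL,
    operation  text        NOT NULL,
    old_row    jsonb,
    new_row    jsonb
) PARTITION BY RANGE (logged_at);

CREATE TABLE audit_log_2023_01 PARTITION OF audit_log
    FOR VALUES FROM ('2023-01-01') TO ('2023-02-01');

-- Retention later becomes a metadata-only operation:
DROP TABLE audit_log_2023_01;
```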
Starting point is 00:23:19 In the Supabase one, they mentioned... It's time series, right? We have a data change, we have a bunch of data changes, we have a timestamp, who did it — perfect. It's append-only by definition, right? You shouldn't be deleting things out of an audit log. Right, but you can compress older data, and so on. The CYBERTEC table is great. I do see their point on — if you don't want to add columns, if the whole point of a single table for all is that you don't want to change its schema for each one — then I do see the advantages if you want to index individual columns.
Starting point is 00:24:32 For each table? We want to forget about shadow tables. So it's kind of — it depends. It may be convenient in some cases, but these days I would definitely prefer JSON and maximum flexibility, fully independent of the schema. I don't see any issues. If I need an index, I can use a GIN index, or I can have B-tree indexes on specific paths in the JSON if needed. It's not different from having everything in separate columns, mirroring the schema of the original table, and then indexing those. I don't see a big problem at all. And JSON is beautiful, so I would go this path here.
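For instance, roughly — again on the hypothetical audit_log table, with a made-up customer_id path:

```sql
-- Generic containment/path searches over the JSONB payload
CREATE INDEX audit_old_row_gin ON audit_log USING gin (old_row jsonb_path_ops);

-- Or a plain B-tree on one extracted field you query a lot
CREATE INDEX audit_old_customer ON audit_log ((old_row ->> 'customer_id'));
```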
Starting point is 00:25:05 Cool. While we're on the tangent, there was a really good blog post by somebody — Let me praise Timescale a little bit. A few days ago I was reading about their bottomless approach. I really liked it. It's only for their cloud offering. So you can decide what to do with old data, and it can go to S3. So, bottomless — interesting. And in this case you can implement this auditing, and data goes to S3 for lower-cost storage and so on. It's like bottomless Postgres.
Starting point is 00:25:47 Yeah, super interesting when you have such a specific use case and you know that old data is not going to be accessed as often and performance on it doesn't matter as much. It's really cool. And off topic, I also was researching what people do with branching — Neon and so on,
Starting point is 00:26:00 and PlanetScale. And I Googled something and made a mistake: I wrote "bottomless brunching", you know. Not branching — brunching. Brunch. And there is such a thing as a bottomless brunch, you know.
Starting point is 00:26:16 Normally that involves a lot of alcohol, right? Yeah, yeah, yeah. So bottomless mimosas and so on. So a lot of, we can make some memes here, definitely. Oh, goodness. About new cloud versions of Postgres. Yeah. Speaking of which,
Starting point is 00:26:33 so the two big options I see compared all the time are logging via pgAudit, very, very scalable, very customizable. And it's available almost everywhere. So all cloud. Exactly. Everyone I checked supported it. Yeah.
Starting point is 00:26:52 Every cloud, all of their managed services all supported it. But if you want, you can implement something via triggers if the trade-offs are okay for you. It can be really simple, much easier to do per object, but equally you don't have to.
Starting point is 00:27:07 And then there's the third — I want to make sure, while we're on the cloud topic... it feels to me like that's what's driving this maybe alternative, logical approach. Right, so something based on logical replication or logical decoding, and you send events either to a different Postgres, maybe with Timescale, maybe it's this Hydra — a new Postgres-based, open-source Snowflake; we can attach links to the Postgres TV episode about it — or it can be something like Snowflake or ClickHouse, or anything like Vertica, whatever you have there, or,
Starting point is 00:27:45 I don't know, Bigtable, Redshift, anything, right? Or maybe even not SQL at all. Many opportunities here. And the good thing is there is no write amplification. But the bad thing is you cannot use logical on a secondary — you don't need it for selects, because we already discussed that selects are not possible to track, so it's only about data changes — but the use of logical slots can happen only on the primary, and it has some risks, of course, like running out of disk space. Fortunately, in the newest versions of Postgres —
Starting point is 00:28:27 I think it was added in 13 or 14, I don't remember — you can specify a maximum threshold for slot size, max_slot_wal_keep_size, so you are protected here. And of course the setup is complex, but there's a lot of flexibility. And there is almost no write amplification — only a small overhead of writing additional data to the WAL. So there are limitations.
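Roughly, the protection and the decoding side look like this — the slot name and size are placeholders, and test_decoding is just the built-in demo plugin; a real pipeline would use something like pgoutput or wal2json:

```sql
-- Cap how much WAL a replication slot may retain (available since Postgres 13)
ALTER SYSTEM SET max_slot_wal_keep_size = '50GB';
SELECT pg_reload_conf();

-- Create a logical slot and peek at the change stream
SELECT pg_create_logical_replication_slot('audit_slot', 'test_decoding');
SELECT * FROM pg_logical_slot_peek_changes('audit_slot', NULL, NULL);
```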
Starting point is 00:28:48 So, for example, you should have primary keys — maybe not, by the way, for the sake of auditing. Anyway, a lot of interesting things. Also interesting: if you talk about pgAudit, if you check the documentation, they immediately say why it's better to use pgAudit. Because, for example, if you have dynamic SQL and you access or change something in a table whose name is constructed dynamically, concatenated from several parts, then normally, with log_statement = all or log_min_duration_statement = 0, you cannot grep for it or search. But with pgAudit, you have the normal relation name
Starting point is 00:29:35 you can search. And if we talk about logical decoding, well, something's probably, well, everything is there, right? So user is there, right? Everything is there. You can decode and know when, who did what. Except selects, of course. So data modifications, DDL, everything is there.
Starting point is 00:29:56 Question — does it have that same downside you mentioned as shadow tables? I'm cheating and using the CYBERTEC table — the excellent table they have — but they've got: typically requires some schema changes and extra care when the schema evolves. Yeah, well, if you have schema changes — well, if you use logical replication, each schema change will break it.
Starting point is 00:30:18 In the current version of Postgres there is ongoing work to improve this, but the publisher schema should match the subscriber schema, right? And if you use this, you need to take care of DDL. For example, in pglogical there are special functions which you need to use as a wrapper for all your DDL.
Starting point is 00:30:41 So there are limitations here, definitely. So you're right. But it's an interesting area. I very much credit the CYBERTEC team for that. But yeah, it's super interesting. It feels like we've got a few really good options, and depending on our use case — it'd be very surprising if one of those doesn't work on some level. So hopefully that's given people plenty to think about. I did have one slightly less serious question for you. Have you seen the pgAudit logo?
Starting point is 00:31:13 No. I mean, I'm sure I saw it, but what's about it? I don't know. I can't really work it out. I think it's just got like a golf visor on the Postgres elephant. Let me check it. Is that what audit people wear? Okay, it has green.
Starting point is 00:31:31 Yeah, yeah, yeah. I'm not sure. Never thought about it. So weird. But yeah, it made me think. It's serious, right? It's a serious extension with not serious at all logo. Extremely.
Starting point is 00:31:45 Before we finish, let me put the fourth option on the table, which I think maybe will be winning. So, imagine if we could observe all queries on the server — not installing triggers, not writing anything to the log — and send them somewhere, like a log collector, or using UDP to different servers. Somehow we can send everything, all details about each query executed, and even not yet finished —
Starting point is 00:32:13 we can send information about a query which has only just started. Of course, we could do it by observing pg_stat_activity, but it has a limited query column length — like 5,000 characters or something? 1,024 by default — track_activity_query_size, I don't remember exactly. But here it's not limited, so we can see everything.
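For reference, the pg_stat_activity limit being mentioned, as a quick sketch:

```sql
-- pg_stat_activity truncates query text at track_activity_query_size (1 kB by default)
SHOW track_activity_query_size;
ALTER SYSTEM SET track_activity_query_size = '32kB';  -- needs a server restart to take effect
```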
Starting point is 00:32:40 And we just do it, right? And we can send it. And everything comes almost for free. Imagine such approach. It's possible with modern Linux and it's called eBPF. Oh, interesting. Yes, we had a couple of episodes recently on Postgres TV about this particular thing. But it was about monitoring and query performance troubleshooting.
Starting point is 00:33:05 So, observability. But nobody prevents us from using exactly this approach for auditing purposes. And I think it has obvious pros, and the con is complexity — you need to write it and deal with it. But it can be very, very low overhead, because we don't need to sample it, we don't need to go to the execution plan here — we just need information about the query, with parameters, and who did it. That's it. And it can be about selects too, right? There are some of the same downsides that pgAudit solves, you know — I know it's a very contrived example, but, for example, the piping of the object name together in the pgAudit README — I guess they have the same problem. Oh yeah, you're right. But other than that, it does sound really interesting.
Starting point is 00:33:56 Yes. And the Postgres log — it's a single file, and there is a lack of flexibility in maintaining that file, and of course the overhead of writing it to disk. Here we can use the network and send this somewhere, and filter, and so on. But you're right in terms of dynamic SQL. Yes. But yeah — I don't know enough to know how common that is, or how often it's an issue. Actually, we can probably extract relations somewhere — well, worth checking. I think it's possible to extract relations and to have additional tags, like which database objects were involved,
Starting point is 00:34:39 which user initiated it, and so on. So I wouldn't... That might solve it. Yeah. Well, I'm not aware of any work in this direction, and I just came up with this idea like 10 minutes before we started. So it's a fresh idea.
Starting point is 00:34:58 Nice. But I think I'm quite sure many people already thought about it. It's obvious. And the eBPF is the future of observability. So maybe it will be future of auditing as well. Well, it'd be exciting if,
Starting point is 00:35:10 if me forcing you to talk about a boring subject actually comes to some good. Well, it's about building something. So it's, it's becoming interesting. Of course. Awesome.
Starting point is 00:35:21 Anything else? Any last comments or thoughts? I think that's it. Nice one. Well, thank you, everybody. And thank you, Nikolai. See you next week. Thank you.
Starting point is 00:35:30 Bye. See you.
