The Offset Podcast EP019: Color Management

Episode Date: October 1, 2024

Color management pipelines play an integral part in modern postproduction, but these pipelines still cause a lot of confusion for many people. In this episode of The Offset Podcast, we discuss essential concepts and vocabulary about color management pipelines to prepare you to dive deeper into these concepts. Specific topics discussed include:

- Why understanding color management is so important in modern workflows
- Understanding the concepts of scene and display referred
- How LUTs can be considered part of a managed pipeline, but with some limitations
- The parts of a color-managed pipeline
- Choosing an intermediate/working space
- Project-wide, node-based, and layer-based approaches to color management
- The power of color space-aware tools
- Printer lights & contrast in a managed pipeline
- Output metadata tagging
- Using OFX and DCTLs in a managed pipeline
- The danger of too much fiddling with transforms

You can also find this episode on all major podcast platforms. Big thanks as always to our sponsor Flanders Scientific, and our editor Stella!

Transcript
Starting point is 00:00:00 Hey there, and welcome back to another episode of the Offset Podcast. And today, we're talking about something that's probably been on your mind for the past few years, and that's color management. Stay tuned. This podcast is sponsored by Flanders Scientific, leaders in color accurate display solutions for professional video. Whether you're a colorist, an editor, a DIT, or a broadcast engineer, Flanders Scientific has a professional display solution to meet your needs. Learn more at flanderscientific.com. All right, Joey, here we are for another episode. And this time we are talking color management, which, you know,
Starting point is 00:00:43 if you've been hiding under a rock for the past couple years, you might not be aware of what color management is for video and how pervasive it's become in our industry, not just in color correction software, but even in editorial workflows, DIT workflows, etc. Everybody is always talking about color management. So I know this is a subject that is, how should we say, near and dear to your heart, right? This is something that you like to pontificate on all the time.
Starting point is 00:01:11 I always learn something from you when we have a color management discussion. And for our audience, I think I want to make one thing clear off the top here. We are not trying to give specific recipes for specific camera workflows or device workflows or any of that kind of stuff here. There's plenty of great tutorials on the web about how you push all the buttons. We want to talk largely conceptually about what color management is, how it works, how it thinks, you know, and some of the things to think about, sort of the intricacies within color management, right?
Starting point is 00:01:45 Yeah. Like you said, color management is absolutely incredibly important in our industry today, more so than it used to be now that we're dealing with not only a lot of different source formats, but also a lot of different destination formats, HDR, SDR, various flavors of HDR, various tone maps, lots of weird deliverables. Color management can make or break the efficiency of a project. And I don't think a lot of people understand it,
Starting point is 00:02:14 starting from kind of the beginning to the end. Yeah, you know, what's funny is that I have always thought about color management as something that, like, people just kind of understood. And I think that's because years ago, I mean, decades ago, I had some good friends that were print guys, right? And they were always doing stuff, you know, like CMYK printing, screen printing, etc., where they were having to interface with, like, printer profiles and things of that nature.
Starting point is 00:02:41 And so the core concepts of, like, hey, this is our working space, this is our output space, all of those kinds of things, I kind of just understood. But it shocked me, you know, how little video people and post-production people really got that. Obviously, these days people understand a lot more than they did 10, 15 years ago, right? But it is something I think that's worth exploring, you know, kind of conceptually, because I think it's one of those things, too, where a lot of people know enough to be dangerous with some of this stuff, right? But, like, maybe don't know enough to really use the right
Starting point is 00:03:21 vocabulary to ask the right questions or to get help. And I think that's really kind of what we're trying to cover today, right? Is, again, not individual recipes, but more so some of that big picture stuff. Yeah, absolutely. And I think, you know, one only has to look to the Resolve Facebook groups and some of the screen grabs of the insane color management setups that people seem to find themselves in by some kind of guess and check, where they might be doing one thing and then undoing it and then redoing it like seven times in a row.
Starting point is 00:03:50 And I think a lot of that is just not understanding the core concepts. So let's start with those core concepts. And I think the most important, one of the most important core concepts is scene and display referred. We hear these terms all the time. And a lot of people don't really know what they mean. Like referred? What does that mean? It's kind of a weird wording.
Starting point is 00:04:09 A weird way of saying it. It's weird. Yeah, it's weird. For sure. So to break it down, it's actually very simple. Display referred means you're grading what is on the display. Yep. Scene referred means you're grading, in theory, what is the scene
Starting point is 00:04:25 light. So, you know, we worked display referred in television for almost 100 years. Right. And that's why we never really thought about color management in the TV side of the industry, because everything was Rec 709 or 601, whatever, you know, everything was normal color television, gamma 2.4, and you just worked with that. Then we started getting cinema cameras into the mix. They shot log. They shot raw, and they needed to be converted just to look at them on a screen. And a lot of people's initial reaction was, okay, we do the conversion, and then we color correct in our normal way by looking at the screen. You know, display referred. Again, in the modern world, like we started this off saying, well, now we have more output formats. The screen is not a single definite target.
Starting point is 00:05:21 Right, we have more screens, different flavors of screens is a way of saying that, right? Exactly. So these days, I mean, we always recommend working scene referred unless it's a very, very, very specific exception. That is to say,
Starting point is 00:05:38 you know, there are very specific exceptions, like maybe if you're grading for the Las Vegas Sphere and you're in their test laboratory, you might work scene referred, or, sorry, display referred for that, because it's not going anywhere else. Well, I think the other thing
Starting point is 00:05:54 that, and I think where people often get confused about this, is because there is a marriage of display referred and scene referred, right? You are eventually going to look at this content on a display of whatever flavor that might be, right? And what scene referred workflows allow you to do, I often think about it this way, is the management or the flexibility to adjust things to whatever that display referred device might be at the end of the pipeline, right? So it's not that you're working in this magical fairyland where it's scene light and it never has to see a display ever again, right? Of course that's not true; to see it, you have to look at it on a display that has a certain amount of settings,
Starting point is 00:06:38 etc. But what scene referred does, think about it almost as, and I'm going to use these terms a little loosey-goosey here, so please correct me, Joey, if I get it wrong, but, you know, it's kind of your working space. It's kind of your overarching pipeline of, okay, I'm going to work in this scene referred, right? And just to be clear, we've all heard of scene referred workflows, right? You know, working in ACES, or, as we'll talk about later, working in DaVinci Wide Gamut; those parts of it are components of a scene referred workflow, right? They're that working space that allows us to work in that stage of the process. Yeah, and, you know, like I said, in television, we've always been display referred. In the land of cinema and movies, they've been
Starting point is 00:07:21 scene referred, because they were working with, at the beginning, negative film, right? You can't just look at negative film and, you know, see what it's going to be. Everything's backwards. So you need to look at it under a transform, which was the print film. Then we got into digital intermediate, where you would scan the negative film and work with it inside of a computer under a display LUT that would emulate your print film. So you've got something on your monitor that would approximate what it's going to look like after it gets printed back to film. That's still a scene referred workflow, right? Because you're working with the negative film under a display transform,
Starting point is 00:08:00 and that display transform is going to emulate the final print. Nowadays, we have very accurate displays that allow us to kind of see the entirety of the signal in ways that are better. But the idea is similar, right? The idea is exactly the same. Your log encoded footage off of your cinema camera is the same thing as that digital negative, right? Ideally, we want to work with it in that scene referred native space, right? And then at the end of the pipeline, put it to wherever it needs to go.
Starting point is 00:08:31 It needs to go to an HDR display. It needs to go to an SDR display. It needs to go to the Sphere. Wherever it needs to go, we have that flexibility at the end. I think one thing that you mentioned that I think is also a little confusing to people, I always look at it as kind of the knowledge you have before you have the real knowledge. And that is, for the past, I don't know, 15, 20 years, right, the idea of a lookup table, or a LUT, has gotten into more people's lexicon, right? They understand essentially what it does. It takes it from one thing to
Starting point is 00:09:04 another thing, right? We always give the example of, hey, the easiest LUT to understand is you have a blue pixel, the LUT does some math, and you have a red pixel on the other end, right? I mean, essentially, for our purposes, that's what a lookup table is doing. I do generally consider lookup tables to be part of the overarching color management process, right? So if somebody tells me, hey, I'm using a LUT, you know, and I'm not in a managed workflow, there's like an asterisk by that, because you sort of kind of are. That LUT is going to make some assumptions of what the source footage is. It's going to make some assumptions for what your output is, possibly make some assumptions
Starting point is 00:09:43 for what your output device is going to be in that case, right? So a LUT is like a dumb form of color management, if you will, in the sense that, I mean, we've talked about this before and I'll put a picture in the post for this, but it's literally just a table of numbers that say, okay, for this input, make this output. And it goes down and down and down, on hundreds of values, to tell you that, right? And that is, again, it's not flexible color management, but it is a form of color management as far as I'm concerned. Yeah, absolutely. And a LUT can contain whatever transform it has in it, right? So it's funny, I actually saw, again, back to the Facebook thing, a big argument on Facebook about people saying, why are they called LUTs? That's a stupid name, lookup table. What are you looking up? Like, no, no, you have an RGB triplet. You look it up in the table and it gives you a new one. A LUT is just a text file with input numbers and output numbers and nothing else. I remember teaching this one time, I was at a big conference, I think it was Adobe Max, and I actually opened up the LUT in a text editor, and it was like nirvana for people. I think they were expecting calculus, right, or some sort of big, huge transform function, when it's really not. You know, that's where people go wrong sometimes. A lot of people say, okay, LUTs are just dumb math. They're not really even math. They're substitution. Yeah, that's good. Right. And I think that's an important thing to kind of differentiate.
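Joey's point that a LUT is "just a text file with input numbers and output numbers" can be shown directly. Here's a minimal Python sketch with a hypothetical five-entry 1D LUT (real .cube files list thousands of RGB triplets, but the principle is identical): no formula, just a table and a lookup.

```python
# A LUT really is just a table: input values in, output values out.
# Toy 1D LUT (hypothetical contents): a simple contrast curve
# sampled at only 5 points across the 0..1 input range.
lut_text = """\
0.00
0.05
0.35
0.80
1.00
"""

table = [float(line) for line in lut_text.splitlines()]

def apply_1d_lut(value, table):
    """Look up `value` (0..1) in the table, linearly interpolating
    between the two nearest entries -- no formula, just a lookup."""
    pos = value * (len(table) - 1)          # position within the table
    lo = int(pos)
    hi = min(lo + 1, len(table) - 1)
    frac = pos - lo
    return table[lo] * (1 - frac) + table[hi] * frac

apply_1d_lut(0.625, table)   # lands between entries 0.35 and 0.80 -> 0.575
```

A 3D LUT works the same way, except the table is indexed by all three channels at once, which is why it can encode crosstalk between red, green, and blue.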
Starting point is 00:11:18 I say that all the time. Right. And transforms, which we'll talk about later, are math, right? They're doing it from a formula, not from a baked-down set of values. But LUTs have been used in color management for a long time and they can still be used. Like I said, the print example: when you're working on something that has to go to print film, they would use a print emulation LUT to look at it on a monitor. In that specific, yeah.
Starting point is 00:11:42 They would even get it from that specific lab that it was calibrated to. Yeah, the whole nine yards, I get it. The other thing I think that's related to this, and I'm not trying to point fingers at people, but I do think there's a certain subset of people that think about color management and the first thing that they think is: complicated, right? They think, oh, it's too many steps, too many buttons, there's too many things that can go wrong. And the retort I always get from that type of person is, I'm the color manager, right? I'm the colorist, right? I'm making the color management decisions. And I used to think about that and be like,
Starting point is 00:12:19 oh God, man, you just don't want to learn about this, whatever. But the more that I thought about it, there is some validity to the colorist as the color manager, or however we want to say that, right? The colorist is obviously making creative decisions about what shots are going on, making creative decisions about transforms, etc. And that is going to happen regardless of whether you're in a non-managed workflow, a fully color managed workflow, or somewhere in between, like with a LUT, right? You are still going to be making decisions that influence the output, right? You know, how much you're going to crush the shadows, how you're going to blow out the highlights, how saturated you're going to make things. So it's not that the colorist is divorced from all of this,
Starting point is 00:13:04 right, in terms of the color management workflow. You're still making those decisions. Yeah, it's just like, how much work do you want to create for yourself? Because, you know, the attitude of I'm going to do everything manually, well, what if all of your files were video levels and they were tagged as full range levels? Yeah, they'd be wrong. You could grade each shot manually to adjust for that, and it'd be fine, but you're just creating more work for yourself. And having a poor color management pipeline can do that, right? Especially if you have a lot of different sources, it can make way more work for yourself and mean you have to be sitting there working on technical fixes as opposed to creative work.
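The video-versus-full-range mix-up Joey describes is easy to see in numbers. As a sketch (8-bit code values; the same idea applies at 10-bit): legal/video range puts black at code 16 and white at code 235, so a file interpreted with the wrong range shifts every shot before you've touched a ball.

```python
def legal_to_full_8bit(code):
    """Remap an 8-bit legal/video-range code value (black = 16,
    white = 235) into full range (0..255)."""
    return round((code - 16) * 255 / (235 - 16))

# Interpreted correctly, legal black and white land at the extremes:
legal_to_full_8bit(16)    # -> 0 (true black)
legal_to_full_8bit(235)   # -> 255 (true white)
# Misread as full range, code 16 just stays at 16: lifted, milky
# blacks -- exactly the per-shot manual fix a managed pipeline avoids.
```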
Starting point is 00:13:43 Yeah, and I think the other thing that working color managed does for people is it presents a certain level of consistency to the initial starting point for your shots. Because those same people I just mentioned, they'll often go, well, I can grade these shots faster than I can by pushing all the buttons to set this up. I could be halfway through the show by the time I figure that out. And I'm like, that is true, but you multiply that by a thousand shots. Right. You as the colorist are going to have some variability. Like, fine,
Starting point is 00:14:18 you might have been pretty consistent day one. You come in and sit back in front of the show day two, and now you're crushing the blacks a little bit, or you're saturating things a little bit more than you did the day before, or whatever, right? Color management, in whatever flavor it comes in, one of its other advantages is consistency to the starting point of your work. And I should be clear about what I mean by starting point, because I think there is a tendency to think of color management solely as a way to get to a starting point: that log to sort of a normalized-looking transform. And I think that's definitely part of it. People want a good starting point.
Starting point is 00:14:55 But color management is more than that. It can be part of your ingest process. It can obviously be part of your output process. But I think that where I'm- And it can be a big part of your look development process as well. Right. But what I think most people think of it as is, how do I go from log
Starting point is 00:15:12 to something that looks more normalized as a starting point, and that's fine. You're still going to do that work to eventually massage that image. What we're talking about in this case, where I make the argument to a lot of people to try color managed, is that you don't do all that work up front to get it to the starting point. Yes. So that kind of brings me to the next thing I want to talk about, which is, you know, what are the actual steps in this color-managed pipeline we keep talking about? You know, you have the final output and you have what you start with.
Starting point is 00:15:38 and what happens in between them. The way I like to look at it is there's basically three major regions of this, right? There are your original sources, whether they be Log C, RED raw, whatever, Rec 709. That is what they are. Then there is the intermediate space, which is where you're actually doing your creative corrections. And then there's the output space, which is a transform from that imaginary intermediate space that isn't displayable to something that's actually displayable. And a couple things to add to that general one, two, three approach: sometimes your source space can be your working space. You know, a lot of people
Starting point is 00:16:22 like to color manage with their working space being Log C. And there's nothing wrong with that. Like, the most simple color management pipeline is a Log C image, the ARRI LUT at the end, and then all of your nodes in the middle being your grade in Log C. Now, if you have an odd shot that came from, like, a Panasonic log format, you could color space transform that to Log C. Now, that kind of reminds me of the next thing I wanted to talk about. Hold on one second, because I think you hit on something, you said this, but I want to reiterate because it's so important that people get this.
Starting point is 00:16:59 So what I heard you say is that there are essentially three parts to this pipeline. There's your starting point, your input, what the footage you have is. There's how you're working with it and processing it, that working space or timeline space or whatever you want to call that. And then your output space, right? And what we're trying to do with a well-managed pipeline is that we initially transform our camera shots into our working space that we want and prefer, right? No right or wrong there.
Starting point is 00:17:30 We'll talk about that more in a second. We're in our working space, and then we go from our working space to whatever output device space we need, whether that be P3D65 for HDR, whether that be Rec 2020, whether that be Rec 709, whether that be sRGB gamma or whatever for some weird display, right? It doesn't matter, but those are the three components, and those handshakes need to be correct. But it's important to say that those handshakes are also done, for the most part, under the hood, right? We're not necessarily having to go, okay, well, I'm taking these pixels and going to this. You're simply saying, hey, this is what my footage is, this is my working space setup, and this is my output. Yeah, and I think it's important to do one more kind of core definition. We defined scene referred, we defined display referred. We've never defined what a color space is.
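The three handshakes just described, input space into working space into output space, amount to simple function composition. Here's a sketch with stand-in transforms (none of this is Resolve's or ACES' real math; the identity and gamma functions are placeholders just to show the shape of the pipeline):

```python
def input_to_working(rgb):
    # camera space -> working space (stand-in: identity transform)
    return rgb

def grade(rgb, gain=1.1):
    # the colorist's creative work happens here, in the working space
    return tuple(c * gain for c in rgb)

def working_to_output(rgb):
    # working space -> display space (stand-in: simple 2.4 gamma encode)
    return tuple(max(c, 0.0) ** (1 / 2.4) for c in rgb)

def pipeline(camera_rgb):
    # The order never changes, even though each stage is swappable.
    # Retargeting a deliverable means swapping only working_to_output.
    return working_to_output(grade(input_to_working(camera_rgb)))
```

Because the creative grade sits between the two handshakes, swapping out the output stage retargets the whole show without touching the grade, which is the flexibility argument being made here.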
Starting point is 00:18:27 Oh, man. People get very confused about this because there are various aspects to it. So let's just break it down a little bit. A color space consists of a set of primaries, which is where your primary red, green, and blue sit on, for example, the CIE chromaticity chart; a white point, which is where white is; a gamut, which is how far out in saturation you can go, which essentially is defined by the boundaries of your primaries;
Starting point is 00:18:59 and some kind of transfer function, whether that be a PQ curve for output or a log curve for input. So there are various ways to convert these numbers. So, like, for example, if you're going from Log C to ACES, it knows where red, green, and blue in Log C are, numerically. It knows where red, green, and blue in ACES AP0 or AP1 are. So it just maps those. Imagine, like, a three-dimensional map from one to the other. That's what a color space transform
Starting point is 00:19:32 does. And that's what we're using to get between those stages of input to working to output. And there are plenty of LUTs that have that baked into them. So any LUT that changes you from, like, Log C to a display space has those conversions built into it. Yeah. And I mean, I think that any color scientist that is listening might raise a finger about that quick definition, because it is a little more nuanced and complicated than that, where we have abstracted spaces.
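That "three-dimensional map" can be sketched in a few lines. The numbers below are entirely made up for illustration (this is not ARRI's Log C formula or a real ACES matrix); the point is the anatomy of a color space transform: decode the source transfer function to linear, remap primaries with a 3x3 matrix, re-encode for the target.

```python
import math

def toy_log_decode(x):
    # hypothetical log-to-linear curve (NOT a real camera formula)
    return (10 ** (x * 2.0) - 1.0) / 99.0

def toy_log_encode(x):
    # exact inverse of the curve above: linear back to log
    return math.log10(x * 99.0 + 1.0) / 2.0

# Hypothetical source->target primary remap. Each row sums to 1.0,
# so neutral grays stay neutral through the matrix.
TOY_MATRIX = [
    [1.05, -0.03, -0.02],
    [-0.01, 1.02, -0.01],
    [0.00, -0.04, 1.04],
]

def color_space_transform(rgb):
    lin = [toy_log_decode(c) for c in rgb]                  # 1) to linear
    mapped = [sum(m * c for m, c in zip(row, lin))          # 2) remap gamut
              for row in TOY_MATRIX]
    return [toy_log_encode(max(c, 0.0)) for c in mapped]    # 3) re-encode
```

A LUT with this transform "baked in" would simply tabulate the output of `color_space_transform` for a grid of input triplets, which is exactly the difference between math-based transforms and LUTs discussed above.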
Starting point is 00:20:20 You think about, like, XYZ, you know, and then you have encoding spaces like Y'CbCr and stuff like that. It is a complicated subject, but I agree with you on the baseline 50,000-foot view of that. So now that we've defined those three parts, and we've defined going in and out of color spaces, I think it's important to describe where color management can happen in that pipeline, right? So let's start with: we've got a bucket of media that we brought onto our computer. It's now in a project; let's just, for the sake of argument, say it's in DaVinci Resolve, right? At that point, the first step of any pipeline, and people are going to go, well, what about project management versus node management? It doesn't matter for this part of the conversation, right? Yes, there are different ways of doing the color management, but the principles here are the same. The first thing we have to do, whatever system we're using and whatever project or node-based ways we're using, is simply tell the computer: this is what my footage is. It's ARRI Log C. It's Blackmagic Cinema Camera. It's Canon Log 3 or Sony S-Log2 or whatever, right? We have to tell it what it is. Once we define what it is, we have to tell it how we'd like to work. This is the working space that we like to work in. Now, this is a big decision, this part, right? Because working space is largely preferential. I think that the guideline would be that you want to work in a working space
Starting point is 00:21:31 that is large enough to not damage the footage that you're working with, right? So traditionally, if you look at these spaces on the spectral locus that you mentioned just a moment ago, the chromaticity chart, you'll often see that these spaces are very big and wide, right? I'm thinking about things like ACES AP1 (AP0 is more of a transfer space, but AP1), Rec 2020, you know, Log C, DaVinci Wide Gamut. These are all very large working spaces that you're essentially never going to run into the boundaries of as you're working. So therefore, whatever you need to do down the pipeline for your output, you're not losing anything in the processing that happens either going into or out of that working space.
Starting point is 00:22:18 That's the essential idea. You always want to work big if you can. But what- Yeah, and a lot of these working spaces are designed to be used with a particular set of output transforms. That's where you get into ACES, Resolve Color Management, et cetera. So if you're working in ACES, you probably want to be working with a, you know, ACES working space, because the transforms from, for example, ACEScct into your displays are all completely done and standardized and easy to use. Same thing with Resolve Color Management, or T-CAM in Baselight, right? So usually that working space is tied to the output
Starting point is 00:22:57 transform as kind of a partnership, but not necessarily always. And that's one of those times where you might want to think about some weird stuff. And by weird stuff, I think you're generally right that, you know, if I'm working in an ACES project, working in ACES AP1 as my working space makes sense, you know, in that pipeline. It doesn't have to be, but there's one weird thing about this choice. And it's funny, because when we did our conversations a few weeks ago with Nate McFarlane from Dolby, which, if you haven't listened to those episodes, be sure to check them out, you know, one of the things that always struck me when talking to Dolby about this kind of thing in color management,
Starting point is 00:23:39 like, they're kind of like, whatever you want for your working space, whatever is big; they're noncommittal about it. Because this last point about it, I think, is important: the working space, another way of saying that is your timeline space, is kind of governing, once you get your footage in and tag it what it is, how you're going to be actually color correcting, what space that is. It's largely about feel, right?
Starting point is 00:24:05 That, you know, the controls, if they are color space aware controls, will feel a little bit more normalized. If they're not, they're going to feel pretty different between these different working spaces. And, you know, if you move one of
Starting point is 00:24:22 the balls on your color control surface, you know, you'd be surprised: in a really big space where a tool is not color managed or space aware, a little move goes all the way out to the edge, like super far, right? Yep. So it can be largely about that preference.
Starting point is 00:24:34 Talking about working spaces, one thing I think is kind of interesting to think about: we mentioned earlier that you can use, for example, a LUT as the output transform. But all that we've talked about so far with this three-step process has implied floating point transforms, right?
Starting point is 00:24:49 As in, we're doing the math completely imaginary, in floating point land where there's effectively infinite precision. If you are involving LUTs in your process for any reason, whether that's some kind of transformation LUT, whether that's a specific print emulation that you're using as a display transform, you might actually want to not use as wide of a working space, because then that LUT might not have the precision
Starting point is 00:25:13 to resolve that working space into the display space. That's kind of the weird outlier situation I was thinking of. But in general, if you're staying floating point, where you're not cutting off the precision and you're not clipping anything, how wide your working space is doesn't really matter, right? You could multiply by a hundred, then divide by a hundred, and you get the same number back at the output. Yep, 100%. And so for that choice, if you talk to a lot of colorists, they're probably going to say one of maybe a few things. You know, I think
Starting point is 00:25:51 people are going to say, you know, an ACES working space, AP1. A lot of other colorists are going to say their color corrector of choice's native working space. So for Resolve, that might be, you know, DaVinci Wide Gamut slash Intermediate, and in Baselight it's going to be their own flavor of a working space. The output spaces are interesting because this is the third part of the chain. So once you've got stuff into your working space, you've made
Starting point is 00:26:24 your Academy Award-winning grade. Everything's going lovely. Clients are giving you high-fives. We now have to get it out to that display referred space, your actual device that you're using. That's going to be a little interesting, because some of these things are standardized industry-wide spaces, right? Rec 709, Rec 2020, et cetera. But there can also be a little bit of inside-baseball kind of math going on that's based on those, right?
Starting point is 00:26:48 So, for example, in ACES you have something they call simulated spaces for, like, projection setups, right? Like a D60 or, you know, a D65 simulated space. That's an output transform that the ACES Working Group came up with, where they go, this is how we can get this to map to this display properly, et cetera. So there is standardized stuff for output, and then there is some, you know, customized stuff.
Starting point is 00:27:11 But the point generally with that output space, and this is a good rule of thumb, right, you know, circle this one, is that you want your output transform in a pipeline to match how the actual display is set up, right? So if you are working with an HDR display and the HDR display is set up for P3D65, well, it makes sense your output transform would be set to P3D65, not Rec 2020, right? We want to make sure, as I mentioned with the word handshake earlier,
Starting point is 00:27:43 we want to make sure that these handshakes are consistent from one place to the next. So you're not doing something like, okay, well, I'm feeding my display a different color space than it's set up for; that's going to lead to a whole bunch of problems. And on the subject of these output spaces, up to now we've been pretty loosey-goosey about our recommendations, right? We'd be like, oh, you can use whatever input transform, intermediate,
Starting point is 00:28:05 you know, build it up as long as you're keeping the ingredients right and the ins and outs match. You can build this pipeline however you see fit. The output transform, in my opinion, is where that ends. You need to not do anything after that output transformation, for a couple of reasons. There's no technical restriction; you could do an output transform, then do a display referred grade of that and tweak it a little bit if you didn't like it. But guess what?
Starting point is 00:28:34 Then you lose any ability to change that output space to another deliverable without having to compensate again, any ability to put that output transform anywhere else in the pipeline, right? So we talked about project-based color management, node-based color management, and in other systems, like Baselight, you would have layer-based color management. Where you're putting these transforms can logistically make a difference in your project. Yeah, that's a big one, dude. That's a real big one.
Starting point is 00:29:04 So project-based, let's start with that, right? Because there, we just tell Resolve: we tag each clip with its input space, we tell Resolve what the timeline space is, and it'll automatically transform from those inputs. The great thing about that is most of that can be handled by metadata automatically. Right? And then you set the output space. And then your node tree, as far as you're concerned, is just all in your timeline space.
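That project-wide flow — tag each clip, transform everything into one working space, grade there, apply one output transform — can be sketched in a few lines. To be clear, this is a toy model: the "color spaces" below are simple power curves with made-up names, nothing like the real math inside ACES or RCM. It only shows the shape of the pipeline.

```python
# Toy project-wide pipeline: metadata tag -> input transform ->
# working space grade -> single shared output transform.
# Space names and curves are invented for illustration.

DECODE = {                            # encoded camera value -> "scene linear"
    "cam_gamma_2.0": lambda v: v ** 2.0,
    "cam_gamma_2.6": lambda v: v ** 2.6,
}

def encode_display(v, gamma=2.4):     # "scene linear" -> display signal
    return v ** (1.0 / gamma)

def grade(v, exposure=1.0):           # all grading happens in one working space
    return v * exposure

def render_clip(encoded, tag, exposure=1.0):
    linear = DECODE[tag](encoded)     # input transform chosen by the clip's tag
    linear = grade(linear, exposure)  # operator works in the timeline space
    return encode_display(linear)     # one output transform for everything

# Two clips from different "cameras" land in the same working space,
# so the same grade behaves consistently on both.
a = render_clip(0.5, "cam_gamma_2.0")
b = render_clip(0.5, "cam_gamma_2.6")
```

The point of the structure is that swapping the deliverable only means swapping `encode_display`; nothing upstream has to change.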
Starting point is 00:29:29 Easy peasy, you work as normal. That's kind of the simplest way to do this from an operator perspective. And to be clear, Joey, in a project-wide approach you have your choice of systems. And we'll expand on this more in just a second. But I want to be clear that the concept of how that works is no different — it doesn't matter which system you use, right? If you're using ACES, RCM, T-CAM, whatever, it's the same basic flow. You're tagging footage. It's transformed automatically into your working space, and automatically into whatever you have set as output. And it's also worth mentioning
Starting point is 00:30:04 one important thing is that many systems, I can't say all, but many systems do treat raw footage a little differently in those types of workflows. I just want to be clear — you said this, but I want to expand on it. You said this can happen automatically with metadata. That's true. But specifically with raw footage oftentimes, because raw is not a color space, right? It's raw. You're just automatically...
Starting point is 00:30:29 It's not an RGB image. Right. You're automatically parsing that into your working space, all the math happening behind the scenes without you having to explicitly say, oh yeah, this is ARRIRAW, RED raw, Sony raw, whatever, because it already knows that it's raw. It's not in a color space that you have to tag or define. So it's going to do that on the back end. Yeah.
Starting point is 00:30:49 Now, anybody that's listened to me or Robbie in the past 10 years knows we personally advocate for node-based color management in Resolve, which means essentially, instead of telling the project level to color manage the project, we are taking the entire pipeline and building it ourselves in the node tree with color space transforms on either end: from the input source to the intermediate space, and then from the intermediate space to the output space. Here's where it gets a little bit dangerous, though, because when you are working node-based,
Starting point is 00:31:24 and this gives you a lot more flexibility, in my opinion, the reason why I like to work node-based is because I can work, for example, before the input transform. If I want to do some paintwork or some texture stuff that really looks like it was done in camera, I can more easily do that inside the original camera space than inside the managed space in my experience. But I never, ever do anything past that output transform,
Starting point is 00:31:53 even though in a node-based workflow you could. The reason being that it will break lots of other things. And if you're working node-based, there's a bunch of different organizational techniques you can use to put your output transform at the end of your chain. Easiest: put it on the timeline level. Less easy: put it on an adjustment layer
Starting point is 00:32:14 or put it on a group level or even just put it on every clip. There might be reasons why you do one or the other. For example, if you are working in rec 709 and you have rec 709 graphics, you might not want to color manage those. You might want to just take them from the designer, lay them on top of your master with an adjustment layer in between. That works great.
Starting point is 00:32:37 You might want to do it on the group level and have it, you know, be easily changeable globally that way, without having to use the timeline level because, again, that would affect things like graphics and even slates and bars and tone. You know the one I get all the time? I'm sorry to interrupt you, but this is an important one — because graphics are an easy one to think of, but it's not just graphics. The nonlinearity you sometimes find in the way the tone curves work with things like dissolves can also be something special, right?
Starting point is 00:33:06 Fades up and down to black, you might want to handle differently. Totally, totally. And it will react slightly differently depending on where you do it. One of the other reasons for never going beyond that output transform is you might start your grade and grade your entire film, for example, with the output transform on the timeline level. And then you might run into some online editing problems when you try to drop in graphics or fades or other stuff in the timeline and say, hey, you know what? This would actually be way better if I put it on an adjustment clip or on a group or rippled it to all the clips, right? Since that output transform is completely standardized and we haven't messed with it, we can just take that node, use any of those other methodologies, and our output image is not going
Starting point is 00:33:50 to change, right? And if you need to send things for VFX with a preview grade on them, you can send a graded image in your working space and then just tell them, for example, hey, that's ACEScct, or you could give them a LUT for ACEScct to Rec 709, and they can give you back a log image in your working space. But since you have not changed that output transform at all, that's going to be the same across your entire pipeline with multiple people. So basically, anytime you do anything after the output transform, you are taking your entire color-managed workflow, baking it down, and doing a display-referred grade on top of it,
Starting point is 00:34:30 which in my opinion completely defeats the purpose. Wise thoughts from a wise man. I do think one of the ways that I have often explained the node-based approach starts with one of the frustrating things to me — and for full transparency here, for years prior to really robust color management systems, I was a big fan, and still occasionally am, of using lookup tables in my workflow, right? In my node setup, when using a LUT workflow, I always put my lookup table, generally speaking, kind of in the middle of my tree. Why did I do that? Because
Starting point is 00:35:14 I wanted the ability to massage the image before the lookup table, before the math of whatever processing it's doing. And I wanted the ability to do it afterwards, right? There are certain times, like, you would apply a creative LUT and, I don't know, the black level would be clipped at like, you know, 7 or 8 percent. So you couldn't do anything before the LUT, but after the LUT, you could bring that black level back down. That's just, you know, one example. So when I started getting into a color-managed pipeline, I was initially very frustrated that a project-wide approach
Starting point is 00:35:46 doesn't actually allow me to do anything before that initial transform into my working space, right? And then it also doesn't really allow me to do anything after that processing. I know that's a bad idea, but it doesn't allow me that flexibility if I actually needed to do something after that process. Whereas node-based — I think one of the reasons that we love it so much,
Starting point is 00:36:13 or a reason, I should say, that I love it so much, is because it emulates that LUT workflow for me precisely. I put whatever transform in the middle of my tree, and if you look at our trees — we'll post one of our trees; people can compare your complicated tree versus mine — you'll see in both there's a similarity, and that is that our transform into our working space is generally somewhere in the middle-ish of the tree, where we have nodes where we can process and do corrections in what we like to refer to as camera space.
Starting point is 00:36:47 That's not exactly true, but let's, for the sake of argument, call it that, right — kind of the unprocessed, pre-transform, pre-working-space area. We do that transform and then we have our working space, whether that be ACES or RCM or whatever it may be. And it simply allows us to work on both sides of the equation at the same time. And there are perfectly great reasons to do that. One of the most common ones I see all the time in the Facebook groups and other places like that is, oh, I got these artifacts — and they post a screenshot and it's these little black dots all over the clip. And it's like, okay, you're producing negative values from the transform into your working space.
Starting point is 00:37:28 These are values that can't really exist, but the math lets you create them. Therefore, you need to fix it. You need to apply some clipping or do something else. But guess what? It's really only solvable if you do that pre-transform, right? And you can't really do that pre-transform if you're in a project-wide approach. In theory, right, these color management systems are designed to make that not happen, right? You shouldn't get weird negative values and gamut problems if you have
Starting point is 00:37:58 the right input transform, the right intermediate transform. But guess what? Our footage isn't always perfect. Sometimes there are issues in the footage that will push things out of gamut or into negative values or weirdness and things like that. And you might need to tweak it. I love being able to go into camera space to do that. But I will die on the hill that you do not do anything after your output space. I understand that.
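Those "little black dots" are easy to reproduce numerically. Converting a very saturated color through a wider-to-narrower gamut matrix can push a channel below zero — the off-diagonal terms are negative. The matrix below is invented purely for illustration; it is not any real camera or display gamut.

```python
# Hedged illustration of the negative-values artifact described above:
# a saturated green run through a made-up gamut conversion matrix.

def mat_apply(m, rgb):
    # 3x3 matrix times an RGB triple.
    return tuple(sum(m[r][c] * rgb[c] for c in range(3)) for r in range(3))

M = [[ 1.20, -0.15, -0.05],   # illustrative wider->narrower gamut matrix;
     [-0.10,  1.15, -0.05],   # the negative off-diagonals are what can
     [-0.05, -0.10,  1.15]]   # drive out-of-gamut values below zero

saturated_green = (0.02, 0.90, 0.02)

converted = mat_apply(M, saturated_green)        # red and blue go negative
fixed = tuple(max(0.0, c) for c in converted)    # clamp before grading further
```

The fix is trivial math — a clamp — but the whole point of the node-based argument is that you need access to a spot in the chain where you can actually apply it.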
Starting point is 00:38:22 The other thing I was going to say about this before-and-after thing of the transform, which node-based approaches allow you to do, is that — and it's the same thing, honestly, with lookup tables — those transforms, even though they're a higher level of math, are still expecting a certain set of conditions on input, right? They're still expecting that, more or less, you know, your gray levels — mid-gray's right about here,
Starting point is 00:38:49 more or less, you know, all these criteria are met. You can create some pretty crappy-looking images with the correct transform, right? So it does also allow me to go ahead of the transform, massage the image a little bit, so the math of the transform is doing more or less what it should. So if I have a clip, for example, that's a couple stops overexposed, it's going to look really terrible after I apply that transform. The transform is expecting it to be exposed correctly. Right.
Starting point is 00:39:21 And chances are you could probably recover some of that detail because of floating point and some of the things that you spoke about earlier. But to me, it's just much easier to go ahead of the transform, bring that shot down a little bit before it gets to the math of that transform. And now I have it kind of correct — which is, again, very difficult or impossible to do in a project-wide approach. So that's why I just kind of like that flexibility of a node-based approach. Yeah. And that brings me to kind of the next thing that I think everybody is going to want to hear about, which is the actual tools we use for grading: what are things like color-space-aware tools, what are conventional tools that you would use, do they behave differently in a color-managed workflow, things like that?
Starting point is 00:40:02 And I think it's really important to know what the individual color tools do to the image and how that behaves inside this greater color management workflow. Big examples are things like contrast and saturation. We're going to talk about printer lights, because everybody hears about printer lights, and I think a lot of people don't understand printer lights. But, you know, when you have tools that are known as color-space-aware, that just means the tool knows what color space it's working in and tries to tailor its input and output to that color space.
Starting point is 00:40:40 As in, for example, if you use the HDR tools in DaVinci Resolve and you set the color space and the transform to Rec 709 when you're actually working in LogC as your working space, or ACEScct as your working space, what it considers to be highlights and shadows is going to be wildly wrong, and you would need to adjust those ranges manually. So color-space-aware tools can be really, really useful
Starting point is 00:41:06 because they're basically pre-setting where their controls start and finish their adjustment to the image to match your working space. Now, contrast and printer lights — this is where I think a lot of people go really, really, really wrong. And this is kind of one of my other hills to die on. I want to preface something, because — I mean, yes, it's wrong in the way that they think it works. But I don't want to blame these people, because I do think that, like any industry,
Starting point is 00:41:42 there's a little bit of hero worship that goes on in the emulation of workflows, right? And so — because I think I know what you're about to say — part of the problem here is that a lot of people emulate what they see from higher-end colorists who are willing to share some of their workflows, without really grokking the entirety of what their process looks like. So, you know, you might see an A-list colorist going, oh man, I never use Lift Gamma Gain. I never use any of these tools. I'm just printer-pointing everything, and voila, I won an Academy Award, right? What they're totally forgetting to tell you is what, Joey? The entire color science team that's building their color management pipeline. Right.
Starting point is 00:42:29 And I'm with this — I'm one of the people that tells you, hey, use printer lights for basically everything. If you look at my node tree, it's entirely designed so that my primary grade basically just has to be printer lights, because all the work is happening in all the other color management around those printer points. So what are printer lights, and what is the contrast control? Right? That's what we need to talk about.
Starting point is 00:42:54 Printer lights move the entire channel, red, green, or blue up and down, basically. So your shadows go up, your highlights go up, your midtones go up, all proportionally. And a lot of people, like you said, will hear really great colorists who do amazing work say, yeah, this was like 99% printer lights.
Starting point is 00:43:14 And then colorists will take their display-referred grade in Rec 709 and start doing printer lights and say, why are my shadows hot-pink-polluted and weird, and my highlights all wacky? Because printer lights were never, ever, ever designed to be used anywhere but under a transform. They were never made to be used on a display-referred image. What I mean by that is, let's go back in time to the film days again. What are printer lights? Why do we call them printer lights, and who's printing anyway?
Starting point is 00:43:47 Right. So when you had negative film, you would project that film with an optical printer onto a print film, and that optical printer had red, green, and blue lights to expose the print film based on the negative that was contacted on top of it. And if you wanted more red or more green
Starting point is 00:44:08 or more blue in the image, you would literally have those lights turn on for more time when it was doing the optical print. And that was the only real color control that filmmakers had for the longest time. But what a lot of people don't understand, and what I think is important to mention, is that print film has a contrast curve associated with it. It has a contrast baked into the chemistry of the film. Yeah, the emulsion layers of the film. Yeah, yeah, yeah, totally. So those printer points in the film days were always working under that
Starting point is 00:44:43 kind of S-shaped contrast curve of the print film. And guess what? All of our output transforms usually have a similar kind of S curve for contrast. And if you use printer lights underneath that in your color managed space, the shadows naturally kind of roll off into neutral.
Starting point is 00:45:02 The highlights naturally kind of roll off into neutral because that's the way the output curve is working. And printer lights work great. In fact, they work better in a lot of cases than Lift Gamma Gain, because Lift Gamma Gain is doing it based on a curve — you've got a curve for your highlights and a curve for your shadows that's then happening under your output curve. You're double-curving, which can get you some weirdness,
Starting point is 00:45:25 which is why I also mentioned contrast. The default contrast setting in DaVinci Resolve adds an S curve. This looks great in display-referred workflows because you don't end up clipping your shadows or highlights, but do that in a color-managed workflow and you're double-curving. And guess what? You turn that contrast up.
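The double-curving effect can be quantified with the same kind of toy curve (again, an illustrative logistic function, not Resolve's actual contrast math): stacking one S-curve on top of another multiplies the midtone slope, so contrast piles up much faster than intended.

```python
import math

def s_curve(x, k=8.0):
    # Toy logistic S-curve standing in for a contrast adjustment
    # or an output transform's tone curve.
    return 1.0 / (1.0 + math.exp(-k * (x - 0.5)))

def slope_at_mid(f, eps=1e-6):
    # Numerical derivative at mid-gray (x = 0.5).
    return (f(0.5 + eps) - f(0.5 - eps)) / (2 * eps)

single = slope_at_mid(s_curve)                        # one curve: slope 2
double = slope_at_mid(lambda x: s_curve(s_curve(x)))  # curve under a curve: slope 4
```

By the chain rule, the composed slope at mid-gray is the product of the two slopes — which is why an S-curve contrast under an S-curve output transform "kind of multiplies."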
Starting point is 00:45:42 You're adding an S curve and then adding another S curve on top of it. It kind of multiplies. So one of the other things I like to tell people is just go in and turn off "use S-curve for contrast." If you're working color-managed, the control will feel and look better in general. Those are all really great points, because I think a lot of people with printer lights — you know, again, I was teasing earlier when I said this — but, like, you know,
Starting point is 00:46:08 you look at an A-list colorist saying, I graded this whole film on printer points, and you want to emulate that, and you're not getting similar results. I think that's a key, key reason why. I think another reason why, by the way, that that bothers people
Starting point is 00:46:20 is because I think a lot of people honestly don't realize how the interactivity, the additive nature, works between the red, green, and blue channels — and, well, yellow, cyan, and magenta too — between all these color channels. I don't think they realize, oh, if I add red, I'm
Starting point is 00:46:40 you know, adding red and taking away from somewhere else, that kind of thing. Because remember, in an additive color space, you add all the numbers together and you get white, right? Or whatever. So no, it's a good point. I think I have one last thing I want to talk about with color management, but it's a very tangential thing. Okay. And to be honest, we don't have enough time in today's show to really dive into all the particulars, because it is a complicated subject. So far we've talked about scene- and display-referred. We've talked about where in the pipeline this works,
Starting point is 00:47:16 different places you might want to work, different tools you might use. We've talked about a lot, right? But all of that has assumed, to a certain degree, that we're working on generally a standardized display of some sort — that we're piping a baseband video signal into it, you know, HDMI, SDI, whatever, and the monitor is set up the way it is. Well, in the wacky, wild world that we live in of web delivery and various device delivery,
Starting point is 00:47:44 etc., things are not as simple anymore as just, hey, it's one piece of video and it displays more or less correctly everywhere, right? And the one thing that we have to get into our lexicon and thinking is metadata tagging related to color management. And what I mean by that is that no doubt you've heard, or seen, or read the 16.5 billion posts on why does my footage look different when I play it in QuickTime Player or X, Y, Z or whatever. And part of that reason has to do with the fact that these days, metadata
Starting point is 00:48:20 tagging of color space information is vital to a successful display of that content. So in tools like DaVinci, we can tag our files on output, right? Just like I said earlier, generally speaking, you want your output transform to match up with how you're displaying things. Well, generally speaking, when you output a file, whether that be H.264, H.265, ProRes, etc., we want to tag it to match what our output transform was — how we looked at it while we were grading it and working on the show. So if you're working on a Rec 709 monitor, we want to tag that file Rec 709, so various software players and devices and stuff like that go,
Starting point is 00:49:04 oh, cool, because guess what? They're doing their own color management pipelines on those devices and on those applications and on those screens. So just like we said before, where we need to tag incoming camera footage, your outputs need to be tagged. Think about it as an input transform, if you will, for that player. It needs to be tagged correctly to be able to display correctly on that software player or that device, right?
Starting point is 00:49:28 So output tagging is a vital thing. For most of us, your best bet is just to create what are called — and there's a certain standardization of this, even though it's very old and antiquated and needs some updating — NCLC tags, right? That, or NLC — I always get that backwards. NCLC.
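For reference, these NCLC values are the code points defined in ITU-T H.273 (what the QuickTime/MP4 `colr` atom carries). A small lookup covering the values discussed here — 1-1-1 for standard SDR Rec 709, and the 9 and 16 codes that show up in PQ HDR deliverables:

```python
# ITU-T H.273 code points as used in NCLC ("colr") tagging.
# Only the handful of values relevant to this discussion are listed.
NCLC = {
    "primaries": {1: "BT.709", 9: "BT.2020"},
    "transfer":  {1: "BT.709", 16: "PQ (SMPTE ST 2084)", 18: "HLG"},
    "matrix":    {1: "BT.709", 9: "BT.2020 non-constant luminance"},
}

def describe(p, t, m):
    """Turn a primaries/transfer/matrix code triple into names."""
    return tuple(NCLC[k][v] for k, v in
                 (("primaries", p), ("transfer", t), ("matrix", m)))

sdr_tag = describe(1, 1, 1)    # the standard SDR Rec 709 tag
hdr_tag = describe(9, 16, 9)   # a common PQ HDR tag
```

So "1-1-1" literally means BT.709 primaries, BT.709 transfer, and BT.709 matrix, one code per field.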
Starting point is 00:49:47 Yeah, these are metadata tags where there's a standardized meaning: one means this, two means that, three means that, et cetera. For most of us, we're trying to create 1-1-1 files, right? That's a standard Rec 709 file. You might throw in an odd 9 or 16 if you're doing, you know, HDR work and that kind of thing. But having those tags standardized and understanding what they do is vital, because I see people all the time go,
Starting point is 00:50:10 well, it looks different in QuickTime Player. Well, that's because QuickTime Player's expecting a certain tag, et cetera. And you can get yourself into a whole — again, we don't have time to fully break this down today. And this honestly gets easier in HDR because the transfer functions are much more standardized. One thing I just want to mention: you said 1-1-1 tagging. That means Rec 709 across the board. A lot of people get confused about this and say, well, I'm grading Rec 709 gamma 2.4 on my reference monitor.
Starting point is 00:50:36 The gamma correction factor of a monitor has to do with the ambient lighting of the viewer. Right. So in a reference environment — a dark-room reference environment — you want to be looking at Rec 709 with gamma 2.4. But you do not want to try to tag your file Rec 709 gamma 2.4. One, because there is not an explicit tag for gamma 2.4. And two, because your viewers might not be watching in a gamma 2.4 environment. In a brighter environment, you're going to want the display to be gamma 2.2 — for a computer, for example. So what we've found, and what I kind of stick with, is the best thing to do for SDR — like we said, HDR actually gets easier, right?
Starting point is 00:51:20 Because PQ is PQ is PQ. It's absolute everywhere, right? But in Rec 709 world, the best thing to do, in my opinion, is monitor Rec 709 gamma 2.4 on a calibrated reference monitor. Notice we're not talking about making your UI monitor in the grading software match any of that. Honestly, I don't even know how to make that happen. I keep the UI monitors dim as crap so I don't even have to be distracted by them. Probably some Kentucky-windage ways you could make them kind of match. But everything we're talking about is assuming that you're using a calibrated external reference monitor. If you're doing that and you're grading gamma 2.4, tag everything 1-1-1 — which, interestingly, in Resolve, you do just by setting the output color space to Rec 709 and the output gamma to Rec 709 as well. That will yield a 1-1-1 tagged file,
Starting point is 00:52:14 even though you are monitoring gamma 2.4. Your file won't have an explicit gamma curve assigned to it because, again, that depends on the viewing environment. Yeah, and there's a little more to this. I don't want to make this seem like this is the definitive, you know, only way of solving these problems, because it gets a little more complicated with Apple-specific display devices.
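A quick worked example of why display gamma is an environment choice rather than something to tag into the file: the same encoded code value simply lands at a different relative luminance depending on the decode gamma the display applies.

```python
# Same encoded signal, two viewing environments. A dark reference
# suite decodes at gamma 2.4; a typical computer display in a
# brighter room decodes at roughly gamma 2.2. The file is identical.
signal = 0.5                        # normalized code value in the file

dark_room_light   = signal ** 2.4   # relative luminance on a 2.4 display
bright_room_light = signal ** 2.2   # relative luminance on a 2.2 display
```

The brighter-environment setting lifts shadows and midtones slightly to compensate for ambient light — which is exactly why baking an explicit gamma 2.4 tag into the deliverable would be wrong for many viewers.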
Starting point is 00:52:37 Yeah, people try to make it more complicated. And yes, it can be more complicated. But that general guideline for Rec 709 is what I have found to get the most consistency on the most players. And that's the name of the game: consistency. So yeah, I think this is really interesting stuff. I mean, again, I think there are a lot of, you know, nuances with some of this stuff. And one thing I think we didn't mention, that's important to mention,
Starting point is 00:53:08 is that, remember, we went from lookup tables right to a fully managed pipeline, right? Even though you might not choose a specific system — you might not, like, you know, say, I'm going to choose ACES project-wide or RCM project-wide — it's important to note that in a CST workflow, you're still color managing, right? Like, a CST is more sophisticated — of course, it's different math — but you are essentially color managing when you're using color space transforms in your node tree. And it's important.
Starting point is 00:53:47 One thing that we didn't mention here — and the reason I brought this back up is because there is sort of a half-ass approach to doing this a little bit, right? And that is — I'll give you an example. I often use CSTs to get into just a different flavor of something that I know to be a little bit more comfortable
Starting point is 00:54:07 for the way that I want the tools to work, right? So I might not know exactly which version of S-Log it was for a Sony shot, or what gamut — you know, S-Gamut3, S-Gamut3.Cine, or whatever. But guess what I know? I know ARRI has, you know, kind of one or two things, and I can just use one of those, right? So I often will transform from, say, S-Log3 or whatever,
Starting point is 00:54:32 into ARRI LogC and then just do the rest of the work that way. So you can use parts of color management to just get the behavior of your toolset to change a little bit too, which is something that we do. Absolutely. And what you're doing there, essentially, is a full color management pipeline, right? If you have a mixture of S-Log and ARRI footage, for example, and you transform all the S-Log to ARRI, then you do all your grades in LogC, and you just put a LogC-to-Rec 709 or a LogC-to-PQ LUT at the end of that.
Starting point is 00:55:04 Yeah, absolutely. It's doing the same thing. It's input, intermediate, output. Totally. But I think a lot of people don't think that way. I think they think it's, as I said, a half-assed kind of thing, but it is a truly managed pipeline. And there are reasons to do that. Let's just combine the two things: you might have a whole package of lookup tables that you love. Maybe they were developed by a color science team, maybe you bought them, whatever, right? But all of those lookup tables were designed to work in LogC space, right? So you might go, hey, I love these creative LUTs, I need to use them on this project, but they're meant for LogC. Well, you could get
Starting point is 00:55:47 them to work properly in LogC by converting everything into LogC, applying that LogC lookup table that you like, and doing the rest of your work, you know, on the other side of your tree. And, you know, Bob's your uncle, right? That's the one problem — and I'll close with this, because I think it's one of the biggest questions we get about color management: why can't I use the LUTs that I like? Oh, that's a good question, because those output transforms are often baked into what the LUTs do. Exactly, right?
Starting point is 00:56:17 Because sometimes the LUT is just a look. Sometimes the LUT is just a technical transform. But most of the LUTs that have been circulating around the world in the past, you know, number of years are not log-to-log LUTs with a look built into them that would drop cleanly into a color management pipeline. They are a combination of some creative look with an output transform to, for example, a Rec 709
Starting point is 00:56:41 normalized thing. Yeah, yeah, yeah, yeah. Those LUTs you could use in a color-managed workflow like we're talking about, if we're all transforming to what that LUT is expecting. Yep. But you'll never be able to get out of the display space that that LUT is outputting. And there is — you know, people have tried to do math to kind of reverse a LUT and remove the display transform,
Starting point is 00:57:01 and intuitively, you'd think that might be something simple to do, but it's really a three-dimensional volumetric thing that you cannot mathematically do accurately in almost all cases. It's basically — you can't unbake a cake, right? Once you put that output transform and that look together into that list of final values, you're never going to get the constituent ingredients back. And that's something just to be aware of if you do want to use some of these legacy look-based LUTs that have transforms built into them.
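A minimal sketch of the "can't unbake a cake" point: once an output transform that clips is baked together with a look, distinct input values collapse to the same output value, so no inverse LUT can recover the originals. The numbers below are arbitrary — a 1D stand-in for a real 3D LUT.

```python
# Why a baked display LUT can't be reversed: the clip step is lossy.

def creative_look(x):
    # A simple gain + offset look. Invertible on its own.
    return x * 1.3 + 0.02

def display_transform(x):
    # The output transform clips to the display range -> not invertible.
    return min(1.0, max(0.0, x))

def baked_lut(x):
    # Look and output transform fused together, like a legacy look LUT.
    return display_transform(creative_look(x))

# Two different scene values collapse to the same display value,
# so no inverse exists that can tell them apart.
a = baked_lut(0.80)
b = baked_lut(0.95)
```

The same logic holds in 3D, where gamut clipping collapses whole volumes of color — which is why recreating the look with grading controls, rather than inverting the LUT, is the practical way out.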
Starting point is 00:57:35 If your project is only ever going to be SDR, great, do that. But if you think you might go to HDR down the road, or you don't want to clip anything on your output and you want to do something a little bit more flexible, it might be worth trying to recreate that LUT with normal grading controls in a node, because, you know, once you have that output transform baked into a LUT, there's nothing you can do to get rid of it. Yeah. Two last things I want to wrap up with. Number one: I think a lot of people have integrated OFX or DCTL tools into their workflows, because they do a lot of extra things that, you know, native tools in some of these applications don't do, or they add some functionality. You know, just be aware that a lot of those tools have color management sections in them, right?
Starting point is 00:58:26 The ability to say, okay, I need to make sure that this tool is operating in ACES space or in DaVinci Wide Gamut or whatever, so that processing is behaving how you want. That's something that's sometimes just hidden under, you know, a little disclosure triangle or somewhere like that. And if it's not a color-space-aware kind of thing, just tell the developer. I mean, some of that stuff is easier
Starting point is 00:58:49 to implement than it used to be, because there are standard libraries and things like that. That's number one. And then number two — the last thing I'll say is that there are complicated workflows, CGI workflows, animated workflows,
Starting point is 00:59:07 those kinds of things where it really becomes a mind-bender, you know? But I will say in general that if you are getting confused about this, you're probably doing too much and breaking it. And case in point is this: I see people all the time post things about,
Starting point is 00:59:24 well, should I change these 400 things in my options in my CST, because I'm doing whatever? And my answer is no. Unless you know what you're doing, for specific and good reasons, let the automagical nature of some of these toolsets do their thing, right? So, like, if you put a CST on a clip, it's going to kind of take stock of,
Starting point is 00:59:48 okay, this is your working space, this is your output. It's going to make some intelligent decisions for you, not all the time, but some intelligent decisions. And so I would just stop messing with things so often, because I just find that the people who have the biggest complaints about color management are often the people who are tweaking without knowing what they're doing,
Starting point is 01:00:06 which, and to be fair, I have no idea what half of those sliders and things mean on some of these transforms either. So I just go- In general, right, Blackmagic specifically has done an excellent job making the color space transform tool automatic. You set the input and output color space and transfer function.
Starting point is 01:00:29 And anything beyond that, you very likely don't need to change, because it'll automatically populate. There are the checkboxes at the bottom for different OOTFs and other transforms. As you change options in your source and destination, you'll notice those checkboxes automatically turn on and off. That's a point of confusion for a lot of people. They don't realize that those options are automatically set based on what you're coming from and where you're going to. And if you want to change those, you really need to understand what you're doing and have a reason for it. That is one reason I think a lot of people go with ACES, because ACES doesn't give you that flexibility. ACES has a list of inputs and a list of outputs, and everything else is under the hood.
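That "fixed menu" design can be illustrated with a toy sketch. To be clear, this is hypothetical and the curves below are simplified placeholders, not real ACES math or real transform names — it just shows the shape of the idea: pick one input, pick one output, and everything in between is handled for you.

```python
# Toy illustration of the "fixed menu" idea: a set of input transforms (IDTs)
# and output transforms (ODTs), with the working space in between handled
# automatically. All curves here are simplified placeholders, not ACES math.

IDTS = {
    # source encoding -> linear working space (placeholder math)
    "Gamma 2.4": lambda x: x ** 2.4,
    "Toy Log":   lambda x: (2.0 ** (8.0 * x - 4.0)) * 0.18,  # mid-grey at code 0.5
}

ODTS = {
    # linear working space -> display encoding (placeholder math)
    "Gamma 2.4": lambda x: max(0.0, min(1.0, x)) ** (1.0 / 2.4),
}

def managed_pipeline(pixel, idt_name, odt_name, grade=lambda x: x):
    """Pick one input and one output from the menus; no other knobs to misconfigure."""
    linear = IDTS[idt_name](pixel)        # into the working space
    return ODTS[odt_name](grade(linear))  # grading happens in the middle, then out

print(managed_pipeline(0.5, "Gamma 2.4", "Gamma 2.4"))  # round trips to ~0.5
```

The trade-off the hosts describe falls out of the structure: there are simply no extra checkboxes to get wrong, at the cost of less flexibility than a hand-configured CST chain.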
Starting point is 01:01:09 Yeah, and I do think the last piece of advice I have for the tweakers out there would be to get involved. I mean, you can't really get involved with, you know, the Resolve development team or the Baselight development team other than giving them a feature request, but with open source systems like ACES, you certainly can, if you would like to contribute. You know, I'm thinking of our friend Nick Shaw, for example, who's an incredible color scientist. He contributes all the time to the ACES platform: hey, this is how this should work, this is how, you know, that should work. And so there is a certain level of tweakability on some of the open source platforms like ACES. You know, if you wanted to make your own IDTs or your own ODTs, that is possible. But generally speaking, if you find yourself getting more complicated than your, you know, your gut's telling you, you're probably more complicated than you should be. Yeah. If it's too hard, you're probably doing something wrong. Yeah, very good. Joey, this has been an exceptionally good and fun talk. We covered a lot of ground here. Of course, for our viewers, if there's anything that doesn't make sense or you want clarification, just let us know wherever you find this. And to that end, remember, you can always find episodes of the Offset Podcast on YouTube. You can find them just by searching for the Offset Podcast,
Starting point is 01:02:27 or you can use our YouTube handle, which is @theoffsetpod. @theoffsetpod is where we are on YouTube. And of course, you can always find us on social platforms like Facebook and Instagram, again, searching for the Offset Podcast. And, oh, actually, the last thing I want to say about this is that a lot of people aren't aware. We've been doing this for, what, about nine months or so, every two weeks for about nine months. It's been a lot of fun, but we can always use some more ideas. And if you have an idea for the show, we have a website for the show.
Starting point is 01:02:54 It's just Offsetpodcast.com. Offsetpodcast.com. There's no the, just Offsetpodcast.com. And then you can click on the submit an idea button at the top of that page that comes right to us and we'll consider it for an idea for a future show. So if you have some ideas, please let us know. And of course, wherever you find this podcast,
Starting point is 01:03:14 like and subscribe. That always goes a long way. Big thanks to Stella, our editor, and big thanks to Flanders Scientific, as always, for being an amazing sponsor. So, Joey, that was good stuff, man. So for the Offset Podcast, I'm Robbie Carman. And I'm Joey D'Anna.
Starting point is 01:03:31 Thanks for watching.
