The Offset Podcast EP012: Color Grading Myth Busting
Episode Date: June 17, 2024

In this installment of The Offset Podcast, we're diving into some common color grading myths that we've heard over the years. This is by no means a comprehensive collection of myths, but rather a few select ones that we hear often. We'll start by taking a look at the 'skin tone line' on a vectorscope and why its use is not as cut and dried as you might think. We'll then jump into why 'more' grading is usually not the best approach to your grades, and the related issue of why teal & orange looks engineered in post can sometimes be a tell of 'over grading'. We'll explore why lots of LUTs are nothing more than snake oil and why the one-size-fits-all LUT doesn't exist. We'll also discuss why separate P3 grades for film festivals can be an overcomplication for most projects, why you don't have to normalize a log image before keying, and why where you place noise reduction depends on the shot & your needs. Remember, if you like The Offset Podcast please like and subscribe wherever you watch or listen!
Transcript
Hey there, and welcome back to another installment of The Offset Podcast.
Today, we're taking a look at some common myths that are pervasive in color grading.
Stay tuned.
This podcast is sponsored by Flanders Scientific, leaders in color accurate display solutions for professional video.
Whether you're a colorist, an editor, a DIT, or a broadcast engineer, Flanders Scientific has a professional display solution to meet your needs.
Learn more at flanderscientific.com.
Welcome back everybody. I am Robbie Carman.
And I'm Joey D'Anna.
And Joey, today we are going to be talking, well, a little bit about myths, things in the
color grading industry that for whatever reason are pervasive out there. People believe them.
They believe that they're true in various shapes or fashions. Now, to be clear, we're only
going to cover half a dozen or so of these myths. There's a lot more that exist out there.
So if you are watching this on YouTube or listening to this on Spotify or Apple Music or something like that,
and you have comments available to you, let us know.
Let us know if there's some other myths that we didn't cover because, hey, who knows,
we could have a MythBusters Part 2 episode somewhere down the line.
Yeah, and we're going to be pretty lighthearted about some of these things.
But, you know, we like to think we're pretty technically knowledgeable,
but we might get one or two tiny technical details wrong.
If we do, let us know in the comments.
We'll talk about it, but I think we can be, I think we can bust some myths today.
Totally.
All right.
So the first one I want to talk about is something that I think you put on our list here today to talk about because just this week, you know, browsing the old Facebook groups or I think it was Facebook.
It might have been like Lift Gamma Gain or somewhere like that.
Yeah, most of these come from Facebook, by the way.
I wonder why.
Somebody had posted something about the, quote unquote, using air quotes here, the 'skin tone
line', the minus-I quadrature line; there are different names for the skin indicator or whatever you want to call it.
Well, there's only one name for it, and you already said it, the in-phase quadrature line.
Right, right. But I'm just saying this colloquially known as the skin tone line or the skin tone marker or something like that.
And just to cover our bases before we go into why this is kind of a mythical thing: that minus-I quadrature line, or the skin tone line or indicator, has become
somewhat ingrained, I think, in a lot of new colorists, or even some experienced colorists, that
skin has to, and I'm using, again, air quotes here, has to be on that line, right?
And, you know, there's lots of people who write articles about this.
Hey, you know, regardless of skin tone, your skin is, you know, supposed to be on this line.
It doesn't matter if you're olive complexion or fair complexion or red or orange or whatever, right,
that you should be somewhere along this line.
And as far as we're concerned, that's a little
bit of a misnomer, right? First of all, it's impossible to equate one thin area of a
vector scope to all of humankind's various shades and flavors of skin tone, right? So the idea that
somebody says this has to be on this line, I think is the first mistake. Do you agree? Yeah, and let's go
back in time a little bit. The reason why there's a line on your vector scope for this, we talked about
in phase and quadrature is these are modulation aspects of an NTSC color signal. That's why they are
perpendicular on the vector scope and they are used for evaluating test signals and signal flow
on NTSC analog broadcasts. The idea of skin tone being related to that line never
actually existed; it just happens to be there. It's for purely technical reasons, and it is
completely coincidental. Totally. Totally.
Most human skin tones under normal lighting
kind of hit within like 20 to 30% of either direction of that line.
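To make "near the line" a bit more concrete, here's a rough sketch (ours, not the hosts') of how a vectorscope-style hue angle can be computed from an RGB pixel. It uses BT.601 luma coefficients, since the I/Q axes come from NTSC-era math; the ~123-degree figure for the skin tone line is the conventional one, and exact placement varies with a scope's Cb/Cr scaling:

```python
import math

def vectorscope_angle(r, g, b):
    """Return the vectorscope hue angle (degrees) of an RGB pixel,
    measured counter-clockwise from the +Cb axis, using BT.601
    luma coefficients (the NTSC-era math the I/Q axes come from)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) / 1.772        # B'-Y' scaled to Cb
    cr = (r - y) / 1.402        # R'-Y' scaled to Cr
    return math.degrees(math.atan2(cr, cb)) % 360

# A warm, skin-like tone lands in the general neighborhood of the
# conventional ~123 degree "skin tone line" -- but, as discussed,
# scene lighting shifts where real skin actually falls.
angle = vectorscope_angle(0.8, 0.55, 0.45)
```

The point of the sketch is only that "the line" is just one hue angle, while real skin under real lighting scatters around it.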
And, you know, what we've seen in the articles a lot is no matter how light or dark skin is, right?
It does tend to be in the same region, but that doesn't take into account scene lighting,
which can drastically affect what the actual skin tone in your image is for, you know,
important creative reasons. Yeah, I think there's two things that you just said that are really,
really important about this. Number one, the way that I consider the skin tone line, or that
minus-I quadrature line, is as a milestone. It's a mile marker in my set
of scopes to kind of look at, but not be beholden to. I like to think of skin when properly done
is somewhere around this line, but people vary. I mean, look, go out in the real world,
walk around, you know, go somewhere with a lot of people. You're going to see people with
slightly redder tones, people with slightly olive tones, you know, more olive tones, that kind of stuff.
Look at a room lit like mine versus a room lit like yours. Right, exactly. Like, I mean,
you have that pink light and that's the next thing we'll get to. People, there's variance, right?
So saying something has to be on this line or else... And I think a lot of new colorists,
they struggle with this because they do that, right? They go, okay, I read somewhere or somebody
told me that my skin tone has to be on this line. So they do all sorts of things. They, like,
crop into the picture, right? And they're keying. They're using secondary
curves or doing whatever, and they're moving the skin tone around, and it always kind of looks weird
when people, like, you know, force it into this super narrow range, like this is where
skin has to be, you know?
Yeah, and it's important to mention one thing.
There have been some vectorscope software packages that do have what they call a skin tone line
that is not directly lined up with this test pattern line that we're talking about.
Sure.
And they're kind of like positioned where whoever wrote that software feels it should be,
but it's the same idea.
It's like, you know, that maybe that's a guidepost.
But for me, I turn that off on all my vectorscopes.
I do not even let that line be on there because I'm not evaluating an NTSC signal.
It's completely irrelevant to me.
Well, so I agree.
And the second thing you said that I think is worth describing.
And I actually, in that Facebook post that we both saw, I actually responded and gave this anecdote because I think it's an important one.
I remember years ago, I was grading a film.
It was actually, the whole film was actually shot in Las Vegas.
And there was a scene in the show where these actors were walking across this bridge between the New York, New York Hotel and the MGM Hotel in Vegas.
If you've ever been to Vegas, the MGM Hotel is green. It's neon green. It's this huge emerald light. I mean, the whole thing glows.
And these actors were, you know, not too far from the MGM building themselves.
And therefore, that green light of the building was, you know, casting this, you know, super,
you know, pretty saturated, severe green cast on the actors.
And the filmmaker could not get it like, it was just like, they look green.
And I'm like, yeah.
And they came around.
Yeah, they walked around the table, looked at the scopes, looked at the
skin tone line, and said, like, look, see, it's all the way down here.
It's green.
I'm like, and I didn't understand what they were really getting on about at first.
But it just-
Don't say skin tone line.
It's not a real thing.
But it dawned on me that somewhere, somehow, some way, they had been convinced
that proper skin, again, is right around this line,
totally ignoring the fact of what the master
illuminant is in the scene, right?
And so in this case, there, you know,
yards or feet from this gigantic green building
casting a gigantic green light.
And skin is weird, man, because it absorbs,
but it also reflects a little bit too, right?
So like, yeah, if it didn't reflect, we would all be invisible.
Right.
So.
And here's the interesting thing, right?
And this is where skin tone can get really, really delicate,
because even in the same lighting condition,
not everybody has to have the same skin tone.
Different types of skin, different,
heck, people at different times of the day
under different stress levels, their skin will reflect light
differently.
So many complexions have so much detail
in how they reflect light.
Some people will be affected by that ambient lighting
more than others.
And you just got to really think, you know,
one, does this look natural?
Right.
Two, does this go with the look
and the story that we're telling.
And three, is everybody exposed
where we need them to be brightness-wise?
Other than that,
I don't think there should be any rules
for what right is for skin tone.
Because otherwise, we get into this too sometimes
where you're like, I'm going to fix all these,
I'm going to power window all these people
and you kind of like try to power window
everybody to be the same skin tone in a scene.
And then you realize it looks completely whack-a-doodle
and you kind of got to back off on it.
You know, sometimes we have this instinct to fix
where we don't have to fix.
And that's exactly what happened
with this filmmaker, right?
Like I couldn't, I didn't understand
really what they were getting at first.
I'm like, what do you mean?
It looks like perfectly natural to me.
And he was going on about how the skin was green
and I was like, listen, you know,
you're ignoring the fact that you have this gigantic light source
and you see this a lot too with like an indoor scene, right?
Somebody's sitting next to a tungsten, say, table light, right?
That's casting like a nice warm orange glow
and their skin is super yellow, super orange, super saturated.
Well, guess what?
Do yourself a favor tonight, go into a room in your house, turn on a light, sit next to it,
and have somebody take a picture of you.
That is going to be what you see.
So I think a lot of times newer colorists, you know, are so dogmatic about this skin tone line
that they actually end up making the pictures worse, because they're trying to do corrections
that just are not natural and not in line with, you know,
the master illuminant or the overall lighting in the scene.
Yeah, and honestly, that I think kind of takes us right to one of the things you put on our little list here, which is that more grading doesn't always mean better grading.
You know, like I said, sometimes we tend to overfix because that bypass button to show you, quote, before and after is right there and so easy to toggle that you kind of really want to see a big difference, right?
You want to feel like I did this.
And sometimes we push things too far.
Yeah, I think this is, you know, of the many things that I've learned over the past decades from our dear friend Walter Volpato, one of the things, one of the phrases that I hear him say over and over again to groups of people that he talks to, and he's said it to you and I, is respect the photography, right?
And, you know, Walter, of course, is working with the best directors and cinematographers and gaffers and stuff in the world.
You know, his push and pull might be a little different than our push and pull, but the overarching
idea of, yeah, you know, this is what you got. And I think where this kind of more grading thing comes in
is that honestly, a lot of us, especially those who are newer to the industry, are often dealing
with shooting situations that honestly are a mess, right? You know, we joke sometimes
about shows that we work on. We're like, I guess the DP just wanted to use every color temperature
available to them, you know, in every single shot. So I get it on some level that a lot of entry
level grading and even, you know, even some, some mid-level stuff is truly about fixing problems.
And sometimes you have to be a little more surgical. But I remember this story, I think I told
to you actually after it happened, where I had a client in the room one time and they were just
that type of client that was, they were stressed. They were intense. They were really, you know,
OCD, you know, pixel peeping on everything.
And I remember that they walked around the room back to my desk and looked at it.
And they're like, well, now it makes sense.
And I was like, what makes sense?
They're like, you don't have enough nodes.
And I just like, what?
I don't have enough nodes, you know, because they had worked for somebody.
Well, we charged by the nodes.
Right.
They had worked with somebody previously that, you know, had 4,000 nodes.
And so I think that adage of, you know, more grading, more nodes,
whatever, is a dangerous one, because the more that you start messing with very isolated, specific
parts of the image, you have a couple of problems.
One, any correction you do is, in a sense,
sort of damaging to the image because you're changing the original pixels, right, on some
level.
And this is particularly true as you get into targeted corrections, secondaries, you know,
with, with keys and curves.
You can actually add things like noise and banding and, you know, all sorts of
problems. The other reason I think that more is not necessarily better is because especially on those
users who try to do everything with not more nodes, but just do more in a node, right? It gets very
hard to kind of backtrack and figure out what's doing what in any given node. I was actually going to
say the opposite. I try to put a lot of stuff in one node because I use my nodes for purely
organizational purposes. I see people, back to the Facebook topic, I see people post these node
structures where seemingly random things are broken out into their own node for no reason.
Sure.
And yes, it looks impressive.
They made a whole bunch of nodes and labeled them very clearly.
But like, I look at it and there's no logic to it.
I feel like you just tried too hard to make something with a lot of nodes.
I guess I understand what you're saying.
I guess what I'm trying to communicate is that...
And if you've seen my node tree, that might sound hypocritical.
Right.
I guess what I'm trying to say is a lot of...
I see those similar node trees where it's like,
you think you did one thing in node number one,
and you think you're doing something different in node number two,
but what you're really doing is canceling out what you did in node number one.
Yes.
And it just,
so it's a fine line between being organized and kind of being able to turn various aspects on and off
versus canceling yourself out and not being able to figure out what's doing what.
Yeah.
I generally subscribe to the attitude of less is more when it comes to grading.
And honestly, I think, you know, the seasoned eye can tell a little bit.
And we always joke that like, hey, it doesn't really matter what tool you use or what software you use or whatever.
Like, there's no scorecard when it comes to looking at the images, right?
If it's good, it's good.
But at the same time, I think the trained eye can tell by looking at something of like, oh, man, that's really engineered.
Right.
Like that's, yeah, it was built badly in whatever.
It was really over.
Like, you know, where you have like, you know, whatever, different light on every single segment of the screen.
where, you know, people's skin is like unnaturally smooth.
Like, there's a whole lot of tells about that kind of stuff.
And I generally, I generally find it's because people just want to try to do more than they probably should.
Yeah.
All right.
Let's keep it moving because you mentioned something when we were talking about lighting and skin tone.
It's just the light on human skin.
Yeah, yeah.
Right?
And that one is, I think, very relevant to what I want to talk about next, which is the teal and orange look, the mythical teal and orange look.
And I'm not going to say it's a
myth, because obviously, if you've watched
any movie. Anything in the
past 30 years, right?
Anything in the past 100 years.
Yeah, yeah. Since we've had color.
You feel it. And yes, there have been
extreme examples. There have been less
extreme examples. But
and this this delves into more
my theory of the history
of it personally. But I
think the reason why
we have, quote, teal and orange
as a common look is not
because it's somehow innately pleasing to
the eye, although I kind of think it is.
It comes back to tungsten lighting and film and the transition between film and digital, right?
Think about this.
And this is why I tell people so often that lighting is so important because in the days of
tungsten balanced motion picture film, right, you would use tungsten lights and tungsten balanced
film to get good balanced exposure of your subjects that you were lighting.
A natural function of that is everything you didn't light, especially
if you were outside, naturally tended towards blue.
So we had an innate foreground background, teal and orange separation built into the imaging pipeline of motion picture film using tungsten light from like the 1930s onwards.
Sure.
And then when we transitioned to digital, we don't have this, right?
Right.
You can light, obviously you light to the white balance, but it's not nearly as extreme as the difference between tungsten light
and everything else for tungsten-balanced film.
So we've started like chasing that teal and orange look by like,
I've got to tint the shadows teal and the foreground yellow,
but that's not really the same, right?
Yeah, I mean, I think that's a good, that's a good insight.
I will say that there is a little bit of color science action to this philosophy, right,
that the idea of complementary colors, if you look at, you know,
a standard, you know, color wheel, right?
and you look across that, the idea of complementary colors is going, you know, right to the opposite
angle of the other side of the color wheel, right? So if you look at, you know, where yellow,
orange kind of, you know, skin-tony stuff is and draw a straight line to the opposite side of the
color wheel, you know, you're getting into that cyan blue kind of area, right? But if you
project that colorimetry on a different kind of meter, it's not complementary, right? You know,
it is kind of made up. Yeah, I understand, but I'm just saying, like, that is, you know...
It doesn't necessarily have to be orange and teal.
You could have, you know, purple and green or whatever it may be, just kind of drawing across sides of that wheel.
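The "straight line across the color wheel" idea can be sketched numerically. This is an illustrative HSV rotation (our example, not the hosts'), and a simplification, since perceptual complements differ from plain HSV math:

```python
import colorsys

def complementary_rgb(r, g, b):
    """Rotate a color's hue 180 degrees around the HSV color wheel --
    the 'draw a straight line across the wheel' idea. A simplification:
    perceptually uniform complements differ from plain HSV arithmetic."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb((h + 0.5) % 1.0, s, v)

# An orange tone maps across the wheel into the cyan/blue region,
# which is exactly the orange-and-teal pairing under discussion.
r, g, b = complementary_rgb(1.0, 0.5, 0.0)   # orange in, teal-ish out
```

The same rotation applied to any other hue gives the other pairings mentioned, like purple and green or yellow and blue.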
And I think that, you know, to a lot of people, the orange and teal look, you know, in its modern form owes a lot of homage to, you know, Stefan Sonnenfeld at Company 3, you know, in the early 2000s.
You know, he was, I think, obviously, he's a top-notch colorist, but also, you know, a lot of color scientists work with him.
I think they exploited that complementary nature a lot, as an overall look and feel, in the very first kind of, you know, DI-originated films. You know, I'm thinking Transformers and, you know, Battleship and, you know, things of that nature that he worked on.
And I think because those movies were successful, because they had a bold look, I think that, you know, we live obviously in a kind of a copycat, you know, kind of world when it comes to the looks and styles.
I think that one, also because of the aesthetically pleasing aspects of skin versus a counterpoint to the skin, I think it became a thing.
And we've seen, you know, you see all the time.
You see this push to 11 sometimes.
It's like, uh, you know, but you see subtle versions of it.
And I will also say that, you know, the complementary aspect, teal to orange or purple to green or yellow to blue, whatever it may be is not something that's, in my opinion, is not something that's best done in the post process.
Right, and that's, yes, that's the point of this as a myth, right? The myth is that they made the teal and orange
look in color grading. The teal and orange look was made optically with tungsten-balanced film, and then
later on with complex on-set lighting. And that's where I think people kind of lose... set design too, right?
I mean, people like set design, yeah, and makeup and wardrobe. Yeah, exactly, those things. So the myth of teal
and orange is that it is a look that you make in the grade. Like anything else, it is a look that is made
on the production as a whole.
And when I see it engineered,
that's clearly engineered in post,
I see a lot of obvious problems with it,
mostly being with the non-orange or yellow side,
the blue side, that as you said,
kind of creeps in everything else.
And you watch something and you have this really crazy,
a lot of times when this is engineered,
really shifting blacks in a show, right?
Where, you know, some of them are cyans,
some of them are blues,
some of them are greens, and it really
I think when it's when it's done well
it's invisible and you kind of
just you adapt to it and get used to it
after the first few shots when it's
not done well it's
really kind of like it's off putting
it kind of throws you like why is this I can't quite
place it why is this feel different
and oftentimes it's because that black point is
just kind of going everywhere you know
yeah and that brings us to one of
your on our little
list of myths here
the mythical
magic LUT. Yeah, I mean, listen, man, if there is one place in our industry that is more snake
oily... and, I mean, not to denigrate these people, the people that have come out with,
you know, LUT packs or whatever. Like, hey, get yours, right? Like, if you can get some dollars from it
and make some sales, like, I get it, right? My point about what I call the magic bullet or the
mythical LUT is that it's partly snake oil salesmanship. Hey, get
the Hollywood look, uh, whatever:
Joker, teal and orange, whatever, that kind of
stuff. But it's also
partly a misunderstanding
of the color science and how these
things work. Now, to back it up for a second,
you know, you take a look at big films,
Joker is a popular
example, right?
You know, there are
color scientists that work at some of these large
facilities that do camera
tests with DPs at the start of the project or whatever.
And yes, it is true
that they may, they may develop
specific looks that are very stylized that match the aesthetic needs of that particular film, right?
And match the input of the camera of that specific thing.
Correct. What I am objecting to as the mythical magic bullet thing is that you can just go online,
buy something, and all of a sudden you have the Hollywood film look, right? The thing I always say
to people is that lookup tables, or LUTs, are dumb. They're hard-coded math, right?
And I would even say they're not even technically math so much.
I understand your point.
Yeah, I understand your point.
Math is giving them too much credit.
And this came up on everywhere else, Facebook.
Somebody actually talked to me about this.
They said, why is it called a lookup table?
That's silly.
That's just they tried to make that sound more engineering.
No, no, no, no, no.
It's a table of RGB triplets that represent input colors and then a hard-coded new
output color. The only math is interpolating for where there isn't a specific value listed.
The simplest way to understand this, and this is not obviously how a 3D lookup table really
works in the sense, but to dumb it down, let's say you have a red pixel as an input pixel,
right? You have math in this table that says, okay, let me take this red pixel and make it green,
right, or blue or whatever. And that literally looks it up in the table. Yep, that's all it does, right?
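That "table plus interpolation" idea can be sketched in a few lines. Here's an illustrative 1D version (our sketch, not an actual grading LUT format); a 3D LUT works the same way, just interpolating inside a cube of RGB entries, and the only math really is interpolating between hard-coded values:

```python
def apply_1d_lut(value, lut):
    """Look up a value in a 1D LUT, linearly interpolating between
    the two nearest table entries -- the only 'math' involved.
    (A 3D LUT does the same thing inside a cube of RGB triplets.)"""
    value = min(max(value, 0.0), 1.0)      # clamp input to [0, 1]
    pos = value * (len(lut) - 1)           # fractional position in the table
    lo = int(pos)
    hi = min(lo + 1, len(lut) - 1)
    frac = pos - lo
    return lut[lo] * (1.0 - frac) + lut[hi] * frac

# A tiny 5-entry "contrast" LUT: inputs are hard-coded to outputs.
lut = [0.0, 0.15, 0.5, 0.85, 1.0]
mid = apply_1d_lut(0.5, lut)         # lands exactly on a table entry
between = apply_1d_lut(0.375, lut)   # interpolated between 0.15 and 0.5
```

Notice there is no understanding of the image anywhere in this: whatever comes in gets mapped, which is why an input the table wasn't designed for still gets transformed, just badly.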
So that works well and good when the lookup table is more or less designed for the general idea of the input.
But there's a couple problems here.
Number one, a lot of lookup tables are designed with specific cameras, color spaces, or other technical details kind of in mind.
Right.
I mentioned earlier those color scientists doing camera tests with cameras.
So they know, okay, this is going to an Alexa.
This is kind of the shooting we're going to do.
This is the ISO we're going to do.
This is where we're trying to hit exposure.
So a lot of times those LUTs are built, to think about it in a generic way, under sort of lab conditions, right?
They know exactly what the input is going to be, the saturation, et cetera, right?
When you have an input that does not match what that lookup table is expecting, the lookup table is still doing its thing, right?
It's still applying this transform.
But, you know, if you have an input that's 10 stops overexposed and way too saturated and you hit that LUT, bad things happen.
And I see this, you know, teaching people all the time, that they don't
know, because I think this kind of falls also into two categories, Joey. One, the initial
transform from log to something normalized, which a lot of people look for, and then obviously
the stylistic part. And sometimes they get kind of coalesced together. But I see people, like,
you know, they start a project, okay, got to grade this. And they go, like, not that LUT, not that
LUT. You know, they go online. They buy some LUTs. Okay, that one doesn't work. That one doesn't
work. And next thing you know, it's two o'clock in the morning, and they've graded three shots
in their 1,200-shot timeline, because they're looking for this magic bullet to apply.
right? It just doesn't exist.
No, and it's an important thing to remember also when you're using any kind of LUT
is yes, they're expecting in most cases a particular set of input parameters,
whether that's lighting, whatever, design, input color space, input transfer function, whatever.
But sometimes they have display transforms baked into them along with a look transform.
Sometimes they don't.
You know, so it's also, what is that LUT outputting?
So you really got to think critically about how and when and why you're using particular LUTs in your pipeline.
And I think a lot of people, like you said, they look for that mythical, magical one and it just doesn't exist.
Yeah. And as you said, apply it without any understanding of it.
You know, we see a lot of, you know, these days, where people are getting more and more familiar with color management pipelines, you know, ACES, RCM, you know, T-CAM workflows, whatever,
trying to use their tried and true lookup tables
and those kind of workflows too
also presents problems for the reasons that you just mentioned, right?
We're now, well, hey, guess what?
We're not working in a, you know, a Cineon space anymore.
Like, we're now in whatever, this whole, you know,
the variables can combine in a different thing.
Yeah, you could be using a print LUT
that has a baked-in REC 709 display transform.
Right, exactly.
You can't un-bake a cake.
You know, you can't get the constituent ingredients back
after you've thrown away all the data that isn't in the input side of that lot.
And there's tools out there.
Lattice comes to mind, from the guys at Video Village, that allows you to extract, tweak, massage
some things from a lookup table.
But if you try to do what I just said, reverse a display transform in Lattice,
it will come up with a big warning that says,
this is probably not going to go how you want it to go, and it's not going to look good.
Yeah.
And I just think that, you know, I'm not trying to pooh-pooh the idea of the use of LUTs.
I'm not.
I'm not trying to pooh-pooh the hard work that goes into colorists, camera teams, color scientists
creating those for various pipelines.
What I am trying to pooh-pooh is just this idea that you can just go on the internet, buy something,
and it's going to fix your problems, right?
You end up, you end up fighting it way, way, way, way more
than you probably need to or probably should,
and that's because it's just dumb math, or a dumb table, as you said.
Okay.
All right, Joey, I've got another one for you.
This one comes up frequently as well.
You know, these days, people doing feature shorts, you know, docs or whatever,
they're going to go to festivals, film festivals, narrative places, that kind of stuff, right?
And because it's a theater, that's going to mean that it's projected.
Sometimes that projection is no more than a laptop connected to a, you know, home theater projector.
But a lot of times it's a legit theatrical setup with a, you know, a server, DCP server and, you know, proper DLP or whatever projector.
I see a lot of people overcomplicating their grading workflows because they look at big houses that go, oh, well, we did a P3 pass, we did a 709 pass.
we did. And they get down this road where they're led to believe, especially in the case of something that originated or was initially, you know, SDR, REC 709, they're convinced that there's more to get if they did a bespoke P3 grade from the start. Now, that has a couple problems. I'm sure you're aware of all of them, but let me just start with the most obvious one, right? People try to get into P3 workflows for theatrical release without a big component of that whole
phrase, that is, the theater, right? Yes, the theater itself, a theater environment, right? You can
have all the color science and all that stuff correct on your direct view monitor; the theater, no
matter what you do, is going to provide a significantly different perceptual experience than grading
on a direct view monitor. Both the black box environment, the diffusion of light coming out of the
projector, uh, the contrast ratio of a projector versus a direct view display,
between black and white, all that kind of stuff.
So, like, problem number one is unless you're actually grading in the theater,
like, you know, no matter what,
even if you get all the color science right,
you're still going to have perceptual differences if you go down that path.
Yep.
Right?
Yeah.
And I think, you know, the next thing is hand in hand with that is calibration, right?
Yeah.
If you have spent almost all of your time working in SDR, REC 709,
and you have a very good calibrated reference monitor for that, right?
you may not have a good calibration available for P3 for your monitor,
or you might not use it often enough to really configure it properly,
so you might be putting yourself into a corner where you're going to make mistakes,
simple mistakes of monitoring and calibration that you wouldn't make in REC-709.
My thought on this is if you started in REC-709,
and you're happy with your grade in REC-709,
use a technical transform to containerize that in P3,
because for 99% of things,
you're not going into P3 gamut for most stuff.
Unless you looked at your grade and thought,
man, I wish I could have gotten that a little bit more saturated,
then there's no real reason to explore going to P3
if you've already got a locked 709.
Yeah, I mean, I think that this idea that there's more to gain in P3
is a little bit of a misnomer because, I mean, I look oftentimes,
I take a peek at my Vectorscope or whatever, right,
in a what is, you know, what I would consider a colorful grade, right?
I'm like, cool.
I'm nowhere even near the boundaries of Rec 709 in my color grade.
So, you know, when I think about that, I'm like, why am I going to complicate my life?
Also because, you know, it's cool if like, hey, 99% of the eyeballs are going to be in a theatrical environment.
Yeah, go down that path.
But most of the people that we talk to about this are like, well, I'm going to do something for a film festival,
but I also have to put it on a streaming platform.
I'm going to put it on Vimeo.
I'm going to put it on YouTube.
right? And so the big complication has to do with, you know, people trying to manually manage the different white points involved in these two.
Because remember, DCI-P3 has a different white point than Rec 709's D65, right?
So you're going to have a different white point, and manually managing that is a nightmare.
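To make the "technical transform" idea concrete, here's a rough, purely illustrative Python sketch, not anything a grading application does verbatim. Rec 709/sRGB and Display P3 share the D65 white point, so a single 3x3 matrix on linear RGB moves a 709 grade into the P3 container, and every legal 709 value lands inside P3. A DCI cinema deliverable additionally needs the white point adaptation and XYZ encode discussed here, which mastering software handles for you.

```python
# Illustrative sketch: a 3x3 "technical transform" from linear
# Rec 709/sRGB primaries to Display P3 (both D65 white point).
# The matrix is the commonly published Rec 709 -> Display P3 conversion;
# a real DCI-P3 DCP also needs white point adaptation and an XYZ encode.

REC709_TO_P3D65 = [
    [0.822462, 0.177538, 0.000000],
    [0.033194, 0.966806, 0.000000],
    [0.017083, 0.072397, 0.910520],
]

def rec709_to_p3(rgb):
    """Convert one linear Rec 709 RGB triple to linear Display P3."""
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in REC709_TO_P3D65)

# White stays white (same D65 white point on both sides):
print(rec709_to_p3((1.0, 1.0, 1.0)))   # ~ (1.0, 1.0, 1.0)

# Fully saturated Rec 709 red lands *inside* the P3 gamut. Every legal
# Rec 709 value does, which is why a locked 709 grade "containerizes"
# into P3 without clipping:
print(rec709_to_p3((1.0, 0.0, 0.0)))   # ~ (0.822, 0.033, 0.017)
```

Note how each matrix row sums to 1.0, which is exactly the property that keeps the white point unchanged through the transform.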
So I agree that, hey, you've got to do something for theatrical,
but don't complicate your life by going to a bespoke, P3-only grade.
You know, do what everybody, not everybody, but a lot of people do:
just master in Rec 709, and hey, when it's time to do that DCP,
do a technical transform into P3 and then to XYZ, you know, and that will
make your life much, much simpler. Yeah. So the last thing we've got on the list, and this is one that
just keeps coming up no matter what, and I feel pretty strongly about this one,
is that people think you have to normalize or convert an image
to Rec 709 to get good keys or qualifiers, right?
People hit their little eyedropper on a log image
because, you know, we work scene-referred.
Everything in our grades almost always
is in some kind of log format,
whether that's camera log, whether that's Aces,
whether that's DaVinci Intermediate, whatever.
We work in log for 99% of the individual node work
that we do on any given grade,
which means if we need to do a qualifier,
pulling it into Rec 709 temporarily to make a matte
is a huge, huge hassle.
So is the juice worth the squeeze?
And I say, absolutely not.
For one specific reason,
everything in Resolve is 32-bit floating point.
You have almost unlimited precision past the decimal point
in these pixel values.
When you convert from whatever log to Rec 709 or sRGB or whatever,
you are not adding any more information to the signal.
By nature, you can't be adding any more information to the signal.
You're just applying a curve and spreading the values out.
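As a toy illustration of that point, using a made-up log curve rather than any real camera encoding: the conversion is just an invertible curve, so it cannot create or destroy information, it only redistributes the same values along the axis.

```python
import math

# Toy example (NOT any real camera curve): a monotonic log-style
# encode and its exact inverse. Because the transform is invertible,
# converting log footage to a display curve adds no information.

A = 100.0  # arbitrary curve strength, chosen only for this illustration

def log_encode(x):
    """Scene-linear -> log-ish value in [0, 1]."""
    return math.log(1.0 + A * x) / math.log(1.0 + A)

def log_decode(y):
    """Log-ish -> scene-linear (exact inverse of log_encode)."""
    return ((1.0 + A) ** y - 1.0) / A

samples = [0.0, 0.01, 0.18, 0.5, 1.0]
roundtrip = [log_decode(log_encode(x)) for x in samples]

# The round trip reproduces the originals to floating-point precision:
print(all(abs(a - b) < 1e-9 for a, b in zip(samples, roundtrip)))  # True
```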
So where does this come from?
Everybody just clicks the little eyedropper and says,
hey, my qualifier looks completely bad in log.
Well, here's the secret that I don't think a lot of people know.
The eyedropper tool is designed for Rec 709 images.
It samples based on that.
If you are willing to dial the qualifiers in manually, using the floating-point values available on the panel or in the UI, you can get an isolation or qualifier that's every bit as good or better on the original source footage, or in some kind of log color management, as you can if it's been converted to a display transform.
That is complete snake oil when somebody says make another pipeline in your grade with color space transforms just to pull the key and bring that in somewhere else.
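A small sketch of why that works, assuming a simple made-up monotonic curve standing in for a display transform: because the curve is monotonic, mapping the qualifier thresholds into log space selects exactly the same pixels as pulling the key after a display conversion, so the extra pipeline buys you nothing.

```python
import math

# Illustration with a made-up monotonic curve (not a real LUT): a
# luminance qualifier pulled in display space selects exactly the same
# pixels as one pulled directly in log, with the thresholds mapped
# through the same curve.

def to_display(x):
    """Toy monotonic 'normalize to display' curve."""
    return math.log(1.0 + 100.0 * x) / math.log(101.0)

def from_display(y):
    """Exact inverse of to_display."""
    return (101.0 ** y - 1.0) / 100.0

log_pixels = [0.02, 0.10, 0.18, 0.35, 0.60, 0.90]

# Key pulled after converting every pixel to display space:
lo, hi = 0.5, 0.9
key_display = [lo <= to_display(p) <= hi for p in log_pixels]

# Same key pulled directly on the log values -- no extra pipeline,
# just express the two thresholds in log space instead:
key_log = [from_display(lo) <= p <= from_display(hi) for p in log_pixels]

print(key_display == key_log)  # True
```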
Yeah, 100%.
Everything you said is dead on.
I think it's just that it's the laziness factor,
the ease-of-use factor with the eyedropper: that, hey,
we click on something and it should just work, you know, right there.
And honestly, a lot of those tools, like, you know,
it should be possible for a lot of those eyedroppers
to kind of be color space aware.
It would be nice.
It would be nice.
But right now a lot of them are, you know, based on legacy,
legacy color science math, but you can still get perfectly great results.
And honestly, you know, we've talked a lot about this internally in our own workflows
about kind of like camera space or log nodes that we would have in log to do various things
like, you know, any paint kind of work that we need to do, key work, that kind of stuff.
And, you know, once you kind of get used to kind of how to move those dials a little bit
to get the correct selections in log, it's not any different than doing it in Rec 709.
I would speculate that in most cases it's actually better because, like we
talked about earlier, simpler can be better: you're doing fewer things to the image to get that selection.
If you do it through a transform, there are some errors in how the pixels are
rounded off to get into that transform, and when you expand out that dynamic range to go to a
different display, guess what, you could also be expanding out noise. Yeah. Now, you said one more thing.
I actually have one last thing that just came to mind, because as we're
recording this I'm actually looking at something on Facebook where we're, let's see here, 53 comments
deep on this, and that is: where do I place noise reduction, right? And I don't want to say
this is necessarily a myth, but this is something that I think confuses a lot of users about the
best place. And I'll just say, I'm going to save my ass here and just say it depends, right? Yes.
That's what I say all the
time. How much is a boat? Right, exactly, right? So let's give a case in point for why we want to
potentially noise reduce something at the end of the chain, the string of
nodes. And that is because the logic there goes: while I'm pushing and pulling on the image,
right, stretching contrast or doing whatever, one, I am exacerbating or making more noticeable
noise that might exist in the image in general, or, two, I'm actually creating some sort of noise
with a key or a bad, you know, curve or something like that, right?
My first line of defense is always, or generally speaking, to noise reduce
at the end of the chain for those reasons, which I find to be pretty valid,
especially since a lot of the stuff that we get is, you know, shot so-so,
so therefore there are going to be a lot of push-and-pull exposure changes, etc.
When I have to work on something or stretch something hard, that's my first indicator to
pop it at the end of the tree.
Especially if you've got two shots next to each other where one you had to push really
far.
Yep.
Another one you didn't have to push very far.
You might want that noise reduction at the end of the chain because you want to even out
the noise level on both of those very disparate shots.
Yep.
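A toy simulation of that noise argument, using made-up numbers and plain Gaussian noise on a flat patch rather than real footage: the harder the push, the more the same sensor noise gets amplified, which is why two shots graded very differently end up with very different noise levels, and why evening them out with noise reduction at the end of the chain can make sense.

```python
import random
import statistics

# Toy simulation: the same sensor noise, amplified by two different
# "exposure pushes". The numbers are arbitrary and purely illustrative.

random.seed(42)

def make_shot(noise_std, n=10_000):
    """A flat mid-gray patch with Gaussian 'sensor noise' added."""
    return [0.18 + random.gauss(0.0, noise_std) for _ in range(n)]

def gain(pixels, g):
    """A stand-in for a big contrast/exposure push in the grade."""
    return [p * g for p in pixels]

shot_a = gain(make_shot(0.01), 4.0)   # pushed hard
shot_b = gain(make_shot(0.01), 1.2)   # barely touched

# After grading, shot_a carries roughly (4.0 / 1.2)x the noise of shot_b,
# so the two shots no longer cut together cleanly:
print(statistics.stdev(shot_a) / statistics.stdev(shot_b))  # ~ 3.3
```

The same logic also explains the front-of-chain case: noise you remove before the push (or before a keyer samples it) never gets amplified in the first place.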
Now, for the opposite side: when would I pop it in at the top?
And if you look at my node tree, I actually have a noise reduction node at the top,
and I have a noise reduction node at the end as I just described.
Why would I want to do it at the top?
Well, a lot of times when I know that I have something that's well exposed, well shot,
et cetera, but maybe they were just in between ISO on the camera, right?
Or not shooting at the native ISO of the camera.
Or maybe it's a night scene or something like that.
Before I feed that into the rest of my transforms and the rest of my pipeline,
it would sure be great to eliminate that noise, right?
so it's not influencing other things.
Like, hey, maybe I have to do some green screen keying on this particular shot, right?
It's better to clean it up before I make that selection with the keyer rather than after.
Or maybe sometimes it's both, right?
Like, there's no right or wrong with this.
And I see a lot of people, you know, standing up on a soapbox, like, it has to be before.
It has to be after.
And my attitude about it is it depends, right?
Yep.
Well, it just depends.
Right.
Okay, cool.
All right.
Well, a couple things here.
I think these are some good, you know, myths, or some things that are
propagating themselves out there, to consider as things that, you know, people sometimes get wrong.
If you guys have some more of these types of things, please comment, if comments are available
wherever you're watching or listening.
Remember, you can always check us out on YouTube.
If you like us on YouTube, you can like and subscribe.
We're also available on Spotify, Apple Music, Amazon, Google, all the major platforms.
You can always go to DC Color slash podcast, where we have individual episodes there.
or, you know, check us out on Instagram as well, on Facebook.
Everywhere you can think of to find us, we are there.
As always, big thanks to our editor, Stella,
who makes us sound somewhat intelligent.
And also big thanks to our sponsor, Flanders Scientific, as always, for the support.
So, for the Offset podcast, I'm Robbie Carmen.
And I'm Joey D'Anna.
Thanks for watching.
