The Offset Podcast EP021: 'Old' Postproduction Knowledge
Episode Date: November 1, 2024

No, we're not returning to the days of 1", U-Matic, or even D1 & Digibeta (although those were pretty awesome days!). In this installment of The Offset Podcast, we're revisiting some postproduction knowledge that many of us take for granted but that is surprisingly new, or at least unclear, to many people. In this episode, we'll discuss:

- Preserving institutional knowledge of post - old vocab and techniques are still relevant
- Understanding fractional frame rates
- Drop vs non-drop timecode
- Understanding interlacing
- Interlacing to progressive, progressive to interlaced
- Transparency & alpha channels
- Composite modes and transparency in color-managed pipelines
- Recap on 3 and 4-point editing
- Inclusive & exclusive playheads
- Being obsessive about file names
- Using leader in file outputs - slates, bars/tone, etc.

If you like this episode, please subscribe and like the show wherever you find it. Also, thanks to our sponsor Flanders Scientific, and our awesome editor Stella. If you have an idea for a new episode, please visit offsetpodcast.com and use the submission button to share your thoughts.
Transcript
Old post-production knowledge.
Is it still useful?
Well, in this episode of the Offset podcast, we're going to find out.
Stay tuned.
This podcast is sponsored by Flanders Scientific,
leaders in color-accurate display solutions for professional video.
Whether you're a colorist, an editor, a DIT, or a broadcast engineer,
Flanders Scientific has a professional display solution to meet your needs.
Learn more at flanderscientific.com.
All right, guys, welcome back to another installment of The Offset Podcast.
I am Robbie Carman.
And I'm Joey D'Anna.
And Joey, today I want to talk about, hmm, probably not your favorite subject, but one of...
Literally my favorite subject.
I'm going to put it in like the top five, maybe top three subjects of your life.
And for those who are not familiar with Joey, we have not determined yet if he was actually conceived or born in a post-production facility, but it's pretty close to when he started working in a post facility. His dad was a broadcast engineer. It runs in his family. All his friends growing up were somehow involved in post-production, or their families were. So Joey has a long, long, rich history of post-production vocabulary, terms, workflows, and the more obscure the piece of hardware, the connector type, the workflow, the more jazzed he gets about it. But I'm just teasing, sort of. But, you know, actually, in thinking about this topic for today, Joey,
the motivation for talking about old post-production knowledge, right,
has to do with the fact that I have found myself interacting with a lot of people
in the past few years, I don't know, maybe four or five,
maybe a little longer,
who I just feel like the institutional knowledge of what I learned,
you know, coming up in the post-production industry,
in some shape or fashion it has been lost, you know?
I don't know if that's totally true,
Like, you say things like blacking a tape, or you say things like, you know, 3G-SDI or whatever.
And people just kind of look at you with these blank stares on their face, like, I'm not sure what you're talking about.
I had one the other day.
Which we'll talk about here in a second.
I was talking about back-timing an edit with somebody.
And they were like, what do you mean, back-timing an edit?
And I just think we've gotten so... the software is so good.
You know, everybody's drag and drop these days.
Nobody's, you know, using old CMX or Sony controllers like we were back in the day.
So anyway, on this episode, I want to get into a little bit of some of that older terminology, vocab, that kind of thing, and how all of it is still super relevant today.
But some of the terms and some of the connection to the terms might be lost.
That makes sense?
Yeah, absolutely.
And I completely agree, you know, some of the old knowledge has gone away in a way that I don't like.
I think there's a balance here between, you know, I'm an old man yelling at the clouds and telling people to get off my lawn, which I am.
Don't get me wrong.
But there are also still valid uses of this knowledge, even though we're not in a tape ecosystem anymore,
and broadcast is getting somewhat less relevant versus streaming and other platforms and things like that.
There are still things that we learned growing up from linear editing and broadcasting that I think have really stood the test of time.
And part of that is, you know, yes, the gear changes, the tools change.
We went from linear to non-linear editing.
We went from, you know, always having to do an offline and then an online, to now doing a lot of offline and online at the same time.
We went from really hardware-based solutions to software-based solutions.
We went from SDR to HDR.
So the technology is always changing.
We always want to stay up to date on that.
But there are a lot of concepts, things that apply to both workflows and the technical stuff
that, you know, kind of, in my opinion, last forever.
And some of that knowledge has been lost.
So hopefully we can shed some light on some of that stuff.
100% agreed. And, you know, I think it's important for us to say this is not a stand-on-a-soapbox and pontificate kind of episode. I actually really hope that some of this stuff... of course, there's a certain subset of our audience where this is going to be like, oh, yeah, I remember that. It's a refresher, if you will. But for another subset, particularly those who might be a little bit on the younger side, hopefully some of this can kind of get you up to speed with some of these terms and let you see the real-world application today. So, entering into the discussion here: fractional frame rates.
What I mean by fractional frame rates is like 29.97 or 23.976, what those mean.
Drop frame versus non-drop frame counting.
And notice that I said counting.
I didn't say time code necessarily.
Just said a way of counting.
And then the whole idea of interlacing.
And interlacing has some nuances that we'll discover and talk about when we get there.
But let's start on the fractional frame rate, right?
Because I think everybody, for the most part, is familiar with 24 frames per second, right? This is the rate that true film playback has been at forever, 24 frames per second. And we got there through a couple of missteps over in the early 1900s, where we standardized on 15 or 16 frames a second, 18 frames a second, whatever. Eventually we got up to 24 frames a second, mainly because of a phenomenon called persistence of vision, right, where our eyes, our brains, can no longer really detect separate frames coming through once we get to a certain speed.
And that's, you know, people think that 24 is like this magical thing or everything looks
prettier. That's just because all the pretty things that our generation has looked at have been 24.
24 is the slowest you can do it effectively and trick the brain. So not going higher was an economic decision, in general, to save expensive film. Now, television got its frame rate with a completely different methodology.
And that is that the United States runs all of their alternating current grid power at 60 cycles per second.
Okay?
England runs all theirs at 50, which is why PAL is 25 frames a second.
And NTSC, which is the United States standard for SDTV broadcast. So black and white broadcasting starts, and the TVs are all synchronized to the AC signal.
This makes it a lot easier to make the television
because it doesn't need to have any kind of retiming logic or synchronization.
It just goes, hey, when the wave of the power is going up, we know it's 1/60th of a second.
Great.
You know, it made all the signal processing and driving the CRTs and everything else
kind of something they could do.
Here's where we get to fractional frame rates.
A decision was made on a color television system.
And when they... there were a bunch of competing color television broadcast systems that came out around the same time.
When the standard NTSC color system was devised,
they found that on some older televisions,
when you add the chroma subcarrier,
because the color, like we talked about with 422,
and with some of the other stuff,
kind of layered on top of the signal, yeah.
Yeah, you have a luminance signal,
and then you also have a subcarrier
that has the color signal, they're separate.
That means the luminance signal can still be displayed.
A black and white image is still a black and white image
inside a color TV signal.
Now, what the engineers found was
that if they kept the broadcast at 30 hertz,
that color subcarrier would cause some nasty stuff
in certain legacy black and white televisions
and it would make the signal not demodulate correctly
and look either bad or unviewable.
So what they came up with was, for color TV, we're going to make the signal very, very, very slightly slower. So we went from 30.0 frames per second to 29.97, which is 30 multiplied by 1,000 divided by 1,001. So it's basically a 0.1% slowdown of the signal speed.
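For anyone who wants to check the math, here's a quick sketch (plain Python, purely for illustration) of how the fractional rates fall out of that 1000/1001 relationship:

```python
# Fractional frame rates are just whole rates slowed by a factor of 1000/1001.
for whole in (24, 30, 60):
    fractional = whole * 1000 / 1001
    print(f"{whole} -> {fractional:.5f}")
# 24 -> 23.97602  (rounded to "23.98" or "23.976" in conversation)
# 30 -> 29.97003  ("29.97")
# 60 -> 59.94006  ("59.94")
```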
Now, the reason why this is so small
is because, guess what?
It's still close enough to 60 hertz
that all of the analog circuitry
and the televisions will work.
They can still synchronize to power.
Everything's fine,
but now we're not breaking our legacy black and white TVs when we switch the broadcast to color, and we can move on with color for the rest of our lives.
Everybody was happy.
This was at the time
when television broadcasting
was almost entirely live, though.
We weren't doing a lot of...
post-production of shows.
So down the road, what happened was, when they started producing a lot more, you know, recorded shows, well, guess what?
Now one hour of time code numbers on a tape does not correspond to one hour of actual physical time
in the real world.
Right, because 30 fits into 60 seconds or 60 minutes, or whatever that denomination is, really easily and simply. With 29.97, you get a whole lot of decimals after the fact. And so if you're counting time code for every frame, well, guess what? That adds up over 30 minutes, 60 minutes. So at the end of the day, a 60-minute program is actually three or four seconds longer on the actual tape, or in the file these days, than it would be when it was actually airing in real time. And that was a nightmare scenario for people trying to schedule shows and, more importantly, commercials that pay for all this stuff.
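The arithmetic behind that "three or four seconds" is easy to check; a quick sketch, assuming non-drop counting at 29.97:

```python
# One "hour" of non-drop timecode counts 30 * 60 * 60 frame numbers,
# but those frames actually play out at 29.97 fps, not 30.
frame_numbers = 30 * 60 * 60              # 108,000 counts in 01:00:00:00
actual_fps = 30 * 1000 / 1001             # 29.97002997...
print(frame_numbers / actual_fps - 3600)  # ~3.6 seconds longer than a real hour
```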
That method of counting, if we just took 29.97 and counted every single frame, right, led to a disconnect between real time, the actual real time that we all experience in the world, and the runtime of the actual program.
Exactly.
Right.
So, how was that solved?
Drop frame time code.
On the purest level,
what drop frame time code means is that every minute we drop two frame numbers (no actual frames of video are dropped, just numbers in the count). So basically, from XX:00:59;29, the next time code is XX:01:00;02. Right? And you do that once a minute, except on the 10-minute marks. So it makes time code math an absolute nightmare for software developers, because there's all this logic that has to go into it. But the result is, if you have an hour duration in your edit and you play that out to television at 29.97, it will be an hour of real time. And this has given us a kind of long ripple effect over the years of confusing issues with time code.
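For the curious, here's a minimal sketch of that counting logic (an illustration of the standard renumbering algorithm, not production code), converting a real frame count into 29.97 drop-frame time code:

```python
def frames_to_df_timecode(frames):
    # 29.97 drop-frame: skip frame numbers 00 and 01 at the start of every
    # minute, except minutes divisible by 10. No actual frames are dropped.
    ten_min = 17982                    # real frames in 10 minutes (1800*10 - 2*9)
    one_min = 1798                     # real frames in a "dropped" minute (1800 - 2)
    d, m = divmod(frames, ten_min)
    skipped = 18 * d                   # 2 numbers * 9 minutes per 10-minute block
    if m >= 2:
        skipped += 2 * ((m - 2) // one_min)
    frames += skipped                  # renumber, then count as if it were 30 fps
    return "{:02d}:{:02d}:{:02d};{:02d}".format(
        frames // 108000, (frames // 1800) % 60, (frames // 30) % 60, frames % 30)

print(frames_to_df_timecode(107892))   # one real hour of 29.97 -> "01:00:00;00"
```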
So, by the way, that was an awesome explanation of drop frame time code and fractional frame rates. And I think where this comes into play is a couple of things, confusion-wise. Because we get back to that whole love-of-24 thing, right? Where people were like, well, hold on a second: the faster we shoot, the more frames we shoot, the more real it looks, and I don't like it. I want to go back and shoot 24 frames a second. But guess what? 24 has that same sort of similar math problem, right? So to solve that, we had to come up with a fractional number for 24 as well, right?
Yes. So that number is... it's not 23.98; 23.98 is rounded. It's really 23.976. Again, multiplied by 1,000 divided by 1,001, a 0.1% difference. And there is no such thing as drop frame 24 time code, or 23.98 time code. And this is a confusion that happens a lot too, because 23.98 still has the same problem: an hour of 24-frame-per-second time code at 23.98, or 23.976, does not equal an hour of real time.
Right.
But 23.98 isn't broadcast anywhere in the world. It's either put on streaming, online, or theatrical (although a lot of theatrical now is still true 24, because there's no real reason for them to change), or it's converted to 29.97 or 59.94 for broadcast.
So, you know, we ran into this actually earlier this week. Robbie opened up a deliverable that was a 23.98 master. And he opened it up in QuickTime Player, goes to the little About window, and has a minor heart attack, because the duration read 51 minutes, 53 seconds.
The clock was supposed to be 51:50, and it ended up reading 51:53.
And what ended up happening is, because QuickTime thinks it's clever, it told us the duration in real time, not time code time.
So the deliverable spec for the network said 51:50, right? And that's exactly what the master was in 23.98.
But QuickTime tells you the real-time duration, which was 51:53.
So, you know, it was just this confusion. Wait, why don't these numbers match?
That's why those numbers don't match.
Yeah.
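A quick worked check of that mismatch, using the numbers from the story above:

```python
# A 51:50 program at 23.976: timecode counts 24 frames per "second,"
# but the frames actually play back slightly slower than that.
tc_frames = (51 * 60 + 50) * 24           # frames counted up to TC 00:51:50:00
real_seconds = tc_frames * 1001 / 24000   # played back at 23.976 fps
print(divmod(round(real_seconds), 60))    # (51, 53) -> QuickTime's 51:53
```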
And I think for people out there who are like, well, I mean, I just deliver to the web, what does this really matter?
Well, I think you have to think about two or three things.
Number one, as Joey kind of pointed out with 23.976, one of the underlying things to consider is: where is it going down the line later on, right? If you know for a fact that it's only going to theatrical or only going to the web, shoot whole frame rates, call it a day, and be done with it, right? Shoot true 24, right?
But here's the second part of that: you don't know that for sure. It might need to be converted somewhere down the line later, and that math can get really complicated really fast, going from non-fractional into fractional and back and forth.
Two, a lot of the equipment and software and that kind of stuff that's set up these days is expecting fractional frame rates, especially in 24, right? 23.976, or 23.98, is more of a thing than true 24 for a lot of equipment, a lot of pieces of software, that kind of stuff.
Now, I want to mention one thing.
And where I think the biggest confusion around this happens is that people throw these numbers around willy-nilly without specificity.
I would say 80% of the time, probably 95% of the time, someone says 30 frames a second, they mean 29.97.
Nobody actually means true 30 ever, because nobody ever actually uses it.
But if you didn't know about this 100-year build-up of, you know, ridiculousness, someone says, oh, we need it at 30 frames a second. You're going to open up Premiere, set it to 30 frames a second. And then after you're done editing, they're going to be like, why is this failing QC everywhere? I said to do 30. I meant 29.97. And you're like, wait, what? You said 30. It says 30. Right. So watch out in your day-to-day
and look for specificity, because people will transpose 23.98, 23.976, and 24 to all mean the same thing, and they will transpose 29.97 and 30 to mean the same thing.
There has been a call from the youngsters on the internet for many years now to eliminate
fractional frame rates.
And I am the lone voice of reason in the wilderness saying no.
We have almost a hundred-year history of fractional frame rates. We have all the workflows figured out. And we have gigantic amounts of archive material that's that way. If we tried tomorrow to make fractional frame rates go away, because it would be possible in the digital world, it would be a nightmare, because, like Robbie said, so much stuff is built around these defaults that you get into edge cases. Even true 30 frames a second is a very rare use case that most software manufacturers haven't really tested for. It would be a nightmare. So I think it's just important to understand all this stuff,
but I don't think we need to go reinventing the entire world to get rid of it. Well, one thing that you just said also gets us into the last part of this equation, the interlacing part of the discussion. I do believe you're right that people use these terms as kind of a catch-all. The other thing I will say that is pretty interesting is that oftentimes when people put those numbers down (we'll talk about file names later), they're not actually referring to frames per second, right? They could be referring to fields per second, not frames per second.
So I'll give you an example.
I label all of my interlaced files, which we'll talk about next, something like 1080i 5994, right? And I've had people say to me, whoa, whoa, whoa. This is not a 60-frame-a-second show. This is, you know, 29.97. I'm like, yeah, yeah, I know. And they're like, then why did you label it that way? I was like, well, because the "i" in the file name stands for interlaced, and the 5994 is not frames per second, it's fields per second. You take 59.94 and divide by two, because you have two fields per frame in the file, and guess what you get? You get 29.97. So, Joey, what the hell is interlacing? Why do we have it?
And why is it the worst thing ever? It's not the worst thing ever. I know, I know, I know.
It's a great thing for its design goals. Transmission, exactly. Interlacing is this.
Interlacing is taking a single frame of an image and splitting it into every other line.
Now, that means you've got line one, three, five, and then line two, four, six.
Odd and even fields.
Odd and even fields.
Yep.
And you broadcast one field first, so half of the vertical image.
Then the second field, the second half of the vertical image. So what you've essentially done is, in one frame, you've split it into two fields. And at the end of the day, for a discrete unit of time, one frame or more, you're not actually... you know, people think that interlacing reduces resolution.
It technically doesn't. What it does is it converts basically spatial resolution to temporal
resolution and vice versa. So for any given resolution, say 1920 by 1080, if you split it in half
into two interlaced fields at 30 frames a second, you get 59.94. See, I just did it.
You just did it. If you split that up, now you have 59.94i, so you've got 59.94 fields per second. That's easier to transmit because you're transmitting in smaller chunks.
But for each chunk, you have sacrificed 50% vertical spatial resolution, so a little bit less
detail and you have increased your temporal resolution so you actually get a higher motion frame rate.
Now, in the old days, you had what I refer to as true interlaced, where you had either analog or digital cameras recording in interlaced as well. So they would record a field, then record the second field. And when you looked at those fields separately, they would not assemble together to make one static frame. They would assemble together to make one frame of time, but the image would actually move between the fields. And, you know, for some people, this looks great. It looks like 60 frames a second, because time-unit-wise, it is, right? You get smoother motion with 59.94i, and that's why it, you know, worked really well for broadcasters at the time. But what people found is they wanted kind of that 23.98, more steppy, look.
So what they started doing... and another reason why, not to go a little crazy here, but another reason why interlacing works is because CRTs can scan interlaced video very easily, just because of the way they work. And fully digital displays like LCDs do a very bad job showing interlacing; they physically can't do it. So there's all kinds of software that goes into showing interlaced images on an LCD or on an OLED or something like that.
So these days, most of the time, when you've got something that is an interlaced file,
you are actually taking the same frame with no motion between the fields,
just splitting it in two fields and then playing it back.
In those cases, it's a completely reversible conversion, right?
You can take those two fields, put them back together in one frame,
and you don't lose anything.
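Here's a tiny sketch of that reversible split (plain Python on lists of scan lines, just to illustrate the idea):

```python
# Split a progressive frame into two fields, then weave them back together.
# With no motion between the fields, the round trip is pixel-for-pixel lossless.
def split_fields(frame):                # frame: a list of scan lines, top first
    return frame[0::2], frame[1::2]     # field one (lines 1,3,5...), field two

def weave(field_one, field_two):
    frame = []
    for a, b in zip(field_one, field_two):
        frame += [a, b]
    return frame

frame = [[row] * 1920 for row in range(1080)]   # stand-in for a 1080-line image
assert weave(*split_fields(frame)) == frame      # nothing was lost
```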
Well, I think one of the reasons that what you just said is really dramatically important
is because where does interlacing come into play?
Well, for most people, it's going to be not in the pipeline.
It's going to be a deliverable that's required of them specifically for broadcast, right? Because interlacing, the way that it works, as you just eloquently described, can lead to problems. Interlacing, depending on how it was introduced in the pipeline, can cause some motion artifacts, right?
Like if you take something and you interlace it incorrectly, perhaps, you can get some tearing, some banding, some things of that nature that people object to.
And you specifically see this when you're converting files to and from interlacing a lot, right?
You can see that.
Yeah.
So I see this all the time.
And it's a huge pet peeve of mine.
And I think this is probably the best example of the old knowledge being very valid today.
Like I said, a progressive frame can be converted to two interlaced fields and then back to a progressive frame again with zero loss of data.
It will be pixel for pixel perfect.
But anything that has motion between the fields can never be converted to a single progressive frame because you're trying to convert two discrete captures in time to one discrete capture in time.
And where this is relevant today, more than anything, is when you look at things that are shown online. You look at documentaries on streaming.
You look at anything with archive footage from television,
you will see these horizontal jittery lines.
Right, right, right.
And the kids these days think that's just how TV used to look.
That's what interlacing is.
No, what you're looking at is you're looking at two fields,
two discrete moments in time,
overlaid on top of each other,
because nobody thought, when we were converting this broadcast archive material for our new 23.98 or even 29.97 progressive documentary,
that's going to master progressive,
nobody was thinking about how we
convert that interlaced material.
And you see it
all the time and it looks horrible
every time and it absolutely
drives me nuts, because if you talk about archive footage, that footage was never presented that way originally.
And there's a couple of ways you can kind of get around that. You can either take both fields and resample them only vertically, which is kind of my preferred way to do it. In fact, I've written a DCTL, actually a Fusion macro, that does that very well. But you could also just take one field and double it. You lose some resolution that way, but you don't get the tearing. Either way, if you start seeing that tearing in an image, that's because you've converted from interlaced to progressive somewhere along the pipeline incorrectly.
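As a rough illustration of the second fix Joey mentions, here's a minimal sketch (not his actual DCTL or Fusion macro):

```python
# Keep one field and repeat each scan line. You give up half the vertical
# resolution, but a single field has no inter-field motion, so no tearing.
def line_double(field):               # field: a list of scan lines
    doubled = []
    for line in field:
        doubled += [line, line]       # each field line fills two frame lines
    return doubled
```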
Sometimes it happens before it ever got to you, and there's
nothing you can do about it except try to fix it.
But no amount of
sharpening or noise reduction or
traditional tools is going to
eliminate that unless you really start to think about how the interlacing worked and what's
actually going on there. Yeah. So that was my third thing that you kind of just alluded to is that
paying attention to that conversion is important. And you actually do have a little bit of a choice
on how that conversion is done. Sometimes when you do this conversion, you can either, as you said, double up on the fields to get a whole progressive scanned image, or you can even blend the fields. A lot of people object to that, even though it's technically a fine way of doing it, because with blended fields you'll see, at a cut point for example, the fields kind of blend together. I don't particularly like that look, but it's a technically fine way of doing it. But your greater point is just: pay attention to that conversion, because once that's baked in, fixing it after the fact is just a band-aid on a bullet hole. And, you know, on the overall subject of the episode, why does this old
information matter in today's world, that's, I think, the best example is that people will look
at those jaggy images and think that's just how old TV looked, so it's okay to put it in my
documentary as is. No, that's not how old TV looked. We were not looking at CRT TVs with
interlaced broadcasting that had jaggies. It actually looked really good when displayed properly.
So be conscious of that. Documentary editors and online editors and colorists, watch for those jaggies
and fix them when you see them because it's just not right.
And I think a lot of people don't realize that.
Yeah.
And in general, I think, again, applicable for today: stay progressive as long as you can in the pipeline. If you have a deliverable that requires interlacing, don't do that at the start. Don't shoot that way. Don't embed interlacing in the pipeline. Make it part of your deliverable. Going from a progressive to an interlaced image, as you detailed, is super easy to do: just divide it in half into the two different fields.
And there you go.
All right, moving on, Joey.
A couple of things we want to talk about.
This idea of transparency.
Transparency is pretty easy these days.
When we talk to people about transparency,
you're going to kind of have two flavors of it coming out of most systems,
depending on the file format, of course,
because not every file format can support transparency.
And just to be clear, that's the RGB information plus a separate channel, which is the transparency, also known as an alpha channel. And that's going to come in two flavors. It doesn't really matter which one you use, just as long as you know which one it is and the rest of the pipeline is set up to use that. So we have straight alpha channels, and we have pre-multiplied alpha channels.
What's the difference between those, Joey?
So basically, the way transparency has always worked, let's go back in time again. Like you said, there's your fill, which is the actual video. Then there's a matte, or an alpha channel, that cuts that video and tells whatever system you're on what to make transparent and what not to make transparent. That's a black and white, or monochrome, image, right? So let's say you had a letter. That letter would be drawn out, and then the edges would actually be scaled outwards a little bit so you didn't have a bad edge, and then the transparency signal would cut that letter out. That's what's called a straight alpha, where the edges bleed over, and it's going to look really weird in your viewer
until you put it on top of other footage.
And that's kind of how everything was, transparency-wise, in the analog world forever. In the modern digital world,
we have what's called a pre-multiplied alpha,
which came out of digital compositing,
which means you've taken that black and white alpha image
and multiplied it with the fill image,
giving you essentially a clean looking fill.
If you open it up in QuickTime player,
it's going to look like your graphic,
but on a black background.
Yep.
Right?
But it will also key cleanly over top of your footage,
but only if the software knows that it's a pre-multiplied alpha,
because mathematically it has to handle it a little bit different.
And how this will come into use practically is,
let's say you bring in a graphic,
and you put it over some footage,
and you see like almost a little blackish edge
around the edges
or you see
where it should be a smooth fade off, it might go to gray.
You know, you see weirdness around the edge.
You're like, well, this is keying, fine.
You know, why do the edges look weird?
Maybe it's a problem with the graphic.
No, that's because it's trying to treat a pre-multiplied alpha as if it was a straight alpha. So you just need to change that file in your settings to be pre-multiplied.
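The underlying math is simple enough to sketch in a few lines (values normalized 0.0 to 1.0; an illustration of the general compositing math, not any particular app's implementation):

```python
# "Over" compositing with the two alpha flavors.
def over_straight(fill, alpha, bg):
    # Straight: the fill carries full values everywhere; alpha scales it here.
    return fill * alpha + bg * (1 - alpha)

def over_premultiplied(fill_premult, alpha, bg):
    # Pre-multiplied: the fill was already multiplied by alpha upstream.
    return fill_premult + bg * (1 - alpha)

# Treat a pre-multiplied fill as if it were straight and you multiply by
# alpha twice, which darkens every soft edge: the dark fringe described above.
```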
And that's one of those things where communication, communication, communication, and specificity are so important, because we've had shows just recently where they had an entire team of designers outputting graphics for us. And we would get them sometimes pre-multiplied, sometimes straight.
And it was kind of a guessing game each one.
So we had to go in and check every graphic.
This is something that should be standardized and you should know what to look for
when you're putting shows together that have keyable graphics.
Well, you mentioned something I think is worth mentioning too, which is that you mentioned the words key and fill.
And I remember being a lowly assistant back in the day,
having to do graphics reels for a deliverable for a network.
And in those graphics reels, what we had to do
was we had to put the fill and the key signal, the matte,
next to each other, break them up with a slate or whatever
or with black on a tape and just string that out.
So for every graphic, you had the fill component,
which was the RGB information.
And then you had the matte component, which was that monochrome, that black and white, image.
And it's really interesting because there are certain compositing workflows, even to this
day that people do inside of Resolve, for example, whatever.
Like maybe you have a rotoscope artist cut a matte for you in their tools, where they can be very precise about positioning, sky replacement, whatever it may be. Well, you can take a matte and have Resolve, or whatever tool you're using, do that math correctly and go, oh, this is the matte channel. I know what to do with that, and use it to cut this RGB information out, so then you can have essentially a moving matte or whatever.
So it's still very valid today.
And I do think you're right.
People get confused.
It was another good explanation by you of that pre-multiplied and straight thing. I think the confusion lies where you just said it: people just don't label those kinds of things. And it can be a little bit of trial and error trying to figure out what's what.
But I think that's a step that you should always do in every show is just verify that your
graphics, your overlays, lower thirds, etc., are using the right type of alpha channel because it can get
really confusing.
Yeah.
And I want to go from ancient times all the way to the most modern times, which is: how do you use these alpha channels, whether they be mattes and fills or whether they be pre-multiplied or not pre-multiplied, whatever? How do we deal with that in a modern HDR color-managed workflow?
Because that is a question that I get all the time.
and there's some major confusion around it
because a lot of people,
we actually just ran into this recently,
a good friend of mine,
a friend of the show,
called me up and we had some issues
trying to troubleshoot his color management pipeline
because his graphics were looking quite weird.
And the result of this is
if you do a color space transform, for example, in Resolve, it's going to color space transform just the RGB, not the alpha channel. So when it blends together, it will have the wrong transparency and the wrong levels.
Same is true.
Same is true with the math of composite modes too, right?
Those composite modes can...
Oh, yeah.
That's why composite modes just don't really work in color.
Right.
They can ignore or at worst mess with the alpha channel information.
So if you're working scene referred,
which we pretty much always advocate that you do,
the bad news is...
Alpha channels don't work in scene-referred workflows.
They just don't.
You can convert... even if you convert the alpha channel with the right input transform, unless you're grading the alpha channel with the exact same kind of levels changes that you're doing on your background, they're not going to line up right, and it's not going to key correctly.
So the hard thing with this is it gets worse and worse, the more complex the graphic gets, right?
If it's just a simple text overlay with a hard edge, you'd never know the difference. You color space transform the fill, it'll look totally fine. And you'll think, wow, yeah, I can color manage these keyable graphics. But really advanced gradation and blending and stuff like that in an alpha channel graphic is not going to color manage correctly in a scene-referred workflow. So in those cases,
we recommend doing the graphics in a separate pass in your display space.
Whether that's in the timeline, using different ways of doing your output transform, or in a nested timeline, which is a great way to do it as well. But doing that composite in display space is actually one of the few areas where I think working in display space is appropriate, because that's where that alpha channel was designed to be made.
Now, I would love to see software figure out a really robust color management
solution for alpha channels, but it's a pretty hard mathematical problem to solve.
So I wouldn't hold my breath for that.
A few last things before we wrap up here that I think are also in the same vein, obviously. So, a little bit for our editor friends, and then we'll get into some packaging and delivery things to wrap this up. Now, I don't have as many old-man, get-off-my-lawn moments as you do. But one of the things that really frustrates me, from looking at how people work these days, but also just seeing people talk about it online, is the pervasiveness of drag-and-drop editing styles and where that gets people into trouble. And I'm not here to judge drag and drop; I drag and drop just as much as everybody else. But if we're on the theme of this episode, I think it's important to understand the origins of what gave you the ability to drag and drop and how that really works.
So anytime we're going to do an edit, right, it basically consists of a few different points, right?
And those points can either be in the source or they can be in the timeline, right? But to make an edit happen, you either have to have two points in the source and one point in the timeline,
or you have to have the opposite, two points in the timeline, one point in the source, right?
So I think for most people that two points in the timeline, knowing where you want to start,
knowing where you want to end, and knowing where you want to go in or go out on the timeline
is probably the more common method. But there's oftentimes where you're like, nope, I've got to go right here for this duration, and you figure out the in or out point. So doing the opposite of what I just said, two points in the timeline, one in the clip, is another way. By the way, for those of you who aren't familiar with those terms, that kind of thing is often called back-timing an edit, or forward-timing an edit, where you know where you have to be out, and then just let the computer figure out where it's going to go in or go out, right?
But let's talk about this for a second.
So what I just described is three-point editing,
three points to have any given thing happen.
And back, if you can picture this, with Joey with long, luxurious hair, listening to some, you know, Poison or Mötley Crüe or something like that,
Doing these edits, Joey, you had three points to make anything happen, right?
Yep.
Okay, what happened when you had four points?
What does that do for you?
This is where you would have things that don't line up, right?
Right.
They call it today a fit to fill, right?
And it was a quick way of saying, okay, I've got this two-second part of my source and a one-second opening in my record timeline: stretch that thing.
Yep, yep.
So a fit to fill, these days, is basically an automatic speed change, right?
You're basically saying, hey, I've got this two seconds.
I need to fit it into a one-second hole, so speed it up to 200% to make that two seconds fit into the one second.
And that can be very useful, depending on what you're doing, without having to calculate the math or experiment.
Okay, 97.03%, 96.49%...
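The speed math the NLE does for you there is just a ratio; a quick sketch:

```python
# Fit-to-fill: how fast must the marked source play to fill the timeline gap?
source_duration = 2.0      # seconds between the source in and out points
gap_duration = 1.0         # seconds between the timeline in and out points
speed = source_duration / gap_duration
print(f"{speed:.2%}")      # 200.00%, the automatic speed change
```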
So, three-point versus four-point. And I just bring it up because I find myself, not that I do a whole lot of editing, but when I do, I just find myself being a whole lot more precise when I'm marking in, marking out, overwrite, insert, marking out, overwrite, insert, rather than this whole, you know, wrist gymnastics and hand gymnastics of dragging down the timeline, waiting for it to pop open... oh crap, you know, the arrow was facing right instead of down, I'm doing an insert, not an overwrite, and all those kinds of things. I think a lot of that can be avoided with the basics of three-point and four-point editing, by using the keyboard, marking ins and outs.
I mean, we make fun of Blackmagic sometimes, or I make fun of Blackmagic sometimes, because Grant seemingly has the same love affair with old-school video hardware that you do. But one of the best things they ever came out with was that editor keyboard, because I think it got a lot of people back to the tactile approach of editorial versus the drag-and-drop approach of editorial, which I think paid big dividends.
Yeah, and I will go even dramatically more dogmatic than you are.
I think if you are dragging and dropping in your timeline,
you're almost definitely doing it wrong,
and you need to learn to stop doing that.
The reason I say that is simply because of precision and the likelihood of mistakes.
When you mark it in, mark it out,
and then mark a third point or a fourth point or whatever,
You can look at your source, look at your timeline, and see exactly what it's going to do,
and then you put that edit in, and everything's great.
Or if you don't like it, you undo, and you can try things around.
The reason I do this is because if you get into this easy-peasy,
I'm going to drag and drop and wave things all around with the mouse kind of way of working,
yes, it can feel more interactive, but you're going to mess up.
Especially if you have an hour-long timeline with a bunch of different things,
you're going to drag too long of a clip over top of a shot and not notice it, and then come back and that shot will be missing, and you're going to have to go back and do detective work, or you're going to be off by one frame, or it's going to snap to something you didn't like.
And that's what I want to talk about: the precision of the timing. Playhead positioning, whether it's inclusive or exclusive, is something that in drag-and-drop editing you probably never pay attention to, but it creates problems. And what I mean by that is: for your playhead position on the timeline, is the duration that is calculated inclusive of the frame that the playhead is parked on? Or is it not inclusive? Knowing that and how that works is something people never consider in drag and drop, because they're just like, oh, I've got five seconds. The result is we get acts that are one or two frames too long all the time, right? We get commercial breaks that are supposed to be five-second commercial breaks, and they're 5:01 or they're 4:27 or something like that, because I think people are just doing this drag and drop and not being precise with that math.
See, and I think you're giving the younger generation a little too much credit here, because there is a clear right and wrong here. In points are inclusive. Out points are exclusive. That is how it's always been, and how it always will be. This means if you mark in at one hour and then mark out at one hour and 10 seconds, that is 10 seconds of time, and you are not including the frame that lands at 10 seconds in your timeline
if you were to put the playhead there.
And if you zoom all the way in, in Resolve, and mark an in and an out, you can see visually how that works.
I think a lot of people don't think about that, like you said.
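A worked check of that counting convention (24 fps chosen just for easy math):

```python
# In points are inclusive, out points are exclusive.
fps = 24
tc_in = 1 * 3600 * fps               # mark in at 01:00:00:00
tc_out = (3600 + 10) * fps           # mark out at 01:00:10:00
duration = tc_out - tc_in            # the out frame itself is not counted
print(duration, duration / fps)      # 240 frames -> exactly 10.0 seconds
```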
No, I think it's a great point.
And that precision is really what it comes down to.
Speaking of precision,
the last thing I want to talk about
is this idea of file naming.
I'm a little more precise, I think, about this
than you are just because I, more than a little.
Yeah, burned a lot by imprecise file names, right?
And so some of my file names are kind of ridiculous, to be honest with you.
They have like every technical specification you could possibly think about in the file name.
But what I'm really talking about here is, again, going back to that same thing we're talking about with transparency and stuff,
is descriptive, well-understood file names, right?
Calling a file, like, you know, name-of-the-timeline underscore graded doesn't do a whole lot for me, because I can't just look at that file and go, oh, well, that's the HD one versus the UHD one, or that's the 29.97 version versus the 23.98 or whatever, right? And yeah, I know, I'm attacking you a little bit here, but it's all right.
The other part about this is that I do not want to have to open up a file and do a discovery on some of these basics.
I want to just be able to glance at it.
And the same thing is true, by the way,
with people naming files stupid things like final, final, final. I mean, there's a meme there. Never name anything final. There are a million memes about that.
Here are the things that I think all file names should have.
I think that all the file names should have the name of the project, right?
Potentially the name of the client,
but at least the name of the timeline or project or whatever.
I think they all should have the resolution size
and or frame rate in the file name.
So, you know, UHD at 23.98 or whatever.
I think they should have the codec name.
I think they should have whether they're textless,
and I think they should have the date.
You can add more or less upon that, right?
If you want to do version 2, version 3,
if you want to put in some other information about, oh, this is Rec 709 or whatever. But knowing the resolution, the frame rate, and the codec just lets me be able to glance at something really quick and go, oh, yeah, that's the version.
Without having to open it up in QuickTime or some other player, or import it into Resolve, I can just quickly know what it is.
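As a concrete illustration, a hypothetical name following Robbie's checklist might be assembled like this (the pattern and field order here are assumptions, not a standard; agree on your own scheme and stick to it):

```python
# Build a glance-readable deliverable name: project, timeline, resolution,
# frame rate, codec, texted/textless, version, date.
def deliverable_name(project, timeline, res, rate, codec, textless, ver, date):
    text = "TEXTLESS" if textless else "TEXTED"
    return f"{project}_{timeline}_{res}_{rate}_{codec}_{text}_v{ver:02d}_{date}.mov"

print(deliverable_name("ClientX", "EP021", "UHD", "2398", "ProResHQ",
                       False, 3, "20241101"))
# ClientX_EP021_UHD_2398_ProResHQ_TEXTED_v03_20241101.mov
```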
Yeah, Robbie's completely right here. I am, like, kind of an anarchist with file names, in a very bad way, for one particular reason,
and I'm going to fix this, mark my words,
but I have a firm belief
that your timeline name
in your NLE or your finishing system
should match the file output.
That way, if a client says,
hey, it's this file,
I can cross-reference
exactly what sequence it was
in resolve.
I'm very bad about naming things in Resolve. And then when I go to render, I just say use timeline name. I never use a custom name in Resolve, because I want it to match the timeline.
And that comes out of kind of my origin story of working in promos because it would just be the name of the promo and like version 11, version 12, version 13, right?
So when the client says, hey, I want that shot from version 11, I've got the timeline for version 11.
And to be clear, Joey, in those workflows where specificity doesn't really gain you much, because it's internal or everybody is clued into a standard set of naming, I'm fine with that. Like, that doesn't bother me.
What I'm talking about is more, like, if I'm handing this file over to a distributor, another artist, or something like that, or I'm getting files from somebody, I want it to be very clear what it is. And like you were talking about with the alpha channel thing earlier, it would be great if a motion graphics designer could just put "straight alpha" or "pre-multiplied" in the file name. Like, that would solve a crapload of problems, you know? Yeah. And like I said, I know I'm bad about this. This is something I want to
try to fix in my workflow and in my head. I just need to figure out how to do it in such a way that it
works with how I organize my projects and I haven't done that yet.
And I think Robbie is very good at this. His file names are legit. They tell you everything
you need to know and everything I said earlier about how, you know, specificity and, you know,
detail is so important. It definitely applies to file names. I'm not great at that.
Actually, we skipped one last thing before the file name part, because that's final output. And by the way, I don't care what the file naming scheme is. Some of that specificity, if you're working with a group, is just like anything else, same thing with, like, keywords, right? Just get together as a team, figure out how you want to name things, standardize that, and stick to it. That's really all that matters.
So, one thing we didn't mention before the output stage that I think is a little legacy, but still important, and that is the leader information: bars and tone, countdown, slates, that kind of stuff.
Right now, different distributors are going to kind of ask for different things.
And to a large degree, some of this doesn't serve the same purpose that it used to serve, right? So, for example, putting bars and tone down on a tape used to be a calibration step for whoever was getting that tape, right? To make sure that their monitor aligned, phase- and color-wise, to the signal. Same thing with the tone: okay, yeah, we have that 1K tone at, you know, minus 20 dBFS, we can turn our speakers to match that. Nobody's really doing that anymore. It's more of just a legacy thing to keep bars and tone on there. I don't really care so much if a file doesn't have bars and tone. But especially on short form, where there's a lot of cut-downs, okay, there's a :60, there's a :15, there's a :10, there's a :06, it's, you know, tomorrow night, Tuesday night, Thursday night, whatever,
slates will never go away for me.
Like, slates are something that I think especially,
that's where you can share additional information
about the file, audio configuration,
alpha channel configuration or whatever.
But it's also where you can really clearly go,
hey, from here to here, this is what this is.
From here to here, this is what this is.
And especially if you're doing multi-deliverable parts,
you know, as I said, the spots,
slate information, I think, is still vital.
Yeah, the only thing I'll say about that is,
obviously do whatever the distributor or deliverable requirements ask you to do.
But if they ask you to do a slate, make darn sure everything on that slate is correct and specific
because you get into really confusing things sometimes. Especially, you know, a lot of people, like we said early on, will transpose 30 and 29.97 as the same thing.
You want to be specific on that slate with the accurate, correct information.
And as far as bars and tone go, I honestly like it when distributors and deliverable specs ask for a little bit of bars and tone, because, no, we're not calibrating our monitor to it, but it real quickly lets me see, hey, what color space is this in? Is it video levels or data levels? It's a good gut check. But yeah, it's not completely necessary. But yes, when those things are in place, don't let it be an afterthought. Make sure it is right. Yeah, and related
to that, the time code part of that is important too. I think this is probably pretty well known, but we should just state it. Almost every deliverable here in the U.S. is going to have the actual first frame of picture and sound right at one hour (in the PAL world that's usually at 10 hours, right), and then whatever time code running from there. The stuff we're talking about, bars and tone, slate, countdown, et cetera, that's all pre that hour. So if you're at hour one for your first frame of picture, all that stuff is occurring in the 58, 59 range. You know, you might start at 58:30 and have a minute of bars and tone, whatever.
This is what we did when we were young. This is what we got paid to do at night in the machine room: go black and stripe tapes with time code starting at, you know, 58:30 or whatever, 58 minutes, and lay a control track down the whole tape, right? But the same things still apply. So when you're setting up your timeline, just adjust your starting time code, you know, 58:30 or 59:30 or whatever it is, and then line up things in sequence to that time.
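As a sketch, a leader laid out that way might look something like this (the times are illustrative; always follow the actual deliverable spec):

```python
# A typical pre-show leader ahead of a one-hour program start.
leader = [
    ("00:58:30:00", "bars and tone"),
    ("00:59:30:00", "slate"),
    ("00:59:50:00", "countdown / black"),
    ("01:00:00:00", "first frame of picture and sound"),
]
for timecode, item in leader:
    print(timecode, item)
```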
Yeah, and it's a question we get all the time from younger or newer editors: why does this timeline default to one hour? Why shouldn't it just be zero?
Because when you need to put a slate or a leader or bars and tone or anything on something, you still want your show to start at one hour even. So then we go before one hour; there's no before zero. But it's also an easy location thing, right? If you're talking about long form, like a film or a long show, you can say, oh, that mistake is in hour two, right? Oh-two, and be able to go right there. They don't mean the first hour of the film, they mean the second hour of the film. And that was germane for tape reasons, you know. Oftentimes, real long-play tapes, like the 120-minute tapes, could be hard to find. So you might actually separate a show over a couple of 40-minute or 50-minute tapes, depending on the frame rate, and do it that way as well. All right, Joey, good stuff.
Hopefully we haven't bored you to death with some of these get-off-my-lawn old man terms. But I think you can see how, you know, some of this information, yes, some of it is Jeopardy knowledge you can impress your friends with, but a lot of it is still really germane to the way
that we work these days.
So keep it in mind.
If you have any questions,
feel free to let us know in comments
wherever you're watching this or listening to this.
And to that end, of course,
The Offset Podcast can be found on every major
podcasting platform.
Please tell your friends and colleagues.
We're also on YouTube.
If you find us, please like and subscribe
wherever you find the show.
You can always follow us on the social media as well.
We're on Facebook and Instagram.
Always feel free to follow us and ask us questions there.
And you can always go over to offsetpodcast.com
and submit a question.
if you'd like for a future episode.
So, Joey, good stuff.
I had fun reminiscing about some of the stuff.
It's a good reminder on some of it as well.
And I always appreciate your very knowledgeable explanation.
So for The Offset Podcast, I'm Robbie Carman.
And I'm Joey D'Anna.
Thanks for listening.
