The Offset Podcast EP049: Dealing With Archival Part 2
Episode Date: February 16, 2026

Please give a warm welcome to our newest sponsor Conform.Tools! Conform Tools allows you to convert timelines between Premiere, Resolve, and other NLEs while automatically solving all those... tedious issues that can add significant time to your workflow. With a growing toolbox of features, you can avoid time-consuming trim and transfer issues, and securely send large media files to collaborators at a fraction of the size, in minutes instead of hours. Built by post professionals, Conform Tools helps editors, colorists, and conform artists move faster and finish stronger. Check out https://conform.tools for more info.

--------------

In this episode, we're continuing our discussion on dealing with archival and stock sources. In part 1 we explored issues with film sources. In this episode we're exploring issues with video-originated sources. It's always a bit shocking some of the issues that even big productions just accept as 'inherent to source'. While that might sometimes be true, there are lots of ways to fix common issues if you know what to look for. Some of the specifics we discuss include:

- Dealing with interlacing
- Blanking & edges
- Pixel Aspect Ratio (P.A.R.) issues
- Resolution

Check out www.offsetpodcast.com for our entire library of episodes + some of the additional assets mentioned in this episode that are available for download. Be sure to like and subscribe to the podcast wherever you found it, and check out our growing library of episodes. If you like the podcast, it'd mean the world to us if you'd consider supporting the show by buying us a cup of virtual coffee: https://buymeacoffee.com/theoffsetpodcast

See you in about two weeks for a new episode.
Transcript
Hey everybody, welcome back to the Offset Podcast, and today we're continuing on with part two of our discussion on dealing with archival sources.
Stay tuned.
Support for this episode comes from Flanders Scientific's XMP 551 and XMP 651, the flagship QD-OLED reference monitors that are reshaping modern grading rooms.
Their large format and industry-leading viewing angles let clients see accurate images from anywhere in the room, and their true HDR and SDR reference performance makes single-monitor room layouts possible.
When everyone relies on the same display, you avoid the headache of explaining why the client and grading monitors don't match. Learn more at flanderscientific.com.
Hey, everybody, welcome back to the Offset Podcast, and today we're continuing on with part two
of our two-part series on dealing with archival sources. In part one, we talked about sort of
the big picture of archival, as well as dealing with film archival.
But in this episode, we're going to specifically talk about video archival and some of the challenges that pop up there.
Now, before we continue on, like usual, let's just do some quick housekeeping.
As a reminder, our audience survey is still open.
You can find that at this link right here.
If you have five or ten minutes to give us some feedback about how the show is treating you, we'd really appreciate that.
Your feedback is going to directly help us shape how the podcast shapes up in 2026.
So we appreciate anything you can do there.
Of course, you can always follow us on social media on Facebook or Instagram.
Just search for the Offset Podcast.
And then you can also head over to offsetpodcast.com for our complete library, as well as show notes.
And in part one, we actually linked to some cool tools and utilities that Joey actually created himself to deal with some of the challenges with interlacing and stuff like that.
So be sure to check out offsetpodcast.com as well.
Now, Joey, as I said, we are going to talk a little bit more about the video side of things rather than the film archival side.
And of course, there's some overlap here.
But I want to tell the viewers at home that I have been on the receiving end of Joey's
pontifications on interlacing for almost 15, 20 years, whatever it's been.
So this is something I just want to preempt and say that Joey is very passionate about.
But let us just begin with the idea of interlacing.
It's amazing to me in 2025, or 2026, I should say now, how quickly people forgot about interlacing as a thing.
It has somehow become, it's in the area of, like, nostalgia.
It's a vibe, as my kids would say, right?
It's like, oh, we want to make something look archival, old.
Let's add scan lines and make it interlaced-looking or whatever.
And I'm like, guys, we did not watch interlaced TV going, wow, I love these scan lines and all of these jagged edges.
They look so good.
Like, it didn't happen then.
What makes people think that that's the way people watch TV and consume media, you know, even 20 years ago?
It didn't happen that way, right?
Yeah.
This is my personal documentary hill to die on.
And I'll say this: I've done extensive research. I've interviewed editors, I've interviewed other colorists of various generations to figure out what their understanding level of these issues is, and I've come up with some answers, I think.
But let's start with the beginning. What is interlacing?
Well, for the first 100 years of television, every frame of the image was made up of two fields: every other line would be displayed at a time.
So the even lines would be displayed first, and then the odd lines would be displayed after.
This was a way of optimizing the signal
to have the best bandwidth efficiency for broadcast.
Okay, so this was happening in analog television
from the black and white days.
Okay, that's how far back this concept goes.
Now, at the time, we were using displays called cathode ray tubes that would draw each line one at a time. Essentially, imagine a laser drawing the lines one by one.
It was an electron beam hitting glowing phosphors, and they would glow for a little bit and then fade down gradually. So it's going: beam, line one; beam, line three; beam, line five.
Then we get to the bottom, it goes back up to the top. Beam, line two. Beam, line four. Okay.
Now, these pixels, if you will, they weren't actually pixels, because they didn't have discrete boundaries, and they were not instant on and off.
They would slowly glow up and then slowly glow down.
If you look at high speed footage of a CRT, you'll see exactly what I mean.
This meant that, combined with the persistence of vision in our brain, we would put together all of the lines as one image and both fields as one contiguous image.
So when we watched interlaced footage,
it looked smooth on the displays of the time,
which were CRTs.
That's point number one.
Point number two is today,
when we deal with interlaced footage most of the time,
it's a deliverable that's derived from a progressive source.
So we take one image, split it into two fields for the file
or the transmission
and send it on its way.
Okay?
That's an easily reversible operation.
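[Editor's note: a minimal NumPy sketch of that split-and-weave round trip, as an illustration rather than any particular NLE's implementation. With a progressive source the operation is lossless; with true interlaced capture, the two fields come from different moments in time, which is exactly the trouble described next.]

```python
import numpy as np

def split_fields(frame):
    """Split a frame into its two fields: the even lines and the odd lines."""
    return frame[0::2], frame[1::2]

def weave_fields(even, odd):
    """Interleave two fields back into one full frame."""
    frame = np.empty((even.shape[0] + odd.shape[0],) + even.shape[1:], dtype=even.dtype)
    frame[0::2] = even
    frame[1::2] = odd
    return frame

# Fake 480-line SD frame: splitting and re-weaving is a perfect round trip.
frame = np.random.randint(0, 256, (480, 720), dtype=np.uint8)
even, odd = split_fields(frame)
assert np.array_equal(weave_fields(even, odd), frame)
```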
However, for the first 100 years of television, and basically up until the mid-2000s, when The Phantom Menace came out and Sony invented 24p on a television camera, the HDW-F900.
A little earlier than that, early 2000s, yeah.
Yeah, The Phantom Menace literally was the demarcation line for 24 frames a second in video.
Okay? Everything before that, there was almost always what we refer to as inter-field motion.
In a 59.94i signal, it's not 30 frames a second, it's 60 fields per second, which means an object can move between field one and field two.
Field one and field two are two distinct moments in time, whereas now, with one progressive frame, that frame is one distinct moment in time.
So what this means is, if you take an interlaced image and you put it on a progressive display, and you put field one and field two on top of each other, they might not line up horizontally, because things are moving.
That's where you get those little jaggy sideways lines.
We call that baked-in interlacing, because we're taking two discrete moments in time and squishing them together as one, and it looks like garbage. It looks like utter, complete garbage. And I have talked to so many young people in our industry, and I've shown them images both ways. And a vast majority of them that didn't grow up on CRTs and interlacing, they're not leaving that in because they don't want to fix it; they're not leaving it in these shows out of laziness or malice or anything bad. And like I said, I've seen this in the highest-end documentaries on Netflix, on Amazon. This is not a small problem, in my opinion. They think that because they opened up the file on their computer and this is what it looks like, that this is what the show used to look like, or the image used to look like. I'm here to tell you all that TV for the past hundred years did not have horizontal jaggy lines. Okay. Those need to be fixed if at all possible, and now we can talk about how we can fix that in a modern...
Well, I want to add just a few other bits of context here, because that's all really good stuff. The first one being that you said something earlier about, hey, it's the even and the odd fields, right? This is a concept known as field order that I think a lot of people have forgotten about. And field order is important because if you have mismatched or incorrect field order, bad things happen. It's kind of like time travel a little bit, right? If you have the wrong field order, you're putting a moment in time that was supposed to come after, before, or vice versa. Oh, and fun fact: standard definition TV in the United States has a different field order than high definition interlaced TV did.
Right. It used to be that we did everything, like in the days of DV tape, lower field first, right? And now it's all upper field first. But my point being more that incorrect field order, and mismanaging that, leads to a lot of these types of problems.
The other thing I would say is that, you know, you're talking about the two fields being overlaid on each other and being in incorrect alignment. The one thing that a lot of people think works in that situation, but often doesn't, because the cadence or the order of the fields is not correct, is: oh, I'll just deinterlace this content and have it work, right?
Yep.
That only sort of works, because the deinterlacer works by going, okay, which field do you want me to pull out, right? I'll pull out the upper field or the lower field, and then I'll recreate the image from the other field. The problem is, when you deinterlace and do that, if you have the wrong fields, the wrong field order, it's just not going to do anything. Or, at best, it's going to give you a compromised image, because any time you deinterlace something, you're getting rid of temporal information. Guess what that means? You're going to have a softer, less sharp image, because you're removing data.
So that's the thing, right?
If you use what's called a deinterlace effect, and in a lot of these cases, you have to deinterlace it because they're going to progressive.
It's got to become progressive.
So we've got to remove the interlacing somehow.
But here's the problem.
All deinterlacing algorithms are based on the assumption that you're giving it an interlaced input, right?
and that means it has access to field one and field two.
That's what I'm saying.
That's what I'm saying.
You don't always have that because most of the time somebody has captured this to some digital format
and burnt those two fields together into one frame and then, hey, maybe they resized it after that.
Maybe it got scaled up from SD to HD.
Once you've done that, that relationship between field one and field two is completely blown out the window.
You have horizontal jaggy lines baked into your image forever.
And that's where I think a lot of people give up.
But I'm here to tell you about this too.
There are ways to deal with that.
Well, and also, this is a big one.
And you are perhaps the only person I know that still actually has this in a physical setup, right?
Is that, you know, one of the challenges for a lot of people is they don't even see the problem, right?
They don't see the problem with interlacing. Like, they might see the jagged edges or whatever, but they're not even at the point where they can identify the problem, like whether it's true interlaced or not, because they're not looking at it on an interlaced monitor either, right? They're looking at things on a progressive monitor, which can present its own issues, depending on how it rebuilds interlaced fields.
Yeah. The quick thing to do, if you're looking at a fully progressive monitor and you need to know if there's inter-field motion in a source: bounce into another Resolve project, turn on interlacing, and by default Resolve will now go by the field, not by the frame, with the left and right arrows.
So in a progressive image that was converted to interlace, you'll see left and right will be
the same, right?
It'll look like a freeze frame.
But if there is inter-field motion that you might eventually need to remove or address somehow, you can arrow left and right and see it on your progressive display, because what you're doing is saying go field by field.
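[Editor's note: if you want to triage a pile of sources programmatically before stepping through them, a crude sketch of the same check follows. One caveat: fine static vertical detail also raises the score, so treat spikes as flags to inspect, not verdicts.]

```python
import numpy as np

def interfield_difference(frame):
    """Mean absolute difference between the two fields of a 2-D grayscale frame."""
    even = frame[0::2].astype(np.float32)
    odd = frame[1::2].astype(np.float32)
    n = min(even.shape[0], odd.shape[0])
    return float(np.abs(even[:n] - odd[:n]).mean())

# On progressive-in-an-interlaced-wrapper footage this stays near a constant
# noise floor; on true interlaced capture it spikes on every frame where
# something moved between field one and field two.
```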
But the big thing is, in almost all cases,
we're going to be getting rid of interlacing
for our final mastering process at this point.
So we are going to be getting rid of some temporal information.
The best way to do it is with a real deinterlace.
And Resolve has a very good deinterlacer, especially if you go into preferences and turn on its enhanced mode. But those deinterlacers all rely on a good, solid interlaced signal, whereas if it has been deinterlaced before, or it has been captured to video and then scaled up to HD, for example, your native interlacing data is gone, and we need to start looking at other solves for that baked-in jagginess that we see.
Okay, so let me give you a hypothetical. I have a situation that's got some baked-in jagginess, and the producer-director is yelling at me about it. What is my first go-to way to address this?
A lot of people will start thinking, oh, maybe I'll do some noise reduction, maybe I'll try that deinterlace effect, and none of these things work.
The only real solve to baked-in interlacing, and it's a bit of a bummer because you do lose some resolution doing it, is to resample the image vertically. However, since we're keeping the entirety of the horizontal resolution, it actually looks shockingly good. We're essentially averaging out those two different moments in time where we get those jaggies along the vertical axis.
And I do this with a fusion effect that I've built.
I'll give this fusion effect out to anybody who wants it because I'm so passionate about fixing this problem in documentaries.
We're going to post it.
We're going to put it in the show notes.
Everybody that knows me has gotten a copy of this fusion effect.
And everybody loves it.
I'm actually really proud of this one.
So it's really useful.
Essentially, inside that Fusion effect, all we're doing is scaling: there's a slider that scales the image up vertically, then scales it back down again by the exact same amount. So we're resampling it vertically. But unlike doing that in the timeline, Fusion has a lot more options for the resampling algorithms. So I went through and picked the one that held the most detail for this very weird application and baked that into it. So essentially you just get a slider that very gently smears the image vertically to get rid of those lines. And the reason why it needs to be a slider, as opposed to just, oh, only two lines, right? Is because, hey, if it was two perfect TV lines in the file, we could deinterlace it. We could use the deinterlace effect. Most of these sources have gone through generations of SD to HD scaling, to H.264, to time effects, to whatever else. So we have that little slider.
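[Editor's note: Joey's actual Fusion macro is in the show notes; as a rough stand-in for the principle, here is a Python/Pillow sketch that resamples only the vertical axis, with a strength control playing the role of the slider. The file names are placeholders, and blending against a half-vertical-resolution resample is our simplification, not the macro's exact math.]

```python
from PIL import Image

def soften_vertical(img, strength=0.5):
    """Blend toward a vertically resampled copy; horizontal detail is untouched."""
    w, h = img.size
    resampled = (img.resize((w, h // 2), Image.LANCZOS)  # drop vertical res only...
                    .resize((w, h), Image.LANCZOS))      # ...then bring it back
    return Image.blend(img, resampled, strength)

src = Image.open("baked_in_interlacing.png").convert("RGB")
soften_vertical(src, 0.6).save("fixed.png")
```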
Now, the last part of this puzzle,
and this is going to get to our next issue
I want to talk about with video sources in documentaries
is what we call blanking or edge behavior.
When we scale or resample this image vertically,
it's going to mess up the top and bottom edges.
You're going to get softness there.
So you can either scale it up a little bit and crop it, or do a little bit of a clone on the top, to adjust that.
So in this effect that we're going to put in the show notes and give you to play with,
I have little options for how to deal with that.
But in general, if you're using this technique of resampling vertically to get rid of baked
in interlacing, be aware of the top and bottom edges and those blanking regions.
So now that I have completely ranted and raved like an insane person about obscure interlacing,
it's not obscure.
I really think this is a major issue that we all, as people in this industry, doing finishing work, should take more seriously.
I really believe that.
Support for this episode comes from Conform Tools. Conform Tools allows you to translate timelines between Premiere, Resolve, and other NLEs, while automatically solving common issues that normally need to be fixed by hand. Avoid time-consuming trim and transfer issues, and securely send large media files to collaborators at a fraction of the size, in minutes instead of hours. With a growing toolbox of features, let Conform Tools handle the tedious stuff so you can focus on the creative. Built by post professionals, Conform Tools helps editors, colorists, and conform artists move faster and finish stronger. Learn more at conform.tools.
That kind of leads me into the next major, major thing
to look out for,
and again, it's another thing that is often missed when dealing with video archival sources.
That is things in what we refer to as blanking or the edges.
Now we used to call it blanking because in the original video signals,
those areas were blanked out as in not visible.
So Robbie, why don't you tell us a little bit more about what other issues
blanking can give us now that we're finishing archival stuff in modern formats?
Oh man, this is giving me a little PTSD about my days stuck in a QC box looking at, you know, scopes and analyzing, you know, what do they call that, front porch, and all sorts of analog-type evaluations. But generally speaking, these days, when we're talking about blanking, it's referring to dead areas or black parts of the screen, which we're commonly going to find either in the pillarbox, the left and right sides of the screen, or the letterbox, the top or bottom, right? Oftentimes, especially with archival that originated on tape and was digitized, you'll see a thin, maybe two to ten pixel wide, sometimes a little bigger even, kind of strip down the side of the frame that is black, that is not active picture. It was never meant to be active picture, but when it got digitized, it was digitized with that. It wasn't, you know, scaled or anything, so it was copied right off of that. And so that can be annoying visually, but it also can be a QC issue. You'll often get flagged for issues like blanking on the top or bottom of the screen.
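[Editor's note: spotting those strips across hundreds of clips is tedious, so here is a hedged sketch of an automated pre-check: walk in from each edge and count rows or columns whose average level sits below a near-black threshold. The threshold is an arbitrary starting point for 8-bit luma, not a broadcast spec.]

```python
import numpy as np

def blanking_margins(frame, threshold=16.0):
    """Count near-black rows/columns at each edge of a 2-D grayscale frame."""
    def run_length(means):
        count = 0
        for m in means:
            if m > threshold:
                break
            count += 1
        return count
    row_means = frame.mean(axis=1)  # one average level per line
    col_means = frame.mean(axis=0)  # one average level per column
    return {"top": run_length(row_means),
            "bottom": run_length(row_means[::-1]),
            "left": run_length(col_means),
            "right": run_length(col_means[::-1])}
```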
The other thing that you'll commonly see, too, is depending on the source and how it was, again, digitized,
at the very top of the screen, you might see what looks to be like noise, right?
It looks like, you know, kind of like little dots going off or little lines or whatever,
and people go, what is all that weird noise at the top?
Chances are it's one of two things, or potentially both: it could be closed caption data, embedded line 21 closed caption data at the top of the screen. It can also be time code data. It could be VITC time code embedded at the top of the screen as well. VITC time code, or vertically integrated time code.
Vertical interval time code.
Thank you. I'm sorry. Vertical interval time code. Versus, what's the opposite of VITC? It's LTC.
Linear time code.
Thank you. Which you'll see, or hear rather, on an audio output as the tape is playing. It's just one method of inserting time code into a source. It could also be VITC on top, right? So oftentimes you'll see things slightly misscaled or misshaped or whatever, and you'll see that blanking or that VITC or that closed caption data at the top.
So how do you go about fixing these issues, right? Well, the first thing is that I'm all about eagle eyes on this. I will oftentimes zoom the viewer in, right, to just the edge of the screen. And we talked about this one before, but this is super helpful.
In the Resolve viewer options, you can actually tie your viewer zoom to your SDI output zoom.
So it will also zoom on your reference monitor.
Now, another important thing while you're doing that: in your Resolve preferences, set the option for viewer background to gray.
That way you'll see a hard edge where there's any discrepancy.
Yep.
So that's my first step: just the eagle eye on this. But even then, you know, it can change shot to shot, or, you know, it's late at night and you're working through it.
Or if you want to go lo-fi about this, you could put a brightly colored solid behind the clips and just move your clips up to video track two.
But our smart audience will go, well, that's not going to work, Rob.
How am I going to see the bright clip behind that black strip?
Well, here's the deal.
There's two types of blanking that I think you'll find. One is the blanking that's actually baked into the clip, which is an actual black bar of pixels; that's blanking type number one. But blanking type number two is when you've done a reposition or a resize on your own and you've introduced blanking, and then you have a set of transparent pixels, and when there's nothing behind them, they're black. But if you put something back there, then you can see that color behind it. So I check for both of those things, and they're super useful to check. Now, in terms of the VITC or the closed caption data, that's usually just a slight scale. And one of the things I do, especially for docs or shows that are really archival heavy, is I'll create an input scaling preset. I don't want to have to grab the input or edit sizing knob every time to size it; I'll just figure out a good, you know, 1%, 2% push-in and save that as a preset, so anytime I can just apply that to the archival, one click and it's done.
Yeah. And it's also one of those times where it is good to do what we talked about a little bit previously: actively masking your 4x3 sources if they're pillarboxed. This is a great time to do that, because when you scale it up a tiny bit, you'll also clean up those left and right edges to be a dead
straight line, which in general, I think, looks better.
Yeah, I agree. Now, one note about the artificial masking: you have to get a little used to this, because there's a couple different ways you can handle it. One of the things that can happen is, let's say you're doing a zoom on a photo, you don't want to also zoom the masking, right? So you either have to do that through some sort of layer ordering or a compound clip or whatever, to make sure you can still do the moves that you potentially want to do without that masking also changing.
Yeah, and that applies to any kind of resizing, like, for example, stabilization, right? If you have a four by three clip that you stabilized, you don't want those edges jiggling around, right? You want them to be locked into a four by three crop.
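[Editor's note: circling back to that 1% to 2% input scaling preset, the number isn't magic; it just has to be large enough to push the widest strip you've found out of frame. A quick back-of-the-napkin sketch, where the 6-pixel strip is an illustrative assumption:]

```python
def pushin_percent(width, worst_edge_px):
    """Uniform scale-up (in %) needed to push an edge strip out of frame."""
    return (width / (width - 2 * worst_edge_px) - 1) * 100

# A 6-pixel strip on a 720-wide SD capture needs only about a 1.7% push-in,
# which is why a saved 1-2% preset covers most tape sources.
print(f"{pushin_percent(720, 6):.2f}%")
```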
Now, speaking of four by three, there's another thing, to get into my old video nerd history mode again, my favorite thing, that I don't think a lot of people have heard of at this point, but it's another thing that I see done wrong often. That is the concept of the pixel aspect ratio. We've talked about normal aspect ratios and how you should never change what the actual, real aspect ratio is. So what's a pixel aspect ratio? Well, we joked about field order being weird for standard definition television. We've talked about the weird history of interlacing and how CRTs worked for television. Well, guess what? CRTs also had this concept of the oval-shaped pixel. On all modern displays, our individual pixels are square. Makes sense, right? You draw a square, it'll be the same length top to bottom as side to side. Well, for the first 80 years of television, up until HDTV, that was not the case. All of our television signals and every video source had vertically oblong, oval-shaped pixels. This didn't really matter when dealing tape to tape. It didn't really matter when broadcasting, because the CRT would display it correctly at all times. But when we brought those signals
into a computer, we have to compensate for that, because we're essentially taking an oblong pixel and putting it into a square pixel raster. So with video sources, you often see the resolution 720 by 480 or 720 by 486, and you often see the resolution 640 by 480. Obviously, those are horizontally very different numbers. That's because if you were to capture all of the pixel data of a standard definition image, you do get 720 pixels across. However, they are oblong, and it's about a 10% scale to bring that to where it will actually look correct on a square pixel display.
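[Editor's note: the arithmetic behind that, as a sketch, using the commonly quoted 10/11 pixel aspect ratio for NTSC 4:3; the familiar 640 by 480 frame corresponds to the 704-pixel clean aperture of a 720-pixel capture.]

```python
NTSC_PAR = 10 / 11  # ~0.9091, the familiar "0.9" DV-era number

width_nonsquare = 720
width_square = width_nonsquare * NTSC_PAR     # ~654.5 square pixels of picture width
vertical_stretch = (1 / NTSC_PAR - 1) * 100   # ~10% if you correct vertically instead

print(round(width_square), f"{vertical_stretch:.0f}%")
```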
Now, Resolve does this automatically to 720 by 480 sources that are tagged appropriately. Most NLEs do this automatically. But this is one of those cases where, if everybody looks slightly oblong, it might be wrong. You might need to apply it manually, and it's literally 10% vertical scaling. And it's funny, I've had this baked into my head since 2010, which was right
around the time when political advertisers who were kind of late to the game because it was
expensive, moved from standard definition to high definition. And I actually, I got on the news
for this one. We did a political ad. I brought in a photograph that was going into a standard definition output. So I had to do the opposite: I had to compensate by stretching it slightly vertically, or sorry, squeezing it down a little bit, so it would be the right pixel aspect ratio for our standard definition deliverable. Now, I was used to high definition at the time, so we had kind of started going the other way.
Well, anyway, the end result was I put out a political ad where a particular candidate was 10%
thinner than they should have been.
And this particular candidate was known for being a larger individual.
So the internet blew up.
There were news stories about this.
Oh, they're digitally manipulating the image to make candidate X look thinner. Nobody could do that by accident. That is obviously an intentional decision. They're being dishonest, and blah, blah. There were local news stories. There were forum posts. There was all kinds of stuff. And it was because I had the pixel aspect ratio checked wrong when I brought that picture into my nonlinear editor. And it was a long day, and I didn't notice. So, yeah, that's a bad day.
10% vertical scaling could be a pixel aspect ratio issue.
And I think a lot of people don't realize that.
True.
And I think the last time I really seriously thought hard about this, because it was a daily occurrence back then, was in the DV days, the HDCAM days, that kind of stuff. I just had to look it up on Wikipedia, because I had this number in my head and I couldn't remember if it was correct. I kept thinking 0.9, and 0.9 was that non-square DV pixel aspect, right? That's the decimal version of it.
Yeah. So 10% can be a little bit of a weird one. But thankfully, these days most everything is square pixels, so it's less of an issue with acquired footage. But you're right, it still pops up from time to time.
It still pops up from time to time.
Bad pixel aspect ratios can get baked into things.
And I think one of the things to do, as you said earlier,
is to focus on geometric shapes, right?
Is somebody, you know, is a circle, you know, an oval, right?
Or somebody too thin or are they too fat or whatever?
But this can also, this bit me recently.
I didn't realize that some anamorphic film footage
that I was dealing with had been stretched improperly.
Its pixel aspect ratio was calculated incorrect.
And so you can still deal with these issues,
even if they don't have anything to do with archival DV or whatever, right?
They can still have anamorphic is a great example.
Yeah, I mean, the animorphic is the film equivalent of a different pixel aspect ratio.
Yeah, man.
I mean, you know what it is?
It's one of those things that, like, it seems like a digital thing,
but in this case, it was an optical thing.
And I didn't do the desqueeze originally myself. I was working with a sort of baked-in image of this.
And it just, you know, at that point in time,
it can be a little difficult, right?
Because you don't know, especially if you're working with something
that's already been stretched or squeezed or whatever direction you're going in,
you kind of have to use your best judgment at that point.
You're probably never going to get it mathematically perfect,
but that's where looking for those circles, ovals, etc.,
can at least get you in the passable ballpark.
Yeah, absolutely. That's one of the overarching things I really want to emphasize here: these sources go through generations of different conversions, different processing. Sometimes you've got to put your detective hat on and think about what could have done this to this image, and how do I undo it. One quick little aside, a little piece of history: we dodged a bullet with HDTV and pixel aspect ratios. The original proposed HDTV spec was 1920 by 1035, with oblong, oval-shaped pixels, just like standard definition had been.
And the biggest advocate against that, and for 1920 by 1080 as a standard and as the standard,
came from the legendary, and quite an idol of mine, Mr. Charles Poynton.
That was one of his causes in the original development of HDTV,
so we can thank him for square pixels finally being our go-to.
I mean, you know, all these years later, it seems like a no-brainer.
Why do you want to have to be doing mental math constantly when you can just go, it's square?
But the argument was we had been doing it for 80 years, 90 years.
I know. I don't.
Support for this episode comes from Flanders Scientific and the XMP 270 and XMP 310,
the accessible, lightweight, and versatile monitors helping to bring HDR monitoring on set
while also being very well suited to post-production work.
Learn more at flanderscientific.com.
All right. So moving right along, one of the things I wanted to chat about in terms of video as well is the resolution issue and how we attack the resolution issue.
And this is going to parlay into a brief discussion for those of you who are AI-averse.
We're going to talk about this in a second.
But let's talk about the non-AI, well, I can't say it's completely non-AI, the more manual approaches to dealing with low resolution video sources,
or I suppose even film sources too.
So in a film source, obviously, your answer is,
hey, we need this to be better.
I could potentially go back and re-scan it, right?
On the video side of things, you're not going to go back and redo the acquisition in any way, because it is what it is. So what do we do with something that's, say, 720 by 480
and we need to put it in a UHD project?
Well, we can get creative with how we handle it. We talked about this earlier: window it, box it, do a background treatment, whatever. But sometimes you do want those things to go full screen. The thing you probably don't want to do, I'm just going to put
this out there, is just use regular transform and push in a couple hundred percent into something, right?
I generally, and this is not a hard and fast rule, but I generally think about 15 to 20 percent
as kind of my cap for how far I'm willing to push in on things with traditional just scale
and, you know, basic transform controls.
After that, I'm thinking about, hey, if I can go further,
do I need to start doing some other treatments?
Noise reduction, sharpening, and that kind of stuff.
But, you know, once you get up toward that 50% range, you reach the point of, like, nope, no matter what I do, noise reduction, sharpening, whatever, this is probably going to not look so good, it's going to get softer.
So enter the world of AI and AI-assisted tools, right? And the first one that I've actually come to love a lot for this, which does a pretty good job, is Super Scale in DaVinci Resolve, right? So Super Scale applies a mathematical algorithm to basically
do some doubling up of pixels to help give you the perception of, and there's different ways
the algorithm can be handled, but to give you the perception of a sharper, more robust image
when blown up. The downside of it is that it's machine intensive to do this math all the time,
especially as you go up in resolution
and start dealing with more, you know,
high resolution sources and stuff like that.
Have you had pretty good results with Super Scale?
I find it for a lot of things pretty good,
but it can, especially at the settings
that focus more on noise reduction,
it can get things pretty, pretty soft too.
Yeah, the thing to remember about Super Scale is that they do market it as kind of an AI tool,
but it's not a generative AI.
It is not filling in new pixels
that it makes up, which I think is, we'll talk about this a little bit more in detail,
but very important for accuracy in a documentary, if that matters to your project.
We are not making up new fill material with Super Scale.
We are just combining some noise reductions and sharpening algorithms and some other things
together.
You can also kind of make your own formula with that by adding some noise reduction or some
sharpening or sometimes a little bit of film grain can increase the perceived detail without
giving the ringing around edges that hard sharpening can do. Same thing if you do a frequency separated
sharpen, like the soften and sharpen tool. You can get, you know, those those crisp details
a little bit sharper without really getting that hard ringing. So it really depends shot to shot.
Sometimes it's super scale. Sometimes it's soften and sharpen. Sometimes it's a little bit of regular
sharpening. Sometimes it's a little bit of film grain. But the other thing that I think gets forgotten
about a lot is there are various different scaling algorithms or interpolation algorithms available.
If you look in the inspector, you could do sharper, softer, better quality. There's a couple of
different options you can have. And for different sources, maybe something that has a lot of really
sharp pixels or sharp detail, you might want to use the sharper version or the softer version,
depending on how it is. So dig into the inspector even when you're just using regular scaling, and see what works best for your footage. It's not always just the defaults.
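[Editor's note: for the soften-and-sharpen, frequency-separation idea mentioned a moment ago, here is a generic sketch of the technique, not Resolve's actual internals; the radius and gain values are arbitrary starting points. Split the image into a blurred base and a detail layer, then boost only the detail so edges crispen without heavy ringing across broad areas.]

```python
import numpy as np
from PIL import Image, ImageFilter

def freq_sep_sharpen(img, radius=2.0, gain=1.5):
    """Boost only the high-frequency detail layer of an RGB image."""
    base = img.filter(ImageFilter.GaussianBlur(radius))           # low frequencies
    detail = np.asarray(img, np.float32) - np.asarray(base, np.float32)
    out = np.asarray(base, np.float32) + gain * detail            # re-add, boosted
    return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8))
```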
One thing I would point out about those scaling algorithms is that the math can actually make a potentially gigantic difference. And it's not just about going up either, right? It can sometimes be about going down. I'm sure people have had this problem where they take, say, a UHD drone shot, right? And then they scale it down to HD, and all of a sudden it's got all of this aliasing and moire and all that kind of stuff. That's a scaling issue going the other way around, right? So in those situations I often try Lanczos, I think that's how you say it, L-A-N-C-Z-O-S, scaling, and that works tremendously for downsampling.
You need more options? Yep, jump into Fusion. Fusion has a ton of options for different interpolation, especially if you're going to be doing slow pushes or zooms or moves, if you start seeing twinkles or aliasing or stuff like that. Yeah. Dip into Fusion, try doing your scaling there, and go through the different algorithms and see what works with the image.
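[Editor's note: outside of Fusion, the same filter bake-off is easy to run on a still export; a small Pillow sketch with placeholder file names:]

```python
from PIL import Image

src = Image.open("uhd_drone_frame.png").convert("RGB")
target = (1920, 1080)

# Render the same downscale with several filters and eyeball the results;
# LANCZOS and the area-averaging BOX filter usually tame aliasing and moire.
for name, filt in [("nearest", Image.NEAREST),
                   ("bilinear", Image.BILINEAR),
                   ("bicubic", Image.BICUBIC),
                   ("box", Image.BOX),
                   ("lanczos", Image.LANCZOS)]:
    src.resize(target, filt).save(f"downscale_{name}.png")
```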
Now, there are a lot of applications out there now that are claiming to have the secret sauce about this in terms of getting the best results.
And I have to admittedly and semi-begrudgingly say that they can do a pretty fairly good job depending on the source, right?
And probably the most popular one out there these days that gets a lot of talk is the tool set from Topaz.
And Topaz comes as a standalone application, or it can actually even run as a plugin inside of Resolve.
I think I generally prefer the standalone option versus the plugin for a couple reasons, but mostly workflow-wise.
But it does with various AI models do targeted focuses for things like actual upscaling.
It can do a really good job with noise reduction and sharpening.
But the same general rules apply where you have to kind of work out a little bit of a
recipe. This is not just like, oh, I'm just going to choose this and it's a one size fits all.
You really kind of have to start separating out, okay, these sources do well with this kind of
upscale algorithm, these sources do well with this, and kind of evaluate the result and try to
iterate a little bit. I would say the one other thing I would put out there about using an AI
tool like this is you definitely have to factor in the processing time that's going to be involved
in doing these sources. Now, it's one thing if you're dealing with processing the clips that are on your timeline. That's a relatively
straightforward thing because you're talking, you know, seconds or minutes, not hours. But if you're
trying to do this upconvert on sources before you get them in, yeah, that's where you're
going to have to really kind of budget for some time because some of these things can be really,
really machine intensive. Now, I'm going to be really dogmatic here and say that it's very
important to remember that there is a demarcation line when you get into a generative tool like Topaz, where essentially what it's doing is looking at your image and then making its best guess, from all of the photos it has been trained on, of what pieces and pixels it can steal from those to fill in details that don't exist in your image. So if historical accuracy is important in your project, not only do you need to be aware of this, your client might not be aware of this. If you use tools that are generative AI to fill in texture or
details, yes, it might look very convincing. You are now, in my opinion, and I would say
factually, you are destroying the authenticity of that image for a bump in visual quality. You're essentially faking it. Those are details and information that were never
captured. And that could be as subtle as blemishes in people's skin. It can be as subtle as someone's
hair. It can be as subtle as the way they move. Right. It might look super convincing. But I'm sorry,
it's not real. And for real historical documentaries, I don't think it's appropriate. And I think it can be
very easy to have, like, a gut reaction to it. Like, oh my God, that looks so much better. But then once you start pixel peeping it a little bit and really doing some analysis on it, you're like, why has that guy's hair, like, gone to a geometric square now, right?
Yeah, or why is it?
Seven fingers.
Right, or, I've not experienced that with Topaz, but what I've seen more is, like, things take on sort of a plastic sheen
because the noise reduction's overzealous.
You can't just assume that the algorithms that are used in these tools are always doing supportive or good things. You have to, that's what I'm saying, you have to be iterative about this. And generally speaking, I tend to take the less-is-more approach with these tools, right? Like, okay, can I get
to a good baseline with this tool, but then maybe use more sophisticated noise reduction in
Resolve or do other techniques and combine that? Like, I don't need Topaz or any of these AI
tools to necessarily solve every problem with the clip. I'm just looking at the things that it does
really, really well. Okay, you're really great at tripling or quadrupling pixels, but you're,
you know, I don't like your noise reduction. So fine. Just separate those two tasks, right? That's totally,
that's totally fine, I think, but you're only going to get to that once you experiment a little bit.
And the last place you want to be with this kind of stuff, by the way, is just kind of winging it
on deadline. I would really, really suggest that if you're going to use something like a topaz,
that you get familiar with the controls, you understand recipes that generally work or don't
work rather than going, oh, crap, I now need to process 50 clips and I have an hour to do it kind of
thing.
Yeah. And I don't mean to insult the Topazes and that category of products. They do a great job. In fact, Topaz specifically, I'd say, does a very impressive job of not doing sloppy-ish artifacting, as in the seven fingers or things like that. They've tuned their models to be very, very good looking. And they've also built their models from what we'll say are authorized sources; they didn't just scrape the internet for copyrighted work. So there's not going to be licensing problems, things like that.
But when it comes to actual historical footage, I think it's very important to draw a line here and say: if you are presenting this as historical photography, you just can't use generative algorithms to fill in the details.
It's basically, you know, in Jurassic Park, they put
frog DNA in there to make the dinosaurs work. And we saw how that worked out, right? Yeah. And that's why I
think that the, I mentioned much earlier, the idea of kind of like chain of custody is that like,
you know, a good, a good archival producer, which is a whole other subject we should, we don't
have to dive into right now. But a good archival producer will understand that chain of custody to
a certain degree and understand: oh, there are much better versions of this, and this is what they look like; we just can't afford them for this project.
So let's use that as a reference point in the technical work that we're going to try to do.
Because you're right.
Like, take that Challenger documentary I was talking about earlier, right?
Obviously they went back to the original film scans and rescanned it.
But let's just say they couldn't, right?
That's a case in point where too much cleanup is degrading the original content to a certain degree.
Right.
Like, the space shuttle was never that white, or whatever the case may be, right?
Like, yes, there were these, you know, these lines you could see in the heat tiles on the bottom, right? And with all the noise reduction, you've gotten rid of those lines.
Like, whatever it may be, I think that there can be aspects of this where a little thing,
something that seems good on the surface goes a little too far.
Yeah.
And as long as you understand the generative tools are making up data that wasn't captured,
you can use that in your decision-making process as you talk to the client. Like, look, if it's a B-roll shot between recreations that's made to look historical for the story, fine. Topaz that all you want. If it's a president making a speech, no, that's not historically appropriate to use an AI upscaler on.
You know, it depends on the context in the film.
And only, you know, you and your client and the producers can really be the judge of that.
But it's important to understand that when you get into generative AI, you are removing the authenticity of the image; that's unquestioned.
Yeah, I agree.
I agree. And I mean, I think there's obviously some creative-slash-authenticity issue that exists there. But the best work that I've seen done in this regard tries to respect it with moderate improvement, right? So the idea that you're going to get to a perfectly clean, perfect image, that should probably never be the goal with most video archival, right? The idea that you can improve and enhance tastefully,
That's really more what I think the goal should be, right?
And the generative stuff, I want to be clear: I think from a purely technical point, you're correct about it creating pixels that are not there. To me, it feels a little different than, hey, you know, chatbot, make this cool image for me, right? It's not necessarily generative in the sense that, you know, I'm not saying, hey, make a person and put him next to this other guy, right?
Like, that's clearly generative.
In this regard, I think of those algorithms as more of, I get what you're saying, but I look at them more as enhancement, you know, resolution or noise reduction enhancement.
Yes, technically, are they making new pixels?
Yeah, I agree with that.
But it's not exactly the same as putting a different person in the shot.
Yeah, that's why, like, I'm drawing a hard line in the sand here.
And that's kind of where I stand on it.
But like I said, it depends on the context in the film.
and what the goals of your producer and your client are.
It's just important to understand the difference in the technology
between something like a Topaz versus something like a super scale
or a regular noise reduction.
Yeah, and one last thing: if authenticity to the image is the most important thing, that's where having a plan about how to handle this stuff is more important, right? I remember years ago I did a film about the punk rock and hardcore scene here in D.C. And the filmmaker was like, we have to keep it raw; I don't even want to color this. I just literally want you to, like, clean up the edges. So every time they had a shot from, you know, 1982 of Bad Brains or whatever, right?
It was a four by three image in the middle of the frame,
but he came up with some other creative artistic ways to make that seem less boring.
Because he didn't want to scale it, noise reduce it.
He wanted it to be as raw as possible, right?
So you had to consider that stuff as well.
All right, man.
Good stuff.
I think over these past two episodes we've covered a lot, a 50,000-foot view of this stuff. Obviously, there are hundreds, if not thousands, of things we could cover in detail about each one of these topics. But the idea here is that focusing on the challenges and the big-picture ways to fix them is going to give you plenty of opportunities to not just settle for, oh, this is archival and we're just going to insert it in, right? You can improve. You can get better. You can identify what's good and what's bad.
Quick bit of housekeeping again, if you wouldn't mind. We still have our audience survey open; that's right here on the link on screen. If you have five or ten minutes to answer our audience survey, that would be really helpful. We're using this feedback to help guide the podcast in 2026, so we appreciate anything you can do there. As a reminder, you can head over to offsetpodcast.com to find our complete library, but that's also where we have show notes, including some of the things that we're going to include here on this episode, the DCTL and some of the Fusion stuff that Joey mentioned over the course of these two episodes. We'll link to that over on offsetpodcast.com.
If you're listening to us on YouTube or various audio or podcast platforms, do us a favor and give us a like and subscribe wherever you find the show.
And then lastly, if you do have a few minutes, head over to this link right here where you can buy us a cup of virtual coffee.
Your support of the show means the world to us.
Every dollar donated on Buy Me a Coffee goes right to supporting the show,
helping us pay our editor and all that kind of jazz.
So we really appreciate the support there as well.
Joey, fun two episodes.
Hopefully a lot of people got some sort of, you know,
a couple nuggets out of this.
There's a lot to talk about,
but it was always good talking about how to handle this kind of stuff.
Because honestly, if you do any sort of long-form dock work or that kind of thing,
this is always going to be something that pops up on how to best handle that.
So for The Offset Podcast, I'm Robbie Carmen.
And I'm Joey D'Anna.
Thanks for listening.
