The Offset Podcast EP039: Virtualization
Episode Date: September 2, 2025

In this episode of The Offset Podcast we're talking about something that's been on our mind for a while - virtualization. Specifically, how virtualization can help facilitate a color and postproduction workflow. If you're new to the subject, this episode is a good primer on the essential components of virtualization. For those of you more experienced with virtualization, we believe strongly that virtualization will continue to play a large role in our industry and over the next few years become mainstream in many post and finishing workflows.

Specific topics discussed include:

- What is virtualization and why is it important?
- Local hardware mentality vs virtualization
- Local hypervisors and VMs
- Virtualization servers (DIY local and/or cloud based)
- Key vocabulary - bare metal, passthrough, etc.
- The role of remote desktop/streaming and local clients in a virtualized setup
- Why and how we're experimenting with fully virtualized workflows
- The trickle-down effect of virtualization is gaining steam

If you like The Offset Podcast, we'd love it if you could do us a big favor. It'd help a lot if you could like and rate the show on Apple Podcasts, Spotify, YouTube, or wherever you listen/watch the show. Also, if you liked this show, consider supporting the podcast by buying us a cup of coffee: https://buymeacoffee.com/theoffsetpodcast
Transcript
Hey, everybody, welcome back to another episode of The Offset Podcast, and this week we're talking all about virtualization.
Stay tuned.
This podcast is sponsored by Flanders Scientific, leaders in color-accurate display solutions for professional video.
Whether you're a colorist, an editor, a DIT, or a broadcast engineer, Flanders Scientific has a professional display solution to meet your needs.
Learn more at flanderscientific.com.
All right.
Welcome back, everybody.
to another episode of the Offset Podcast.
I am one of your hosts, Robbie Carman.
And with me, as always, is Joey Dana.
Hey, Joey, how are you, buddy?
Hey, everyone.
Joey, this week we're going to stay squarely in a topic that is in your wheelhouse,
and that is the idea of virtualization.
And virtualization is something that's being talked about a lot in our industry
and in general computing as well.
And I think it's something that's probably pretty new to most of our audience,
but there's probably some people out there
who are like, oh yeah, I know all about running VMs
and that kind of stuff.
But we want to talk about a little bit today
about how it affects individual users
or how it affects workflow
and why Joey and I both kind of think
it's kind of a little bit the future
and sort of where our industry is moving to.
But before we get into that,
let's do a little housekeeping real quick.
First and foremost, we have had a lot of feedback
from you guys about chapters in our episodes,
both chapters in the YouTube version
as well as the audio files that make it out to all the various podcasting platforms.
We have heard you loud and clear.
I'm not sure why we've made this oversight for 38 episodes or whatever.
But starting from this one out, we are definitely going to be making sure that all the episodes have chapters,
which is useful on some of the longer ones if you need to navigate specific topics.
So thanks for bearing with us.
I don't know if we're going to go back and redo all of the episodes to add chapters,
but from this one on, we'll definitely have chapters.
Also, just as a side note, there is a new way that you can support the show if you choose to, if you like what you're hearing and want to help us out.
We have an option for Buy Me a Coffee, which is just a euphemism for crowdsource funding and crowdsource support.
You can head over to the link that you see here on screen and we'll put it as well in the show notes.
And if you do like the show, a little support really goes a long way.
It takes a lot of work to make these shows every two weeks, pretty much year-round with a little time off.
So any support that you have to offer, we sincerely, sincerely appreciate it.
We couldn't do the show without you guys.
And of course, as always, you can head over to theoffsetpodcast.com.
Drop us a note if you have any ideas for a show.
You want to check out some additional show notes.
And that's kind of one of the best places to find sort of the library of past episodes as well.
All right, Joey, so let's get into talking about virtualization.
Like I said, I think people kind of maybe have heard this term.
I certainly have heard the phrase VMs, right?
But let's start at the beginning of this conversation about kind of what the current landscape is.
Because when I think about the typical operator facility, you know, production company, whatever, you know, it's like, hey, we have an operator, we have a room.
And there's a computer or maybe multiple computers for that room, right?
And that computer has this processor, it has this connectivity.
It has this OS.
And that's the machine that that person works on.
day in, day out, they make, you know, good money doing it, whatever.
That's the machine.
What happens, though, if we need to move that computer to another room?
What happens if we go, oh, you know what?
It would be great to have, you know, two more GPUs on this computer today,
because I got, you know, big render to do.
Those kinds of workflows have traditionally been very difficult
because you're buying new hardware, you're adding more gear,
you're unplugging things, moving it around.
Would you agree that that's
kind of the current lay of the land for most of us? Yeah. So I like to kind of think of it as a
split between what is kind of the server world and the end user world, where, you know,
split between what is kind of the server world and the end user world, where, you know,
today most artists are still working on what we consider to be local hardware, whether that's
a monster workstation in a machine room or that's just I'm editing something on a laptop. You're still
interacting directly with a computer to do your work. Now, years and years ago, let's go back
25, 30 years. It was like that for servers too. I remember the first like major IT network I set up
for a post-production facility. We had a huge storage network that handled all of the discs.
And then we actually had individual servers. The server was a piece of hardware for every single
task that was needed. We had Active Directory to manage the domain. That was one machine. Put it this way,
these machines had two Xeon processors. I don't mean cores, because these were single-core processors.
So this was a dual-CPU, two-rack-unit IBM server. One was our domain controller. One was our
email server because cloud hosted email didn't exist back then. You had to kind of host it yourself.
One was a database server, which handled our scheduling system, and if Resolve had existed back then, could have handled our Resolve projects.
And another one was a web server that handled our website and our web-based review and approval platform that I had built.
So we're talking four servers running 24-7, each one with two power supplies, each one with all of the memory and CPU associated with running those services.
And 90% of the time, they were running at zero or one percent utilization.
So what virtualization means at its core is you take one big server, which is considered the host, the actual hardware.
Nowadays, we have dozens of cores in one chip.
So, you know, you've got lots of resources available, lots of memory in a system, and lots of possibly disk in a system as well.
and you basically, instead of running an operating system directly on that hardware,
you run what's called a hypervisor, which manages the system's resources,
and then runs what are called virtual machines that run each individual operating system.
So if you have, for example, a 24-core host server, you can say give four cores to this VM,
eight cores to this VM, 10 gigs of memory to this one, 20 gigs of memory to this one,
and it lets you use one big expensive server to do many, many different things without touching each other.
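That carving-up of one host into several VMs can be sketched in a few lines. This is purely illustrative: the `Host` class below is made up for this example, not any real hypervisor's API, and the core/memory numbers just echo the 24-core example from the conversation.

```python
# Hypothetical sketch: how a hypervisor carves one host's resources into VMs.
# "Host" and its methods are invented for illustration, not a real hypervisor API.

class Host:
    def __init__(self, cores, mem_gb):
        self.free_cores = cores
        self.free_mem_gb = mem_gb
        self.vms = {}

    def create_vm(self, name, cores, mem_gb):
        # This simple model refuses to overcommit; real hypervisors
        # can oversubscribe CPU, but memory limits are usually hard.
        if cores > self.free_cores or mem_gb > self.free_mem_gb:
            raise ValueError(f"not enough free resources for {name}")
        self.free_cores -= cores
        self.free_mem_gb -= mem_gb
        self.vms[name] = {"cores": cores, "mem_gb": mem_gb}

# The 24-core host example from the conversation:
host = Host(cores=24, mem_gb=64)
host.create_vm("vm-a", cores=4, mem_gb=10)
host.create_vm("vm-b", cores=8, mem_gb=20)
print(host.free_cores)  # 12 cores still free for more VMs
```

Each VM only ever sees the slice it was given, which is what keeps the workloads from touching each other.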
Now, this has been the standard practice for servers for over a decade, right?
Nobody wants to buy purpose-built servers and put them in data centers
and have them sit at zero utilization for 90% of the time while you're burning money on electricity, right?
Right.
It's very easy to virtualize servers because the user doesn't directly interact with them, right?
You can just say, okay, this is its IP address.
All of my workstations can now hit this IP address for my database, for example.
Great.
That works amazing.
And that has transitioned from local hosted VMs to now what we see all over the industry is cloud-based
services.
You go to Frame.io, Dropbox, Google Drive.
All you're dealing with is somebody else's virtual
server in their data center somewhere else.
The entire server world is basically virtualized now, but what we're talking about in this
episode is virtualizing workstations in addition to the servers, which is it's something
that exists, but it's been slow on the uptake because doing things like video and Wacom
pens and sound has always been the hardest part of this kind of remote desktop solution.
Yeah, so there's a lot to unpack about what you just said, but it's a good overview of the kind of the idea of virtualization.
I'll give you sort of my perspective on it, and that is, you know, I think for those who have come to know me over the years, I'm a big fan of kind of staying current on the latest and greatest machines and computers and that kind of stuff.
And so generally what that means is I lease this stuff.
And, you know, every two, three, four years, I'm like, oh, well, now I need to go and get the next generation and batch of
computers, right? And I look at that every time that that comes up and go, wow, well, I need,
let's just put a number on it. I need five new computers and it's going to, you know, it's going to
be X amount of dollars a month and it's going to be at this amount of total. And it's every time I do it,
it's like, uh, right? And I always, every time I have this feeling, I have a flashback to what
my first experience with workstation VMs and virtualization was. And that was about 10, 12 years ago,
I was along with a group of some other trainer folks that I know.
We went up to Major League Baseball up in New York City.
Major League Baseball had just purchased,
I think, what used to be the old MSNBC studios in New Jersey.
And they said, this is going to be the new host.
This is a new facility for MLB and the network and all the shows they do and all the baseball stuff.
And they had this edict that we don't want to have a computer in the building.
And I was like, what?
What now?
Well, we don't want to have a computer in the building.
So what did they do?
They built a data center a few miles away from the studio.
And they basically ran dark fiber, you know, all the private pipe from that data center
to the facility.
Probably not something any of us are going to do, but just bear with me for a second.
All right.
And so they had this data center over here.
They had the studio over here.
Every room that you walked into just literally had a keyboard, a mouse, and a couple monitors.
But there was no computers, right?
So here's the beauty of it.
You know, let's just say you're an After
Effects person and you're working on some complicated show open and you're like, oh man, well
I've been working you know in proxy mode or draft mode and this is great but now I need to get a full
res you know full res view of this well the user could call up the help desk at the data center and go
hey I need three GPUs for this render can you make it happen click do do do do do do a couple minutes later
guess what that system magically has it now has three GPUs assigned to it and at that time I was like
what is going on?
This is amazing that you have this scalable architecture that can go, you know,
in terms of actual hardware that we think about paying for and installing on a machine,
be flexible enough to go where an individual user has a specific need, temporary need.
Let's throw RAM at it.
Let's throw disk at it.
Let's throw GPU at it, right?
And that was 10, 12 years ago when this was happening.
And back then, it was tens of millions of dollars to support this kind of infrastructure.
And to be clear, I'm not suggesting anybody dig dark fiber, you know, private pipe fiber from point to point.
Well, so here is where things have changed, right?
To be able to do really competent remote desktop interaction, you do need a lot of bandwidth.
Ten years ago, that meant dedicated fiber, which made it absurdly expensive for most end users.
Now a gigabit internet connection is a hundred bucks a month in most of the world.
Right.
Yeah.
So accessing those VMs has now become possible on a workstation level.
Yeah.
And so that was my first exposure to it.
And so over, you know, since then I have dabbled, just like many people have probably
dabbled with local hypervisors.
And what I mean by a local hypervisor is a piece of software that runs locally on your local
computer to allow you to virtualize other OSs. So for me, I prefer the Mac as my main,
you know, duty workstation, but I often have needs to jump over to Linux to do, like, DCP stuff.
Or I have, you know, need to go over to Windows because there's a specific piece of software
that's Windows only. So I've dabbled in local virtualization using, you know, tools that probably
a lot of people are already familiar with: Parallels, VMware. I mean, there's a few of them, you know,
out there that do this kind of thing,
where you basically install this hypervisor,
you install the OS that you want to work on.
It has to, by the way, that OS has to support the local capabilities of your computer,
but that's a side topic.
And I've installed Windows, Linux, et cetera,
and I just launched that when I need to get those applications or those tools,
but I'm still running it from my local desktop and going into it.
And I think a lot of people probably have that experience.
That's a cool first way to attack this, right?
but it's not exactly the level of virtualization
that I really want to dive into today.
So, um,
the level of virtualization that I want to talk about today is building a virtualization
machine that can handle all that.
So instead of running it locally on whatever workstation you're on,
we're essentially building a server that has all of the capabilities that we'd want.
And what I mean by all of the capabilities.
Well, it's an interesting question.
I want you to think just briefly about your dream scenario.
Like if you could have a machine that has whatever,
four GPUs, a terabyte of memory, et cetera,
that's the top of the pyramid for you building your virtual machine, right?
You want to pop in as much stuff as you could possibly want.
But the beauty about building a virtualized server like this now, Joey,
is that we, yes, we can dedicate that to one machine,
but the real beauty is that we can take all of that hardware
and all of that superness about this machine
and distribute it,
divide it up and break it up however we want to whatever the needs are, right?
Yes.
And, you know, there's a couple levels to even that.
You know, we're going to talk about both building your own VM host, as in you buy
the server, you configure it, you are hosting your own virtual machines, or having a cloud
provider do it for you where you're just buying cloud resources.
You can build a workstation in either scenario.
What's relatively new to the virtualization game is being able to virtualize GPU.
And this kind of happens in two different ways, right?
The simplest way is if you do have a VM host, you can have multiple GPUs, physical
GPUs in it, and you can do what's called a PCI pass-through.
You can literally say, this virtual machine, say, resolve workstation A,
gets the RTX 5090 right now.
When you do that,
no other virtual machine can get that 5090.
So you can have multiple GPUs
and share them on multiple VMs,
but the really cool thing that can happen now,
and you're probably not going to get into this
if you build your own VM host,
but when we get into,
I'm going to build a virtual Resolve workstation in the cloud,
you do have this option,
which is just like you can take a 24-core CPU
and split out its cores to give them to different VMs,
just like you can split out your system memory on the host to different VMs.
With the more advanced server boards that Nvidia sells,
you can split up the GPU cores per VM.
So let's look at the biggest possible virtualized workstation environment
for like a major corporation, right?
They're either doing this in the cloud or in a private data center.
It doesn't really matter.
But their VM hosts have a buttload of CPU, a buttload of memory, and a buttload of the
Nvidia Tesla cards that are virtualizable, right?
So then you say, okay, I need eight Resolve systems today.
Each one of those systems gets how many GPU cores, how many CPU cores, how much memory.
Those individual cards can be split out to multiple images and multiple VMs.
So there's two different ways to do a workstation level GPU with virtualization.
There is one, giving a whole GPU to the VM.
That's the easy way that if you're building this yourself, you're probably going to do.
The harder way, but if you're building something for a facility, is to get virtualizable GPUs that you can then assign to VMs on a demand basis.
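Those two models can be sketched side by side. The class names below are invented for illustration; in the real world, whole-card passthrough is typically done with PCI passthrough (e.g. VFIO), and core-level splitting is Nvidia's vGPU-style partitioning. The core counts are placeholders, not real card specs.

```python
# Illustrative sketch of the two GPU-sharing models described above.
# Class names and numbers are hypothetical, not a real virtualization API.

class PassthroughGPU:
    """Whole-card passthrough: one VM owns the physical GPU exclusively."""
    def __init__(self, model):
        self.model = model
        self.owner = None

    def attach(self, vm_name):
        if self.owner is not None:
            raise RuntimeError(f"{self.model} already passed through to {self.owner}")
        self.owner = vm_name

class VirtualizableGPU:
    """vGPU-style card: its cores can be split across several VMs."""
    def __init__(self, model, total_cores):
        self.model = model
        self.free_cores = total_cores

    def attach(self, vm_name, cores):
        if cores > self.free_cores:
            raise RuntimeError("not enough free GPU cores")
        self.free_cores -= cores
        return (vm_name, cores)

# The DIY route: the whole card goes to one VM, and no other VM can touch it.
gpu = PassthroughGPU("RTX 5090")
gpu.attach("resolve-workstation-a")

# The facility/cloud route: one datacenter card split across several VMs.
tesla = VirtualizableGPU("datacenter-card", total_cores=8192)
tesla.attach("resolve-1", 2048)
tesla.attach("resolve-2", 2048)
print(tesla.free_cores)  # 4096 cores still assignable to other VMs
```

The exclusivity in the first model is exactly why a self-built host usually needs one physical GPU per simultaneously running GPU-heavy VM.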
Yeah.
So it can get, as you said, very complicated.
I want to, again, unpack that a little bit because I think there's a
few interesting things that you have pointed out there, right?
And some phrases, some, some concepts that I think are really, really important, right?
So the first one that I think about in this, I don't know, this setup or this sort of building of these kind of setups is I think about the phrase bare metal, right, as a sort of a baseline concept.
And I think that people may have heard that concept before, but to you, what does bare metal mean?
Because I think it's an important term before we move on to the virtualization parts.
Yeah, bare metal is the actual hardware.
So when you talk about what's called a bare metal hypervisor, let's say you would install Windows on a computer, right?
Windows is the operating system running on your bare metal.
If you install something like VMware ESXi or Proxmox or any of what are considered bare metal hypervisors,
that hypervisor system now replaces what the operating system on that hardware would be.
And you only really access it.
You don't plug a keyboard and mouse into that.
You access that over a web-based console and you kind of manage your virtual machines from there.
That is how VM hosts are done most of the time.
You know, desktop virtualization, like Robbie said, is great.
If you have a laptop that you want to open up a Windows app on real quick, but it's a Mac,
Yeah, VirtualBox, Parallels, VMware, whatever, works great for that, but you're not going to run a Resolve workstation that way.
Bare metal hypervisors replace the operating system on a physical computer and allow you to run VMs from that.
Yeah, so I think about it as, so, you know, the bare metal part about it is the hardware.
You're right.
But this hypervisor, this additional layer, sits one level above what most people, I think, think about as their traditional
OS, right? Conceptually, it kind of sits at this level that is between the hardware and what
most of us think about as a traditional OS. And that's the part. That's the key component. That
hypervisor is what allows us to then install a bunch of other OSs and run virtual machines on that.
It's the traffic cop. It's the manager of the hardware to figure out, okay, well, you need this
much RAM, you need this much GPU, you need this much CPU. It is the traffic cop that lets us hand out
and sort of track and manage all of those virtual machines.
Okay, so I think you make an interesting point.
So we have this bare metal stuff.
That's the hardware that we need to do.
We have this hypervisor, and you named a few of them,
VMware, Proxmox, et cetera.
Now the next level down from that,
so now that we got the bare metal,
we got the hypervisor that's going to control that bare metal,
the next step down is what you already referred to as virtual machines
or these individual setups.
Now, I want to ask, sort of clarify and ask a question at the same time.
So when I get to that level, I'm ready to start installing some VMs.
Does that literally mean that I am basically making choices about what hardware I want to pull into that virtual machine?
Or does the virtual machine sort of do that for me?
Does the virtual machine go, hey, I'm Windows.
I need at least 32 gigs.
Like, is there a lot of automation to this?
Or is it all about the user setting it up and configuring it?
It's both.
If you're doing this manually, you can basically assign whatever resources you want and then manually install whatever operating system you want.
When this scales up to a production workflow, what you end up having is you have a set of pre-installed images.
So, for example, you would install a Windows workstation, install Resolve on it, install After Effects, install whatever
the system needs for the users, right?
Then that image gets stored on your storage.
And then when a user requests a workstation, what will actually happen is a new virtual
machine will be built for them automatically by a script.
It will say, hey, copy this image to be the hard drive of this virtual machine, send it to
the user, and then boom, Bob's your uncle.
They have literally a brand new computer that didn't exist
until this morning, and they can do their work on it because all of our media is stored on a network
storage somewhere, right? So they don't need any local storage. Then when they shut that machine down,
it actually deletes the virtual machine altogether. It deprovisions it. So all of the actual
workstations are temporary. Now, why would you want that? Because guess what? You can have two
versions of your workstation image. You can have the current working one. Then you can have
the, oh, I'm trying Resolve 20. I'm trying Resolve 21. I'm testing, right? You test with that,
and then when it's ready, you just say, hey, guess what? The new default image for users logging in
this morning is now the new one. We've just upgraded everyone's workstation in one day,
but also fully tested it across our whole workflow without any risk. That's one of the really
cool things about virtualizing all of your workstations is that you can test all you want and then
only deploy when you're ready. Yeah. So I think it's interesting because I think that for me,
being a little bit newer to this concept than you are, I was expecting some handholding when I
first set this up. And I think for the average user, the average user setting up a virtualized thing,
it's going to be pretty much fully manual unless you're at that mega huge data center level.
You're going to have to make some decisions about what kind of hardware configurations you want, that kind of stuff.
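The image-based, ephemeral-workstation idea described above can be sketched as a small provisioning script. Everything here is hypothetical: the function names, the image names, and the use of plain directories to stand in for disk images. A real deployment would make these calls against a hypervisor or cloud provider's API instead.

```python
# Sketch of image-based, ephemeral VM provisioning. All names are made up;
# directories stand in for VM disk images purely for illustration.

import os
import shutil
import tempfile

# Two versions of the workstation image: the blessed one and the one under test.
IMAGES = {"stable": "resolve-19-image", "testing": "resolve-20-image"}
DEFAULT_IMAGE = "stable"  # flip this to "testing" to upgrade everyone at once

def provision_workstation(user, image_store, vm_store):
    """Copy the current default image to act as a brand-new VM disk."""
    src = os.path.join(image_store, IMAGES[DEFAULT_IMAGE])
    dst = os.path.join(vm_store, f"{user}-vm")
    shutil.copytree(src, dst)  # a fresh machine every morning
    return dst

def deprovision_workstation(vm_disk):
    """On shutdown, the temporary VM is deleted entirely."""
    shutil.rmtree(vm_disk)

# Demo with throwaway directories standing in for real storage:
image_store = tempfile.mkdtemp()
vm_store = tempfile.mkdtemp()
os.makedirs(os.path.join(image_store, IMAGES["stable"]))

disk = provision_workstation("joey", image_store, vm_store)
print(os.path.exists(disk))   # True: the user has a brand-new machine
deprovision_workstation(disk)
print(os.path.exists(disk))   # False: nothing persists after shutdown
```

The one-line `DEFAULT_IMAGE` switch is the point of the whole pattern: testing happens on a separate image, and "upgrading everyone" is just changing which image new logins clone.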
Also, it's worth noting at this point, too, that while many of the hypervisors are free or low cost,
to get a lot of times some of the advanced features out of them, like the GPU core splitting, that kind of stuff,
you're most likely going to need a paid version of the hypervisor.
And two, just because a hypervisor can support, you know, make a virtual machine out of whatever OS,
doesn't mean that it's just free and legal to do so, right?
Like, you might still have to adhere to the EULAs of those various OSs.
You still need to buy a Windows license, for example.
You still need to buy a Windows license or check the boxes for whatever Linux distro you're using.
MacOS famously has fought virtualization for years.
I'm not going to say this definitively.
There may or may not be.
I can neither confirm nor deny that there are some people out there who have tried to get this working.
You're on your own
on that one.
There's no legal way
to virtualize a macOS desktop.
Right, that's the quick way of
saying it. Um, so, you know, I had that,
I had that, you know, sort of manual experience.
But what got me at first,
Joey, when I started setting this up,
you know, first locally and then eventually on a
dedicated machine, was
I did not understand
the concept and the role that the
hypervisor played in what I'm just going to label
as passing through, air quotes here, devices, the bare metal pieces.
So I would load up a Windows thing and go, well, what the hell?
I have this beefy GPU in the system and it's still saying it's running off the, you know,
Intel built-in, you know, CPU-based chip.
Why is that?
Well, because I was ignoring this whole idea of device pass-through.
And you mentioned this before, but very specifically, most hypervisors are going to have
categories for the things that you need to use on that bare metal.
network, storage, GPU, memory, USB.
You're going to have to choose what gets passed through from the bare metal to that machine.
And it's really important to notice this.
Not all devices and pieces of hardware can be virtualized, right?
I've run into this problem with some audio interfaces and that kind of stuff,
where they freak out that they're not running on the bare metal and that they're virtual.
So there is some experimentation at play about what you're going to pass through and how that works.
And that's actually another big advantage to multi-user setups, though, because you can, for example, dynamically assign USB devices.
What does that mean?
It means dongles.
If you have, for example, 10 Pro Tools workstations that are all virtualized now and only one person needs this really expensive plugin suite, you can say you get the dongle today.
and then if you move them to another machine,
they can get the dongle there.
And you can take a USB dongle.
There's actually software out there too
that will help manage this from a large facility perspective.
But the idea to think about is you could share one dongle.
It's still one dongle per user.
But if you have 10 users and five dongles,
but only two or three users need a dongle at a given time,
you just saved yourself 50% on software cost
for a 10 user installation.
You know, that's not inconsequential.
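The dongle math above can be modeled with a tiny checkout pool. The `DonglePool` class is invented for illustration; in practice this is done with USB-over-IP tools and the dongle-management software mentioned above.

```python
# Illustrative sketch of a shared pool of USB license dongles,
# assigned on demand instead of one per workstation. Names are hypothetical.

class DonglePool:
    def __init__(self, count):
        self.free = count
        self.assigned = {}

    def checkout(self, user):
        if self.free == 0:
            raise RuntimeError("all dongles in use - wait for one to free up")
        self.free -= 1
        self.assigned[user] = True

    def release(self, user):
        del self.assigned[user]
        self.free += 1

# 10 users but only 5 dongles bought: a 50% saving on that plugin suite,
# as long as no more than five people ever need it at the same time.
pool = DonglePool(5)
for user in ["u1", "u2", "u3"]:
    pool.checkout(user)
print(pool.free)  # 2 dongles still available
```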
Yeah, but it's just interesting,
and we won't get into the thick of this today,
but it can get a little squirly
and, you know, hard to get your mind around sometimes
the networking side of this and some of the devices,
but conceptually, it's about taking whatever
the actual pieces of hardware are,
that bare metal,
and choosing what is going to get sent through
to that particular virtual machine,
and what components of that are going to get sent through.
So we've talked a lot about kind of all the technical details and the how and the why,
but what I think that we haven't touched on and what I really want to get into is
what does this look like for most of our listeners, I think, and for us individually.
So the reason why we're doing this and the reason why, I mean, I say we're doing this,
Robbie and I are experimenting with this in great ways.
But I want to give you two hypothetical use cases, a locally hosted use case and a cloud
hosted use case, both for Robbie's and my actual color business.
So let's just lay it out.
What do we have right now?
We have three locations.
We have my house.
We have Robbie's house.
And we have an office where we have clients come in for reviews or look setting sessions
or anything that we want to be in person.
So clients don't necessarily have to come
to our house. That means three workstations. That means three reference monitors. That means three
control surfaces. That means syncing all of our project data to three locations. And in general,
this has worked very, very well, especially with the Resolve cloud database:
Robbie can start a project, I can pick up a project, then we can go to the office and review the
project with the client. This is wonderful. But there's a lot to manage with syncing
data and having three different things in three different locations and having, okay, do we have
this plugin installed on all three? If I go to the office to do a review, have I synced all
the DCTLs I'm using? Little details like that get to be a hassle. So what we've kind of conceptualized
and started experimenting with is both having this completely hosted in the cloud. So we pay a monthly
fee for all of our storage and all of our workstations.
And we don't do anything.
All we have at each of the three locations is a basic computer with a DaVinci control surface like a
Mini Panel and a reference monitor.
And that basic computer has a DeckLink so we can use DaVinci remote monitoring for our
playback.
So we have a virtual workstation somewhere in the cloud that is sending DaVinci remote monitor
to my house one day, Robbie's house one day,
or the office one day.
I want to stop you for one second
because I think this is something
that's hard to get your mind around.
So you said,
hey, I had this basic computer
that I'm interfacing with
on this virtual machine
somewhere in the cloud.
I think this is a core component
that we shouldn't gloss over.
When you are working with a virtual machine
and a dedicated server,
not running like a VM locally,
but like as we've described,
you're connecting to that machine
basically through a remote desktop connection, right?
And there's plenty of tools
to do this.
Some of them are more preferential for this kind of work than others,
but it's essentially a remote desktop connection.
And likewise, just like you're passing data or passing bare metal hardware components
to the virtual machine, in this remote desktop solution,
you're also sending things like USB control, et cetera,
from the local system to the virtual machine via this remote desktop app or some other protocol.
Yeah.
So some of the things, like we were talking about earlier,
like this has been, it was really easy to get this going with servers.
It was a lot harder to get this going with workstations because you didn't have gigabit
connections for remote desktop.
And you'll notice, Resolve only recently really started embracing remote panels and remote
reference monitors.
That's directly because people wanted to remote out their workstation.
So now we kind of have this capability where the entire resolve system can be somewhere else.
And then the local office can have a basic desk
with a DeckLink and a Mini Panel, and it's good to go.
And so one other thing about this, so if we were using remote desktop to connect to and control that virtual machine,
and you had, you had mentioned this, but I want to just be clear about this, unlike the traditional setup where that deck link is, you know, is getting information or the signal directly from your local computer.
This is a little bit of a mind job, but you actually become like the client, you're the client on this end.
You are streaming from the virtual machine to your local setup and then pumping that out to a monitor.
So you are essentially streaming the video signal rather than having it run directly, you know,
over an SDI cable from the computer, direct to your monitor from the main workstation.
Your remote, your basic setup becomes sort of the interface for that remote stream.
Exactly.
So let's go back to this hypothetical Robbie and I situation.
if we were to do this, why?
Well, think about it this way.
We've got three offices and only ever have two people using them at any given point.
We would only need to build a virtual environment for two Resolve systems with enough juice for two systems.
That way, Robbie and I could always be working and we could be at our house or we could be at the office or we could be on a beach remote desktoping in from an iPad to do basic stuff like conform or editorial
things. And as long as we have the resources to always have two machines running, we can run
three locations. We could run another review location somewhere else if we needed to. If I needed
to go on vacation or like if I was going to be gone for a while, we could, I could bring a
monitor with me and then have the full experience and the full real time playback as if I was on my
main workstation. So the idea of being able to completely remove the location,
from where you're computing is really, really powerful to us,
especially since we bounce between locations so often,
and we have the needs to do client reviews, et cetera.
Now, right now we're kind of considering building that as a locally hosted solution.
Big server, either probably at my house or Robbie's house,
because we have really fast internet.
And then that serves out the resolve workstations to all three locations.
The other option is to go onto AWS,
to go on to Linode, to go on to any of the cloud service providers, and build up a Resolve
image that does everything we need, buy enough storage, cloud storage that's high speed to connect
to those resolve images.
And then Robbie and I both just connect to our resolve image in the cloud.
All of our media is now only in one place.
So we never need to sync anything.
And we could be at the office.
We can be at my home.
We can be at Robbie's home suite.
It does not matter one bit.
And the advantage of doing it in the cloud scenario,
although the cost right now is a lot,
a lot more expensive than hosting it locally,
is, one, that cost is going down.
But also, okay,
I just finished a sixth episode series
that was maybe 30 terabytes of data.
Once we're done with that
and we archive all of it to tape,
or maybe cold storage in the cloud,
we delete it from the cloud storage we're renting,
and we're not paying for it anymore.
You're only paying for the storage you use at a given time.
So we can scale up and down based on project need.
That's kind of one of the arguments for the cloud environment.
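That scale-up, scale-down storage math can be sketched in a few lines. To be clear, the per-terabyte rates below are invented placeholders for illustration, not any real provider's pricing:

```python
# Rough monthly cost model for elastic cloud storage.
# All rates are hypothetical placeholders -- check your provider's pricing.

HOT_RATE_PER_TB = 25.0   # $/TB-month for high-speed working storage (assumed)
COLD_RATE_PER_TB = 4.0   # $/TB-month for archive/cold storage (assumed)

def monthly_storage_cost(hot_tb, cold_tb):
    """You only pay for what is provisioned this month."""
    return hot_tb * HOT_RATE_PER_TB + cold_tb * COLD_RATE_PER_TB

# During the six-episode series: ~30 TB of active media in hot storage.
during = monthly_storage_cost(hot_tb=30, cold_tb=0)

# After delivery: archive the 30 TB to cold storage and free the hot tier.
after = monthly_storage_cost(hot_tb=0, cold_tb=30)

print(f"during production: ${during:.2f}/month")  # $750.00
print(f"after archiving:   ${after:.2f}/month")   # $120.00
```

The point is the shape of the curve, not the exact dollars: the bill tracks the projects you actually have online.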
So there's a few things to unpack there as well.
So you're right to point out that there's ways to do essentially self-hosted virtualization, right?
Put a machine in a rack somewhere, put it on the internet, that's that.
And then the cloud side is somebody else's computer. You know, I always
love this joke, right? The cloud is just somebody else's computer, which is just
sitting in a data center somewhere. So I see both sides of it. I think
you and I are prone a little bit to the first one simply because of a control issue:
liking to have the hardware right here, so if something breaks on it, I'm not waiting for
somebody, you know, that kind of thing. Whereas with the cloud, it's like, hey, well, if something
goes down or the machine needs to be restarted, there are some
logistical things there.
The big logistical thing
that I have with an all cloud VM workflow
is the media, right?
It's that, you know,
there's no way to get around,
hey, I need to get this media
to the virtual machine, right?
So that's either going to be an upload,
which potentially could be very, you know,
heavy depending on how much you're putting up there.
The other advantage, I think you've rightly pointed out,
Joey, about the cloud way of doing this
is that it is a little bit more scalable,
right? Because even though you're working on a virtualized machine, the whole data center is virtualized, right?
So them having to say, oh, well, you need another 32 terabytes of storage, no problem.
Even though, you know, they're doing that on their end.
Yeah, if we build it ourselves, we need to buy the hardware for the biggest thing that we'll ever need initially.
Whereas, you know, a lot of colorists work in partnerships where they might have five colorists in their kind of group.
that are all over the country, and they all kind of represent themselves and book each other
together.
That kind of collective arrangement is the perfect use case for a cloud scenario, because
you only pay for what storage you need.
If three of the five aren't working on collective projects at one given time, you only
need to pay for two workstations.
You want another workstation.
You add another workstation.
And you can scale as appropriate depending on the capabilities of the things.
So the other thing, the other advantage I see of the cloud, though, is that also,
it's a relatively minimal setup.
Like you can actually go on the AWS marketplace
and purchase pre-configured machine setups.
Like if you want to buy Resolve in, you know,
AWS instance of a resolve machine,
just a couple pull-downs.
I need this many GPUs.
I need this much memory.
I need this much storage.
Bam, buy it and you're done.
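As a toy illustration of those pull-down choices, here's a sketch that picks the smallest instance meeting a GPU and memory requirement. The names mirror real EC2 GPU instance sizes, but this tiny catalog and the selection logic are simplified assumptions, not how the AWS Marketplace actually works:

```python
# Illustrative instance picker. The instance names mirror real EC2 GPU
# families, but this catalog is a simplified assumption, not AWS's list.

CATALOG = {
    # name: (gpus, ram_gb)
    "g5.xlarge":   (1, 16),
    "g5.12xlarge": (4, 192),
    "g5.48xlarge": (8, 768),
}

def pick_instance(min_gpus, min_ram_gb):
    """Return the smallest catalog entry meeting both requirements."""
    candidates = [
        (ram, name) for name, (gpus, ram) in CATALOG.items()
        if gpus >= min_gpus and ram >= min_ram_gb
    ]
    if not candidates:
        raise ValueError("no instance type satisfies the request")
    return min(candidates)[1]  # smallest RAM that still qualifies

print(pick_instance(min_gpus=2, min_ram_gb=128))  # g5.12xlarge
```

In practice you'd make the same call through the console pull-downs; the sketch just shows that "this many GPUs, this much memory" is the whole decision.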
In the arithmetic of this,
as of 2025,
that workflow has proven to be more expensive
than the bottom-line number that I would have for buying the equipment and either depreciating
or leasing that equipment, right?
So, like, if we're using the machines, you know, between the two of us, say, 70 hours a week,
right, it's going to be a couple grand a month to make that happen in the cloud.
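A quick back-of-the-envelope version of that math, using an assumed hourly rate for a GPU workstation instance (the $6.50/hour figure is a placeholder, not a real quote):

```python
# Rough cloud-vs-local math. The hourly rate and amortization period are
# assumptions for illustration, not real quotes.

HOURLY_RATE = 6.50      # assumed $/hour for a GPU workstation instance
HOURS_PER_WEEK = 70     # combined usage between the two of us
WEEKS_PER_MONTH = 4.33

cloud_monthly = HOURLY_RATE * HOURS_PER_WEEK * WEEKS_PER_MONTH
print(f"cloud: ~${cloud_monthly:,.0f}/month")  # roughly "a couple grand a month"

# Versus amortizing an owned (or leased) build over three years:
BUILD_COST = 60_000     # assumed all-in local server cost
local_monthly = BUILD_COST / 36
print(f"local: ~${local_monthly:,.0f}/month")
```

At heavy, steady usage the owned box wins today; at lighter or burstier usage the cloud math flips, which is why the hours-per-week number is the variable to watch.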
So that got us to thinking about, okay, we're not quite yet ready for full cloud implementation,
though we see that benefit.
How can we do this locally?
And so we started looking into this.
And I think one thing that we skipped over earlier,
which is worth mentioning, and you just sort of said it:
we'd have to kind of envision the biggest, baddest scenario that we want.
I want to be clear about something.
The cost of a machine to do this, air quotes, the right way, is very high.
It is not, this is not a cheap investment.
This is not like going to buy, you know, a Mac Mini from the Apple Store and plugging it in.
We're talking server-grade hardware, server-grade thinking about components, cooling, power,
you know, the whole nine yards.
So if this feels like, oh, that's cool, but this is a lot to chew off,
then the cloud way of doing things is probably more your, you know, your speed.
For us, it was like, yeah, we want the local control.
We want the scalability of this.
We want the power to just plug in, et cetera.
So briefly, I want to just talk about the core things that I've been thinking about in that setup.
So number one is obviously thinking about the CPU architecture and the CPU power.
I'm thinking server level CPUs for this.
I am not thinking, you know, consumer Core processors.
I'm probably not even thinking, at this point,
Xeons, because they have some issues.
I'm thinking things like Epyc, some of those really big monster AMD Epyc kind of setups.
I'm talking 96 cores, 128 cores, you know, those kinds of setups, right?
So we can have full-on workstations.
Those processors, really expensive, like 10, 12 grand each.
Next, I'm thinking about RAM.
Obviously, the more RAM,
the more memory, the better.
You know, the box that we've been kind of specking out has about 512 gigs as a sort of baseline.
We could add more to that, to be honest with you, to run, you know, more machines on it.
You know, a terabyte is not out of the question for amount of memory for a big box like this.
Next, I'm thinking about GPU.
Honestly, this is the harder part of this equation because, as Joey pointed out before, a lot of the
consumer-level GPUs are not really virtualizable,
and I made up that word,
in the sense of being able to carve out specific
functionality.
You can use the whole GPU,
but if you want 10 cores from this GPU one day and 20 cores the next day or
whatever,
you're going to have to go to the data-center, server level,
you know,
Ada, Blackwell,
et cetera.
Those, you know,
what we traditionally thought of as Quadro-card kind of level,
you know, data-center GPUs, are also not inexpensive.
These GPUs can be, you know, four or five grand a pop.
You're going to need a couple of them, right?
And then lastly, it's the storage and the networking.
Fortunately, we've already thought about that part of the story.
We already have most of the storage we need in play.
We have big NASes with, you know, terabytes and terabytes and terabytes.
So just to be clear, the estimate that I have for doing this kind of buildout is right around
50 to 60 grand as a baseline and we could probably get it up to about 80, 90 grand if we really
wanted to get after it.
If we really wanted to get after it, right?
So this is not something that we're taking lightly and it's a huge cost investment,
et cetera.
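Tallying the component prices tossed around above gives a sense of where that figure comes from. Every number here is a rough assumption from this conversation, not a vendor quote:

```python
# Rough bill-of-materials tally for a virtualization server build.
# All prices are ballpark assumptions, not vendor quotes.

build = {
    "2x AMD Epyc CPUs (~$11k each)":         2 * 11_000,
    "1 TB ECC RAM (assumed)":                8_000,
    "2x data-center GPUs (~$4.5k each)":     2 * 4_500,
    "chassis, PSU, cooling, NICs (assumed)": 8_000,
}

total = sum(build.values())
print(f"baseline estimate: ~${total:,}")  # before storage and networking
```

That lands in the neighborhood of the 50-to-60-grand baseline once storage and networking are layered on top.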
So what we're doing in the meantime before we dive all that is we are testing this.
I have a Threadripper, a 32-core 3970X Threadripper that I'm not using anymore.
It's on a workstation motherboard.
We're going to pop this into a machine with storage the whole nine yards and see what it's like.
We know that it's going to have some limitations, but we're using already owned hardware to test this kind of thing out.
And if it works, great.
If it runs into problems, we know where those problems are.
But then the next step after that would be obviously to scale up to something that is more scalable as well as just more robust in general.
Yeah, and I know it sounds like a gigantic amount of money.
but if you look at, okay, my workstation at my home office was $14,000, $15,000.
My storage was probably close to $10,000.
So that's half the cost right there on one site.
We're talking about replacing three sites with one box.
And that's why when we started off this conversation and I said,
I always think about this every time I have to renew those lease agreements and buy four or five machines,
that's why I'm thinking about it because every time I'm like, well, gosh.
And to be clear, you know, before anybody gets any ideas that we're just rolling around with, you know, 90 grand of extra cash,
this would probably be a lease situation to kind of handle, right?
Like, I'm not going to sink cash into it, because it's also one of those things where, you know, very, very quickly, in a year or two, this thing is going to start to go downhill.
I don't want to necessarily sink all that cash into something that is going to depreciate pretty, pretty quickly.
And so to manage those costs, it would probably be, yes, we're going to
lease it, or we're financing this somehow. But that's also another advantage of doing it this way: once we
build out the virtual workstation images as we want them, if we five years down the road, like,
great, we need a much beefier box and we've brought on other colorists and we're much bigger now,
those images are just files. We can migrate those to a new host without redoing any of the
configuration work, right? We just throw new hardware at it. Or if we want to take this to the
cloud later, we can take those images and upload them to the cloud and basically be working
the exact same way. We're taking our entire, you know, three location workstation set up,
all the configuration, all of the hassle, and putting it in essentially one file on a server
somewhere, that becomes very, very portable, which is one of the reasons why this is really
attractive to me. Yeah, and I agree. So I mean, we're going to try it on that Threadripper system.
Even if you don't have that, you know, I mean, that's a pretty robust system by itself.
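That "images are just files" portability can be sketched with Proxmox's own tooling: vzdump and qmrestore are real Proxmox CLI commands, though the VM id, paths, and archive name below are hypothetical, and this sketch only builds the command lines rather than running them:

```python
# Sketch of the "images are just files" migration story using Proxmox-style
# commands. vzdump / qmrestore are real Proxmox CLI tools; the id, paths,
# and archive name here are hypothetical. We only build the command lines.

VMID = 100              # hypothetical VM id on the old host
ARCHIVE = "/mnt/backup" # hypothetical shared backup location

def backup_cmd(vmid, dumpdir):
    # Snapshot-mode backup keeps the VM running while it is dumped.
    return ["vzdump", str(vmid), "--mode", "snapshot", "--dumpdir", dumpdir]

def restore_cmd(archive_file, new_vmid):
    # On the new, beefier host, restore the same image under a fresh id.
    return ["qmrestore", archive_file, str(new_vmid)]

print(" ".join(backup_cmd(VMID, ARCHIVE)))
print(" ".join(restore_cmd(f"{ARCHIVE}/vzdump-qemu-100.vma.zst", 200)))
```

The configuration work lives inside the dumped image, which is why moving hosts, or eventually moving to the cloud, doesn't mean rebuilding anything.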
I would encourage those of you who are interested in this.
There's a lot of very, very cheap ways to get into this.
Of course, you can start out with the local VM pieces of software, those virtualizers,
and try running Windows or Linux or whatever on your local machine.
But if you want to get away from that and run it on a dedicated server, sure, start out with a Mac Mini or an Intel NUC or, you know, one of those thin clients that, you know,
HP makes or whatever,
download Fusion or Proxmox or something like that,
you can get into this fairly easily.
And to be fair,
I actually got into this not for work reasons,
not for workstation reasons.
I got into this because Joey got me into Home Assistant a couple years ago, right?
And then the next thing I know,
I was into OctoPrint,
and the next thing I know I was,
you know, into whatever.
And all of these things really didn't warrant a computer by
themselves, right? Like, I wasn't going to be like, oh, I'm going to buy a dedicated server just to run my Home Assistant on and spend three grand on it, right? But I got into it because I'm like, cool, well, I can get this one relatively small box. I bought it off eBay. It's like an i7 or something like that. I've got 64 gigs of RAM in it. And guess what? It runs like six virtual machines for me just fine. And I know you have a higher-end one, like a 1U Supermicro, but same idea. Yeah.
I'm already doing this on the server side for everything we need for our business, right?
We have not just my little personal stuff, like the Home Assistant and some of the other stuff,
but all of the stuff that runs our project management or remote workflow optimizations
or synchronizing software that we use.
That's all running on a little 1U Supermicro box sitting in my rack.
And it's running, I think, 12 virtual servers right now.
So on the server side, do it right now.
It's cheap.
It's easy.
Your Resolve project server can be on that.
It's super nice to be able to say,
oh,
there's this really cool open source like media transcoder that I want to try out.
Let me just make a computer for it by clicking one button, right?
You can try different workflows with it and different servers and things like that.
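As a toy version of that sizing question for a small starter box, like the used i7 with 64 gigs of RAM mentioned above, here's a sketch of how many identical service VMs fit on a host. The hypervisor RAM reserve and the note about CPU oversubscription are simplifying assumptions:

```python
# Toy capacity check: how many identical VMs fit on a small host box.
# Numbers and the 4 GB hypervisor reserve are illustrative assumptions.

def vm_capacity(host_cores, host_ram_gb, vm_cores, vm_ram_gb, reserve_ram_gb=4):
    """How many identical VMs fit, leaving some RAM for the hypervisor."""
    by_cpu = host_cores // vm_cores                       # cores can often be oversubscribed
    by_ram = (host_ram_gb - reserve_ram_gb) // vm_ram_gb  # RAM usually can't
    return min(by_cpu, by_ram)

# A used 8-core i7 with 64 GB RAM, running small 1-core / 8 GB service VMs:
print(vm_capacity(8, 64, 1, 8))  # 7
```

Which is roughly why a cheap eBay box happily runs a half-dozen service VMs: small services are RAM-bound, not CPU-bound.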
It's just what's new, I think,
and what's cutting edge is trying to move your entire workstation environment.
And so I think, you know, to kind of narrow in on this, I think that at the very high end, you know, if you're an engineer at Company 3 or whatever, this is like, yeah, cool, guys, we've been talking about this for a decade, right? At the high end, these kinds of workflows are in place. You know, I mentioned Major League Baseball earlier. NFL Films does something similar. The big, big entities have been doing this for years for all of the reasons: the cost-
saving reasons, efficiency reasons, et cetera.
What is new, as Joey pointed out,
is this trickling down to regular Joes like us.
And I think it's something to think about,
and I think, you know, the math that I do in my head
is I weigh, okay, let's look at the one-to-one cost.
I have this for buying five machines.
I have this for buying one machine.
Then I look at it as, like, okay, well, what are the pros of local?
These are machines that I control and operate.
Maybe I can control and operate them here;
maybe I don't over there.
Like, you've got to build your own pros and cons list for this.
We, right now, to be completely transparent, our pros and cons list is kind of about equal, right?
So that's why we're a little hesitant to drop, you know, tens of thousands of dollars on this right now,
and we're biting off, like, oh, you know what, I've got this Threadripper box that we can try this on.
Let's see how it goes, you know, and see where the pitfalls are.
And honestly, I think there are probably a lot of pitfalls that we're not envisioning quite yet that are only
going to come up later, like, I'm sure we're going to hit a situation and be like, oh, you know what,
that Keyboard Maestro macro that you have doesn't pass through in the right way to this
VM. Like, there's going to be small things like that, right? But I think from a hardware point of view,
the dream that I have is, okay, eventually, I'm saying maybe five years from now, we have a box that
is built to our spec, and we designed it how we want. We're going to do
a colo arrangement in a data center, put our own box into that data center.
That exists, by the way.
You can, yes, you can buy other people's, but you can also bring your own stuff in.
Do a colo.
And then guess what?
I'm going to live anywhere that I want with any housing situation that I want, and I'm still
accessing a world-class workstation every day to go to work.
Can't beat that.
You know, the hardware is getting cheaper, and the virtualization in the cloud is getting
cheaper. So make no mistake about it: the cost comparison right now,
like Robbie said, is probably equal, maybe even more expensive to do it this way, definitely
more expensive to do it in the cloud. But that is moving forward and getting better and better
every day. I do really think this is the future for how creative work is going to be done.
And that's why we wanted to talk about it, because you might not be doing this tomorrow,
but the idea of I can virtualize everything and now where the computer is doesn't matter is incredibly
powerful for artists and small businesses.
Yeah, and the small business part, I think, you know, we didn't really hit on this,
but there's a lot of things around the edges of this discussion that are worth weighing.
You know, for a large company, I mean, like, honestly, to say the obvious, electricity is a huge,
like, pain point.
Like, would you rather have a machine room full of, you know, 30 computers versus, you know, whatever, one or two virtualization servers?
Like, you know what I'm saying?
Like, there's a lot of things that add up into that.
And it's going to be specific to your business and to your workflow, your work model.
But it is definitely something to consider.
And I think that, Joey, I think if we have this conversation again in five years, I think it's almost assured that we're probably going to be in this workflow.
And I think that a lot of other companies, our size, are going to consider this workflow.
as the internet gets better, the remote desktops get better,
the pass-throughs get better, I think we're going to be there.
So, a lot to chew on and a lot to bite off in this one,
but hopefully you have kind of the, you know,
50,000-foot view of what a virtualization workflow is and can be.
You can go everywhere from a local hypervisor running on your local machine,
all the way up to a full-blown data center approach
and all the steps in between.
It's definitely something to consider.
So, as a reminder, you can always head over to theoffsetpodcast.com and find show notes
and check out our library of episodes.
Of course, the show is available on all major streaming platforms, including YouTube, Spotify, Apple Podcasts, etc.
If you have any questions, feel free to head over to the Offset Podcast site and drop us a line there.
There's a little form.
You can submit show ideas or ask us some questions.
And, of course, if you do like the show and feel like supporting us, you can buy us a cup of virtual coffee (or real coffee, in Joey's case).
Any support really does mean a lot and goes a long way.
Any support you might be able to offer really helps us out.
keep producing these episodes every two weeks
pretty much year round
so we appreciate any support you can give
and thanks as always to our sponsor Flanders Scientific
couldn't do the show without you
and to our great editor Stella
All right, Joey, good chat.
I think our next episode
will probably turn from your wheelhouse
maybe back a little to my wheelhouse.
I'll have to think about some subjects to that
degree. But to all our listeners,
thanks for checking out the episode
and for the old offset podcast
I'm Robbie Carman,
and I'm Joey
D'Anna. Thanks for listening.
