The Offset Podcast EP028: ARM In Post Production Part 1
Episode Date: March 17, 2025

ARM SoCs (systems on a chip) have become a hot topic in the computing world in the past few years. Apple-branded 'Apple Silicon', Qualcomm's Snapdragon, Ampere's Altra, and others have been disruptive in a world once dominated by x86/x64-based systems from Intel & AMD.

In Part 1 of a two-part series on ARM in post-production, we explore some of the essentials of ARM systems, including:

- Basics of ARM vs x86/x64 processors
- RISC vs non-RISC CPUs
- The flexibility & scalability of ARM
- The goal of a uniform product architecture and its advantage for a company like Apple
- GPU design/performance - the surprise of Apple's ARM implementation
- The appeal and benefits of efficiency and low power consumption
- The benefits of unified memory
- Package scalability - faster/more cores, multiple SoCs
- Does clock speed matter with ARM SoCs?
- Additional benefits - onboard encode/decode abilities
- Are SoC GPUs ever going to be on par with discrete GPUs? Will discrete GPUs ever come to Apple ARM systems?

In part two, we'll dive a bit deeper, exploring additional topics, including how cloud-based ARM computing could be a game changer for cost-effective, decentralized post workflows, what the future may hold for workstations from Apple and others, and much more.
Transcript
Hey there, and welcome back to another installment of the Offset Podcast.
And today, we're talking about ARM processors and why they should be important to you in the post industry in general.
This podcast is sponsored by Flanders Scientific, leaders in color accurate display solutions for professional video.
Whether you're a colorist, an editor, a DIT, or a broadcast engineer, Flanders Scientific has a professional display solution to meet your needs.
Learn more at flanderscientific.com.
Hey everybody, I am Robbie Carman, and with me as always is my partner in crime, Joey D'Anna.
Joey, how are you doing, buddy?
Good. How are you?
I'm doing swell.
Now, Joey, I know that this is an episode that you have been waiting to record for a long time, simply because you are a computer nerd. And I have to give a little inside-baseball, behind-the-scenes story here for our audience.
And that is many of you might know that Joey has a large vehicle collection.
I don't know, 15 cars, 20 motorcycles,
you know, probably boats and planes that we don't know about.
But besides that large collection of vehicles,
Joey also has a large collection of antique computers.
He's probably the only person that I know
that has a legitimate supercomputer array in his basement.
He has various Ataris and NeXT computers and SGI boxes, going back to, you know, the early to mid-80s. He's a little bit of a connoisseur and collector of these things.
So when it comes to computing, computing history, how computers do their thing and work,
Joey's always my go-to reference for these kind of things.
And just this week, Apple announced some new computers, some new Mac Studios. A couple of months ago, they announced a little Mac Mini. And they, as you all probably know, have for the past several years been championing their own processors, having made a switch over from Intel processors, which they used for about a decade or so.
And now they're creating what they generically refer to as Apple Silicon, but it's not something that is brand new, actually. This is based on what's called ARM technology, or ARM processors.
And so we want to talk a little bit about that today, because we've got a lot of questions popping up in email and different forums from people who know us, saying,
Hey, what new Mac should I buy?
What's the deal with Unified Memory?
What's the deal with GPU cores?
All that kind of stuff.
And honestly, Joey, I think it's a subject we've been skirting for a while because, you know, Apple, you know, has been Apple.
And, you know, they use a lot of really fancy marketing.
And it's, you know, it's tough sometimes to wade through that.
But also, I think that we generally have the attitude of we're agnostic about computers, right?
It doesn't matter if it's a Windows box, a Mac, a Linux system,
like do whatever you're comfortable with,
do what you get the job done best.
But in this particular case, we thought that it'd probably be a good time
to talk a little bit about ARM,
just because of these new announcements from Apple, a lot of people are in that cycle.
Maybe they invested in the M1 series, you know,
three or four years ago.
Now it's time for an upgrade.
So let's cover this.
But before we do, I want to make one asterisk caveat.
We are not going to, in this episode, go into a deep, deep, deep, deep dive of chip manufacturing.
What I say is lithography, but you say it.
How do you say it?
Lithography.
Lithography.
Okay.
However, today.
Like photography, but with lifts.
There we go.
The idea of how the fabrication of chips is done, there's plenty of information on that on the web, if you want to take a deep dive into how the billions and billions of transistors and interconnects are made. We'll touch on some of that, but I just want to preface this by saying, if you are a, you know, a CPU designer, this is not your episode, okay?
We're talking to fellow colorists and editors, people who are looking to use this technology for creative means, methods rather, and reasons. If we say anything that you think could use more clarification, just let us know.
Along those lines, of course, you can always follow us on social media.
We're on Instagram and Facebook.
Just search for The Offset Podcast. You can always go over to offsetpodcast.com and check out show notes.
Of course, we're on YouTube as well.
And wherever you find the show, just do us a favor: like, subscribe, download, tell your friends. The more eyeballs we get on this, the better it is.
So Joey, let's dive in, man, now that we got all that out of the way.
I guess a place I want to start: give us a little history of ARM. What is ARM? Why does it matter, you know, that kind of thing.
ARM, overarchingly, is a CPU architecture and what's called an instruction set. It's basically what the CPU instructions do and represent, and how they're laid out in an actual piece of hardware. It's basically the standard that the CPU, or central processing unit, of the computer adheres to.
So that doesn't mean that all ARM processors are the same, just like all x86- or x64-compatible processors aren't the same. You've got compatible processors from Intel, from AMD.
Back in the day, there were a couple other companies.
These days, it's really just Intel and AMD.
But multiple manufacturers make their own CPUs that are compatible with the ARM architecture.
That means when you compile software for ARM, it will work on that CPU.
It's different than what we're used to with Intel x86/64, which was pretty much the standard for workstation-level computing for a very long time, because it got really, really good.
That's why Apple adopted it.
You remember long before the Intel Macs, there were the PowerPC Macs.
That's what I was going to add. It's kind of like the same analogy, right?
Where back then, you know, Apple wasn't making them either; they were relying on a different company other than AMD and Intel. In this case, it was IBM.
IBM, yeah.
IBM was making them when Apple called it the Apple G3 or G4 or G5. It was an Apple-specified IBM unit.
Got it.
Now, same thing moving forward with ARM.
Apple is specifying a lot of stuff
and then having their OEM manufacturers make them.
Now, ARM is what's called a RISC architecture,
reduced instruction set computing,
where essentially the amount of things the processor does,
like the amount of options you have
for instructions to give it,
whether it's like, oh, move this memory, add this, subtract that, multiply that.
They're broken down into a much smaller group of fundamental primitive instructions.
So whereas certain architectures like Intel might have instructions for really detailed mathematical operations that take a long time to run, the idea of RISC is that all the instructions run really, really, really fast because they're all really small parts of the program. And because of that, we can optimize the processor to do lots of cycles really quickly, and we can make them more efficient.
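To make that concrete, here's a tiny hypothetical Swift sketch of the load/store idea Joey is describing. The assembly shown in the comments is simplified, typical compiler output for each architecture, with illustrative register names, not an authoritative dump.

```swift
// Hypothetical illustration of the RISC idea: one high-level operation,
// two very different instruction styles underneath.
func addFromMemory(_ total: Int, _ value: UnsafePointer<Int>) -> Int {
    // On x86-64 (a CISC design), the compiler can fold the memory read
    // into the arithmetic instruction itself, roughly:
    //     add rax, qword ptr [rsi]   ; read memory AND add, one instruction
    //
    // On ARM64 (a RISC, load/store design), memory access and arithmetic
    // are separate, simpler instructions that each complete very quickly:
    //     ldr x8, [x1]               ; load the value from memory
    //     add x0, x0, x8             ; add two registers
    return total + value.pointee
}
```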
RISC and non-RISC have kind of butted heads back and forth over the past decades as to which was in practice really the best performer for which workloads, and a lot of this really goes into software engineering and how you take advantage of the processor. But these days, ARM is a RISC architecture that Apple has decided to move their entire product line to. And it's been around for a very long time. It's a very mature architecture. It's very powerful. And it has some really cool features and things about it that really work well
for desktop computers, phones, smaller computers, and in our case, for post-production and image
manipulation.

Well, before you go on, there's a lot to unpack there. I want to make a couple statements-slash-questions about this. So ARM, as you said, has been around for a while, but where has it been, right? Because, you know, from my perspective as a consumer, of course I know about Intel, of course I know about AMD, and those are kind of the two games in town, right? And before that, it was pretty obvious because, oh, Apple's pushing PowerPC with, you know, G3, G4, G5, that kind of stuff. But it's not like ARM hasn't been around, as you said, but where has it been?
It's actually mostly been in embedded applications.
Small little efficient microcontroller
or above microcontroller level things
in your smart devices, in your car,
and any kind of thing that needs processors.
And the cool thing about ARM is its flexibility. We'll talk about this a lot in terms of flexibility, right? ARM can scale from, they just came out with one that's like less than a millimeter in size, it's like the new record holder for the smallest CPU, to, as we'll talk about, gigantic 128-core, 256-core data center monsters. ARM is designed to be able to scale in ways that other architectures might not have been able to.
Okay, so that makes sense.
And so now, basically there is a foundry, you know, TSMC or somebody else, making these ARM processors, and these companies like Apple and others are now saying... well, Apple being the size that they are, as you pointed out, when they say they designed the chip, that's a little bit of a misnomer, if I'm guessing what you're saying. They've had a lot of input on the features and the architecture that they would like to see, but it's not like they're the ones actually making the chips.

Yes and no. Apple has gone to what I consider to be kind of a higher level of OEM with this than most people have.
You know, they're not just saying, hey, I need an ARM chip at this clock speed with this much memory on board and this much cache and this many cores, right? They are going in and designing their own interconnects, which we'll talk about later. And most importantly, they're designing their own GPUs. Whereas the CPU part of this system-on-a-chip package, you know, there's parts of it that are kind of shared between multiple products.
The GPUs on the new Apple systems are actually 100% designed by Apple.
And I have to say, as everybody that's used one of these new ARM Macs can attest, they kind of came out of nowhere and became heavy hitters in the GPU world in terms of performance.
All right. So that makes sense. So they have a core design that's outside of the regular offerings. Like if you and I just wanted to get some ARM processor for our new, you know, widget, whatever, we'd probably buy something off the shelf. But Apple, being the size that they are, the product they're moving, has a lot more say to go, no, these are the design considerations we want you to make, make this for us like this.
Yeah, they're designing what's called the package, the system on a chip, or an SoC. So that's got the CPU, a multi-core CPU. It usually, in the case of these systems, has memory built in as well,
and the GPU, and anything like external interconnects, whether or not it has access to PCI slots, whether it has serial ports, all of those things. Apple is designing the entire package, whereas if we were going to make a product that had an ARM in it, we would buy something that you could buy a development kit for.
Yeah, right, or go to CDW or something
and buy some processors, right?
I got it.
Okay, that all makes sense.
So, I guess, you know, the rumors about Apple specifically going to ARM started a long time ago. You know, there was, oh, we're unhappy with Intel. Intel's got, you know, too many thermal problems, too many, you know, power problems. So I think that writing was on the wall for a long time. But when they made that change, you know, for me, I think the first time that I even really heard ARM was when Apple released to developers a Mac Mini-style developer unit, right?
Because as you said earlier, part of the challenge with this changeover was, hey, we have a whole new instruction set and a whole new way that, you know, the underlying programs have to be handled by this new architecture. So they got it in developers' hands for a while to try it out. And then they had, you know, for a long time, I think it's still around actually, Rosetta, an application that they had to kind of translate...
It's still in the OS.
Yep.
Yeah.
Yeah, older instruction sets, Intel or x86, to this new architecture.
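As a quick aside for the technically curious, here's a minimal sketch, assuming macOS and Swift, of how a process can ask the OS whether it's running natively on Apple Silicon or being translated by Rosetta 2. The sysctl.proc_translated key is Apple's documented way to check.

```swift
import Foundation

// Returns true when the current process is being translated by Rosetta 2.
func isRunningUnderRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    // The key doesn't exist on Intel Macs, so sysctlbyname returns -1 there.
    if sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0) == -1 {
        return false
    }
    return translated == 1
}

print(isRunningUnderRosetta() ? "Translated by Rosetta 2" : "Running natively")
```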
But then they came out with the M1, three, four years ago, something like that. And now we're up to M3 and M4. So it's obviously generational.
And to riff on what you just said a second ago about the combination of Apple and the actual foundry, you know, that has been a really interesting thing to watch, kind of the development. Obviously, core counts have gone up, the types of cores, whether it's what they call efficiency cores versus performance cores, how they've done the GPU. So it's not unlike the old, you know, moniker of chip generations. They refer to these sometimes as nodes or die generations or something like that, where it's like, hey, we're going up and gaining capabilities with each one. So where we are now, M4 is the latest and greatest, M3 is also being supported, and there's still some M2 in the system as well.

All right, that all makes sense. And really, we should be calling them like M25 or whatever,
because, you know, these SoC packages go way back, because all of your iPhones, before Apple even moved to ARM for their other stuff, were all ARM-based SoCs. That's kind of what this evolved from with Apple. They, you know, they started making the phones out of ARM SoCs, and then they realized, wait a minute, these phones have a lot of compute in them. And if we just make this a little bit bigger, well, then we can kind of control the whole ecosystem, and we can have one unified family of architecture,
and that's really attractive to Apple.
That's a great segue into what I want to ask next, right?
Is like, why would a company the size of Apple make this move?
And I just want to be clear about one thing,
because you sort of alluded to it,
but it's not in the consumer space,
it's not obviously just Apple who's doing this now, right?
There's Qualcomm, Microsoft, all of the big players are going,
wow, you're right.
This is a big improvement on a lot of levels from what we're doing.
And they're all offering products now that shape or fashion support arm with Windows
or hopefully we'll see some more Linux out there.
But I think that's a good segue into why this move was made.
And I think the vertical integration part of it that you just, you know, I guess both
horizontal and vertical that you mentioned about Apple with the phones is a big part of this, right?
Yeah.
But, you know, you want to talk about some of the advantages? You know, why would a company move their products to ARM?
You hit the nail on the head when you said efficiency.
Yeah.
Right.
Any computer puts out heat, takes electricity. So for any amount of computing power you have, it takes a certain amount of electricity, and it makes heat out of some of that electricity, and it makes compute out of the rest. ARM has massively better power efficiency and thermal efficiency than Intel.
That is the motivation.
That's why it was great for embedded applications.
That's why it was great for phones.
It's good for battery life.
That's why, you know, when it comes to these smaller systems like the Mac Studio and the Mac Mini,
they could put incredible power in something that doesn't need a gigantic brick of a power supply
or massive heat sinks or massive fans.
And that concept of it's just more efficient scales all the way up from your Apple Watch
to the biggest data center in the world.
Because for every hertz of compute that you do, if it's more efficient, it doesn't matter how big you get, you get advantages from that efficiency.
Yeah, I see that.
And I think that's something that they've always been interested in is power efficiency, battery life long term.
And I think that comes out of that phone.
But I had to look this up because, you know, it's one of those things where it's like Mac Studio has become a really popular computer.
Like power savings, is it real?
Well, let me look.
I look this up and I just want to tell you that the Mac Studio is a max power.
of 480 watts full full bore now I'm just thinking I'm just thinking to myself I'm
looking over this shoulder to a server I have sitting here in a rack that's like a
NAS that I built and that has I think a 1600 watt or an 1800 watt power supply
in it right and like that's kind of become like the normal for a workstation
performance is like you know somewhere between 1500 and 2,000 watts when you
factor in CPU of 280 and the GPU pulling 500 or whatever it all adds up
Can I put it in perspective?
Yeah, yeah.
You mentioned I do collect old computers. And part of that is I have a vintage 1996, I think it was, full-rack Silicon Graphics. It's a big computer, yeah. Basically a supercomputer.
It's got a computer module and a graphics module.
Each one is half of the rack.
When you plug that thing in and you turn it on, you don't even turn on the computer, and you don't even turn on the graphics. The only thing that comes on is one of the fan trays. That's 800 watts.
I've measured it.
It's crazy.
It's crazy.
So, you know, it's not just bringing your power bill down.
It's the fact that they can put this in a smaller package and put more in the same amount of space for the same amount of power.
Yeah.
And I think the overarching point, though, I want to make is that, yes, there's the efficiency, the power savings. The thermal performance, I think, is a big one, especially if you're working with a machine at your desk versus in a machine room. You want it to be quiet. You don't want those fans spinning up and annoying you. And all that adds up to smaller, quieter computers that pound for pound punch above their weight class.
But, you know, I think a lot of people go, oh, well, it's the end-all, be-all, power efficiency, it doesn't get better. I mean, like, power still wins, right? And we've seen that, we'll discuss this in a little bit. You know, if you have the ability to throw 2,000 watts at something and you don't care about the thermal performance because it's in a rack in a data center, yeah, okay, you're not going to beat that.

Like, no, I don't think anybody's making the argument that a loaded Mac Studio is ever going to run circles around, you know, whatever, the latest Nvidia GPU with, you know, a 96-core Threadripper, you know, pulling 3,000 watts or something, you know?
Well, that kind of brings me to my next advantage of ARM.

Yeah, and this isn't an ARM-specific thing, but ARM, especially in the context of the ARM Macs, has made it a big thing that differentiates it from the Intel-generation setups. It's this thing that we've talked about a little bit and we're going to talk about more, called unified memory. Unified memory is an advantage that I cannot overstate; it's basically a cheat code for our specific type of computing need. It's not a cheat code for all computing needs.
Right.
So before you go on, I want to make sure I have this part right, because I think this has caused a lot of confusion. I see people still referring to this memory as RAM, which, I don't know if that's technically right or wrong. It's still RAM, but when people say that, it implies to me a slightly different architecture than what you're describing, right?
So tell me if I got this right.
In a traditional computer motherboard-processor setup, you've got your processor sitting in a socket that interfaces with a motherboard, and there's a bus, a memory pathway, a highway, going out to that memory, and the CPU talks to that memory, and it's bidirectional, it goes back and forth, and data is moved, instructions are moved, from the memory, the RAM, back to the processor, and vice versa.
The advantage of having this all on what you referred to earlier as an SoC, or a system on a chip, is that that pathway is right there, with the CPU next to it, with no highway to have to navigate, no, you know, bus or whatever to have to go out on.
It's integrated,
meaning that it can talk loads faster to get things done considerably faster
than having to go out on that highway.
I mean, that's an oversimplification.
That's part one.
Part one is that, yes, when you put the memory physically next to the CPU,
it actually does make a big difference in terms of performance and capability.
But whereas when you buy an external GPU for your Intel machine, it has its own memory. So when the GPU needs to do operations, it's looking at its memory. When the CPU needs to do operations, it's looking at its memory. And what the CPU is going to have to do is take stuff from its memory, give it to the GPU, tell the GPU what to do, then take stuff from the GPU memory and give it back to the system memory.
Well, a lot of back and forth.
A lot of back and forth.
Memory bandwidth to and from a GPU on a traditional system is a massively important thing
to be cognizant of.
That's why PCIe went through, you know, different generations, and every generation, it gets way, way, way faster.
And that's always a big, important thing is if you're using an external GPU,
you need to be able to talk to it as fast as you possibly can.
Because think about this.
What's our workload?
What is our compute need for a GPU?
It's frames.
Yeah.
Right?
But it has to happen.
We expect it to happen basically in real time, right?
So we want.
Exactly.
So instead of saying, hey, here's a complicated 3D scene, render it, right? Where in that case, you give the GPU a little bit of information, and then you're waiting on the GPU to churn, churn, churn, and render the image. No, no, no. We're saying, hey, GPU, make this one stop brighter, right? That's a really fast operation to do. But then you say, hey, make this one stop brighter at 24 frames a second, uncompressed, floating point, 4K resolution, and then do it maybe three or four times over because I'm doing a temporal effect with noise reduction, right? You're moving huge amounts of data in and out of the GPU in real time, all the time, and then asking it to do comparatively very simple math on it. So where does unified memory come into play here? Well, on these new ARM systems, now not all ARM systems are like this, but on the Snapdragons and Apple ARM systems, they have what's called unified memory. The memory itself on this die, which also holds the CPU and the GPU, is shared by the CPU and the GPU. So if you have 128 gigs of memory, you have 128 gigs of memory combined for CPU and GPU, and the system can divvy it up how it sees fit. Two big advantages there. One, when was the last time you bought a 128-gig GPU?
Right. That would be pretty expensive. Well, guess what? Now you get it in your Mac Studio. So you get more GPU memory. But more importantly, okay, once the CPU reads some frames off your storage and puts them into memory, the GPU just needs to know what address they're at.
They're already there.
It's instantaneous, pretty much.
Yeah, I guess.
Then if the GPU says, cool, here's your one-stop-brighter version, I'm going to put that into memory for you. Well, the CPU can just say, cool, what's the address, bro? Because I've got that same memory. We're essentially taking the send-to-GPU-memory and get-from-GPU-memory steps completely out of the equation.
So for a workload like color grading, or any kind of real-time video or image manipulation, unified memory, look, I'm not going to say it's the end-all be-all, and that you couldn't build a discrete GPU system that has enough bandwidth to beat out these new unified memory systems. But unified memory is a big, big, big performer for what we do.

Yeah, no, it makes a lot of sense.
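To put a little code behind that, here's a minimal Metal sketch, our illustration rather than anything from the show, of what unified memory looks like from the programmer's side: a shared-storage buffer that the CPU writes and the GPU reads with no upload or readback step. The buffer size and usage here are assumptions for the example.

```swift
import Metal

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}
// true on Apple Silicon, where CPU and GPU share one pool of memory
print("Unified memory: \(device.hasUnifiedMemory)")

// One 4K RGBA half-float frame, allocated once in shared storage.
let frameBytes = 3840 * 2160 * 4 * MemoryLayout<Float16>.size
guard let frame = device.makeBuffer(length: frameBytes, options: .storageModeShared) else {
    fatalError("Buffer allocation failed")
}

// The CPU writes pixels straight into the buffer...
let pixels = frame.contents().bindMemory(to: Float16.self,
                                         capacity: frameBytes / MemoryLayout<Float16>.size)
pixels[0] = 1.0

// ...and a GPU kernel would read the very same allocation. On a discrete-GPU
// system, you'd typically allocate private VRAM and schedule an explicit blit
// copy over PCIe before the GPU could touch the data.
```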
So the other thing that strikes me about these systems, and I don't know how to say this in a succinct way, but you alluded to it earlier, is the scalability, up from the smallest, you know, one-millimeter-sized thing, up to the big, you know, data center behemoths. And what struck me about specifically what Apple's doing, and I think Qualcomm's doing this a little bit with Snapdragon as well, is, you know, kind of the levels of gradation of these chips, right? Like, you know, in Apple parlance, it's the regular, you know, no added name, just M4.
And then it's, you know, Pro, then it's Max, and then it's Ultra, right? And let's talk about that for a second, because I think, you know, on the surface, it's kind of like the old days.
I don't know if anybody remembers, but in the old days, back in the PowerPC days, the Apple Store actually used to have good, better, best kind of options. And, you know, nobody wants to go back to that nomenclature, because they don't want to tell people that you're buying, you know, an okay computer versus an awesome one. But the difference between those tiers is a little bit of a multi-factor thing. It's obviously the number of cores that you're getting, you know, both GPU and CPU cores. What types of cores they are, whether they're performance or efficiency cores, right? But there's also some differences in terms of each version of that SoC: the bigger you go, it can do things like support more unified memory, for example, right? So more and more memory can be used for the GPU and CPU.
Is there anything else that I'm missing between, like, those versions, like Max, Pro, Ultra, right?
Basically, the way all of these companies are scaling these things up is by adding more small pieces, right?
So when you go from like the M1 to the M2 to the M3, the individual cores are getting faster and more performant, right?
Yeah.
But when you go between the sizes of them, where you have, like, okay, well, I forget what they call it, the Max has a bunch of cores, and then the Ultra has two of the Maxes together.
That's kind of where the scaling really starts coming into play, right?
Because these, what they call the package, which is the chip that has the CPU cores, the GPU cores, and in Apple's case, the memory as well.
Right. That can only get so big, right? They're never going to be able to make that big enough to fit a monster, you know, workstation-level computer. So what they do is they take multiple of them and put them together in a multi-processor configuration, with a very, very high-speed interconnect, right next to each other on the same board. Like we talked about with memory, the closer it is, the fewer physical connections that have to be made, the more efficient it can be.
And that's why you get into these huge configurations. Like, okay, so on the new Mac Studio, you have the M4, which is the latest-generation processor, but they don't have the Ultra, or basically dual-chip, version of that, because they haven't made an interconnect fast enough to take two of those M4s and zip them together.
Yeah, so in Apple parlance, they call that the UltraFusion interconnect or something like that, right? That's just, it's a fancy marketing term for that physical connection between two separate SoCs that they've linked together to act as one.
Exactly.
As those SoCs get faster and faster and faster, obviously you will need a faster, more data-capable interconnect.
That's why in all of these generations, one thing a lot of people ask about is, okay, why don't
we have the latest and greatest, highest number, basically, in the biggest configuration?
And it's important to remember, you know, essentially all of these Apple SoCs start on the phone first.
They start on the phone and the iPad first.
And they scale up from there.
That's how it's been since day one.
The original dev kit was basically an iPad without a display.
And then they start making them bigger and bigger and bigger.
And then they get to a point where it's so big, they can't really make it any bigger.
So then they start adding two of them.
And that's where you get these monster Mac Studio configurations that really compare with modern-day Intel workstations, because they just have so much GPU and they're so powerful. But what you're looking at there is basically two of the highest-capacity chips that they can make and still put an interconnect on, put together.

Yeah. Yeah.
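For anyone who wants to see those tiers from the software side, here's a small sketch that assumes Apple Silicon macOS, where the hw.perflevel sysctl keys exist, reporting the performance/efficiency core split and the unified memory pool that scale up across Pro, Max, and Ultra.

```swift
import Foundation

// Reads a 32-bit integer sysctl value; returns 0 if the key is missing.
func sysctlInt(_ key: String) -> Int32 {
    var value: Int32 = 0
    var size = MemoryLayout<Int32>.size
    sysctlbyname(key, &value, &size, nil, 0)
    return value
}

let pCores = sysctlInt("hw.perflevel0.physicalcpu")  // performance cores
let eCores = sysctlInt("hw.perflevel1.physicalcpu")  // efficiency cores

var memBytes: UInt64 = 0
var memSize = MemoryLayout<UInt64>.size
sysctlbyname("hw.memsize", &memBytes, &memSize, nil, 0)  // unified memory pool

print("\(pCores)P + \(eCores)E cores, \(memBytes / 1_073_741_824) GB unified memory")
```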
So one thing I think is interesting about this architecture, and it's been a pretty marked change, I think, from the way that I used to think about CPUs in particular.
So, like, in the past, when I would go CPU shopping, one of the first things I would look at is the clock speed, right? I'd go, oh, well, you know, a three-gigahertz chip's going to be faster, better than, you know, a 2.2-gigahertz chip. And, you know, you'd look at other numbers like cache size and stuff. And noticeably missing from a lot of this ARM marketing is that discussion of clock speed. And I think, the more that I dug on this, it's partly because clock speed is highly variable with these chips, right? Depending on what it's doing, right? Like the M4, I think, can have a burst of up to 4.3 or 4.5 gigahertz, something like that. But it's interesting from a marketing point of view that that has been a shift. Rather than getting people to fixate on actual clock speeds, it's more about core performance, the thermals. Like, they've changed the narrative a little bit, it seems like.
And I've got to say, I think that's the right thing to do. I don't think clock speed has ever told the whole story, especially on these modern CPUs. Because you look at, we keep using the Apple example because it's the most prominent, you know, on that chip, it has built-in H.264, H.265, and ProRes encode and decode. So it wouldn't matter if the CPU was one gigahertz or a billion gigahertz for decoding ProRes, because it's got a dedicated chip for it, right? It's already built into a hardware decoder right there. All the CPU is doing is waiting for the hardware decoder to give it the information. So the overall design of the system is now taking kind of more of a front seat versus what I consider to be somewhat antiquated metrics, which is a good thing.
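To illustrate that dedicated-silicon point, here's a minimal sketch, our example rather than the hosts', that asks VideoToolbox whether hardware decode is available for a few codecs. On Apple's SoCs these typically report yes, because the media engines live on the chip.

```swift
import CoreMedia
import VideoToolbox

let codecs: [(String, CMVideoCodecType)] = [
    ("H.264",      kCMVideoCodecType_H264),
    ("HEVC",       kCMVideoCodecType_HEVC),
    ("ProRes 422", kCMVideoCodecType_AppleProRes422),
]

for (name, codec) in codecs {
    // VTIsHardwareDecodeSupported returns true when a hardware decoder
    // is available for the codec, so the CPU never does this work.
    let hw = VTIsHardwareDecodeSupported(codec)
    print("\(name): hardware decode \(hw ? "yes" : "no")")
}
```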
So I think, you know, now that we've gotten our heads around the basic design of it, one outstanding piece that still seems to be a battle, and to be fair, the comparisons, the benchmarks, do have some validity to them, is that, you know, in our line of work in editorial and post and color and whatever, the GPU has obviously become arguably the most important part of the kit, right? Because that's the thing that's doing the heaviest lifting on image processing, and has, you know, perhaps the biggest impact on the real-time performance that we've come to expect. And so I think it's a little bit of a weird thing for people to go, what do you mean? I don't have to choose a GPU for this, right? I'm just getting told, this is the GPU, right?
And so, you know, there's a lot of people who want to compare, oh, this is, you know, the latest and greatest from Nvidia or AMD, this is how it compares to Apple. And it seems to me that there's still a gap. Those dedicated discrete GPUs, you know, most recently from Nvidia, like the 5090, 5080, those ones, those are still beefier GPUs. I mean, they're still doing heavier lifting in systems that are set up properly. But they do have some of the issues that you talked about, you know, bus speed is a factor, and the dedicated memory on them, you know, there's only so much they can have. What's your feeling about,
and you alluded to this earlier where you said, hey, it surprised everybody that Apple's been, you know, competitive at least with GPUs. Do you think we're going to get to a point where the difference between a discrete GPU and an embedded SoC GPU goes away? Because I think a lot of people have a little PTSD about embedded, because they think about, like, oh, you mean like that Intel, you know, integrated GPU crap that we had for years and suffered through, that never performed right. But where does that lay out now? Because it's not quite as good as a dedicated GPU yet, but it's still holding its own. What do you think the future, if you had to prognosticate a little bit, holds for that SoC GPU?

It comes down to the workload in a lot of cases. Like I said, these integrated GPUs are really good for our one particular task. They're really not as good for things like 3D rendering or machine learning training, stuff like that. In those worlds, the big discrete Nvidia guys will absolutely wipe the floor with any SoC GPU.
But even in color grading, you get into the super heavy noise reduction,
super heavy temporal stuff.
Yeah, those new 5090s will outrun the best Apple has to offer.
That is where we kind of get into this interesting question, because there's no reason why you can't have an ARM CPU with a discrete GPU. It's just nobody's really put that into a desktop setup yet.

All right, hold that thought for a second, because we've talked about a lot and we're already running a little long. So how about this? How about we come back in a part two? And in that part two, we'll talk about what we think the future of ARM is, what Apple's doing, prognostications of where the industry is going with cloud-based ARM, and all sorts of fun stuff. So stay tuned for part two. Until then, thanks for watching. I'm Robbie Carman.
And I'm Joey D'Anna.
