Podcast Archive - StorageReview.com - Podcast #124: The Path to 50TB HDDs with Frickin Lasers
Episode Date: October 18, 2023. Brian invited Seagate's Colin Presly to the podcast this week to discuss research and…
Transcript
Hey everyone, welcome to the podcast. Today we've got a great guest who I think you guys are going to like, because he's a smart guy, conceivably.
He's an engineer in the CTO office at Seagate, so I think we've got the guy to answer all the questions about what's going on with storage, what the technical limitations and opportunities are, and all sorts of other things around hard drives and other technologies.
So, happy to have you.
Colin, thanks for joining us.
Yeah, it's great to be here, Brian.
Thanks very much.
Really big fan of the podcast and all the work you guys do.
Well, we appreciate that, and we're always glad to have a technician, if that's fair.
I know your background's in engineering, but tell us a little bit more about where you come from.
Yeah, so almost 25 years in the industry.
IBM back in the day originally, but I've been most of my career at Seagate.
Maybe you can tell from my accent, I'm not originally from the US, but I grew up in the UK
and went and worked for IBM there. But I've been in Minnesota now for more than 20 years, working at Seagate out of the Minnesota office. So my role is director of the office of
CTO. My career has been pretty much in hard drive research and development.
Out of our design center here, we've had a long history of our enterprise products,
and so worked on various of those. Just love the industry, love the technology.
I now have a team of people looking at more of the future looking technologies outside of disk as well.
But my blood bleeds rust, should we say, in terms of the hard drive technology.
And I still am amazed by what we've been able to do and how far we've come.
We'll get into all that.
I've got a more important question that needs to be answered.
So you've been in the U.S. for a couple of decades now. Are you still a footballer?
Were you a football fan in your prior days in the U.K.?
Oh, absolutely. Yeah. The problem I have now is I have too many sports to follow. I don't know if anybody's following the Rugby World Cup, but I have football, as we would call it, and rugby, Formula One, cricket, as well as adopting the U.S. sports too.
All right, we're going to get into the hard drives, but first we stay with sports. Please don't disappoint me and tell me you're a Tottenham fan.
I'm not a Tottenham fan, I'm afraid. My local team is actually Portsmouth, which nobody's heard of.
They're pretty low in the divisions, but I've kind of adopted Liverpool more recently. I
really like that. I like how they're doing. But yeah, not Tottenham or Arsenal, I'm afraid. Can't
do those London teams. Okay, well, I'm an Arsenal fan. But see, the reason is, I didn't come into
it when they were on top, because obviously they haven't been there in quite some time last year aside.
Being from Cincinnati, the Reds baseball team never was a big money team.
And now obviously that's changed in soccer dramatically over even the last decade.
But it was all about homegrown talent and developing players.
And I always respected what Arsenal did there to grow young talent.
So when I came into, you know, enjoying European football, I really stuck to that. Now on the other side, on F1, I'm a Ferrari fan.
So I don't know what I have to say about the same team-selection decision-making process.
But, you know... They come around.
Arsenal have come good, and maybe this will be
their year. I mean, they had a little bit
of a choke at the end of last year, right?
How dare you? A choke?
I have to use the choke word. I think that's what people do. But I think this year they could do it.
But yeah, Ferrari is
another difficult one, right? They haven't
returned to their glory days yet either. No, it may still be some time before we see that.
And I thought we were done with the soccer, but then you said chokes, so I had to come back to it.
I think it's exhaustion that killed them last year. Lack of depth, I think, is another way to say it, but in any event.
I'm sure the audience has either skipped forward three minutes or tuned out,
but let's see if we can re-engage them on spinning rust, as you said.
It's funny to me that what should be a pejorative is pretty broadly embraced
by those in the hard drive
technology space. You don't have any problem with that, huh?
Oh, no, no. I think people obviously see hard drives, they've been around a long time, they see
flash, they see other technologies, right? But I think for the people near to the industry,
I think people are really amazed
by the technology that's inside these devices,
and it just keeps on growing and growing right there.
Just the level of expertise required
to do what hard drives do really is immense, right?
The nanoscale technology,
the level at which we fly heads over discs for the period we do, and the workloads that people use, and potentially abuse, these devices with, and we've essentially gotten to a commodity level with that level of technology inside them. Yeah, we can wear that as a badge of honor, I think. It's survived over the period of time, and we're really just getting started. We're now into a new era where we get even more capability. So yeah, we'll let people use those words, that's fine. But I think people that are really following the industry know that the world really runs on hard drives. I mean, the cloud is essentially hard drives, right? That is what it is. And there really is no technology that can displace them in the next decade in terms of the amount of data that needs to be stored. So we know what we need to do. We've got a lot of work to do to get there. But yeah, we'll let people say what they like to say about hard drives, and we'll just get to work.
Well, so tell me about that. Because we see it as consumers of hard drives, or me specifically as a reviewer: you guys will send us new drives and it'll be, you know, hey, we went from 16 to 18, or 20 to 22. And at times it's these progressive increments that are, I'll call it, kind of small, although maybe your perspective is different.
But the technology required to enable that jump, with adding another platter or two, adding helium, adding more bits on the media, adding more arms, I mean, we can talk about all of these things.
Do you think maybe that it just appears easier than it is to just grow? It just seems maybe organic from the outside looking in.
Yeah, it's very easy.
You know, when you just look at the spec sheets of these devices, as you say, we've been kind
of going along on this two terabyte sort of tick here. And I suppose it isn't obvious the technology that
needs to do those things.
But if we look back across history,
we've already had some major points of areal density technology; we call those kind of the inflection points of our S-curves. And then in between those S-curve points, we have a general miniaturization of components and things that happen to get us up those curves. So yeah, between one of those points, whether you pick 14 and 16, or 16 and 18, there may be a hundred things, right, that get us from one to the other, that are beneath the covers, you know, behind the curtain, that we don't advertise, that we need to do to get from one to the other.
But if we look back, you know, in terms of what those big points are, I mean, longitudinal
recording was back in the day, bits lying down, right, and then we went to perpendicular recording.
That was a massive change for the industry. And just observing the capacity trend,
you know, until you plot it out, you don't really see, but there's definitely that point of inflection where the growth rate really went up significantly.
And we were able to ride that for a long time, as well as adding platters, right?
And we've been doing that for the last few years, a combination of adding some capacity, but also adding platters because our perpendicular recording technology is starting to run out of legs. And that's industry accepted,
right, that we are getting close to the ability for our perpendicular recording media structures
to be stable, thermally stable, magnetically stable over time. We're getting really close
to that limit. Well, talk about that for a second before you go on.
Can you describe more deeply what that means? Why
can't we just keep jumping at two terabyte increments on CMR or that type of media for the next decades? What's the engineering limitation there, the physical limitation?
So yeah, there's two things.
There's area and there's areal density, right?
And they're very different things.
So when we talk about area,
we talk about maximizing the disc size
within the form factor.
So three and a half inch drives are the de facto standard now
in pretty much all deployments.
So there's only a certain disk size you can physically fit in that thing. We use 97-millimeter disks; that's pretty much as big as you can do, almost standardized in history. So there's that, right. Then there's the number of disks you can physically fit in vertically, which gives you the other component of area, and we're approaching the limit there. We're on a very solid 10-disc platform
that's been highly leveraged now for a few generations.
And really, we're coming to the limit
of the ability to put in more discs vertically.
And that comes with how thin you can get the discs
and everything else.
But adding more area is really not a good way to scale our industry.
By adding more components, you know, we stress our supply chain.
It's more difficult to build.
You've got yield associated with additional heads.
So every time we add disks and heads, it's not something we do lightly
and not something we particularly want to do for us or our customers.
It just makes the whole aspect of drive design more challenging.
So that's the area component.
And then on the areal density side of it, perpendicular recording really is reaching that limit.
So what I mean by that is fundamentally what really defines hard drive technology is really the media.
Think about it as a media.
Think of tape.
Tape is a media.
NAND has a media structure.
A disk has a media structure.
We have a sputtered system where we sputter grains onto layers on disks.
That magnetic material has a coercivity, and the grain sizes can be pushed to a certain level, beyond which, if we shrink them further, they're no longer stable for long enough. So what that means is, while we may be able to write them, they won't stay in that orientation; they can actually flip. They become unstable and the bits flip, and that's absolutely what we can't have happen, right? We can't be in a situation where...
Sort of a fundamental concern for hard drives or any storage media, right?
So that's where we're getting to.
We're getting really to the point where the media coercivity
is not high enough in the materials we're using
to sustain that stability.
So while we've made that change to perpendicular recording, we chose a material and it served us very well, and we've been miniaturizing everything. As you miniaturize the media, you miniaturize the grains. And there's two components to that, too: the grains get smaller and there's the thermal stability, but then the write structure, the pole that you use to magnetically write it, has to also reduce in size to match the size of the media structures.
And as you reduce that, you get less flux out of it, right? It's a smaller tip.
Think of just sharpening a pencil. You're getting very, very small in the tip of the electromagnet you're using to generate the flux. And so now you've got a situation where you have very small, potentially unstable grains and a very, very small write pole that can't generate enough flux.
And that's kind of where you're getting to now.
Obviously, the drives we have now are super reliable, super robust; we make sure that's true.
But as we look forward in terms of generating, like I said, the next two terabytes, we're getting to that point where we just can't get there with perpendicular recording technology.
So you could theoretically solve this with larger platters, taller drives. The three-and-a-half-inch form factor obviously has been around for decades at this point.
Is that the right shape going forward for hard drive media?
It's a great question. Something we always look at. It's just very, very difficult to justify that change in the form factor. It's like you said, three and a half inch
is the de facto that's been around for a very long time.
Systems are aligned that way.
Our supply chain is aligned that way.
Now, clearly with our new customer base and very big cloud vendors, there's more potential opportunity to do something different, do something new. But in terms of the economics of how you scale storage, I mean, one way would be,
for example, just create a two-inch high drive instead of a one-inch, right? You can just go
vertically and you could have a single connector and absolutely that could be done, right? There's
nothing saying we couldn't do that. We could put 20 disks in there, for example, instead of 10.
You get some savings because you have a single connector and, you know, a single PCBA and that kind of thing. But it's really not the greatest way for us to scale. It's much, much better for us to invest in scaling areal density than scaling area.
It's just hard for our customers; it's hard for us to yield those products.
You end up with, think about now we have 20 heads in a drive.
You'd end up with 40 heads in a drive.
The test time associated with even processing and building those drives become extremely
long.
So while we're always open to exploring opportunities in terms of new form factors, we're always
looking at archival solutions
relative to this technology
and how you can leverage those into those spaces.
But right now, the three and a half inch form factor
is really hard to beat.
It is so pervasive in the industry
and the supply chains have been honed
that it just makes sense for us to really stay there.
Yeah, and I only bring it up because I think if we look at the flash side of the house,
we're really seeing this transitional pain right now.
And even there from an engineering and design perspective,
you could argue making a skinnier or a different rectangle or whatever in PCB and NAND
is a heck of a lot easier than what you
would have to do on a hard drive side to reform that shape, right?
But the transition to these EDSFF drives has been really bumpy for the industry.
Enterprise server guys are struggling with it.
They made a call largely on E3S, but who knows? I mean, if that's
the long-term answer for the next generation of SSD form factors, we know the hyperscalers love
E1S and they do all sorts of different things. Like as you mentioned, with OCP coming up in just
a few days, there'll be lots of conversation about new different ways to consume storage.
But the hyperscalers drive a lot of that, but still the big ISG firms that make these systems
have to make design decisions too. So it is a bit of a challenge to move off of one size or shape
onto something else.
Yeah, I think clearly the design decisions on SSDs and drives get a little bit different there, right?
In terms of we have some constraints that they don't have.
And OCP is a very interesting forum
for everybody to really collaborate.
I mean, open standards are very, very important.
We really believe in those.
I mean, we need scale in our industry to survive and thrive,
and OCP is a good way of doing that.
We obviously have standards with SATA and SAS and other places,
but OCP is a really great forum to have those conversations
about what next generations could look like.
And we'll be there, and we'll be talking to the industry and
aligning on what their needs are for the future. And we want to have those conversations. And if
it makes sense to do something different, we'll absolutely be having those conversations. I'll
be right in the middle of those.
So speaking of OCP, because you guys are very much in the middle of it, I've been there the last couple of years, and the NVMe hard drive is something that seems to be of interest.
There was a session that one of your guys
actually delivered last year on the progress
and had a device with an NVMe connection on the hard drive.
Do you have any insights there in terms of,
you mentioned SATA SAS,
but is NVMe really
a thing for hard drives or can it be a thing, or is this just a whimsical exploration at
the behest of OCP members?
So I would say watch this space.
I think we were certainly excited about it.
We're really excited that we have the control in our silicon.
We have complete vertical integration in our company,
and we've been able to demonstrate that capability
and put out some demo units into the industry
to start having that conversation.
We at the moment don't have anything to announce
relative to products in that area.
It's very much a market exploration activity.
But we're definitely getting some good feedback.
I mean, as you know, NVMe is really the interface of the future,
should we say.
You know, it's definitely very much being standardized around the SSD.
So we're trying to decide ourselves with the industry,
does it make sense to have that third interface or not?
So, yeah, nothing really to announce there.
We're still continuing to explore
and seed the industry with samples
and see what makes sense.
Obviously, in the future, we think that
it could be a good opportunity to converge
around a single interface.
And all the simplification that could drive
just by getting all storage behind one interface.
But honestly, we've had a long history with SATA and SAS, and they've served us very, very well.
And we don't see that there would be an absolute quick transition to something new.
No, clearly not.
We do hear, though, from system design engineers that, man,
sure would be nice to have just one interface
to design around.
It would simplify motherboard design
and a number of cabling and other challenges.
So the appeal seems obvious.
And it's good to hear that you guys are still involved there. That's kind of fun to think about for hard drives: to have the interface not for, you know, 7,000 megabytes a second, but for simplicity and potentially uniformity of interface.
Yeah, absolutely. Absolutely. We agree.
It's just a case of obviously getting enough of the market to agree in terms of the timing
and what that looks like relative to system design.
But yeah, we're keen to lead that conversation and very interested to hear feedback. We've been talking to OEMs and the cloud vendors, and we'll see where it goes.
Watch this space.
Yeah, I think it'll be fun to watch.
And to your point, I mean, this is an OCP topic, but I'm sure you're obviously talking to others outside of OCP.
But, I mean, to be fair, if Azure comes on board and says, okay, we're all in on NVMe hard drive, sell us 10 million, then that accelerates the process, right? And the hyperscalers
have a dramatic influence in what you and other storage vendors produce because just the sheer
volume that they can command. So that'll be fun. All right. So I diverted a little bit off of
technology on the media. We've got a question that came in from one of our listeners live on Discord
about SMR.
So SMR has been around for a couple of years now and, my take on it, was billed as sort of a bridge to get us more capacity before we were ready to adopt the next gen of HAMR or MAMR or whatever other recording technologies were out there.
Can you talk a little bit about SMR and how that's gone, and what the future looks like for shingled magnetic recording?
Yeah, so SMR was a great innovation.
You know, we've shipped a lot of SMR drives. To back up a little bit and describe it: if you imagine back to my analogy of sharpening a pencil and the write pole, one advantage SMR gives us is it allows us to use slightly wider, bigger writers, because we're essentially shingling.
We're writing wider tracks,
which allows us to use wider elements,
generate more flux.
So there's good things about it
from a recording physics point of view
in terms of being able to shingle
and then create...
The net-net is once you start shingling
those tracks together,
you create smaller tracks.
So there are still challenges on the read side, because you still need to read the narrower track, but it gives you some relief on the write side and allows you to pack the tracks slightly closer together.
So as I say, we've had SMR technology.
We feel like we have a lot of competitive products
in that area.
As you said, it is somewhat of a one-time additive gain
to PMR technology, or CMR as we'd say it, but it's a one-time push and it does have some customer implications. So there are some customers that
value that gain and they want to adapt to it. So there are some restrictions around SMR technology relative to how you write.
You've got to write in bands and zones.
So great technology.
Like you said, it was a good way of being able to essentially eke some more capacity
out of the end of our perpendicular magnetic recording paradigm
that we're in where we want to shrink grains,
we want to shrink our poles, and we can't quite get there. SMR gives us a good boost, so it's good technology and we're using it. There are certain customers that want to use it, and we'll support them with that. But all along we wanted to get to something else, right? We wanted that next big step in the S-curve. Think of SMR as taking your CMR growth curve and putting a line vertically above it at the same DC offset. And that's what SMR is, right? It's an additive gain that comes with a little bit of complexity for the customers that want to achieve that gain. But what we want to do is get on another curve;
we want to shift the curve.
We want to change the conversation and move it up.
And that's where we get into our heat-assisted
magnetic recording.
And that's the curve we're on now.
SMR: great technology, absolutely support it.
But it doesn't fundamentally change the growth rate.
It changes the capacity you can achieve on any given drive.
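To make the "bands and zones" restriction concrete, here is a minimal sketch assuming host-managed SMR semantics; it is a toy model in Python, not any real zoned-storage API. Each zone only accepts writes at its write pointer and has to be reset to be rewritten.

```python
# Toy model of the host-managed SMR restriction described above: each zone keeps a
# write pointer, accepts only sequential writes at that pointer, and must be reset
# (rewritten wholesale) to reclaim space. Not a real zoned-block API, just the idea.

class SMRZone:
    def __init__(self, size_blocks):
        self.size = size_blocks
        self.write_pointer = 0

    def write(self, lba, num_blocks):
        if lba != self.write_pointer:
            raise ValueError(f"non-sequential write at {lba}; write pointer is {self.write_pointer}")
        if self.write_pointer + num_blocks > self.size:
            raise ValueError("write would overflow the zone")
        self.write_pointer += num_blocks

    def reset(self):
        # In-place updates aren't allowed; the host rewrites the whole zone instead.
        self.write_pointer = 0

zone = SMRZone(size_blocks=65536)
zone.write(0, 128)       # fine: starts at the write pointer
zone.write(128, 128)     # fine: sequential
try:
    zone.write(0, 8)     # random overwrite: rejected on host-managed SMR
except ValueError as e:
    print("rejected:", e)
```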
Okay.
So heat-assisted, HAMR, as the short form is known. You guys, Seagate's been pretty public about your intent here on earnings calls. I believe you guys have talked about that, about the intent to launch a 30 or 30-plus terabyte HAMR hard drive. That gives you the leap from 20, 22 terabytes to a new media and a new platform. That's the jump you're talking about that really gets material, fast, for hard drive capacity.
Yeah. Yeah. So this is going to change the game, we think.
It's our next big change.
Perpendicular recording was really the last one.
We've had incremental, obviously, all the way along.
SMR has given us a bit of a gain.
But this really does change the game in terms of where we can go.
We're just super excited and proud of what we've been able to achieve. Looking back, we started working on this 23 years ago. That's the blink of an eye, where people were shining lasers on heads on spin stands, you know, and to think of where we've come from there has really been incredible. It takes time to mature these technologies. It isn't simple, going back to where we started in terms of commodity, to get it to a point where everything's integrated and everything's working at full reliability in a data center, 24/7. It takes a lot of blood, sweat, and tears from the team. And for us to have got it to the point now where we are, where we're launching, it's there, we're ready, is really great. We're just super proud
of it.
Well, tell me about the technology and what should people know? Because as you said before,
when we were talking about the shift in technologies, especially in the data center,
there's always apprehension, right? I don't want to be first one in on something new because it's
scary or I don't understand it or whatever. And that's not just storage technologies, it's
fabrics, it's everything. How is HAMR different from what we have today in terms of the media, how it's written to or read from?
What are the top fundamental changes?
Yes. I mean, the first thing to note, really, from the consumer side, customer side, there is no change.
And that's the great thing about it is there is nothing we're asking the customer to do.
It operates like any other CMR drive today with no restrictions, no protocol changes,
no restrictions in terms of how you write and read.
So that's really the great power: it really is a drop-in, plug-and-play replacement for a higher capacity. Now, obviously, behind the curtain there's a lot of things going on, right? So talking about the technology, the fundamental difference is now we've introduced an assist into the write process. It's called heat assist; that's the term we use. But it's actually a plasmonic effect.
So it is heat ultimately that does the magic.
But we essentially take a laser, a fully integrated laser system, and we guide that down to that write pole that I talked about before, which is relatively conventional, much the same. But now you've got a very localized heat-generating mechanism through this plasmonic effect, where light and electrons can combine. And we essentially heat up the media very locally underneath the write pole. And managing that process takes a lot in terms of electronics
and integration in the wafer process and getting the light exactly where you want it to be.
We solved all those problems.
Now we have this ability to heat the media.
What does heating the media give you the ability to do?
It now allows us to change the materials of the media.
That's really the quantum leap we're talking about. Okay, so now we've gone from the media coercivity that we had before to a completely different media type, much, much higher coercivity, a very different media structure, such that if you tried to write that media with today's conventional heads, you wouldn't even be able to record on it because the coercivity is so high. And by heating that local spot, what heat does is it locally reduces coercivity. So, just very, very temporarily, within two nanoseconds, the heat lowers the coercivity, allows our conventional write pole to then flip the bit,
and then it cools so instantaneously that it's trapped there. And that's really the recording physics change that we've made.
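As a minimal sketch of the write window Colin is describing, with made-up illustrative numbers (the linear curve and the units are assumptions, not Seagate specs): the head field alone cannot overcome the media's room-temperature coercivity, but during the brief laser pulse the coercivity drops below the head field and the bit can flip.

```python
# Toy illustration of the HAMR write window. All numbers are invented for
# illustration; they are not drive specifications.

def coercivity(temp_c, ambient_coercivity=3.0, curie_temp_c=450.0):
    """Media coercivity (arbitrary units) falling toward zero near the Curie point."""
    return ambient_coercivity * max(0.0, 1.0 - temp_c / curie_temp_c)

HEAD_FIELD = 1.0  # what a conventional write pole can deliver (arbitrary units)

for temp_c, label in [(30, "ambient"), (430, "laser-heated spot, ~2 ns")]:
    hc = coercivity(temp_c)
    print(f"{label:>24}: coercivity={hc:.2f}, head field={HEAD_FIELD:.2f}, writable={HEAD_FIELD > hc}")
```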
That's amazing.
I mean, it sounds like Star Trek-level stuff of ripping open a wormhole
so you can slip through and letting it close behind you.
I mean, it's really unbelievable.
We actually do have some videos that look like Death Stars.
I don't know if you saw the one that was posted recently.
No, but we'll link to it.
Death Star laser, that's the analogy you can think of.
No, that would be cool.
It's just so hard to wrap your head around, I think.
You talked about the engineering that went into it over a brief 23 years, but the execution in terms of heating that local area, there are just so many technology challenges: not too much, not enough, and then being able to cool that down quickly.
When you say heat, what does that mean? Do you have a thermal spec there? Is that something
you can share? I'm just sort of curious, you know,
at what sort of scale we're talking.
Yeah, so when we say heat, it's so localized.
I mean, we're talking about nanometers of area
that we're heating, that it really, materially, in terms of the way we talk about heat in drives and data centers, it just doesn't even register on that scale. But locally, we're talking about like 800 degrees Fahrenheit or something, right? So very, very hot, very, very quick. But it's really there to make that bit flip, and you can't really register it at the drive level. So it's almost a bit of a disservice to call it heat-assisted magnetic recording, because a lot more goes into it than that. But yeah, in terms of controlling the spot size, and the electronics that go behind it, and how the heat is sunk into the media and controlled, there's just so many pieces that go into that. But that's what we've been doing over the last 20 years, and we've really ramped up that investment the last few years. We've been running so many experiments, and you just have to put in the hard work to really figure out what works
and what doesn't work.
And now we've converged
onto a very, very, very reliable design.
So to your point about
what do people need to know
in terms of risk and everything,
we believe that we've done the work.
We have the data,
we have the test beds
and we have drives
in customer qualifications as we speak.
That's going very, very well.
To the point where, like I say, from a host interface, there's really nothing to do.
And we obviously have to convince our customers and ourselves that we've
built something rock solid in these environments.
But Seagate has a really good reputation there relative to our enterprise drives, where we've got a long history, and we have a lot of confidence now that we've got this right and we're going to be able to put out a product that really changes the game in terms of the growth. And this is just the start. So we're going to have this big step change, but we now think, once we get onto this new media, a bit like the original perpendicular media, we're on a miniaturization journey where we can take what we've got and further miniaturize and grow further.
We do areal density demonstrations on spin stands, and we now have demonstrated 5 terabytes per disk.
We've started to use terabytes per disk as a good way of really explaining areal density. I think it's just a very neat way of doing that.
So now we're launching our three terabyte per disk, but we've already got spin stand
demonstrations in our labs at five terabytes per disk, which would underpin 50 terabyte
drives in the future. Well, that's a good point there. So when we think about
the physical construct of a three and a half inch hard drive, you're at 10 platters now.
Is it right to still think about HAMR drives in the same way, with that same platter stacking and
arms kind of moving around? Just structurally, understanding the technology at the tip of that arm is different,
but structurally, are they that similar? Yeah, absolutely, really the same. In fact,
we've got a lot of leverage in our first generation drive we're going to be launching.
We're going to have a lot of leverage from our current drives, our 10-disc platform. So that
keeps that economy of scale. So really nothing different materially
there. A lot of it is in the way, you know, either in the wafer process to get the laser
integrated, and in the electronics and everything else that goes with it. But yeah, if you did open one up, it would look like a conventional drive in that same way.
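The terabytes-per-disk framing makes the drive-level arithmetic easy to sanity-check. A quick sketch using the platter count and per-disk figures from the conversation (illustrative, not product specs):

```python
# Terabytes per disk times platter count gives the drive capacity class.
platters = 10  # the 10-disc platform discussed above

for tb_per_disk in (2.2, 3.0, 5.0):
    print(f"{tb_per_disk:.1f} TB/disk x {platters} platters = {tb_per_disk * platters:.0f} TB drive")
# 2.2 TB/disk is roughly today's 22 TB class, 3 TB/disk underpins the 30 TB HAMR launch,
# and the 5 TB/disk spin-stand demo would underpin ~50 TB drives.
```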
So this is maybe a silly question, but the drives today that you ship are sealed with
helium inside.
Something strikes me that perhaps helium and this heat may not be the best combo or am
I missing something there?
No, no, helium is fine.
It's not hydrogen, right?
Okay.
Yeah, helium is a great gas. We've been using helium for a long time. We've had that, our
sealed drive. So yeah, that's no problem.
So when it comes to HAMR, I've got another question from the group here about the challenges in thermal and magnetic stability in developing HAMR. I mean, obviously,
those are concerns that you've worked on over the last couple of decades to get here.
But was there something at the onset that was especially daunting or maybe even not
even possible in the early days until technology matured as you went through this? Yeah, so there's been a lot published on this.
The challenge mainly has been in terms of directing that amount of heat
to the interface and getting it to survive that amount of heat.
That's really where the big challenge for HAMR has been.
And there's been a lot of things done, which obviously I can't disclose all
those things, but there are a lot of things done in that area to really improve it. So
a lot of experiments. But that's really been the fundamental piece of HAMR. We've had
the ability to generate the media for a long time, and we've had all the components.
It's been the challenge of getting to the point
where we can deliver that level of heat
that we require to reduce the coercivity,
like we discussed, in a very reliable way.
And a lot of design details have gone into achieving that,
but that's where we're at.
And we've come a long way in the last 10 years. We've
come orders of magnitude in terms of the ability for heads to be able to operate under those
conditions. And now we're there. We feel like we've got a very robust design solution.
So when I think about HAMR and the way you talk about it, it presents the next leap forward in density for hard drives,
starting with something in the 30s range and then growing from there over time
as the same density benefits come to you in terms of platters and media
and all those other things that you talked about at the beginning. Is there a play for HAMR in smaller drives, or is this really a 30-and-up kind of technology?
Oh, absolutely there is.
We definitely see a future where we can leverage this technology really across
our portfolio of products.
Reducing components is just a great thing for the industry
and for our world, right?
Our supply chain, the more we can do to reduce the number
of components in our drives is, from a sustainability point
of view, there's just so many benefits.
So that's why we like to talk about capacity per disk, because it's just so relevant in some of these smaller capacity drives, right?
Once we get to four terabytes per disk, we could generate a single-platter, four-terabyte drive.
We could do those things.
So absolutely, we see a future where we can start generating lower disc-count platforms that can leverage the technology and reduce the number of components that we need to ship with our drives,
which clearly is a cost reduction to the system and just a general TCO advantage to the industry.
So, yeah, HAMR won't just be reserved for the max payload customer.
Obviously, that's where we want to launch, but the future will definitely be looking to leverage it beyond that.
Speaking of the environmental issues, do you have any data on what power consumption looks like? Not on a shipping product necessarily, but in your early dev models or your customer qual samples for HAMR, is there a change in power consumption for that drive versus your leading Exos 22 or whatever today, in terms of just raw power consumption?
I mean, it's similar. There's similar power.
The way we like to look at power really is through the lens of a bigger deployment.
So the biggest real knob in terms of data center power is areal density and capacity, right?
So if we can start replacing, I mean, there are still a lot of 4 and 8 terabyte drives out there in data centers.
We can start replacing those with 30 terabyte drives.
Just think about power reduction that generates for the data center.
It's really immense, not just for the device.
Obviously, the device is somewhat similar, but from the fans and the air handling and
the number of servers you need, it does mount up very, very quickly.
So when we start running TCO calculations on these large capacity drives, it really
becomes very, very powerful in terms of how watts per terabyte look on a large deployment.
So we talked a little bit about sustainability and supply chain relative to low disc-count products. Certainly in the mass capacity deployments, the TCO advantage just from the reduction in the surrounding infrastructure around the device becomes very, very significant, very, very quickly. And that's why you've been seeing these increments, actually. I mean, you referenced before the two terabyte increments and how they maybe didn't feel significant to some people. But the capacity equation in the TCO of a mass capacity deployment is such a big lever that that's the reason we released those capacities and those increments, because there's a big demand for that latest two terabytes, right? It's such a big, big knob in that equation of TCO. So that's why we think it's such a game changer for us to be able to get to 30-plus, with a future of 50-plus.
Well, to be fair, I mean, I picked on you just a little bit on the two terabyte thing, because when you're in the industry, it's like going from 20 to 22. Obviously,
that's only a 10% gain, right? But your point is very fair. If you've got customers or their
deployments that are sitting on fours or sixes or eights, when you make that leap,
it's not a 10% leap. It's a 100 or more percent gain in capacity. So we get sort of myopic on what's the latest and
greatest today, not necessarily what's been in the data center for three and a half years that's due
for an upgrade. But the environmental point is very salient, I think, because we're talking about
everywhere else in the data center how to get more density out of your rack deployments.
And if we're talking about GPU servers, that means liquid cooling.
If we're talking about other technologies, it's all about in that rack U, how much can I squish in there efficiently?
And then can my rack deliver the power to handle those things?
And I know you guys are very much a part of that too.
We've even seen OCP designs with their storage server with liquid cooled hard drives. So it's
certainly not something I'm sure you're ready to pitch today as an industry standard, but there are
all sorts of things going on. And your progressive growth to this point, and now a giant jump in capacity, will go a long way toward terabytes per rack U, which is pretty exciting.
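A rough sketch of the TCO arithmetic behind this exchange; the drive capacities come from the conversation, while the per-drive wattage is an assumed round number purely for illustration:

```python
# Capacity jumps per slot, and watts per terabyte as capacity grows.
drive_watts = 8.0  # assumed operating power, roughly similar across generations

for old_tb, new_tb in [(4, 30), (8, 30), (20, 22)]:
    gain = (new_tb - old_tb) / old_tb * 100
    print(f"{old_tb} TB -> {new_tb} TB: {gain:.0f}% more capacity in the same slot")

for tb in (4, 8, 30):
    print(f"{tb} TB drive at ~{drive_watts:.0f} W -> {drive_watts / tb:.2f} W/TB")
# Same slot, similar wattage: one 30 TB drive holds the data of seven or eight 4 TB
# drives, which is where the data-center-level power and rack-density win comes from.
```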
Yeah, and that's, like I say, we're a hardware company, right?
We focus on hardware.
We think that's what we do really well.
We invest our money in really fundamentally solving the physics problems of the storage industry.
And like I said, everything really does come back to areal density in our industry.
If we can find ways of improving areal density, the gains just compound.
You know, we can take components out of the drives, we can take components out of the rack, we can lower the power of the data center.
It's such a powerful knob that that's really where we want to invest our money.
We want to create devices that have the highest areal density to keep costs coming down in the disk world and really make that a real differentiator for our company.
And that's why we've spent the money we have over the years and invested in this technology
because we saw, and the industry is aligned, right?
This is the technology that will take disk into the future.
So we're really just excited to launch it and work with our customers on it.
We've already started, we've been working on it with them for actually several years, really.
We don't do anything in a silo in this world.
We have very, very strong relationships with our largest customers.
And we've been seeding those samples into the industry and getting feedback.
And like I said, the qualifications are now running really well.
But yeah, for them, the scale, I mean, it's such a massive scale, right? Anything we can do to improve that TCO equation for them. I mean, everyone wants to buy GPUs these days, right? That's the latest thing. I think everybody saw NVIDIA's stock; they're doing pretty well. A lot of money going that direction. And that's great for the industry.
I think while obviously we'd love some more of that money to come to us,
we think it will come around.
Once they start building out these GPU clusters and Gen AI starts really
generating the data we think it will generate,
the amount of data demand out there is just massive. We see projections of nine zettabytes out in 2028.
Some very, very big numbers from where we are today
beyond what our industry really can generate.
We need to keep our foot to the gas.
We need to be innovating.
We need to be riding these curves and this technology
because the demand is going to be real
and it will be out there.
And we just don't feel like we can slow down.
So we think we have a very compelling value proposition
with this.
We're just really excited to get it to market now.
So all I heard in that was Seagate's developing a GPU.
Is that accurate?
We'd love to have one to sell right now.
I imagine they're probably sold out over at NVIDIA right now.
We're well aware of some of the pain points around getting access to the GPUs.
Well, the point on Gen AI is real or what we used to call analytics or business intelligence,
maybe as few as eight months ago.
But for mature enterprises, there's a severe lack of interest to delete or remove any data
because whatever my guys or AI team is working on now
could very much make use of data that before we couldn't analyze or couldn't analyze
well or didn't understand or whatever.
So the notion of hanging on to data and keeping it in some place that's relatively easy to
access to funnel back into these GPUs at any moment is a pretty real concern.
So again, I think we go back to the density argument of the more data I can keep online, keep available, the better in terms of an intelligent enterprise these days.
Yeah, that's a really good way of looking at it.
We look at it the same way.
We see two trends.
One is people are going to want to hang on more, like you said, not just for kind of what could I do with it in the future if I have a better model to train with.
That's one reason.
There's a lot of legal reasons, too, because, you know, once you start training models with
data and then you start using those models, people want to know where the data came from
to generate the models.
And so there's lots of legal pieces.
There's a lot of geography, geopolitical pieces to that, too.
So that's definitely a real trend.
And then the other trend on the generative side,
I don't think anybody knows yet really where that's going to go.
But now with the ability to generate video automatically through AI,
you know, that's another trend we're watching really, really closely.
So both of those just a net positive for storage.
I mean, I think some people look at AI and they just look at SSDs and, you know, some of the fast storage that needs to be attached to these systems. And at a certain level, you just say, well, that's not really disk, right? That's just their training systems; it's NVMe SSDs. We don't really look at it that same way. I mean, we look at it as that SSD tier really being a cache, right? It's a cache that you're using to take data from somewhere.
You put it into that layer, you train on it,
and then what do you do with it?
You don't generally keep it in that layer anymore.
So it depends on how you look at it, right?
They're definitely driving demand,
and it will drive demand for high-speed SSDs.
But relative to the whole industry equation,
that's just good for the whole industry
because, like you said, there's just good for the whole industry because
like you said, there's going to be plenty of need to keep the data that trains the models
and store the data that gets generated by the models.
Well, I mean, to be fair, we see SSDs used primarily with GPU servers and we do it ourselves
here, but it doesn't have to be that way. I mean, we did a deep dive on Corvault earlier this year, the 106-bay hard drive chassis, the smart JBOD. And, you know, if we're talking about going from the 20 terabyte, 22 terabyte drives in that today to 30-plus, you know, in the relatively near term with HAMR-enabled drives, now you're looking at three, three and a half petabytes of storage that in aggregate is actually pretty dang fast. And if you guys and others in the industry start thinking about, okay, well, we've thought about GPUDirect as really a speed problem
that can only be addressed by a handful of SSDs, but what if I had three and a half petabytes or whatever,
that's actually able to deliver 15, 18 gig a second? I mean, that's a different kind of
conversation.
Oh yeah, absolutely. It often gets missed, right? On an individual device basis, clearly, relative to SSDs, there's just no real comparison, right? It's not in the same realm of performance. But like you said, once you put 100 together in a box, they're pretty, pretty performant, right? You're talking about, like you said, gigabytes per second of throughput. Our Corvault system is really an incredible system. And we are going to be using that to deploy our HAMR drives; we're going to be leading with that in terms of making that system capable of shipping our HAMR drives. And now we have that, you know, both in a 1.2 meter and a one meter variant, so we can deploy that everywhere. But you're right, I mean, in aggregate, disk is not low performance,
and sometimes that gets missed.
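A quick back-of-the-envelope on that aggregate point, using the 106-bay chassis and the per-drive figures mentioned here (best-case streaming math, not a benchmark):

```python
# Aggregate capacity and streaming throughput of a 106-drive JBOD.
drives = 106
tb_per_drive = 30        # the 30 TB class discussed above
mb_s_per_drive = 250     # roughly the single-actuator streaming rate mentioned

raw_pb = drives * tb_per_drive / 1000
aggregate_gb_s = drives * mb_s_per_drive / 1000
print(f"{drives} x {tb_per_drive} TB = {raw_pb:.1f} PB raw")
print(f"{drives} x {mb_s_per_drive} MB/s = {aggregate_gb_s:.1f} GB/s aggregate streaming (ideal)")
# Roughly 3 PB and ~26 GB/s on paper; real systems land lower (the 15-18 GB/s
# mentioned above) once controllers, redundancy, and mixed workloads take their cut.
```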
So in terms of throughput needs
where we want to stream a lot of data into a system,
these disk-based systems like Corvault
are actually very, very powerful,
and it does beg the question sometimes,
do you even need that SSD cache layer? You may
not in those cases.
Well, I think fundamentally it depends, right? But your point, and mine, I think, is that you can't discount the spinning rust, as you would adoringly call it, because in units of one it tops out at whatever, 285 megabytes a second. When you aggregate 106, or 53 on each controller, the speeds are pretty impressive, as we found out hands-on
with that system. So there are other ways to skin that big data cat for sure. I know we're coming up on our time together,
but I don't want to miss out on one other
technology or hard drive technology
that's a bit of an anomaly to many, multi-actuator.
Can you give us an update on what's going on there
and how Seagate sees that part of the industry.
Yeah, so that one's actually very near and dear to my heart.
I actually ran that program, so I know it very well.
And again, an amazing amount of technology went into that.
Seagate led that transition, led that design.
People aren't aware, essentially, we split the actuator
into two inside the drive.
So now we have a parallelization.
We have two active actuators that
can access the drive at the same time,
essentially giving us double the performance.
So like you mentioned, we're typically about 250 megabytes per second at the OD of a three-and-a-half-inch disc. Now we're at 500, which is pretty amazing out of a drive, once you can get to that streaming capability. Yeah, we're very proud of it. We've sold a lot, we have customers, and it's out there. In terms of where it goes, we're still watching it. We absolutely know there is a trend relative to performance density that is real.
Once you get into these very multi-tenant environments where lots of users are using the same device,
the metric of IOPS per terabyte, performance per terabyte, it becomes very real.
And we're acknowledging that we haven't really,
for several years, increased the performance side
of that equation.
And now we're even further increasing the terabyte side
of that equation with HAMR.
So what that does do is it brings the performance
per terabyte down.
So the amount of performance and the amount of access,
think of it as the SLA that you could
generate off that device can be a challenge for some customers. So we're watching it and
watch this space, I would say. We have the technology. We're very, very happy about how
we deployed it. It went very, very well. We think it's a really, really clever way of getting more out of it.
I mean, clearly we're not trying to compete with flash there, right?
I mean, if you're doubling the IOPS of a drive,
you're not achieving the multiples you require relative to Flash.
But that's really not the point, right?
It's enabling, so think about it this way.
If you had, say, a 16-terabyte multi-actuator drive, that's the equivalent of having two 8-terabyte drives.
I mean, you can double the amount of deployable capacity
for the same amount of performance.
And so it's actually a very, very powerful knob
for customers that become constrained
by the amount of performance they need out of these devices.
And so, you know, we're monitoring it as we go.
It's definitely one part of our toolbox of designs
we have available to us.
It's a case of how we align with our customers
relative to the next products and how that goes.
But absolutely, we're going to see a world
where we need more of that in the future.
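A small sketch of the performance-per-terabyte knob being described; the 250 versus 500 MB/s figures echo the conversation, and the capacities are just examples:

```python
# MB/s per terabyte: the "SLA per TB" that shrinks as drives grow, and that a
# dual actuator claws back. Example capacities, not product claims.

def perf_density(mb_s, capacity_tb):
    return mb_s / capacity_tb

examples = [
    ("16 TB, single actuator", 250, 16),
    ("16 TB, dual actuator",   500, 16),
    ("30 TB, single actuator", 250, 30),
]
for label, mb_s, tb in examples:
    print(f"{label:>24}: {perf_density(mb_s, tb):.1f} MB/s per TB")
# The 16 TB dual-actuator drive matches the per-TB throughput of two 8 TB drives,
# so you can deploy bigger drives without shrinking the per-terabyte SLA.
```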
Yeah, well, I mean, there's lots of opportunity.
You said HAMR's been 20-plus years in development, and so we're just seeing that come to fruition in 2023, hopefully. You must have a dozen or more other projects that have started since 2000 that are in various stages of learning, that will keep building on that technology or building what's next. So there's a tremendous amount of opportunity there. But this has been great. I appreciate you diving in deep with us on some of these technologies. And when HAMR's out and we've got some, I'd love to have you back to talk a little bit more about it and understand performance profiles and any other nerdy nuances of the drives that we, the enthusiasts, should be aware of.
Yeah, it'll be great to come back. Like I said, we're getting excited now that we have this coming-out party, right? We're getting to the point where we can really sample these into the industry more widely and really have that conversation, you know, that this isn't just vaporware, right? These aren't just PowerPoint slides. This is real, and it's there, and it's ready to be deployed. And yeah, we'd love to be able to have that ongoing conversation once we can actually get some hardware in your hands.
Yeah, absolutely.
When you have that party, let us know.
We'll be out to see you in Shakopee or wherever it is.
We'll try and make sure it's not in the winter.
We'll make sure it's not.
Ideally not.
But I wouldn't ask you to hold off your product launch just so I'm not cold.
This has been really great.
I appreciate the engagement on what you guys are doing; the leadership there with many of the technologies that you brought up is fantastic.
For anyone that wants to learn more, check these guys out at Seagate.com.
We'll find the video.
We'll link to that and other resources that are publicly available relevant to HAMR, so we'll keep you guys up on that.
And look forward to seeing what's next.
Thanks again.