Storage Unpacked Podcast - Storage Unpacked 256 – Hyper-scalers and SAS with Rick Kutcipal
Episode Date: February 23, 2024
In this episode, Chris chats to Rick Kutcipal, "At-Large Director" with the SCSI Trade Association. The topic of conversation is the adoption of SAS media (both HDDs and SSDs) by hyper-scale customers that include public cloud vendors and companies such as Meta.
Transcript
This is Storage Unpacked. Subscribe at StorageUnpacked.com.
This is Chris Evans with another Storage Unpacked podcast.
This week I'm with return guest, Rick Kutcipal. Rick, how are you?
Good, how are you, Chris?
Yeah, very well, thanks. Yeah, very well indeed.
Now, obviously you've been with us before.
You work for Broadcom, but you're not here representing Broadcom in terms of this discussion. You actually work for the SCSI Trade Association, if I've got that correct, which is now part of a bigger group, isn't it?
Yeah, it is. That was a big change for us this year. We made the change to be incorporated under SNIA as one of their groups, and we're looking forward to working with them and, you know, the synergies that they bring to the SCSI Trade Association into the future.
I think everybody hopefully knows SNIA, but if not, we'll put some links to both your website and theirs as part of this discussion. Now, we're going to have a chat about hard drives,
and it's really interesting because in the market over the last,
I should rephrase that,
we're not going to talk about hard drives per se,
because obviously you're not here representing the hard drive industry.
We're going to talk about hard drives indirectly
as part of a lead-in to the discussion we're going to have,
and that's to talk about hyperscalers and SAS
and the use of SAS within the hyperscale environment.
But as a lead-in to that, we noticed that there was some interesting news this week
talking about hard drives that came out of Seagate.
Seagate have released a new architecture,
and we're now seeing the pushing of the boundaries a bit further past 30-terabyte drives.
And it seemed like a good opportunity and an interesting point to have a discussion about
back-end connectivity for hard drives in hyperscalers, because everybody thinks the
markets are all moving to NVMe, and actually, in reality, of course, that's not the case, is it?
No, that's a good point. You know, we can't compartmentalize SAS with hard drives. While it is a very important technology, hard drives continue to evolve and are a very important part of the hyperscale architecture. And the capacity increases like you're talking about with HAMR are a testament to that.
And that's going to continue to maintain the value proposition of HDDs, and then ultimately SAS, in the hyperscale data center.
Yeah, and I use that as a lead-in simply because it's probably something people have seen, but obviously there's as much use of SAS on SSD products, and that side of the market is still very important. Not everything is pushing towards NVMe, even for SSD.
Correct. Yeah, that's a fair statement.
Okay, so let's, you know, let's dive in and really give people perhaps a bit of a background here as
to what SAS really is and where it came from. I think, you know, SAS and SATA are probably terms people bandy around without realizing exactly where they derive from. But, you know, SAS is a long-lived protocol, probably the best way to describe it.
That's been around for an awful long time, very mature.
Yeah, and actually SAS is a serialization of the SCSI protocol.
Parallel SCSI, you know, came about in the mid 80s as an interconnect specifically to connect computers to storage devices.
Since then, moving into the early 2000s, the serialized version, or SAS, came about,
and now we're on our fourth generation of that technology.
Yeah, I mean, at the end of the day, anybody who remembers the parallel side of SCSI from years ago, with parallel connectors, knows the pain that used to come from low-voltage
and high-voltage differential adapter terminations
and all sorts of terrible stuff we had to do
to make sure that worked correctly.
Whereas moving to serial makes life an awful lot easier
because the cables now,
or at least the last time I plugged in a SAS or a SATA cable,
they're very narrow, very simple,
and all hot pluggable, all very, very easy.
So that move to a serial interface
was actually quite a big step
and quite an important one for scalability,
I think, and operability.
Yeah, agreed.
You know, SAS, like I mentioned,
starting in the early 2000s
was a big innovation going to the serial interconnect
and making it very usable for
large-scale storage deployments. Excellent. Okay, so let's talk about hyperscalers then,
you know, generally. And you did a presentation not that long ago, I think, when you talked about
this whole issue of hyperscalers and the topologies they use in their data centers.
And I think it's quite interesting to try and understand, you know,
their mindset of what they're looking for when they're building large infrastructures
because there aren't many environments that really build out to the degree
that they would do in terms of storage infrastructure.
You know, most of us might deploy a few hundred, even a few thousand drives,
but I guess potentially hyperscalers are talking in the millions.
Yeah, and that, you know, causes a whole different paradigm in designing these systems, right? And so
there are a number of key factors that are driving the hyperscalers and continuing to drive the
hyperscalers to use SAS in their modern architectures. You look at it from scalability.
Scalability is a big one, right? You have to be able to scale to thousands and even then more drives per system.
Reliability is another one.
A lot of times that one kind of gets swept under the rug, but reliability is really important.
And then cost, right?
You can't forget cost.
And it's kind of interesting with these three.
They would be prioritized somewhat differently depending on who you ask, right? You ask the architect: he has a document, a PRD, to go and architect a system to scale to thousands of devices, and do it reliably, and to meet different service level agreements, right? And so that's where he points at SAS directly to solve those problems. Reliability: if you ask the IT engineer, their number one priority is going to be reliability, right? That's their job; that's part of their metrics. And SAS really helps out in that. And then cost, right? That's the bottom line: if you ask the product owner, if you will, or the CFO, they're going to talk about cost. And each one of those is very important. Now, that's not saying that these systems are exclusively SAS. That's not the case, but for the nearline tier, the capacity tiers, those are all SAS, for those reasons.
Yeah, and I look at it and think, especially in hyperscale environments, and that could mean on-premises, it could mean somewhere like, you know, the Facebook-type companies and so on, because obviously they scale out as much as, say, Amazon or Azure. But if you look at those sorts of companies and you shave off three, four, five percent of the cost of their infrastructure, that's a huge amount of
money saved. And it's not just about trying to make sort of 50% savings. Anything that you make
is going to be looked at. So it's really important to look at every part of the infrastructure all
the way down, I think, and work out what can be saved.
No, agreed. Especially when you're talking about cost, you know, you're talking about
hundreds of thousands, if not millions of drives. And so, you know, even a small differential in
cost is very important to them.
Yeah, absolutely. Brilliant. You know, I think what that sort of makes you look at, as we dig in a bit further, is, in that case, why go down the SAS route compared to, say, NVMe? Now, let's just bear in mind, before anybody says anything and goes, hang on a second, that you have no choice with hard drives there. Remember, we're talking about both hard drives and flash drives, so of course the NVMe discussion comes into it partly.
But rather than be negative about NVMe,
I think it's probably better to be positive about SAS and say, well, what is it specifically
that SAS offers the hyperscalers in this instance?
Yes, so we talked about some of the metrics
driving on the scalability, reliability, and cost.
But when looking at the features, I
put them into two different buckets. One is the fundamental attributes of SAS, and the other is some of the newer features, right? The fundamental attributes of SAS: scalability, right? I mean, SAS scales to thousands of drives
without protocol conversions, without extra equipment.
It just natively scales.
You know, management's another one, right?
With protocols within the SCSI stack, like the Serial Management Protocol, or SMP, and SCSI Enclosure Services, or SES, that's all part of the SCSI stack. And those all help, in this case, the IT professionals manage these large enclosures without adding anything, without asking your initiator vendor or drive vendor to do anything special, right?
That's just native in the infrastructure.
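As a rough illustration of what that native management looks like from a Linux host, here's a minimal sketch. It assumes the sg3_utils and smp_utils packages are installed, and the device paths are placeholders rather than anything from the discussion.

```python
# Minimal sketch: querying SAS enclosure/expander management from a Linux host.
# Assumes sg3_utils (sg_ses) and smp_utils (smp_discover) are installed, and that
# /dev/sg3 is an SES enclosure device and /dev/bsg/expander-6:0 an expander --
# both placeholders; list real devices with `lsscsi -g` or `ls /dev/bsg`.
import subprocess

def run(cmd):
    """Run a command and return its stdout, raising if it fails."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# SCSI Enclosure Services (SES): ask the enclosure which diagnostic pages it supports.
print(run(["sg_ses", "/dev/sg3"]))

# Serial Management Protocol (SMP): walk an expander to see what is attached to its phys.
print(run(["smp_discover", "/dev/bsg/expander-6:0"]))
```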
From a pure feature, like a newer innovative feature, perspective, there are a number of them. SMR is a good one, increasing the areal density of a drive, right? So taking the same drive and ultimately increasing the capacity by, you know, 10, 15%. Again, at these numbers, at this scale, that's a big deal. Another one is performance. It's interesting, you know, people think of HDDs and they don't think of performance. Performance has a
number of different pieces to it, and a technology, or a command set, like command duration limits, or CDLs, looks at and focuses on the tail latencies of the HDDs, right? And this is very important for some of these big data centers that have very specific SLAs, or service level agreements, that they have to commit to. And so things like CDLs help with some of the problems associated with hard
disk drives. So that's, those are some examples. Yeah, so you and I talked about that one,
Rick, some time ago, CDLs, and I thought that was a really interesting one because,
just to remind everybody, and if I just make sure I've got it right, my understanding of that
technology was the idea that in high-scale environments, rather than suffer latency,
you decide it's sometimes better just to say, this IO isn't going to complete in my time frame, and you just fail it rather than actually wait for it to complete, and as a result you can go and maybe find that data somewhere else. So you just basically say, here's the limit within which it needs to complete; if it doesn't, tell me it's failed rather than me waiting forever. And it helps you sort of manage tail latencies, because you might be able to get that data from
a mirror copy or somewhere else.
And therefore, you're not suffering the tail latency issue.
Yeah, it's an interesting one because the concept originated within OCP.
And they called it OCP fast fail.
To fast fail an IO, you know, to your point, right?
You set a limit and you can fail that IO without bad things happening, you know, logs getting thrown and things like that. A lot of times, multiple commands will go out, right, reads to a couple of different drives, and then whoever responds first wins and the others get failed. And then that manages the latencies. And I've done a couple of presentations on this, and there are some very interesting numbers starting to come out from experiments that we've done, that some of the drive guys have done, on this technology, and it's very compelling.
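To make the fast-fail idea concrete, here's a conceptual host-side sketch. It's not the actual SCSI CDL mechanism (real duration limits travel in the command itself and are enforced by the drive), and read_replica is a hypothetical stand-in for a read against one mirror copy.

```python
# Conceptual sketch of the "fast fail" idea behind command duration limits (CDL).
# Host-side illustration only; real CDLs are carried in the SCSI command and
# enforced by the drive. read_replica is a hypothetical callable that reads one
# block from one replica and returns the data.
import concurrent.futures as cf

DURATION_LIMIT_S = 0.050  # e.g. give up on any read that hasn't completed in 50 ms

def read_with_fast_fail(block, replicas, read_replica):
    """Issue the same read against several replicas and return whichever result
    arrives within the duration limit; abandon the stragglers instead of waiting."""
    pool = cf.ThreadPoolExecutor(max_workers=len(replicas))
    futures = [pool.submit(read_replica, dev, block) for dev in replicas]
    done, _ = cf.wait(futures, timeout=DURATION_LIMIT_S,
                      return_when=cf.FIRST_COMPLETED)
    # Fail the slow commands rather than retrying them -- this is what tames the
    # tail latency (threads already running simply have their results ignored).
    pool.shutdown(wait=False, cancel_futures=True)
    if not done:
        raise TimeoutError(f"no replica answered within {DURATION_LIMIT_S * 1000:.0f} ms")
    return next(iter(done)).result()
```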
And not to disrespect the hard drive manufacturers, because they've done an amazing job, you know, in continuing to improve the technology, but naturally as you increase the capacity of drives, then your IO density is going to be challenged all the time, because you've got more and more capacity on effectively very similar speeds or similar interfaces. So there are techniques you need to help manage that IO, and I know with things like SMR there are host-based management techniques to allow the host to actually read and write that data in a more effective manner, that tries to
smooth out some of the issues you have with things like SMR, right?
Yeah, and as you know, SMR is just a more efficient way of laying down the tracks and increasing the areal density that way. To your point, though, there are host implications, right?
The host does have to be aware of how it's writing the zones and where those zones are.
So there is overhead associated with technologies like SMR.
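For a sense of what those host implications look like, here's a minimal sketch of the bookkeeping a host-managed SMR zone imposes. The zone size and the in-memory model are illustrative only, not a real zoned block device API.

```python
# Minimal sketch of host-managed SMR bookkeeping: each zone must be written
# sequentially at its write pointer, and space is only reclaimed by resetting
# the whole zone. Sizes and the model are illustrative.
ZONE_SIZE = 256 * 1024 * 1024  # e.g. 256 MiB zones

class Zone:
    def __init__(self, start):
        self.start = start
        self.write_pointer = start  # next byte offset that may be written

    def write(self, offset, length):
        if offset != self.write_pointer:
            # Out-of-order writes are rejected -- this is the "host implication":
            # the filesystem or application must lay data down sequentially per zone.
            raise IOError("write must land exactly at the zone's write pointer")
        if self.write_pointer + length > self.start + ZONE_SIZE:
            raise IOError("write would overflow the zone")
        self.write_pointer += length

    def reset(self):
        # Reclaiming space means resetting (and rewriting) the whole zone, which
        # is the trade-off SMR makes for higher areal density.
        self.write_pointer = self.start
```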
But this is, just to go back and re-emphasize that, this is what you're saying is being
added into SAS.
I mean, this is the awareness that SAS has to deal with these sorts of issues
so that as part of the protocol, this isn't necessarily,
this is a standard.
It is part of the protocol.
It isn't necessarily anything proprietary.
It's absolutely something every vendor can actually,
or every user can take advantage of.
Correct.
And I think that's quite relevant, by the way,
because this isn't like proprietary extensions onto a system that allows somebody to build something that they say, oh, well, we've built this because this fixed the problem.
These are industry standard features in the platform.
Yep.
And all very well documented.
T10 is the standards body that controls this.
It's all published information.
It's all available.
Excellent. I just wanted to go back and talk about the scalability side just a little bit,
because when we think about drive systems, it's funny, having had a lot of sort of background in
the storage industry and looked at storage array design and things like that, it's amazing to see
that today. People might not think it. People might think that a lot of systems have migrated fully to NVMe,
but actually a lot of systems are still a mix of NVMe and SAS
because SAS provides the scalability at the back end,
which allows you to put in shells and shells and shells of JBODs
and have only just small amounts of connectivity between them
to make that work effectively.
So that's, I think, something you certainly can't do very easily with NVMe
because we haven't got NVMe switches to the same degree
or PCI switches to the same degree.
And I think the scalability is quite interesting
because if you look at it, I think it's, I mean,
you're talking about thousands of drives you can put behind SAS controllers,
I guess, or at least hundreds behind a single SAS controller.
Yeah, I mean, the theoretical limit is 64,000, but the practical limit is, you know, in the thousands, the low thousands, but very seamlessly too, right? It's all hot add: you can add enclosures, you can add individual drives, all with uninterrupted traffic.
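As a back-of-the-envelope illustration of that native fan-out, here's a quick sketch; the port counts are assumptions made up for the arithmetic, not figures from the episode.

```python
# Back-of-the-envelope SAS fan-out (all port counts are illustrative). A single
# initiator connected through two tiers of expanders reaches thousands of drives
# natively -- no protocol conversion, just more expanders.
HBA_PORTS = 8                # lanes out of one SAS initiator
EXPANDER_PORTS = 48          # ports per expander, e.g. a JBOD expander

drives_behind_one_expander = EXPANDER_PORTS - 1                      # one port goes upstream
drives_two_tiers = HBA_PORTS * (EXPANDER_PORTS - 1) * (EXPANDER_PORTS - 1)

print(drives_behind_one_expander)  # 47 drives off a single expander
print(drives_two_tiers)            # 8 * 47 * 47 = 17,672 -- well into the thousands,
                                   # still under the ~64k addresses a SAS domain allows
```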
And this comes back to a couple of the pieces you discussed at the very beginning there, where you're talking about the requirements of different people within the infrastructure. You
know, the architect wants to design something that's going to be reliable, but then there's
the person who's going to have to support this. There's going to be somebody who has to go in and
swap drives out and occasionally replace them. You know, maybe they don't get replaced all the
time, but when they do, anything that goes into a cabinet is an interruption.
Anything that means you're pulling drives is a risk.
So you want reliability.
You want that back-end interface to be able to recognize hot drives or drives coming in and out being hot plugged without you causing an issue.
Yeah, that brings up another one.
I just thought of another feature that is implemented in SAS, and that's logical depopulation.
Right. Okay.
A lot of people refer to it as depop, right? As these drives get larger and larger,
and the platter count goes up, you know, 9, 10, possibly 11 platters, when something fails,
whether it's a head or the media, to your point, to send somebody into the data center to go find that drive,
you know, pull it out, rebuild it.
That's a risk.
It's expensive and it's a risk, right?
And so if there were a way to say, okay, well, you know,
my computer is sensing that there's a high degree of head errors
or media errors on this one platter,
then what can happen is they find out
what part of it is still good.
They can take that data and put it on another disk,
another platter, and then logically depopulate
that platter from the drive.
So if it's a, I don't know, so I can do the math.
If it's a 20 terabyte drive and it's 10 platters
and you remove one platter, then you just,
now the drive reports itself as 18
terabytes, and nobody has to go into the data center.
Yeah, the latest drives, they're saying, are sort of three terabytes plus per platter, and ultimately, whatever the capacity is, if you've got say 10 platters, you're looking at a 10% loss, and you might only lose one side of that, it might be one head that's damaged or something like that, so it might only be 5%. And I think if you look at an environment where you deploy your own data on top of the infrastructure, nobody's going to say that you're guaranteed to be at 100% on every single infrastructure component all the time. You might well, for example, be loading everything up and have an infrastructure that's 60 or 70% used, in which case losing a platter doesn't necessarily directly affect the data that's being stored, but it certainly affects the operation of the drive, in terms of the drive logically wanting to fill itself in its entirety. So even though it might not affect the actual capacity of the system directly, it affects the operational use of that system. So being able to logically fail a platter is actually a massive operational benefit, not necessarily a capacity challenge.
Yeah, correct.
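The arithmetic Rick walks through above, as a quick sketch; the 20-terabyte, 10-platter drive is just the illustrative example from the conversation.

```python
# Quick arithmetic for logical depopulation (illustrative numbers from the
# discussion, not a specific product).
capacity_tb = 20
platters = 10

after_losing_one_platter = capacity_tb * (platters - 1) / platters
after_losing_one_head = capacity_tb * (2 * platters - 1) / (2 * platters)  # one surface only

print(after_losing_one_platter)  # 18.0 TB -- the drive simply reports itself smaller
print(after_losing_one_head)     # 19.0 TB -- ~5% loss if only one head/surface is mapped out
```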
Excellent. Okay, so, I mean, gosh, you know, we talked about some what seemed like pretty logical things there, but actually, in terms of scale, it's potentially very difficult to do that sort of level of management without SAS. I certainly can't think how you'd do it on NVMe, but it seems that SAS has sort of evolved to meet those requirements as part of its evolution, and it seems that that's the only way we could really do this in any practical way, I think.
Yeah.
And so a couple of comments to that.
Number one, NVMe does have management, with the NVMe-MI management extension.
And so they are thinking about it.
They are working on it.
But SAS, it's been built in since the beginning, with SMP and SES. And then there are all sorts of tools, whether they're proprietary or open-source tools, that are all written around that, to use those two different layers of the SCSI protocol.
Right. That's really important. And what about touching, just finally in this section, on the pricing? I'm very interested in the fact that in your presentation you had a little report that showed the difference in pricing between hard drives and SSDs in terms of the interface. Is that really something that people should be aware of?
Well, so let's call it cost, right, the cost of the media, comparing a comparable SSD to a nearline HDD, so QLC, right? It's funny, you can find articles that say, oh well, the crossover of SSDs and HDDs has already happened. That's comparing a QLC NVMe drive, for a flash drive, to, you know, a 15K SAS drive, right? And that's apples and oranges.
If you compare comparable devices, right now the cost delta is between 5 and 6X.
Now, it was significantly higher a number of years ago.
It was on the order of 10X.
And right now we see it plateauing at about that.
With innovations like HAMR, you know, it may even start trending the other way, depending on how pervasive that technology becomes, and when we start seeing 30-terabyte and 40-terabyte drives as the mainstream.
That's going to, you know, change that equation even more.
And it's interesting because in surveying the hyperscale customers, the crossover seems to be at about 3x.
You know, they claim once an SSD can get within 3x the cost of a nearline or a QLC drive can get within 3x of a nearline drive.
Right.
Then it starts to become compelling and things might start to shift.
But right now, we don't see that happening in the near future.
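A quick sketch of the ratio being described, with made-up dollar figures purely to illustrate the 5-6x gap and the roughly 3x crossover point the hyperscalers quote.

```python
# Illustrative cost-per-terabyte comparison (prices are placeholders; only the
# ratios discussed above matter). Today the gap between a QLC SSD and a
# comparable nearline HDD is roughly 5-6x; hyperscalers say interest shifts at ~3x.
nearline_hdd_per_tb = 15.0                     # hypothetical $/TB
qlc_ssd_per_tb = 5.5 * nearline_hdd_per_tb     # ~5-6x today

ratio = qlc_ssd_per_tb / nearline_hdd_per_tb
CROSSOVER_RATIO = 3.0                          # where it starts to become compelling

print(f"current ratio: {ratio:.1f}x -> compelling at <= {CROSSOVER_RATIO:.0f}x? {ratio <= CROSSOVER_RATIO}")
```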
Could it happen sometime? Yes, absolutely. With new bit cell architectures on the NAND side,
there are a lot of things that could change it, right? Because remember, the NAND and SSDs aren't
standing still either. But for the next foreseeable future, five to 10 years, that crossover won't
happen. The interesting thing I think is when
you look at those two technologies, the QLC media has started to exhibit similar issues to hard drives, in the sense that as we scale up QLC media to much larger capacities, the endurance has now become more of an issue in terms of how many times you can write to it. But not only that, the latency of actually reading and writing to it
is very different compared to what, say, SLC would be.
So a bit like fast hard drives were great, bigger hard drives were slower.
We've now seen the similarity in the SSD market where SLC was smaller but faster.
QLC is bigger but slightly slower. And it shows that
across our industry, we have this constant hierarchy of technologies that we have to deploy,
whether it's DRAM, SSD, hard drive, dare I say tape, tape's still got a place. And ultimately,
at each of those levels, you're looking at the most cost-effective way to deliver that,
which continues to bring you back to the idea that there's always going to be a place for things like SAS, because that cost profile is always going to be a consideration in storing your data.
That's good.
You brought up tape. Hyperscalers, I mean, you can go, if you go do your homework, the hyperscalers use a lot of tape, and it's actually driving innovations in tape right now as well.
So not real popular topics, but real nonetheless.
Do you know, there's an AI angle to this, and it's a very tenuous AI angle, but one thing AI is doing is generating larger and larger volumes of data that we need to be able to move in and out of very expensive compute environments to process and do something with. If people are going to spend, well, what did Mark Zuckerberg say, 10 billion dollars on 350,000 GPUs or something crazy like that, if people are going to build out massive compute infrastructures like that, or even if we're going to rent them, we're going to have to have techniques to move data in and out of those platforms.
And we're going to have to have somewhere else to put that data when we're not crunching it.
And it's going to have to be relatively quick to get it in and out.
And I think that's why I can see there can be tiering coming in where, you know, we use maybe Flash to access it mainly. Sometimes we put it back down onto a hard drive
because it's the next layer down
in terms of making it sort of just about ready to be used.
And then maybe we archive it completely on tape
when we want to keep it, but save it for another time.
So I think our industry will still have those tiers within it,
even with AI.
Absolutely.
AI has made, you know, huge leaps and bounds in the recent past, but one of the fundamental things that's
enabling it is the amount of data, the content that has been created and stored that those GPUs
can go and put into their models and work on. You think of the surveillance data that's collected
on every street camera and everything, and then that has to get saved, and then it gets worked on.
I mean, they go and search it, whether it's traditional AI servicing a big model or more computational storage, going and finding a blue car or whatever you're looking for in the surveillance data. That amount of data is not slowing down. While the computational side of AI has kind of gotten the spotlight recently, it still relies on a lot of data, and that data is still coming in, and it still has to be saved, and it still has to be accessible.
Yeah, exactly. So my assumption is, then, that we're not necessarily
all moving everything to NVMe tomorrow.
You've got a nice graph, I think, that shows exabytes shipped,
I think, which is quite interesting.
And I think that one, for me, sort of gives you a good indication
of that sort of tiering model in terms of what people still want to use
different devices for and different protocols for.
Yeah, if you look at the exabytes shipped over time, over the past 20 years,
and then pick your favorite source, information source moving forward,
the forecast all say that right now we're at maybe 10% of,
well, let me say 90% of all exabytes shipped are behind SAS infrastructure. So that
would be SAS HDDs, SSDs, SATA HDDs, and SATA SSDs. So 90% of all exabytes shipped are behind
the SAS infrastructure. And it's going to stay that way for a long time, right? I mean, you know,
you may see some growth away from that, but not significant. So the SAS infrastructure still services a very important part of the storage ecosystem.
Yeah. How does that fit in terms of things like power consumption? Another one of the charts I thought was quite interesting, you had a power consumption comparison.
And I think I've seen a few things around the industry in the last little while where people have said,
oh, yeah, you know, hard drives are using far more power than an SSD.
And then somebody will come back and say, well, actually, it depends on the mode they're operating in.
And then you look at it and think, well, OK, if it's operating in a busy mode, an SSD is going to be getting warmer
because certainly the ones I've got on my desk do that
when they're busy writing data.
But if you're using an SSD and it's just idling all the time,
well, you're not really getting the most out of the SSD.
So you want it to be active.
I mean, so there's sort of like a real balance
between the two here, isn't there?
Yeah, no, and so power is an interesting one.
You know, to put together the chart that you're referring to,
I went out and researched and talked to many experts,
and it was interesting because depending on, you know,
how you search or how you ask the question,
you're going to get a very different answer.
And you can go find articles saying that, you know,
SSDs are far superior in power performance over HDDs. And then you can find
completely the opposite. And so what I did was I worked with a number of the hyperscalers,
domestic hyperscalers, and, you know, asked them questions like, you know, what's a normal,
you know, workload, how active is a drive in, like, a nearline tier, right? So, a good example: you look at what an NVMe drive might be doing in, you know, a caching tier, a close tier, a hot tier, and it's just going to be getting hammered the whole time. It's going to be working really hard, pushing a lot of data. Nearline, the drives are going to be spun down a lot of the time, and there's going to be a lot of downtime.
So what I did was I took common HDD and then a comparable SSD, and I compared the two using a workload that, again, I generated from working with the hyperscalers.
And then I normalized by capacity.
So the actual metric I use is terabytes per watt.
And what I found was, you know, in read-intensive workloads, there's a slight advantage in HDDs.
But on the write-intensive workloads, there's almost a 50% advantage with HDDs.
So, again, you know, you have to be careful with how you put the
numbers together. But I believe that HDDs do have a compelling power advantage in a nearline tier
where they're designed to be in these high capacity nearline tiers.
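A minimal sketch of the terabytes-per-watt normalization Rick describes, with placeholder wattages and duty cycles rather than the measured figures from the presentation.

```python
# Sketch of the terabytes-per-watt metric: capacity divided by average power
# under a workload mix. All wattages, capacities, and duty cycles below are
# illustrative placeholders, not measured data.
def tb_per_watt(capacity_tb, active_w, idle_w, active_fraction):
    avg_power = active_fraction * active_w + (1 - active_fraction) * idle_w
    return capacity_tb / avg_power

# Nearline HDD: large capacity, modest power, mostly idle in a capacity tier.
hdd = tb_per_watt(capacity_tb=24, active_w=9.0, idle_w=5.5, active_fraction=0.2)
# Comparable QLC SSD: assumed numbers purely for illustration.
ssd = tb_per_watt(capacity_tb=30, active_w=20.0, idle_w=5.0, active_fraction=0.2)

print(f"HDD: {hdd:.2f} TB/W   SSD: {ssd:.2f} TB/W")
```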
And interestingly, if you look at, say, HAMR, obviously it's termed heat-assisted magnetic recording, so there's a little bit of extra power needed for those drives, I think around just over 10 watts, compared to about eight or so, 8.9 or 9, for a traditional CMR-type drive. But if you look at that, as they increase the bit density, that power shouldn't change. So if that drive doubles in capacity, you shouldn't expect a massive increase in the power demand, so the terabytes-per-watt value should actually improve for larger and larger drives.
Right, so again, that's
part of the numbers game right because if you're just looking at power that's one story but you
know from a you know a hyperscalers perspective it's all about
you know that that slot you know yes what what's the storage density what's the power density of
that slot and that's why i normalized it terabytes per watt funny enough you say about slot the slot
cost in the mid-2000s when i used to do a lot of design for systems, when we were building systems out for customers,
I would use the per slot mechanism as a way of working out cost because with per slot you could work out how much you're going to get in a rack.
So you could look at floor cost, you'd look at power because you knew how much each slot would take in terms of power,
but also you knew how much you'd get in terms of spindle performance and various other things. So the slot was generally sort of like our central point in terms of working
out all those calculations. So it's interesting to think that it's still something that the
industry looks at today. Okay, so I think all of that's really interesting and it sort of points
to why we see SAS still having such a big place in the hyperscalers. Other than what you've already discussed, can you give us any good examples of where this is being done, and the sort of scale we're talking about?
Yeah, so a really good example is Meta's recent OCP submission. They submitted their latest storage server, which they call Grand Canyon. Its nearline infrastructure is all 24 gig SAS.
Now it does use nearline SATA drives,
but it uses a SAS initiator, SAS expanders,
all the SAS management to do that.
And this public information,
you can go find it on OCP's website.
It's very well documented. And so that's a good one, right? That was a recent architecture that was developed, and a lot of the things that we've been talking about went into it: the scalability, the reliability, the cost. How do they meet those objectives? And the way they meet them is by using a SAS infrastructure for their nearline tier.
Okay, I'll go and dig that out, and we'll put a link to it in the show notes so people can have a look at that for themselves, because that sounds like an interesting use case. So, Rick, it all sounds really interesting. I think we should definitely
get people to go and look at the OCP website
and have a look at that.
As I said, we'll put a link to the show notes in for that.
But I think if anybody is actually interested in learning a bit more about this
and exactly what you do and what the industry is doing to promote this,
where should we point them to?
A good reference is our website.
We are on snia.org, so www.snia.org slash sta-forum, that's S-T-A dash forum.
That's the best site.
And then social media, we're on all the social media channels.
We have a Twitter handle.
We're on LinkedIn and YouTube as well.
Brilliant.
Do people use Twitter anymore?
I don't know.
It seems to be one of those platforms that it seems to be in a bit of a,
I don't know, sort of sitting there floating around,
not really sure about anymore.
But there are lots of other alternatives,
and we'll make sure we'll put links to everything that you've got.
But for now, Rick, thanks for your time. It's been really interesting to have the chat,
and I look forward to catching up soon.
Yep.
Thanks, Chris.
Bye.
You've been listening to Storage Unpacked.
For show notes and more, subscribe at storageunpacked.com.
Follow us on Twitter at Storage Unpacked or join our LinkedIn group by searching for Storage Unpacked Podcast.
You can find us on all good podcatchers, including Apple Podcasts, Google Podcasts, and Spotify.
Thanks for listening.