Grey Beards on Systems - 162: GreyBeards talk cold storage with Steffen Hellmold, Dir. Cerabyte Inc.
Episode Date: February 21, 2024. Steffen Hellmold, Director, Cerabyte Inc., is extremely knowledgeable about the storage device business. He has worked for WDC in storage technology and possesses an in-depth understanding of tape and disk storage technology trends. Cerabyte, a German startup, is developing cold storage. Steffen likened Cerabyte storage to ceramic punch cards that dominated IT and pre-IT over …
Transcript
Hey everybody, Ray Lucchesi here with Keith Townsend.
Welcome to another sponsored episode of the Greybeards on Storage podcast,
a show where we get Greybeards bloggers together with storage and system vendors
to discuss upcoming products, technologies, and trends affecting the data center today.
We have with us here today Steffen Hellmold, Director of Cerabyte Inc.
So Steffen, why don't you tell us a little bit about yourself and what's new with Cerabyte?
Hello Ray, wonderful good morning. It is a pleasure to be on this podcast today.
And what is new about Cerabyte is we established the US entity at the end of 2023.
And with that, Cerabyte Inc. was born, which is a subsidiary of Cerabyte GmbH, a German limited liability company.
And what has Cerabyte got to do in the storage business, Steffen? Cerabyte has a very innovative new storage
technology that is ideally suited for cold storage, especially storing data for a long
period of time, very cost-effectively and sustainably. It uses a ceramic-coated
glass media that is written with a femtosecond laser and allows you to store data for a virtually unlimited time,
in a similar way to how you can imagine a ceramic punch card would work.
Ceramic punch card. Now you're going back to my roots here. That's interesting.
I've, yeah, well, it's a different story, but I've done punch cards before. And so it's,
is it a Hollerith code kind of thing, or is this more sophisticated than that?
Well, the reason why I like to use ceramic punch card is you want to give people an image, right, of what is the technology in a very easy to understand way, right?
And it starts with basically clay tablets, which are a proof of concept over the last 5,000 years that ceramic storage
has very great longevity.
The reason why I call it a ceramic punch card is because you have a thin nano layer, basically,
on a ceramic layer on top of a glass sheet, and you use femtolasers to ablate dots in that and write with a matrix onto the sheets.
And with that, you can basically encode the information onto it.
And that is what brought me to the analogy of calling it ceramic punch cards, because it is kind of like you're punching holes into that nanolayer, in this case, with a femto laser.
And so it's, let's say, like a card-based kind of solution that on each card you could store some number of, I don't know, gigabytes, terabytes, something in that order?
Yeah, you can think of it as they are basically square formed ceramic coated glass sheets.
Think of it as very similar to your screen protectors like Gorilla Glass that you have on your cell phone.
And that is coated with a thin ceramic layer basically on top of it.
Those can store information on both sides of the surface that carry those ceramic layers. And then you have a number of sheets within cartridges that typically are the size of
an LTO cartridge, which allows us to use basically LTO automation to build entire data center
racks.
So give us a sense of scale.
How big are these cards and sheets?
Yes, the sheets are about nine by nine centimeters in size, of which eight by eight centimeters are actually used for active data storage. The reason for that is they
need to fit into the form factor of an LTO cartridge in order to use those cartridges in a system. And we have showcased all of that as a working prototype
already, where you can demonstrate today the writing of gigabyte-scale sheets at megabyte-per-second
bandwidth, you can store them in a rack,
as well as you can subsequently retrieve them
at similar speeds.
And all of that is giving us a full demonstration
of an end-to-end workflow
that can showcase how the technology works in principle.
With that, the next step would be to scale it up from here.
And so how many sheets can fit in an LTO cartridge form factor?
Today we have three sheets in the initial demonstrator.
We are planning to go and increase that to more than 100 sheets in order to
scale the density up for the overall solution.
So I assume the idea of using the same form factor as LTO cartridges is so that we can take advantage of existing physical infrastructure like robot arms, etc.?
That is correct, exactly. The idea here is to basically innovate using as many off-the-shelf components as possible,
so you can build a proof of concept with a fairly low budget. The company has been able to do this
within a period of about 18 months and less than $7 million. And building a storage prototype
right for that budget and for that timeline, that is quite an accomplishment.
Right, right, right, right, right.
And so the interface, let's say the protocol that you would be using to access read and write,
or at least read or write, I guess,
the solution would be LTO tape protocols,
or how does that work?
You would actually use protocols that are fully transparent
to what you're using today, so think of an S3 interface, right? And yes, you could, for
example, use that as a virtual tape library if you want. As you know, tape also has
the option to operate in a write-once mode, right, which would be basically aligned with
the Cerabyte write-once media technology.
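For readers who want to picture the access path described above, here is a minimal sketch of archiving and retrieving an object through an S3-compatible interface. The endpoint URL, bucket, and key are hypothetical placeholders, not a published Cerabyte API.

```python
# Minimal sketch: cold-archive access over an S3-compatible interface.
# Endpoint, bucket, and key names are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://cold-archive.example.com",  # hypothetical S3-compatible endpoint
)

# Write once: the object is written a single time and never rewritten in place.
with open("backup-2024.tar", "rb") as data:
    s3.put_object(Bucket="deep-archive", Key="2024/backup-2024.tar", Body=data)

# Reads use the same transparent interface.
obj = s3.get_object(Bucket="deep-archive", Key="2024/backup-2024.tar")
payload = obj["Body"].read()
```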
So help us understand the problem we're solving here because tapes,
you know, they, LTO tapes have been around for my whole career.
Yeah.
And I can go back to, you know,
the start of my career and still recover data
from probably the start of my career.
I actually ran a restore of some Windows NT servers
just a couple of years ago.
Yes.
So first of all, as you know,
the golden rule of archiving is that you should store your archives on two different media technologies.
Tape today represents only one.
People are also using disk storage, right?
But that wasn't intended to be actually used for archive storage.
So augmenting what we have today with tape as an archive storage, that would be the first
problem to solve.
The second problem is that you have a longevity issue that you need to ultimately migrate
data at some point in time.
Typically that happens every five to seven years and that is often not necessarily because
of the media longevity itself, right, to the point that you made.
But it is often driven by the availability of the right electronics
in order to make sure that not only do you have your archive storage media,
but you also have the ability to actually read the data back.
And on top of all of that, capacity scaling is another factor:
we need to increase the capacity by about 100x from what we have today.
And while tape certainly can deliver 10x, right,
from a theoretical perspective,
it is questionable whether it can deliver 100x, right?
Neither can any of the other current mainstream storage technologies.
And that's another factor to consider. And then all of that,
of course, from a cost scaling perspective, has to also be scaled down by similar orders of
magnitude. And then lastly, you need a media technology that, when it is disposed of, is recyclable, or has a sustainability profile that is
most favorable and does not require a lot of energy, power, or expense to basically be
disposed of at the end of life.
And all of these problems have to be solved in order to scale up, if you will, from
the current storage scales to the yottabyte scale.
You're talking Yottabyte scale.
If there's, I don't know, 100 gigabytes per surface
and there are 100 of these storage cards in an LTO cartridge,
and each of those has two surfaces, so there's 200 gigabytes per card, and 100 cards.
So it's still on the order of what an LTO holds, I mean, an LTO cartridge today is 20 terabytes, right?
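Here is the back-of-the-envelope arithmetic behind that question, using the illustrative figures from the conversation rather than Cerabyte specifications.

```python
# Back-of-the-envelope cartridge capacity, with illustrative figures only.
gb_per_surface = 100        # hypothetical per-surface capacity
surfaces_per_sheet = 2      # data is stored on both sides of each sheet
sheets_per_cartridge = 100  # planned target mentioned above (the initial demo holds 3)

tb_per_cartridge = gb_per_surface * surfaces_per_sheet * sheets_per_cartridge / 1000
print(f"{tb_per_cartridge:.0f} TB per cartridge")  # 20 TB, roughly LTO-9 class
```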
Yeah, yeah. You can certainly see that an LTO-9 cartridge, right, is in that general vicinity that you referenced. If you scale it up from there with the optical femtosecond laser domain, you'll get to a hundred petabytes per rack, which you could argue tape may also be able to deliver. The ultimate scaling to exabyte scale per rack is actually delivered through particle beam technology, which allows you then to get down to, basically, a few nanometers in bit size, which is comparable to what DNA data storage holds as promise.
Well, so you would be moving from today's demonstration solution using lasers to something that's more of a particle beam read/write head kind of thing? Is that it?
Yeah, think of it the following way.
So in the tape textbook, right, or the tape playbook,
tape trails disk by about 12 to 15 years, right?
And that was a decision that was made in order to use amortized disk drive technology.
So at Cerabyte, we deploy a similar foundational idea:
to use amortized semiconductor fab tool
technology.
So today's demonstrator is actually a maskless lithography tool repackaged in a data center
rack, right, and adopted for storage.
The reason why we can project out 20 years from now is because in 20 years, today's semiconductor fab tool technology will have been amortized. The challenge today is that it'll cost you more than $100 million, right, per tool.
And that is not yet at a level that it's affordable for storage, right?
But it's foreseeable when you follow the path of semiconductor technology scaling
that you eventually will get to that point.
So you mentioned the DNA stuff.
I mean, there's been a lot of work in the DNA space
trying to make it more amenable to a storage solution.
I think Cerabyte originally had some DNA technology
they were focused on early on.
Why the move from DNA to ceramic glass?
I'm not aware that there was DNA data storage technology early
on, but there is certainly the storing of DNA data, right, in ceramic storage, where we have a
pilot that we're currently engaged in for lunar missions, right, where you basically can store DNA data on
ceramic sheets.
That is certainly something that is another interesting use case.
So are you meaning, and this is always an interesting conversation, DNA as in a DNA sequence, which is
more bio and chemistry than it is computer science?
No, you can read, you can sequence DNA, right?
You can read it and then you take the data that you read, right?
The bases, you can store those, right?
Same as you're doing with DNA sequencing today
for medical purposes, right? You can do that. So it's not actually DNA on the ceramic device.
It's not. Yeah, that's correct. It's the encoded version of the base pairs and that sort of stuff.
That is correct. So where do you see this technology playing?
What sort of role?
I mean, obviously it's cold storage.
Where does cold storage play in the enterprise or the world of IT these days?
Yeah, you have, I would say you can think of this as multiple phases of a rollout, right?
You have the digital preservation, the national archives, right?
That's kind of like one vector that goes then also into the medical field where you have to
retain data for the life of the patient and ideally be able to retrieve it within minutes
if that patient shows up in an emergency room. You have critical infrastructure data, all the way into
financial data, that needs to be retained for a longer period of time.
Think of it as bridges, nuclear power plants and other things. Then there is also a window of opportunity in basically the backup and archiving of primary storage
data, right?
And the ultimate, let's say, largest consumer will likely be the hyperscalers, for
use cases there such as storage of video data, for example, as well as AI training data, as well as the decision trails, audit trails, and
governance for AI, which I think will drive significant need for cost-effective cold storage in several
cases. That storage will only be retrieved if there's any dispute, let's say, within a court case, right? So you have the archives that have only retrieval
upon subpoena. All of these new use cases will drive significant demand for storage on top of
what we have today. And we have just seen the National Academies of Sciences, Engineering,
and Medicine publish a rapid expert consultation where the Office of the Director of National
Intelligence asked how to go about storing exabyte-scale, or hyperscale-type, cold storage
data for a period of 25 to 50 years.
Whoa. When you say Office of the Director of National Intelligence, you're talking about Germany?
I'm sorry. No, that was the Office of the Director of National Intelligence of the United States of America, ODNI.
Some guy from the CIA level wants to store exabytes of data for 25 or 50 years.
Yeah, that's the declassification period horizon that they're anticipating now. Yes,
of course, the details, right, have not been disclosed as part of that, right, but I had the
pleasure to be one of the co-authors of this rapid expert consultation, and I was especially excited
about it because it is a public use case, right, that there is demand for hyperscale type
of cold storage data sets for decades, right? And that's, I think, very important to note.
And, you know, one of the biggest challenges for, you know, long, long-term archival storage is the
format of the data. You know, if you look at, you know, video today,
it's MPEG-4.
If you look at, you know, audio, it's MP3.
You know, 20 years ago,
those sorts of things didn't exist.
And in 20 years, I'm sure the format for video and audio
will have drastically changed.
How do you handle something
where the format itself is changing on a period of, you
know, a decade or so? I mean, you look at doc files and stuff like that. They've undergone
significant change over the course of 10, 15 years. Yes, yes. And these are fundamental problems that you have to solve for all cold storage issues, right?
That that's clear.
And you have multiple approaches in that.
First of all, in today's world, you have media obsolescence, then
you have format obsolescence, right?
That's what you're currently referring to.
And then you have software obsolescence on top of that, right? There will have to be Rosetta Stone-like references, basically, so that archives can be retrieved.
First of all, you need to understand how the data is encoded.
And secondly, you need to understand in which format it is.
And in some cases, you may have to also give means to actually read the data itself back again, right? So meaning that you may have to include
potential software packages with the data
if you want to be assured that you have a
universal format to regain the data. This is all something
that is a general problem that's been discussed and that we,
as an industry,
have already run into for any of the long-term storage considerations. That's not unique to Cerabyte.
It's the same problem that you face with DNA data storage.
Yeah, absolutely correct. I mean, the format problem, the software problem, the media problem,
all those issues exist. The nice thing about something like DNA or even ceramic glass is that the
media itself will probably persist long after, you know, the software and the format have
become non-existent or gone out of fashion.
I was thinking that, you know, in order to do something like this effectively, you'd
almost have to open source the format of the storage media and that sort of stuff.
Are you guys doing anything with open source?
We are working within OCP, in the Storage and Sustainability Workgroup. There is without a doubt a need to have a worldwide repository on how to read certain, let's say,
media formats. So I'm fairly certain that something like that will emerge.
Today, you already have the problem, and I've seen that when I was at the International Council
on Archives Congress, that we don't yet have everything on a worldwide scale,
fully standardized to deal with these type of problems that you described.
So there's work ahead of us at the national and at the international level to tackle this.
So one of the bigger problems or challenges for archival storage, we run into this
problem on the software side. There's been, you know, interesting business development between
Veritas and Cohesity, but, you know, these technologies, especially LTO tape, are extremely sticky. Are we looking at LTO plus?
Are you looking mainly to displace LTO?
What's the addressable market for this type of archival system?
Yeah, that's an excellent question.
So just for reference, I'm a big believer, right, that each new media technology
has its unique characteristics and the same holds true for Cerabyte. We just actually got done
with publishing a co-sponsored study together with IBM and Fujifilm. So to indirectly answer
your question on how basically archive storage is going to be harnessed
with the use of all of the various technologies.
So it's not one or the other,
but think of it maybe as an active archive
from a concept approach,
where you store each bit,
at a given point in time,
depending on its access paradigm,
in a particular storage
medium, in order for it to best be addressed with regard to the needs for access as well as
storage costs and sustainability. You can say that there is, you know, in the report, it's referenced as ultra-frozen, right, cold storage.
So there is that new element, that new segment, right, that is emerging, which is quite sizable, where you have a significant amount of very cold data that is infrequently, if ever, accessed, right?
And so that's, I would say, that's
where you see this emerging at the bottom. And then there's also another tier, which is actually
above tape, in between tape and disk, which has access within, call it, a few seconds.
So just think of it: if you wanted to serve up a video today, then you typically get maybe 10, 15 seconds of an advertisement before you actually get the video playback.
That could be a very attractive application for ceramic data storage.
So as we're thinking about deep storage for stuff like video applications, we can go this route. You know, one of the challenges
for the streaming services,
as their catalogs grow larger and larger,
is kind of these, you know,
what we'd like to call deep cuts,
stuff from 30, 40, 50, 60 years ago,
that may not be in high demand,
but has a cost to keep in either real-time or near-real-time
access. That's right. That's exactly the type of use case, right? And in the report, by the way,
about 35% of all data was earmarked as frozen, 25% as cold, 20% as cool, 13%
as warm, and 7% as hot. So this is kind of like a new, further refined, how to say, data
storage pyramid that I think will give the opportunity to deploy all of the media technologies in a, call it, active archive setting.
Yeah, historically.
So if you're looking at cloud service providers
such as AWS with their Glacier storage,
where would this play in kind of a tier of storage
potentially for a cloud,
a hyperscale cloud service provider?
Yeah, so you can think of this solution as ceramic data storage to be deployable in an accessible
fashion, as well as in the standard archive fashion where you have a vault type of storage,
right?
With that, it can address up to 80% of the data stack in totality, depending on implementation model.
When you think about hyperscalers,
it could be offered as new,
basically storage services to hyperscalers,
but at the same time,
it can also be deployed on-premise.
That will be a question of basically choice
from a customer perspective.
Given the fact that the writing itself is the most costly aspect of it, there's a good
chance that that will be basically an ideal starting point for cloud service providers,
right, where you leverage centralized infrastructure to generate the media that you then fairly inexpensively can read back.
So you're saying that something like Cerabyte could fit
almost above and below Glacier in price and access density
or access performance. Is that how you read that?
Yes. So when you think about cloud storage offerings today, take a dollar a terabyte a month;
I think that that will further evolve and ultimately will get down to, you know, dollars a petabyte a month.
Right.
So from a scale perspective, from an offering perspective, you have to have the cost structure in order to not only take the density up 100x,
but you also need to be able to take the cost down in the first instantiation by 10x, right?
And then ultimately 100x.
That's in order to be, let's say, competitive, right?
For example, vis-à-vis, take disk as a benchmark:
you ideally want to be positioned an order of magnitude below disk.
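As a rough illustration of that cost-scaling argument, the sketch below simply converts the quoted $1 per terabyte per month into a per-petabyte figure and applies the 10x and 100x reduction steps; the numbers are the ones quoted in the conversation, used purely for illustration.

```python
# Cost-scaling targets: $1/TB/month today, then 10x and 100x reductions.
usd_per_tb_month_today = 1.0

for label, factor in [("today", 1), ("after 10x reduction", 10), ("after 100x reduction", 100)]:
    usd_per_pb_month = usd_per_tb_month_today * 1000 / factor
    print(f"{label:>20}: ${usd_per_pb_month:,.2f} per PB per month")
# The 100x step lands around $10/PB/month, i.e. "dollars a petabyte a month".
```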
Yeah, that's where tape has sort of found its sweet spot over the course of, God, 60, 75 years.
You know, the challenge with, you know, we've had different archive storages emerge over the, you know, over the last 50 years or so.
I mean, holographic storage was big.
I mean, there's been various ceramic storage solutions out there.
There's been, you know, obviously DNA storage is the most recent iteration of that.
The challenge has always been that, you know, none of these storage technologies are standing still. Tape is, you know, LTO 9 today, LTO 10 in two years, you know, they're
double the storage. And then disk, of course, is not standing still, and neither is SSDs and
NAND flash. They're all moving at an almost constant, fairly dramatic exponential pace here, where they're increasing density and decreasing costs.
I mean, can something like ceramic technology... you know, mind you, these organizations, disk, tape, and NAND, have billions and billions of dollars of revenue coming in, and they're devoting a billion dollars plus to R&D.
How can something like ceramic technologies like this continue to maintain, you know,
an order of magnitude better, you know, better cost per bit or access density or whatever
than these other technologies?
Yeah.
So you made several points.
Let me digest them.
So first of all, there is a slowdown of scaling.
It's definitely there, right?
But you will see continued scaling.
There's no question about it,
but the rate and pace of scaling is slowing down, right?
You already have seen that from LTO-8 to LTO-9.
We'll have to see how it goes to LTO-10.
That's one, right?
Just to probe on that point.
Yeah, the rate of scaling goes up when tape gets a new head technology and media technology,
and those sorts of things fall back to, you know, whatever the baseline rate is. But over time, you know, over the course of decades, they've been able to maintain a fairly reasonable, constant scaling.
Yeah.
Again, the rate and pace of
scaling, there's data that you can
see when you plot the density
advancements, it has
slowed
and is expected
to continue to slow.
There's a reason why we also
get a lot of interest from the main storage
providers today into this
technology, to be a complementary offering, right? And the thing that was, for me, actually quite
telling: when you look into the storage industry and the R&D spend that you referenced,
I assure you that for at least disk drives and tape, it's not in the billions of dollars.
It may be within the world of NAND flash memory.
So the rate and scale of investment also has slowed.
That's another piece as well.
And then keep in mind the reason why I was also making the connection
to the semiconductor fab tool industry:
with ceramic data storage we have the benefit
of riding on the coattails of a trillion-dollar industry,
the semiconductor industry, right? And that's where you have the largest, let's say, R&D budget,
right, to work on core technology that can then subsequently be adapted to storage, right? From
that perspective, I'm fairly confident that the rate and pace of scaling
will allow it to be ahead of other technologies.
The other thing is, in a general concept by itself, right,
using a Gorilla Glass that is coated with a ceramic layer,
that media is extremely inexpensive.
The largest cost, actually, in the current demo system
is the femtosecond laser technology. That's where you have the highest burden of initial cost for
writing. So from that perspective, I think the inflection point will be when we expect to see the cost crossover with disk as well as with tape. And
you can say that the cost crossover for disk is definitely within the remainder of this decade,
and the crossover for tape depends on the rate and pace at which tape is moving forward. If we give
it the benefit of the doubt to move forward as the LTO roadmap says,
right,
it will be some point in the next decade that you will see crossover there as
well.
So, I'm sorry,
one of the interesting things that you're highlighting is the cost
factor of writing versus reading,
a cost factor which we don't have in traditional,
at least not in traditional magnetic-type technologies.
A drive that can write can also read.
Are we going to see a ratio of drives that are read-only drives versus read-and-write
drives?
Yeah, so the technology itself is a write-once technology.
So from that perspective, the only other thing that you can do
is basically to sanitize, right?
That's why I was also using the analogy of the ceramic punch card
because once you punch the hole, right, in the ceramic punch card,
you cannot take the hole back.
You can only punch out all holes.
That's the only thing you can do to ultimately then erase the data,
but you cannot rewrite it.
And as such, this is a write-once media technology, which also has the benefit that it has basically an audit trail, right? And it is a media that you know hasn't been modified and cannot be modified.
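A toy sketch of those write-once, punch-card-style semantics: data can be written exactly once, read back freely, and the only destructive operation is sanitizing the whole sheet. This models the concept only; it is not a Cerabyte API.

```python
# Write-once ("ceramic punch card") semantics: write once, read many, sanitize only.
class WriteOnceSheet:
    def __init__(self):
        self._data = None
        self._sanitized = False

    def write(self, payload: bytes) -> None:
        if self._data is not None or self._sanitized:
            raise PermissionError("write-once media: cannot rewrite or reuse")
        self._data = payload          # "punching the holes" happens exactly once

    def read(self) -> bytes:
        if self._sanitized or self._data is None:
            raise LookupError("no readable data on this sheet")
        return self._data             # reading is cheap and repeatable

    def sanitize(self) -> None:
        self._data = None             # "punch out all holes": erased, never reusable
        self._sanitized = True

sheet = WriteOnceSheet()
sheet.write(b"archive record")
assert sheet.read() == b"archive record"
sheet.sanitize()                      # data gone for good; the sheet cannot be rewritten
```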
So my assumption is that lasers are not put to work, or put in use, when you're reading.
So there's deep storage.
And then, when there's recovery, if I have a ratio of maybe a half dozen drives that have the capability to write, I can have maybe two or three dozen drives that can read.
That should be a lower-cost model to, you know, reduce my overall operating cost.
Yeah, you're using a microscope, right, to read the data. So you don't need a laser to actually
read the data. And that's why I was saying that the most expensive part of the process is the
writing. And the reading is at a considerably lower cost, right?
And that is also why this is, from a use case perspective, ideal for particular sets of data.
Yeah, yeah.
So you mentioned crossover.
You're talking like a NAND crossover with disk and a NAND crossover with tape.
Is that when you said crossover, is that what you meant?
Yeah, when you think of it, for example, for disk,
we have done the calculations projections on that
and you're going to see a TCO,
basically cost crossover within this decade.
And we expect that around the end of this decade,
you are going to see an order of magnitude lower cost for the ceramic data storage technology.
So the crossover is from ceramic storage over disk
and ceramic storage costing less than tape.
Is that how I read that?
Well, again, you have the first crossover
with disk, and the second crossover
then is with tape, subsequently thereafter.
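The sketch below shows how a TCO crossover projection of this kind works in principle: two technologies with different starting costs and different annual cost declines, with the crossover at the year the faster-declining one undercuts the incumbent. All the input numbers are invented for illustration; they are not Cerabyte's or anyone's published projections.

```python
# Illustrative TCO crossover projection; every number here is a made-up assumption.
def crossover_year(start_year, cost_incumbent, decline_incumbent,
                   cost_challenger, decline_challenger, horizon=25):
    """Return the first year the challenger's cost drops below the incumbent's."""
    for year in range(start_year, start_year + horizon):
        if cost_challenger < cost_incumbent:
            return year
        cost_incumbent *= (1 - decline_incumbent)     # e.g. disk TCO per TB-year
        cost_challenger *= (1 - decline_challenger)   # e.g. ceramic cold-storage TCO per TB-year
    return None

# Hypothetical inputs: disk at $20/TB-year declining 10%/yr vs. a new technology
# starting at $60/TB-year but declining 35%/yr.
print(crossover_year(2024, 20.0, 0.10, 60.0, 0.35))   # -> 2028 with these made-up inputs
```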
Right, right, right, right.
And historically, NAND has been discussed as being a candidate to displace all disk.
And there was going to be a crossover.
And to a large extent, that hasn't happened because of continuing scaling from a data density
perspective on both disk and tape.
Although there's obviously certain portions of the disk market that have gone away completely,
high performance disk and those sorts of things.
So I'm just trying to understand what you said from a perspective of crossover.
So today, I guess...
Just look at TCO, right? And you
know that the value proposition of flash, right, is not the lowest cost per bit, but the value
proposition of flash is the lowest cost per IOPS, right, or the lowest energy. From that perspective,
the use case, right, that we always looked at in the past is that with flash, you have 100 times the
performance at 10 times the price of disk drives.
That price differential has come down significantly.
And as such, you can see also that disk drive, of course,
is still holding the position in the mass storage,
where cost is the dominating deciding factor.
But even there, don't underestimate the energy cost, right, that comes along with that.
That's why I was saying that when you look at cost models, right, we're looking more
and more into basically TCO models over time.
And that is where ceramic data storage shines vis-a-vis, for example, disk and ultimately
tape.
And you mentioned the ecological perspective.
Obviously, if it's just ceramic glass, it's relatively easily dismantled into its elements and doesn't have any electronic waste or magnetic waste or special media technologies or head technologies or servo
motors or any of that stuff, right? Yeah, it is, it is. You can literally just shred it and recycle it
as glass, right? That's the advantage of the media technology being very environmentally friendly. Yeah, yeah. We've still got a long ways to go from an IT perspective, but I could see where something like this fits.
I mean, the challenge is, like I say, the scaling factors for disk and tape may have slowed down, but they're still not zero.
So they're continuing to increase in density and decrease in cost, et cetera, et cetera.
You know, creating a cold archive tier like this is going to be a challenge.
I like the fact that you're sort of your technology base is based on a semiconductor investment cycle.
So, you know, effectively, you're taking advantage
of semiconductor equipment that's maybe a decade old
at this point.
Is that how I read that?
Yeah, or even older than that, right?
If you think about the maskless lithography,
that technology is like 20 years old, right?
And so from that perspective,
you have a great cost leverage and benefit.
And again, where you see this emerging first is where you have data that needs to be immutable, immutably stored.
We have a lot of interest from the cybersecurity guys, as well as from folks that want to store data for an extended period of time.
Typically, that will start with centuries or decades of retention horizon.
And there you have just the challenge that if you store it on other media,
let's say you would have to go through a periodic media migration or data migration.
And that is what makes this very attractive.
If you don't have to do that, you can have a significant cost and sustainability benefit.
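A quick bit of arithmetic on that point: if archival media has to be migrated every five to seven years, as mentioned earlier in the conversation, the number of migrations a long retention horizon implies adds up fast. The retention horizons below are illustrative.

```python
# Migrations implied by a 5-7 year media refresh cycle over long retention horizons.
for retention_years in (25, 50, 100):
    most = retention_years // 5   # migrating every 5 years
    least = retention_years // 7  # migrating every 7 years
    print(f"{retention_years}-year retention: roughly {least} to {most} migrations avoided")
```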
Right, right, right.
And keep in mind that, of course, part of why we are also engaged in the OCP
sustainability workgroup is that legislators will likely put a carbon tax on storage as well
as other IT infrastructure equipment in short order.
And that will significantly then influence TCO calculations.
Yeah, yeah, yeah.
It's always an if, right?
I mean, they've been talking about carbon tax in the States for probably a decade and
still struggling to try to
get any of that played out. It's
a little bit easier in
other nations, I'll have to say that.
So we'll see how that plays out.
Yeah, no question.
But there's a cost, right,
to the environmental impact
of IT infrastructure. And there was even,
just recently, a Financial Times article
about some countries already limiting
the build out of data centers, right?
Because they are fearful that they can't deal
with all the power demand that they're going to see.
Right, right, right.
So Keith, any last questions for Steffen?
This has been great.
This has been great.
You know, I followed this conversation way closer and easier than the DNA conversation, so I appreciate no deep chemistry in bioscience.
But you're a biopharma guy, Keith.
You should be up on all that stuff.
I am a biopharma guy.
You're more sequencing than any of us.
This is true.
This is true.
Steffen, is there anything you'd like to say to our listening audience before we close?
You mentioned a co-produced survey of storage technology.
Is that something that's publicly available?
Yes, it is, actually.
Yeah, there are several things I'd love to point your attention to.
One is the study from the National Academies of Sciences, Engineering, and Medicine.
I think that that is an interesting read on the long-term retention of exabyte-scale data.
The other report that I referenced was from Further Market Research.
It's available at furtherdata.com for download free of charge.
And another two events coming up,
I think that will be very interesting,
will be the Storage Technology Showcase
that is at the beginning of March,
as well as, I think, the other event that will come up,
which is going to be mid-April.
That's the Library of Congress putting on
the annual Designing Storage Architectures event for
libraries and archives. And that's going to be an interesting discussion as well.
As far as a public event is concerned, the Storage Technology Showcase is going to be
exactly the venue that you're looking for, where everyone comes together to think about
how we're going to master the onslaught of storage demand.
Sounds like my kind of conference.
Yes.
So send me the links and I'll put them in the podcast post as well.
So this has been great, Steffen.
Thank you very much for being on our show today.
Thank you.
It was my pleasure.
Until next time.
Next time, we will talk to another system storage technology person.
Any questions you want us to ask, please let us know. And if you enjoy our podcast, tell your
friends about it. Please review us on Apple Podcasts, Google Play, and Spotify, as this will help get the word out. Thank you.