Storage Developer Conference - #22: Hyperscaler Storage
Episode Date: October 6, 2016...
Transcript
Hello, everybody. Mark Carlson here, SNIA Technical Council Chair. Welcome to the SDC
Podcast. Every week, the SDC Podcast presents important technical topics to the developer
community. Each episode is hand-selected by the SNIA Technical Council from the presentations
at our annual Storage Developer Conference. The link to the slides is available in the show notes at snia.org slash podcast.
You are listening to SDC Podcast Episode 22.
Today we hear from myself as I present Hyperscaler Storage from the 2016 Storage Developer Conference.
So I'm Mark Carlson. I'm the chair of the Technical Council.
I work for Toshiba.
The Technical Council played a very large role
in this conference that you're attending today.
We are the agenda committee,
along with SW, of course.
It's an honor and memory.
And so we go out and recruit speakers
like Dan and
pick his brain and find out what's going on in the industry,
try and look out a little bit beyond the horizon
and say, what's happening
with the storage industry, right? Why are
sales of enterprise storage going down
or flat, right?
Where's all the money going?
Where's all this data that's growing landing?
And in a lot of cases, it's landing in these data centers that are being purpose-built for the Internet,
for the cloud, for social things like Facebook, right?
So what is a hyperscaler?
A hyperscaler is somebody who's building a lot of data centers
and putting a lot of storage in that data center, right?
Dan's a hyperscaler.
And so are Google, Facebook, Microsoft Azure, and Amazon.
That's in the U.S.
But there's also Baidu, Alibaba, and Tencent in China.
And take any country,
and you can identify the hyperscalers
by who's building data centers,
who's filling them with these best-in-class components,
and doing software-defined storage on top of them.
So that's what's happening now.
As I said, the TC likes to look over the horizon and say,
what's coming down the pike?
We're starting to hear from the hyperscalers some interesting things
about some of the problems that exist when you do this kind of thing at scale.
And one of the largest issues that they are complaining about is something called tail latency.
And you can read some papers on this.
I've got some pointers in here.
So what the TC has done is they've created a white paper
on this whole issue about
what are these requirements coming down from hyperscalers when they're trying to do this at scale.
It's not just hard drives. It's also solid state.
In the future, it'll be more and more persistent memory, as you're hearing here this week in some of the
talks.
So go download that paper.
It talks about some of the standards that already exist that can address this,
as well as some of the standards that are coming along,
some new features in some of the existing interfaces that you know and love. So, you know, one estimate notes that currently, today,
half of all bytes shipped are going to these hyperscalers, right?
Another piece of research said the compound annual growth rate, CAGR,
of this business is around 20.7%. That's huge.
That's bigger than most of our sales growth, right?
And they can and do request specific features
from storage devices, right?
Via the RFP acquisition process, right?
I want a million dollars worth of drives.
So you better do what I want
or else you're not getting the job, right?
And this kind of coerciveness via financial power
is changing things in the storage industry.
You're willing as a storage vendor to jump through some hoops
and make things happen in order to get that business.
And the problem comes in when all the different hyperscalers are asking for different stuff, right?
And, you know, there's slightly different requirements.
You've got to do this for that guy.
You have to write an API for that guy.
Maybe some standard extensions for another guy.
And they're all different, and you need a whole bunch of customer support people
and actual developers facing the customer
in order to meet some of these requirements.
But what we'd like to see is the hyperscalers all get together
and decide what the real requirements are
and then drive that into the standards.
But sometimes the standards organizations
are not all that friendly to actual customers.
So we want to be able to do something about that in the future.
So back in February was USENIX FAST, which the TC does sponsor.
I really recommend this conference if you're an academic
or you want to know what the academics are thinking.
We actually got a hyperscaler there.
Eric Brewer, I don't know if you've heard of the CAP theorem,
this eventual consistency issue, right?
It's named for him.
And he talked about a white paper they had just published.
It was called Disks for Data Centers.
You've probably seen this.
But they have some pretty specific requirements that they'd like to see.
They want control over the timing of background tasks.
And, again, this is because of tail latency.
They want to leave the details to the disk. They want to
have an abstraction layer with multiple implementations. Sounds like a standard
to me, or at least a standard API or standard software.
And then they'd like to have a per IO retry policy. Try
really hard to get that data out, or
fail fast, because they've got it somewhere else, right?
So there was also a paper at FAST called The Tail at Store, which followed up on the earlier Tail at Scale work,
and looked at 450,000 disks and 4,000 SSDs over 87 days,
with a total of 857 million disk hours and 7 million SSD drive hours.
And they were looking for this tail latency, right?
And they found it.
And you can crunch all the numbers yourself; go read the actual paper if you want to do that.
But anywhere between 1.5% and 2.2% of the time, you're getting tail latency.
What does this mean?
That means that that stripe or that, you know, raid stripe even in some cases,
is going to be slow getting back to the customer. That means for Facebook, one out of every hundred customers
is going to get a slow response to whatever it is that Facebook's trying to provide value for.
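To make that arithmetic concrete, here's a small back-of-the-envelope sketch in Python. The 1.5% and 2.2% figures are the ones from the talk; the stripe widths are illustrative assumptions only, and waiting on a whole stripe only makes the odds worse:

```python
# Back-of-the-envelope: how per-drive tail latency amplifies across a stripe.
# The 1.5%-2.2% figures come from the talk; the stripe widths are just
# illustrative assumptions, not numbers from the Tail at Store paper.

def p_request_is_slow(p_drive_slow: float, stripe_width: int) -> float:
    """Probability that at least one drive in the stripe is in its tail."""
    return 1.0 - (1.0 - p_drive_slow) ** stripe_width

for p in (0.015, 0.022):          # per-drive slow fraction from the talk
    for width in (1, 6, 12):      # hypothetical stripe widths
        print(f"p_drive={p:.3f} width={width:2d} "
              f"-> p_slow_request={p_request_is_slow(p, width):.3f}")
```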
And so that's impacting them, and they're coming to us as storage vendors
and saying, fix this for us.
So Dan went into this a lot.
If you're a hyperscaler, you're not buying no single point of failure
storage devices anymore, right? If you ever did.
You could call it traditional storage or whatever, but it ain't happening.
And just a shift over from hard drives to flash
isn't going to save enterprise storage vendors.
And there's been a lot of direct-attached storage,
although some of that's being pulled out.
Jeff Barmer is going to talk about this.
Pulled out of those direct-attached servers
and put somewhere else, just a little bit farther away
where you can still access it pretty fast,
but not so far as you need a whole fabric necessarily.
JBOFs are going to be big, right?
Just a bunch of flash.
And look at the OCP Lightning proposal if you want to see an example of that.
And so Facebook feels so strongly about this,
they created this whole open compute project
to sort of throw their designs over the wall
into the storage vendors
and get them to build what they want.
But there's still more things that we can do, right?
They have this custom data center monitoring.
I don't know of any commercial products
that can do the level of automation
that these guys really need.
So they're having to roll their own and do it themselves.
And Dan talked about software-defined storage.
Whether you're getting open-source software-defined storage
or getting licensed software-defined storage, this is how you do it.
You get some best-in-class software.
You run it on best-in-class hardware.
You tease apart the storage array, as Dan said,
and that's how you're going to both reduce costs
and improve your total cost of ownership.
So what to do about tail latency, right?
Well, a per-IO tag would be great.
Like I said, you want to be able to say to the device, don't try so hard, you know. I've got the data somewhere else, and yeah, it's a little bit farther away than you are, but if you're going
to take two times, ten times as long to respond to me,
I can get it faster from somewhere else,
and that will give me at least a decent response to my actual customer.
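That "don't try so hard, I've got the data somewhere else" behavior is essentially a hedged read on the host side. Here's a minimal sketch of the pattern; the replica-reading helpers and the 20-millisecond hedge budget are hypothetical, not anything from a standard:

```python
# Minimal sketch of a hedged read: give the nearby copy a short head start,
# then race a farther replica if it hasn't answered yet. The helper
# functions and the 20 ms budget are invented for illustration.
import concurrent.futures
import random
import time

def read_local(lba: int) -> bytes:
    # Stand-in for a read that occasionally lands in the latency tail.
    time.sleep(random.choice([0.002] * 98 + [0.2] * 2))
    return b"data-from-local-drive"

def read_remote_replica(lba: int) -> bytes:
    # Farther away, but with a much tighter latency distribution.
    time.sleep(0.010)
    return b"data-from-remote-replica"

pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)

def hedged_read(lba: int, hedge_after_s: float = 0.020) -> bytes:
    local = pool.submit(read_local, lba)
    try:
        return local.result(timeout=hedge_after_s)       # fast path
    except concurrent.futures.TimeoutError:
        remote = pool.submit(read_remote_replica, lba)
        done, _ = concurrent.futures.wait(
            [local, remote],
            return_when=concurrent.futures.FIRST_COMPLETED)
        return done.pop().result()                        # whichever wins

print(hedged_read(lba=12345))
pool.shutdown(wait=True)
```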
And what they do as a result with this automation software
is they're constantly monitoring these drives
and trying to find these long tails.
And what happens when they find a drive that's throwing repeated long tails?
Well, they're going to put it out of service.
Microsoft Azure actually does this.
They mark it as dead.
The drive is perfectly fine.
It's getting the data back to them,
but it's slow enough that it's causing problems at scale.
And so they've got to fix that, right?
Once you kill that drive, the software-defined storage is going to find other places for that data that was sitting there.
Again, it's got two, maybe three copies of that data elsewhere in its data center or elsewhere in the world
that it can pull in and use to recreate something locally for it.
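As a rough illustration of that monitoring loop, here's a toy sketch: track recent per-drive latencies and retire a drive once it keeps throwing long tails. The 50 ms threshold and the three-strikes policy are invented for the example, not anyone's production policy:

```python
# Toy fleet monitor in the spirit described above: watch per-drive latencies
# and retire a drive that keeps landing in the tail. The threshold and the
# "3 strikes in the last 100 IOs" policy are made up for illustration.
from collections import defaultdict, deque

TAIL_THRESHOLD_S = 0.050     # anything slower than this counts as a tail event
WINDOW = 100                 # look at the last 100 IOs per drive
MAX_TAIL_EVENTS = 3          # strikes before we take the drive out of service

latency_history = defaultdict(lambda: deque(maxlen=WINDOW))
retired = set()

def record_io(drive_id: str, latency_s: float) -> None:
    latency_history[drive_id].append(latency_s)
    tails = sum(1 for l in latency_history[drive_id] if l > TAIL_THRESHOLD_S)
    if tails >= MAX_TAIL_EVENTS and drive_id not in retired:
        retired.add(drive_id)
        print(f"{drive_id}: {tails} tail events in last "
              f"{len(latency_history[drive_id])} IOs -> mark dead, re-replicate")

# Simulated traffic: drive B works fine but is slow every so often.
for i in range(300):
    record_io("driveA", 0.004)
    record_io("driveB", 0.200 if i % 40 == 0 else 0.004)
```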
But, of course, if you're marking drives failed that haven't actually failed, that's going
to be expensive because you've got to replace them, right?
So one of the standard features that I'm going to talk about is a way to take those drives
that have maybe a slow piece of media on them and repurpose them, right?
Make them a smaller drive
that doesn't have that slow media on it,
and then put the data back onto it,
keep it going,
and you don't have to replace it, right?
So this is also going to change some of the firmware down in the drive.
In addition to keeping track of actual media failures,
you probably want to keep track of slow media failures.
You probably want to identify why it's a slow media failure.
If you look at that FAST paper on tail latency,
some of these slownesses went away after an hour.
What was it that caused that, and what was it that fixed it?
Certainly, background operations can be
a first place to look. If you've got background operations going and your writes or your reads
are slowing because of that, that'll cause tail latency. But as Dan said, Brendan Gregg showed how
you could actually slow these disk drives by yelling at them, right? So maybe it was a truck
driving past the data center
that actually caused the slow data latency.
Well, you don't want to reformat the drive.
You don't want to take it out of service just because a truck drove by.
So we're going to need better firmware on these drives
to help identify the root cause of these slownesses
and then remediate it as a result.
But current drives don't do this.
They don't really allow per-IO hinting,
but I'll talk about some standard features
that can get you close.
But what you'd like to be able to do
is remap these LBAs for slowness
as well as for actual, true media failures, right?
And then you keep track of the media slowness
as well as the failures.
But, again, you're going to have to change how you size the drive
because now you're not just needing spare for the failed media,
you're also needing spare for the slow media.
And that's one of the changes as well.
So getting down to something that's called depop, for depopulation, right?
There are two proposals in T10 and T13.
One is called the repurposing depop,
and the other is called the data-preserving depop.
And the repurposing depop kind of assumes that,
yeah, you've got software-defined storage.
There's other copies of the data somewhere else.
Just give me a smaller, fresh-looking drive
that I can populate with the data that's elsewhere.
Data-preserving depop is a slightly different proposal
that says, yeah, I'm going to reduce the size of the drive,
but if there's data there that I can continue to access,
it might be much faster than repopulating the whole drive
if I can leave that data in
place and just get rid of the slow media parts.
So that's what the depop is.
If a drive were to accommodate this, right, it would want to keep track of physical element response
times. That's kind of like, how long did it take me to respond to the read,
absent any other reads or writes that were queued up in front of it.
You might serve the LBAs out of cache temporarily while you figure out what's going on with the
media, just so the host isn't getting slow LBAs. And you want to keep these areas that you identify as small as possible
because that's going to be removed from their LBA capacity, right?
So what the drive does then is update, in the physical element status page,
a health status: whether it's within, at, or beyond the design limit.
And then it needs to support the REMOVE ELEMENT AND TRUNCATE command to actually reduce the size of the drive.
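Here's a rough host-side model of that repurposing-style flow, just to make the sequence concrete: query element health, pick the element that's beyond its limit, remove it, and come back with a smaller but clean drive. This is a sketch of the idea, not the actual T10/T13 command encoding:

```python
# Rough model of the repurposing-style depop flow: query element health, pick
# an element that is beyond its design limit (or just slow), and "remove and
# truncate" so the drive comes back smaller but clean. Host-side mental model
# only; not the real command set.
from dataclasses import dataclass

@dataclass
class PhysicalElement:
    element_id: int
    capacity_gb: int
    health: str          # "within limit", "at limit", "beyond limit"

class Drive:
    def __init__(self, elements):
        self.elements = list(elements)

    @property
    def capacity_gb(self) -> int:
        return sum(e.capacity_gb for e in self.elements)

    def get_physical_element_status(self):
        return [(e.element_id, e.health) for e in self.elements]

    def remove_element_and_truncate(self, element_id: int) -> None:
        # Repurposing flavor: data on the drive is NOT preserved; the
        # software-defined storage layer re-populates from other copies.
        self.elements = [e for e in self.elements if e.element_id != element_id]

drive = Drive([PhysicalElement(0, 2000, "within limit"),
               PhysicalElement(1, 2000, "beyond limit"),   # the slow media
               PhysicalElement(2, 2000, "within limit")])

bad = next(eid for eid, health in drive.get_physical_element_status()
           if health == "beyond limit")
drive.remove_element_and_truncate(bad)
print(f"drive now exposes {drive.capacity_gb} GB instead of 6000 GB")
```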
Streams is another concept that's coming along.
It's in several standards today.
It's a concept that associates multiple blocks
with an upper-level construct,
such as a file or object.
Temporarily, not permanently.
The drive doesn't need to keep track of files and objects, right?
But while you're writing that object, right,
you'd like it to be on consecutive write blocks
in an SSD, for example.
And that means that when you actually trim those blocks
and release that storage,
all those erase blocks can be freed at once.
This cuts down on write amplification,
if you know what that is.
It makes it a lot more performant.
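Here's a toy illustration of why that helps, under the simplifying assumption that each stream gets its own erase blocks: delete an object that was written with a stream and whole blocks go invalid, so nothing has to be copied during garbage collection:

```python
# Toy illustration of why streams cut write amplification: if all the pages of
# one object land in the same erase blocks, deleting that object invalidates
# whole blocks and nothing has to be copied at garbage-collection time.
# Block/page sizes are arbitrary; this does not model any specific SSD.
PAGES_PER_BLOCK = 4

def place(writes, use_streams: bool):
    """writes: list of (object_id, page) tuples in arrival order."""
    blocks = []                       # each block is a list of (obj, page)
    if use_streams:
        open_blocks = {}              # one open block per stream (per object)
        for obj, page in writes:
            blk = open_blocks.get(obj)
            if blk is None or len(blk) == PAGES_PER_BLOCK:
                blk = []
                blocks.append(blk)
                open_blocks[obj] = blk
            blk.append((obj, page))
    else:
        for obj, page in writes:      # no streams: one shared open block
            if not blocks or len(blocks[-1]) == PAGES_PER_BLOCK:
                blocks.append([])
            blocks[-1].append((obj, page))
    return blocks

def pages_to_copy_after_deleting(blocks, deleted_obj):
    """Valid pages the SSD must relocate before erasing partially-dead blocks."""
    return sum(len([p for p in blk if p[0] != deleted_obj])
               for blk in blocks
               if any(p[0] == deleted_obj for p in blk))

# Two objects written interleaved, then object "A" is deleted (trimmed).
writes = [("A", i) for i in range(8)] + [("B", i) for i in range(8)]
interleaved = [w for pair in zip(writes[:8], writes[8:]) for w in pair]

for use_streams in (False, True):
    blocks = place(interleaved, use_streams)
    moved = pages_to_copy_after_deleting(blocks, "A")
    print(f"streams={use_streams}: {moved} pages must be copied during GC")
```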
NVMe just put what's called directives
into a 30-day review.
Streams is one of those directives.
For SAS, this is supported by the WRITE STREAM command, among a few others.
In SBC-4, this is the stream control subclause.
And SATA is looking at it as well,
in T13, right, Ralph?
Advanced background operations.
And again, a very likely reason
in a lot of cases why you're getting slow response
is that the drive is busy doing other stuff:
garbage collection, scrubbing, remapping,
cache flushes, continuous self-tests,
and those delay, or may delay,
the IO operation from a host,
and, of course, lead to latency.
So the idea here is to say,
okay, I'm going to be doing some latency-sensitive stuff on you for a few,
and hold off any background tasks until I can do that.
Or go ahead and work on your background tasks,
but tell me when you're done.
Tell me what percent complete you are,
and I'll hold off my own latency-sensitive operations
until you're done.
So for advanced background operations, NVMe,
it was part of directives, so we pulled it out of there.
For SAS, there is a BACKGROUND CONTROL command,
and for SATA, that's the Advanced Background Operations feature set in ACS-4.
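As a sketch of the two usage patterns described above, with invented method names standing in for the real background-control commands:

```python
# Sketch of the host/drive handshake for background operations. The method
# names are invented stand-ins for the real SAS/SATA commands; the point is
# the two patterns: "hold off for my window" vs. "tell me your progress and
# I'll defer my own latency-sensitive work".
import time

class Drive:
    def __init__(self):
        self.background_suspended_until = 0.0
        self.background_percent_complete = 40   # pretend GC is 40% done

    def background_control(self, suspend_for_s: float) -> None:
        # Host says: no garbage collection / scrubbing for this window.
        self.background_suspended_until = time.monotonic() + suspend_for_s

    def background_status(self) -> int:
        return self.background_percent_complete

    def read(self, lba: int) -> float:
        # Latency is good while background work is suspended, worse otherwise.
        busy = time.monotonic() >= self.background_suspended_until
        return 0.050 if busy else 0.002

drive = Drive()

# Pattern 1: carve out a latency-sensitive window.
drive.background_control(suspend_for_s=2.0)
print(f"read latency inside window: {drive.read(0)*1000:.1f} ms")

# Pattern 2: let the drive finish, polling percent-complete before heavy IO.
if drive.background_status() < 100:
    print(f"GC is {drive.background_status()}% done; defer latency-critical IO")
```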
So you can see a kind of theme here is that when we do this,
we want it to get done in multiple of these standards organizations.
That's something that SNIA members are doing all the time.
Well, there's kind of a different approach
that we're starting to see more interest in,
and this is the open channel SSDs.
And this says, basically,
all that intelligence down in the disk drive
is actually causing me problems.
So let's tease it apart
and have some of the intelligence down in the drive
and some of the intelligence up in the host.
And the good news is, for the hyperscalers,
that gives them more control over the entire situation
because it's their own flash translation layer in some cases
that's actually making the decisions about when background operations are done,
how to do remapping,
how to do garbage collection, and so then you get better control over it, right?
The disadvantage for the storage vendors is that, hey, I'm not going to be able to charge
as much for my SSD because it doesn't do as much, right?
You're basically sucking the value out of my storage
offering and doing it yourself. But this is a valid approach, and over time we're
probably going to have to expose more stuff in the storage so that the host will know what's going on and be able to control
it a little bit better. And most importantly, control its own behavior a little bit better
so that it can get what it wants out of the underlying storage.
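What does moving that intelligence up to the host actually look like? A tiny conceptual sketch, not the real Open-Channel or LightNVM interface: the host owns the logical-to-physical map, picks the placement, and decides when reclamation happens:

```python
# Tiny sketch of what "moving the flash translation layer up to the host"
# means: the host owns the logical-to-physical map and decides when erase
# blocks get reclaimed, so background work happens on the host's schedule.
# Conceptual toy only; not the Open-Channel SSD / LightNVM interface.

class HostFTL:
    def __init__(self, blocks: int, pages_per_block: int):
        self.pages_per_block = pages_per_block
        self.free_blocks = list(range(blocks))
        self.open_block = self.free_blocks.pop(0)
        self.next_page = 0
        self.l2p = {}                      # lba -> (block, page)

    def write(self, lba: int, data: bytes) -> None:
        if self.next_page == self.pages_per_block:
            self.open_block = self.free_blocks.pop(0)   # host picks placement
            self.next_page = 0
        self.l2p[lba] = (self.open_block, self.next_page)
        self.next_page += 1
        # A real implementation would issue a raw program command here.

    def garbage_collect_when_idle(self) -> None:
        # The host, not the drive, chooses this moment (e.g. off-peak hours),
        # so it never collides with latency-sensitive foreground reads.
        pass

ftl = HostFTL(blocks=4, pages_per_block=2)
for lba in range(5):
    ftl.write(lba, b"x")
print(ftl.l2p)
```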
So there are also parts of the standards that are sort of intended to speed up the rebuild of drives.
As drives have gotten bigger and bigger and bigger,
it takes a lot longer to
rebuild them because
you're dumping data through a fairly
small straw into a
very large shake.
And so
anytime you can understand
what the situation is on that drive
and supply it with only the data that is missing from that drive,
the better off you are.
And so when the rebuild assist stuff was put into these standards,
the idea was to sort of disable some of these background tasks
and remapping and whatnot
because you're trying as hard as possible
to get that RAID stripe back up and going
with that drive that needs rebuilding right now.
So that can be used to eliminate or reduce tail latency as well.
That's in T10.
It's in SBC-4, subclause 4.20, rebuild assist mode.
It's also in T13, ACS4.
As far as I know, there's not been anything proposed to NVMe at this point.
We just improved it a little bit for SSDs too last week.
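The core idea, sketched with an invented interface: ask the degraded drive which LBAs it can't return and reconstruct only those from the surviving members, instead of copying the whole drive:

```python
# Sketch of the rebuild-assist idea: instead of re-copying the entire drive,
# ask the degraded drive which LBA ranges it cannot return and reconstruct
# only those from the rest of the stripe. The method names are invented;
# see SBC-4 / ACS-4 for the real rebuild assist mode.

def rebuild(degraded_drive, peer_drives, total_lbas: int):
    unreadable = degraded_drive.report_unreadable_lbas()   # e.g. [(100, 50)]
    rebuilt = 0
    for start, length in unreadable:
        for lba in range(start, start + length):
            # Reconstruct from the surviving members (XOR for simple parity).
            data = 0
            for peer in peer_drives:
                data ^= peer.read(lba)
            degraded_drive.write(lba, data)
            rebuilt += 1
    print(f"rebuilt {rebuilt} of {total_lbas} LBAs "
          f"instead of copying all {total_lbas}")

class FakeDrive:
    def __init__(self, value: int, bad=()):
        self.value, self.bad = value, list(bad)
    def report_unreadable_lbas(self):
        return self.bad
    def read(self, lba: int) -> int:
        return self.value
    def write(self, lba: int, data: int) -> None:
        pass

rebuild(FakeDrive(0, bad=[(100, 50)]),
        peer_drives=[FakeDrive(1), FakeDrive(2)],
        total_lbas=1_000_000)
```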
Per-IO hints. There's something called logical block markup
that can be used here.
Again, the host can come down to the drive
and tell it,
I'm going to send a bunch of reads,
and they're all going to be sequential,
so get ready
and be prepared to handle it.
And if the drive's smart,
it can do that without creating additional latency.
So they invented a new word called sequentiality.
There's a read sequentiality,
there's a write sequentiality,
and that gives you the benefits that you can show.
So in SAS, there can be up to 64 logical block markups.
Each one of those logical block markups
contains a combination of hints that you see there.
And then each I.O. can reference
one of these logical block markups in SAS.
Well, of course, SATA did it a little bit differently.
SATA has a data set management command
that assigns an LBM to a range of LBAs. So
you're basically saying this range of LBAs is going to have IO with that kind of approach.
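A toy host-side view of that SATA-style flow, with made-up hint names: keep a small table of markups and tie one to an LBA range so the drive knows what access pattern to expect there:

```python
# Toy host-side view of the SATA-style approach: a small table of logical
# block markups (combinations of hints), and a data-set-management step that
# ties a markup to an LBA range. Hint names are illustrative only.

LBM_TABLE = {
    1: {"read_sequentiality": "high", "write_sequentiality": "low"},
    2: {"read_sequentiality": "low",  "write_sequentiality": "high"},
}

lba_range_markup = []     # list of (start_lba, length, lbm_id)

def data_set_management_assign(start_lba: int, length: int, lbm_id: int):
    lba_range_markup.append((start_lba, length, lbm_id))

def hints_for(lba: int):
    for start, length, lbm_id in lba_range_markup:
        if start <= lba < start + length:
            return LBM_TABLE[lbm_id]
    return {}

# Tell the drive the next 1M LBAs will be read back sequentially.
data_set_management_assign(start_lba=0, length=1_000_000, lbm_id=1)
print(hints_for(4096))   # {'read_sequentiality': 'high', ...}
```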
So if the host, if the hyperscalers can use these kind of features, then they can help,
again, reduce some of the tail latency.
And the storage industry doesn't need to do anything.
They can just ask for this standard
and these features of that standard
to be included in their next RFP.
And then they'll get the stuff they want.
And there's some other stuff coming down.
These guys are not done, right?
They're trying to get the SSD vendors to expose more information, as I said,
about the internal organization of the drives.
Well, that's not allowed, right?
You only get logical block addresses.
You don't get physical block addresses.
But suppose that we could say that if you do a write to these logical block addresses in a range,
you're going to get queued up one after another.
If you want to issue parallel reads or parallel writes,
you're going to have to use separate LBA ranges and queue up the reads and writes to those separately. And so somehow we're going to have to show from the device some little bit of
exposure of the underlying
physical dependencies, right?
Show me where a read is going to queue up behind
other reads. Show me where a read is going to
queue up behind another write.
And I can only know that if you tell me sort of this mathematical model
that shows me what LBAs are going to contend with each other.
And so then I can order my reads, I can order my writes,
and as such to be able to achieve some pretty aggressive requirements
as far as response time and uptime guarantees.
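As a sketch of how a host might use such a model if the device exposed one: the contention mapping below is entirely made up, but given a mapping like it, the host can interleave IOs across independent ranges instead of stacking them behind each other:

```python
# Sketch of how a host could use an exposed contention model: if the drive
# told us which LBA ranges share an internal queue (the mapping below is
# entirely made up), we could interleave IOs across independent ranges
# instead of stacking them behind one another.
from collections import defaultdict
from itertools import zip_longest

RANGE_SIZE = 1_000_000   # hypothetical: LBAs in the same 1M-aligned range contend

def contention_domain(lba: int) -> int:
    return lba // RANGE_SIZE

def schedule(ios):
    """Round-robin across contention domains so no domain's queue gets deep."""
    by_domain = defaultdict(list)
    for lba in ios:
        by_domain[contention_domain(lba)].append(lba)
    ordered = []
    for batch in zip_longest(*by_domain.values()):
        ordered.extend(lba for lba in batch if lba is not None)
    return ordered

pending = [10, 20, 30, 1_000_010, 1_000_020, 2_000_030]
print(schedule(pending))   # alternates across the three independent ranges
```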
So, you know, as Dan said,
enterprises are following the same sort of thing as the hyperscalers, right?
We just published today a white paper that's joint between the Cloud Storage
Initiative and the Data Protection and Capacity Optimization Committee. And we actually went out
to one of these enterprise hyperscalers and interviewed them as to what their situation was.
So there's some really interesting data in here, both in the size of these enterprise hyperscalers
as well as some of the techniques
that they're borrowing from the big Internet hyperscalers, right?
And there's a changing supply chain
to these enterprise customers
and their data centers they're building, right?
There's this term called ODM Direct,
Original Design Manufacturer, right? And they
package up these best in class components and they put them in the pods and they send them to
these data centers and you install 20 rack pods full of storage and compute. And ideally they're not all different, but certainly the open compute project is helping
with these sort of standardized rack designs, standardized motherboard designs, standardized
JBOD and JBOF designs. And if we can get these hyperscalers to sort of request the standard
features of the drives, standard features of the enclosures,
standard features of the racks,
we'll be able to both manage it and supply it, right?
And OCP isn't the only one.
There's a similar effort in China called Scorpio,
something you may be familiar with.
But it's basically the same thing as OCP.
And so that can help.
Any amount of standardization is really going to help streamline this whole thing.
But the bank that we did interview has over 20 data centers around the world.
They create an internal private cloud for the entire bank's IT project usage. They can't use the public cloud because they've got to comply
with over 200 different country government regulations, right?
Amazon won't do that for you.
Sorry.
And then their storage budget dwarfs the revenue
of most medium-sized storage vendors, right?
Enterprise companies that embrace a blended value model,
offering software-defined storage, long-term retention,
data lifecycle management, and traditional workhorse storage,
they're better suited to benefit from that shift in the industry,
and they're all doing it, just like Dan said.
And they've also deployed tens of thousands of nodes,
around 200 petabytes of active data
and half an exabyte of inactive data,
and their data footprint is growing at 45% annually.
Now, because of the normal capacity increases in storage, that doesn't mean that their actual spending is growing at 45% annually.
But they do process tens of trillions of transactions daily.
Downtime is very expensive, as Dan told us.
And they are big enough that the vendors will custom build for them.
But they also have a policy of no single source for any of their hardware,
which leads, again, to standardization.
They buy storage in six petabyte pods that are half CPU and half storage drives,
preassembled by the ODM Direct vendor.
They installed the first such pod back in 2015, and you're starting to see this more and more, right?
And they're expecting to save
50% over traditional
legacy storage devices, right? This is overall,
including their own people costs,
including their own power and energy costs.
The software-defined storage that they use
with their best-in-class commodity hardware,
they're licensing from a software-defined storage vendor.
They wouldn't tell us who,
and they'd probably prohibit us from telling you.
But they want to grow that by about
50% by 2020.
And
so they want to virtualize the hardware.
They are looking at Ceph
for software-defined storage,
but they don't feel it's quite mature enough.
And it also requires kind of a new
skill set for them to develop
if they want to start maintaining their own software-defined storage code.
And they're also looking at going pretty much exclusively to NVMe.
So what is SNEA doing about this?
Well, we see the writing on the wall, and we want to help these data centers and the data center customers get what they need from the storage industry, right?
So we've created a data storage task force.
You can go to snia.org slash data center.
You can see what that's all about.
You can download some of the slides.
We want to create a technical work group.
We need another couple of SNIA members to join in on that.
We may do an initiative to promote adoption
of some of these solutions and standards.
And then we want to drive these features,
these new features,
down into T10, T13, NVM Express, and others
to actually help these guys solve this problem.
Because SNIA is the whole industry.
It's not just enterprise storage vendors,
not just device vendors,
not just storage networking vendors.
It needs to embrace these hyperscalers, right?
It needs to get in the minds of these customers
and see how the storage industry is going to change
and what we can do to streamline this going forward.
So, increased attention to the fast-growing hyperscaler market.
The current fractured approach of requesting new features
via the RFP process doesn't scale, right?
So we need to coordinate these hyperscaler requirements.
We need to make things happen out there in the industry,
such as NVM Express.
So we've got the right stakeholders here around the table.
Let's get involved in that and make it happen.
There's other hyperscale stuff happening.
There's a Birds of a Feather session tonight in Cypress, across the hall, on hyperscale and data center storage.
Bill Martin's doing a talk
on Wednesday afternoon on
SSD performance and endurance.
We also got the open storage
platform from Eric Slack. He's
going to go into a lot more of the
actual use cases and
actual
solutions that are out there that he's taking
a look at for this.
And then Software Defined Flash, Craig Robertson on late Wednesday afternoon.
So go and look at some of these other hyperscale talks.
Go do your own research.
Read those academic papers.
This is going to happen.
Questions?
Please wait for the mic.
Up front here, Armani.
In one of your slides, you were talking about enhanced features,
the garbage collection and whatnot. You said
that that's going to impact the latency of the host I/O.
Give me an example.
How much impact is this going to be?
Just pick one of the enhancements and tell me how much impact.
Well, control over background operations could really help the ones that are slow for an hour and then it gets fast again.
My suspicion is it gets slow for an hour because it's got a lot of garbage collection and background
tasks it needs to run. As soon as those are all cleared out, drive becomes fast again.
I think the biggest bang for the buck is going to be putting your thumb on those background operations. But also, if we can come up with,
across the industry,
some way to give this per IO hint
without involving a whole lot of other LBAs
or tying it to LBAs, those kind of things,
I think that would be a big win as well.
They can just say,
look, it's all right to be slow in some cases,
but for this particular I.O., fail fast.
Morali is going to get the mic next.
Spoke about...
We can hear you. Don't change it.
So you spoke about I.O. hints quite a bit. So one of the observations is NVMe also has
I.O. hints in the standard. They don't call them hints, right? But in my experience, what
I found is that this is like a catch-22 situation, because it's truly an end-to-end solution.
You do need the support from the host side, as well as the target side.
And the standards have come here before the implementations. And there seems to be some
problem in terms of people. People understand the benefits. But there's a lot of problem
in implementations. And that's not on a nice path, as far as I know. Your observations
any different? So I would say it's still early days, right?
But I think what could help is if in some of these cases we could do a little
bit more testing as a group.
You know, we have plug fests.
Plug fests all around the hotel this week, right?
And it's really just a matter of focusing those plug fests on what the hyperscalers want,
right? If we had a plug fest and everybody was testing, you know, for tail latency and could show
during the entire week they didn't have a single tail latency issue, that would be a win from these
hyperscalers' point of view, right? Using the standard interfaces, right? And then, you know,
tightening down the specs as well.
That could be an issue too, right?
Do an ECN to say, you know,
we've got so many widely differing views
of how to implement this
that we're not getting interoperability
and the hyperscalers are not getting what they want.
Well, there's lack of implementations
both at the controller or target side
as well as the host side today.
Yeah, yeah.
No plumbing in the operating system.
That is a problem.
Yeah, and sometimes that can be solved with a little bit of open source
that shows everybody the right way to do it.
Bill, behind you, Marnie.
Thanks.
So two comments to Dr. Jibby.
We have done some testing related to performance improvements for
some of the things that Mark talked about
and between background
operations and the
streams control
we've seen in excess of
a 50% performance improvement
because right now you end up distributing things
like background operations over all time,
which means you're always hitting them.
If the host can say, hey, I'm idle right now and do this,
it makes a significant difference, at least a 50% performance improvement.
Then back to your talk.
In terms of what you said related to the hyperscalers wanting to know more information about geometry
and you put the onus onto the SSD vendors, as if it takes something away from what they provide.
I think a bigger issue there that the hyperscalers need to consider,
and this goes back to the previous talk where they talked about, you know,
we need something that is a consistent standard-based thing because we don't want to
always be changing, is the fact that if we start exposing a lot of information about the underlying
technology, that technology is changing rapidly. Every year there's some new announcement at Flash Memory Summit of the next technology,
and that changes the geometry, and therefore that changes your upper-level stack
unless there's a layer of work in between there, which is done by the developers of that new technology.
Yeah, so that's a good point.
And I think, can you go back in the back?
Ralph wants it.
I think it's going to be a give and take, right?
The host is going to have to do some extra stuff.
The device is going to have to do some stuff.
We're going to have to do,
I think this is one of the cases
where we want to do a bunch of testing
and lab work and plug fests before we settle on an actual standard
so that we can make sure that whatever we standardize is actually going to satisfy those requirements.
Yeah, Ralph.
So I want to just amplify one of the things Bill was saying,
and that is that it's not just changing the geometry in the sense that you already know the shape of the device,
so all you need are numbers about different parts of it.
It's changing the parts.
The geometry is the way the picture
is drawn, not just the numbers
around the picture, which makes it
a much more complicated problem
to try and design a standard for it
because it has to have
holes in it where a future
that we don't know yet can be put.
It's actually a nasty problem.
Yeah, but sometimes we spend too much time on future-proofing stuff in standards, right?
That may very well be.
But if you deny the device the ability to innovate to new geometry,
then those folks who insist on knowing too much are going to find they get only old devices.
Yeah, you don't want to fool them.
Don't fool them.
All right.
All right.
One last question, then we run out of time.
All right.
We're done.
Thank you.
Thanks for listening.
If you have questions about the material presented in this podcast,
be sure and join our developers mailing list by sending an email to developers-subscribe at snia.org. Here you can ask questions and discuss
this topic further with your peers in the developer community. For additional information
about the Storage Developer Conference, visit storagedeveloper.org.