Grey Beards on Systems - 65: GreyBeards talk new FlashSystem storage with Eric Herzog, CMO and VP WW Channels IBM Storage
Episode Date: July 10, 2018. Sponsored by IBM Storage. In this episode, we talk with Eric Herzog, Chief Marketing Officer and VP of Worldwide Channels for IBM Storage, about the FlashSystem 9100 storage series. This is the 2nd time we have had Eric on the show (see the Violin podcast) and the 2nd time we have had a guest from IBM on our podcast.
Transcript
Hey everybody, Ray Lucchesi here with Howard Marks.
Welcome to another sponsored episode of the Greybeards on Storage podcast,
a show where we get Greybeards storage system bloggers to talk with storage system vendors to
discuss upcoming products, technologies, and trends affecting the data center today. This Greybeards
on Storage podcast is brought to you today by IBM Storage and was recorded on June 22,
2018. We have with us here today Eric Herzog, Chief Marketing Officer and VP of Worldwide Storage
Channels at IBM. It's great to have you back on the show today. So Eric, why don't you tell us a
little bit about yourself and what's new at IBM Storage? Well, thank you very much. We love coming on the Greybeards, although
I only have a gray mustache. I myself have been in storage for 32 years. I've worked at big companies
such as IBM, EMC, Seagate, and Maxtor, but also seven storage startups. And all of them have been
very storage software centric. So IBM's got some exciting news.
We are launching a product on July 10th called the Flash System 9100. And love to talk to you guys
about what it delivers from an application workload and use case perspective, and how it
takes advantage of the new NVMe protocol delivering better solutions to the end user. Well, we're big fans of NVMe around here. I've been on record that I think NVMe should replace
SCSI as the lingua franca of storage. So we're glad to see IBM join the party. Why don't you
tell us a little bit about the 9100? Is it a whole new system? Where's the software come from?
All those things? Sure.
Well, Howard, we would agree with you: NVMe is a transformation of the storage industry. Having been around since there used to be SCSI, and then Fast SCSI, then Fast Wide SCSI, then Ultra 80, Ultra 160, Ultra 320, et cetera, up through today to what we have with SAS and SATA,
the world is changing. NVMe is going to transform how storage can talk to CPUs,
how it can talk to the hosts over the fabric infrastructure. So what we've done is we brought
out a brand new storage subsystem. The FlashSystem 9100 incorporates an NVMe-enabled full
subsystem. So it's NVMe from our dual RAID controllers back to the media.
On the media side, we've refactored our award-winning Flash core modules
from a long card into a two-and-a-half-inch industry standard form factor.
So in fact, from a hardware perspective,
this array will accommodate either our FlashCore modules
or it will accommodate industry standard SSDs.
And what you're going to see from this product is high-end performance at mid-range pricing.
That's right, about 10 million IOPS in a four-way cluster.
Or for a single 2U box, 2.5 million IOPS at only 100 microseconds of latency with all data services and software running.
So you've reswizzled the FlashCore modules that we know and love today from the old FlashSystem
arrays?
Yes.
So the Flash Core modules, which have been tried and true for IBM since the acquisition
of TMS several years ago, as you know, resembled roughly the physical size of a Fibre
Channel HBA. We've now shrunk those into a small form factor,
the industry standard two and a half inch form factor. And we've also been able to create
a couple of different raw sizes: 4.8 terabytes, 9.6 terabytes, and actually a whopping 19.2
terabytes, which right now is larger than any of the industry-standard NVMe SSDs,
which are around the 15 terabyte level. So we've got incredible density, incredible
resiliency, what the flash core modules were known for, and incredible performance. In fact, we've
even put in hardware-based compression and encryption, which gives you no performance
impact when you're doing compression or doing
data at rest encryption.
You still will have the 100 microseconds latency.
You'll still have the incredible IOPS.
And in the 2U box, you'll get 34 gigabytes a second of bandwidth.
So incredible performance across the board.
Okay.
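The single-box numbers Eric quotes, 2.5 million IOPS at 100 microseconds of latency, can be sanity-checked with Little's Law. The sketch below is just a back-of-envelope calculation, not IBM's sizing method.

```python
# Little's Law: average in-flight I/Os = throughput (IOPS) x latency (seconds).
def required_concurrency(iops: float, latency_s: float) -> float:
    """Mean number of outstanding I/Os needed to sustain `iops`
    at a mean completion latency of `latency_s` seconds."""
    return iops * latency_s

# 2.5 million IOPS at 100 microseconds:
print(required_concurrency(2_500_000, 100e-6))   # ~250 in-flight I/Os
```

Roughly 250 outstanding I/Os is tiny next to NVMe's deep queues (the protocol allows up to 64K queues of 64K commands each), which is part of why an NVMe array can sustain numbers like these where older SCSI stacks struggled.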
So to be really geeky, the flash module...
20 terabytes in a two-and-a-half-inch form factor?
Actually, in two rack U, we could do two petabytes.
So what we could do is 20 terabytes, or 19.2 terabytes raw, per module.
Yeah.
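The two-petabytes-in-2U arithmetic works out as below. This is a rough sketch assuming 24 of the 19.2 TB modules and the 5:1 data-reduction ratio used on the show; actual reduction depends entirely on the data.

```python
# Effective capacity = slots x raw TB per module x data-reduction ratio.
def effective_capacity_tb(slots: int, raw_tb: float, reduction: float) -> float:
    return slots * raw_tb * reduction

raw_tb = effective_capacity_tb(24, 19.2, 1.0)   # ~460.8 TB raw in 2U
eff_tb = effective_capacity_tb(24, 19.2, 5.0)   # ~2304 TB, i.e. ~2.3 PB
print(raw_tb, eff_tb)
```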
We support all kinds of data reduction technologies, block level dedupe, data compression, thin provisioning,
pattern matching, SCSI map and unmap. So with all of that, you can assume a five to one
data reduction in most use cases. So the 19.2 terabytes goes up by five. So what this means
from a two-rack-U perspective: how about two petabytes in two rack U?
Yeah, I appreciate two petabytes in two rack U, because it wasn't all that long ago that a customer with a petabyte of storage was impressive to us.
Yes. And that was a room full, a data center full, of storage in those days.
But the geek in me is coming back to a couple of things.
Sure.
So you're doing compression in the controller in the flash module?
Yes, and encryption.
Okay, that's good.
So, I mean, we've seen things like that before.
SandForce did compression in their controller,
but it led to confusion because the SSD interface wasn't smart enough
to tell the system above how well that compression was working.
There's a good reason to build your own flash module so you can communicate that back to the upper level controller.
Right. And we've been doing that, as you know, for years.
We've been known for our resiliency and performance.
Yeah. On the FlashSystems, you've had hardware compression on the FlashCore modules for a couple of years now, right?
Yes, it has.
But now we've added encryption as well.
So we've got more horsepower in each of the two and a half inch form factor units.
So now we can not only do hardware-based compression, but also at the same time, hardware-based encryption.
That encryption is logically the equivalent of a self-encrypting SSD, right?
Yeah, it's 256-bit encryption, AES 256-bit, yeah.
Is it FIPS certified and all that stuff?
Not FIPS yet.
So as you know, to get a FIPS certification,
you have to have a shipping product.
So we are going to get FIPS certification.
And the reason, and this is going to sound surprising,
as you know, getting FIPS certification
is sort of standard on archive configs, right?
Our IBM cloud object storage, things like that.
But we have had several government agencies, healthcare companies, and financial institutions
say, we would like to see FIPS certification on primary storage.
So we know how to do it from our object storage.
So once we have a shipping product,
we will go into FIPS 140-2 certification.
We should have that within a quarter or two
after the release of the full Flash System 9100.
Okay.
And somewhere you mentioned that it also supports
standard NVMe SSDs as well as Flash Core modules?
Yes, we do.
So we've designed the storage subsystem to
take either type. You can either use NVMe SSDs or, if you will, our FlashCore
modules, our own SSD, both in the two-and-a-half-inch form factor, and you could put 24 of
them into the storage subsystem. So that's 24 in a single two-rack-U config.
And if I use standard SSDs,
then the system processors do the encryption and compression?
Yes. So, right. So the FlashSystem 9100 ships with the award-winning Spectrum Virtualize,
which has hundreds and hundreds of thousands of copies of the software in the field between our
FlashSystems products, our Storwize products, our SAN Volume Controller. And we also sell
Spectrum Virtualize as a standalone piece of software because it works with everyone's
hardware, not just ours. So that comes with a full set of heterogeneous capable enterprise
data services: for example, snapshot, replication, data-at-rest encryption, compression, block-level dedupe, all the other data reduction technology, as well as transparent block migration in the background, on the fly, with no server or app downtime.
So the system supports storage virtualization of the backend external storage and stuff like that?
Yes.
Now, there's an additional license when you go to do the backend virtualization.
So it's a little bit of a premium,
but we have the Spectrum Virtualize base
included in the array when you get it.
And also several other pieces of software.
For example, IBM Spectrum Copy Data Management
comes on a Flash System 9100.
IBM Spectrum Protect for backup comes on the IBM Flash System 9100.
IBM Spectrum Connect, which gives you persistent storage for Docker, Kubernetes, and container environments, comes with it.
And IBM Spectrum Virtualize for public cloud also comes with it, which allows you to create a DR copy in a public cloud of not only the FlashSystem 9100, but when you're in the storage
virtualization mode, any other block device connected to the FlashSystem 9100. As you know,
Spectrum Virtualize supports over 440 different block arrays from all kinds of manufacturers
that are not us, EMC, NetApp, the old Dell products, HPE, HDS, et cetera, et cetera,
all are supported with that Spectrum
virtualized software that is included on the Flash System 9100.
So that's how I replicate my equal logic into AWS.
That is exactly true.
Now, the first pass of IBM Spectrum Virtualize for Public Cloud only supports IBM Cloud,
but following versions will be a free upgrade for those who have warranty and maintenance. In addition to IBM Cloud, which is what it supports today,
later in the year you'll have Amazon, and then early next year also Azure support as well.
So you'll be able to go to a DR copy in IBM Cloud today, later in the year, Amazon, and then early next year, Azure. So
whichever one you choose to use. And Spectrum Virtualize is a clustered
storage system. So you could cluster these products together?
Yeah. So we have the base model with two fail-over-fail-back array controllers,
giving you 100 mics of latency, 2.5 million IOPS, and 34 gigs of bandwidth
and only 2U. Then Spectrum Virtualize allows you to create a four-way cluster. So in 8U, you can
have 100 mics of latency, 10 million IOPS, and 136 gigabytes of bandwidth by clustering four
of the 2U arrays into an 8U cluster configuration. And so it dramatically changes the IOPS and bandwidth profile.
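The four-way cluster numbers are straight linear scaling of the single-box figures. Here's the arithmetic as a sketch, assuming an evenly distributed load; note that per-I/O latency does not add up across nodes.

```python
# IOPS and bandwidth scale with node count; latency stays flat.
def cluster_profile(nodes: int, iops: int, gbps: int, latency_us: int) -> dict:
    return {"iops": nodes * iops,
            "bandwidth_gbs": nodes * gbps,
            "latency_us": latency_us}   # per-I/O latency is unchanged

print(cluster_profile(4, 2_500_000, 34, 100))
# {'iops': 10000000, 'bandwidth_gbs': 136, 'latency_us': 100}
```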
Okay. And external storage behind that, right?
Yes. Yes. So you buy the storage virtualization upgrade. And in that case, we would support 440
arrays behind that that are not ours. By the way, ours too, but other companies' arrays as well. And so all of the data services, from the ability to tier data,
to migrate data, to replicate, to snap it, to encrypt it, to compress it,
all those features would work not only on the Flash system 9100,
but all of our competitors' products that would be virtualized behind the 9100.
Yeah, that's always sounded like a great idea.
But it's the initial migration that always sold me on things like that.
Oh, God, yeah.
Because I buy a 9100, I put it in the data center, and then I can just virtualize the old array behind it.
Right. So let's say, for the sake of argument, I'm a more traditional company. I'm super busy Monday through Friday, but it slows down on the weekend. So if I'm that type of company and I set it to migrate during the week when it's really busy, it'll continue to migrate, but it'll always favor the app workload and the migration will be slow. But it's all automated. So once you set it up with a source and a target, it just goes. And then on the weekends, when the workload is light, the migration, of course, will accelerate. But the point is you just click and set: select the source and the target, tell it to go. The servers don't come down. It's non-disruptive to the storage. It's non-disruptive to the applications, workloads, and use cases that the servers are, of course, hosting. So that's the real value of this: you can move data around. And by the way, we'll do that even when it's not our own storage. So if you want to migrate an EMC array to a NetApp array, you've just got to point and click source and target, and it'll go ahead and do that, as long as those arrays are virtualized behind the FlashSystem 9100.
Yeah, you just have to be clever with your multi-path software, right?
Right, we don't control that on the server side. But as long as you've got the right multi-pathing, you can migrate from anything to anything of those 440 array brands that we support.
Okay, so it's an active-passive, dual-controller system?
Yeah, it's active-active.
Okay, so it's an active-active.
Active-active pair.
Kind of ALUA style?
Yeah.
All right.
Active-active failover, failback.
And then I can scale out.
It's a scale-up infrastructure, and you could put 20 expansion modules behind the base box.
So if I look at a flash system, 9100, I can put behind it another 19 storage subsystems.
So I can make a giant box, then I can cluster four of those if I really want to.
Right, that's what I was referring to by the scale-out.
So when I do that cluster, what kind of granularity,
is that loosely federated?
Is it tightly clustered?
The federation is at the four members.
So, and the four members would be four 9100s together.
All the expansion models, you'd have to create additional volumes
and additional LUNs and
configure those. So you wouldn't see it as one ginormous pool. Obviously, think about this:
you've got two petabytes in just two rack U. So if you put 19 expansions behind it,
you could be up at 19 times two. So you'd have 38 petabytes. Then you'd cluster it together. You would not see all that
at one time. You would see essentially the primary units, which would be the 4, 2 petabyte units is
what you would really see. Then you could see the rest. So it's loosely federated for the expansion
modules, but tightly federated for the top four, if you will.
And when you say expansion modules, you're talking about external storage behind the solution
or storage shelves, effectively?
Storage shelves.
The daisy chain from shelf to shelf
still uses the SAS interface.
So you would have an NVMe 9100 at the top,
and then you'd daisy chain behind it,
but you'd use a SAS interface to go from the
9100 to expansion shelves, which would not have array controllers. They'd just be JBOFs,
right? Just a bunch of flash. And here's where all that logic for managing a
flash disk hybrid comes in handy again, because once again, we've got two pools of storage with
different performance characteristics. And part of what we do with our easy tier is not only can we tier inside of a storage subsystem,
but we can tier from one storage subsystem to another. So let's take an example. I have a
FlashSystem 9100, super fast, 2.5 million IOPS, et cetera. I've got an older storage subsystem, let's say a VNX1 or one of the original Nimble HP boxes,
and it still works, but it's all hard drives, and it's, you know, 10,000 RPM. It was great in its day, but now it's kind of slow.
I mean, you won't support my CLARiiON CX500 anymore.
Uh, we might. But if I've got that older box, why not turn it into sort of a backup tier or an archive tier? You could set up Easy Tier. And Easy Tier, by the way, is automated. It knows when the data set is hot or cold. When the data set is cold, it would sit on that EMC VNX1 as, if you will, an archive tier. And it's all automated.
If that data set becomes hot, like let's say financial data in a publicly traded company,
when the data set comes hot, it'll pull that data back from the EMC VNX1 back into the flash.
And when the financials are closed and the data set slows down, accounting and finance aren't
really using it anymore, it'll push it back out to the EMC array. So that's an all automated process, not by policy,
but literally it uses artificial intelligence to sense is the data set hot, meaning move it to the
flash, or is it cold? So we can do tiering inside of a storage array, but also from one storage
array to another storage array, that's ours, or from an IBM storage array to one of our competitor storage arrays.
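The hot/cold behavior Eric describes can be sketched as an access-frequency mover. This toy is purely illustrative, not IBM's actual Easy Tier algorithm; the threshold, interval, and tier names are invented for the example.

```python
from collections import Counter

class TieringSketch:
    """Toy heat-based extent mover -- NOT IBM's Easy Tier algorithm."""
    def __init__(self, hot_threshold: int):
        self.hot_threshold = hot_threshold
        self.heat = Counter()        # accesses per extent this interval
        self.tier = {}               # extent -> "flash" or "archive"

    def record_io(self, extent: str) -> None:
        self.heat[extent] += 1

    def rebalance(self) -> None:
        """Promote hot extents to flash, demote cold ones; reset counters."""
        for extent, hits in self.heat.items():
            self.tier[extent] = "flash" if hits >= self.hot_threshold else "archive"
        self.heat.clear()

t = TieringSketch(hot_threshold=3)
for _ in range(5):
    t.record_io("ledger")            # financials run hot at quarter close
t.record_io("old_logs")              # barely touched
t.rebalance()
print(t.tier)                        # {'ledger': 'flash', 'old_logs': 'archive'}
```

When the financials close and "ledger" cools off, the next rebalance would demote it back to the archive tier, which mirrors the push-pull cycle described above.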
Real flexibility on the data movement and data migration in this solution.
And the clustering can be, you know, not just Flash system 9100s.
Could it be like a Flash system V9000 and stuff like that?
Anything that uses Spectrum Virtualize.
Oh.
Yeah.
Now, by the way, you'd have a performance issue because if you cluster a Flash system 9100, which does 2.5 million IOPS,
with let's say our Flash system V9000, which does about 1.5 million IOPS, you've got a mismatch
between the performance of the two nodes of that two node cluster. So that's not necessarily a good
idea. Will it work? Absolutely.
But you'd probably want to do performance matching. So you'd most likely want to do clusters of equal-performance boxes, usually.
And the dedupe is across the internal storage as well as external storage, or how does that work?
So the dedupe works with any external storage subsystem that is supported by Spectrum
Virtualize. So dedupe on the FlashSystem 9100, obviously our FlashSystem V9000, our Storwize
products, our VersaStacks that use those all-flash arrays, and then the 440 arrays that are not ours.
We do not do dedupe on storage that sits inside of a server,
only storage that's external, that's virtualized by Spectrum Virtualize.
And what's the scale of a deduplication realm?
Well, we use what are called data reduction pools. And the data reduction pools,
you can set up multiple data reduction pools. If I remember correctly, I think it's like,
I want to say it's 10 terabytes per pool.
That's the limit?
Yeah, but then you just set up another pool, and it'll dedupe based on the pools.
And the pools can span LUNs and span volumes.
Any of those pools is a deduplication realm.
So if I have two pools and store the same data, I got two copies.
Because frankly, sometimes I want that.
Yeah.
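The point about pools as dedupe realms can be sketched with a content-addressed store: blocks are only matched against other blocks in the same pool, so identical data written to two pools is stored twice, which, as Howard notes, is sometimes exactly what you want. A minimal illustration, not the actual on-array data path:

```python
import hashlib

class DedupPool:
    """One data reduction pool: dedupe happens only within it."""
    def __init__(self):
        self.store = {}                       # fingerprint -> block

    def write(self, block: bytes) -> str:
        fp = hashlib.sha256(block).hexdigest()
        self.store.setdefault(fp, block)      # duplicate? keep one copy
        return fp

    def physical_blocks(self) -> int:
        return len(self.store)

pool_a, pool_b = DedupPool(), DedupPool()
for pool in (pool_a, pool_b):
    pool.write(b"same 8K block")
    pool.write(b"same 8K block")              # deduped within the pool

print(pool_a.physical_blocks() + pool_b.physical_blocks())  # 2, not 1
```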
And I could double-check for you, Howard, on the actual capacity of a pool.
Yeah, that's not really the important part. It's that it's something that's
under my control and relatively large, and that's kind of all I care about.
Yes, yes, it is.
You mentioned some of this other software, the Copy Data Management, Protect Plus, and that sort of stuff.
So what does Copy Data Management do for Spectrum Virtualize?
So what copy data management allows you to do is to provide a template-driven automation
via APIs, particularly for DevOps, and it does not require the storage admin.
So today, whether it's DevOps or test or the developer community in most of these companies,
they call up the storage guy and say, hey, I need an environment.
Or what the IT guys hate, they pull out their credit card and go to IBM Cloud or Amazon or Azure.
So what happens here is with copy data manager, you have secondary data copies, snapshots or backups.
And you can reuse that data for real-world development, which, of course, using real data sets always makes what you develop more effective and more efficient and less buggy, because you're not using fake data, you're using real data.
Well, it certainly eliminates the surprises when you go into production and discover it only runs a tenth as fast on the large databases as you thought.
Right.
So what happens is, with the storage snaps of that
database or the workload, the developer can spin up, with the templates and APIs,
an environment on their own, without calling up Howard Marks, storage expert; and Howard's so busy,
maybe it takes him two days to do that. They can do it on their own. It's very quick to do.
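The self-service pattern being described, a developer clones an environment from a template without the storage admin, and the request lands in an audit log IT can review, can be sketched roughly as below. All names and the API shape here are invented for illustration; this is not Spectrum Copy Data Management's real API.

```python
import datetime

# Hypothetical template catalog and audit trail -- invented for illustration.
AUDIT_LOG = []
TEMPLATES = {"oracle-dev": {"source_snapshot": "prod-db-nightly", "size_gb": 500}}

def provision_from_template(template: str, requester: str) -> dict:
    """Clone a dev/test environment from a template, logging who asked."""
    spec = TEMPLATES[template]               # unknown template -> KeyError
    AUDIT_LOG.append({
        "who": requester,
        "template": template,
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return {"clone_of": spec["source_snapshot"],
            "size_gb": spec["size_gb"],
            "owner": requester}

env = provision_from_template("oracle-dev", requester="howard")
print(env["clone_of"], AUDIT_LOG[0]["who"])   # prod-db-nightly howard
```

The audit trail is the contrast with shadow-IT credit-card cloud spend mentioned later in the conversation: every clone request is attributable.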
They can spin it up with the APIs. So the line of business guy,
the developer, the VMware guy can spin this up. But at the same time, there's a log that tracks
who spun up what. Which, by the way, IT has no control over when Howard or Ray pull out their credit
card and go to Amazon or IBM Cloud; they had no way of knowing about it. So I assume there's some multi-tenant,
multi-function administration model where I can say, you know, this Oracle DBA can fire up copies
of these databases.
Yep, that's exactly what you could do. So the admin can
configure it that way. The admin can track who spun up what
and when they spun it up. And one of the things we've also done with the FlashSystem
9100 is create what we call solutions blueprints. And all the software I described, Spectrum
Virtualize, Spectrum Copy Data Management, Spectrum Protect Plus, Spectrum Virtualize for Public Cloud, and Spectrum Connect. We use those.
Think of a blueprint as a recipe.
And Virtualize, Copy Data Management, and Protect are the milk, the eggs, the flour, et cetera,
for baking.
Well, you can bake a cake.
You can bake cookies.
You could bake a pie.
You could bake all kinds of things, but it's the same base ingredients.
And then you use those base items.
In this case, it would be all the software that comes with the Flash System 9100 to create
solutions.
So for example, in one of the solutions, which combines both IBM Spectrum Virtualize sitting
on a 9100 or virtualizing other people's storage, connected to IBM Spectrum Virtualize for
public cloud, which allows you to replicate data from the IBM array or other virtualized arrays
to a cloud, reducing capex. Because what you're doing is you don't need an IBM Flash System 9100
sitting in the cloud. You don't need an EMC VNX1 in the
cloud. You don't need an HPE block array sitting in the cloud. You've got a virtual instantiation
of Spectrum Virtualize sitting in a VM in the cloud. And all it cares about is the arrays just
move block data. So they're moving block data out to the cloud. Spectrum Virtualize knows where it
came from and can do that. Now, one of the good things in this use case, copy data management
can be used off of the DR copy that's sitting in the cloud. So in that instance, you've got DR
to a public cloud, so a multi-cloud use case. And not only can you do DR to the cloud, but then you
could take a copy of Spectrum copy data management and use that so that the DevOps
guys can do that. And then once they've done that, that snap and its management can either stay
A, out in the public cloud, or B, you could push it back to the Flash System 9100 on-premises.
So that's just one sample of what the blueprints do. And the blueprint... Yeah, go ahead.
When did Spectrum Virtualize become a virtual edition that runs in the cloud?
So Spectrum Virtualize for public cloud, we announced the beta in the summer of 2017 and available as a GA product in November of 2017.
So in Q4.
Okay.
And now we're including it on the array.
So you also can buy it as a separate piece, right?
Just a standalone piece of software.
But in the case of the Flash System 9100, part of the power of this is not just NVMe
and incredible performance and resiliency that IBM Flash Systems is known for.
And now going from older interfaces to the new NVMe interface.
By the way, we also will allow you to go out to NVMe over fabrics as well.
Whoa, whoa, whoa, whoa, whoa.
Forget all that.
I haven't talked about that yet.
I know.
But we also happen to include all this software.
So when you get the array, you get all this.
Spectrum Virtualize for Public Cloud
is one of the pieces you get.
And you could use one of our blueprints.
Again, think of it as a recipe.
In that example, you use several pieces.
You use copy data management.
You've got Spectrum Virtualize sitting on the 9100.
Spectrum Virtualize for public cloud.
All three of those pieces of software come with your Flash system 9100.
As I've struggled with making things work in the public cloud, it struck me that the public cloud may be where storage virtualization really starts to show its value.
Because, well, I mean, if you, I know Eric
isn't going to like this, but are you talking in Amazon terms?
Because that's what I know.
EBS is really limited.
An EBS snapshot isn't like a storage system snapshot.
You don't get things like consistency groups.
And so if you're doing lift and shift and you want multiple servers to access the same
LUN and get the kind of application consistent snapshots that we're used to in the
data center. It's really tough in AWS. It's not designed for the applications we run now. It's
designed for applications that know they're in the cloud. And so being able to do things like say,
let's build a two-tier storage system where EBS guaranteed IOPS, which are very expensive,
is the performance tier and S3, which is very cheap, is the capacity tier,
start making a lot of sense.
And having storage virtualization in front of all that,
providing multiple LUN access and stuff like that.
Well, that just delivers that out to the applications
that aren't smart enough to do the tiering themselves,
like everything that runs in a corporate data center.
I got you.
So we're almost producing a Spectrum Virtualize cluster out in AWS to service applications, I guess.
Right. Well, and right now, Spectrum
Virtualize for Public Cloud only works with IBM Cloud. But as I stated earlier, Spectrum
Virtualize for Public Cloud is going to support both Amazon and Microsoft in the next couple
quarters. So you can, if you're an IBM cloud customer, we can take care
of you today. If you're Amazon or Microsoft, a little bit later in the year and into early next
year. And by the way, one of the things that's interesting, if you talk multi-cloud, is we're
starting to see multi-cloud, in this case, use of multiple public cloud suppliers. So we see this
in two ways. A, some of it's legal. So for example, IBM's got about 30 cloud data centers in 30
countries. So if I'm in one of the other 160 countries in the world, and I happen to have a
law that says, if you generate data in my country, it needs to stay in my country. Well, guess what?
They might not be able to use IBM cloud, even though they love it. They may have to use Microsoft.
Conversely, Microsoft or Amazon may not have a cloud data center in a certain particular
country and we happen to. And so you've got legal reasons why you may use multiple cloud
vendors. You have buying patterns among bigger companies, where they may be divisionalized or
regionalized, so there's an approved vendor list at corporate, and the European guys can buy whatever
they want as long as it's on the list. And the North America guys can buy whatever they want. And the Latin America guys can buy whatever
they want. So as long as you're on the approved vendor list, there could be multiple vendors.
And then this is going to make Howard laugh in particular, but since Howard's at one of the
largest labs on the planet these days, he's probably seen this before. So he's going out
to do something and he gets a call from procurement. Well, I know you've
been using IBM cloud for many projects. You are now getting a competitive price from Amazon or
Microsoft, right? Because I would argue the cloud isn't new. The cloud is an extension of the
internet and the internet's been around for a long, long time. Clearly a guy in procurement
who's never written an application. Exactly. So we've seen some customers where the RFP or RFQ that we get says must work with
multiple clouds. And I can tell you part of the reason when we peel away the onion is, well,
you know, we don't just use one cloud at our giant company. We actually use several different
public cloud providers. Well, why would you do that? Why don't you just use IBM cloud or Microsoft? Why don't you just solidify? Well,
we can't because the procurement guys are beating us up about, we need to beat up the vendor on
better pricing, blah, blah, blah. So the only way to do that is we have to do business with both of
them. So you've got true legal reasons. You have in bigger accounts, truly regional buying patterns.
And thirdly, now that the cloud is kind of old, and I would argue, and this is going
to make Ray laugh, I would argue that every year of high tech is like a dog year.
So something's been around for three years, it's like 21 years old.
And the cloud's kind of been around, I'd say, for 10 years using the word cloud.
So that's 70 years old.
And if you go back to the cloud is really an instantiation of the internet.
I'm not counting the DARPA days. I'm counting like the internet.
It's like about 175 years old.
So guess what?
That means procurement's going to be involved.
So somewhere on all that, you mentioned the NVMe over Fabric.
Yes.
So where does that play in this new 9100 solution?
Sure.
So we publicly announced back in February that we had an NVMe fabric support over InfiniBand
for our Flash system 900. We also announced at that time that all of our existing products,
the 9100 wasn't out yet, of course, all of our existing products in the Flash side had RDMA
support going out to the hosts. So as long as you've got RDMA and a few other functions built into the
array, then as people move to an NVMe over Fabric, whether it be an Ethernet Fabric or whether that
be a Fiber Channel Fabric, as long as your software supports the specification, it's a
free software upgrade. So we publicly announced that back in February. So now what we're doing,
we are supporting NVMe in the array today, and we will support NVMe over Fabrics as soon as the specs are final. And
while some of the other vendors are willing to ship it without a final spec, IBM's a little
more conservative. So as soon as we have a final spec-
There's still some enterprise-y things missing from the NVMe over Fabric specifications.
So as soon as the spec is available, we'll do a final upgrade and Spectrum Virtualize will support both Ethernet
and Fibre Channel. It'll be a free upgrade because they bought the FlashSystem 9100,
which has Spectrum Virtualize on it. They download the free upgrade. It's non-disruptive to the array
while it's running. And then we will have NVMe support over Fabric for Fiber Channel or for
Ethernet. And we're just waiting for the spec to
be solidified. Now, remember, we don't control what's sitting in the server. We don't sell
NICs or HBAs. So they'll have to talk to their NIC vendor or their HBA vendor or their OS vendor on
things like multi-pathing support, right? Some of the things that we have today in the current industry for Fibre Channel and for iSCSI, you've got to make sure they've got them.
But from our perspective on the Flash System 9100, when the spec is solidified, within a couple of weeks, we will have a GA version, a Spectrum Virtualize, that will support those two interfaces over NVMe. As long as they have
the other stuff that works from a switch perspective, an OS perspective, and the
NIC and HBA vendor, our Flash System 9100 storage array will be NVMe back to the media and then
NVMe out to the hosts. So we're just waiting on the spec to be final, software upgrade,
non-disruptive. And we already are shipping today, even though there isn't a full spec, we are shipping on
the InfiniBand side NVMe over Fabric on InfiniBand with our partner Mellanox on our FlashSystem
900.
So now we're extending that across the board to our older products, which, as we announced
back in February, will support it with a software upgrade.
And the 9100, of course, is a new product. Not only does it have NVMe inside the array, but it's also going
to have NVMe out to the host, and the array itself has whatever hardware is needed to do that.
Have you guys announced whether you're in the RoCE or the iWARP camp on the RDMA side?
Both. We're going to end up doing both. I can't remember which one is first, but
the plan is to support both because we have customers and users on both sides of the fence.
So, as one of the largest storage companies in the world, as you know, for the guys that track the numbers,
last year we were the number two storage company in the world when you look at both the storage software and external storage systems markets.
We were number two.
So we've got so many customers that we're going to support both.
Now, by the way, if it turns out that one really wins.
Yeah, support for the other will fall off due to lack of customers.
Over time.
Yeah, yeah, yeah.
But if it doesn't do that and they tend to coexist,
then guess what?
They will coexist and we will end up supporting both.
Got it.
Now, is the FlashSystem 9100 a family of systems,
or is it just one system out there?
You mentioned external storage, that, you know, storage shelves can be added and stuff like that.
Right.
So the FlashSystem 9100 actually will have two models.
There'll be a FlashSystem 9110 on the lower end.
It will have dual-active failover/failback array controllers,
and each of those controllers will have dual eight-core processors with NVMe technology on each of
those array controllers. The 9150, its bigger brother, will have dual 14-core processors. Again,
the array controllers are dual-active, failover/failback, with everything they need.
But in the 9150, each will have two 14-core processors.
So the performance numbers we talked about earlier, the 100 microseconds, the 2.5 million IOPS, and the 34 gigabytes a second of bandwidth in 2U, are for the 9150.
And of course, that also assumes flash core modules.
If you use industry standard SSDs,
which we also support,
obviously the performance won't be as good.
But two models, a 9110 and a 9150.
And the 9110 will be less expensive,
just like the lower end of any product
in the storage space or server space.
Or automotive space.
Automotive too. Yes.
But the software is exactly the same and all that.
Yeah.
And the performance numbers are with all three HBAs in each controller, too.
So, but the software load? Yeah, the software load will be exactly the same,
like you said, Ray.
So if I get a 9110 or a 9150, I still get Spectrum Virtualize.
I still get Spectrum Copy Data Management.
I still get Spectrum Protect Plus.
I still get Spectrum Virtualize for Public Cloud, and I still get Spectrum Connect.
So the software load doesn't change.
It's really a change in the array controller hardware.
That's the change.
You mentioned Protect Plus a number of times.
It's not actually Spectrum Protect, but Spectrum Protect Plus. Now, is that a new solution there,
or what's going on there? So Spectrum Protect Plus came out last August at the VMworld show.
As you know, Spectrum Protect, which is a very high-end enterprise class solution, very well respected in the industry,
has had kind of a gap.
Its virtualized backup support was okay.
It wasn't bad, but it was okay.
And, like most enterprise products,
it requires a substantial commitment to figure out.
Right, but Spectrum Protect Plus can be integrated for virtual environments
with Spectrum Protect, and the VMware or Hyper-V admin can do the backup without the storage
guy doing anything. It's that easy.
So it's a reswizzle of the user interface and the APIs?
Better, yeah, better. So Spectrum Protect Plus gives you better support for virtual environments, seamless, and again, so easy to use that a VMware admin can do it.
But Protect Plus gives us another advantage as well:
Spectrum Protect Plus can be used in smaller accounts.
So Spectrum Protect is clearly for the big, giant enterprise, and Spectrum Protect Plus gives it incredible,
great hypervisor backup in that giant enterprise. But Spectrum Protect Plus could also work for something smaller:
for Herzog's Bar and Grill, for Howard Marks' IT shop, for Ray Lucchese's cigar store. Yes, for all of those, Spectrum
Protect Plus can work for small companies. So Spectrum Protect Plus gives us two advantages.
A, in the big giant enterprise accounts, VMware and Hyper-V admins can do the backup on their own,
okay, which is really beneficial in a giant global enterprise. But it also allows the IBM teams and quite honestly,
our IBM business partners to sell to Herzog's Bar and Grill because Herzog's Bar and Grill will
never use Spectrum Protect. But Spectrum Protect Plus, full backup solution, great for the lower
mid-range to smaller shops. So it can be used in that vein or with Spectrum Protect. It also,
by the way, in this instantiation in the 9100 is part of our
solution blueprints. So several of the solution blueprints actually use Spectrum Protect Plus as
well, which can back up data either on-premises or to the cloud.
And the other software package that was part of this was something called Spectrum Connect.
So Spectrum Connect gives us persistent storage for Docker and
Kubernetes environments. It works on our entire set of block arrays. So we've already been shipping
it for several quarters, but it will also be on the FlashSystem 9100. So in that case,
Spectrum Connect is not new, it's existing, but it allows the FlashSystem 9100 to be easily and
seamlessly integrated for persistent storage into containerized environments that leverage Docker and Kubernetes.
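For readers who haven't worked with container persistent storage, here is a minimal sketch of the kind of PersistentVolumeClaim a Kubernetes user would submit against a storage class backed by a block array. The storage class name `ibm-block-gold` and the helper function are hypothetical illustrations, not something stated in the conversation; the actual class names depend on how the administrator configured the storage services.

```python
import json

# Hypothetical StorageClass name; the real name depends on the
# administrator's configuration of the array-backed storage services.
STORAGE_CLASS = "ibm-block-gold"

def make_pvc(name: str, size_gib: int) -> dict:
    """Build a PersistentVolumeClaim manifest asking the platform to
    provision a block volume of the requested size from the array."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "storageClassName": STORAGE_CLASS,
            "resources": {"requests": {"storage": f"{size_gib}Gi"}},
        },
    }

if __name__ == "__main__":
    # A 100 GiB claim for an application's data volume.
    print(json.dumps(make_pvc("app-data", 100), indent=2))
```

The point of the integration is that the container platform satisfies claims like this automatically, so the Docker or Kubernetes user never has to ask the storage team to carve out a LUN by hand.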
Gosh, it seems like you've got everything here covered.
Well, that's what we're doing.
And by the way, here's the great thing.
We're not going to charge more.
That's relatively un-IBM, isn't it?
Well, I know it's non-EMC. So what we're doing is, as you've seen from several of the announcements that have been
coming out on NVMe, almost every vendor, with the exception of us and one other, has publicly
stated that when you get an NVMe system, you're going to pay more.
At IBM, we're not going to do that.
The NVMe systems will be no more expensive than the current flash system
V9000. Final pricing is not set yet, but it may actually be lower cost. We may sell it for less
than the current system. That is what we're looking at. As you know, when you go back to the hard drive days,
and as I mentioned, I was at Seagate and Maxtor in the hard drive days, it was always called bigger, faster, better,
cheaper. So for us that mantra is going to be bigger, faster, better, same price, but with more value, right? Bigger, faster, better, meaning all the extra software. Remember, in
a V9000 or a Storwize you get great software, Spectrum Virtualize. So all the benefits you
guys were talking about in heterogeneous environments, that comes with those. But you don't get Spectrum Copy Data Management. You don't get Spectrum Virtualize for Public Cloud. You don't get as much software in our other arrays, nor from our competitors. None of my competitors ship, by the way, base software that virtualizes their competitors' boxes. I mean, we're the only one doing that. And then they don't ship a backup product with it. So, you know, you get a lot more software for the money, right?
And then of course the performance is dramatically different, dramatically different. And I would
say, you know, one other, sorry, one other member of the storage industry has announced
a storage subsystem with NVMe that'll do 10 million IOPS.
Except it's their most expensive subsystem you can buy. And they publicly stated that their NVMe
versions, at least for the next couple of quarters, will be more expensive than their standard versions.
For us, we will not be more expensive. By the way, with the FlashSystem 9100, you guys know the market
well. It's an enterprise-class, mid-tier product.
So you're going to pay mid-tier pricing for 10 million IOPS in a cluster,
with 136 gigabytes a second of bandwidth and latency at 100 microseconds,
with the software loaded and on.
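The cluster figures quoted here line up with the per-system numbers from earlier in the conversation (2.5 million IOPS and 34 GB/s per 2U system). As a quick sanity check, assuming a four-system cluster; the cluster size is inferred from the numbers, not stated explicitly:

```python
# Single FlashSystem 9150 figures quoted earlier in the conversation.
iops_per_system = 2_500_000       # 2.5 million IOPS per 2U system
bandwidth_gbs_per_system = 34     # 34 GB/s per 2U system

# Assumed cluster size, inferred from the 10M IOPS / 136 GB/s claims.
cluster_size = 4

cluster_iops = iops_per_system * cluster_size
cluster_bandwidth_gbs = bandwidth_gbs_per_system * cluster_size

print(cluster_iops)           # 10000000 -> the "10 million IOPS" claim
print(cluster_bandwidth_gbs)  # 136 -> the "136 gigs of bandwidth" claim
```

So the headline cluster numbers are exactly four times the single-enclosure figures, consistent with linear scaling across a four-system cluster.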
And as you know, from a latency perspective,
one of the things we were talking about earlier with the FlashSystem family
is that our latency is always the same.
So as you know, with an SSD-based system, when the array is empty, you get really good latency.
As the array fills to capacity, the latency goes up, which means the applications and workloads
run slower. For us, we deliver the same latency, whether there are two terabytes
on that FlashSystem 9100 or whether there are two petabytes.
No matter whether the array is empty, partially full, or completely full,
you will always get 100 microseconds of latency. Now that means your apps won't slow down,
whether you have a lot of storage connected or a little bit. And that's not true with SSD-based
solutions.
All right.
Well, gents, this has been great.
Howard, any last questions for Eric while we got him online?
No, I think we got it.
Eric, anything you'd like to say to our listening audience?
Well, we'd just like to thank the gray beards for having the gray mustache visit.
And we really do appreciate what you guys do for us. So again, thank you very much and looking forward to an outstanding launch
and the launch will be coming on July 10th.
So thanks a lot, guys.
We really appreciate it.
Well, this has been great.
Thank you very much, Eric, for being on our show today.
And thanks to IBM Storage for sponsoring this podcast.
Next month, we will talk to another
storage system technology person.
Any questions you want us to ask, please let us know.
And if you enjoy our podcast,
please tell your friends about it and please review us on iTunes as this will also help get the word out.
That's it for now. Bye, Howard. Bye, Ray. Thanks, Eric. Thank you.