Grey Beards on Systems - GreyBeards talk NexGen Storage with John Spiers, CEO, and Kelly Long, CTO, NexGen Storage
Episode Date: June 5, 2015. In this podcast we discuss NexGen's hybrid storage with John Spiers, Founder & CEO, and Kelly Long, Founder & CTO, of NexGen Storage. Both John and Kelly have had long and interesting careers across multiple companies ranging from startups to major industry organizations, so they bring a unique perspective to what's happening in the …
Transcript
Hey everybody, Ray Lucchesi here and Howard Marks here.
Welcome to the next episode of Greybeards on Storage, a monthly podcast show where we
get Greybeards storage and system bloggers to talk with storage and system vendors to
discuss upcoming products, technologies, and trends affecting the data center today.
Welcome to the 21st episode of Greybeards on Storage, which was recorded on May 28, 2015.
We have with us here today John Spiers, CEO, and Kelly Long, CTO, of NexGen Storage.
Why don't you tell us a little bit about NexGen Storage, folks?
NexGen Storage is a data storage company that sells into the mid-range and high-end markets. We've developed unique SAN capabilities involving flash, disk, and other storage media to deliver what we believe is the first implementation of true quality of service in a shared SAN environment, which, from a customer standpoint, delivers consistent performance and other service-level criteria across multiple applications simultaneously.
So quality of service is a major feature set of the product then? Is that how I read that, John and Kelly?
Yeah, actually it's integrated deeply into our entire software stack and hardware infrastructure. We've actually built the entire system around quality of service. The functionality is in every module all the way up and down the
stack. Every I/O is analyzed and processed independently, looking at a stats database of how the volume has behaved in previous hours and months. We actually
have a lot of historic data we use.
We also look at what's happening in the current system
as far as how many IOs are going to a particular storage device,
how's the CPU doing, how's the processor, how's the network.
It's integrated all the way up and down.
So our quality of service isn't just kind of a bolt-on little addition somewhere.
It's core to the architecture.
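To make that concrete, here is a minimal sketch of what a per-I/O quality-of-service decision could look like, assuming a stats database of recent volume behavior and a snapshot of current CPU, network, and device load. All of the names, fields, and thresholds below are invented for illustration; this is not NexGen's implementation.

```python
# A minimal, hypothetical sketch of a per-I/O QoS decision; the class names,
# fields, and thresholds are illustrative only, not NexGen's actual code.
from dataclasses import dataclass, field

@dataclass
class VolumeHistory:
    iops_last_hour: int          # pulled from the historical stats database

@dataclass
class VolumePolicy:
    target_iops: int             # customer-configured service level
    priority: str                # "mission-critical", "business", or "non-critical"

@dataclass
class SystemSnapshot:
    cpu_util: float              # 0.0 - 1.0
    network_util: float          # 0.0 - 1.0
    device_iops: dict = field(default_factory=dict)   # device id -> current IOPS

def place_io(history: VolumeHistory, policy: VolumePolicy, snap: SystemSnapshot) -> str:
    """Decide, for one I/O, whether to service it from flash or disk."""
    flash_load = max(snap.device_iops.get(d, 0) for d in ("flash0", "flash1"))

    # Mission-critical volumes get flash unless the flash tier is saturated.
    if policy.priority == "mission-critical" and flash_load < 90_000:
        return "flash"

    # Volumes already running past their target, or a busy system overall,
    # get steered toward disk so they can't starve their neighbors.
    if history.iops_last_hour > policy.target_iops or snap.cpu_util > 0.85:
        return "disk"
    return "flash"

# Example: a business-priority volume on a moderately loaded system lands on flash.
print(place_io(VolumeHistory(4_000), VolumePolicy(10_000, "business"),
               SystemSnapshot(0.40, 0.30, {"flash0": 50_000, "flash1": 45_000})))
```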
And you mentioned hybrid storage, so it's a disk SSD configuration?
Yeah.
Today, the architecture is shipping as disk and solid state on PCIe.
The architecture was designed to be multiple tiers, so today it's two tiers.
We've always envisioned other tiers and other storage media with different characteristics.
So you're teasing us already with the all-flash array, right?
I'll just say that storage never stands still. There's always new technology coming out. Some of it might be faster. Some of it might be slower. They have
different functionality
and capabilities. I'll speak to some hard drives I've seen recently that are more along the lines of write to them once, don't ever write to them a second time.
Oh, God. Yeah, yeah, yeah, definitely.
Yeah, that style of storage is definitely different. But for us, we can integrate it
into the tier, understand its characteristics, and put it to work.
What applications? Are you guys focused on any particular application arenas?
Yeah, I mean, we sell across all verticals.
You know, we're a general-purpose storage solution, but we're getting a lot of interest from customers and have quite a few customers in production using big data applications like Splunk where they can dynamically tier data based on, you know, spikes in workload sets, for example, ingesting a bunch of data, running analytics, and then moving it off to a different tier of storage when they're done with those analytics.
And we provide the mechanism to do that very seamlessly
just by changing a policy.
So, yeah, so small SQL server data sets are on our storage
and being able to tier those data sets into Flash on the fly
and run database queries or different analytics.
We sell a lot into the VDI space as well, like everyone else.
But what's unique about our VDI value proposition is that the boot load, the desktop boot, doesn't
impact the performance of other applications running on the SAN.
For example, if you're running your database queries and you boot 1,000 desktops,
do you see your database queries run much slower?
In our case, you don't see any degradation in performance for mission-critical applications
while those VDI workload spikes occur.
And that's another example of our quality of service.
It's really end-to-end developed in our system from the ground up.
You know, a lot of the other vendors out there claim quality of service
and tout their quality of service capabilities.
But when you look under the hood of what's really there,
it's just an IO prioritization scheme where they're starving some application
and giving others priority, whereas we can deliver consistent performance across multiple
applications simultaneously based on what the customer actually wants as far
as performance and reliability and data protection, all those kinds of things.
That's unusual, and you're not the first vendor to talk about this, but Splunk and big data types of applications on hybrid or even all-flash storage seem, in my mind, to be overkill. Yet these sorts of things are becoming more important.
Is it this real-time aspect of what they're trying to do?
Yeah, well, it seems to me that you think of it that way because there's a large amount of data, and so I want to keep it on spinning disks to keep it cheap, and it's relatively low-value data. But if you can tier dynamically, then you're renting the high performance only when you're doing the query.
Yeah, that's exactly right.
When they want to run analytics on the data or run queries on the data
and look at it and analyze it, they want it to be extremely fast.
When they're not looking at the data and don't plan to for a period of time,
they like it on the cheapest storage they can get.
Yeah, so they transfer it on and off,
and they transfer it on for the analytics and off
when they don't need it anymore.
It's like Mr. Miyagi said, cache on, cache off.
Oh, really?
Okay.
Oh, gosh. And yeah,
the quality of service, like you say, everybody
has quality of service, John, but
it's not like it's
the same for everybody.
Yeah, I mean, the dirty little secret in the industry is, you know,
this first wave of storage products that incorporate Flash,
most of them are still using what I call a legacy disk drive data path.
So if you follow an I.O. through the system, it goes through a shared
RAID controller that's sitting in a PCIe slot, like an LSI RAID controller, for example.
You know, these RAID controllers, granted the processor speeds have increased and the number
of ports per RAID controller have increased and the overall bandwidth and throughput have
increased over time. But this device was designed for disk drives.
You know, if you look at the LSI RAID controller, I think it was designed by American Megatrends
and then bought by LSI and became the Mega RAID division of LSI, and then was eventually
acquired again by Avago, I guess.
But if you look at that technology, you know, all the I.O. goes through that RAID controller,
and these RAID controllers were designed for the latency of disk drives.
And there's a lot of timing loops and processors, everything it does, the RAID algorithms,
the striping, you know, different RAID levels, parity calculations, all those kinds of things were designed for disk drives. And most of them require a fan-out expander to really add the number of devices you need behind them.
So, for example, a six-port LSI RAID controller with six gigabits per port delivers, at a high level, 36 gigabits per second. But you start calculating what happens when you add a fan-out expander and add more disks behind that, and at what point does that RAID controller become a bottleneck? It really isn't designed for this, and if you look at the latencies through that data path, they're off the charts. They're
not really designed for flash devices, which are more like RAM memory than they are like a disk
drive. So those first wave of storage products are all bolt-ons to disk drive architectures.
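To put rough numbers behind that, here is the back-of-the-envelope math using generic spec-sheet figures. The throughput values are assumptions for illustration, not measurements of any particular controller or drive.

```python
# Back-of-the-envelope RAID controller bandwidth math; all figures are generic
# spec-sheet assumptions, not measurements of a specific product.
ports = 6
gbps_per_port = 6                         # 6 Gb/s SAS per port
raw_gbps = ports * gbps_per_port          # 36 Gb/s aggregate, as quoted above
usable_GBps = raw_gbps / 10               # 8b/10b encoding leaves ~3.6 GB/s of payload

hdd_MBps = 150                            # a fast spinning disk streams ~150 MB/s
sas_ssd_MBps = 500                        # a 6 Gb/s SAS/SATA SSD streams ~500 MB/s

print(f"usable controller bandwidth: ~{usable_GBps:.1f} GB/s")
print(f"disks to saturate it:        ~{usable_GBps * 1000 / hdd_MBps:.0f}")
print(f"SSDs to saturate it:         ~{usable_GBps * 1000 / sas_ssd_MBps:.0f}")
# Hang a fan-out expander off those ports and dozens of drives end up sharing
# the same ceiling: fine for disk, a bottleneck for flash.
```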
We saw early on that flash needed to be treated like memory, and it needs to be on a very fast,
low-latency bus. So all of our flash devices and low-latency storage devices,
whether it's an NVDIMM or flash device or anything that's solid-state,
we deploy that on the PCIe bus or memory bus.
And then, you know, if we have disks in our system,
obviously those belong behind the controller,
but nothing else really belongs there.
So if you look at the performance of some of these bolt-on architectures,
there's a lot of latency through the controller.
It makes QoS almost impossible because now you're managing workload through a central device,
which isn't meant to be managed in that manner.
Whereas managing things across PCIe lanes and buses and memory on the computer architecture makes QoS much more effective than trying to do it through a shared device like a RAID controller. So we see the industry transforming from SSD bolt-ons to disk architectures into PCIe backplanes and buses and switches, managing these very high-speed, low-latency devices much more cost-effectively and efficiently.
And you can see some standards like NVMe and others that are moving in the direction of implementing those front ends on the flash devices.
Ultimately, you're thinking about architectures that are rack-scale flash, kind of like DSSD?
Definitely.
Not that you guys are announcing any such thing.
If you look at the industry,
there's been a big need for a very low latency, high speed backplane that's cost effective, easy to implement, and is standardized, right?
You know, InfiniBand was kind of the first, you know, they weren't the first, but they were the first to really kind of get traction.
And, you know, but you don't find InfiniBand in a lot of IT data centers today
or even in the cloud.
Most of it's in high-performance computing.
But the whole idea behind it.
Or buried in the back of things like Isilons and XtremIOs.
Yeah, yeah.
But that's an example of a very high-speed, low-latency backplane.
And the reason they, you know, those systems are architected with that
is to have a faster data path to devices like memory or flash.
And Intel's vision is to use PCIe as that very fast, high-speed, low-latency backplane using switches.
Intel's entire roadmap on their flash devices is PCIe. If you look at the cost of the PCIe framework, the back ends of storage systems will be very tightly coupled, high-speed, low-latency backplanes, where the front end will continue to be Ethernet, Fibre Channel, and whatever other protocols connect you to the outside world.
You support Fibre Channel, presumably? Do you support, like, iSCSI as well?
Kelly, you want to take that one? Sure, yeah. So today it's iSCSI that we ship.
We have various different flavors of communication protocol up and running in our labs here,
and we're trying to decide when it makes most sense
and what configurations to go along with those for shipment
and actually release to the wild, if you will.
Yeah, yeah.
So the technology's there.
It's in our labs.
We're just trying to decide where to put it all,
and today we're iSCSI only.
Ray, it's the LeftHand guys.
What did you expect?
Well, you want to talk a little bit about the history, John,
of where you've been and how you got to where you're at?
Yeah, sure, why not?
Well, I started my career in the disk drive industry in 1984, developing the first Winchester disk drive for the IBM PC XT.
God, he is a gray beard.
Yeah, so that qualifies. I was going to say that qualifies me for the gray beard, right? We were sitting around arguing whether anybody would use 10 megabytes of storage or not.
Yeah, well.
Was that MiniScribe or Rodime?
Well, that was MiniScribe, which eventually became Maxtor, right?
So I spent a lot of years at Maxtor.
Kelly and I were working in Maxtor's network systems group that made the MaxAttach NAS product.
And it was this little box with two disk drives in it that had an NFS and CIFS front end.
And people loved these little NAS boxes
because they could share files and so forth.
And customers would love their first little NAS box
but hated their 10th NAS box, right?
Right.
So customers would ask,
is there any way to consolidate these together
and manage them as one?
And that really became the vision for LeftHand Networks,
where it was a node-based architecture that scaled capacity and performance
by just adding nodes to the system.
It was one of the most advanced distributed systems architectures out there,
and to this day I think it still is.
Oh, God.
Because it had the capability of load balancing using data locality awareness from the client.
A lot of the clustering technologies like EqualLogic and others use I/O forwarding schemes. We sent data based on that locality awareness, and you could set replication levels for high availability and campus SANs and all that kind of stuff.
So we sold that company to Hewlett-Packard.
There's a lot of reason behind that, but one of the main reasons was we were in the midst of a financial crisis,
and we had limited choices for funding going forward
but we pulled the band back together. So Kelly and I started thinking things through. We were playing around with flash devices and HBAs, and we quickly saw that a couple of these SSDs saturate a RAID controller.
And why would you want to bolt flash onto a SAS wire anyway?
It really belongs on the PCIe bus. And then how do you
develop a system with that and still make it serviceable and
hot-swappable and all those great things? But the real
vision behind the company was clearly the QoS
and the capability to do QoS. So if you look at,
you know, LeftHand had a great scale-out architecture, but a lot of service calls and complaints came in saying, hey, when my database, you know, my financial reporting kicks off on Friday afternoons, all the other applications die.
My Exchange server performance is all over the place.
My SQL server performance is all over the place.
How do I get a handle on performance?
And it was always the tap dance,
and the overall SAN should do X IOPS.
You've got to kind of figure it out, right?
And then as customers virtualized more and more, it just exacerbated the problem. Now you have this really concentrated I/O blender feeding I/O to your SAN, and performance is all over the place.
So that's the main problem we set out to solve.
I always find it really amusing that you guys and your crosstown neighbor, SolidFire, both set out to address this problem and, if because of nothing else but location, both have a lot of heritage from guys who were important to LeftHand.
Yeah, that was really interesting, wasn't it? Because there were really no discussions between the groups at all, and we both came to that conclusion. It was probably from the pain we experienced at LeftHand with quality of service and being able to handle all these mixed workloads and being able to control them.
Yeah, the funny part is Dave saw a similar, but not exactly the same, problem when he was at Rackspace. And then as soon as he started hiring, along came Adam, and it was like, yep, we saw that at LeftHand, too.
It's actually a good testament to the crews that are there. There's a lot of good, forward-looking brain trust at both of these companies, looking at what's coming down the road, trying to get out in front of it, and getting good technology available for customers out there.
Yeah. Yeah, the other place where I think you guys have
kind of a unique knowledge set is,
you know, we're in another one of the periodic battles
about whether we should be using custom hardware
or industry standard.
Yes, yes.
I hesitate to use the word commodity
because I don't think a Supermicro server
and an HP server are exactly the same.
But
I know at LeftHand
you guys tried several go-to-market
models, and that software-only had its problems.
And we don't want to talk about that.
Whoa!
Why is that?
Come on, I'm sure there are some good stories there.
Yeah, I mean, there's a lot of different go-to-market models.
I mean, the beauty of the left-hand software, and to this day, HP ships what's called a virtual SAN appliance.
Yes, yep, yep.
Right.
Which allows you to build a converged SAN, really, with multiple nodes.
In fact, they could have probably become Nutanix with that capability.
But then HP is HP, right?
Exactly.
But, you know, if you look at the channel, you know, we're 100% channel.
And when you sell to the channel, they sell complete solutions, software, hardware integrated together, one throat to choke, one price.
And, you know, the selling motion's kind of predefined, because the channel's been selling this stuff forever.
When you tell them to cobble together a server with software, who's going to test it? Whose fault is it if it breaks? It starts complicating the whole business model and the go-to-market with software on anybody's server.
And the fact that channel partners only get, you know, five points on servers and they get 30 points on SANs kind of tells you, you know, why it didn't work, right?
But that being said, you know, there's certainly a market for a software-only virtualized SAN
product that's server-based.
Do you see this too? I recently wrote a post on, you know, the fact that hardware innovation is starting to accelerate again, in the storage market at least.
Yeah, it is because the last generation of hardware was all designed around disk drives.
So, like I said, the data path is being re-architected for these new higher speed, low latency devices.
And so you're going to see a lot of the old disk drive baggage transform into newer hardware capabilities that really take advantage of flash, and of the flash devices themselves for that matter.
I mean, Moore's Law kind of ran out of gas on the flash side of things, and so now they're
stacking the bits
versus getting more bits on one layer.
And so, you know, you see your single-level cell, two bits per cell, three bits per cell,
and, you know, guys even working on four bits per cell.
And that's going to bring the price down on this stuff to 50 cents a gig and below.
But then the real issue with that is, what's the reliability? How many drive writes per day can it handle? All those kinds of things. So your software has to be able to handle these different tiers of flash and manage the wear so that it all lasts at least five years, or at least lasts through the warranty period, and gives the customer real-time visibility into the life of the stuff.
I kind of question that.
Not that the software has to manage the flash aging,
but I question the assumption that you have to manage it
so that it lasts five years for everybody.
I think if you managed it so that it lasted five years
for 95% of your customers and you had 5% of customers where you replaced it because it wore out?
No, it's a better position.
That's probably economically a better position.
Yeah, we actually take logs from all of our customers, and Kelly can talk more to this,
but we actually graph out the logs of how many gigabytes or terabytes have been written to our flash devices,
and then we look at how close they are to wear out,
and we can actually calculate our field exposure of flash card wear outs
that are under warranty and what our warranty cost is going to be.
And so we'll...
Right, you can take that the next step and, you know, proactively ship a replacement.
Yes, definitely. We can do that, and we do do that today. We can understand that bell curve and understand where we want to be on it. And you're exactly right, we don't want to ship products that last five years for, you know, an HPC workload at 100%, right?
Because then everybody else overpaid.
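As a rough illustration of that kind of calculation, here is a sketch of projecting flash wear from field telemetry. The card capacity, endurance rating, and telemetry values are invented for the example; only the arithmetic is the point.

```python
# Hypothetical flash wear-out projection from field telemetry; capacities,
# endurance ratings, and telemetry values are invented for illustration.
def project_wear(tb_written: float, days_in_service: int,
                 card_capacity_tb: float, rated_endurance_tb: float):
    """Return (drive writes per day, projected total life in years at this rate)."""
    tb_per_day = tb_written / days_in_service
    dwpd = tb_per_day / card_capacity_tb
    total_life_years = (rated_endurance_tb / tb_per_day) / 365
    return dwpd, total_life_years

# Example: a 2.4 TB card rated for ~17 PB written, 400 days in service,
# 4,000 TB written so far.
dwpd, life = project_wear(4_000, 400, 2.4, 17_000)
print(f"observed workload:      ~{dwpd:.1f} drive writes/day")
print(f"projected life at rate: ~{life:.1f} years")
# Cards projected to wear out inside the warranty window can be flagged for
# proactive replacement; the fleet-wide distribution is the warranty exposure.
```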
So do you guys publish any statistics on what the aggregate write workload is across your install base?
We don't actually publish those. We do have them in-house, and we actually have
knowledge about how different products are being used and what different workloads and environments.
We haven't come forward and published kind of an average or a sanitized version of that.
Kind of statistical analysis of it. Yeah, it would be interesting. I mean, the only thing I really
saw that was published was, you know,
EMC did something, Chad had some stats,
and I think it might have been Pure did something as well.
But Toshiba had done something a while back on, like, laptops and stuff.
We actually have all of those stats. It's integrated into our QoS engine, as I was mentioning earlier, with our stats database.
So we track every I/O and what it's done, where it's gone, how it's been handled.
So we have a large number of data points on how all of the systems are working out in the field that we can bring back and analyze and scrub.
It's actually a good question that we may eventually try to publish.
Yeah, we'll have to talk about publishing some of that data the next time I'm up there.
Yeah, let's do that. Then we may decide to try to say, here's three environments. Here's a VDI
environment. Here's a Splunk environment. By application, that'd be great. Yeah.
I think that would be very useful from an industry perspective, just from a storage industry aggregate level kind of thing. I do know we have the data, because we look at it to decide how our market positioning should go, where we should be in the market, and what product should go where.
Right, right. But, you know,
we're all working under assumptions. We just don't know. Those of us in the chattering class are all working under, you know, "you really need a five drive writes per day drive in that application."
And that's just a guess, because we don't have the data.
And it would be lovely to be able to ask the same question of four or five vendors and go,
okay, so what do you guys see here?
And be able to say, well, for this application, this is what most people do.
And it's actually a quite interesting question because most of our customers don't even really know.
They say, well, I run VDI.
And then they're going to tell me they do petabytes per day.
And I'm going to say, I'm pretty sure you don't.
But no problem.
We'll put in an N5 and we'll see how it runs because we've got metrics that will show you.
Yeah, that's interesting. Yeah, I mean, you know, the data clearly shows, for example,
the Fusion I.O. flash that we use in our system, which is now SanDisk.
Right.
And we're not necessarily tied to Fusion,
but Fusion is still one of the best PCIe flash products out there.
You know, our product consistently shows that we can do 10 drive writes per day with that technology.
10 full drive writes a day?
A day, and have it last five years.
Yeah, yeah.
Depending on the application, obviously, but that's kind of where it sits at a high level. Whereas if you look at some of the, you know, commodity SSDs from Intel that some of our competitors use, our data shows those are less than one drive write per day reliability.
But there's a market for that,
and it really depends how you tier the data in your system and how friendly you are to those devices.
So I had another technical question, but before I go on to that,
there's been some interesting history with NexGen.
You were purchased by SanDisk and then spun out.
Do you want to talk a little bit about what transpired there?
Yeah, so we were acquired by Fusion.io.
Oh, I'm sorry, Fusion, that's right, yeah.
Fusion had an incredible vision.
And, you know, David Flynn clearly saw that, you know, they were the market leaders in the first market with PCIe Flash,
but they saw a lot of competition coming up, and they realized that it was eventually going to become commoditized.
So David refocused the company on building solutions using that Flash, right?
He acquired three storage companies, ID7, us, IO Turbine.
It was building out a portfolio of solutions to take through the channel
and sell into the enterprise.
Obviously, that strategy got derailed by the state of the business
and revenue and all those kinds of things.
Well, and the board giving David the boot.
Yeah, exactly.
And so we were caught up in that.
And six months later, they're selling the company to SanDisk and we're like scratching our heads.
What's this all about?
And the next thing you know, HP management's calling up SanDisk and saying, what the heck are you guys doing selling SANs, competing with us in the channel?
So it was clearly a conflict of interest for SanDisk.
They do not want to be in the systems business.
Any systems that they build, they build it for the OEMs, not to sell themselves.
Right, right.
So it's a pure OEM business model, and we were clearly a conflict with that.
And plus we had disks in our system, which were a real no-no for SanDisk, right?
Yeah, as a flash foundry, it didn't really fit their model.
Yeah. So, you know, they were real nice to us, and we have great OEM and technology agreements and all that good stuff. But we worked with our management team on spinning the company out and then getting back to what we do best, which is building great SAN products and taking them to market through the channel.
The brand kind of changed off and on across that transition back to NexGen Storage.
Isn't this where you started?
Yeah.
So we were NexGen from the very beginning.
We were even NexGen under Fusion-io, but our product was called ioControl.
Right, right.
You know, we kind of put the NexGen name back on the shelf. When we spun out from SanDisk, we actually spun out the actual NexGen entity that still existed.
We have all the intellectual
property. We own 100% of the company
and all that great stuff.
We spun the company back
out on its own.
We're 100% independent.
And SanDisk has no ownership whatsoever.
And we're back on our own, like I said, doing what we do best.
Did your channel undergo any transition between the various acquisitions?
Well, the channel kind of saw us go away.
Okay, okay.
And, you know, lack of channel programs, lack of communication,
lack of, you know, feeding them leads through our lead gen program and et cetera, et cetera. Now they all see us coming back and are all, you know,
the channel's been real friendly with us and understanding,
and they're still very excited about our differentiation and our product.
And so we're back engaged with the channel, and CDW is a great partner, and we're out there growing the channel and doing great.
I just want to be to the point on it. Fusion and SanDisk really squelched our ability to market and sell through the channel. They had their own ideas, so it wasn't our decisions or our directions, it was them kind of squelching us. So we're back on our own now, we're ramping all that back up, and we're already seeing the channel accept us pretty quickly because they remember us from the past.
Yeah, to tell you how bad it was under SanDisk, they wouldn't even let us have our product in the SanDisk booth at VMworld.
Whoa.
Yeah, that was tough. They gave us a little back room, a closet somewhere, to have meetings, and they said, please don't show yourselves at our booth. Talk about red-headed stepchildren.
Interesting.
And then Fusion was just, you know,
Fusion, the big problem they had
is it was a direct and OEM
model as well, even though they
had aspirations of building a channel program.
And so we kind of got
caught up in that as well, and
it never really did get transitioned
into a channel program.
Okay, back to the technical side. I didn't hear anything mentioned about data reduction.
Because of the fact that you're a hybrid environment, you don't do any data reduction
on board the system?
Actually, we do have data reduction algorithms we run. We call them our dedupe algorithms, as a lot of people would, but it's more of what we call simple dedupe, and reducing IOPS is really what it targets. It's a calculable-pattern engine. We can calculate and decipher data, and we'll just write a little bit of metadata stating, at offset X inside this volume, regenerate this pattern from the CPU. So it fits in with our architectural tiering.
We truly have two that we described today. This is really a third tier, and I call it our CPU tier,
because the CPU is the one generating the data for us and the one absorbing the data for us. And we just have to store a little bit of metadata that we can use to regenerate it
back through the CPU. So it's not what you'd consider a traditional dedupe algorithm. We have
the ability to put that into our software stack and it's actually been designed and we've got
some shims in there, but we haven't chosen to go tackle that project just yet,
mainly because it's not as big a differentiator as our QoS is.
Our QoS engine really is kind of the core there,
so we haven't targeted the dedupe just yet in its full flavor.
The dedupe we do have actually reduces a lot of IOs
and actually has a pretty effective rate as it stands today.
So, Kelly, would you effectively call that a large dictionary
compression algorithm? Yeah, I was going to say it sounds like LZ compression or something like that.
It's close to that. It's not actually an LZ. I would call it, I think Howard's got a pretty
good term there. A large dictionary compression algorithm is probably a good term. Okay. Because
I've got a large dictionary of things that I will compress and I will recognize and put into the dictionary.
Right, and anything you find in the dictionary gets tokenized.
That's correct. That's a good term.
Got it.
But it's architected in a way where it actually helps performance and doesn't impact performance.
And it's integrated into our overall software stack.
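A minimal sketch of that idea as described, with a deliberately trivial "dictionary" of recognizable block patterns; the real engine is surely more elaborate than this.

```python
# Minimal sketch of pattern-recognition dedupe as described above: a recognized
# block is replaced by a tiny metadata token and regenerated by the CPU on read.
# The dictionary here is deliberately trivial -- just repeated-byte fill patterns.
BLOCK = 4096
DICTIONARY = {bytes([b]) * BLOCK: b for b in (0x00, 0xFF, 0xA5)}
PATTERNS = {token: pattern for pattern, token in DICTIONARY.items()}

def write_block(volume, offset, data, metadata, backing_store):
    if data in DICTIONARY:
        # Store only a token: "at this offset, regenerate pattern N from the CPU."
        metadata[(volume, offset)] = ("pattern", DICTIONARY[data])
    else:
        metadata[(volume, offset)] = ("stored", len(backing_store))
        backing_store.append(data)

def read_block(volume, offset, metadata, backing_store):
    kind, ref = metadata[(volume, offset)]
    if kind == "pattern":
        return PATTERNS[ref]            # CPU regenerates the data; no media I/O
    return backing_store[ref]

# A zero-filled write costs a few bytes of metadata instead of a 4 KB media I/O.
meta, store = {}, []
write_block("vol1", 0, bytes(BLOCK), meta, store)
assert read_block("vol1", 0, meta, store) == bytes(BLOCK)
```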
Right, right, right.
Okay, interesting.
It is interesting.
Yeah, marrying dedupe to QoS is also interesting.
That's another side of this problem, right?
Yeah, exactly.
You know, it's just all sorts of extra metadata.
Yep, and it actually is integrated in there quite tightly for us.
As I mentioned, the QoS is kind of all the way through the system.
We can actually, on an I/O by I/O basis, for an I/O that comes into the system, look at the system and say, our CPUs are currently a little bit too busy.
We're not going to deal with de-duping the write.
We'll actually go ahead and store the write because the CPU is busy and the flash has extra headroom.
So the QoS engine really drives the whole show on who's going to do what and when.
And, you know, if you're out scrubbing blocks looking for data matches,
you can make sure that's not impeding performance
based on a QoS policy as well.
Right, right, right.
Because you know all that stuff in real time.
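In spirit, something like this; the thresholds are made-up placeholders, not real tuning values.

```python
# Hypothetical sketch of the QoS engine gating inline dedupe per write;
# the 80% / 20% thresholds are placeholders, not real tuning values.
def handle_write(data, cpu_util, flash_headroom, dedupe_fn, store_fn):
    # CPU busy but flash has headroom: skip dedupe and land the write directly.
    if cpu_util > 0.80 and flash_headroom > 0.20:
        return store_fn(data)
    # Otherwise spend the CPU cycles to dedupe first and save the media I/O.
    return store_fn(dedupe_fn(data))
```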
Have you guys done any performance benchmarks and stuff like that
on the product?
Well, here's where I have to be completely honest, Ray.
Ah, I feel a disclosure coming up.
NexGen is a client of DeepStorage Labs, and an N5 is winging its way to the
labs for such testing right now.
And they're also
sponsoring a series of VVOL seminars
that I'm doing around the country.
Oh, great. So we won't steal the thunder
on that, but we do perform very well
and we outperform our competitors.
As well it should be, John.
I wouldn't expect anything less.
You know, we're running into
a little bit of time here.
Howard, do you have any last questions?
No, I think we got it covered.
Yeah.
I think the commodity
versus non-commodity hardware
is worthy of a whole different discussion
at some level.
What's really interesting
is how the industry
got caught up. Everybody's doing
off-the-shelf hardware,
all the new storage products. Even
EMC and NetApp have moved to
x86-based head units.
I think it's mostly
about accelerating the development cycle.
Yeah.
I think you guys are right.
There's going to be another wave of innovation
on hardware just because of
flash and NVDIMMs.
I have seen
some stealth startups
with some very interesting switched PCIe.
Yeah.
Yeah.
We might know who those guys are,
by the way.
Yes.
Among the many people we know that are doing stuff in this space.
Right.
I find it funny talking to the guys at PLX about how, what, five or six years ago, we were talking about NextIO and Xsigo and all of this.
We were going to do fancy things with PCIe that were about peripheral sharing, and it turned out not to be worthwhile. But now all that same technology, oh, we'll build a rack-scale system with a PCIe back end, becomes an interesting concept.
Right. That's actually one of the things we had laid out.
So our N5 today has PCIe fabric between the two compute domains.
Right.
So we're already kind of on a fabric, but we don't share it as much as we want to down the road
when additional hardware technology is available.
And you guys have a standard DRAM cache kind of solution as well, and it's mirrored across the two domains?
We, yes, we don't have any single points of failure.
All of our IOs are mirrored for writes.
Our read caches or read areas are not necessarily mirrored.
They don't, you know, we're in a mode where we don't need that, but all data is persisted on at least two media
before it's actually acknowledged back.
And when you say two media,
like DRAM and SSD would be two media?
If the DRAM was NVDIMMs,
I would give you that.
Oh, okay.
Persistible media.
I'm not going to be able to take...
So both of them are persistible media.
Okay, I got you.
That's correct. I've got to be able to handle a total power outage of the box.
At very worst, NVDIMM in both controllers.
Correct.
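A bare-bones sketch of that acknowledgment rule, with the two controllers stubbed out as in-memory objects purely for illustration; this is not NexGen's actual data path.

```python
# Bare-bones sketch of the "persist on two media before acknowledging" rule
# described above. The controller logs here are stand-ins, not a real data path.
from concurrent.futures import ThreadPoolExecutor

class PersistentLog:
    """Stand-in for an NVDIMM- or flash-backed write log in one controller."""
    def __init__(self, name):
        self.name = name
        self.entries = []

    def persist(self, offset, data):
        self.entries.append((offset, data))   # real hardware would be power-safe here
        return True

def write(offset, data, local: PersistentLog, partner: PersistentLog) -> str:
    """Acknowledge the host only after both controllers hold a persistent copy."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        results = pool.map(lambda log: log.persist(offset, data), (local, partner))
        if all(results):
            return "ack"                      # safe even through a total power outage
    raise IOError("write not persisted on two media; do not acknowledge")

print(write(0, b"payload", PersistentLog("ctrl-A"), PersistentLog("ctrl-B")))
```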
We do know that there are data centers that take power outages,
and we actually have had a couple here in the lab.
One of our friendly neighbors over here was doing some construction in the road,
and they popped a big transformer that took
the entire block out.
And the UPS almost always works, but my experience with the diesel generator is just not good enough for me to live with anybody who goes, oh yeah, power failure in the data center, don't worry about it.
Yeah.
The UPS approach that XtremIO and some other guys use,
we're not bought into that necessarily either.
I have a blog post I'm working on about that.
Because what you're really talking about there is doing NVRAM at the system level,
and UPS becomes part of the system,
and it better be tightly coupled
and tightly tested and all of those things.
Yeah, and if you've got the capability, when your UPS is about to run out, to flush your volatile contents and things like that, then you're okay.
Yeah, I don't want to break any NDAs there, Howard, but there's another company that's doing that that might make that more commoditized.
Oh, God.
All right, folks.
We're getting about to end.
John and Kelly, do you have any last words for the podcast audience?
No, I don't think so.
I really appreciate the time, guys.
And you guys are certainly storage experts in our space and really appreciate you doing this.
And I appreciate all your services and support as well.
Our pleasure.
All right.
Well, this has been great.
It's been a pleasure to have John and Kelly with us here on our podcast.
Next month, we'll talk to another startup storage technology person.
Any questions you might have, please let us know.
Thanks for now.
Bye, Howard.
Bye, Ray. Until next time, thanks again, John and Kelly.
Bye, guys. Thanks.
Okay, that's a wrap.