Grey Beards on Systems - PB are the new TB, GreyBeards talk with Brian Carmody, CTO Infinidat
Episode Date: November 12, 2015. In our 26th episode we talk with Brian Carmody (@initzero), CTO of Infinidat. Howard and I also talked with Infinidat at Storage Field Day 8 (SFD8) a couple of weeks ago, which recorded their session(s). For more information about Infinidat, we would highly suggest you watch the videos available here.
Transcript
Hey everybody, Ray Lucchesi here and Howard Marks here.
Welcome to the next episode of the Greybeards on Storage monthly podcast, a show where we get
greybeard storage and system bloggers to talk with system and storage vendors to discuss
upcoming products, technologies, and trends affecting the data center today.
Welcome to the 26th episode of Greybeards on Storage, which was recorded on November 6, 2015.
We have with us here today Brian Carmody, CTO of Infinidat.
Why don't you tell us a little bit about yourself and Infinidat?
Sure. Hey, Howard. Hey, Ray. Thanks for inviting me. Great to be here. So Infinidat is a group of engineers, and we develop new technologies for storing large amounts of data reliably, cheaply, and efficiently.
It's funny you say it's a group of engineers. We had no marketing people in the early days, and we even bragged to customers that there was only one MBA in the whole company that we knew of.
And that was up until around when we had about 100 people.
Of course, that's not the case now, and we've grown tremendously, and we have a lot of really smart people both on the business and the tech side.
But the DNA of the company is definitely an engineering group.
Yeah, so we were talking with you at Storage Field Day 8 here a couple weeks back.
It was a very interesting session.
One of the things I found kind of different from most of the startups we've talked to in the past
is the amount of burn-in you guys do.
It's almost like you've been in this industry for a while.
Yeah, so I'm relatively young.
I just turned 35.
It's going to be a while before we call you a gray beard.
That's correct.
You know, maybe me and my Boston Irish friends, we can create like ginger beards or something to compete with you guys.
Oh, God.
A lot of the experience and the learning about storage operations and, you know, how to build systems that deliver value and that don't break and are enterprise ready.
A lot of this stuff was developed in the late 80s and early 90s at companies like EMC and IBM. But, you know, we're kind of in a really interesting place right now because, you know, I think everybody kind of senses that there
is a change of guard going on in the industry. We're starting to see innovation isn't coming
from the old line R&D divisions of these big companies as much anymore.
And there's so many interesting startups, you know, around.
Oh.
So, yeah.
Sorry, go ahead.
I hazard to say that it's been a really long time, and certainly longer than I've been in the storage business, since innovation came from IBM Labs or Bell Labs. The big guys keep buying up innovative companies more than building them. I mean,
what was the last product that EMC or IBM or HP created organically
that you would call innovative?
VMAX, EVA 8000, what was the other?
IBM DS 8000.
But VMAX was evolved from?
1990s.
Right.
Yeah, so I think that...
Not to mention that Brian just keeps dancing around the edges of the person who was doing that innovation.
Right, right. I think IBM Research is a standout example.
I mean, they're still doing amazing basic research. That's the kind of stuff that IBM gets right. Their storage lab is a stellar example of that; the GPFS storage system and that stuff is pretty solid. What's interesting about EMC is not so much the products they've developed, but the things that they could have
been. I mean, if you look at Qumulo, Infinidat, XIV, even huge parts of Amazon Web Services,
those ideas all started with people that were part of the EMC ecosystem or could have been
either natively or through acquisitions. And I mean, think about if they had brought those
products to market. I mean, they'd be bigger than Apple right now, bigger than Google. I would say EMC has done very well
with its acquisition portfolio over the years. I mean, you've got to look at things like VNX and
VMware and gosh, Legato. I mean, obviously there have been some less than stellar examples, but
from an acquisition perspective, they have done, Isilon, they've done real well.
They're better than most at acquisitions.
But, I mean, look at – instead of that trajectory of those companies that I mentioned,
and those are technologies that could have been developed by EMC Engineering, what did we get instead?
We got Atmos, Celerra, EMC Control Center, which is a crime against humanity.
Oh, God.
So I don't know. I think that they have kind of an issue where innovators are kind of repelled from their culture.
I think when they write the story about EMC, it's not going to be about how they ended up where they did,
but about what they could have been if they had harnessed the amazing talent of their developers better than they did at the corporate level. Yeah, but the only company I've ever seen do that is Cisco via the spin-out spin-in.
I think you just can't develop an XIV that's going to threaten Symmetrix
inside the company whose heart and soul is based on the concept of Symmetrix.
I would say it's extremely hard, but not impossible.
The innovator's dilemma is very real,
but if you're trying to build a company that's going to last 100 years,
you cannot be afraid of, in fact, you have to insist
that you blow yourselves up and get out of your comfort zone
and bet the company once per generation at least.
Yeah, and today's Wall Street environment makes that really difficult.
No, no, and that's why at Infinidat our plan is to stay private for as long as humanly possible.
Because, I mean, that's how you stay in charge of your own destiny.
And, you know, one way or the other, once you either offer shares and do an IPO,
which we're eventually going to have to do,
or if you become acquired, which we're definitely not doing.
We've done that before.
We're not going to do that this time.
But either way, you have overlords then, and you're not as much in control of your destiny. And I think instead of working for customers, you're working for people who aren't customers; rather, the Street.
Well, yeah, there are organizational features that you can provide in an IPO,
like Google and stuff like that, where you control the voting stock,
and that helps keep the innovation flowing, I would say.
But, yeah, there's certainly lots of challenges when you go public.
Absolutely.
I think Mark Zuckerberg kind of rewrote the book.
Yes.
When he floated Facebook about how you do that and how you retain control and everything like that.
So absolutely.
So tell us a little bit about what's going on with Infinidat.
You guys have been out and about for how many years now?
So we were founded in 2011.
Okay.
And we started shipping product in the second half of 2013.
Oh, gosh, two years.
Not bad.
Yeah, yeah.
So since then, it's gone really fast.
And we kind of operated from the second half of 2013 until the beginning of this year in total stealth.
Well, you had a booth at VMworld 2014.
That's right.
That was our one little experiment that we did.
And we literally got the last booth that was available.
Literally, if we hadn't gotten that, there were no booths left.
It's kind of like in a restaurant where you get the table by the bathroom.
It was a little 10 by 10 in the corner in the front.
Yes, it did. Yes, it did. So we didn't have a Maserati or anything in the booth this year like the other guys.
We also didn't have a helicopter in the booth, which I pushed for very hard.
But the bond that Moscone Center needed us to put up in order to bring the chopper in was like more than our marketing
budget for the year or so.
So what are the differences between Infinidat and the other storage systems on the market today,
Brian?
Well, I mean, the first thing you need to get out of the way when you're evaluating
kind of what the field looks like for new innovators is how you use Flash. You know,
there's two strategies for using NAND Flash, which is the game changer in media right now.
And we are proponents of a hybrid architecture where you use NAND Flash as a caching layer, and you never keep the, you know, the only copy of your data
on Flash. But there's a lot of startups, in fact, you know, when I run into somebody, you know,
at a networking event, and they're like, hey, I have a startup, I'm like, let me guess, you're
a scale-out all-flash array. So I would say that it's, you know, that's one of the things that makes us not unique.
There's other companies that are doing it. Nimble is doing well in this space. But, you know,
our philosophy around storage is that it has to get cheaper. And, you know, as fantastic as
NAND Flash is for random IO performance and everything like that, the fact is that there's a 10x differential in cost for the media.
And for any storage system you look at, the cost of the physical media
is the single largest component of the bill of materials.
Well, the software.
I'm talking about from the vendor's perspective.
Product plus product.
Yes.
Of cost.
Of COGS, yes.
Part of COGS kind of thing.
Yeah.
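To put rough numbers on that media-cost argument, here is a minimal sketch. The per-gigabyte prices and the cache sizing are illustrative assumptions, not Infinidat figures; the only inputs taken from the conversation are the roughly 10x flash-versus-disk gap and the ballpark near-line disk price mentioned later in the episode.

```python
# Illustrative media-cost comparison for a petabyte-class system.
# The per-GB prices below are assumptions for this sketch, not vendor figures;
# the episode only claims a roughly 10x cost gap between NAND flash and
# near-line disk, and mentions disk in the ~$0.08/GB range.

USABLE_TB = 1000          # ~1 PB of usable capacity
NL_SAS_PER_GB = 0.08      # assumed $/GB for 7200 RPM near-line SAS
FLASH_PER_GB = 0.80       # assumed $/GB for commodity MLC flash (~10x disk)
CACHE_FRACTION = 0.02     # assumed flash cache sized at ~2% of capacity

def media_cost(tb, per_gb):
    """Raw media cost in dollars for tb terabytes at per_gb $/GB."""
    return tb * 1000 * per_gb

all_flash = media_cost(USABLE_TB, FLASH_PER_GB)
hybrid = (media_cost(USABLE_TB, NL_SAS_PER_GB)
          + media_cost(USABLE_TB * CACHE_FRACTION, FLASH_PER_GB))

print(f"all-flash media: ${all_flash:,.0f}")
print(f"hybrid media:    ${hybrid:,.0f}")
print(f"ratio:           {all_flash / hybrid:.1f}x")
```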
So when InfiniBox was initially being developed, you know, a lot of the early development was on all flash.
In fact, we have plenty of all flash InfiniBoxes in our lab.
And you know what?
They're not any faster.
All they are is way more expensive and more annoying to troubleshoot, because so much of the logic is burnt into firmware on the controllers rather than being done in a general-purpose operating system. So our model is absolutely staying one where you have very inexpensive, very dense media as the backing store.
And then you have a small amount of the expensive stuff, the truffle oil, which is the NAND flash.
Well, hopefully real truffles because truffle oil is an abomination.
That's a whole different story.
Right, right.
Okay, so you guys are following the flash and trash model using 7200 RPM disks on the back?
Yeah, that's exactly the way it works.
So we use 7200 RPM near-line SAS for the backing store,
and then we use boring commodity off-the-shelf MLC flash as the caching layer.
Okay.
And is there one ratio, or do I get to configure how I like?
Yeah, yeah.
So it's variable, and, you know,
we have a pretty sophisticated performance modeling practice.
There's a colleague of mine named Aviad Offer
who runs our corporate solution
engineering lab up in Marlborough, Massachusetts. Aviad and I need to talk. That's another subject.
Aviad's awesome. And we take a very quantitative approach to the pre-sale process. So we know
the game. We know the way that the EMC guys and other companies work, which is the solution architecture that they put together is just however much money they think you have.
Right.
And the amount of over-purchased and over-provisioned storage is just – it's crazy.
Well, I have to blame the customers for part of that. The guys who want to buy the VNX completely populated, to their best guess as to what would be sufficient for them not to run out after three more years.
Howard, I agree with you and I think
that it's admirable to say that
customer technologists have to take responsibility
for getting that right and you're right that storage people
are conservative. There's a budget issue there.
You know, trying to grab money over a course of three years versus trying to grab it all at once.
I'm not saying that it's necessarily the IT department in the customer.
I think it's like a corporate America problem that we don't let you budget to buy it when you're going to need it.
You have to buy it now and use it and lose it.
And that's just all stupid. Yeah. So I hear what you're saying, but here's the part I disagree about. When we
are typically coming in and working with a customer, we are near the end or at the end
of that asset's lifetime in the company. So the lease or the maintenance is coming up,
and now they're looking at, okay, are we going to re-up with the incumbent or are we going to look at new technology?
And even then, so those projections for growth should have been fulfilled, but we still see massive over-provisioning.
Right, but I'm a storage guy and I took my projection for growth and I multiplied by
pi so I wouldn't run out.
Pi.
And therefore, the system's only 33% utilized at the end.
That's how it got to 33%?
All these years, Howard.
What, you haven't followed the rule of pi?
Not yet, but I'm going to.
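For anyone checking Howard's arithmetic, here is a tiny sketch of the "rule of pi"; the 100 TB projection is a made-up example.

```python
import math

# Howard's "rule of pi": buy pi times your growth projection so you never
# run out. If the projection turns out to be right, you end up using only
# 1/pi of what you bought -- roughly the 33% utilization quoted above.
projection_tb = 100                      # hypothetical true need, in TB
purchased_tb = projection_tb * math.pi   # what the rule of pi says to buy
print(f"end-of-life utilization: {projection_tb / purchased_tb:.0%}")  # ~32%
```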
Yeah, so we, rather than just pulling variables out of thin air,
we have a very quantitative process.
Fundamentally, we believe spiritually, philosophically,
that as a vendor, our job isn't to just put the solution in, take the money,
and hit the road. We know more about our technology. And just as storage technologists,
you know, we're probably, we probably have some pretty good insights about what a customer needs.
So not only do we educate them, but we take the performance planning
and the scoping process very seriously.
We pull stats off of existing systems.
We model it.
We create a synthetic workload
that mirrors what the customer sees in production.
We take all the most conservative,
you know, worst case examples.
And then we, you know, we show the customer,
here's what you're getting today.
Here's a curve of latency versus IOPS over some period, let's say 90 days.
Here's what it would look like if you consolidate two VMAXes down to one,
here's what it looks like if you do four to one onto an InfiniBox. And, you know, we really try to take an education approach,
but to build it with quantitative, not to do it with hand-waving and nonsense,
but actually to do it with numbers and math.
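A minimal sketch of what that kind of quantitative sizing exercise can look like in code. This is not Infinidat's actual tooling; the queueing approximation and the service-time and saturation numbers are assumptions, and samples.csv is a hypothetical export of IOPS stats pulled off the existing array.

```python
import csv

# A sketch of the pre-sales modeling Brian describes: take IOPS samples
# collected from the incumbent array over ~90 days and estimate a
# latency-vs-IOPS curve at the worst-case point. The M/M/1-style
# approximation and the constants are assumptions for illustration only.

SERVICE_MS = 0.3        # assumed per-IO service time at the cache
MAX_IOPS = 400_000      # assumed saturation point of the array being modeled

def modeled_latency_ms(iops):
    """Latency rises sharply as offered IOPS approach saturation."""
    utilization = min(iops / MAX_IOPS, 0.99)
    return SERVICE_MS / (1.0 - utilization)

def worst_case_curve(samples_path):
    """Read 90 days of IOPS samples and report the conservative envelope."""
    with open(samples_path) as f:
        iops_samples = [int(row["iops"]) for row in csv.DictReader(f)]
    peak = max(iops_samples)
    return peak, modeled_latency_ms(peak)

if __name__ == "__main__":
    # samples.csv is a hypothetical stats export with an "iops" column
    peak_iops, latency = worst_case_curve("samples.csv")
    print(f"worst case: {peak_iops} IOPS at ~{latency:.2f} ms modeled latency")
```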
Holy cow.
And at the end of the day, it helps just reduce risk.
Yeah.
The only problem is you're dealing with data you can get off of a five-year-old disk array,
so those analytics aren't that detailed.
But it's way better than going, how much money do you have?
Design a system to match that.
So you mentioned MLC.
Are you guys using PCIe Flash or SSDs?
Yeah, so right now we're using just regular two-and-a-half-inch drives
that go right into the front-end storage bays on our controller nodes.
And looking forward on the R&D side, we're watching very closely what's going on with PCIe flash
and with NVMe over Fabrics.
And there's a lot of really interesting stuff that's coming down the pike.
But right now, the best fit and the most bang for the buck for us in our
architecture is just those regular two and a half inch drives, load up those nodes with flash, and
that's the big fat caching tier in front of the drives. And then the piece of it that is
write cache you replicate between the controller nodes? Yeah, exactly. So we don't use NAND flash for write caching at all. So the write
data path for us is DRAM mirrored onto two physical nodes. Any incoming write, before we
acknowledge it back to the host, we get it in DRAM in two places. Okay. Is that NVRAM or do you
have a UPS arrangement on the controllers? Yeah, so it's regular unprotected, well, it's ECC-protected,
but it's not power protected. It's not power protected DRAM. And then what we do is we have
a cluster of three battery backup units, or UPS modules. And then
above that, three automatic transfer switches, which protect against single power bus failure.
So those two layers together, their job is to keep the servers energized at all times.
And then if you go all the way and you have an emergency power off in the data center, their job is
to protect the write cache long enough that we can vault it and have a log that we can play back when we reboot the system.
And when you vault it, you would be vaulting it to disk or those SSDs?
Yeah, so it goes locally onto the nodes.
And off the top of my head, I can't remember.
We used to use disk drives.
Yeah.
And then I thought we were using Flash, but somebody told me it was still disk drive.
It's an implementation detail.
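A rough sketch of the write-acknowledgement flow as Brian describes it: the host gets an ACK only after the write sits in DRAM on two physical nodes, and on an emergency power-off the battery-backed window is used to vault the outstanding log to local media for replay at reboot. The class and method names here are hypothetical, not Infinidat internals.

```python
# Sketch of the described write path: acknowledge after two DRAM copies,
# vault the in-memory log to local media if the power is going away.

class Node:
    def __init__(self, name):
        self.name = name
        self.dram_log = []          # unprotected (ECC-only) DRAM write log

    def buffer_write(self, entry):
        self.dram_log.append(entry)
        return True

    def vault(self, path):
        # Flush the in-memory log to local persistent media (disk or flash).
        with open(path, "w") as f:
            f.writelines(f"{e}\n" for e in self.dram_log)

class WritePath:
    def __init__(self, primary, mirror):
        self.primary, self.mirror = primary, mirror

    def write(self, entry):
        # Acknowledge to the host only once both DRAM copies exist.
        ok = self.primary.buffer_write(entry) and self.mirror.buffer_write(entry)
        return "ACK" if ok else "RETRY"

    def emergency_power_off(self):
        # Triggered while the UPS cluster keeps the nodes energized.
        self.primary.vault("/tmp/vault-primary.log")
        self.mirror.vault("/tmp/vault-mirror.log")

path = WritePath(Node("node-1"), Node("node-2"))
print(path.write("LBA 42 <- 0xdeadbeef"))   # -> ACK
path.emergency_power_off()
```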
Okay, Brian, there was one other minor thing that you mentioned, that you have three UPSs.
Are you guys triply redundant internally?
Yeah, yeah, so that's kind of another thing.
That's very unusual.
That's what I think is your secret sauce.
Yeah, it's not sexy.
It's not really intellectual property.
Oh, yes, it's sexy.
Only for those availability geeks, right?
Yeah, you know, only for the people who listen to Greybeards on storage.
Which are all availability geeks.
Right.
You know, because we have, for as long as I've been in the business, had, you know, the dual controller model, which gave you a certain level of resiliency.
But frequently, when you
were down to one controller, it was a limping. It wasn't running as well as when both of them were
up. Yeah, it depends on the active asset. Yeah. Yeah. And it depends on a lot of things. But
still, it meant that once you suffered a failure, you were very exposed. Or we had the high-end 3PARs and VMAXes that built that resiliency into the system
out of platinum and fairy dust and other things that cost a huge amount of money.
I'm not sure if I agree that the VMAXes are super high availability.
But you guys have managed to fit in between and say,
we're going to use x86 servers as controllers,
but we're going to be able to survive two controller failures.
Yeah, and it's just a numbers game.
So what we're trying to do is we're trying to allow customers
to build failure domains, pools of storage
that are measured in petabytes, not in terabytes.
And it dramatically reduces complexity.
It makes provisioning easier.
It makes capacity planning easier.
But if you just do the math, look at the mean time to failure of different components,
you know, it's pretty clear that in order to create fault domains that are petabyte scale,
you can't do it with dual controllers. You can't do it with
N plus one. Let me rephrase. You can do that, but you're going to get to an availability level of a
product like a DDN system or something like that, which is great for certain use cases,
but you can't do mission critical computing on it. And that's why 3 is kind of a magic number. When we first came to market, we chose
to use 3
physical nodes as
the magic number, because it's the smallest
number that's greater than 2.
That's relatively
obvious, but yeah.
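A back-of-the-envelope version of that "just do the math" argument, using assumed per-node failure rates and repair windows rather than real MTTF data: a three-node system that survives two failures is orders of magnitude less likely to lose access than a dual-controller pair.

```python
# Illustrative availability math. The annual failure rate and repair window
# below are assumptions for the sketch, not measured component MTTF figures.

NODE_AFR = 0.05            # assumed 5% annual failure rate per node
REPAIR_DAYS = 3            # assumed window to replace a failed node
p_fail_in_window = NODE_AFR * (REPAIR_DAYS / 365)

# Dual controller: outage if the surviving node fails during the repair window.
p_outage_dual = 2 * NODE_AFR * p_fail_in_window

# Three nodes tolerating two failures: outage only if, after a first failure,
# both survivors fail before the repair completes.
p_outage_triple = 3 * NODE_AFR * (p_fail_in_window ** 2)

print(f"dual-controller outage probability per year: {p_outage_dual:.2e}")
print(f"three-node outage probability per year:      {p_outage_triple:.2e}")
```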
Yeah, but we'll be
scaling out and offering much
larger, much more compute-heavy
InfiniBox configurations in the future.
So we do have a scale-out
future ahead of us here. That's interesting.
We do.
We do. So there's debate about how to do it.
There's kind of two
camps in engineering.
Number one, the easy way to do it
is with a pair of
InfiniBand switches, and I could
give you a system
tomorrow that does that, but you
can't do it in production because we
can't guarantee it because we haven't
done all of our testing on it. But I could give you a
completely functional,
super-scalar
InfiniBox with IB switches tomorrow.
One of the R&D projects
I'm interested in is
we want to optimize for cost.
And InfiniBand switches, even though InfiniBand is a very cost-effective interconnect right now, the switches aren't cheap.
And so it is possible with our architecture to do a mesh routing protocol where you basically create a 4D torus, kind of borrowing from the HPC world.
Ah, a 4D torus, okay.
Layout.
Yeah, I'm not talking about a Ford Taurus.
Yeah, no, no, no, a 4D torus.
I always have difficulty with five dimensions, but okay.
Yeah.
Luckily, this is only four.
Yeah, but that's future stuff.
That's nothing that we're selling today, but it's certainly coming.
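For readers unfamiliar with the topology, here is a small illustration of what a 4D torus layout means: each node gets four coordinates, links wrap around in every dimension, and each node talks directly to a fixed set of neighbors with no central switch. The grid size is an arbitrary assumption, and this reflects the R&D direction Brian describes, not a shipping feature.

```python
from itertools import product

# A 4D torus: nodes addressed by four coordinates, with wrap-around links in
# every dimension, so each node has eight direct neighbors and no central
# InfiniBand switch is required. The 3x3x3x3 grid is an arbitrary assumption.

DIMS = (3, 3, 3, 3)   # assumed grid -> 81 nodes, purely for illustration

def neighbors(coord):
    """Return the 8 wrap-around neighbors of a node in the 4D torus."""
    result = []
    for axis, size in enumerate(DIMS):
        for step in (-1, 1):
            nxt = list(coord)
            nxt[axis] = (nxt[axis] + step) % size
            result.append(tuple(nxt))
    return result

nodes = list(product(*(range(s) for s in DIMS)))
print(f"{len(nodes)} nodes, each with {len(neighbors(nodes[0]))} neighbors")
print("neighbors of (0,0,0,0):", neighbors((0, 0, 0, 0)))
```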
I want to get back to a couple of things about the use of Flash.
First of all, can I pin an application to Flash?
In our architecture, we don't give the capability to pin an application to Flash,
nor would you want to.
Well, I'm willing to argue about whether the technical merits
are sufficient, but I'm going to say categorically that I want to because
senior management won't buy the system if it won't. Yeah, so again, the way that we handle
things like this is, okay, go set up a meeting with me and the person who's telling you that.
My CEO won't meet with you.
It's a political problem.
Then I guess we're in trouble.
So here's why we don't do that, and here's why we don't offer the functionality and why we recommend against it. If you, so we could very trivially add a command that would basically bring everything in a
particular volume up into the NAND flash cache.
And you could even do that today on our system by just using dd to read a
volume or using find to walk a file system.
You could bring everything up into cache.
But here's the thing. You've now filled up X number of terabytes of cache by definition by pinning it.
And now that's cache that we can't use for something else.
I'll just buy more.
So, Howard, maybe you need to do business development for us, because most customers are, you know, not so quick on the trigger to buy new systems. But you understand what my point is technically?
No, no, I understand completely. Technically, you know, it's similar to the question about whether you would offer an all-flash array versus just a hybrid array.
Oh yeah, no, it's exactly the same question. It's, you know, there are people who want to buy the Homer mobile with rack and peanut steering,
and if you don't sell it, they'll go buy something else.
Yeah, so, you know, that's kind of like, if you're in a case like that, you know,
you're trying to sell an iPhone to somebody who's a dogmatic Android person or vice versa.
Maybe the more valid analogy is the other way around:
You're trying to talk about the benefits of Android to someone who's an Apple fanboy.
Yeah, and they say, Steve says we don't need that, so I'm not going to have that.
In reality, most customer technologists are pragmatic.
They're trying to do and they're trying to make the right calls about technology.
However, in the industry
right now, there's probably
on the order of a billion dollars
that's being spent on marketing
pumping
this message about the all-flash
data center into
the echo chamber. It's not
just the marketing of the storage vendors.
The analysts, God forbid.
I would call my own team a problem,
but that's part of where the money is going, gentlemen.
It's an echo chamber between us and our...
Yeah, and so what I've heard so often
from customers early on in engagement,
but what about the all-flash data center?
Aren't we moving to an all-flash data center?
And I say, you know, I want an all-flash data center the same way that I want a pony.
Like, it would be awesome, but it's completely unrealistic, especially because I live in New York City.
So if you look at the way flash is really being used in the world today, it's being used as a caching device.
What is the biggest market for NAND Flash chips? It is hard drives in laptops. It is mobile devices,
you know, my iPhone. And in the cloud model, end user devices are caching devices. They don't keep
the canonical copies of my music and my files. They're caches of the canonical copy, which is stored in the cloud, almost certainly with
these hyperscalers on magnetic disk. That is a really interesting perspective.
Yeah. End-user devices have become caches, and NAND Flash is extremely good at cache, and that's why
every end-user device
in my home is all Flash.
Uh-huh.
I like that.
I like that thought. I've got to figure out...
I have a Fusion drive on my Mac,
though. It's a mixture of cache
and Flash and disk
and stuff like that. A little mini
Infinibox.
Yeah, so to speak, so to speak, although the fact that it's not triply redundant is my problem.
But that's another question.
So, I mean, the engineering team seems like they have lots of experience in storage.
I'm not just talking about Moshe, but, you know,
a lot of the group seems to have come from prior storage activities.
Ray, I was trying to make him say Moshe first.
Oh, God, yes, I know.
I said it first.
Sorry.
You should have hinted to me beforehand.
I'm sorry.
I should have sent you a message.
Yeah, but it's not just Moshe. It seems like three-fourths of the team came from other companies that are fairly large.
Yeah, no, it is.
So we have, I would say, the biggest group of alumni come from EMC, from XIV, from NetApp, from Google, especially on the file system side.
So let's kind of start at the beginning.
The core team, and I would say the majority of the senior managers
in the engineering organization,
are the original group that developed Symmetrix in the late 80s and early 90s.
So, for example, Chaim Kopalowicz, he does all of our hardware designs,
and he does a lot of work, or his team does a lot of work with the certification for media and stuff like that.
So Chaim was the original hardware designer for Symmetrix.
So he's now, I'm not going to guess how old he is, but he's a senior citizen.
Qualifies as a graybeard, then.
He's a total graybeard.
And he has a picture on his desk of him and Moshe
leaning on Symmetrix serial number 1 coming off the factory floor.
Oh, God.
And they have much more hair than they do now.
Yeah.
But they're much younger men.
So we have guys like Chaim,
but then we have guys like Guy Rosendorn and Dennis Wilson, who are – they're rock star developers.
They're in their 20s, and they understand the new technologies and the new programming paradigms and the technology that didn't even exist 10 years ago.
And the beautiful thing that Moshe has been able to bring together is this ability to
bring together three generations of software engineers, of the older guys who don't move
as fast, but they know the shortcuts because they've done this before and they know what
works and what doesn't.
And they've made a lot of mistakes.
Absolutely.
Absolutely.
And that's the amazing thing.
That's probably the most satisfying thing about when Infinidat was started.
It was literally a clean sheet design.
Moshe brought the team together and he said, OK, here's 80 million dollars.
So assume infinite capital. Start and there's no deadline. It's not like we're a VC backed
company and we're going to run out of money and, you know, in six months. So you have to
slap something together so we can get revenue. Imagine that you have as much time as you need.
You have infinite capital. You can go hire whoever you want. Go find the, you know, your smartest friends and start with a clean sheet of paper and try to design the
storage system to end all storage systems and take every lesson and everything you wish you
did differently, you know, for 30 years in the XIV project, in the Symmetrix project, and others.
And, you know, fold all of that wisdom and 20/20 hindsight,
stand on the shoulders of the giants before you and try to build something great.
And that's exactly what they did.
I like that idea.
I've talked to too many startups lately who everybody came from Google,
and now they're building an enterprise storage system.
Do you guys talk to enterprise users?
Have you had those problems?
Because hyperscale is different.
Yeah, listen, you bring up, Howard, an excellent point.
I think a lot of the impedance mismatch, and why a lot of the kind of Silicon Valley companies are struggling to excel in the enterprise space the way that they excel in the consumer space, lies beyond the technology itself.
The customer experience of what an enterprise customer expects is so radically different from the consumer experience.
It's high touch.
I wouldn't even know how to call Google.
If I had a problem with Gmail, I wouldn't even know how to get in touch with them.
And if I did, it's probably an artificial intelligence that's responding to me.
It's probably not even a person.
They get in touch with you.
Yes.
You don't get in touch with them.
Silicon Valley prides itself.
They put it in the pitch decks to VCs about how they remove humans from customer interactions. They automate it. They mechanize it.
But CIOs want somebody to buy them lunch.
It's not the lunch; it's having somebody there when things go wrong. And they want an organization that's invested in their success, that understands their business deeply and is kind of a partner for
getting stuff done and helping the CIO achieve his or her vision where they want to be at the
end of this year, where they need to be at the end of next year. And, you know, how can I help
you do that? And you can't do that without having feet on the ground. And that's kind of the other part of the, you know, we love talking about engineering as our differentiator.
But at the end of the day, the customer care team, you know, the group of employees that are not working on product but are out working in the field with customers,
they're the folks that have the biggest impact on what the customer experience is, whether it's positive or negative or neutral.
And, you know, that's something that we pay a lot of attention to to make sure we get it right.
I would say you're going against the grain here, Brian.
It seems like the world is moving away from the high-touch model.
And, yeah, you talk about Silicon Valley, but it's not just Silicon Valley.
I mean, it's everywhere.
I mean, look at the software-defined stuff.
They just want you to download and fire it up and pay them money.
It's not like they're going to support you.
I mean, they'd probably provide some support, but it's all automated and distant and stuff like that.
Yeah, they do.
And what we see, Ray, is that the pendulum kind of swings back and forth.
Take any big, let's say take like a financial services company, a big bank or something.
Every couple of years, there'll be a software-defined thing going on in the organization.
And it might be, hey, we're going to build from scratch, we're going to use open source tools,
or it might be, hey, we're going to go buy a software license for a product.
And then, you know, we'll say, okay, we'll keep in touch,
and then we'll go out for a bite to eat or go for a drink six months later, and we'll say, hey, how's that going?
And you know what they discover?
Storage DevOps is really freaking hard.
And delivering even four nines of availability with something that's built on ZFS and Glue,
I mean, it's...
Are you talking about your competition here?
I'm sorry, go ahead.
Oh, no, no, no, no.
I think ZFS is a good step down from what Infinidat's doing.
Yeah.
I think that Oracle hasn't been as kind to it as they could have been.
Oh, yeah.
And if Larry's listening, you need to be more embracing of open source,
and it's going to help you build bigger yachts.
Maybe it's Mark that you need to talk to.
That's another question.
I forget where we were going.
High touch, low touch, cycling back and forth.
I think that there's a new market of new customers for the low-touch DIY stuff.
But if you're used to spending a million dollars for a storage array
and having a resident engineer from the vendor,
you're not going to give that up all at once.
I mean, think about the challenge that most of the incumbent vendors have:
they're all kind of high-touch organizations,
and they're all struggling to grow their market and grow revenue.
And this new customer base is not buying from them.
So, Howard, I really dig what you're saying about the whole software-defined thing.
And it goes kind of hand-in-hand with your ideas about the low-touch
model. And the problem that the big incumbents have is their business models are predicated
on storage being very expensive, on it being a luxury product. That's kind of what funded
their high-touch model for 30 years.
We're in an era now, especially with the rise of social data and the emerging Internet of Things and the wave of data that's coming in, where storage has to get cheaper.
And unfortunately, it's blowing up the business models of these incumbents. So part of the business philosophy that we're looking at is how do you create that customer experience that's high touch, which requires investment,
that somebody has to pay for these smart people to be working with customers, but to have the
product, the street price of the product be low enough, you know, that these visions for big data can be achieved.
And what it all comes down to is you eliminate the need for some of the touch.
You get rid of the user interfaces that require an engineer to come in
and spend three days on an install and put everything through VVols and Cinder.
Yeah, that's part of it.
That's a huge part of it.
And also, in any customer-facing engineering company, your developers typically spend a lot of time responding
and putting out fires and emergencies.
And not only is that expensive, and not only does it require you to have the people to do that,
but support is a distraction away from shipping new features and things that kind of move the technology forward.
Well, it's a lot easier to say that you make things that don't break than it is to actually do it.
Can you tell me about it?
Yeah, no, we do.
We do.
It really is something that differentiates us.
You know, we made ridiculously large investments in things like what you were talking about at the beginning of the podcast, the fact that we have a petabyte,
excuse me, 100 petabytes of storage in our test lab.
We burn every system in for three weeks before we put it on a truck and ship it to customers.
We're doing this because those investments pay dividends once you get the system deployed.
Not only does it delight customers when you have stuff that doesn't break,
you don't have to fix it.
So that's a huge part of the model.
And then, again, keeping the cost of goods sold,
keeping the bill of materials as low as possible. I don't know what the COGS is for a VMAX or an XtremIO or any of these systems, but we're pretty sure that we're selling our stuff at a street price that's lower than what it costs our competitors to buy the parts to assemble their systems.
You would think that would be hard to do, given the volumes that they have versus what you guys do and stuff.
Well, if the competition is trying to give you a terabyte of storage and they're trying to put it on NAND flash.
Yeah, I agree. And we're paying $0.08 a gig or whatever for Seagate drives, and we're putting a tiny amount of flash in there.
Yeah, deduplication only helps so much.
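A quick arithmetic sketch of why data reduction only closes part of that gap; the flash price and dedup ratio are assumptions, and the eight-cents-a-gig disk figure comes from the remark above.

```python
# Quick arithmetic behind "dedup only helps so much": apply an assumed
# data-reduction ratio to the flash price and compare the effective $/GB
# against cheap near-line disk plus a thin flash cache. All figures are
# illustrative assumptions, not quotes from any vendor.

FLASH_PER_GB = 0.80     # assumed raw $/GB for enterprise flash
DISK_PER_GB = 0.08      # the "eight cents a gig" near-line disk figure above
DEDUP_RATIO = 4.0       # assumed (generous) data-reduction ratio on flash
CACHE_FRACTION = 0.02   # assumed flash cache as a fraction of usable capacity

effective_flash = FLASH_PER_GB / DEDUP_RATIO
effective_hybrid = DISK_PER_GB + CACHE_FRACTION * FLASH_PER_GB

print(f"all-flash after dedup: ${effective_flash:.3f}/GB usable")
print(f"hybrid:                ${effective_hybrid:.3f}/GB usable")
```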
But, I mean, your main competition seems like you're more enterprise-focused kinds of guys,
you know, hitting up against EMC, IBM, NetApp, HDS, the big guys, HP.
I mean, it's like it's a different world there.
Yeah, it is.
But even at that level, Hitachi is kind of an outlier there.
But EMC for sure, I mean, they bet the farm on transitioning their most profitable customer base away from VMAX and high-end VNX and getting them onto XtremIO.
And all the attention at the corporate level has been about managing that crossfade as
smoothly as possible.
But the problem is, you know, it's a regression in terms of reliability.
It's still really expensive.
It's a premium product.
The cost per gig is high, which is what they want.
So, you know, that's kind of where we come in and we say, okay, we can give you something that's 100 times more reliable. It's the same or better performance because all your
IOPS are going to DRAM and DRAM is faster than Flash. But we're going to give it to you at a
price that, you know, is all the way at the other end of the spectrum. And, you know, I think that's
kind of fundamentally why we're having the success we see with our ramp-up right now.
So my impression is that InfiniBox is primarily competitive in that high end.
How small does it get and remain economically reasonable?
I mean, can you get down into the 20 or 50 terabyte world and compete with Nimble?
No.
So if we were going to do that,
we would probably give a configuration that was all Flash
because the economies of scale with near-line SAS,
it doesn't really happen in the tens of terabytes.
That's where an all-Flash InfiniBox configuration would be a sweet spot.
Yeah, some of your backing store performance is really based on very wide striping anyway.
Yeah, it's about aggregate throughput to the drive. So right now, 250 terabytes is our starting point.
Yeah, that's really what I was looking for, is where do you guys think that you're a good solution?
Yeah, but one thing I'll put out there, Howard, is some things never change.
So when the Symmetrix platform first came to market in the early 90s, people told Moshe that he was crazy.
They said that you are a crazy man.
Who on earth would ever have a terabyte-sized relational database?
That's crazy talk.
Maybe the NSA does.
Maybe Lawrence Livermore Laboratory.
But businesses?
Who would ever have a terabyte-scale database?
This is the "640K of memory is all anybody will ever need" argument.
We just all have trouble conceptualizing that next step.
Petabytes are the new terabytes.
Well, let's close on that, Brian.
It's about time for us to end the call.
Howard, do you have any other final questions for Brian?
No, but I think petabytes are the new terabytes is the episode title.
Okay, I'll try to use that.
Brian, is there anything you'd like to say to the audience before we call off here?
I just want to say thank you to you guys for inviting me.
This is a really interesting conversation, and I hope we get to do more of these in the future.
I would love to.
As do we.
It's always the good conversations when we're going, yeah, we're running out of time.
Yeah.
We got to stop or we're going to kill ourselves.
All right, gents, this has been great. It's been a pleasure to have you, Brian, with us on our podcast.
And next month, we'll talk to another startup storage technology person.
Any questions you want to ask, please let us know.
That's it for now.
Bye, Howard.
Bye, Ray.
Until next time. Thanks again, Brian.
Take it easy, guys.