Grey Beards on Systems - 52: GreyBeards talk software defined storage with Kiran Sreenivasamurthy, VP Product Management, Maxta
Episode Date: September 28, 2017. This month we talk with an old friend from Storage Field Day 7 (videos), Kiran Sreenivasamurthy, VP of Product Management for Maxta. Maxta has a software defined storage solution which currently works on VMware vSphere, Red Hat Virtualization and KVM to supply shared, scale out storage and HCI solutions for enterprises across the world. Maxta is similar …
Transcript
Hey everybody, Ray Lucchesi here with Howard Marks here.
Welcome to the next episode of the Greybeards on Storage monthly podcast, a show where we get
Greybeards storage and system bloggers to talk with storage and system vendors to discuss
upcoming products, technologies, and trends affecting data centers today.
This is our 52nd episode of Greybeards on Storage, which was recorded on September 22,
2017. We have with us here today Kiran Sreenivasamurthy, VP of Product Management of
Maxta. So Kiran, why don't you tell us a little bit about yourself and your company?
Sure. Thank you, Ray. My name is Kiran Sreenivasamurthy. I work for Maxta. As Ray said, I manage product management for Maxta. I've been with Maxta for quite some time now, close to five years,
seeing it right from the beta version of the product all the way up to where we are today.
And before joining Maxta, I used to work for 3PAR,
which was later acquired by HP.
And to give you a brief overview of Maxta,
Maxta is a software-based hyperconvergence company.
We deliver the hyperconvergence purely in software
on any industry standard servers.
Customers can use the servers of their choice
and we deliver the
software to deliver hyperconvergence for them. We also give a choice in the hypervisors, both KVM
as well as VMware vSphere. So customers do get this layer that they can fit in the data center,
where they have the ability to change both the server vendor of their choice as well as their hypervisor.
So Maxta is headquartered in the Bay Area in Santa Clara.
Okay, so you mentioned KVM.
Are you doing your own distribution of KVM like some of the other HCI vendors,
or is this you can install KVM and then run Maxta as the storage layer?
We are not developing our own version of the KVM hypervisor.
We currently support Red Hat Enterprise Virtualization,
along with the Red Hat Enterprise Virtualization Manager.
So customers can install RHVM and then install Maxta on top of it. So this delivers the complete ecosystem for
them that they are already used to from a Red Hat point of view. That's a big benefit that we
see with our approach. And does that include a Cinder driver so I can run Red Hat OpenStack as
well? In the first version, we are not including the Cinder driver, but that's something that we
are working on and would be releasing shortly. So what differentiates your software-defined
storage solution from some of the other ones that are out there today? I mean, obviously,
the KVM support is a significant differentiator. But beyond that...
Yeah, sure. So, I mean, to answer your question, let's step back a little bit and at least differentiate between software-defined data centers and hyperconvergence. I think today
in the industry, we see a lot of confusion with software-defined data centers and hyperconvergence.
Everybody intermixes the terms and everybody says they're hyperconverged.
Well, I can lay blame for some of that at the vendors' feet.
Sure.
Where, you know, vendors who make storage layers and call it hyperconverged then several months later go, look, it's a cloud.
Yeah, I understand. So at least that's the reason
why we are trying to at least define it so that people know how to differentiate them and so on.
So in our mind, software defined data centers essentially means that you're delivering a pool
of storage, and you as an end customer are responsible for managing the storage independently,
whether you create a LUN and present it,
or whether you create a file system and explicitly present it to the hypervisor and so on.
So at the end of the day, the storage is still something that you manage independently.
And of course, people do provide some plugins into vCenter and they claim that everything
is VM centric and so on and so forth but at the end we know that
hey it's sort of a layer on top that gives you some level of abstraction but at the end from a
storage point of view it's still a LUN or it's still a file system that you manage. With hyper
convergence, the way we differentiate is that the object of management is the virtualized entity, in this case a
virtual machine, end-to-end. Whether you view it from vCenter or the vendor's
user interface, or you directly log into the storage platform and look at it, you
as an administrator would essentially be looking at a VM. This provides
the end-to-end, from data management all the way to
virtualization, a unified view of a virtual machine. It makes it much simpler to even look at
statistics or debug information, because everywhere you're looking at the same virtual machine.
You don't have to map, oh, here is a virtual machine that was residing on the LUN, and
how do I know which virtual machine is now having a hotspot on the LUN,
and so on and so forth.
So that's essentially how we differentiate between a software-defined approach
and a hyperconvergence approach.
And people always mix the two, and they say, oh, we are hyperconvergence,
but that's essentially our pitch to say, hey, here is how we differentiate between the two.
Yeah, well, I mean, I would certainly buy into I want to manage everything at the VM level, not at some artificial volume, whatever the storage guy decides to call it, boundary.
But I don't know if I buy that that's part of the definition of hyperconverged.
Yeah.
Yeah.
And beyond the definitional point, it's certainly a feature I want from a hyperconverged system,
but it's a feature I want via vVols from an external storage system too.
And that would be the next question. Do you guys support vVols in a vSphere environment, or is this...?
So there are, I mean, essentially, if you look at vVols, right, it was designed primarily to do what
Howard Marks said, to provide that VM-level functionality, and mainly to align
the block storage vendors with, say, a file storage vendor, to say, hey, you can manage a VM. But from Maxta's point
of view, we do not see it as a huge value, because we already deliver all the capabilities that
vVols were envisioned to deliver with our solution. You can manage a VM, you can get the statistics at a VM level. Everything you do
is at a VM level. And that's essentially why we do not see a whole lot of value in vVols. I mean,
yeah, it's a checkbox entity. But from Maxta's point of view, we do not really
envision that customers will get a whole lot of value with it.
Yeah, the primary value I would see is at backup time.
From a policy procedure perspective kind of thing?
No, if you tie into vSphere via NFS
and then do all of your per VM management
because they're files.
When you create a snapshot, vSphere creates a VMware snapshot as the backup source with the
vStorage API for data protection. And if you have vVols, then it's a storage system snapshot. And storage system snapshots are generally substantially better than...
Okay.
From an effectiveness perspective and efficiency perspective.
Well, there's a performance penalty for a VMware snapshot.
There's the unwind overhead.
And so with vVols, you should be able to avoid that. But that's really the only advantage I see compared to a system like Maxta that's taking advantage of all this via NFS.
So, I mean, the fact that you guys have, I'll call it cross-hypervisor support here, and I'm not sure if that's the right terminology: could a Maxta cluster span both KVM and vSphere clusters?
That's a good question. So to answer that question, the way to look at it is, from a file system point of view, or the basic underlying file system, it's the same whether it's KVM or VMware. I mean, yes, you can,
but there's always the higher-level entities that play in, right?
Even if we support a VM across two hypervisors, how do you cross over the domain? I mean, HA won't work. The VM's file system, or the VM's file layout, is different.
So, I mean, it's also an interesting topic of discussion.
I don't know if customers would really deploy.
I'm not saying that you'd want a VM, mind you, that would run on vSphere and KVM.
But from a storage perspective, I mean, you could take an external storage device that, let's say, supports NFS, and that storage device could talk to a KVM cluster or vSphere
cluster or OpenStack or whatever, right, with the proper software. I mean, if I look at Maxta
from a purely software-defined storage perspective, then technically it should be able to support both of those environments,
even if they might be different file systems on the Maxta system.
Does that make any sense?
I mean, the way how Maxta really works is
the file system is sort of the same for both KVM as well as VMware.
But the higher-level cluster is not the same, right?
So since we manage a VM,
unlike software defined or a storage array,
the cluster entity becomes really important.
So you need a cluster, right?
So now the cluster would be different for KVM
and it would be different for VMware vSphere.
I mean, those two clusters are different,
but the underlying file layout
and everything else is the same for both.
But so these are two separate clusters essentially.
And once you do two separate clusters,
yes, Maxta makes it really simple
to move between the two.
You can take a VM and
then move it over to KVM or the Red Hat environment and move it back. But do you do the V2V conversion
on that? Yes. Because, you know, the real problem there is, you know, well, I'm running with the
VMware para-virtualized Ethernet card and the VMware para-virtualized SCSI controller,
but I've got to change over to QEMU on the KVM side.
Correct. Yes.
I mean, interestingly, Red Hat also has tools
to do those kind of conversion,
and we leverage those tools,
and we provide the ability to move the VMs
from VMware over to Red Hat.
Okay.
That's powered off, right?
That's powered off.
Yes.
Okay.
Because live migration between hypervisors would be really impressive.
I'm not sure why I would use it, but it would be really impressive.
And when you say migrating the virtual machine, you're actually migrating the virtual machine's
data.
Yes. VM and its data. I mean, it's powered off, so
it's not live. You have to copy the VMDK and convert it to a VHD.
Well, whatever KVM's version. Is it VHD?
It's actually either a raw device or a QEMU format, is what they say. Then you have to play with the drivers inside the VM, and then you can reboot it.
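The Red Hat conversion tooling referenced here is most likely virt-v2v. As a rough sketch only, not Maxta's own workflow, a powered-off VMware-to-KVM conversion could be driven something like this; the hostnames, VM name, and output path are placeholders, and the exact flags vary by virt-v2v version:

```python
# Hedged illustration only: virt-v2v converts the disks to raw/qcow2 and swaps
# the VMware paravirtual devices for virtio drivers inside the guest.
import subprocess

def convert_vm_to_kvm(vcenter_host: str, datacenter: str, esxi_host: str,
                      vm_name: str, output_dir: str) -> None:
    """Convert a powered-off VMware VM into a KVM-friendly format."""
    source = f"vpx://administrator@{vcenter_host}/{datacenter}/{esxi_host}?no_verify=1"
    subprocess.run(
        [
            "virt-v2v",
            "-ic", source,      # input: connect to vCenter via a vpx:// URI
            vm_name,            # the guest to convert (must be powered off)
            "-o", "local",      # output: write converted disks locally
            "-os", output_dir,  # output storage directory
            "-of", "qcow2",     # output disk format (raw is the other option)
        ],
        check=True,
    )

# Example with placeholder names:
# convert_vm_to_kvm("vcenter.example.com", "DC1", "esxi01.example.com",
#                   "exchange-01", "/var/tmp/converted")
```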
So Ray, let me just, I think with our discussion, we lost the earlier question that you
asked. I don't know if this is the right time or we can go back at a later time.
It's always the right time, Kiran.
Yeah. You did ask, how does Maxta differentiate from the other vendors in the space, right? So to just answer that question,
again, let's focus on hyper-converged vendors. And in my mind, I mean, we all know the top players,
Nutanix, VMware vSAN, Cisco's HyperFlex, and HPE SimpliVity, and I'll include Maxta. So those are
the key guys in the hyper-converged space. And then there's a couple of other guys chasing the low end, you know, StorMagic and StarWind and those guys.
Yeah, but I mean, again, they don't really...
Clearly a different class of product, you know, that's, you know, your cluster could be four nodes or less, and they're really attractive. So if you look at these five players, right,
and again, there we can divide them into two parts.
One is vendors who sell an appliance
and vendors who deliver software.
So in the camp of vendors who deliver hardware or an appliance,
it's Nutanix, HyperFlex, and SimpliVity.
And those who have the ability to deliver
software are Maxta and VMware vSAN. Of course, vSAN has VMware's, I mean EMC's, own appliance too, but
I mean they have the ability to deliver software only. So there are two camps, one which
delivers the hardware appliance, and one which can deliver software, and the primary
delivery methodology is software. So if you look at the hardware vendors, Nutanix,
SimpliVity, as well as Cisco HyperFlex: first and foremost, I mean, you have to
buy the appliance from the vendor, and you are tied in both from the hardware point of view and to that particular
vendor. I mean, you might be using, say, Cisco or HP or Dell or all your other server platforms.
And from HCI, you're now leveraging a completely different platform. So you are tied to that
particular platform. It becomes quite interesting when it comes to refreshing this hardware too. With Maxta's approach, we allow
customers to transfer the license. Huge benefit in terms of reducing
your total cost, because you can transfer the licenses at the time of refresh. You buy a newer platform and you transfer the license.
But in an appliance model, you will have to buy the appliance and you'll have to pay for
your hardware as well as you have to pay for your software.
So how is Maxta licensed?
Is it on a capacity license basis or server vCPU kind of thing?
Maxta is licensed on a per-server basis, and we assume that it's a dual-socket server. I mean, if it's four sockets, it's actually two licenses, but the license is based on a dual-socket server. The capacity doesn't matter, nor do the vCPUs or the cores.
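To make the stated licensing rule concrete, a quick illustrative calculation (my sketch, not Maxta's tooling): one license covers a dual-socket server, a four-socket server counts as two, and capacity, cores, and vCPUs don't factor in.

```python
import math

def maxta_licenses_needed(socket_counts: list[int]) -> int:
    """Count licenses under the stated rule: one license per dual-socket server,
    so a 4-socket server needs two. Capacity, cores, and vCPUs are irrelevant."""
    return sum(math.ceil(sockets / 2) for sockets in socket_counts)

# Three dual-socket nodes plus one four-socket node -> 5 licenses.
print(maxta_licenses_needed([2, 2, 2, 4]))  # 5
```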
So the big difference I see between buying appliances and buying software is that most HCI appliances have storage-like
margins for the vendor, and server margins are much thinner.
So when that year four or five comes around and it's time to refresh
and I can just buy DL380s from HPE
because I've been buying DL380s from HPE since 1492,
then I pay server prices, not storage prices.
And while I'm on record as not being a big fan of TCO analysis,
one of the flaws of a lot of TCO analysis is that it's too short a time window
because a lot of things show up.
Well, what happens when I refresh?
That's where Maxta's model really helps, right?
So you don't pay for the software again.
You just pay for, as you said, a DL380, and you're good to go.
From an end customer, they do save quite a bit of money.
And the third difference, which applies now for both software
as well as for hardware vendors, is this concept of application-defined storage.
So what we have done from the ground up is to allow customers to be able to set certain properties, or I should say, on the Maxta side, all properties, at a VM level.
And these properties could be the number of replica copies,
two versus three.
I mean, in most, say, for example,
in Nutanix and SimpliVity and so on,
you need to set it at the cluster level
and you can't even change it.
But with Maxta, you can set it,
you can change it at any time.
You can go from two to three,
you can drop down from three to two, at a VM level.
We believe that's a huge value when it comes to capacity efficiencies and so on.
So the RAID level can be set, a replication level can be set at a VM level?
Yes, and it can be changed non-disruptively at any time.
You can set it at 2, you can set it at 3, drop down, go up, all non-disruptively.
Okay, and that's two- and three-way replication, right?
That's correct.
Are you doing erasure coding as well?
Currently, we are not.
Okay, and then are you doing data reduction in terms of compression and deduplication?
Yes.
Yes, we do.
And we do it inline, too. Along those lines, we also allow customers to be able to set the block size at a per-VM level, actually at a
per VMDK level. We allow customers to set it. So you may ask, hey, why? What's the benefit of doing
this, right? So what we have seen even in our internal testing is customers do get a benefit
of at least 25% improvement in performance when you align the application level block size to the storage level block size.
For example, Microsoft says, hey, for Exchange, it's the best practice to deploy a 64K block size.
So now when you're deploying Exchange on Maxta, you can set a 64K page size or a block size for those VMs. And if you are deploying a web server on the same
cluster and you want it to be set at 4k, you can set the page size to be at 4k. So this gives
enormous flexibility and which in turn delivers better performance for those applications on the
same cluster. And certainly for applications where you're setting a large page size like 64K, it reduces
the amount of memory and the size of the hash table for the deduplication as well.
Absolutely.
And our metadata footprint reduces and so on and so forth, right?
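A back-of-the-envelope sketch of Howard's point: with a larger block (page) size there are far fewer blocks to track, so the dedupe index and metadata shrink. The 64 bytes per index entry below is an assumption for illustration only.

```python
def dedupe_index_entries(capacity_bytes: int, block_size: int) -> int:
    """Number of dedupe index entries needed to track a given capacity."""
    return capacity_bytes // block_size

def dedupe_index_bytes(capacity_bytes: int, block_size: int,
                       bytes_per_entry: int = 64) -> int:
    """Rough index footprint; 64 bytes/entry (hash + location) is an assumption."""
    return dedupe_index_entries(capacity_bytes, block_size) * bytes_per_entry

TIB = 2**40
for bs in (4 * 1024, 64 * 1024):
    print(bs // 1024, "KiB blocks:",
          dedupe_index_bytes(10 * TIB, bs) / 2**30, "GiB of index for 10 TiB")
# 4 KiB blocks -> 160 GiB of index; 64 KiB blocks -> 10 GiB, for the same 10 TiB.
```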
So did you say you do inline deduplication and compression?
We do inline compression. Actually, we do inline deduplication also, but we do not turn on inline deduplication by default. It is turned off.
Of course, customers have the flexibility to turn it on if needed, but we do get benefits using compression, significant benefits, and so we turn it on. So it reduces the footprint, as Howard Marks said,
that is required in terms of resource utilization and so on.
And the other benefit we also see is we do optimize our infrastructure
to take advantage of low-cost, low-RPM drives too,
in case they're deploying in a hybrid environment.
They can even deploy a 7.2K RPM SATA drive
and get similar performance to a 10K SAS drive, right?
So a combination of all these,
we believe at the end, in our mind,
customers do care about lowering their operational
and their capex costs, right?
So a combination of all these,
we believe gives them that benefit.
So we turn off dedupe by default.
Speaking of drives, you guys support SSDs, presumably?
Yes, and NVMe also.
And NVMe SSDs?
Yes.
So does that mean that it's a caching tier or is it a back-end storage tier?
I mean, I would say some vendors out there
have different tiers for SSDs.
Some do not.
Yeah, it could be either.
That's a flexibility that customers get.
They can deploy an all NVMe system
or they can use NVMe for cache
and SATA-based SSDs for capacity
or they can just use SATA-based SSDs for both
cache as well as for capacity.
If I'm building a hybrid system, can I pin workloads to the flash layer?
That's a very good question, Howard.
That's actually the direction that we are going moving forward. Currently, customers
cannot pin their workload into cache, but that's essentially where we are taking our VM-centric
approach. And we also believe that there are a lot of customers out there who would want to get
flash-level performances for certain VMs and may not be for all of them. So with our approach,
since anyway, we have the underlying framework of delivering all these capabilities at the VM level,
our approach is to go ahead and say, when you deploy a VM, if you want this VMDK to be pinned
to the flash tier, we can just go ahead and pin it to those SSDs. That's the direction that we are going.
Yeah. In the SME world, there's usually one to four applications where performance is so important, we don't really care about the cost and everything else is much less
important. And the easiest way to do that is to put in a hybrid system and say, these four
applications, pin them to flash. Yes, yes. I mean, that definitely makes our approach much more feasible,
the way we are doing things,
because since all properties, everything,
we do it at a virtual machine layer.
Right.
It becomes very difficult if I have to create a separate flash LUN
and put those applications in and manage how much flash that LUN gets.
That's more work than it's worth.
Right.
Yeah.
So you do support triple redundancy.
I mean, so the minimum cluster size would be three server nodes.
Is that correct?
The minimum cluster size is three nodes.
Actually, even there, we have a very interesting twist to how we support things, right? Say if you look at most other solutions,
when you are looking at supporting double faults,
of course, they combine both the server failures
as well as the drive failures; they treat them the same, I should say.
Then you need five nodes to really get to sustain
any two drive failures too, right?
So the way we have approached this configuration
is we allow customers to make three copies of data
even on a three-node cluster, right?
And most solutions don't.
So now the benefit of this is for certain applications,
if they would like to do that,
then the benefit that they get is
now any two drives in that cluster can fail,
and they still have access to their data. Well, much more importantly, one drive can fail,
and we can have a read error on the second drive. Let me go back to what you said,
Kiran. So I think you're saying that most of the other solutions out there,
if they do triple redundancy, really need to have five nodes to support a two-drive failure.
Yes, any two drives. I mean, on the same node it will be okay, but any two drives, yes. The options that you get are:
you make a two-way replica, the minimum is three nodes;
and you make a three-way replica, then the minimum number of nodes is five.
And of course, with our solution, if you want to sustain two node failures,
yes, you need five nodes in them because you need three nodes to form the cluster majority.
So you will need five nodes.
But in addition to all those, we provide another layer of flexibility, I may say,
where on a three-node cluster, if you make three copies of data,
you can sustain any two drive
failures and still have access to your data.
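A small illustrative sketch (not Maxta's placement code) of why three copies spread across a three-node cluster survive any two drive failures, while two copies do not:

```python
from itertools import combinations

def survives_all_double_drive_failures(replica_drives: list[str]) -> bool:
    """replica_drives: the drives (one per node) holding copies of one data block.
    The block stays readable as long as at least one copy's drive survives."""
    copies = set(replica_drives)
    # A hypothetical 3-node cluster with 4 drives per node.
    cluster_drives = {f"node{n}-disk{d}" for n in range(1, 4) for d in range(1, 5)}
    for failed in combinations(cluster_drives, 2):  # try every pair of failed drives
        if copies.issubset(failed):                 # all copies lost in this scenario
            return False
    return True

two_copies   = ["node1-disk1", "node2-disk3"]
three_copies = ["node1-disk1", "node2-disk3", "node3-disk2"]
print(survives_all_double_drive_failures(two_copies))    # False: that exact pair can fail
print(survives_all_double_drive_failures(three_copies))  # True: no pair covers all three
```

Node failures are the separate constraint Kiran mentions: surviving two node failures still takes five nodes, so the remaining three can form a cluster majority.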
And some of the other players have gone past needing five nodes by externalizing the witness, but we get where you're going.
Yeah. I mean, even then, it's a VM they're externalizing, but still.
My favorite is one vendor will let me run the witness on a Raspberry Pi.
Oh geez. I can offer a witness hosting solution here somehow.
My experience in the enterprise market
is people talk a lot about wanting
a software only solution so it'll be less
expensive but when they actually issue POs, they want to buy something more assembled. Do you have a reference
design or meet in the channel program with any of the server vendors so that I can buy
one thing and not worry about whether I've actually combined three things from the HCL that nobody's ever combined before?
Yes, we do. There are two. I mean, one is we work with, say, a channel partner or a value-added
reseller directly, and customers can buy directly from the value-added reseller, and we work with
them. The customer gets a fully pre-configured appliance,
more sort of a build to order sort of thing, right?
So they can say, I want it on an HP platform with such and such a configuration.
The partner will deliver the appliance or the fully configured appliance to the end
customer.
The other approach we have is also with some of the major distributors, wherein the partner can get it
directly from the distributor, and the distributor has a pre-configured appliance based on our
reference architecture, and the partner can deliver it directly to the end customer. So it eliminates
the partner having to do some of the work. Have you guys had specific success in any particular verticals?
Yeah.
I mean, we do see a lot of traction with the managed service providers,
especially because they are very interested in, as we all know,
they have the expertise to go build systems,
so they would like to build it on the hardware of their choice and deliver both CapEx and OpEx efficiencies, right?
So we do have a major traction within the MSPs.
Do you have a pay-as-you-go license model for those folks?
Because they always like to pay out by the month
because they get paid by the month.
Yes, we do.
They can go, I mean, we have the option of either by the month or by the quarter and so on and so forth. Yes, we do. And the second area where we have found a lot of traction, of course, is in China. You guys know Lenovo OEMs our solution in China; they
resell the product under their own brand name, and we have
other OEM partners also in China. And I mean, there is a huge trend
in software, right, there's a wave, in other words, around
being software only, and they can package, even Lenovo, they can package the solution
and sell it as their own appliance. And we completely rebrand for the OEM vendors.
It looks like it's their own solution. So they have the ability to sell as an entire solution
and not have to say, hey, this is from this vendor,
this is from this vendor and so on and so forth.
It looks like it's a completely fully integrated solution.
So we see we have a lot of traction
and we see the growth really well in China
and also in Latin America.
A lot of traction in these two areas.
This is more from a geographical area point of view.
And MSPs is one vertical.
And the other vertical, it's not a vertical,
but it's more of a sector that we see a lot of traction
is in the small to medium enterprises,
wherein customers are trying to
replace their storage array, wherein the storage array, they were running multiple workloads on
them. Now with Maxta, they get the same level of flexibility in being able to run multiple
workloads and so on and so forth. So that's another area that we do see a lot of traction.
And of course, VDI is an interesting use case like anybody else's
because it's something new that customers can try.
And we do see customers deploying Maxta for their VDI platforms also.
I mean, you would think like robo environments would also be a likely solution
that you guys could go after.
The remote office, branch office kind of things.
Yes, absolutely.
Remote offices.
I mean, for example, Driscoll's,
the berry company, or the agricultural company,
they were the first Maxta customer
and they are still one of our leading customers.
And they have standardized on Maxta
for all their branch offices, or remote offices, I might say. So yes, you're right. I mean, branch offices is another
vertical that we do see a lot of interest in. So getting back from the smallest, what's the
largest cluster you guys have deployed today? I'm not sure if from a capacity perspective or
a server perspective. Let's go nodes, so servers.
Sure.
I mean, one is what we support and what is the sweet spot, right?
That what customers have.
I mean, we have customers who are running with 24 nodes also,
but the sweet spot that we see in terms of cluster deployment
is anywhere between 8 to 12 nodes.
It's what we see that most of our customers deploy their clusters in. And Lenovo, for example, is also using
our product for all their Exchange deployments for all their employees across Asia Pac. It runs
on Maxta software on Lenovo servers for all their Exchange deployments.
And from a capacity point of view,
we have customers running into hundreds and hundreds of terabytes in a
cluster.
We see customers deploying 30, 40 terabytes a node.
Yeah.
With 10 terabyte hard drives, it gets a lot easier.
I mean, of course.
And I just put together a capacity cluster in the lab with 24 six terabyte drives.
Most of our customers, I mean, for at least their large capacity, currently they are using
four terabyte drives.
Right.
That's what we see predominantly used at least for some high capacity.
Okay.
For the robo use case, do you have a manager of managers so that I can sit at corporate IT and check the health of the Maxta clusters in my 400 branch offices?
Yeah, interesting. So, yeah, I mean, we approach this in two ways, right?
One is we do a plugin into vCenter that customers can use
and they can see all the clusters within vCenter.
And we also have the concept of what, I mean,
we have branded it as Cloud Connect or MX Cloud Connect
that is pulling the information from all these different sites
and it gives you a centralized view of all the clusters
that you have within you as a customer.
So a customer can have 20 different sites, and when the customer logs in, it pulls in the information from all these sites and gives them a centralized view of all the clusters within their organization.
And, of course, we host this particular site in the cloud
and we can also do proactive analysis on this data.
We would know what kind of drives they are using
or if there is any recalls on any of them
or if there is a hot patch.
I mean, we do a lot of proactive analysis on this data, and we can be more proactive with the end customers on what's going on.
We would also see alerts in case a drive failed, so we could reach back to the customer to say, hey,
we noticed that the drive failed. Of course, the data gets rebuilt across all the
other drives in the cluster, but then we could let them know, hey, replace the drive when it's appropriate.
Very good.
Yeah, so are you saying anything about the number of customers you've got nowadays?
I mean, we don't disclose the exact number, but we have north of 200 to 300 customers.
Okay.
And also, I mean, within Lenovo itself, I mean, Lenovo
resells the product in China, so there are hundreds of customers
from Lenovo themselves, right? For us, it's Lenovo is one
customer. Yeah. Right.
It's interesting looking at the Chinese market just as being very different, because they got to start in the x86 era and not carry all the cruft forward.
Cruft. Speak for yourself, Howard.
So let me go back. So you mentioned Exchange a couple of times, and it looks like a real viable solution for you guys.
Do you have any, like, Exchange Solution Reviewed Program performance runs with Maxta?
Do you know what that is, the ESRP?
Yeah, I mean, we do have Exchange.
We use, I mean, we model against Jetstress and so on.
We give the performance numbers.
We did a joint paper with Intel on Exchange.
We have done similar work on VDI also.
And we are working on a similar approach on SQL databases and Oracle databases and so on.
So, I mean, Exchange, I brought it up because, I mean, it's one
that Lenovo is using for all their APAC email.
It seemed like a likely thing to show off some of the performance capabilities of the
system and things of that nature.
Yeah, the funny part is Exchange just isn't that difficult a workload anymore.
Yes, it's, but of course, I mean, there is all the background work.
Yeah, there's still plenty of ESRP reports being done every quarter, Howard, I might add.
Oh? Yes. I understand. Mostly that's users haven't realized Exchange isn't that bad a workload anymore. Going back and looking at Exchange 2000, it was many IOPS per user. And every release, Microsoft has said, we're going to use a little bit more capacity, and we're going to use less IOPS.
Speaking of capacity, okay, what's the sort of DRAM requirement for a Maxta node? And I'm not sure, are all the nodes the same sort of nodes, or do you have like a metadata node,
a storage node?
No, we don't. All nodes are independent. There is no single node which is acting as a metadata node or anything, right? It's a
fully, truly distributed environment. Every node is independent and they run the
services independently, and of course they communicate with each other using
our own private network, predominantly 10 gig deployments, but they all communicate with each
other and they're all independent. To go back and answer your question, what's sort of the requirement for the Maxta controller VM from a DRAM point of view, we reserve 20 gigs of memory for our controller VM on every node within the cluster. We also have the flexibility to run what we call a compute-only node, where even, say, in a very small deployment like a three-node
config, I can have two nodes which are compute and storage, and the third node is sort of just
a witness node with some minimum configurations and maybe even a single drive which hosts our
software, and that's pretty much it, right? So we do have the flexibility also.
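Putting the stated numbers together, a minimal three-node layout might look like the sketch below: two converged nodes each reserving the stated 20 GB for the controller VM, plus a small witness-style compute-only node. The witness node's sizing and reservation are guesses here, since they aren't specified in the episode.

```python
# Illustrative only; node RAM sizes are made-up examples.
CONTROLLER_VM_GB = 20  # stated memory reservation on converged nodes

nodes = [
    {"name": "node1", "role": "compute+storage", "ram_gb": 256},
    {"name": "node2", "role": "compute+storage", "ram_gb": 256},
    {"name": "node3", "role": "witness/compute-only", "ram_gb": 32},
]

for node in nodes:
    reserved = CONTROLLER_VM_GB if node["role"] == "compute+storage" else 0
    print(f'{node["name"]}: {reserved} GB reserved for the storage controller VM, '
          f'{node["ram_gb"] - reserved} GB left for guest VMs')
```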
Not every server has to be unique, or rather, it doesn't have to be homogeneous.
It doesn't have to be the same.
It could be, I mean, we also support multiple.
Actually, we have one customer who is running
two Supermicro servers and one Cisco server.
So, I mean, it could be heterogeneous in some sense,
but it's an odd deployment.
But most of the time, it's either Cisco or Supermicro or anything.
But even on the capacity side, it could be different.
It doesn't have to be identical.
I mean, one server can have 10 terabytes and the other server can have 12 and
so on. I mean a small variation we internally manage it to optimize the VM deployment so that
you get the best out of your capacity that you have deployed on each server. But I mean of course
there are some best practices you don't want one server to be 5 terabytes and the other server to be 30 terabytes.
So within some margin of percentage differences, the software is intelligent enough to go manage it.
Okay, so if I have a relatively large environment, you know, 12 nodes, and I just need to add compute,
do I pay a software license for that like I do with vSAN or do I just attach that host to the cluster and
consume storage?
So, I mean, from the licensing
model point of view, you will pay for the license for
Maxta. That's the standard, I mean, the
licensing model.
And you don't have storage-only nodes at the moment, right?
I mean, it could be a storage-heavy node,
but when we say storage-only, it still needs some compute
and some minimum CPU and memory resources to run our VM, but you could have a node which has very little CPU and memory resources,
but a lot of storage resources.
Okay, but I can't build that node and have it run KVM
and support VMware VMs yet, right?
No.
Okay.
All right, gents, it's about gotten to that time here.
Are there any last questions you have for Kiran, Howard?
No, I got it.
Okay, I'm good.
Kiran, is there anything you'd like to say to our listening audience?
No, I think it was a pleasure giving a quick overview.
And just to summarize the key benefits and the differences of Maxta.
One is choice.
Customers get the choice of both server platforms
as well as the hypervisor.
The second, of course, is our transferable software license
that reduces their operational
as well as their capital expenses,
especially at the time of refresh.
And the third is primarily
around application defined storage platform.
We talked about two variables.
One was configuring the replica copies and the block size,
but we have much more interesting things around.
You can enable, disable, read caching on a per VMDK basis,
say, for transaction logs when you're deploying SQL Server.
You don't need to enable read caching for transaction logs, so you can go disable read caching for that VMDK.
Or if you know that you are going to deploy certain data sets, which do not compress,
you can go disable compression for those VMs. So a lot more interesting aspects around the
flexibility and app defined policies that customers get.
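As a purely hypothetical data model (none of these names come from Maxta's actual interface), the per-VM and per-VMDK policy knobs described above might be pictured like this:

```python
from dataclasses import dataclass

@dataclass
class VmdkPolicy:
    replicas: int = 2          # 2 or 3 copies, changeable non-disruptively
    block_size_kb: int = 4     # align to the application, e.g. 64 for Exchange
    read_cache: bool = True    # turn off for SQL Server transaction-log VMDKs
    compression: bool = True   # turn off for data sets that don't compress

# Policies follow the workload rather than a cluster-wide or per-LUN setting:
policies = {
    "exchange-db.vmdk":   VmdkPolicy(replicas=3, block_size_kb=64),
    "sql-txlog.vmdk":     VmdkPolicy(read_cache=False),
    "video-archive.vmdk": VmdkPolicy(compression=False),
}
```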
So those are essentially the three key benefits and the differences that we see compared to other
solutions. All right. Well, this has been great, Kiran. Thanks very much for being on our show
today. Next month, we will talk to another system storage person. Any questions you want us to ask,
please let us know. That's it for now.
Bye, Howard. Bye, Ray. Thank you.