Podcast Archive - StorageReview.com - Podcast #117: Fibre Channel Still Tops for Virtualization
Episode Date: March 14, 2023
Brian invited his friend Nishant Lodha, Marvell's Director of Emerging Technologies, to sit for… The post Podcast #117: Fibre Channel Still Tops for Virtualization appeared first on StorageReview.com.
Transcript
Hey everyone, thanks for joining the podcast. I'm Brian Beeler. Today we're talking about all the latest advancements in virtualization and how connectivity fits into that, specifically Fibre Channel connectivity.
And for that I've brought in my friend Nishant from Marvell. Nishant, thanks for doing this. How are you?
Hey, I'm doing well. Thanks for having me, Brian.
All right. So you've got a great opportunity here to talk about all the great virtues of Fibre Channel
without losing half the audience because they get bored with Fibre Channel.
Okay.
This is a big challenge for you.
It is.
It's a technology that's out there and used by millions of people, but it's a conversation
people don't want to have.
It's one of those things, I guess.
Well, you know, Fibre Channel is in there with tape as perpetually dead,
but seemingly the footprint continues to expand
and few practitioners want to admit it.
It's just one of those things, right?
Especially as young people come into the industry,
they see existing technologies, whether it's hard drives or connectivity or whatever it is, as old,
and kind of ruffle against it, I suppose.
Yeah.
But new buildings often want to be constructed, you know,
by demolishing old ones,
even though they are perfectly functional and historic and beautiful and serve the purpose.
Enough said on that.
Okay.
So we ran into each other a bunch of times last year.
We've not yet seen each other in person this year, but I suspect we'll resolve that soon.
OCP, I know we talked about a bunch of emerging technology topics there. But VMware Explore, I almost said VMworld,
VMware Explore was a good show for you guys and good timing with the advancements in vSphere that
the users are adopting with vSphere 8, and so much energy around virtualization. It was surprising to me, I didn't know this and
I'm knee-deep in this industry, how big of a footprint the Fibre Channel install base is
in vSphere. I mean, absolutely amazing. I can never remember if I'm allowed to say the numbers
that I have in my head or not, but if it was a pie, it would be an extraordinarily large part of the pie.
That blew me away in terms of the adoption and is obviously the reason why VMware tends to innovate with Fibre Channel in mind when it comes to new storage features.
But to your point at the very beginning here, it's still foundational to most organizations.
What's, I mean, you know all of this, but what's your takeaway from an industry perspective on that kind of data?
Yeah, Brian, you know, when it comes to on-prem data centers, right,
the kind of customers that I talk to, people who are running mission-critical enterprise applications,
applications whose data is close to their heart
and that are super critical
or super important to their business function.
I see a lot of VMware ESX deployments out there.
In fact, I would say that over 60 to 70 percent of my customers
run VMware ESX, and a huge majority of those actually deploy Fibre Channel, the
latest and greatest Fibre Channel out there. And once again, it's getting a top-notch
virtualization infrastructure, which is designed to bring all the cutting-edge, cloud-like
management, performance, and efficiency features
to on-prem data centers, and then mixing and matching that with
a fabric like Fibre Channel to deliver
the efficiency and operational simplicity
that these people want.
If you look at enterprise data centers, they're not
staffed with PhDs running their IT data center. They are staffed with people who
have a billion things to do to make all of that run, and they look for
reliability and simplicity as their two key tenets when they look at
technology.
Well, and on that front, it's funny, Fibre Channel,
from the reliability standpoint, I mean, I'm sure we can get into some of the performance benefits too, but it just works.
And I'm laughing because we worked together recently
on a piece we posted on StorageReview.com,
and I'll link to it in the show description here
so people can learn more about some of the latest innovations with Fiber,
especially in VMware.
But a lot of the comments we got on social media were like,
we're using Fibre Channel, ESXi, and Pure, and it just works, no problems. And for as many people that were maybe questioning the
purpose of Fiber in a modern data center, I mean, guys that are in there doing it day to day
see the benefits. And as you say, operational simplicity is fundamental to that in terms of
making sure that I keep my data center running, my applications
running and don't have to be chasing down goofy networking stuff. I mean, it just goes, right?
Yeah. And it just works. It's not an accident, Brian.
Oh, there was an engineer or two involved in that?
It's been years and decades of carefully crafting every new capability,
working with hypervisor vendors like VMware and others,
and switch vendors and storage array guys, the cables guys, the optics guys,
things that are super important to make all of this just work.
There's a lot of that stuff that's done in the background so that our customers don't have to go through the pain.
Well, it's funny, though, too, because there's all sorts of new stuff, fabrics for one.
And I'd like to get into some of that with you, too.
But it's not as if it's
an architecture that's static. You've got 64-gig Fibre Channel. You've got new switches from your
partners like Brocade and Cisco. You guys continue to innovate on the HBA side or
NIC side, I can't remember, whatever you prefer it to be called, but your adapter side, right?
Yeah.
So there's so much going on there.
Pick one of those new things that you're excited about.
Let's dive into one of those topics a little more.
Sure.
I think one of the big things that is kind of driving the direction in which we are innovating in fiber channel technologies is NVMe.
NVMe is definitely kind of being established as kind of the new definition for storage.
It's being deployed within servers when people are crafting out a vSAN hyper-converged environment.
NVMe is deployed in external storage arrays,
as people are looking to get that next level of performance,
accelerating their applications,
and going beyond just using NVMe as a caching tier,
but finding ways to use NVMe and SSDs as
capacity tiers as prices have become more stable.
So NVMe is giving us the direction,
because that's where our customers are going.
It goes back to your point around NVMe over Fabrics,
which is a native way of accessing NVMe
that is remotely available in a storage array,
and how do you access that efficiently
from a virtualized server like
VMware. And if you go back and look at the history on this a little bit, right, VMware
introduced NVMe over Fabrics, was it in 7.0 I think, right around the time when we were, you know, being
rushed back home and the pandemic was just beginning to take hold. And a lot of progress has been made since then.
And ESX 8.0 today supports not just Fibre Channel, or NVMe over Fibre Channel,
but also NVMe over TCP.
And both of these fabrics have advanced significantly
to meet customers' demands around NVMe.
Well, what do you think is the biggest challenge around deploying any of these storage arrays
over fabrics, over Fibre Channel, whatever you want to do, regardless of interconnect.
Just take a step back a little bit from the enablement to the feature set itself. We've seen in our lab, we did some work with NetApp last year on
NVMe over Fiber Channel. And the benefits there are quite tangible. We've got a paper out there
on the performance benefits, but still the uptake by customers has been a little bit slower there.
And I can't quite tell if it's just the sort of lagging kind of way that enterprise IT works, which I'm sure is some of it, or if there's some quantifiable concern out there in the industry.
You might be a little bit closer than me in terms of the customers you talk to, but there's so many benefits there.
I'm just curious what you're seeing in terms of adoption.
And maybe it's one of those things where the adoption cycle, or hypervisor upgrade cycle, is a little bit slower.
So very few people will jump immediately to 8.
So maybe it took a little bit of time to remove that slack from the system.
But what's your take on adoption and what's going on there?
So if I look at it, just talking to customers, getting a view of where the deployment cycle is with NVMe over Fabrics, like you said, irrespective of whether it is Fibre Channel or TCP or RDMA for that matter,
less than five, six percent of customers
today in enterprise data centers actually adopt NVMe over Fabrics.
So there's very little penetration as of now.
I'll go into some of the reasons why that is so and how that is changing,
but it is expected by us and many other
industry analysts that this number will grow to about 25 to 30 percent of customers deploying NVMe over
Fabrics in the next few years. What is driving that is that, you know, as time has
progressed in the last several years, as we have built up the NVMe over Fabrics ecosystem,
it is the last few things that are being buttoned up before these implementations are complete
and a heterogeneous set of hardware infrastructure is available for customers to deploy NVMe over Fabrics.
And if I just break that down in a second here, right,
if you want to be successful with NVMe, right,
there is not just the hardware infrastructure,
but also the software infrastructure,
which was traditionally designed for SCSI and rotating drives.
The software infrastructure also needs
to be enhanced to fully realize the benefits of NVMe. The hardware side of the story
is pretty much complete, and even before ESXi 8.0 came out, the hardware story was pretty complete.
For example, most storage array vendors, for example you brought
up the name of NetApp, or you look at Dell EMC, or you brought up Pure Storage and others as we
were exchanging emails earlier, all of them support NVMe over Fabrics, and all of them support Fibre
Channel, or NVMe over Fibre Channel. The HBAs, or adapters I should say, are
supported, as well as the Fibre Channel switches. But it is the software in this case which has been
somewhat of a long pole. For example, if you look at VMware ESX, when they started introducing
NVMe over Fabrics starting in 7.0, it was a pretty limited implementation, right?
It was not available at scale, very few namespaces,
which is another way of saying LUNs in the NVMe world.
The scale, the multi-pathing abilities were not there.
Critical features, we'll go into some of those,
around storage like vVols were not available for NVMe
over Fabrics. And that was a significant impediment for customers who want to deploy
mission-critical stuff in a simple, easy manner, but the software was not there yet. And
ESX 8 has actually buttoned that up. I believe that will be a huge driver in getting our customer base from this 6-7% NVMe over Fabrics deployment to 25-30% over the next few years.
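As a practical aside, here is a minimal sketch of how you might survey that NVMe over Fabrics plumbing on an ESXi 8 host, assuming shell access and the standard esxcli nvme and vvol namespaces; command availability and output format vary by release, so treat this as illustrative rather than a vendor-documented procedure.

```python
# Minimal sketch: survey NVMe over Fabrics objects on an ESXi host with esxcli.
# Assumes it runs on an ESXi 8 host where `esxcli` is in PATH; exact command
# availability and output format can vary by ESXi release.
import subprocess

COMMANDS = [
    ["esxcli", "nvme", "adapter", "list"],      # NVMe-capable adapters (e.g., FC-NVMe HBAs)
    ["esxcli", "nvme", "controller", "list"],   # discovered NVMe controllers on the fabric
    ["esxcli", "nvme", "namespace", "list"],    # namespaces (the NVMe analog of LUNs)
    ["esxcli", "storage", "vvol", "storagecontainer", "list"],  # vVol containers, if any
]

def run(cmd):
    """Run one esxcli query and return its stdout, or a note if it fails."""
    try:
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return out.stdout.strip() or "(no objects reported)"
    except (OSError, subprocess.CalledProcessError) as exc:
        return f"(command failed: {exc})"

if __name__ == "__main__":
    for cmd in COMMANDS:
        print("$", " ".join(cmd))
        print(run(cmd))
        print()
```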
That makes a lot of sense.
I mean, we're excited for it because the performance benefit is clearly there.
And if you just look at it on that front alone,
and really there's almost no additional complexity.
And I think those two things of free performance
and again, limited to no complexity,
should make it something that everybody's doing.
I don't even know what the licensing schemas are
from the storage vendors,
but I think it's either a low cost or no cost option, depending on who you're talking to.
You'd have to check with your individual array sales guy on that, but just tremendous potential.
So that's one that we're looking forward to and definitely want to see higher adoption in. And it sounds like, based on the data you're seeing and the customers you're
talking to, you fully expect that to happen as customers upgrade their software stacks and move into
vSphere 8.
Yep, I expect that to happen. And I'm already hearing the first signs.
ESX 8 has been out for like six, ten months now. And there's already a lot of excitement among
customers saying, hey, we've been waiting for this.
You know, I have the hardware in place.
I have the NVMe arrays in place.
I have your HBAs there.
Now I can kind of, I can take this Ferrari out for a ride.
So, you know, one of the other big topics at VM, I almost said World again, VMware Explore.
It's going to take me probably half a decade to stop doing that.
But one of the hot topics are DPUs.
So I'm interested in your perspective on this because I'm sure somewhere in the Marvell skunk works,
you've got a DPU product and I don't even know what's going on there exactly.
And you can tell me, but do you see, how do you see that transforming the VMware
world? And what does that do from your perspective for Fiber Channel? Is there a bridge for the two
to coexist? Is it one or the other? Just how do you think about that and talk to your customers about what these new accelerator cards and new ways of computing and running vSAN and things like that, what does that mean to them?
Yeah, and Brian, on DPUs, and some people call them smart NICs, and people also call them fixed function accelerators and things like that.
And I get that question a lot and in general for
many years, SmartNICs or DPUs have been primarily used in two places. A, they have been,
you know, the networking accelerator in the public cloud. There's a lot of open information
available on how public cloud vendors have used SmartNICs,
including the well-known Annapurna Labs acquisition by AWS several years ago, which formed their
Nitro hardware function within AWS.
Beyond that, most of these DPUs were being used in network appliances to accelerate networking.
Cavium, my previous employer,
as well as now Marvell after the acquisition of Cavium into Marvell,
has led in both of those fronts as use cases for DPUs or data processing units.
It's just now that VMware is bringing
DPUs so that they are available to use for on-prem server side applications.
Actually, I applaud the amount of work the VMware team has
done to bring a diverse set of capabilities across vendors that exist in
DPUs under a common platform under
Project Monterey, one of my favorite cities close to where I live. And, you know, trying to level
the playing field between different vendors and create a common infrastructure so it can be
leveraged. However, I would say that the initial use cases, as you would imagine for these SmartNICs slash
DPUs, have been primarily the networking function offloads within VMware ESX. But it's a good
foundation which has been set up to accelerate networking within VMware ESX with the use of DPUs.
Storage, I think, at least in my view, remains another parallel domain
outside of these SmartNICs, although SmartNICs can do a bunch of networking offloads as well as
storage offloads. But I continue to see storage as a separate, dedicated interconnect
that brings external storage into the world of VMware.
And the reason for that is as follows, right?
A, typically external storage applications are mission-critical, and you want
multiple redundant pieces of hardware and paths to access it.
Number two, the very premise of using DPUs is to offload the protocol down to some coprocessor,
which is the DPU. But that problem does not exist with Fibre Channel HBAs, because the Fibre Channel
HBAs are fully offloaded. They already offload the entire protocol. So pushing that onto a SmartNIC does not yield you additional benefits.
But if you want to pay something more for the same functionality, be my guest.
Yeah, well, the storage world, as you say, is distinct and different.
The software-defined world or the hyper-converged world where you're putting all these resources together,
maybe a little something else.
And then there's this whole notion of disaggregation
that many companies are out there going after
and hoping for things like CXL 3.0 probably
to be able to change the balance again
of where componentry goes and how the technology works.
I mean, it's always an evolving space and it's certainly exciting.
It is. And since you brought up the term CXL, right,
I find CXL a pretty interesting technology.
And yes, I think your timeframe on when CXL becomes relevant
is probably in the CXL 3.0 or PCIe 6.0 timeframe,
several years out, and a bunch of work is being done in the industry. But the opportunity that
we have with CXL is huge because this is our opportunity as an industry to see if we can disaggregate memory.
The amount of memory that is sitting in these servers, unused, untouched,
while dollars and billions of dollars have been spent on it, is just a terrible waste.
And if we can disaggregate memory, like with disaggregated storage, it can be huge.
Now, time will tell how successful we are,
but there is tremendous potential there.
No, I totally agree.
And I do think it's CXL 3.0 where that really becomes
a reality or potential reality, right?
The promise is there to be able to pull that out
and have memory nodes.
I mean, think of all the infrastructure guys
that have been building even, you know,
all these old blade chassis way back when.
I mean, blades were hot 15 years ago,
maybe even longer ago,
as a way to get a lot of density,
a lot of compute density.
But, you know, in something like this Cisco UCS X-Series system,
which is a modern version of that, could we not just have nodes that slot in that are GPU nodes or Xeon nodes or whatever you want, and continue to build
out your infrastructure, rather than be tied into these 1U, 2U, 4U physical nodes that, you
know, rack in and are somewhat constrained? Now we can unleash some of that potential by having them
connected to a high-speed network and do all sorts of other things. But man, the CXL potential is really huge.
Yep, I agree.
Like you said, time will tell.
But I see the right technology and the right investments going into it, right?
Which is two critical factors for something to succeed.
But like you said, time will tell.
Well, how do these technologies impact the decisions
that you guys make in terms of where to make
your engineering investments?
And I'm thinking about, I mean,
we've been talking a lot about NVMe storage.
That's been progressing.
Right now, we're at an interesting inflection point
with Gen 5 drives that should hit this year sometime,
depending on the brand.
We're hearing a lot of that.
We're seeing the SSD guys go through a form factor revival.
A lot of E3.S going into the data center, which will increase slot density.
A lot of E1.S, E1.L going into the hyperscalers, which is a separate conversation altogether. But as AMD and Intel improve their
enterprise server CPUs, as more lanes become available in these systems,
it starts to be really interesting in the way server vendors kind of prioritize where they
want to send these lanes and how they want to manage these devices. Because even with all of the power delivered by AMD and Intel, you still don't have
an infinity number of Gen 5 slots in the front and Gen 5 slots in the back. And there's still
a bit of management there in terms of what they enable, how many lanes go where, are we using two to the drives,
four, and then how do you get to the back, you know, which ones are five, some are three or
four, maybe legacy ports. But all of this, you know, I think it's exciting. You guys probably
think it's maddening though. I'm not really sure, you know, what the Marvell view is on all of this
technology as it comes out in this latest Gen 4 refresh from both AMD and Intel.
Yeah, Brian, the way I look at this is in kind of two parts, right?
A, I clearly see that the bus that is coming out of the CPUs,
and that is relevant today and will be more and more relevant as time progresses, is PCIe.
There are an increasing number of lanes,
like you said, certainly never infinite,
coming out of modern processors,
allowing you the ability to hook faster and faster devices,
getting closer and closer to the CPU
in terms of performance,
thanks to kind of progressive speed increases
as PCIe technology has increased.
But end of the day, you know, these buses and the number of buses are finite.
There'll always be a situation where there isn't enough room inside the four walls of
a server to put all the resources that these applications need inside that box. That's where networking technologies will
continue to stay relevant, whether it is
Ethernet transitioning from 40 to 100 to 200 gigabit,
and then Fibre Channel going down the same path
of progressive speed increases, because there will be
infrastructure, the high-performance
infrastructure, you know, all the cache, local memory, sitting right next to the processor,
then there will be kind of far memories, which CXL would enable, and there will be further storage tiers,
you know, that are enabled today by Fibre Channel or NVMe over Fabrics. So it's going
to be a mixed ecosystem, but I do see that, you know, high-performance technologies, thanks to PCIe,
are getting closer and closer to the processor, and that's what you need if you want to run these kinds
of massive applications, especially deep learning, big analytics, recommendation engines, right, HPC workloads.
So, see, Brian, you never solve the problem entirely.
You solve it in pieces, and the next thing comes up and then you solve that. So it's progressive is how I look at the changes that are brought about by both Intel and AMD.
No, I mean, we're well aware.
I mean, you look at it, you just take one, a single server, right?
And fill it full of decent storage and a nice high speed NIC on the back.
And something will prevent the system from going as fast as it could possibly go
in a theoretical world, whether it's the CPU or the interconnect out the back or the drives
themselves or the RAID card or the software or cooling. I mean, cooling now is getting, you know,
out of control. Not yet, but it's headed in that direction, right? Where organizations,
for all those AI boxes you're talking about, almost all the new AI boxes that are heavy,
dense on those NVIDIA GPUs or other GPUs, they're all looking at liquid cooling.
Direct-to-chip cold plates are the easiest way, but all these big HPC guys are looking at
all sorts of crazy stuff, including full immersion, partial immersion, to be able to deal
with that heat. And then of course, there's the green initiatives on top of that. So if I can get
the heat out of the data center, that's okay for step one, but can I do something with the heat that's productive for step two?
Just so many challenges around that.
But cooling these systems as we put more and more and more high-performance parts in them is a tremendous challenge.
I haven't seen, I don't think, any liquid-cooled fiber cards yet, but I'm sure that's on the...
I'm sure somebody's out there doing it. There's all sorts of, you know.
I hope we don't need to get there, because, you know,
a Fibre Channel card pulls what, 10, 12, 15 watts of power.
That's nothing as compared to the hundreds of watts that your modern
processors and GPUs or even SmartNICs pull.
So we hopefully want to stay in that realm of low power, stay green, and not overly
complicate the already complicated landscape of cooling these infrastructures, without having
to break your wallet, so to speak. And I think, going back also to green
initiatives, one of the cute little things which I actually saw that VMware ESX 8.0 brought
about, right, is they have these nice charts within vCenter that actually track your
energy usage, your kind of energy footprint,
your carbon footprint within that.
And that's huge, right?
You can't improve what you cannot measure.
And I like the way ESX has brought this to the forefront.
And I love the innovation with those guys.
We work closely with VMware, the engineers there,
the marketing people there,
and they are heading in the right direction. Yeah, it's interesting. As we record this,
NetApp had an announcement today around visibility into energy consumption and all this that they're
doing through BlueXP, which is their dashboard. They'd probably scream if they heard me call it a dashboard,
but it's their view into infrastructure. And yeah, I mean, everyone's doing that. VMware,
as you say, is doing that. More companies are trying to give you ways to visualize that data
to make more intelligent decisions from a green, from an economics perspective,
energy consumption. I mean,
here in the U.S., it's a problem. But if you go overseas, as you well know, go to Europe,
energy costs have gone through the roof. So maintaining control over that is important.
One of the other things you talked about before, vVol support, other virtualization support and integration for Fibre Channel. I want to talk a
little bit about that from a management standpoint. What else is going on industry-wide to give
more visibility, whether it's troubleshooting or preemptive insights or tuning, either within VMware, other hypervisors, other systems?
What can you say about that in terms of visibility and sort of,
I know you've talked about the autonomous SAN before in terms of having things that just work,
that are self-healing, that are self-managing.
What's the latest on that front?
Absolutely, Brian. I think you brought up vVols, right? Definitely, I would say one of the most
significant advancements in ESX 8.0 is that VMware started supporting vVols with NVMe over Fabrics.
Now, the implementation that they have done with 8.0
today supports vVols only with Fibre Channel,
but they do show a roadmap
to add additional NVMe over Fabrics transports.
And it goes back to that visibility and simplicity thing.
vVols has been so popular, the VASA framework,
the APIs that are implemented by the storage array vendors.
My customers would not deploy external storage on ESX without vVols. It was a huge impediment to
my customers deploying NVMe over Fabrics, especially FC-NVMe. If there was no vVols,
they couldn't proceed ahead. So it's a huge step forward.
And vVols hugely simplifies the way VM storage,
the way storage policies are applied,
makes it so much simpler and easier to configure.
Now, going back to your visibility question,
with all the benefits that server virtualization has
brought to us, there have been some negative externalities that have also
popped up, like with anything else, right? One of the big challenges has been
actually loss of visibility, especially on an external fabric, and I'll tell you
what I mean by that term. For example, if you look at a typical VMware ESX deployment on a server,
there are tens of VMs, 50, 70,
let's say 100-plus VMs sitting on that one single unit of server.
All of these are sending their IO requests,
they're accessing storage, and external
storage, let's say, through a Fibre Channel HBA, right? What the HBA does is that it blends together
the IOs from all of these hundred VMs into a nice little smoothie, right? And the challenge with that
is that the fabric, the switches, any monitoring equipment that is sitting and trying to figure out what's going on, has no idea which virtual machine or which application is actually generating what workload.
There is kind of complete loss of visibility because individual frames cannot be identified as to which virtual machine originated them.
And our customers have repeatedly come and told us, hey, this is something that's a huge pain for them,
especially when troubleshooting, quality of service, metering, things like that.
What we have done is that we have implemented a technology called VM-ID, or virtual machine ID, and it's
available for VMware ESX 8.0. And what that does is it tags every single frame that leaves
the Fibre Channel HBA with an ID of the VM that originated that transaction. Now, any monitoring equipment, any switch analytics
can look at every single frame and tell you exactly which application, VM 1 running Oracle,
Database 2, is doing this IO profile, its performance over the last six minutes, six hours.
You can see exactly what your individual apps are doing or were
doing.
So, it's a huge progress forward.
Does VMware pass that ID through to you or are you assigning an ID and then managing
a meta table to match them up?
How does that work exactly?
So virtual machine IDs are worldwide unique,
right? They call it a UUID, and VMware's VMkernel stack provides our HBA with that ID.
And that ID, or a hash of that ID, is what we use to mark individual packets.
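To make the mechanism concrete, here is a hypothetical sketch in Python, not Marvell's driver or firmware code: it hashes the UUID the VMkernel hands down into a compact tag, as an HBA conceptually does when marking frames, and then aggregates I/O counters per tag the way a switch analytics engine might. The UUIDs and I/O sizes below are invented for illustration.

```python
# Illustrative sketch only (hypothetical, not the actual HBA/driver implementation):
# derive a compact per-VM tag from the UUID VMware supplies, "mark" each I/O with it,
# and aggregate per-VM statistics the way fabric analytics conceptually would.
import hashlib
from collections import defaultdict

def vm_tag(vm_uuid: str, bits: int = 16) -> int:
    """Hash a VM UUID down to a small fixed-width tag that could ride with a frame."""
    digest = hashlib.sha256(vm_uuid.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % (1 << bits)

class FabricAnalytics:
    """Toy per-tag counters standing in for a switch analytics engine."""
    def __init__(self):
        self.stats = defaultdict(lambda: {"ios": 0, "bytes": 0})

    def observe(self, tag: int, nbytes: int):
        self.stats[tag]["ios"] += 1
        self.stats[tag]["bytes"] += nbytes

# Example: two VMs sharing one HBA; their I/Os are no longer an anonymous "smoothie".
oracle_vm = "4210d6a3-1f5e-4c2b-9b7e-000000000001"   # hypothetical UUIDs
sql_vm    = "4210d6a3-1f5e-4c2b-9b7e-000000000002"
analytics = FabricAnalytics()
for uuid, size in [(oracle_vm, 8192), (sql_vm, 65536), (oracle_vm, 4096)]:
    analytics.observe(vm_tag(uuid), size)
for tag, counters in analytics.stats.items():
    print(f"tag 0x{tag:04x}: {counters['ios']} IOs, {counters['bytes']} bytes")
```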
So VMware provides us the ID. And so your suggestion is that then we can take that data and put it into a dashboard
somewhere and have better visibility into which VMs are not necessarily problematic,
but which ones are maybe causing impacts on your infrastructure that you may want to address.
Exactly. As Fibre Channel SANs have grown bigger and bigger,
thousands of nodes,
it has become increasingly difficult to manage,
especially with server virtualization.
And the storage and SAN admins have been saying,
hey, we are running blind here,
not knowing which workload is performing with what characteristics.
And the switch analytics engine is very powerful, but if it doesn't know which application or which VM is actually kind of doing what profile, it can't give you the information you need.
That seems like a powerful tool. I mean, we've looked at some of the visibility in vROps before, but this goes maybe even further. A lot of great opportunity there to give your SAN administrators much more ammunition to go
ensure that their delivery is high quality and there aren't any issues. Or if there are issues,
they can get to the root of them more quickly. Which is important, right? When it comes to
uptime and things go wrong or some application is not behaving as it should, we want to resolve that problem before things escalate.
Right. All right. Well, speaking of escalation, we're running 32 gig.
I've got one of your adapters sitting on the table next to me here and a pile of cables.
But you've got 64G out there. And I know that the fiber channel world is typically
a little slower to move through the progressions. Everything we've seen in the lab in the last year
has been 32. I think maybe we had one system that came in that was more SMB on 16. But
what does the future look like in terms of performance capabilities?
And when do you start to see significant penetration on 64?
Yep. Good point, Brian. I think just like you, my customers, a majority of my customers today are deploying 32-gig Fibre Channel. That's my top-performing speed in terms of Fibre Channel HBAs, the QLE2772
dual-port 32-gig Fibre Channel HBA, that's, you know, the most popular out there. 64 gig is
just being introduced. I would say that the 64-gig Fibre Channel ecosystem is progressing well. Both the switch vendors, Cisco and Brocade, have 64-gig switches.
There are HBAs available for 64 gig. The storage arrays are not there yet. And
I expect maybe later this year, and progressively over the next two years, that there will be
64-gig Fibre Channel arrays, and that is when you would
start seeing the uptick in 64-gig speeds. So today 32 gig remains the popular choice, and
we are already thinking as to what comes after 64-gig Fibre Channel and it's...
Wait, let me guess. It's got to be 128.
You do the math well, Brian.
Yeah, right.
That's because I started playing Nintendo back when I was a kid
and every progression was 8-bit to 16-bit to 32-bit.
Anyway, carry on.
We don't need to devolve into Sega Genesis talk.
Yes, I know.
But that's where we are going, 128-gig Fibre Channel.
But customers today are busy deploying 32 gig fiber channel.
That's sufficient for the workloads, for the processors that they are picking up today.
And the future is already ready in terms of 64, and then we are working on the next speed.
Well, if you look at the jump to anything, if it was 8 to 16, 16 to 32, and so on, it seems to me that for a while there was a big performance argument because flash was coming
in, hybrid systems were coming in, replacing disk systems.
So anyone that was sitting on 8, for instance, had a very immediate and pressing argument to go to 16 and to some extent to 32.
Do the mechanics change at all as you look at 64?
Or is port consolidation more a part of the conversation than maybe it has been historically?
What are the other dynamics other than just performance when somebody looks to,
do I make that next investment as I'm updating infrastructure to get to pick up the 64 gig
switch? What else is in that math equation? Yes. I think you have been watching this
industry for a while, Brian, and migrations from 4-gigabit Fibre Channel to 8 and to 16 were much
faster or quicker than what we have seen going from 16 to 32, and what we expect also going from
32 gig to 64-gig Fibre Channel. And there are several reasons for that, right? A, like we were discussing earlier, as you solve the speed challenge,
or you keep doubling the speed every few years, other things start popping up.
Other challenges become more dominant, right?
For example, as dense server virtualization happens, VM visibility became a huge problem. As different speeds of
Fibre Channel have started to coexist with each other, congestion can become a huge problem. And
that's our other conversation about self-driving SANs and how we actually look at
what's ahead on the road and try to make decisions to make the transition smoother for our customers.
For 64 gig fiber channel, I think the primary driver is going to be kind of the availability of PCIe Gen 5 and Gen 6 and storage arrays that would kind of bring the ability to transact at 64 gig speeds.
But the conversation with my customers, Brian,
is less about speed, right?
Right.
32 gig seems pretty sufficient for the applications,
for the number of VMs they can host on that server, right?
Tomorrow, when there are kind of double the number of cores,
double the number of VMs, that pipe may not be big enough. But today, 32 gigs seems just fine.
But the conversation with my customers is saying,
hey, how do I make this device more intelligent?
How do I make sure that I'm able to
adapt as link conditions change,
as errors are introduced in my fabric?
Can this device
more intelligently take another route when there is congestion, right? And there is a bunch
of work, again, that we have done with VMware 8.0 to, you know, set the foundation for the
autonomous SAN there. So going back to your point, I think the conversation has now
changed from just speeds and feeds to more intelligence and value-driven capabilities.
Well, yeah. I mean, you're talking about a lot of the operational benefits of being able to keep
the system up and efficient and making sure the application delivery is working as intended.
We haven't talked a lot about the edge, but it seems to be that these notions of intelligence,
of operational benefits, become even more important as you get outside of the data center.
I'm sitting here in Cincinnati. We've got Kroger here, for instance, a couple thousand locations worldwide with gas stations and convenience stores and now restaurants.
And each one of these places has a different set of infrastructure within, but almost none of them have an IT guy on site.
So what do you see at the edge? Because we continue to push more analytics.
I mean, just in retail alone, the number of GPUs going into retail for shopper analytics is crazy.
And they're looking at loss prevention.
They're looking at injuries, slip, fall kind of stuff.
They're looking at out-of-stock notifications. It used to be there was a
stock boy that would walk down the cereal aisle and if we're out of Cheerios, you go in the back, get a case of Cheerios. I mean, that stuff's being all automatically driven by these intelligent
infrastructure setups in the retail shops that are embracing AI and GPUs and all of this latest technology,
but all of it comes with a cost and it stresses the infrastructure that's in place. So I am a
little curious, we haven't talked, as I said, a lot about the edge, but tell me what you're seeing
there. At least the conversation over the last few weeks has been that humans are getting obsolete with tools like ChatGPT. I know, that's
a perpetual problem. Actually, we've got people in our industry
that have gotten bitten by that, by using too many automated tools
and not enough human eyeballs with some intelligence. So I don't think we're gone yet,
but certain tasks are being
addressed with AI, right?
Absolutely, Brian.
And while I was not around then,
but I'm sure people said the same thing
around the industrial revolution,
that humans will become obsolete,
and it doesn't happen.
We retrain our workforce.
We find newer, better things to do, right?
When cars came about, they did not make...
They made horse carriages obsolete,
right, but they didn't make humans obsolete. Anyways, getting back to edge,
definitely edge is super interesting, Brian. And the whole context behind that is just so much data being generated by those cameras,
by those sensors inside the stores, inside industrial complexes, that it needs to be
processed, decisions made right there. You don't have room to send that data over into the cloud
and then wait several milliseconds or seconds for a response to come back. So absolutely. But having said that, Edge has its own
set of challenges. And one of the things that you pointed out
was definitely that they are typically unmanned
or unmanaged, perhaps is the right word.
And that brings a whole
new concept of building hardware that is resilient, not just to conditions of the lab or a test environment, which it has already seen, but conditions that exist in real life. An intelligent device that's able to look at how
conditions are changing around it and make decisions to at least keep the ball
moving, right, before, you know, help can actually come and try to resolve
something. How many times have you and I been in the checkout lane and
the system is not responding and the lines build up? That's terrible for
customer satisfaction, and you can't always call in and resolve those challenges. So
a couple of things. A, there needs to be more storage at the edge, because that data needs to be
stored there, it needs to be processed there, absolutely. So there is this push to have,
you know, simpler, higher-performance, lower-footprint storage devices, including Fibre Channel HBAs
and storage at the edge, which would then provide this infrastructure.
But beyond that, while cost is definitely a factor for the edge, there is also this
need to make these devices much more autonomous.
One of the things that we have invested in is a technology called Fabric Performance Impact Notifications,
where Fibre Channel switches, which have
excellent visibility of the fabric actually inform all the
endpoints that, hey, that link isn't behaving that well, that
other device in your fabric is congested.
Just, you know, inform each other, just like how, you know, Google Maps tells you there's traffic ahead.
Right. So you can take a different route and still get to your date on time.
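To make that "traffic ahead" idea concrete, here is a rough, hypothetical sketch in Python, not the FPIN wire protocol or any vendor's multipathing code: an initiator that receives congestion or link-integrity notifications from the fabric simply down-weights those paths when choosing where to send the next I/O. The path names and severity values are invented for illustration.

```python
# Hypothetical sketch of notification-driven path selection, not the FPIN wire
# protocol or any vendor's multipathing implementation: the fabric reports a
# congested or degraded path, and the initiator steers new I/O to healthier ones.
from dataclasses import dataclass, field

@dataclass
class Path:
    name: str
    penalty: int = 0        # grows as congestion/link-integrity notifications arrive
    inflight: int = 0       # I/Os currently outstanding on this path

@dataclass
class Initiator:
    paths: list = field(default_factory=list)

    def on_fabric_notification(self, path_name: str, severity: int):
        """Record that the fabric flagged a path (e.g., congestion, link errors)."""
        for p in self.paths:
            if p.name == path_name:
                p.penalty += severity

    def pick_path(self) -> Path:
        """Prefer the least-penalized, least-busy path, like taking the detour."""
        return min(self.paths, key=lambda p: (p.penalty, p.inflight))

host = Initiator(paths=[Path("hba0:fabricA"), Path("hba1:fabricB")])
host.on_fabric_notification("hba0:fabricA", severity=5)   # fabric says: traffic ahead
chosen = host.pick_path()
print(f"next I/O goes via {chosen.name}")                 # expect hba1:fabricB
```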
Well, but that's good, though. I mean, I think, too, as well-managed as some of these edge infrastructure deployments are, there are still so many things going on.
As you said, we've got the cameras, we've got the checkout systems, we've got the credit card systems, we've got time clock.
I mean, if you can't get your people to swipe in, I mean, what do you do?
And the resiliency and the uptime required there in an environment where there's very little
human support in terms of IT staff. I mean, all of these things, I think, combine to create a
tremendous amount of value. You know, one thing that strikes me too is that the investments that
we've seen from Dell, from Lenovo, Supermicro, and others, on the infrastructure that they're dedicating for these use cases
that can handle hotter and colder temps,
that can handle more dust and debris.
I was just on site with Dell.
They've got these air filters with a little sensor behind it on the bezel
that can tell you when the thing needs to be cleaned.
The hardware guys are getting smarter about how to address these
concerns to keep the uptime there, to keep the UPSs and distributed power and all these other
things that are required. For all that complexity, though, I do love the message about smart software
helping organizations just do some of the blocking and tackling
and get out of the way
or help the data get to where it needs to go
without much human intervention.
I think that's a pretty nice message.
Yeah, Brian, and that's the mission, right?
And I think we have taken the first steps
with VMware, starting with 8.0,
where these notifications about conditions actually
show up and bubble up into the VMkernel stack.
And as time progresses, we hope to bring more of those autonomous decisions of choosing
the right paths to get to your destination, making other informed decisions.
All right.
This has been a great conversation.
I will link to the work we've done recently with you
on Fiber Channel.
We'll link to your adapters
so people can check that out.
I know you try to get that plug in
for your dual port guy.
So we'll make sure people can get more information
on that guy.
And yeah, I mean,
we're going to keep going down this path, keep working storage systems and switches and everything
else to show people what's out there, what the capabilities are. And we're pretty excited for it.
And like the guys said in our social media comments, when we plug these things in, they just
work. So there's a certain simplicity and peace of mind
that with very little configuration, it just goes. So thanks for doing the pod, Nishant. I appreciate
you jumping in today. Hey, thank you, Brian, and good work being done by the Storage Review team.
I love what you guys write and put out, and I really like the engagement that you have with the audience. Everybody is open and it's
conversational and it's just a matter-of-fact kind of thing. So keep up the good work. Thank you, sir.
All right, we appreciate it. Take care.