Grey Beards on Systems - 141: GreyBeards annual 2022 wrap-up podcast
Episode Date: December 30, 2022
Well, it has been another year, and time for our annual year-end wrap-up. Since Covid hit, every year has certainly been interesting. This year we saw the start of back-in-person conferences, which was a welcome change from the Covid lockdown. We are very glad to start seeing everybody again.
Transcript
Hey everybody, Ray Lucchesi here.
Jason Collier here.
With Keith Townsend.
Welcome to another episode of the GreyBeards on Storage podcast,
a show where we get GreyBeards bloggers together with storage system vendors
to discuss upcoming products, technologies, and trends affecting the data center today.
Hey everybody, Ray Lucchesi here with Keith and Jason.
This is our annual year-end podcast where we discuss the year's technology trends and what to look forward to for the next year.
So, Keith and Jason, what would you like to talk about?
You know us. We can talk about anything for as long as you've got.
Well, let's talk CXL, because CXL seems to be a pretty hot topic. It was a hot topic at Flash Memory Summit,
and it's been a hot topic at Supercomputing and stuff like that.
So what do we know about CXL, gents?
So I'll let Jason, who's the more technical of the two of us,
talk about what it is technically.
And I'll chime in on what it means for kind of the data center at large.
Yeah, well, so the big news around CXL was that you needed a CPU to support CXL functionality,
and with AMD's latest Genoa launch, CXL is now in CPUs. Compute Express Link, in its first iteration,
the version 1 spec, really allows you to extend a memory footprint beyond standard DIMMs,
extending it with CXL.mem out onto the PCIe bus. With that, I think we've started to see some pretty interesting
things. I was at Supercomputing this year and I saw a couple of interesting companies where they have,
basically, a PCIe card that's literally full of DDR4 DIMMs. And I think one of the
interesting areas where this could be utilized in the data center is extending
the life of DDR4, because a lot of the newer CPUs, like the Genoa CPU,
which is AMD's Zen 4 architecture, support only DDR5.
And if you've got a large investment in DDR4,
you'll have the ability to utilize that DDR4 investment,
basically hung off the PCIe bus.
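As an aside: a DDR4 expander card like that typically shows up to Linux as a NUMA node that has memory but no CPUs. A minimal sketch for spotting one from userspace, assuming the standard sysfs layout (whether any given node is really CXL-backed varies by machine):

```python
# Sketch: list NUMA nodes and flag CPU-less ones, which is how a CXL or
# other far-tier memory expander commonly appears. Linux-only; on a box
# with no such hardware every node simply reports its CPUs.
import os
import re

NODES = "/sys/devices/system/node"

for node in sorted(n for n in os.listdir(NODES) if re.fullmatch(r"node\d+", n)):
    with open(os.path.join(NODES, node, "cpulist")) as f:
        cpus = f.read().strip()
    tag = "CPU-less: possible CXL/far memory tier" if not cpus else f"cpus {cpus}"
    print(f"{node}: {tag}")
```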
Now, clearly it's not as fast, right?
How can a CPU come out with only support for DDR5? Is it because the speeds of the CPU require that sort of technology?
That's a different discussion.
It is.
Well, honestly, it gets into, basically, if you want to support more stuff, you're effectively increasing the die size. So you've got to decide what you develop for the next generation. It literally comes down to more die size and more power. Every time you think, oh, if we only had this support, you're asking for more power and more die capacity.
Maybe if you cut the cores from like 256 to 254 or something like that.
Right.
All kinds of weird stuff happens like that in the CPU world.
So, yeah.
But, yeah, I'm really excited about seeing some of those products.
And we saw a number of those at Supercomputing, from companies that are kind of coming out of their startup mode.
And then there are other places I think you can take the CXL as well.
And I'm sure we'll get into that. At Flash Memory Summit, there was a lot of talk about
putting flash storage behind the DIMMs in a CXL card, so you'd have effectively a storage
hierarchy on a PCIe card, which is pretty bizarre when you think about it.
But, you know, it's memory, it's paging, it's virtualizing,
and it's supporting, you know, quick access to the right data, I guess.
Yeah, so this past year, and it's timely, I did a sponsored thing for Micron where I talked about the data center of the future. If you think about the data center of the future and what people want, from cloud providers down to large enterprises, or anyone consolidating and carving up compute, you want what HPE called, back in 2015, memory-driven compute. Basically a rack of gear with a pool of memory and NVMe storage, and you could compose that to be whatever you needed at the time.
I got you. Yeah.
The hope is that CXL is the glue to all of that, because of what Jason just said. I can put PCIe cards into a server and then load them with DIMMs that are either flash-based or DDR4, even DDR5, and then have a memory controller that automatically distributes hot and cold data, and that's not-so-cold in a PCIe card. Now take that PCIe card and put it on a PCIe switch, and this becomes really interesting. Maybe not at DDR4 speeds, but flash memory on DIMMs on a PCIe bus is super interesting stuff.
Yeah, well, I mean, it's there today, obviously, through SSDs and things of that nature, NVMe and that stuff. But this takes it up a whole other level, because now you're putting effectively almost memory-bus types of bandwidth in front of a gang of flash and stuff like that.
So you could create effectively a storage hierarchy,
and that's just within a server.
Now you try to put that out on a PCIe switch.
It becomes a whole different game.
It's like...
Yeah, actually, I've talked to Howard Marks about this at VMware Explore San Francisco. We did a jam session just on NAND flash, which we haven't talked about in this year-end wrap-up, the whole 200-layer-plus NAND flash.
And I asked him about the future of CXL.
He pooh-poohed it a little bit.
He said, you know what, there are challenges when you're talking about sharing memory
across systems; CXL is hard to scale.
And I'm hoping that as we look into the new year,
that we see solutions for those types of problems.
Yeah. And you see it changing in the specification as well. I mean, they keep iterating on it; new versions keep coming up. It's been out there for a while now.
Right.
And you're finally seeing it start to get into the hardware, and they've got the specification to where it's moved out to basically having kind of the switching interconnect.
And I think, like you said, the tiering is going to be a key component, one of the things that you'll be able to do with it.
When you think about it, you've got a system with DDR5 in it; that's going to be your closest, fastest tier, and when you need those ultra-low-latency components, it's going to be great for that. But say you've got another application where you just need a lot of big, slower memory. Guess what? It's hanging out on the PCIe bus. Is there going to be more latency to it? Yes, it takes longer to get out there, literally to the point where it takes longer to get to the farthest PCIe slot than the closest one when you're talking about memory subsystems and memory copies. But not all applications need that. And the same thing once you get into the switching, when you start talking about the composability piece: it doesn't even necessarily have to be in this box, right? It can be a device connected via a PCIe switch.
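To make that tiering concrete, here's a minimal sketch of how an application could steer a cold buffer onto the far tier with libnuma, on the assumption that the CXL memory appears as a CPU-less NUMA node (node 1 here is purely illustrative):

```python
# Sketch: place a "cold" 1 GiB buffer on a far NUMA node via libnuma.
# Assumes libnuma is installed and that node 1 is the CXL-backed node
# on this particular machine; check your topology before trusting it.
import ctypes
import ctypes.util

libpath = ctypes.util.find_library("numa")
if libpath is None:
    raise RuntimeError("libnuma not found")
numa = ctypes.CDLL(libpath)

numa.numa_alloc_onnode.restype = ctypes.c_void_p
numa.numa_alloc_onnode.argtypes = [ctypes.c_size_t, ctypes.c_int]
numa.numa_free.argtypes = [ctypes.c_void_p, ctypes.c_size_t]

if numa.numa_available() < 0:
    raise RuntimeError("kernel has no NUMA support")

CXL_NODE = 1                 # hypothetical far-tier node id
size = 1 << 30               # 1 GiB cold buffer

cold = numa.numa_alloc_onnode(size, CXL_NODE)   # pages land on the far tier
# ... hot structures stay in ordinary allocations on the local DDR5 node ...
numa.numa_free(cold, size)
```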
And there's going to be a lot of interesting, I think, applications for it.
The question is, you know, when will mainstream applications pick it up?
I mean, what we're going to see first is like the hyperscalers are going to pick this stuff up, right?
They're developing their own software for their own clouds and very specific optimizations.
I mean, if you look, mainstream applications and things like that will start taking advantage of this sort of stuff.
They will, but they'll be behind the hyperscaler curve. The hyperscalers are going to be the first adopters, but I think the others will be close after.
So just think: if you're AWS and you could have an Aurora DB with 64 terabytes of flash or memory,
even if it's higher latency than on-bus memory,
it's still incredibly fast.
And that really changes application architectures.
You know, all of the fancy things you need to do in a database to make it super responsive?
You no longer have to do that when you can have as much high-speed memory as the size of your database.
The limit of the size of Aurora DB is 64 terabytes.
What do you think, Ray?
Has it come a long way since counting how many spinning disks you've got to put behind your RAID system?
We were talking about shared memory back in the 90s, quite frankly.
Maybe even the 80s, they were trying to put together a shared memory solution that could talk to multiple Z processors and all this stuff.
And it's still there.
I mean, the need for more memory never goes away for some reason, especially now with the database functionality you can get from an in-memory database.
And, you know, Aurora is not the only one out there.
There's plenty of others that play that space.
Well, just think if you can have an Oracle DB, you can just literally take your Oracle DB and not have to rewrite your software for an in-memory database.
And you're just serving it up with, you know, what's essentially in-memory.
There'll be some inefficiency there, but still an incredibly easy way to get performance
without re-architecting that.
And if you start bringing the composability thing in there, now you're talking real interest, because now you can carve up that 64 terabytes across, say, 128 processors if you wanted to, and have each one run an application with half a terabyte, or multiple terabytes, depending on what they need and stuff like that.
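The arithmetic on that carve-up is simple enough to sanity-check (illustrative numbers, using the 64 TB ceiling mentioned above):

```python
# Back-of-the-envelope: carve a 64 TB memory pool across composed instances.
TOTAL_TB = 64
INSTANCES = 128

per_instance_gb = TOTAL_TB * 1024 / INSTANCES
print(f"{per_instance_gb:.0f} GB per instance")  # 512 GB: half a terabyte each
```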
Ray, don't feel too bad. We won't necessarily be talking about how many spindles we need to get the number of IOPS. We will be talking about how many DDR5 versus DDR4 modules we're going to need.
So the CXL space is kind of opening up a whole new dimension to what can be done in the enterprise.
And so you think the hyperscalers are going to be a big adopter of this sort of thing?
Yeah, absolutely. Just think about the scale of what they're trying to do and their move to DDR5. But let's talk about not just the hyperscalers but the data center. How's this packaged? Today, if I want CXL and DDR5 and I want it packaged, I don't want to take a motherboard and do what hyperscalers do. HPE has their 11th-gen ProLiant servers; I go out and buy that, I go buy a CXL card and plop it in. The hyperscalers have a whole different layer of engineering and capability in what they do. I can see them taking a bunch of the DDR4 they have now and putting that onto a PCIe switch.
With DIMMs on it? Stuff like that? Is that how this would be played out?
It can play out that way. I'm not as smart as Dr. Werner Vogels, to know exactly how they would do it. I'm sure they're already playing around with it.
When you stop and think about it, the sheer scale of hyperscalers, I guess we call them hyperscalers for a reason, right? It's one thing when you've got a couple of dozen DDR4 DIMMs laying around in a few racks. It's another thing when you've got a couple hundred million of them laying around, effectively, with the new processors and stuff like that.
So the hardware's got to be there. Now the operating system's got to support it. Now the functionality has to be available to the underlying drivers and stuff like that. Is all that in place today? I mean, you just mentioned the CPUs are now coming online, right?
Yeah, and it's all pretty much in. Basically, everybody's using Linux kernel stuff, and CXL has been in the Linux kernel for quite a while. And a lot of the hyperscalers use their own custom distributions of the major flavors, but they're usually on the latest, greatest, kind of bleeding-edge kernel stuff, depending on what application they're wanting to run. So yeah, they've had the capability for a while.
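For a quick look at what that kernel support exposes, recent kernels with the CXL subsystem enabled publish enumerated devices under sysfs. A tiny sketch (the path follows the upstream convention but is absent on machines without CXL support):

```python
# Sketch: list whatever the Linux CXL subsystem has enumerated.
# Expect entries like mem0 or decoder0.0 only on real CXL hardware.
import os

CXL_SYSFS = "/sys/bus/cxl/devices"

if os.path.isdir(CXL_SYSFS):
    for dev in sorted(os.listdir(CXL_SYSFS)):
        print("CXL device:", dev)
else:
    print("no CXL devices visible (hardware absent or kernel lacks CXL)")
```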
I guess one of the questions I was thinking about is: GPUs are sitting out there with 40, 80 gig, going to 160 gig, of memory per GPU.
I mean, could something like this support GPU processing as well?
Or do you see that as a distinct game?
Depends on the GPU, I guess.
Yeah. Some of the conversations that I've had, people are thinking about putting GPUs behind CXL.
What? Wait a minute. Wait a minute. Now you're putting compute behind outboard memory. I don't understand this logic.
You know, we can look at Apple as the example in the consumer space, being able to put shared memory between the GPU and the general-purpose CPU.
We know that's standard practice today.
So the low-hanging fruit is that we know CXL, in theory, should be able to provide shared memory to the GPU.
So what happens then? I think we talked about this in an earlier podcast this year.
What happens when I can give a GPU a terabyte of memory?
It becomes a different game.
And that whole memory copy over the PCIe bus is the number one latency factor you've got in solving this problem: getting the data from main memory into GPU memory. If that can effectively be a shared address space, then that cuts the copy out. You can just do direct manipulation of it right on that bus. And I think that's one of the big drivers for CXL, kind of the 2.0 and 3.0 specs.
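A rough feel for why that copy dominates, using assumed round numbers rather than anything from the discussion:

```python
# Sketch: ballpark time to stage a GPU's memory over the PCIe bus.
PCIE_GEN5_X16_GBPS = 64   # approximate one-direction PCIe 5.0 x16 bandwidth
GPU_MEM_GB = 80           # e.g. an 80 GB accelerator

seconds = GPU_MEM_GB / PCIE_GEN5_X16_GBPS
print(f"~{seconds:.2f} s to stage {GPU_MEM_GB} GB once")  # roughly 1.25 s a pass
# A shared CXL address space trades that bulk copy for per-access latency,
# which is the win being described for the 2.0/3.0-era use cases.
```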
Yeah, well, Apple's got their own silicon for GPUs and for CPUs, so they can play the game with shared memory and stuff like that, and it all works fine.
Well, they can do it up to a limit.
I was just reading articles the past couple of days that the Mac Pro is delayed because they can't figure out memory. The rumors are that they can only get it up to 192 or 384 gig of RAM, and that's nowhere near enough compared to the 1.5 terabytes they can put in the x86-based systems. So again, they're running into what Jason was talking about, the die problem. You can only put so much on a die before you run into physical die limits, and Apple is hitting that for these higher-end applications. And this is where CXL kind of solves that problem.
Yeah. And that memory that's on there, they're effectively just putting the memory on the die. It's been in compute for a long time; it's typically called HBM, high-bandwidth memory, that's integrated into the CPU. AMD's had a few CPUs that utilize it, mainly for, you know, three-letter-agency kind of things. And that's how it works in the DPUs too. The Pensando parts, specifically the first one, it was basically HBM that was on the Arm die in there. And it's used a lot in GPUs as well. So it's all tech that's been around for a while. But it's like, how much power can you get to a socket? It's amazing when you dig into it and find out it's basically all about what power specification you can get, how much power you can get into a single socket. That's one of the things when Genoa popped out: it's considerably more power to the socket than the prior generation.
Yeah, there's all sorts of new technology coming out on the chips and stuff like that.
I think it's something like 400 watts a socket, right?
Yeah, yeah. It's almost like a GPU, only worse.
Not quite.
Oh, the funny one to really dial down into: look at Frontier. I don't know if you guys have seen any of those racks in the Frontier supercomputer. It's a little bit wider than two racks, but that's effectively one module. Typically you think of a rack in a data center pulling something like 50 kVA. That thing's like 440 kVA, right? So it's a lot of power to be dissipated. And what's interesting too is, as you start looking at the power consumption, the cooling requirements start to change. That's why you're seeing a lot more liquid cooling technology really start to come out. You want to put just a little bit more on that die, and now you've got to think of more unique cooling strategies.
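Putting those power numbers side by side, with the rough figures from the discussion:

```python
# Sketch: how many 400 W sockets the quoted rack budgets could feed.
RACK_KW_TYPICAL = 50     # conventional dense data-center rack, roughly
RACK_KW_FRONTIER = 440   # roughly one Frontier cabinet, per the discussion
SOCKET_W = 400           # the per-socket figure mentioned for Genoa-class CPUs

print(RACK_KW_TYPICAL * 1000 // SOCKET_W, "sockets' worth per typical rack")      # 125
print(RACK_KW_FRONTIER * 1000 // SOCKET_W, "sockets' worth per Frontier cabinet") # 1100
```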
Jason, you're going off the deep end here.
Okay, let's get back down to earth.
Okay, so we've beaten the CXL horse. I think there's a future of storage in there someplace, but we haven't quite touched on that.
The other thing that's been of interest lately is that the whole cloud native space is starting
to come online.
The ecosystem is starting to be a major consideration.
We've always talked about Kubernetes and where Kubernetes is headed. I think the Kubernetes debate has pretty much settled, but the ecosystem needs surrounding Kubernetes and that sort of thing are a whole different discussion. What do you guys think?
So I was at re:Invent, beginning of this month and end of last month, and the conversation from Amazon directly was that hybrid cloud, which is their code word for multi-cloud, is a thing, because that's where customers are and that's where customers will be. There is no mass migration to the cloud, specifically to AWS. Customers will have everything.
And with that insight... so, Ray, you and I have gotten into these debates, like how big is Kubernetes or how big will it be?
Kubernetes in the cloud native ecosystem is big, no doubt, and it's fast-growing, but it's still just a subset of everything that
we have to do in enterprise IT because nothing ever goes away. So the question is, how do you
build services and maintain your environment around that cloud native concept? If the idea
is to build applications faster, what the cloud native folks are running into is that enterprise IT is still enterprise IT.
You still have to secure the data. You still have to have compliance.
You still have skill operations challenges.
And Kubernetes is basically a bag of Lego. Or, I'm going to stop saying Kubernetes: cloud native is a bag of Lego.
Pick your service mesh, pick your message bus, pick your Kubernetes distribution,
pick your functions-as-a-service, whether it's OpenFaaS or Knative.
The choices are boundless. And what enterprises are discovering is that their Kubernetes distributions
and their cloud-native activities are balkanized.
They get three different monitoring solutions,
just like we have in traditional enterprise IT.
You look at the landscape for cloud native, all of those components. You look at landscape.cncf.io, right? You find there are, what, 1,193 different projects that cover all of that stuff. And there's, like you said, everything from your scheduling and orchestration, coordination and service discovery, remote procedure calls, service proxies, service meshes, all of this stuff. And you've got to get it all to work together.
Right.
And it's a challenge, because the landscape is huge.
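For reference, that project count can be pulled from the data behind landscape.cncf.io. A sketch, assuming the cncf/landscape repo keeps its landscape.yml layout (the count drifts week to week):

```python
# Sketch: count entries in the CNCF landscape's source data.
# Assumes landscape.yml sits at the repo root with a
# category -> subcategories -> items structure; both could change.
import urllib.request
import yaml  # pip install pyyaml

URL = "https://raw.githubusercontent.com/cncf/landscape/master/landscape.yml"

data = yaml.safe_load(urllib.request.urlopen(URL).read())
total = sum(
    len(sub.get("items") or [])
    for cat in data["landscape"]
    for sub in cat.get("subcategories") or []
)
print("landscape entries:", total)
```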
Yeah.
The CNCF landscape chart and stuff like that is extremely big. And the whole CI/CD pipeline and all that stuff is coming online. And MLflow kinds of stuff are starting to come out of the woodwork. So it's getting pretty damn complex to do any of this stuff.
But enterprise has always had complexity to some extent.
And it's never gone away.
Like you said, the three monitoring solutions
have kind of been in the enterprise space for a long time.
But I think it's almost a different order of magnitude, I guess, because of open source, or just the activity that's going on in that space, which is multiplying all these challenges tenfold.
Yeah, so I'll sneak in the concept of the controversy around SuperCloud.
Don't go there.
Whether you like it or dislike it is not relevant. The point is that there's choice. And this is something that, in the enterprise, we haven't had before. We didn't have this layer of choice.
Even with Linux you had some choice, but it basically boiled down to Ubuntu, Red Hat, SUSE, kind of the big three. Cloud native is nothing like that. I can choose between 10 different service meshes. I can choose between service meshes as products; there are probably 20 companies out there with an Envoy-based service mesh. And there's no clear winner in any of that. So as consumers, as IT decision makers, what do you do? In the good old days, Cisco told me I shall buy this switch. Now it's kind of like, yeah, you can have any streaming service you want, right?
Right.
And it all adds up to basically the same amount of cost you were spending before. So this has always been my negativity towards cloud native. Yes, it gives us tremendous power, but we're monkeys with a potato gun. This is not for us, and when I say us, I mean most enterprises. We just want OpenShift, or we want Tanzu Application Platform. We want a product we can implement.
You can use those.
They exist to a large extent because of the plethora of solutions in those spaces.
And, you know, OpenShift is a fine example of this.
It kind of packages all that stuff for you. You don't have to make a lot more decisions with respect to that.
Yeah.
Cloud native is like the menu at the Cheesecake Factory, right?
It's got a little bit of everything on there.
Yeah, I think, Ray, you make a good point.
I think the problem is, you have to make it to a KubeCon one day.
It goes against the grain of that original community to have a packaged product.
Like the whole point of the movement is that you don't need a Red Hat or you don't need a VMware.
And you should be able to just take these bits and pieces and build what you need so that you're not beholden to one of these companies.
And I think we're just in the throes.
This past year, we're realizing that that doesn't scale.
I don't think any type of new technology gets itself into the enterprise ecosystem any differently.
There's always a proliferation of tools and software and ecosystem that
surrounds it. And then over time, some of those guys die off.
Some of those guys are bought and merged.
Some of those guys are successful and take off.
And you've got to think about the skill set and the talent pool of the people, right? You get somebody who says, hey, I'm going to build a service mesh on Istio, and then a guy who loves Linkerd comes in, right? It's interesting, because back in the good old days of Cisco, you could have a guy with a Cisco certification and he knows everything about Cisco. And that was also a good way for the vendor to keep entrenched gear in there.
But, yeah, there's really not that for cloud native. There are more popular things, but there's really not a defined way to do it.
Right. The Cloud Native Computing Foundation has a role to play in some of this stuff. Obviously, they're there; they have a list of all the platforms, all the solutions available in each and every category. But it's almost like SNIA: they need to have a decent definition of what the boundaries are between these tools. And then some of the tools can start to be more or less successful in those spaces. I don't know. It seems to me that we're still in the beginning throes of this
transformation of the world of enterprise IT and IT in general
to a more cloud-native space.
Like most things open source, for real success to happen, you need a benevolent dictator to kind of set a direction, right?
And guys, that's why cloud native isn't going to win. There's no winner and loser in this. There's not. We're going to adopt some cloud native, quite a bit of it, an awful lot of cloud native. But that DEC Alpha system that's on the manufacturing floor is not going to run Kubernetes. It's still going to run that Unix OS on that DEC Alpha system. The HP-UX that I'm running is still going to be HP-UX. The Windows systems are still going to be Windows. And we will have all the cloud native.
And this is the realization Amazon has come to.
Amazon has accepted the fact that they're not going to win all of the workloads.
How do they get as much as possible?
And it all will not be just one thing.
Well, that's also why Amazon created Snow and Outposts, right?
You know, people are building. If you looked at Werner Vogels' technical keynote, it was about Lambda and their message bus. Whether you're putting stuff in applications and containers or whatever, that's just an implementation detail. The control plane is not the CNCF cloud-native control plane. The control plane is the Amazon control plane.
So even if I said everything is going to go cloud, even if I agree with that premise, it's going to be, well, which cloud? Is it CNCF?
Is it Amazon?
Is it Azure?
It's still going to be bifurcated in several different ways.
And they're all effectively built the same but different, right?
And managed differently.
Oh, yeah.
Yeah, the whole cloud native thing is like seeing the early days of enterprise IT. It's just a proliferation of ecosystems, a proliferation of, I'll call it middleware, though it's not really middleware anymore, all that stuff coming out of the woodwork. And over time, obviously, it's going to become less diverse, with fewer options there.
But it's going to take time.
You said something about how you don't think packages are the way to go. But I think the enterprise likes packages.
It's the startups that care less about packages. They want to roll their
own. They want to have the best of everything, etc.
And Ray, you're absolutely right. That's the conflict. The enterprises want OpenShift. Enterprises want TAP. They want packaged software. Startups and the community driving these projects don't want the lock-in, that vendor lock-in. So going into the new year, that's an interesting thing: the enterprise folks are showing up at these conferences, and they're bringing the suits and ties with them. They're bringing customers to have these conversations. So move over, hoodies. The guys and gals with the pantsuits are coming in.
That could be the key. I'm thinking, is that why IBM bought up OpenShift? Because they saw this coming? I don't know.
IBM is an AR customer of mine, and OpenShift has been a definite accelerator for the rest of their business. The OpenShift numbers themselves? IBM would disagree, but I'm not impressed with the number of licensed OpenShift users. The rest of the stuff it brings to IBM, though, has definitely been a huge boost.
I don't think we've even touched the surface of cloud native,
but I'm going to have to move on to something else here.
So the big news this year was VMware and Broadcom acquiring it, and that sort of stuff. There was a lot of, I'll call it, job transition occurring at VMware. The whole world is trying to get their heads wrapped around: what does this mean for my enterprise if VMware is purchased by Broadcom, and things of that nature? You guys see any of the pushback on that sort of stuff?
Nothing but pushback.
I haven't... well, that's not true. Within our community of analysts, especially the financial analysts, they love the deal. This is a great deal for both Broadcom investors and VMware investors. VMware has been a static stock for the past three to five years, and Broadcom knows how to wring profits out of the CAs and Symantecs of the world. So, a great deal on the financial side, and the financial analysts believe in it 100 percent. For the rest of us that focus on technology, we're kind of scratching our heads and thinking, hmm, how does this benefit customers, or even the industry at large?
VMware is out here doing some brave things with TAP, Tanzu, Cross-Cloud, the whole hybrid cloud thing. They're playing big time. They have a bunch of bets on stuff like blockchain. And their R&D: they spend more money, and a bigger percentage of their revenues, on R&D than Broadcom does. So, from an industry perspective, is this good for the industry at large? I'd have to say resoundingly no. I'm not excited about the deal at all. I don't see any advantage to the customers. And we'll get into why I think there's a possibility this deal won't go through. But I'd love to hear your two thoughts. Is this an exciting deal?
Well, I can tell you that customers are not excited about it. Current VMware customers out there are not excited about the deal. Probably one of the biggest concerns I heard was, as you said, Broadcom's exceptional at wringing profits out of a company, and they made it pretty clear that one of the ways they're going to do that is by charging more and moving to more of a subscription style of services revenue, which equates to larger dollar amounts coming out of a customer's pocket to keep their VMware installation going. I had a lot of customer conversations at VMworld this year, and that was probably one of the biggest concerns I heard.
Jason and I had a little talk at VMworld, at your digs, actually. We published that; you should link it in the show notes.
Yeah, we should probably do that.
It's interesting.
So when Dell came along and purchased EMC, I had the same sort of question in my mind.
Dell had a fairly low percentage of their revenues in R&D, and EMC was actually a pretty good spender of money for R&D and stuff like that.
In the end, did the Dell-EMC thing seem like it worked out for customers?
I think so.
It's not the same company that it was at one time, obviously.
But it's a similar type of situation.
It worked out for Dell customers.
If I was an HPE customer, did I look at it and say, oh, wow, I really love that VMware got purchased by Dell.
If I was Lenovo or Cisco customer, if I was not a Dell customer, did this deal work out for me?
And this is why we're getting into why the EU is looking at this deal more closely than they did the Dell VMware.
Do you think that Dell had an unfair advantage when they purchased VMware, or EMC?
Absolutely. There was a tight relationship between Dell and VMware. If I'm a Dell engineer, I can just pick up the phone to my VMware counterpart and say, hey, come out to Austin or Hopkinton and help me work through this problem I'm having automating the VxRail. You know, let's circle back to 2015, 2016, when Dell did not have an HCI solution of their own.
They partnered with Nutanix. They partnered with Jason, your previous company.
There was great HCI competition.
Fast forward to last year, when I did our VxRail review. The product is the most integrated of all the HCI solutions with vSphere, bar none. Nutanix has some slick integration, but nothing compares to VxRail when it comes to integration with vSphere.
There's a button when you're setting up VCF, VMware Cloud Foundation, that says, is this being deployed on a VxRail system?
There's special integration with that.
That is... I don't know, you tell me: is that a fair or unfair advantage over HPE dHCI and other solutions on the market?
You know, the other side of this is that VxRail was very popular and very successful, and the engineering dollars and R&D are going to go to that sort of solution first, quite frankly. So, yes and no; it's sort of a combination of things to some extent. And now Nutanix is up for sale.
Yeah. Would Nutanix be up for sale if VxRail didn't exist?
Well, I think Nutanix has its own problems.
I mean, it's been harder for them to gain adoption in the enterprise, harder to gain adoption in some of the spaces. It's sort of a niche kind of product, as far as I can tell. Most of the people that use Nutanix seem to like it, but it never got the traction that VMware did.
They did not get traction. And this is the question the EU was asking: did it not get traction because it's a niche product? VxRail and Nutanix, it's the same kind of product. They compete when you're talking sales, and it's the Dell sales team competing against the Nutanix sales team. They are directly competitive solutions.
Right.
So fast forward to the Broadcom deal and the EU and other regulators, and it's not just the EU looking at the deal. The EU can take a look and say, do we want another Dell-VMware vertical stack? Do we want this inorganic stack to come together?
Broadcom doesn't sell HCI solutions today, right?
Yeah, but they sell DPUs and SmartNICs.
Yeah. And you know who else sells DPUs and SmartNICs? Intel, NVIDIA, and AMD. If you're Broadcom, are you just going to open up and say, VMware, spend R&D dollars equally with the other companies? And even if you're the other companies, are you going to spend money to further Broadcom's knowledge of what you're doing?
This isn't like the days of EMC, when EMC was a storage company that owned VMware, and to all the x86 guys and server guys, VMware was the neutral player in the mix. VMware would not be neutral when it comes to DPUs and SmartNICs. And that's the future. We've literally had podcasts about how DPUs and SmartNICs and all of that are the future innovation of the data center and the hypervisor. It is the chase to be the Nitro of the private cloud. And Broadcom will own that company, and they will have an unfair advantage, just like Dell did.
Now, the question is, is that anti-competitive or not?
I think the EU is leaning towards that being anti-competitive.
It'd be interesting to see this deal unwind like that. I don't know what it would mean,
quite frankly. The Nutanix thing is interesting, but it's sort of ancillary to this whole
discussion. I mean, is it important that a company's acquisition be good for the customer, or good for the market, or good for the investors? Obviously the investors make out here, right?
And that's the funny thing about the EU. In the EU, the acquisition has to be good for the customer. In the US, it just has to be not anti-competitive. So it is a different question, and this is why I think the biggest challenge for the Broadcom-VMware deal is how the EU looks at competition differently than how the US looks at competition.
Yeah, I guess we'll have to wait till the new year and see how this all plays out. Might be some litigation here, but that's it.
Yeah, the funny thing is that, while in the U.S. the Microsoft and Activision thing is getting challenged in court, once the EU makes a decision, I don't think there's a mechanism to challenge it. It's kind of a final thing.
Interesting.
Yeah, the EU doesn't have a setup like we do here in the U.S. where you can go and challenge. It's not that the EU is suing Broadcom and VMware. The EU just says no, and the answer is no.
All right, gents. I think we've covered enough ground. We could probably do this for another hour or so with a couple more topics, but I think this is good.
So, Keith and Jason, any last items you'd like to discuss before we leave?
No, it's been a full year. We just touched briefly on SuperCloud and this whole idea. I don't particularly like the label, but I do like the concept that we need a better definition for this thing that's happening in the enterprise and new shops. It's not multi-cloud; it's something other than that. SuperCloud may be a bad label, but it's a great concept to define what's happening, more than just having multiple clouds.
Yeah.
Jason?
I'll tell you, it is good that the industry shows are back.
It was great seeing you guys in person.
I think that's it.
That's it for now.
Bye, Keith.
Bye, Jason.
Bye, Ray.
Until next time.
Next time, we will talk to another system storage technology person.
Any questions you want us to ask, please let us know.
And if you enjoy our podcast, tell your friends about it.
Please review us on Apple Podcasts, Google Play, and Spotify, as this will help get the word out. Thank you.