Grey Beards on Systems - 159: GreyBeards Year End 2023 Wrap Up
Episode Date: December 29, 2023. Jason and Keith joined Ray for our annual year-end wrap up and look ahead to 2024. I planned to discuss infrastructure technical topics but was overruled. Once we started talking AI, we couldn't stop. It's hard to realize that generative AI, and ChatGPT in particular, haven't been around that long. We discussed some practical …
Transcript
Hey everybody, Ray Lucchesi here.
Jason Collier here.
With Keith Townsend.
Welcome to another sponsored episode of the Greybeards on Storage podcast,
a show where we get Greybeards bloggers together with storage system vendors
to discuss upcoming products and technologies to look for in the next year.
Keith and Jason, what would you like to talk about today?
Keith?
AI, man. What do you mean? What do I want to talk about today?
So AI has got a lot of different perspectives: the whole thing about ChatGPT and GPT-3, 4, and 5, and OpenAI and stuff. It's all generative AI. You want to talk the generative AI problem-solution world. So, where did I see you last? Was it Supercomputing? And you were saying that you had been following AI since the dawn of time. We're not even having that conversation anymore, because I think the computer scientist view of this is that generative AI, classified as machine learning, is now generally accepted as AI.
And generative AI has taken all the air out of the room, including for the rest of the AI world.
And there's plenty of other AI, great AI use cases.
Right, right, right. Well, yeah, the generative AI is AI, obviously,
and it's got its nuances with respect to how they embed the tokens
and the attention logic and all that stuff.
But in the end, it's sort of supervised learning on real data. You know, with the text, they just kind of knock out a word and try to predict what the word should be. There's a lot of sophistication in this world, and there's a lot of parameters and a lot of hardware to make this all happen, but in the end it's supervised machine learning kind of stuff. Jason, what are you seeing from the processor side of things?
Oh yeah, well, clearly that is a huge area, basically in both CPU and GPU. CPU specifically, in the inferencing world, there's just a lot going on there. And I look back at this last year, 2023, and think of some of the areas where I've been using generative AI. This would be a good question to go around: some of the use cases that you guys have been using generative AI for.
But honestly, one of the ones I've been doing,
anytime I need to write like, you know,
a bash script, a PowerShell script, anything like that,
I've actually been using generative AI
to do that quite a bit.
No way.
Oh yeah, it's awesome.
Oh yeah, you can code all that type of stuff or what?
So actually, I just go into ChatGPT. An example of this: PowerShell. I literally will ask ChatGPT, hey, write me a PowerShell script that'll connect into my vCenter server, do a listing of a folder, and output that to a CSV file.
And it writes the code.
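As a rough illustration, and not anything generated on the show: the folder-listing-to-CSV ask Jason describes maps to just a few lines of code. Here's a plain-Python stand-in (the real script would use VMware's PowerCLI against vCenter; this version just lists a local directory, and the column names are made up):

```python
import csv
import os

def folder_listing_to_csv(folder: str, out_csv: str) -> int:
    """List each entry in `folder` and write name/size/mtime rows to a CSV."""
    rows = []
    with os.scandir(folder) as entries:
        for entry in entries:
            info = entry.stat()
            rows.append((entry.name, info.st_size, int(info.st_mtime)))
    with open(out_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(("name", "size_bytes", "mtime_epoch"))
        writer.writerows(sorted(rows))
    return len(rows)  # number of entries written
```

The point of Jason's workflow is that ChatGPT produces something of this shape in seconds, and you spend your time on the last 10 to 20 percent: credentials, error handling, the odd wrong property name.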
Right.
Now, I usually find it'll get probably about 80 to 90 percent there.
And then there's a few tweaks that I always end up making.
But when you think about it, that's just mundane code work, right? When you're writing stuff, you'll come up with a series of libraries that you always use, like getting a listing of VMs in a folder kind of thing, and then you reuse that code nonstop. But one of the things that you can do is have it go in, pick a specific thing, and then say, okay, now let's do some operations on a VM. Let's start them, stop them. Let's change this configuration on all VMs that are named this. And you can kind of just describe it and have it write the code. And like I said, it'll get you about 90% of the way there without having to do any of the legwork.
Yeah. So I'm doing like the ultimate experiment with this, the ultimate social experiment. I have a buddy, well, let's just say he's in his mid-sixties,
and about eight years ago he contracted out to have this golf tournament program written. He wants to basically sell subscriptions of this to golf courses. So every time he needs a new report, etc., he'd hire contract labor to do that. And I said, you know what would be a really interesting experiment? To see if you can get ChatGPT to modify your code for you. Instead of paying contract labor a few hundred dollars every time you need to make a change, does a $20 a month investment get you there? So not only is he learning kind of prompt engineering, to do what you've been doing, Jason, with infrastructure code, but actually doing this with customer-facing code.
And his first step has been: he's taking all of the code files, feeding them into ChatGPT, and asking ChatGPT to document the code, because the developers that originally developed the code did a horrible job at documentation. Basically, there isn't any. I look at the code and I say, I can't read this. And ChatGPT, while it doesn't know the context that this is a golf application, which I guess you can tell it, is doing a fairly good job of documenting the code.
And then his next step will be to like edit a report or generate a new report.
So, you know, I know there's some naysayers out there,
but I don't know if you get more practical than that.
Yeah. In a Slack chat, a guy had asked if there was any Copilot-like functionality or ChatGPT-like functionality for RPG II.
I said, you've got to be kidding me.
The problem is there's just not enough RPG II out there in the world that's public that could be used to even understand what the thing looks like, let alone how to modify it or understand it or even document it.
So there are obviously, you know, where there's a lot of data,
I think you're going to find that something like ChatGPT
or its follow-ons are going to be able to manipulate it,
understand it, document it,
maybe even provide some rudimentary code for it.
But there are some languages out there that don't exist in GitHub,
you know, and don't exist in vast quantities of public domain source code libraries.
That's going to be a problem.
Yeah.
One of the other really cool things that you're able to do with it is have it rewrite code in different languages. So, for example, I would write something in bash and tell it, okay, rewrite this in C, rewrite it in Python, rewrite it in Perl, if you want. Right. And it's really interesting, Keith, you brought up another really good piece about how it does the documentation. I noticed with all the code that it produces, it does a fantastic job of actually documenting it as it's writing it.
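To make the rewrite idea concrete, here's a hypothetical example of the kind of thing Jason describes: a one-line bash filter and the Python equivalent an LLM would typically hand back, documentation included. None of this is from the episode; it's a sketch.

```python
# Original bash you might paste in:
#   grep -c "ERROR" logfile.txt
# A typical Python rewrite, with the docstring ChatGPT tends to add:

def count_matches(path: str, needle: str = "ERROR") -> int:
    """Count the lines in `path` containing `needle`, like `grep -c`."""
    with open(path) as fh:
        return sum(1 for line in fh if needle in line)
```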
Yeah. So one of the other things that I've been using it for, so you're talking about practical use cases.
What ChatGPT and these large language models are generally very good at, ironically, is language, like the science of language. And we talk about hallucination and the problem with hallucination.
The reason that hallucination is so dangerous is because these LLMs are extremely good at language and understanding the structure of language. So whether that language is a vocal, communications language, or a computer language, it knows the structure to deliver.
So when I'm doing something like proposing a new project to an Intel or AMD, my first stop is actually ChatGPT, because what I've struggled with in these large organizations is communicating past my sponsor. So I would have a great idea, you know, come sponsor the CTO Advisor data center, and you'll have these outcomes. My first-level sponsor gets it, absolutely, they're ready to go. And that second-level sponsor doesn't get it. They're like, wait, explain this to me. How is sponsoring a data center in Chicago by this random analyst going to help us sell more stuff? ChatGPT and these LLMs are really good at understanding what should be in the proposal. It may not be right, but the details of the structure of language and how you communicate a proposal are there. And that's where I've found value: these things are really good at the science of language and communication.
Right. So you're using generative AI to help write statements of work, proposals that get consumed by vendors to try to understand what you're going to do and stuff like that.
Yeah. So what I've learned is, I'm not very verbose in email communication because I don't like reading long emails. Evidently, I'm an exception to the rule; people like long, detailed emails. So what I'll do is I'll go in and I'll feed ChatGPT: hey, give me a proposal to compare AMD EPYC to Intel from the lens of one of these two companies hoping to compete against one another, and feed it details about all my service offerings, et cetera, et cetera. And I'm telling you, it gets me 80 percent of the way there. And all I have to do is go in and tweak just some details. It'll say, you know, the CTO Advisor's audience is a million people or whatever. It's very good at, and I don't want to say it's hallucination, I think the thing is a placeholder. It is a detail that it can't possibly know, and you need to go back and check the work and plug in the correct details. But the language is absolutely correct and extremely effective. I can't tell you how effective this is; it has increased my win rate for proposals.
I have written some outstanding letters of recommendation this year because ChatGPT wrote them.
But I think we're hitting on one thing that's pretty consistent across the board when you're talking about these large language models and actually having them supplement work.
And when you think about it, if you use it as a tool and not use it as a replacement, it's highly effective.
And, you know, Keith, you mentioned it gets you about 80% of the way there.
And that 80% seems to be what I've gotten out of it as well. The large language model can help you with specific tasks that it's really good at.
But there's still that additional 20%.
But I think if you think of AI as a tool in a toolbox, and if you use it the right
way, it can be highly, highly effective.
Yeah, yeah. So it's kind of like how translators have been using these translation tools for the last decade or so. They get you 80, 90% there, but it's not a perfect solution to translation. So they run the text through Google Translate or whatever the current solution is.
And they tweak it from there.
It's the same thing with dictation.
Dictation hasn't necessarily been 100 percent perfect over the years, but it's gotten better.
But over time, you know, you can use it as kind of a bulk dictation to transcription stuff or podcasts and stuff.
I've looked at that.
It's pretty good, actually.
And I don't even have to do anything anymore to mess with it.
But generative AI is trained on these vast, vast quantities of text that are available on the Internet today.
And, yeah, you're right.
It knows language.
It knows how to put together a proposal or put together a memo or put together a bash script.
It's got all those kinds of skills. But I've got my challenges, you know.
You know, I don't know, Keith, you're a blogger. I'm a blogger. Jason, you've been a blogger in the past.
You know, the fact that they're using our text to train their models is a concern to me.
Yeah, that doesn't bother me at all.
I mean, you've got more text than I do.
Yeah, that doesn't really bother me.
I don't get paid for volume or data.
I get paid for influence. So until ChatGPT and these LLMs can replace my influence, I'm not worried. People come to me, they follow me, because I'm Keith. And ironically, I was at an HPE analyst event a few years ago, and one of the analysts was kind of digging at my business model. And for those who don't know my business model, it's simple: I sell influence. I'm a marketer. I have no qualms about it. I provide value to my readers by giving them data and information that helps them do their jobs. The readers choose not to pay me for that. The vendors will give me money to access my readers. It is a transaction that has worked. And the reason my readers trust me is because they see me and Melissa in the RV, we're at shows, we're putting in the work. And one of the analysts said, Keith, these analysts that kind of make their work about them and not about the data, I don't think they're going to last. Pretty much a dig at my business model.
Fast forward to the world of AI, and the data is kind of free, or the data is seemingly free. Someone still has to push the envelope and discover more for these LLMs to learn and take over. Who do you trust in this world? Right now, I'm at an advantage, because I'm a real person and people trust me. So, you know, this technology is bound to happen. Chairs get automated; the production of chairs gets automated.
Yeah. But so, let's say I wanted ChatGPT to mimic Keith Townsend's report on, you know, EPYC versus Intel, or AMD versus Intel. And it generates 80% of what Keith would say about these things.
Yeah, it would. And it's that 20% that matters.
But, you know, tomorrow it's 90 percent there, and the next day...
That's fine. And now it can be 100 percent there, and I really would celebrate it. You know, as computer scientists, we either celebrate the advancements in science or we feel fearful of those advancements in science.
We take the tool for what it is and we abstract the value.
We add value at a higher level.
There's always going to be value at a higher level.
You know, the network industry suffers from this. Well, if you automate the configuration of a Cisco switch, what am I going to do? If I'm not configuring the switch, there's plenty of things for you to do. If you're automating the blog and the writing and the opinion, then what else am I going to do? Well, you know what? It is a very different human experience to engage a consultant, someone who has 25 years of experience, whether in farming or in manufacturing, and take that output and find ways to add value.
Yeah. I heard a quote the other day that is pretty poignant when you think about it, especially when it comes to AI.
And it's that knowledge is not information and information is not knowledge.
Right.
So giving that information to, say, a junior analyst doesn't suddenly provide them with knowledge.
Right. And it's going to be interesting. And this also, by the way, reminds me of kind of in the
80s, when everybody was afraid robots were going to take their jobs, right at factories. And while
they displaced certain things, basically, a new skill set needed to be learned. I see the same
thing, you know, with generative AI, it's going to be a tool that's going to be inserted in there.
And if you don't understand it and you don't teach it... This is another thing I'm really concerned about: how education is looking at AI right now. And I think they are looking at it the wrong way, through the wrong lens. They're treating it like it's something that's bad.
And if you don't have an understanding and firm grasp of it, you know, somebody else will,
and you will lose competitive advantage. And if we're not teaching our kids how to use this the
right way, it's something that needs to be thought of on a deeper level.
So speaking of education, you know, all throughout history, automation has displaced certain activities or certain skills and effectively created new ones, right? I mean, the guy that's managing the robot or coding the robot or training the robot versus doing stuff on the assembly line.
I understand all that. I'm good with all that.
But to a large extent, those sorts of things were within a constrained environment: I directed somebody to train this robot to do this particular skill. The large language models,
they are taking advantage of the fact
that we've created this massive text repository
called the internet
and are going about training these models
to be able to speak our language,
to code our projects,
to create our contracts.
Ray, I think you're underestimating the human spirit to disrupt things. I think the AI and the capabilities that they create will create new markets and new
opportunities. We can't see them right now. I'm not going to sit here and pretend that I can
predict what those opportunities and things will be.
I just know history has proved itself over and over and over again.
When technology enables us to do things quicker, faster, cheaper, we create value on top of that.
As human beings, we have this innate ability, and it is an innate ability, to create new markets and new opportunities. And I'm counting on AI doing that for me. I don't write blog posts because I enjoy it. I write blog posts because that's how I create content that my audience wants to consume, or value that I want to deliver.
If AI replaces me having to have to write another blog post, I welcome that.
And the value that I bring, I can now do more of that, which is not actually writing blog posts.
The value isn't me writing blog posts. The value is in something else.
And I have to do the work in figuring out what that is.
I don't know. I like writing blog posts. I enjoy it. I think it's part of me providing,
you know, my skills and expertise and knowledge to the world. It's something I enjoy doing.
It's something I don't do enough. It's something that takes a long time for me to do, actually. Yeah. And I think that's OK. I think, you know, you have you know, this goes back to the to the bricklayer initiative.
You're either you know, you can be an artist and really into the art of laying bricks.
You can be a worker and you're just laying bricks because that's what you get paid to do.
Or you can be the person who's really involved in and know the bigger picture of creating a
cathedral. All those roles are valid and great, and they all will get disrupted when you get an automated brick layer so the question becomes what becomes
your new role in this new uh uh paradigm yeah the i mean it's really tough tough if if you were the
worker who just depended on the revenue from laying bricks because all you cared about was
the revenue then this is disruptive if you're the artist who really loved about love the art of laying bricks i think the the craft market has
proven that there's always going to be a market for those you know that hand hand crafted art
i think that is very very uh much of future path for those folks. And if you're the cathedral builder,
you're just going to build more cathedrals. Yeah. Yeah. You can tell that crap thing by the beer,
the beer brew pubs are all blown up all over the place at that, which is good. I think so. I think
there is, there's always a space for, you know, pure craftsmanship in any endeavor. And I understand that, you know, as automation
takes over certain skills, you have to kind of move up the stack to some extent to try to take
advantage of that and be able to manage or monitor that sort of activity at a higher level. And as
it gets more and more sophisticated, yeah, those levels have to increase. And if you are interested in the bricklaying of how large language models work,
Stephen Wolfram did a phenomenal blog post on it earlier this year.
I think it was like February of 2023.
And you should include that in the show notes, Ray. It goes in-depth into basically how ChatGPT works and what, specifically, the math behind it is. It's a really good primer on the understanding, and it can go pretty deep. It's definitely a good resource to look at.
Yeah, I really appreciate you bringing that back up. I read it earlier this year, and I was blown away by the level of detail in a blog post. I learned an incredible amount about the science and math behind LLMs.
They've been doing a good job.
Okay, well, let's kind of adjust this a little bit. So what's the effect of generative AI? How is it affecting the enterprise? I think the hardest thing is, what do I do with it? Jason, you mentioned that every CEO is asking their CIO to take up AI.
Yeah, we have been seeing that a lot where there's such a buzz around it.
And honestly, if you're hitting this from basically a C level within an organization, it doesn't matter what your organization is. Your customers, and likely your shareholders, are going to want to know that you've got a plan to integrate AI somehow into your business over the next year or two.
And I think there's a lot of tasking that's coming down from the C-suite that is saying, hey, we need to get on top of this AI thing. You guys are the computer dudes.
Figure it out.
It sounds like the cloud a decade ago. Everybody had to go to the cloud. You've got to move our operations to the cloud. The cloud is where the future is, et cetera, et cetera. It's taken a decade to realize, okay, yeah, the cloud is important, but it's not the end-all.
Yeah, so, you know, Jason, this puts us geeks in like a really
tough situation, right? We're used to being responsive to the business needs. So if an organization comes to you and says, hey, we need to build a system that takes orders over the web,
we kind of understand that. Like, okay, you've given me a set of requirements and I can build a system
to that set of requirements. But to randomly come and say, we need to figure out
AI, we're like, well, what's the business problem?
I think that is so often the question that most people
forget to ask. One of the best sales guys I ever knew, every time he would walk into a room, the first question out of his mouth was: what business problem are you trying to solve? And every presentation that he would give would then be tailored around what the company had to help fix that problem.
Right. And I think there are two sides to this AI coin
that you have to think about.
One is the business problem.
No doubt that's the critical aspect of the thing.
But the other one is, where's the data?
You can't play this language model game
or ML game or AI game
unless you've got data to train with.
Well, you know, the great thing is
that we've gone through this phase of big data for the past, what, 10 years now?
And I don't think there's a question whether or not the data exists.
I think there's questions around how do we get that data to the GPUs to train it?
There is a question of how do we organize it? How do we tag it? How do we make sure that data that shouldn't be part of the LLM isn't part of the LLM, that we own the data, et cetera?
I think there's a lot of governance questions around the data, but I don't think there's any doubt that the data exists in most organizations asking the AI question. Yeah.
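One small piece of the governance problem Keith raises can be sketched in code: screening records for things that obviously shouldn't reach a training corpus. The patterns below are hypothetical stand-ins, not from the episode; real pipelines use far more thorough classifiers than two regexes.

```python
import re

# Hypothetical patterns for data that shouldn't land in a training set.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US-SSN-shaped strings
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

def scrub_corpus(records):
    """Split records into (keep, quarantined) before they reach training."""
    keep, quarantined = [], []
    for rec in records:
        if any(p.search(rec) for p in PII_PATTERNS):
            quarantined.append(rec)  # route to human review; don't train on it
        else:
            keep.append(rec)
    return keep, quarantined
```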
The other question is, you know, AI is more than generative AI.
Generative AI is sexy and smart and it's out there and visible today.
But AI has been a long, long process to come to this place.
And there's just lots of stuff in the AI space that can be applicable to, you know, generic businesses.
I mean, talk about facial recognition or image classification or, you know, handwriting analysis and stuff like that.
You know, doing forms, filling out forms and stuff like that.
It's just there's just a lot that AI can do that's not large language models.
It's not generative AI.
Yeah, you know, there's predictive failure models. If I point a camera at a mechanical system, can it do some AI vision and look for structural fractures that the human eye can't see?
There's all kinds of applicability. And then there's the question for, you know, folks that listen to this podcast.
What infrastructure should I be buying for AI in general? This isn't like big data, where we can say, go out and buy a lot of fast disk or deep storage or whatever the characteristics of that technology are. AI is so diverse that I don't know. Do I need a pool of H100s, or do I need some of AMD's new GPUs, or Intel's GPUs? Can I do this with CPUs? Where am I in the AI consumption model and LLM creation model? Where do I need to size my infrastructure?
It's a really, really difficult question I'm talking to a lot of folks about that they're not able to answer because they haven't gotten clear business requirements.
I think there was a lot of experimentation in 2023. People would go out, get a workstation, get a GPU, and start messing around with CUDA or ROCm and seeing what they could do on an individual system. Now, when you get to the level of scaling this to a business, you need a lot more horsepower than that. And like you said, Keith, there's not a lot of off-the-shelf guidance on what you buy for the enterprise. Because if you're saying, oh, here's some open source stuff on GitHub, that's going to go over the capabilities of what a lot of IT shops can handle, right?
You have to look at something like MLPerf and MLCommons. They've been doing these training and inferencing benchmarks for classes of AI models for, God, probably five or six years now.
So you can look at those sorts of things and try to understand: okay, if I want my inferences to be on the order of half a second and I'm doing some image classification, this is the type of workload they benchmark, and this is the infrastructure required to make that happen.
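The half-second inference target Ray mentions is essentially a tail-latency budget, which is roughly how MLPerf's latency-bound scenarios frame it. A minimal sketch of that kind of measurement, with `run_inference` as a stand-in for your actual model call:

```python
import statistics
import time

def latency_percentiles(run_inference, n_queries: int = 100) -> dict:
    """Time `run_inference` over n_queries calls; report p50/p99 in ms."""
    samples = []
    for _ in range(n_queries):
        start = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "p50_ms": statistics.median(samples),
        "p99_ms": samples[min(len(samples) - 1, int(len(samples) * 0.99))],
    }

def meets_target(percentiles: dict, target_ms: float = 500.0) -> bool:
    """Pass/fail in the MLPerf spirit: tail latency must stay under target."""
    return percentiles["p99_ms"] <= target_ms
```

The real benchmarks add query arrival patterns and accuracy constraints on top of this, but sizing against a percentile rather than an average is the core idea.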
It's not perfect. It's not all-encompassing. It doesn't cover every possible corner case in the world.
I think the other thing that's evident today is the cloud.
So if you don't know what your infrastructure is going to be required to do some model training or model inferencing,
you can kind of start this stuff up in the cloud and see what it costs and see what it looks like to make it happen. And then decide that, you know, the expense is high enough that I want to bring it back on
premises and stuff like that. So I think those are a couple of things that you can use to try to
get a handle on what the infrastructure requirements are. Storage is an important part of that. How to configure your storage to support keeping those GPUs busy and all that stuff is another whole discussion here.
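The try-it-in-the-cloud-first approach Ray describes ultimately boils down to a breakeven calculation. A toy sketch, with every rate in it hypothetical:

```python
def breakeven_months(cloud_monthly: float, onprem_capex: float,
                     onprem_monthly: float) -> float:
    """Months after which buying hardware beats renting cloud GPUs.

    cloud_monthly  - hypothetical all-in cloud GPU spend per month
    onprem_capex   - up-front hardware purchase cost
    onprem_monthly - power/colo/support per month on premises
    """
    if cloud_monthly <= onprem_monthly:
        return float("inf")  # cloud never costs more; stay in the cloud
    return onprem_capex / (cloud_monthly - onprem_monthly)
```

With made-up numbers, $20K a month in the cloud against a $240K cluster costing $4K a month to run pays back in 15 months; whether that's worth it depends on how confident you are the workload survives that long.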
How to design your network so that the latency between GPUs is such that you're not... I think the average utilization of GPUs is somewhere around 20 percent, even in the busiest environments. And a lot of that comes down to just system-to-system latency.
We'll talk about CXL probably next year, when it's moved on to a higher level of maturity, but these are the types of problems that are happening around AI and being solved. So I've got to ask the question, gents: is AI the only thing that's happening in 2024?
I mean,
over half the podcast, we've talked
about AI. Is AI
the only topic?
It's a hot topic.
The only other topic I can think of
besides AI, which is dominating
the industry
in multiple dimensions, is
the VMware Brocade.
And, you know, there's a VMware Brocade AI side of this.
Broadcom.
We got to put respect on that name, Ray.
It's Broadcom.
You said Brocade.
I'm sorry.
Yeah.
You're a storage guy.
I understand.
My bad.
It started with a B.
Broadcom. You were close.
Well, I mean, the acquisition happened. The world has not fallen apart yet.
I was in a meeting with VMware, oh God, last year when they just started to talk about the acquisition and stuff like that.
The executive said something that was very interesting. He says, you know, VMware has been purchased and sold multiple times over the years, and we've always survived.
And we will survive this transition as well.
Yeah, it's going to be changes.
There's going to be things that will be done differently.
But in the end, we're still here to provide virtualization to the enterprise.
We're still here to provide services to the cloud.
We're still here to do everything we were doing before.
Yeah, I will caveat his statement, push back a little bit on it. VMware has been bought and sold several times over the past years, indirectly. Ever since EMC spun out 20% of the VMware org, VMware has been this weird entity with a complicated ownership structure, where 80% of the company was owned by Dell or EMC and 20% was held publicly. Now it's owned a hundred percent, and the only fiduciary responsibility is to the Broadcom shareholder. So Broadcom can basically do whatever they want without concerns outside of their own shareholders. So while it simplifies the ownership structure, it is very different. It is a very different transaction than in years past.
I completely concur, but you'd think that somebody like Broadcom is going to innately change the business model for VMware.
I see. So I was in an analyst session, one of many analyst sessions yesterday, where they talked about the new licensing, gave us a rundown of the new licensing. And I can tell you, some sacred cows were killed yesterday. VMware has historically had horrendously complicated price lists and SKUs. It was extremely difficult to figure out, maybe not quite Microsoft-level difficult, but they weren't a pleasure to deal with when it comes to buying vSphere. They've simplified it. You basically have two levels of bundled VMware solutions if you want the virtualization product.
And they've simplified it to the point where I can take my VMware licenses,
my vSphere licenses, and use them in the public cloud,
use them globally across countries, use them in a colo,
use them on my own facilities, etc. They've gone to the
subscription model. It is much simplified and much warranted. And this is something that VMware has
been wanting to do, but unable to do, for over a decade. And it took the purchase to make it happen. I think, Jason, you might have mentioned it, and I was going to mention it too: Hock Tan is pretty... he's a machine.
The VMware.com email addresses all don't work. And I don't know, we're what, a month into this acquisition? If you're trying to email your business contacts at VMware with their VMware address, it will bounce.
It will bounce.
Yeah.
You gotta cut the umbilical cord at some point.
I understand it's not perfect,
but it's, and you mentioned the subscription model
and how that's sort of a change,
but in essence, the business still is there.
It's still providing the same solutions.
It's still providing the same capabilities to the enterprise.
And that's not going to change.
And if anything, it'll be accelerated in a more focused, more enterprise-focused environment, I think.
Yeah, I wasn't the biggest fan of the Broadcom acquisition of VMware, but I'm also not a doomsdayer who says, oh, you should look to migrate off of VMware.
And, you know, I've asked this basic question, and Hock Tan has asked this question: migrate to what?
Like, yeah, to what? VMware customers have talked about migrating off vSphere for years.
Everyone complains about the price. It's the innovator's dilemma from another angle, where you've added this really great value of consolidating servers.
And over the years, people kind of forget what that value is.
And they just look at the expense and they say, wow, VMware is one of my biggest expenses.
Even though if I was to look at the alternative, I'd pay more. The basic problem is that VMware
is still one of my biggest expenses. How do I ruthlessly optimize cost?
And VMware has been, quote unquote, too expensive and customers have wanted to leave.
But to what?
Yeah, yeah.
There's not a lot of solutions out there that provide the same capabilities at a lesser expense.
I mean, you could all go to bare metal, but bare metal is suicide for most of these guys.
Oh, yeah.
I mean, well, that was like part of the reason for changing, you know, anyway.
And when you look at the consolidation numbers you can get on a CPU now, I mean, you're talking about consolidating 50 physical boxes down into one.
It's not going to be cheaper to go out and buy 50 servers.
Yeah, yeah.
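The back-of-the-envelope math being described can be sketched roughly as follows. All of the dollar figures and the license cost are illustrative assumptions, not numbers from the discussion; only the 50:1 consolidation ratio comes from the conversation above.

```python
# Rough consolidation cost sketch (illustrative numbers only).
# Assumption: a modern dual-socket, high-core-count server can replace
# ~50 legacy physical boxes, per the 50:1 ratio mentioned above.
legacy_servers = 50
cost_per_legacy_server = 8_000       # assumed hardware cost per legacy box, USD
consolidated_server_cost = 35_000    # assumed cost of one big dual-socket host, USD
hypervisor_license = 10_000          # assumed per-host virtualization license, USD

# Buying 50 separate servers vs. one consolidated, licensed host.
bare_metal_total = legacy_servers * cost_per_legacy_server
virtualized_total = consolidated_server_cost + hypervisor_license

print(bare_metal_total)   # 400000
print(virtualized_total)  # 45000
```

Even with a hefty license line item, the consolidated host comes out far cheaper, which is the point being made: the license looks expensive only when you forget what the alternative costs.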
With 128 cores and dual CPU sockets, we're talking serious compute there, and you just can't beat it.
Yeah, yeah.
You know, talking a little more technically, there's no solution, no virtualization platform, that does memory management better than VMware. And the simple fact is you get higher consolidation rates with VMware than you do with other solutions. And that means more efficiency, more cost savings. So while the license costs might be higher for VMware, you get more value. So if you're a customer and you just can't stand the Broadcom way of doing business and you want to leave Broadcom, you're in a tough spot.
Well, yeah. And the reality is the only companies that have done it are large cloud service providers.
Right. And because they literally are writing their own stacks on top of the open source virtualization tools that are out there. But your enterprise,
like if you don't have cloud level developers and we're talking infrastructure
cloud developers at your enterprise,
you're never going to have a solution that's more custom built for enterprise
than what VMware and vSphere environments are.
Yeah. Yeah. Yeah, yeah, yeah.
You know, and they've been bettering themselves in the cloud too.
I mean, with VCF and all that stuff being the hybrid cloud, you know,
provider of choice throughout the world.
I mean, the enterprise, they already have the enterprise,
and they're starting to influence the cloud activity as well.
It's just, you can't leave it without real pain.
Now, where I do see a weakness: VCF is great if you have the use case for VCF.
But if you want true cross-cloud, multi-cloud capabilities, VMware has to do a better job
of abstracting or integrating into native cloud services and giving customers what they want, which is native cloud services, but managed in a traditional IT style.
And VCF ain't that.
So it is great if you want to take your VMware vSphere model and run it across multiple clouds.
But if you want, you know, let's super simplify this. If you want AWS, but you just want it in Google Cloud infrastructure,
that's not VCF.
It's not a way of consuming native cloud services, or cloud-native services, across multiple clouds with a consistent operations model.
And that's the opportunity and the challenge that VMware has been unable to solve over the past few years, or at least that customers haven't been receptive to.
You lose a lot of the cloud, you lose a lot of the native functionality, right, is the problem. And that's why a lot of the cloud apps are the way they are, you know, having specific, you know, engineered database solutions.
And then basically the whole networking stack and how all that stuff works is unique to the cloud.
But it's also it's the advantage of it, right?
It does offer competitive advantage. And basically, you know, with the VMware offerings in the cloud, you've got a bare metal box that is yours, and you're just running it in somebody else's data center, which is kind of what it amounts to.
I see what you're saying, Keith, and obviously that's a solution that everybody would want, but I just don't see how you run AWS proprietary services in something like Azure or Google Cloud. Yeah, you want it, I understand, so you can move stuff around without pain and anguish. But you have to somehow come up with the lowest common denominator across all those, and VMware VCF is not far from that mark.
Yeah, and I think
the mismatch is that developers have spoken. They don't want
it. That's not the relationship they want with their infrastructure.
It works fine when you're in a traditional enterprise IT
kind of siloed environment. But when you want
developers to consume high value abstracted
services from your cloud, from your internal cloud or platform engineering, platform engineering is
probably a topic we could have talked about because that's still a hot topic. When you want
that type of relationship, VCF by itself doesn't provide that.
There's an argument that Tanzu, and the Tanzu platform, or however they butchered the product name, provides that capability.
But again, customers have not voted with their dollars that that's what they want.
Yeah, yeah.
Well, I see it as a goal, but I don't see a reasonable path to get there from anybody.
I think, ironically, sometimes you consider them a hyperscaler, sometimes you don't.
But IBM Cloud is probably the only cloud
that actually offers exactly that.
You can take IBM Hybrid Cloud platform and run it anywhere.
Literally, I'm surprised IBM doesn't make a big deal of it, but it's actually a really cool solution that allows you to take IBM Cloud's native services and basically run them anywhere.
That's interesting. I hadn't heard that.
I have to think about doing that at some point.
All right, gents, is there anything else we'd like to talk about
as we're getting close to closing here?
I think it will be interesting to see because, as we well know,
VMware is very entrenched in the enterprise
and we know that enterprises are getting tasked by the C-suite with figuring out AI and how that's going to fit in. It will be interesting to see in 2024 any moves that Broadcom slash VMware would make as far as enabling vSphere as an AI platform to run workloads.
I think you can already see that, Jason, in the VMware Private AI initiative and things of that nature.
They've been doing this multi-GPU virtualization for a couple of years now.
So they've seen this on the horizon and they've
been moving in that space. They haven't, you know, they haven't taken a leap, I guess yet,
but they're close to it. Well, like you said, the platforming is still a really big thing for
this, right? And, you know, it's one thing to, you know, see models on GitHub, but how do you
get that into a workflow, right? And I think platforming and workflow is going to be a big deal
this year.
The whole relationship with Hugging Face and stuff will help a little bit with that.
Yeah.
All right, gents. Anything else?
Nope. Other than
looking forward to enjoying the holiday, I hope
everyone else has a great holiday.
Happy New Year, and have a happy 2024. All right.
Thanks, guys. That's it for now. Bye, Keith. Bye, Jason. Bye, Ray. Until next time.
Next time, we will talk to another system storage technology person. Any questions you want us to ask, please let us know. And if you enjoy our podcast, tell your friends about it.
Please review us on Apple Podcasts, Google Play, and Spotify, as this will help get the word out.