Grey Beards on Systems - 154: GreyBeards annual VMware Explore wrap-up podcast

Episode Date: September 6, 2023

Thanks, once again, to Keith Townsend, The CTO Advisor (@CTOadvisor), for letting us record the podcast in his studio. VMware Explore this year was better than last year. The show seemed larger, the show floor busier, the Hub better, and the Hands-On Lab much larger than I ever remember before. The show seems to be growing.

Transcript
Starting point is 00:00:00 Hey everybody, Ray Lucchesi here. Jason Collier here. Welcome to another sponsored episode of the Greybeards on Storage podcast, a show where we get Greybeards bloggers together with storage and system vendors to discuss upcoming products, technologies, and trends affecting the data center today. We're here at VMware Explore 2023, the end of the season, the end of the sessions and stuff like that, trying to figure out just what went on. So, Jason, what do you think about the show? It was interesting.
Starting point is 00:00:37 So, you know, between the random booth traffic that we had and some of the meetings that I had, overall an interesting show. It's going to be interesting to see how VMware progresses, you know, given the Broadcom acquisition. Yeah, how that plays out is still yet to be determined. I heard that the UK gave provisional approval of the acquisition, but where it goes from there, I think the Asian regulators are still thinking about it. And there might be one or two other European countries looking at it to some extent.
Starting point is 00:01:12 Yeah, I thought the keynote kind of stuff, the Private AI Foundation was pretty impressive. I mean, yeah, it's the thing they need to do. It's obviously this large language model, Gen AI, hype, hype, hype stuff that they have to follow. You got to say it because it's got AI in it, right? It's got to follow. Yeah, well, it's just a game.
Starting point is 00:01:35 Generative AI is a thing, right? And yeah, if you're not on board, then you're overboard, right? Well, I mean, everybody saw the ChatGPT user adoption rate go sky high within like two nanoseconds, and thought, I've got to get some of that money. There's just too much money there being spent for us not to chase it. But to some extent, in the world of VMware and on-prem and enterprise data centers, there's this: I've got my data, you've got your data.
Starting point is 00:02:05 I don't want to share mine with you, and you don't want to share yours with me. Well, I think that's the biggest thing with generative AI is everybody's trying to figure out how to contain that within basically their select set of data, right? Like, we want to use this technology. This is a great technology. There are many things that we can use it for,
Starting point is 00:02:24 but we need to contain it so that basically our intellectual property is not leaked out. So the GreyBeards on Storage intellectual property is probably feeding GPT right this very second. I'm sure it is. I'm sure I'm there. But yeah, so it's to control, it's to constrain, it's to be able to take advantage of this open source plethora of large language models that exist out there and, you know, create a Jason Collier chatbot for AMD EPYC processors. Have you seen the power requirements for ChatGPT? Is it more than like 12 kilowatts or something like that?
Starting point is 00:03:02 It's absolutely, so it's absolutely insane. I actually saw a really good presentation. It was done by Bill Kleyman at Data Center World. He was doing this presentation and- Training or inferencing? The training component. So specifically on the training, and it was a little bit of training and inferencing. Let's just say it was muddy. But like a Google search, how much power does a Google search take? I assume a kilowatt or two. So yeah, here's an LED light
Starting point is 00:03:38 bulb. Like, one LED light bulb, it could power for 11 seconds. Oh, okay. That's what a Google search inference does. A ChatGPT session is 36 light bulbs. 36 light bulbs? So only 36X. Yeah, only 36X. Is that a lot from your perspective? Now, is it a light bulb LED, or is it like a data center? Oh, yeah, no.
Starting point is 00:04:03 So that was the LED, but when you think about that, you basically now have to take your data center components and scale them up 36x. Yeah, but it's only just GPUs and stuff like that. And EPYC CPUs, of course, obviously. But yeah, that power constraint, that's a real thing. That's a real thing. There's a difference here between training something like ChatGPT from the start with 12 gigazillion bits of intellectual property from everybody else, versus taking something that's trained, like Llama 2 or Llama 4 or whatever it is, and working all the Jason Collier knowledge into that.
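The light-bulb arithmetic above is easy to sanity-check in a couple of lines. This sketch assumes a 10 W LED bulb, a wattage the episode never specifies, along with the 11-second and 36x figures quoted in the conversation:

```python
# Back-of-envelope check of the LED-bulb comparison.
# Assumptions (not from the episode's source data): a 10 W LED bulb,
# 11 seconds per Google search, and a 36x multiplier for a ChatGPT query.
LED_WATTS = 10           # typical LED bulb draw (assumption)
SEARCH_SECONDS = 11      # "one LED light bulb ... for 11 seconds"
CHATGPT_MULTIPLIER = 36  # "36 light bulbs"

search_wh = LED_WATTS * SEARCH_SECONDS / 3600  # watt-hours per search
chat_wh = search_wh * CHATGPT_MULTIPLIER       # watt-hours per ChatGPT query

print(f"Google search ~ {search_wh:.4f} Wh")
print(f"ChatGPT query ~ {chat_wh:.2f} Wh")
```

Per query the number is tiny either way; the point being made in the discussion is that the 36x multiplier applies at search-engine scale, which is what drives the rack-power conversation.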
Starting point is 00:04:44 So it's like transfer learning doesn't take that much, I would say, versus the training. I've seen some pretty interesting companies that have been taking those large language models and then applying them to very, very, very specific tasks. As in one where they're using it for training at a law firm that is training against literally all of their case data. And not only all of their case data that they've got internally within the organization,
Starting point is 00:05:11 but basically everywhere that they practice law and how the judges typically rule on specific things. So they're taking this- All this information that they've gotten over the course of decades and decades of service, and then they're trying to incorporate that into a chatbot. What's that take? Like one light bulb? 10? 15?
Starting point is 00:05:30 There's a couple. Yeah, a couple. I think I heard that. I don't know if you've heard the same number, but the number that I heard was like $10 million to train up. To train a ChatGPT or a GPT-4 or something like that? Yeah. Yeah.
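A rough way to see why fine-tuning an existing model is so much cheaper than that from-scratch price tag is to count trainable parameters. The sketch below assumes a LoRA-style low-rank adapter and uses illustrative, roughly 7B-model-shaped dimensions; none of these numbers come from the episode:

```python
# Why transfer learning is cheaper than training from scratch, in terms of
# trainable parameter counts. All dimensions are illustrative assumptions
# (roughly the shape of a 7B-parameter transformer), not episode data.
def lora_params(d_model: int, n_layers: int, rank: int, adapted_mats: int = 2) -> int:
    """Trainable params for LoRA-style adapters: each adapted d_model x d_model
    weight matrix gets two low-rank factors, (d_model x rank) and (rank x d_model)."""
    return n_layers * adapted_mats * (2 * d_model * rank)

full = 7_000_000_000                               # fine-tune every weight
adapter = lora_params(d_model=4096, n_layers=32, rank=8)

print(f"full fine-tune : {full:>13,} trainable params")
print(f"LoRA adapter   : {adapter:>13,} trainable params")
print(f"reduction      : ~{full / adapter:,.0f}x fewer")
```

Updating three orders of magnitude fewer weights is the gap between "a couple thousand GPUs" and "Keith could do it on his laptop, if he had a decent chip."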
Starting point is 00:05:45 That's a lot of compute. A couple thousand GPUs and maybe a couple ten thousand servers or chips or something. I don't know. It's a lot of stuff to train from scratch. To do transfer learning is a little bit easier. There was an article a Google guy came out with that said, you know, something like, we let the cat out of the bag. You know, training GPT is
Starting point is 00:06:05 really, really tough, but transfer learning on top of GPT, that's not that difficult. I mean, anybody could do it. My friend Keith there could do it on his laptop, well, if he had a decent chip. I think he's doing it right now. Yeah, a decent chip or something like that. So the Private AI Foundation is interesting. It's got an NVIDIA version, which is all proprietary, all the NVIDIA stack, and then it's got the open source stuff. Mm-hmm, the Ray cluster manager. Yeah, and Kubeflow and PyTorch and a couple other things. I was interested, you're right, in the Ray cluster manager. I don't know why.
Starting point is 00:06:39 Brand new from my perspective, but that's the difference. I've seen it before. I just never realized how embedded it is in AI and MLOps. It seems like it's a real prominent solution to some of the problems they have. It's cluster management. I'm asking myself, now, why can't vSphere cluster management do what you need to do for MLOps? And apparently it's not there. And then, okay, so what about Kubernetes cluster management, you know, Tanzu and all that stuff? No, it's not there. Yeah.
Starting point is 00:07:06 But, right. So many technologies out there. They're like 80% of the way there. Yeah, yeah. And they're not all the way there. Yeah, yeah, yeah. So AI is still fast moving. Cluster management is hard, by the way.
Starting point is 00:07:18 Yeah. It is not an easy thing to tackle. Yeah, yeah, yeah. So I think that's to some extent why they went with proprietary and this reference architecture approach. Because the open side is moving so fast, they don't want to be locked into some proprietary approach which is dropped to the floor when NVIDIA GPUs are surpassed by somebody else's GPUs, who will remain unnamed and unidentified. Stuff like that.
Starting point is 00:07:47 So the private AI thing, I think, was an obvious play for these guys. It made sense. It's something they can afford. It's something they can play the on-prem card against, and it just makes a lot of sense. But like I say, there's a lot of hype there, a lot of stuff going on there, and whether it's real or not,
Starting point is 00:08:09 only time will tell. Well, you know, in that hype, and whenever there's a lot of hype, there's a lot of funding behind it too. Okay, yeah. It's going to be interesting to see where, you know, venture capital lands. Yeah, well, it's already there. I mean, as far as I know, the VCs are spending money left and right on this technology. Yep. Absolutely. So part of that private AI, they mentioned something about an agreement with Hugging Face, that they're going to have access to Hugging Face's libraries of AI models and
Starting point is 00:08:40 stuff like that. And they're getting closer with Hugging Face. So it's kind of an interesting play, because Hugging Face is, you know, out there. To a large extent it's playing the open source game, but it's also playing the proprietary game, so it's kind of interesting that they would be in that space. But it is good. Like, honestly, I had no idea who they were until we did announcements with them. I had no idea that there was this giant open source community
Starting point is 00:09:12 around all this AI and all this stuff. Then I started going in and playing around with it. I'm like, oh my God, there's so much information here. They got their own leaderboards. It's pretty serious stuff. I always thought Kaggle was the game, but these Hugging Face guys. Yeah, hey, you know, whatever GitHub is, Hugging Face is that for AI.
Starting point is 00:09:34 You talk about AI and all that stuff. Yeah, exactly. It is awesome, the amount of content they got up there. Yeah, yeah, that was great. So, yeah, I think that's a good play as well, and obviously it sustains the open source end of things. And the word out of the VMware guys was, you know, this is just a start. And we're going to go after other cluster managers and other frameworks and other players in this space.
Starting point is 00:09:55 And we'll make this happen, which I think is good for the enterprise. Because anything that we can do to make the enterprise more persistent, or at least more viable in the current world, is good for us. At least good for the GreyBeards. Good for us old guys, right? Yeah. So that was the Private AI Foundation stuff. I thought that was pretty good.
Starting point is 00:10:15 The other thing they talked a lot about was the edge. I mean, there's a lot of play in the edge now. There's lots of things going on there, lots of sensors moving out there, a lot of intelligence. They had a police car on the floor that apparently had its own LAN or something like that on there, with servers up the kazoo. Yeah. So edge is something I've been passionate about for quite some time, given, you know, my background, where I come from, right? So edge is one of these things where here's the standard components
Starting point is 00:10:48 that we've always had, that we typically would associate with small to medium-sized businesses. And you realize that the requirements of that are now needing to be pushed out to the devices, effectively. Yeah, yeah. To that far edge component. There are specific business cases which lend themselves very well to edge. Like retail is the most common example everybody always talks about, right? And from that perspective, when you think about retail and what they need to run, you know, it's everything from, like, we need to run our point of sale system,
Starting point is 00:11:25 we need to run our inventory control and management system, we need to run security, we need to run basically like there are 150 freezers and refrigeration units in every grocery store, right? That have to have some place to talk to. Sensors, have to be some way of detecting the problems. To monitor and know this stuff. But at the same time, this is not a location where you've got an IT guy.
Starting point is 00:11:53 No, or even an IT data guy. Or a data center. You have a grocery store manager, right, who knows how to plug things into an electrical socket and, if you're lucky, knows how to plug stuff into an Ethernet port as well. All right. So, you know, honestly, I think where Edge needs to go is really getting kind of in tune and in touch with like this whole zero touch provisioning component. Right.
Starting point is 00:12:24 Where you need to have unskilled IT folks. At best, yeah. Yeah. Or not even IT folks. Yeah. Unskilled like folks. Laborers that can do stuff. They can basically like plug this stuff in
Starting point is 00:12:36 because there are always those requirements. You think about, you know, your grocery store in the middle of nowhere. They probably have a lousy internet connection. They have lousy connectivity, they've got lousy support and services, but if you can't sell tomatoes, you got an issue. Yeah.
Starting point is 00:12:59 Right? And you need to do whatever needs to be done to make that happen. So they're going to this, they called it more of a push model rather than a pull model, I guess. I mean, they were trying to use standard vSphere services to populate the edge solutions and stuff like that. But it just doesn't scale. Okay, 64-node cluster and vSphere, that's easy.
Starting point is 00:13:19 But these guys got thousands of retail outlets and 10,000 point-of-sale terminals. I mean, yeah. And I have seen implementations where it's like seven thousand grocery store deployments, right? Yeah. And there definitely needs to be like a single pane of management, right? Yeah, that single pane of glass management for exceptionally large scaled-out clusters of components. And then reality is, like, none of that stuff's going to succeed unless you've got basically an application deployment model that's going to work on top of it.
Starting point is 00:13:57 Yeah. And they mentioned the GitOps model as being their way of moving forward. And it's primarily because of scale, being able to scale this thing from, you know, a thousand to 10,000 to a hundred thousand to a million units, to be able to deploy applications on all those solutions and keep them up to date and stuff like that.
Starting point is 00:14:18 Yeah. You would like to have vSphere, but it's just not that. Yeah. So I'm not sure what the GitOps model is, but it's obviously YAML-scripted, you know, configuration statements, and you fire this up with a repo and all of a sudden things start happening. Yep, yep. Is that how it works? That works. But yeah, I'm not going to comment any further on it. I was out there.
Starting point is 00:14:45 Because you know where I'm going to go. I know. I was out there, and there was a hardware vendor that had, it must have been, I would call it almost a briefcase wide, maybe even half a briefcase thick, or no, a wide but thick briefcase: two servers and a witness node all together. And you could put it on the wall if you wanted to or something like that. These things are bigger than those solutions
Starting point is 00:15:10 that these guys had. And this was a full vSphere environment if you wanted to. Now how that plugs into the rest of the point of sale and all the other edge stuff is another question. Yeah, that's the kicker, right? That's, by the way, that's the hard part. That's a hard part that nobody ever tells you about. Yeah.
Starting point is 00:15:29 Right. Then how does that zero touch provisioning piece work? Yeah. I mean, they mentioned the words, it was on the screen, zero touch and all that stuff, but how it all plays out is anybody's guess. But like they say, they're going after this GitOps model, and the GitOps model with its repo
Starting point is 00:15:46 and its YAML files and its infrastructure as code is their game. Only at small levels of control. I asked the guy if he could put that witness node, the witness server, on a Raspberry Pi. He said, yeah, probably. Maybe? Yeah, we could do it. We could do this, but that's not my game anymore. And there was plenty of other stuff that was talked about at the show.
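For reference, that repo-plus-YAML pattern usually looks something like the Flux-style sketch below: every site pulls the same Git repo on its own schedule and converges on whatever manifests it finds there, which is what makes it scale to thousands of stores. This is a generic illustration, not VMware's actual edge implementation, and the repo URL and paths are made up:

```yaml
# Hypothetical Flux-style GitOps config (illustrative only).
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: retail-edge
  namespace: flux-system
spec:
  interval: 10m                       # each site re-checks the repo on its own
  url: https://git.example.com/acme/edge-config   # hypothetical repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: point-of-sale
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: retail-edge
  path: ./stores/base                 # same config pushed to thousands of sites
  prune: true                         # drift gets reverted automatically
```

The "push versus pull" distinction in the conversation is exactly this: the central team pushes commits to Git, and each store's agent pulls and reconciles, so nobody has to reach out and touch 7,000 sites.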
Starting point is 00:16:16 The Tanzu stuff is kind of undergoing some sort of transition. I don't understand it. I mean, to me, Tanzu always was VMware's Kubernetes, right? It was how VMware spelled Kubernetes. My, yeah, it started with a T. Yeah. Just like, oh, it's Kubernetes, just spelled with a T. And I always kind of struggled.
Starting point is 00:16:42 I, like, always scratched my head. I looked at everything, every single marketing slide that like came out around it. And then I'm just like, what exactly is it? Because I would go ahead and just put Kubernetes on top of it and then use like basically a CSI model. Yeah. Yeah. You got storage. Yeah.
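That "just put Kubernetes on it and use CSI" path is plain Kubernetes storage plumbing, no Tanzu layer involved. A rough sketch of what's being described, where the StorageClass name is hypothetical but `csi.vsphere.vmware.com` is the vSphere CSI driver's provisioner name:

```yaml
# Plain Kubernetes + CSI into vSphere storage (illustrative sketch).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-thin           # hypothetical name
provisioner: csi.vsphere.vmware.com
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: vsphere-thin
  resources:
    requests:
      storage: 20Gi            # backed by a vSphere datastore volume
```

Once the claim binds, the app sees ordinary persistent volumes, which is why the "what am I paying Tanzu for?" question comes up.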
Starting point is 00:16:59 Right. So I'm like, I got a Kubernetes cluster. I, like, created the whole thing. I did the deployment. And I'm basically using CSI to plug into the storage in general. And then I'm just like, why would I pay for Tanzu to do that? Yeah. I'm like, I don't even know if they did it like a cool, clicky GUI. So that was the idea, right? I mean, so to a large extent, they've embedded the Tanzu
Starting point is 00:17:28 Kubernetes Grid, TKG, into vSphere core. So now you can do, you know, Tanzu, you can do Kubernetes APIs and have the back end be vSphere and stuff like that. And Kubernetes guys, though, are not GUI dudes. No. They're just like, okay, well maybe that's a problem. Show me the YAML. Maybe that's a problem.
Starting point is 00:17:49 Yeah, you're right. But you've got the VMware operations team that are GUI guys. That's true. They want to control this Kubernetes stuff. Yeah. I mean, this is definitely a really interesting, I think, paradigm. And when you look at basically the operational models of it, you look at folks that are actually deploying Kubernetes applications, they're not the same people that are managing VMware
Starting point is 00:18:20 clusters. Yeah. Well, that's the bridge that Tanzu, to some extent, was trying to bridge, I guess. I don't know what you do other than, like, give these couple of guys some boxing gloves, stick them in the ring, and see how it goes. It's the networking versus storage guys kind of thing. It's not good. It's not going to work. All right.
Starting point is 00:18:50 This whole Tanzu application platform and Tanzu Insight and App Engine, they're trying to make Tanzu somehow be the application driver for the world. So they want to be able to run Kubernetes apps, they want to be able to run standard cloud apps, they want to be able to run vSphere apps, they want to be able to control it all under Tanzu. It's one orchestration solution to rule them all. To rule them all. You know, the ring, the ring to rule them all.
Starting point is 00:19:21 And I thought it was just, it didn't come across that way. But as you drill down into it, that's what they're trying to do. I said, you know, the challenge is Tanzu has always meant Kubernetes. It's how you spell Kubernetes in VMware, and you guys are going beyond Kubernetes now. Yes, we are, but it was never intended to be. They're telling me it was never intended to be Kubernetes only. I said, oh God, that's not the way I read it. I'm pretty sure you're wrong. Not the way I read it. I'm not saying that, but I just think there needs to be some content surrounding this thing, you know, from a viable Silverton Consulting type independent organization that could explain all this stuff, right? No, I totally
Starting point is 00:20:01 agree. Yeah, because you and I have been around the storage block long enough to know that there is no one ring to rule them all. It doesn't exist. I mean, that's what they want. And I understand that that's the play, to a large extent, to get to a point where they can become, I don't know, the multi-cloud, super cloud view of the world. There is no perfect solution to solve every problem. Yeah, yeah, yeah. There are solutions to solve a lot of different problems, and there are a lot of different solutions.
Starting point is 00:20:35 You've got to have the right toolbox, and you've got to have the right tools in those toolboxes. Yeah. And I think, to some extent, that's the way VMware started, and they've kind of gone away from that as they've gotten larger and larger to try to increase adoption, move beyond the... It wasn't DevOps in those days, but those techies that were running the systems and stuff like that.
Starting point is 00:20:58 And it is what it is. That's what they need to do. They understand the game. It's just, the branding stuff is kind of messy at this point, you know, to some extent. So the other thing that they rolled out: vSAN Max was finally rolled out. These guys are starting to take the gloves off from a storage perspective. Petabytes of storage, disaggregated storage, no longer just, you know, HCI solutions and stuff like that. I mean, that's good news. I think there's obviously other things they need to add to that to make it really be a storage player, but they're starting to go down that path, and I think there are a lot of opportunities
Starting point is 00:21:36 in the market in general. As far as the whole disaggregation component goes, there are a lot of things that are starting to happen. There are a lot of technologies that are starting to... You're thinking CXL or something like that? What? But I asked the question, I didn't get any answer on CXL. No. But it's things like that that are starting to show where more disaggregated components are going to actually come together, and you can have this unified framework
Starting point is 00:22:08 on which you can utilize this stuff. And I mean, so, RFC 1925, you hear me talk about this all the time? No, I'm going to keep saying it. Like 10 years old, what? Well, I think it's truth number 11: every old idea will be proposed again with a different name and a different presentation, regardless of whether it works.
Starting point is 00:22:28 SGI built this shit years ago. Yeah. They built it in like the year 2000. Or something, yeah. Yeah. The Origin 3000. Yeah. All this disaggregated stuff existed.
Starting point is 00:22:40 They had it. They had basically, here's your router brick that would plug in, here's your CPU bricks, your memory bricks, your storage bricks, PCI, all that stuff would plug in. And here's this disaggregated thing. Now you're starting to hear about, oh hey, we can do this disaggregated thing. Shared memory was in the mainframe, and it was a thing they were talking about. It may not have been gigabytes, it was probably megabytes, but it was still the same thing. But so vSAN Max, I think, is a good play. It's just going to piss off all the storage
Starting point is 00:23:16 vendors, but it's been in the playbook for a while. They're just kind of starting to roll it out. Yeah. A lot of updates to the workspace stuff. It's just not my area. I don't know what's going on. They mentioned autonomous workspaces. I said, what the hell is an autonomous workspace? It's just, you know, a dumb terminal that you're loading apps on and stuff like that. Once again, like every old idea. It's like timeshare, only a little bit better. I won't go there. But it's sort of that game. And so there's some updates there, this whole digital experience. You know, they're trying to do some better monitoring of how you and I would use these terminals. Can we get a 3270 into it?
Starting point is 00:23:58 Don't go there. Okay? I've been there. I know that. I don't need to know. I don't even need to talk about it. 3270s, come on. You'd be surprised what we did with 3270.
Starting point is 00:24:08 You can do an amazing amount of stuff with 3270. Trust me. Yes, yes, yes. Don't go there. All right, the other thing that came out was NSX Plus. They've been rolling it out in bits and pieces over the last couple of years. It's like another one of these things, one network to rule them all. I mean, they want to be the orchestrator for all networks
Starting point is 00:24:26 for your data center. So AWS, Azure, Google Cloud, VMware, they want to be able to have that level of codependency, simplicity, cooperation, whatever. The problem to some extent, and I'm not a network guy, there might be some other network guys in the room, stuff like that, but the problem, they say, is networking. So when I run apps on AWS, or I run apps on-prem, or apps on Azure, setting up the networking is a nightmare. Yeah, this is an overlay on an overlay on an overlay. So I thought there were only seven levels of an overlay.
Starting point is 00:25:03 Don't worry about the last overlay. Okay. Oh, and these are all sitting there at levels that are... No, it's... And when you try to actually go through and expose services, like say you're setting up a Kubernetes cluster and things like that. I did one where I was setting up this Kubeflow thing, doing some AI ML training and all that stuff, and the networking, a flippin' nightmare.
Starting point is 00:25:34 Right. Oh, the setting up the Kubeflow one. There's like, I don't even know how many, like 100 different containers that are running underneath that. Yeah. That basically, oh, but this specific part is communicating on this overlay. This microservice. And this is doing this microservice communicating on that overlay network. And all this crap. And then I'm just like, how do I even expose these services? Right.
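Exposing those services on a cluster that isn't behind a cloud load balancer usually means something like MetalLB, which is what comes up next in the conversation. A minimal layer-2 configuration looks roughly like this; the address range is a placeholder for whatever your lab or home LAN actually allows:

```yaml
# Minimal MetalLB layer-2 setup (illustrative; addresses are placeholders).
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # placeholder range on the local LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lab-pool
```

With that in place, a Service of type LoadBalancer gets an address from the pool; the complaint here is everything layered on top of and underneath that step.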
Starting point is 00:25:58 I'm just like, so they're looking at, you know, my MetalLB configuration and being like, oh my God, what is this thing looking like? And how do I expose it? Oh, and then I'm doing it on a network where I've got to go through proxy connections and all this stuff. And I'm just like, it was a train wreck that hit an iceberg. I think you want NSX Plus. That's what you want. That's what you need. That's what VMware believes. It's the game they're playing; networking is a challenge. Well, I tell you what, it's too damn complicated. And then here's the thing, you know, when it gets too damn complicated, always think about the classic game Mousetrap, right? Yeah, like, oh, here's this thing.
Starting point is 00:26:35 Like, oh, so you go through and you crank all those things, and then, like, I knew this is how it all works. And this is the networking in your data center. And then I just think about, here's this wooden thing with a spring on it. Yeah, man, that's a hell of a lot better at catching mice. And I look at basically what we've got running in data centers right now, and I'm just like, oh my God, if you could make this thing any more complex. Yeah. I think it needs to get simpler. I mean, I just have a single Unix Linux system, and I'm just trying to connect it to my home LAN. And, you know, it's not simple.
Starting point is 00:27:11 It's not simple. It's not as simple as it needs to be. And in protocol design, perfection has been reached not when there's nothing left to add, but when there's nothing left to take away. Ah, famous words. Yeah. RFC 1925. Don't go there.
Starting point is 00:27:24 That's truth number 12. In protocol design, perfection has been reached not when there is nothing left to add, but when there is nothing left to take away. Yeah, and we have gotten to a point where all of the stuff that we're doing in the cloud native workspace is a frickin' giant mousetrap. And I think we really need to figure out ways to simplify it and not make it more complicated. Yeah, I think VMware is trying to go down that path,
Starting point is 00:27:50 but in the process, because of everything they have to control, it's become more complex. Don't make it more complicated. Yeah, it's making it more complicated. You'd have to strip out, sad to say, you'd have to strip out vSphere and ESXi and start over again, and that's not going to happen. Not in my lifetime.
Starting point is 00:28:08 Maybe in yours. I don't know. So the other thing that came out was they're starting to roll out chatbots. So they've got a Tanzu chatbot. They've got an NSX Plus chatbot. They've got a user workspace and autonomous workspace chatbot assistant, quote unquote. They're not calling them chatbots, but they're assistants. And why?
Starting point is 00:28:32 Well, I mean, the view is that here I am. I'm a dumb networking guy. I've got control over this. I've got NSX I've got to deal with. Tell me what I'm looking at. What should I do? How do I get this thing to connect? Please, please help me.
Starting point is 00:28:51 So are they trying to sell this as, like, this is their little self-contained chatbot that is here to help? It's trained on vSphere and VMware knowledge and NSX and Tanzu and Workspace ONE, and it'll help you understand what you're seeing. It'll help recommend solutions to the problems you're seeing, and there'll be links there. It's kind of like ChatGPT, only plugged into vSphere. It's like Clippy, the paperclip. There's been plenty of problems with chatbots in the past, and there continue to be problems, but to some extent, no,
Starting point is 00:29:25 it's way better than Clippy ever was, thank God. And hopefully it'll improve. But you know the problem is, you can't get there without taking these baby steps. So you have to put these chatbots out there and find out what's working and what's not working, and see if you can't get them to work better. Now, next question, are they just putting it out there to say, like, look, hey, this is what we're doing? It was a big announcement. I don't know if it was tech preview.
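Mechanically, an "assistant grounded on product knowledge" is usually retrieval plus a language model: fetch the most relevant doc snippet, then hand it to the model as context. Below is a toy sketch of just the retrieval half, with made-up snippets; nothing here reflects how VMware's assistants are actually built:

```python
# Toy "grounded assistant" retrieval step: pick the doc snippet with the most
# word overlap with the question. A real system would embed the text and feed
# the winner to an LLM as context; the snippets below are invented examples.
docs = {
    "nsx": "NSX overlay networks connect workloads across clusters.",
    "vsan": "vSAN Max provides disaggregated petabyte scale storage.",
    "tanzu": "Tanzu exposes Kubernetes APIs on top of vSphere.",
}

def retrieve(question: str) -> str:
    """Return the snippet sharing the most whitespace-delimited words."""
    q = set(question.lower().split())
    return max(docs.values(), key=lambda d: len(q & set(d.lower().split())))

print(retrieve("how do overlay networks connect clusters?"))
```

Grounding the answer in retrieved product docs, rather than whatever the base model absorbed in training, is also how these assistants try to dodge the classic chatbot hallucination problem.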
Starting point is 00:29:54 It could have been tech preview, but my guess is not, because they were pushing it pretty hard. It's like a marketing dollar spend. Possibly. But it's like this private AI. They took this private AI idea and applied it to the NSX knowledge base and applied it to… Well, the private AI, I think the private AI industry itself, basically training large language models on your data sets,
Starting point is 00:30:23 that is going to be a huge industry. Yeah. Period. I mean, that is going to be huge. It was probably the number one announcement this week at VMware's conference. And literally the amount of horsepower that you need sitting in a rack to do this stuff is insane. It's non-trivial.
Starting point is 00:30:42 Yeah. And I mean, I think one of the things is, for folks out there, if you're running your own data centers, or if you're running your own cloud components but want to train on that data, be prepared for a bill. Yeah, get ready for it, because the amount of power required for that is just absolutely insane. Yeah, and note the CEO of NVIDIA was also here at the show on that floor, and they were hawking the NVIDIA proprietary version of Private AI Foundation, and that makes a lot of sense. It's a good link-up and stuff like that.
Starting point is 00:31:27 Yeah. Jensen knows what he's talking about. Yeah, yeah. It's certainly the wave of the future. Yeah, it's a huge market. And it's a huge opportunity, but there's going to be a lot of challenges for us to hit it from a data center perspective.
Starting point is 00:31:43 Yeah, yeah. If you're thinking about doing this stuff on-prem, like I said, whether you're doing it on-prem or in the cloud, think about money. Think about money. Think about all those GPUs you've got to bring on board and stuff like that. Think about the value that this is going to be to your company. And if the value matches how much you're going to have to spend for it, awesome. Yeah, and the challenge is, you know, how to get corporate governance
Starting point is 00:32:08 and privacy considerations and all that stuff. And, you know, if you put it in your chatbot and your chatbot's public, you know, what are you going to reveal with that? And what aren't you going to reveal with that? So there are plenty of challenges in this space to come for the next decade or more. Yeah. Yeah. And the other thing, obviously, is the Broadcom thing is still plugging away.
Starting point is 00:32:32 The CEO of Broadcom was there. They gave a nice session view of what he was talking about. And, you know, it's more competitive, et cetera, et cetera. It seems like it's going okay. I just fear what's going to happen when it actually clicks in. I mean, VMware has been very innovative over the course of the last 25 years. They sort of led the market and then the market adoption is a response to that. And they're there because they've been innovative. If they close off or stop the innovative engine, it's going to
Starting point is 00:33:05 be a challenge. I was thinking the other day, if you took Pat, Pat G, who was kind of a very innovative kind of guy, and you took like Sumit and Raghu and put them together, I think you make a Pat G between the two of them. Maybe one, maybe not, you know, more than one. But the problem with Pat G is how how you replace a guy like that and I think they have but it took multiple people to make it happen yeah and it's a you know it's never gonna be it's never gonna be him right yeah that's that's doing it and and it's fine that it's there it's different it is just it is just a different growth paradigm. You've got a company that now is focused
Starting point is 00:33:53 on very different metrics. Yeah, yeah. Yeah, and how's that going to play out in this space? From an operational perspective, they run companies very differently than how Pat used to run it. Exactly.
Starting point is 00:34:07 And Raghu and Sumit have pretty much kept the Pat G model from that perspective. And the Broadcom thing is going to look a different world. How that plays out is yet to be determined. We will find out. It was interesting, they said it's on track to be end of October timeframe, frame kind of closed the deal which is pretty impressive seeing how it's all going no okay i can't think of anything else i thought the floor was pretty busy and packed monday night it was craziness yeah it seemed like it was bigger the the hands-on lab seemed a lot bigger than it used to be but i haven't been to hands-on lab in a couple of years, a couple of VMware explorers, so I don't know.
Starting point is 00:34:46 But it seemed like it was huge, quite frankly. And then I didn't hear anything about participation, how many people that were here. They usually try to mention that sort of stuff, but they just don't mention that. I heard. I'm not going to mention it. Okay, don't mention the number. But I don't know.
Starting point is 00:35:01 It looked like it was up from last year. Oh, well. Anyway, I think that's it that's a wrap anything else you want to talk about the show I know I like other than that they you know I think honestly it was a good show boys like getting out here they put on a good show they did a nice job with the analysts they got us all you know well lubricated and all that stuff. And the execs come out and talk to us a couple of times, a Q&A session. So I thought it was good from that perspective. I think that's a wrap, gents.
Starting point is 00:35:34 Thanks for being here and listening to our show. Pay attention. Like us on Spotify, Apple, and Google. And we'll go from there. Until next time. Next time, we will talk to another system storage technology person. Any questions you want us to ask, please let us know. And if you enjoy our podcast, tell your friends about it.
Starting point is 00:35:58 Please review us on Apple Podcasts, Google Play, and Spotify, as this will help get the word out. Thank you.
