Utilizing Tech - Season 8: AI at the Edge Presented by Solidigm - 08x02: Ultra-Converged Infrastructure from Verge IO

Episode Date: April 7, 2025

Successful edge infrastructure must be incredibly reliable and adaptable, especially in the AI age. This episode of Utilizing Tech focuses on the ultra-converged infrastructure offering from Verge.io, with George Crump of Verge.io joining Jeniece Wnorowski and Stephen Foskett. Edge environments often contain a diversity of hardware, especially as nodes are upgraded and replaced, and this can pose serious issues when building reliable infrastructure. The ultra-converged infrastructure concept allows nearly any hardware to be integrated into a unified platform with simple management and scalability. As AI applications are deployed at the edge, organizations will need this level of integration to ensure they are reliable and secure. And as more data is collected, more storage is needed at the edge, making advanced storage management even more important for data protection and resilience.

This season of Utilizing Tech is presented by Solidigm. For more information on Solidigm, head to their website and learn more about their AI efforts through the dedicated site section. Follow Solidigm on X/Twitter and LinkedIn.

Guest: George Crump, Chief Marketing Officer at Verge.io

Hosts: Stephen Foskett, President of the Tech Field Day Business Unit and Organizer of the Tech Field Day Event Series; Jeniece Wnorowski, Head of Influencer Marketing at Solidigm

Follow Tech Field Day on LinkedIn, on X/Twitter, on Bluesky, and on Mastodon. Visit the Tech Field Day website for more information on upcoming events. For more episodes of Utilizing Tech, head to the dedicated website and follow the show on X/Twitter, on Bluesky, and on Mastodon.

Transcript
Successful edge infrastructure must be incredibly reliable and adaptable, especially in the AI age. This episode of Utilizing Tech focuses on the ultra-converged infrastructure offering from Verge.io, with George Crump of Verge.io joining Jeniece Wnorowski and myself, Stephen Foskett. Tune in to learn a little bit more about how edge environments can be more flexible and higher performance with Verge.io. Welcome to Utilizing Tech, the podcast about emerging technology from Tech Field Day, part of the Futurum Group. This season is presented by Solidigm and focuses on AI at the edge and related technologies. I'm your host, Stephen Foskett, organizer of Tech Field Day, including our Edge Field Day and AI Field Day events. And joining me from Solidigm this season
is my co-host, Jeniece Wnorowski. Welcome to the show. Thank you, Stephen. It's great to be back this season. It's great to have you. We had a lot of fun last season talking about all the different components that make up infrastructure. And that's really what we're talking about today
Starting point is 00:01:04 and this whole season in terms of AI and Edge. different components that make up infrastructure. And that's really what we're talking about today and this whole season in terms of AI and Edge. Absolutely, we are diving in with various organizations talking about AI and Edge and some of those organizations where you might not realize there is an AI or Edge component to it. So I'm excited that we have our special guest on today to give us a deeper dive on what that might look like.
Starting point is 00:01:25 Yeah, absolutely. It's funny. You know, it's 2025. Everything's AI now. But, you know, some of it really is. And so I'm really interested to hear how people are going to be building infrastructure to support AI applications. Now, when it comes to Edge 2, we have talked about this
Starting point is 00:01:45 on the first episode and last episode as well. It's a pretty vast definition of what Edge really means. It basically means things that are outside the traditional data center that have a different environmental, different cost structure, different applications, different use case, different supportability and manageability. And one of the companies that is doing
Starting point is 00:02:06 an absolutely phenomenal job of stitching together all of the diverse components that make up IT infrastructure is Verge I.O. So I'm thrilled to introduce our guest today, an old friend of mine, George Crump, who is the CMO over there. And George is gonna tell us a little bit more, but before friend of mine, George Crump, who is the CMO over there. And George is going to tell us a little bit more. But before we do that, let's just hear from you.
Starting point is 00:02:30 Welcome to the show, George. Hi, Stephen and Denise. Thanks for having me. Yeah, great to be here. As you mentioned, I'm the CMO at Verge.io. We are a company that has created infrastructure software. What's unique about it is all the components run as a service of our operating system, and that gives us a lot of efficiency and things of that nature.
Starting point is 00:02:54 And so, yeah, I'm really excited to talk about what we're doing at the Edge and what we're doing in AI with you guys. Well, let's start off by just kind of understanding what is Verge.io. Now, my feeling has been that essentially, there was this whole phase of virtualization, and then there was hyper-converged infrastructure
Starting point is 00:03:12 where they basically pulled storage into basically a virtual machine or something like that. But what you guys are talking is kind of next level. Essentially, what you're saying is we're gonna abstract all of the resources, all of the infrastructure resources. We're gonna make it uniform. We're going to make it organized. We're gonna make it repeatable and manageable.
Starting point is 00:03:35 And we're gonna present that up the stack as basically sort of a super ultra virtualized. Tell us more about what exactly Verge.io is doing. Yeah, in fact, we use the term ultra converged. And because what we've done, I kind of alluded to in the opening is instead of, you know, in hyperconverged infrastructure, to my knowledge, almost everybody's vSAN
Starting point is 00:04:02 runs as a virtual machine, right? If they're doing networking at all, it runs as another set of virtual machines. Even vCenter or any of the management guise, they all run as VMs. And so you got all these VMs that have to coordinate with each other across potentially hundreds of nodes.
Starting point is 00:04:18 It becomes very complex, both from a development and infrastructure standpoint, and also from a user standpoint. And so in our world, everything is one piece of software. So when you install our product, you install one thing. You don't create a VM to do anything. The first VM you create is your VM. And then everything is a service.
Starting point is 00:04:37 Storage is a service. Virtualization is a service. All the networking functionality is a service. And you just turn on and off these services as you need them. The result of that is a high level of efficiency, much, much easier to adapt. We make people, if you look at it, networking is one of those skills that's kind of hard
Starting point is 00:04:57 to really get up level on. There's a lot of storage guys, a lot of virtualization guys. Networking is kind of more abstract. We make people networking experts very, very quickly as a result of the way all this works together. And as you're kind of looking at, you know, just the different types of customers, George, right? Who's really seeking this like seamless integration?
Starting point is 00:05:17 You know, you mentioned the compute and the storage and networking kind of all working together, but what kind of customers are you working with and what makes it so easy for those customers? Yeah, that's probably the question that gets me in the most trouble because the answer is yes. Right. I mean, it can be we've got customers that have hundreds of physical servers all part of a single what we'll refer to as an instance. Then I've got other customers that have, we have a large name brand entertainment company that has locations throughout the United States and each of those locations has two or three
Starting point is 00:05:56 servers in it. All of those communicate back to the corporate office and everything's managed out of the corporate office, right? And so because of the way we wrote the software and really to Stephen's point, because we abstracted it so well, it almost can work in any environment. The only thing we don't do is a single server, right?
Starting point is 00:06:15 We're not an operating system for a single server, we're an operating system for the infrastructure. And I think that that's the key when you're talking about Edge. We talked about that at Edge Field Day many times. We've talked about that on this Utilizing Tech podcast as well. The challenge at the Edge is that you don't have management and operational resources.
Starting point is 00:06:37 You really need something that is kind of plug and forget. I mean, it's not plug and play. It's plug and forget. You bring the thing up and it just works. And it's reliable and it's remote management and it's completely integrated. And the last thing you want to be doing is dealing with complexity. And one of the things you mentioned, George, that is I think very, very true is in many hyperconverged or just virtualized environments, you need specialized hardware. You need VLANs, for example,
Starting point is 00:07:08 you need external switches that have all these capabilities, you need specialists who can deal with a lot of this additional complexity. Whereas when you have something like what you're talking about, it makes everything a lot more uniform and a lot more manageable. How do you handle diverse hardware? Is it possible to have multiple different kinds of hardware? Yes, absolutely. We could support within the same instance.
Starting point is 00:07:40 We could have servers that are from different manufacturers, we can have servers that are multiple generations of Intel processors, we can even have Intel processors mixed with AMD processors, we can have GPUs, which obviously is going to be part of the AI conversation, we can virtualize GPUs. And we're not limited to the big guy in GPUs either, right? And that same abstraction helps us. And it's interesting, the two topics here, because we actually use, I would define it as narrow AI.
Starting point is 00:08:13 I don't want to be guilty of AI washing, but it's a very narrow AI component in our product that automatically optimizes our environment. And so it means we don't have to write code to specific pieces of hardware. The software will actually push the hardware, learn its capabilities, and then know how to utilize that hardware specifically.
Starting point is 00:08:36 And so it's very adaptable. I've got customers that have, within the same instance, they have servers that are seven years old and servers that are six months old. And they all work well together. I've got customers with AMD and Intel in the same environment, all those different things. And then even at the edge, it's the same thing.
Starting point is 00:08:54 You know, you're generally getting some cases, you know, two very, very small servers, and you got to make sure they're highly available. You got to make sure you can manage them, all those sort of things. Now, I'm sorry to jump in here on you, Janice. You said two servers, right? Not three, you don't need a cluster of three?
Starting point is 00:09:11 That's correct. We don't need a cluster of three, we don't even need a witness. We don't have any issues with split brain because, and that, if you've been in this at all, split brain always comes up as part of the conversation. Yeah, that's why I bring it up, because this is one of the things about the edge.
Starting point is 00:09:28 Because if you're talking about difference between two servers and three servers, that is a 50% additional cost. Right, yes. When you're talking about edge locations. And 50% times a thousand locations gets to be a pretty big cost. Right, and part of the challenge with with split brain and why it's a thing and why you typically
Starting point is 00:09:48 need a witness server is none of those companies own the network. Right. We own the network. The network is a service for us. And so we can manage split brain functionality. We have our own voting system that makes sure that the right server has the right data. All of those things are managed, again, automatically in the product, the customer has to do nothing. And so we can tell what's causing the problem because we own the single piece of code
Starting point is 00:10:17 owns all the infrastructure. That's pretty interesting. It makes me wanna just dive in and go backwards a little bit, Stephen and George, and just kind of ask, what pitfalls do you see customers having with, say, alternative solutions in the market? And how is Verge making this better? Yeah, I think what most customers, for the, what most customers for obvious reasons are looking
Starting point is 00:10:46 for in an alternative is something that's less expensive, right? I think that kind of goes without saying. I think that that goal became easier to meet. But it is one of those things. And I think the problem is, is as you look at what's out there in the market, are you finding anything that's any different? Are you finding stuff that's basically the same, is just less expensive, not as mature, not as well supported, all of those sort of things?
Starting point is 00:11:16 That typically becomes the number one thing that people struggle with. The other big problem comes back to what you guys were talking about with hardware. Most customers don't cooperate and have servers that are getting ready to come off of maintenance or CapEx. At the moment, they're also ready to switch to another infrastructure software. And so the ability to run on other people's hardware becomes critical. Right? And so I've got examples where we're running on what used to be storage nodes for a vendor's all-flash array. And we just basically install our software on that. It actually ends up making a really good server. And we've got an eight-node cluster running on something that has somebody else's logo
Starting point is 00:12:04 on it. And we've got multiple eight node cluster running on something that has somebody else's logo on it. And we've got multiple instances of that. When I first started here, I was interviewing a customer, like, oh, send me a picture. And I realized I couldn't publish the picture because, as like most vendors, when they do a turnkey solution, it has their logo all over it. I'm like, well, it's going to look like an ad for them, not an ad for us. Right. So, but that's the real world. There's two things. There's one, you're probably not gonna throw away your existing servers, and you might want the flexibility
Starting point is 00:12:31 to buy something else in the future. And so this abstraction becomes a key element in that. Yeah, that's really important, again, in edge environments because they may have, for example, multi-generation, multiple generations of Intel NUCs out there. They might have, as you said, older systems that they're trying to migrate forward. They might wanna extend some life out of those things. And they might want to migrate these things forward
Starting point is 00:13:03 by, I don't know, sending one new node to every location and adding that into a cluster and kind of retiring the oldest one or something like that. And I think you guys can do all that, right? Yes, absolutely. Well, even more practical, let's say you've had one of those locations running for two or three years and one of the nodes dies, right? One of the downsides of a, I don't want to pick on Intel, but one of the downsides of a Nook or anything like that,
Starting point is 00:13:28 and especially in an edge environment, they're not treated well. By definition, they're not in a, a lot of times they're under the cash register drawer, right? And that's not data center quality typically. So they break. And so the problem is two or three years from now, you might not be able to get the
Starting point is 00:13:45 same server that you started with. Right. And so what do you do? Do you have to send out two new servers and replace the whole thing, even though you got one that's working perfectly well? So again, with us, you just send what you got and the software will figure it out. I mean, so you're touching on a little bit around, you know, overall cost savings and on a little bit around, you know, of overall cost savings and its overall quality and reliability at the edge. But George, what do you see in terms of your overall components being supportive of TCO? Like, how are you utilizing, say, storage differently today than you were maybe a year ago? Well, I think that the big thing with storage is most of the world,
Starting point is 00:14:28 if not, well, let's just say most of the world has definitely gone flash. But we now have generations, if you will, in flash, right? Right now, we're sort of in the shift, probably for most of 2025, we're going to be in a shift of moving from TLC to some form of QLC, either all or some. And how do you manage these dramatically different technologies, right? And for some customers, frankly, it won't make a difference. They're just not pushing the hardware
Starting point is 00:14:57 enough, right? Where other customers, it could make a significant difference. So the ability to manage different styles of storage, again, within the same infrastructure is really key. The other thing that's interesting, we spent so much time as an industry working about, and I know Stephen knows this, but auto-tiering and moving data from here to there and all that kind of, guess what?
Starting point is 00:15:22 What you really need to be able to do is, most customers don't need that. What they need to be able to do is just move a VM from tier A to tier B whenever they need to, right? And to be able to do that without taking the VM down. That's the kind of stuff that we focus on. Now, George, one of the things that we haven't heard you talk about too much yet is AI.
Starting point is 00:15:43 And of course, this is, you course, that's something that's really coming to the edge at this point. It's really coming everywhere to the enterprise at this point. What are you going to do? How would you apply this ultra-converged concept to systems that might need GPUs or special tensor processors or something to process data at the edge. So there's a couple of things
Starting point is 00:16:11 that we're gonna be able to do. So the first, to set the first layer, remember we do have that narrow AI componentry built into the product and that's what gives us our optimization. It can, I don't wanna say the word think, but automatically optimize itself for the different hardware and things like that. Remember, as a company, our philosophy is one piece of code, everything runs as a service.
Starting point is 00:16:35 And we think there's an opportunity, a significant opportunity, and we're seeing it already in some of our customers, where I want something like a chat GPT, but I don't want anybody else to have access to it, right? The not great example I always use is, if I was the CEO of Coca-Cola, it might make sense for me to put my code into something like a chat GPT, but I'm not putting it out on chat GPT,
Starting point is 00:17:01 and I'm not picking on chat GPT, but anything cloud-based, right? I'm not picking on ChatGPT, but anything cloud-based, right? Not going to do that. So we think this idea of a sovereign AI cloud, we're already seeing it. And we think customers are going to like that, because now you can load your stuff, your secret sauce, into this private thing and get assistance.
Starting point is 00:17:21 As an example, we're now running this internal at Verge.io. And so our private LLM has our source code. It has all the technical documentation. It has every successfully answered support ticket. And so as an example, I can write a paper, which I do occasionally, load it into that and say, is this accurate? Did I miss anything?
Starting point is 00:17:45 And instead of the cloud version of that, where you get sometimes wild answers, it knows. It can actually answer that. So translate across that across many different types of customers who are going to have private sensitive data that it might make sense to have AI analyze, but they don't want to put it out there. Well, the challenge is now you're talking
Starting point is 00:18:07 about a massive skills gap, right? Not everybody's going to be able to hire a guy to go set up an LLM and teach it and do all the things that need to happen to make that happen. With our software, within weeks now, you're going to be able to click a button, install an LLM,
Starting point is 00:18:25 we'll automatically, it's a service. It's not another VM. Remember, our philosophy is everything has a service. So we'll install everything you need as a service, mount the NAS share, which is also by the way, we have file sharing as a service, we'll mount the NAS share, you load your training data, it starts pulling in all the training data
Starting point is 00:18:45 and within a few days you have a functional thing that you can chat with to start getting information out of or doing whatever you would do with it. So as a service you say you're not necessarily locked in, right? Right. But you have that support and the ease of integration to pivot very quickly from where you're sitting today. Yeah. So what we're building essentially is the service will be the engine and then the actual model you'll use will be handled kind of in the same way we would do a VM today. We don't have every... We have VM templates and you can pick whatever distribution of Linux you want or whatever, and it'll go out to the Internet, download it and configure
Starting point is 00:19:31 your VM, right? That's exactly what will happen here. We'll show you all the available LLMs or models, I should say, and it'll pick the one you want. You just pick the one you want, it'll pull it down and you're ready to go. And I think the other beautiful part of this is I think, and I don't know how much of this stays this path, but it seems like we have different models that are better at solving different problems.
Starting point is 00:19:55 Like, there's some that seem to be better at research, others that tend to be better at graphics, things like that. Well, with this approach, you could very quickly spin up an LLM that's going to focus on generating imagery for you, another one that's going to focus on research, another one that's going to focus on general Q&A, and have all of those running very, very seamlessly. Now, the other part of this that gets very interesting is, you know, Steve and you were talking about it, is the GPUs and things like that. Well, what if you don't need a GPU?
Starting point is 00:20:27 What if I can abstract it enough that I could just run this right off of processors? It might take a little longer, but if, you know, you look at the kind of publicly available options and you're expecting an answer instantly. Well, if it's just you locally in your organization or at the edge, if it takes two minutes instead of 27 seconds, do we care? If that means I don't have to buy a $10,000 GPU? If I'm the CEO, I can say, yeah, you're going to wait two minutes to save that amount of money.
Starting point is 00:21:01 And so the ability to do that would also be part of this solution. And, you know, it does seem like, you know, what you're describing is going to be the sort of thing that people are going to be wanting to deploy pretty soon. You know, I'm not sure that they're ready to truly transform the business with AI yet, but I think that they are going to want to start infusing AI into all aspects. One of the other elements of the picture that we've seen at the edge is that the more businesses deploy AI, the more data they're collecting and the more data they're processing. This is causing something of a storage crunch at the edge, because essentially people are, you know, turning up the resolution on cameras,
Starting point is 00:21:49 adding additional cameras, adding additional sensors, adding additional, you know, metrics and observability and telemetry collection, turning up the frequency of data collection. And all of this is requiring just more and more and more storage. And that causes concerns in terms of the performance and the reliability of storage, especially in suboptimal environments that some of these things may be deployed, whether it's in a retail store, like you mentioned earlier, or in a factory, or as we were talking about earlier, on the top of a windmill,
Starting point is 00:22:25 or in a military application or something. I think that's another aspect that you're bringing to the table here, is because you have integrated storage, advanced storage features for reliability, for redundancy, it really helps to make use of some of these bigger and bigger storage devices, right? Yeah, absolutely. And, you know, I don't know if you've met my friends at Solidine yet, Stephen, but they brought out this. Yeah, maybe they brought out this hundred and
Starting point is 00:22:54 twenty two terabyte drive and we're going to sell all of them. Right. Because that wasn't a commitment, by the way. But the but but that kind of that kind of density in a very, very small form factor becomes suddenly very interesting because of that. The other thing I want to touch on that you reminded me of is one of the, again, as a service in our product is the ability to do multi-tenancy. We call them virtual data centers. There's a couple of reasons why I'm bringing that up. First of all, in the edge type of deployment, that edge could be what we would call a virtual
Starting point is 00:23:28 data center, physically running on a couple of nooks there. But we could copy that entire – because we've encapsulated at the macro level the virtual data center instead of a VM, I can copy that entire data center to a central office. And if one of the aspects of local offices, as we've already kind of touched on, is the servers underneath the cash store or whatever. If something goes wrong there, I can also have it immediately spin up at the corporate office until that remote office comes back online. Now, where that applies with AI is what you're saying is
Starting point is 00:24:05 I kind of want to take a crawl, walk, run approach to this. I don't want to turn everybody loose on this thing. Well, we can also, again, level of encapsulation. Because I can clone an entire data center, I can take a copy of your data center, put it right next to your production data center with all the same stuff, and you can start firing up AI on it and see what happens.
Starting point is 00:24:26 If it doesn't work, you can delete the entire data center. Who cares? Because you've got the production one right there. So it allows people to go through this experimentation phase much more quickly and safely because you have this object now. If you look at most solutions, they focus on doing things at the VM level.
Starting point is 00:24:46 The problem with that is think of all the things you miss if you're just copying a VM. You don't have any network settings, kind of important in the Edge, right? You don't have any storage settings, also kind of important in the Edge. You really don't even get a lot of the VM configuration stuff. And so the ability to encapsulate everything as one thing and do so consistently is a very powerful capability. It reminds me, George, of the demo you guys recently did, I think, live, right? We were able to kind of recover everything.
Starting point is 00:25:15 Stephen, we should bring George back and have him do a live demo at a future field day, is what I'm thinking. I think that would be very cool. George, tell us a little bit about that demo, because I know it's recorded somewhere, but we could always definitely bring it back for a live audience at some point. But you guys failed everything and then just brought it all
Starting point is 00:25:36 back up and all the data was there, correct? It doesn't sound great to say that he failed everything. But I think we understand what you mean. Yes, George, explain how you failed. It sounds like you're describing my college journey. But anyways, so there's two, we did it in two directions, right? One is, you know, two, if you will,
Starting point is 00:26:01 not edge data centers, cross replicating to each other, basically protecting each other. And then we did an edge to a primary, right? So because of this podcast, let's focus on that. And what we used was a protocol called, or a capability called BGP, which is built into our networking service. And what you can do with that is you can break the rules. You can have the same IP address coming out
Starting point is 00:26:27 of both virtual data centers, the one that's running active in the edge and the one that's running at the headquarters, except you set different priorities so that the corporate data center maybe is a priority five and the edge is a priority one. Well, that means that the only IP address that ever gets seen is the higher priority item. Well, if there's a failure, obviously priority one goes away.
Starting point is 00:26:52 Priority five is now all of a sudden the highest priority, and it starts broadcasting its IP address. And so all that would have to happen, so imagine a retail location, like a warehouse, customer warehouse sort of thing, where they're walking around with iPads or phones or whatever they're using. like a retail location, like a warehouse, a customer warehouse sort of thing, where they're walking around with iPads or phones or whatever they're using,
Starting point is 00:27:08 all they'd have to do is hit refresh and it would immediately, without IT doing anything, goes back to what you were talking about, Stephen, is all of a sudden they're just connected to corporate. And with that type of device, you probably don't even notice a performance difference, right? It's the wifi that's the bandwidth issue. So all of that's built into the core product.
Starting point is 00:27:31 It sounds honestly almost too good to be true. And that's, you know, it's funny, George, I think I remember the first time you told me about this, I remember thinking, it just can't be. You know, that's just not, you know, but it is proven to work. And you guys have successfully, you know, you got a bunch of customers signed up and you're, you know, you've got this deployed all over the place. It sounds great. And also of course, as companies are looking
Starting point is 00:28:00 for an alternative to VMware, I think a lot of them are looking at, you know, this as a potential VMware alternative because y'all were already supporting many of the same workloads that people were with VMware. Yeah, we kind of talk of it. If you're making that change, you're obviously gonna make
Starting point is 00:28:19 an important infrastructure decision, right? If you're gonna do that, even if AI isn't on your radar screen right now, why don't you do something that can do all that, the stuff you need to get done today, but if somebody throws an AI project on you, all you gotta do is click a button, start a service, and you're ready to go, right?
Starting point is 00:28:37 It just makes sense. And by the way, less expensive, right? So it just makes a lot of sense. Too good to be true, before I joined, you know my background as an analyst, I ran this thing for five months without even telling Verge.io that I was running it, right? Cause I could not believe it.
Starting point is 00:28:55 And I was just in shock. So it's a fun company to work for. We're just good people. Right now we're enjoying a hundred percent customer satisfaction, which always makes me a little nervous saying it because all you got to do is make one guy mad. But, you know, it's just rock solid product
Starting point is 00:29:13 and it works day in, day out. That's awesome. You know, Janice, this has been kind of a cool conversation about how compute could work at the edge and the challenges that they're facing and a solution to those challenges. You know, what's your reaction to this? I mean, let's wrap up with a bit of a summary from both of you.
Starting point is 00:29:32 You know, what's your reaction overall to how Verge.io helps AI at the edge and what that means for customers? Yeah, I think this is a fascinating solution. And, you know, a lot of organizations out there are running on VMware, vSAN, and maybe some other options. But I think what Verge has here is really easy to integrate. Like George said, it's an all-in solution as a service, ready to go. It's very flexible.
Starting point is 00:30:05 So I'm excited to see what they continue to innovate on. I know we just came out of Nvidia GTC and lots of excitement around AI and not just within HPC, but now we're looking at HCI. So I think this is a really exciting opportunity when it comes to what Verge is doing. Yeah, George, thanks for giving us this overview. I don't know if you have a, I guess,
Starting point is 00:30:30 what would you like to tell folks listening to this about AI at the edge and Verge's place in that? Yeah, I think the first thing is, we're not taking our eye off the ball. We're still focused on providing an infrastructure software alternative. But again, it kind of goes back to what I just said, right? If you're going to go through that process, and I'm not going to kid anybody,
Starting point is 00:30:54 no matter what you do, it's not like you just snap a button. You're not switching from Microsoft Word to Pages, right? This is a pretty big deal. And so you need to think about it. But if you're going to do that work, also have something that's going to prepare you for, you know, a larger edge deployment. You know, one thing I didn't mention, I probably should be fired for. We have a global display that you can see all the different sites and things like that. This isn't recorded and nobody, you know, nobody will know. Nobody will know. Okay.
Starting point is 00:31:22 Okay, good thing. I love live. But anyways, so we have that. And then, you know, the ability to give this flexibility so whatever comes down at you next, you've got an infrastructure that is, you know, not, it's not marketing, right? We've proven the ability to adapt to new technologies incredibly quickly. Well, this sounds great. And thank you so much.
Starting point is 00:31:45 I'm sorry to inform you that this is actually recorded. Thank you so much for joining us on this recorded episode of Utilizing Tech. Before we go, George, where can people connect with you if they want to continue this conversation? Yeah, sure. So the best place to go is our website, verge.io. All the information you need is right there.
Starting point is 00:32:07 And there's essentially two paths today. There's people that want to look at an alternative infrastructure, and then there's guys that want to talk about AI. So both of those are pretty clear on the site, so I would just go there. Excellent. Well, thanks for joining us. And Janice, welcome back to another season of Utilizing Tech. Where can people learn more about SolidIME? Thank you, Stephen. I appreciate it so much.
Starting point is 00:32:30 Yeah, same. Everyone can go over to solidime.com forward slash AI for more specific AI information. And then I'm just a message away on LinkedIn as well. Excellent. Well, thanks so much for joining us. And thank you for listening for this episode of Utilizing Tech. I linked in as well. Excellent. Well, thanks so much for joining us. And thank you for listening for this episode of Utilizing Tech. You can find this podcast in your favorite podcast applications, as well as on YouTube
Starting point is 00:32:52 if you want to see what we look like. If you enjoyed this discussion, please do leave us a rating or a review. We would love to hear from you. This podcast is brought to you by Solidheim, as well as Tech Field Day, part of the Futurum Group. For show notes and more episodes, as well as Tech Field Day, part of the Futurum group. For show notes and more episodes, head over to our dedicated website, which is utilizingtech.com,
Starting point is 00:33:10 or find us on ex-Twitter, BlueSky or Mastodon, at Utilizing Tech. Thanks for listening, and we will see you next week. Music
