Utilizing Tech - 05x22: Edge Innovation is Coming from All Directions

Episode Date: October 2, 2023

As we've discussed all season on Utilizing Edge, innovation is coming from all directions, including hardware, software, and applications. This special crossover episode of the On-Premise IT and Utilizing Tech podcasts features Edge Field Day delegates Brian Knudtson, Ned Bellavance, and Jody Lemoine discussing their perspectives about edge innovation with Stephen Foskett. The primary drivers at the edge are integration, efficiency, and connectivity, as well as the unique needs of the applications there. Starting with hardware, customers are headed in two directions, with more enterprise availability features deployed in some locations and less-capable hardware in others, both in terms of compute and networking. At the software level, most edge infrastructure is hyper-converged, meaning that multiple layers of the stack are integrated in software and managed as one. Although intended as an application platform, Kubernetes is being deployed as a packaging abstraction and distribution solution at the edge.

Host: Stephen Foskett: https://www.linkedin.com/in/sfoskett/

Guests:
Ned Bellavance: https://www.linkedin.com/in/ned-bellavance/
Brian Knudtson: https://www.linkedin.com/in/bknudtson/
Jody Lemoine: https://www.linkedin.com/in/jodyl/

Follow Gestalt IT and Utilizing Tech
Website: https://www.GestaltIT.com/
Utilizing Tech: https://www.UtilizingTech.com/
Twitter: https://www.twitter.com/GestaltIT
Twitter: https://www.twitter.com/UtilizingTech
LinkedIn: https://www.linkedin.com/company/Gestalt-IT

Tags: #EFD2, #Edge, #UtilizingEdge, @UtilizingTech, @GestaltIT, @SFoskett, @Ned1313, @GhostInTheNet, @BKnudtson

Transcript
Starting point is 00:00:00 Welcome to Utilizing Tech, the podcast about emerging technology from Gestalt IT. This season of Utilizing Tech focuses on edge computing, which demands a new approach to compute, storage, networking, and more. I'm your host, Stephen Foskett, organizer of Tech Field Day and publisher of Gestalt IT. On this bonus crossover episode, we are actually going to share the same discussion, the same content, with our On-Premise IT podcast, because we've got Edge Field Day going on this week. And frankly, we felt like it made sense to do our traditional pre-Field Day episode on both the Utilizing and the On-Premise podcast feeds. So what you're about to hear is an episode of On-Premise IT, but it's focused on Utilizing Edge. So, you know, it's both things at once.
Starting point is 00:00:54 Today, we're discussing the combination of factors that are driving new edge technology, whether it's hardware, software, applications, and more. And we're discussing it with the delegates from our Edge Field Day event. If you're interested in what's going on here, I definitely suggest taking a look at Tech Field Day on LinkedIn or techfieldday.com, where you can learn more about the folks that are on this podcast, as well as the other Edge Field Day delegates, the sponsors, the independent roundtable discussions, the Ignite talks, all that stuff that we're hosting here at Edge Field Day this week. Also, I do recommend looking for the On-Premise IT podcast in your favorite podcast application, since you'll hear conversations like this that go beyond edge,
Starting point is 00:01:42 if you check that out. So now let's hop over to the On-Premise IT podcast content. So before we get into the conversation, let's meet who's on the panel today. Hi, I'm Ned Bellavance, nedinthecloud.com, and you can also find me on LinkedIn. Hi, I'm Jody Lemoine. I'm an independent network consultant. I'm at ghostinthenet at hachyderm.io. And you can also find me on Twitter slash X at ghostinthenet. I'm Brian Knudtson. I'm a technologist, problem solver, marketer, do all sorts of things. You can find me on LinkedIn more than anywhere else, as well as Mastodon at bknudtson at vmst.io. And I am Stephen Foskett. You can find me at sfoskett on ex-Twitter, where I'm not as active anymore, but you'll also find me at sfoskett at techfieldday.net. So one of the things that I love about Tech Field Day is the kind of conversations that we get behind the scenes with the delegates. And inevitably at X Field Day, well, not X Field Day, but, you know, at whatever Field Day event, the conversations revolve around the innovation that's happening in the space.
Starting point is 00:02:56 Now, edge has been just an amazing, amazing topic for us to dive into here, both with Edge Field Day, but also, of course, with the Utilizing Edge podcast, because it occurs to me that there is just so much innovation happening there. Everything from hardware to software to applications to networking, it's all there. So, Brian, let's kick us off. Talk to us a little bit about where you see innovation coming from. Yeah, I mean, that's one of the things as I've been listening to this season of Utilizing Tech, and, you know, preparing for this Edge Field Day event. A lot of the conversation focuses around hardware more than anything else, I would say. But there's always these elements of, you know, using a hypervisor on the edge.
Starting point is 00:03:46 And should we use containerization on the edge? We talk a lot about the data and the storage aspects of things, two separate topics, of course. And I haven't heard a conversation that really talks about the fact that, hey, all of these matter at the edge. Because how they interact with each other matters. It's a stack. They all interact with each other. They all need to have the same goals, allow for the proper communication channels, allow for the right technologies to interact in order to give us those advantages on the edge. So I think the innovation is at every level of the stack or should be at every level of the stack, that no single piece of it should live independently.
Starting point is 00:04:34 Like, you know, there's a lot of talk about using NUCs at the edge, and they're a great platform for that because they're low power, they don't take up much space. They're quiet. That's great. But there's some trade-offs that have to happen in order to run on that platform. And you have to account for those. Maybe the hypervisor needs to deal with it differently. Maybe the operating system needs to be more low powered to account for that. Maybe the application needs to be smarter about that. So I think that every level matters. And those companies, those organizations that are thinking about it at all levels are going to do a lot better in that space.
Starting point is 00:05:20 I would say, like, for me, the defining characteristic of something running at the edge is, for starters, that you're much more resource constrained than you would be in a data center, let's say. So efficiency really matters. And so a very tightly coupled stack, where everything is streamlined and sort of purpose-designed to work well together, is going to be much more important than a general-purpose device that you might have in a data center that's capable of running multiple different hypervisors. And, you know, if it's a little inefficient, that's okay. It's not a huge deal. So that's, like, one major constraint that I think is driving the innovation across the entire stack. And then the other major constraint is usually network connectivity of some kind. And the fact that typically you're either going to be running in a somewhat or occasionally disconnected scenario, or sometimes it's just going to be a very low bandwidth scenario.
Starting point is 00:06:10 So once again, you can't make the assumption that you have immediate access to the entirety of the Internet to download whatever thing you forgot about or need. That also means setting up some sort of caching or local storage that your edge application can take advantage of. That's also going to be, you know, tightly packed into the stack. How do all the components play into it? I see hardware as definitely playing a role, but even more so, I think the higher-level abstractions are playing a huge role as well, because those tend to be very leaky and inefficient in the data center. I'm thinking about when people run Kubernetes on virtual machines that are sitting on hardware and then plop containers on that. That's like four levels of abstraction right there. And really, you just want to run a process on bare metal if you can.
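The local caching idea mentioned above is worth a concrete sketch. Here is a hypothetical read-through cache in Python, not from any vendor discussed: the edge application tries the uplink first, persists the last good answer, and falls back to that copy when the network is unavailable. The `fetch_remote` stand-in and the file paths are illustrative assumptions.

```python
import json
import os
import tempfile

def fetch_remote():
    # Illustrative stand-in for a call to a central service; here it
    # simulates the occasionally-disconnected uplink by always failing.
    raise OSError("uplink unavailable")

def get_config(cache_path, fetch=fetch_remote):
    """Read-through cache: try the network first, persist the result,
    and serve the last good local copy when the site is disconnected."""
    try:
        data = fetch()
    except OSError:
        # Offline or low bandwidth: fall back to the cached copy
        # (raises FileNotFoundError if no sync has ever succeeded).
        with open(cache_path) as f:
            return json.load(f)
    with open(cache_path, "w") as f:
        json.dump(data, f)
    return data

if __name__ == "__main__":
    path = os.path.join(tempfile.mkdtemp(), "config.json")
    # Seed the cache as if a previous sync had succeeded...
    get_config(path, fetch=lambda: {"mode": "edge", "sync": 1})
    # ...then lose connectivity: the stale-but-usable copy is served.
    print(get_config(path))  # prints {'mode': 'edge', 'sync': 1}
```

Real deployments would layer freshness checks and invalidation on top, but the shape, a local copy that keeps the site working while disconnected, is the point being made here.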
Starting point is 00:07:03 So it's whatever these other layers can do to either be more efficient or get out of the way when it comes to running the processes. So I think the biggest innovation actually needs to happen at the software level rather than at the hardware level. Yeah, we're definitely seeing a lot of different use cases for edge. It's kind of a new idea. So we're seeing things where you just can't put a VM out there because you've got no x86 compute. You've got no hypervisor. You've got two gigs and a container architecture. Have fun with that.
Starting point is 00:07:54 And that's what you're working with. I think you're going to have as many varied architectures for the edge as you have edge routing instances or edge environments. So what you're going to see at the branch in a large deployment is going to be very different than what you're going to see in a sub-200-person organization that's relying on software as a service, for example. This goes to what we've been talking about on Utilizing Edge all season, that the unique elements of the edge, the things that make edge different
Starting point is 00:08:27 from the data center, because honestly, it's all the same technology, but the things that make it different are exactly the constraints that Brian, Ned, and Jody were just talking about. You know, we're looking for things that are efficient. Absolutely, Ned, 100%. We need it to be something that can be deployed in volume, that makes sense to deploy at the kind of scale that we're talking about. We're looking for integration, because traditionally edge has not been integrated. It's been anything but. It's been a mess of all sorts of various hardware, virtual machines, containers, and just trying to get your hands around the networking and connectivity aspects. And then the other thing, as you said, is, you know, kind of answering that call. We've seen a lot of new products, especially, you know, application packaging with Kubernetes, especially new software stacks, as we're going to hear about
Starting point is 00:09:26 at Edge Field Day, and hardware. And so let's start working our way up the stack from hardware. Now, one of the big topics we talked about this season was the surprise that Intel canceled the NUC because the NUC was one of the platforms that was very popular at the Edge. But was the NUC really a great platform? No, it was just a cheap and ubiquitous platform that was well-supported and worked great, you know? Wait, that sounds like a great platform. But there are other platforms, as you mentioned, and we're starting to see a lot more interest
Starting point is 00:09:57 in maybe kind of moving a little bit up the stack into a platform that's just a little more capable than the NUC ever was. So I guess let's dive into hardware. Who wants to go first? I mentioned networking hardware too. I'm happy to do that. The interesting thing about hardware at the edge is that, yeah, your requirements are going to be very, very different. And the thing that made the NUC, or Nuke, or however you want to pronounce it, very popular was that it was inexpensive, powerful, you could put it at the edge, it would handle just
Starting point is 00:10:30 about anything. Now that that's out of the picture, you're seeing things go in two different directions. You're seeing people beef up what they have at the edge, maybe have hypervisors, maybe have whatever. But you're also seeing a scale-down as well. We're seeing routing platforms, for example, that have container support on them. We're seeing people move down to things like ARM-based platforms, Raspberry Pis, container orchestration with that. Very low power, very easy to manage remotely, and just easy to put smaller workloads there. You only need to look at the more enhanced stuff when you have bigger workloads. And at that point, you know,
Starting point is 00:11:11 how much are you leaving at the edge versus what are you putting up at the data center anyway? I'm seeing a lot of stuff on some of the hardware I'm working on where I've got small container support built right into the routing platform. I can put some of the compute aspects that I need right there and be done with it. And that's proving to be very versatile. So I think you're going to see a whole lot more Raspberry Pis out there too. The Raspberry Pi 5 just got announced. I think it's funny that Raspberry Pi
Starting point is 00:11:43 was like the accidental edge compute. That was never the original intention of the device. It was supposed to be, you know, a low-cost computer that people could tinker around on. And people did. And then they're like, oh, I can buy 100,000 of these and roll them out to all of my locations. And if one dies, it doesn't matter. I'll just swap it out with another, and I'm out 35 bucks. I can keep spares on hand. This is great. And then of course you ran into
Starting point is 00:12:09 all the limitations of the Raspberry Pi, which then led to, I think, a second generation of these single-board computers that weren't necessarily from Raspberry Pi, but took some of those lessons and said, hey, we need something a little more robust than an SD card. So let's see what we can do about that. Oh, people want to hook up more performant cameras to it to monitor machines as they're manufacturing something, or do some sort of AI processing. Maybe we can throw some specialized chips at it or add a PCI Express bus to it.
Starting point is 00:12:41 And so we have this sort of second generation of Raspberry Pi alternatives. And then if you need to step up to something more traditional, you've got the NUC, which, again, was another accidental edge computer, certainly not intended for that. One big thing that we haven't talked about so far, but I think also is really important when it comes to edge, is the sort of idea of zero touch or minimal touch provisioning when it comes to any of this hardware and the software that sits on top of it. You know, if you have 1,000 locations or 10,000 locations, rolling a truck out to every location when something goes wrong or needs to be replaced
Starting point is 00:13:16 is just not an option. So you need to be able to have a not technically adept person be able to swap that component out and literally just swap the wires. Because if it's going to be any more complex than that, then you're going to run into difficulty very quickly. I worked in retail for a long time and then on a retail help desk. And I can tell you that store managers and cashiers have no interest in learning the difference between a serial cable and a console cable and a keyboard port. And they will mash the thing into the other thing as hard as they can to make it fit. And then they've broken everything. So making it as easy to swap and as low risk as possible is going to be a big thing for hardware, but also the software that then gets deployed on top of it needs to play nice with this zero touch provisioning and easy swap-out of devices.
Starting point is 00:14:11 Yeah, I would definitely agree with that, you know. We talked a lot about accidental architectures that have become edge devices, and that's the nature of IT. Like, we always end up with accidental architectures that then get adopted because they actually work. You know, we could talk about all sorts of shadow IT stuff for days, but to me, that's the crux of the industry right now: edge is now becoming a real use case that people are actually designing for. The fact that Edge Field Day exists is a testament to that. And being able to take those accidental architectures.
Starting point is 00:14:50 So if you look at, like, HPE, they've got their MicroServer that looks a lot like a NUC. You've got, you know, the Raspberry Pi 5, as you mentioned, upgrading to become more usable in those use cases. Like, they're moving away from their traditional place of being kind of an IoT platform to being more of an edge platform, which is an interesting pivot for them. And from what I hear, it's probably going to incur a similar cost delta there too. And what we need to see more of, I think, from the hardware perspective is more adoption of these platforms that through accidents became the standard, but are far from
Starting point is 00:15:33 perfect, are far from where we need to go. I'll tie back to one of the things that Jody mentioned that I think is super important, which is convergence. Hyperconverged is an obvious way to go to the edge. The way hyperconverged has been done is not necessarily the best form factor, the best deployment, for edge. Having been in that space, I can say that it was a challenging use case for everybody. But when we think about more of the true convergence aspect of things, being able to put compute into a router that also includes switching functionality, that also includes whatever else you would need in a branch office type scenario, all in a single box.
Starting point is 00:16:20 So it's literally plug in a cable, plug in a bunch of network cables, plug them in where they'll fit. Don't force it. Plug it in where it'll fit and you'll be good to go. And, you know, we have enough differentiation in plug types these days that that should be possible. But again, those ports need to be super flexible so that you can manage them through software. And that kind of gets us to the next layer, probably. Whatever it is we're using, I think the key bit is that we don't rely necessarily on on-site resources. Because let's face it, the technical people are not out at the edge. That's not really where we find them. So whether we're taking the angle of, like, Cisco's, what is it, their ENCS, the Enterprise Network Compute
Starting point is 00:17:28 System, where they're essentially building everything into a remotely managed hypervisor, or we pick a platform, whether it's the router and containers or something else, we really need to make sure that that thing is completely remotely managed. The original NUCs, for example, their big failing was that they didn't have an iLO component. If something went wrong, well, you were walking someone through it. So having a primary platform that's easily reachable, no matter what it is, is key to anything at the edge, I think. Yeah, and that's, I think, what we're going to see tomorrow at Edge Field Day and all throughout: basically kind of, okay, so the NUC was a great start. You know, which direction should we head? And I totally agree with what you were just saying, that, you know, on the one hand, we've got people deploying cheaper, disposable solutions.
Starting point is 00:18:24 And on the other hand, we've got people deploying better enterprise-level solutions. So, like, one of the big differentiators with the MicroServer in HPE's line, and with what Dell and Lenovo are doing, for example, is that they're offering more high-availability hardware features like redundant power supplies and multiple drives, and even iLO and things like that, at the edge, because it gives remote administrators a little bit of peace of mind instead of having to deploy more and more disposable units. But of course, a lot of that stuff is coming in software. And I think that that's kind of the next step here. So if we look at the software level, I think, Brian, nobody probably here knows more about HCI than you, given your background and your focus. HCI, or what's called HCI, hyperconverged infrastructure, is basically software that takes hardware and does all the things needed to build a reliable compute stack on it. Maybe you can define it a little bit better than that. But we're definitely going to be seeing that from NodeWeaver. We saw that last time from Scale Computing and ZEDEDA. You know, StorMagic is going to be talking about that as well. Software-defined, software-driven storage. Tell us a little bit
Starting point is 00:19:40 about how HCI fits at the edge. Yeah. So HCI is one of those funny terms that everybody wants to do, but they define it the way they want to. I'd say it's still very much a marketing-defined term right now, much like cloud was for most of a decade. And even now, I'm sure we could still argue about what a real cloud really is. So HCI, the concept, really is to take multiple layers of the stack and cram them together so you manage them as a single thing. My history is with SimpliVity, which is why I've been pointed to as the expert on HCI. We made sure one of our big things was that the management of it was within vCenter, so that you had a single
Starting point is 00:20:31 interface to manage that entire stack. That was a huge thing for us to say: this is part of hyperconverged. It's simplifying the management. We want to minimize the management of the hardware platform. We're going to automate as much of it as we can and make it so that it's as simple as possible. And you see others doing the same thing. Like, SimpliVity was not the only one doing that. And so taking all of that into the concept of trying to make it as simple, as single a layer as possible, you know, from the hypervisor all the way down, is the key to me when we talk about hyperconverged. And you see different implementations, different flavors of that. And what I was talking about before is taking that into the switching layer
Starting point is 00:21:17 and, you know, the networking layer, making that all one single thing. Like, can you create a single interface that allows you to manage not only the virtual machines themselves that you may be running there, but also the SD-WAN, and have the SD-WAN managed in that same scenario, and simplify that as much as possible so that you don't need a networking expert in order to manage it
Starting point is 00:21:38 and a virtualization expert to be able to manage it, and yada, yada, yada, all the separate disciplines we've traditionally had. So combining those all in a way that a single generalist can manage it all very effectively is, to me, really the key, and a huge thing for the edge: to be able to get to that point where, like we said, you need to be able to manage it remotely, because you cannot assume that there is going to be an IT expert on the other end, and to make it maybe even simple enough that a non-IT expert can do it, because remote hands may not always be available to do something with it. When you say hyper-converged infrastructure, I think of the, you know,
Starting point is 00:22:19 the original idea behind it was, hey, instead of having all these, like, you know, pizza box servers, and then we've got our storage over here, and we've got our networking over here, let's start pushing it all together into one deployable unit. And the intention was never to do it at the edge. But now you're talking about hyperconverged in a different context, which is hyperconverging more portions of the stack down to something that one person can manage. And to me, that sounds an awful lot like cloud. Because if I think about the way that a lot of us grew into the cloud, we didn't have our storage team managing the cloud storage
Starting point is 00:22:59 and our network team managing the cloud networking, etc. You had the cloud engineer who would go out and configure and manage all these different components. And they were able to do that in part because the solutions designed by AWS and Azure and Google had a lot of defaults and pre-configured things in them, and you could address it all via an API. And so I'm imagining the same exact sort of scenario needs to exist at the edge, where things are simplified, the solutions are pre-packaged to a certain degree, and they have sane defaults. So that, like you said, you don't need a specialist for each area, you just need
Starting point is 00:23:37 one person, and they manage your edge infrastructure, with all the other challenges that that encompasses beyond just managing those pieces of the stack. Well, essentially, the hyper-converged infrastructure at the edge is just a scaled-down hyper-converged infrastructure from the data center. We all talk about scale and scaling up, up, up, but scaling down is a very real thing, especially when we're dealing with smaller organizations. And getting into the whole cloud thing is going to be a whole different discussion, and possibly worth talking about on another On-Premise IT: is cloud a service or an abstraction? So if we treat cloud as an abstraction, this all dovetails at all levels, and it's worth a rethink.
Starting point is 00:24:31 Yeah, I think that that's where we need to go next, because that is the key question when it comes to edge, the question of cloud, and the question of Kubernetes, quite frankly. Because many, if not most, I guess, edge infrastructure solutions are adopting Kubernetes, but they're adopting it in an unusual way. They're adopting it more in terms of an application packaging and delivery system than in terms of an orchestration system. In fact, many of them are running it on virtual machines at the edge, or alongside virtual machines at the edge. Ned, I think you're our Kubernetes expert here. What do you think of the odd way that Kubernetes is being deployed there?
Starting point is 00:25:11 I think when you've become comfortable with a particular technology, then you'll tend to use it in a lot of different places that maybe it wasn't intended for. And Kubernetes is a great example of an application that was originally intended to scale way out, like Jody said. Like, it was a scale-out, single-location kind of deal. And suddenly, we had to figure out a way to maintain the same front-end API portion of it, but shrink down the back end to scale down to what you have at the edge. And so then you had things like,
Starting point is 00:25:43 first you had K3s, which was a way of taking pretty much the entire Kubernetes thing and getting it down to a single binary, and now you have k0s, which is an even smaller version of that. So the idea is, we really like the abstractions that Kubernetes provides, and we like to have a consistent API to talk to. But we don't need or want all of the bells and whistles that come with a full Kubernetes deployment, because that will crush whatever tiny hardware we're trying to deploy this on. So rather, what we've done is take that API, take those abstractions, and just put them in a much smaller package, so I can still have my application engineers writing their YAML files that deploy the application. And they don't need to know that this is being deployed to an edge location as opposed to being deployed to a cloud instance.
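To make that concrete, here is a minimal, generic Deployment manifest of the kind being described; the name and image are purely illustrative, not from any vendor at the event. Because K3s and k0s serve the standard Kubernetes API, the same YAML applies unchanged whether it is pointed at a full cloud cluster or a single-node edge install:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-app            # hypothetical application name
spec:
  replicas: 1               # scaled down for a single-node edge site
  selector:
    matchLabels:
      app: edge-app
  template:
    metadata:
      labels:
        app: edge-app
    spec:
      containers:
        - name: edge-app
          image: registry.example.com/edge-app:1.0  # placeholder image
          resources:
            limits:         # explicit limits matter on tiny hardware
              memory: 128Mi
              cpu: 250m
```

The application team writes this once; whether `kubectl apply -f` targets a test cluster in the cloud or a k0s node in a store is an operational detail.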
Starting point is 00:26:36 And so when I want to do my testing, I can test it on a Kubernetes cluster that I have running in AWS. And then when I want to actually do my production deployment, I can roll it out to my stores or whatever my edge location is. And the application code itself doesn't really change. So I want to push back on that a little bit, because I think it matters where they deploy it. I think they need to be thoughtful about that. And to be aware of the fact that yeah, you are going to be at an edge location and therefore efficiency maybe is more important than speed or functionality, which of course is always the
Starting point is 00:27:09 triad of pressures that have to be balanced there. I think Kubernetes, actually, or let's just say containerization in general, is a great solution for the edge because it allows you to share that operating system. The efficiency of not having 10 operating systems, but having one operating system with 10 different application stacks running independently from one another, is key. I think it needs to be there. But to Jody's point earlier, scaling down something like Kubernetes, and, Ned, what you mentioned about the different flavors of trying to make it more efficient, is what's necessary at that level. But I don't think that level of abstraction takes away from the necessity of making sure that we balance those containers slightly differently. Yeah, it also drives home the whole cloud-as-an-abstraction concept,
Starting point is 00:28:00 because if we are using Kubernetes APIs for what we have in the cloud, and we're using Kubernetes APIs for containers at the edge, suddenly, you know, whether it's at AWS or GCP or DigitalOcean or on our edge devices or whatever, it doesn't matter. We deploy it the same way. We can move it around. It's all the same. So it just kind of becomes even more cloudy, in a good way. Yeah, criticism definitely taken, Brian. I think I may have overstepped when I said that the application architect doesn't need to care and everything's just going to work, because that's never the case, right? Like, they do have to care about some of the details of where their application's eventually going to run, especially if they're doing things like making a call out to a database or a remote service that works great when it's running in your data center.
Starting point is 00:28:54 But when it has to reach across a tiny little T1 line or something, the latency kills the whole performance of the application. So having some understanding of what the limitations are of your application and where it's going to be running, yeah, I overstepped there. You do need to know at least some of the details there. But maybe the more important thing is that their operational flow doesn't have to change as much. They're still using a packaging format that they're familiar with. They can continue to use containers and YAML and services and all of those components that they've grown to know and love in the world of Kubernetes. And when they're actually writing the application, they still do need to know this is going to run at the edge. So here are your constraints and some potential issues that you're going to have to deal with, because we're running it in an isolated environment with low network bandwidth or something
Starting point is 00:29:50 along those lines. Yeah. And I think that there's so much more that we could talk about here, and that we have talked about this season on Utilizing Edge. But I think that one of the key aspects that those of you listening are probably noticing is, again, everything we talk about when we talk about edge is the same stuff that we talk about in the cloud and in the data center. We're still talking about compute. We're still talking about virtualization and HCI and containers. We're talking Kubernetes. We're talking SD-WAN. We're talking, you know, trusted platforms and zero touch and all of these things
Starting point is 00:30:25 that we have been talking about from the desktop to the cloud all these years. And we're talking about those at the edge as well. And I think that that's what makes this an exciting and interesting environment. So we do have to wrap here soon. Jody, Brian, Ned, final words here? I think we're gonna see
Starting point is 00:30:44 some really interesting stuff happening. Again, harping on my scaled-down portion of things: even with small resources at the edge, and in small business and medium business branches and the like, we're going to see a lot of technologies that were previously thought of as data center technologies finding their way into much, much smaller systems. I'm kind of excited about it. It gives a lot of flexibility for some of my customers. And on the other hand, it puts a little bit more on my plate, because now I have to learn stuff that I previously thought of as, oh, that's data center stuff. I don't do that. So yeah, it's going to be an interesting road ahead. Yeah, for that reason, I'm excited for it. You know, being able to take advantage
Starting point is 00:31:29 of all of the data center experience I've had, virtualization, HCI, Blades, you know, the early days of Converged, bringing those all together and then thinking about how to scale it down and keeping that mindset of every level of the stack matters. The less stack, the less layers in that stack you have, the better. Um, I am, I'm really looking forward to kind of rethinking about how we do data and applications, because that's really what it all comes down to in this mindset. And,
Starting point is 00:32:05 and really kind of, I don't know, I don't want to say turning the screws to, um, the, the presenters this week, but being able to, to think about them in those contexts and try to ask those questions with that mindset to help people really understand how, um, they can take their existing knowledge and apply it in the, in these ways. I'm very curious to see not just the hardware innovations. Those are always interesting, but what sort of new software abstractions percolate up out of Edge? Because I think, you know, the way we're doing Kubernetes now with K0S is feasible, but probably not the best way to do it. So I'm curious to see what other abstractions we come up with. One that I have my own is the way that WebAssembly has been growing in popularity
Starting point is 00:32:52 and seems like it would be a pretty good fit for these edge applications because of how small it is, in terms of its runtime footprint and just general size. So I'm curious to see if any of the presenters this week will be talking about WebAssembly or something similar to it, technology-wise. Yeah, we've been watching WebAssembly as a key technology here too, Ned. And there is some really cool stuff going on there. We did approach some of those companies for Edge Field Day, but hopefully we'll see them. I don't know, maybe we'll see them at Edge Field Day 3. We'll see. But I love what you're saying, that of course it goes both ways. And technologies are not just percolating from the data center into the edge or from the cloud into the edge.
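[Editor's note: as a concrete aside, the "familiar packaging, edge-sized constraints" point made earlier in the discussion can be sketched as a minimal Kubernetes manifest. This is a hypothetical illustration, not anything presented in the episode; the application name, image, and resource figures are all invented for the example.]

```yaml
# Hypothetical edge deployment sketch -- names, image, and limits are illustrative only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-sensor-app            # assumed application name
spec:
  replicas: 1                      # single replica: edge sites rarely have spare nodes
  selector:
    matchLabels:
      app: edge-sensor-app
  template:
    metadata:
      labels:
        app: edge-sensor-app
    spec:
      containers:
        - name: app
          image: registry.example.com/edge-sensor-app:1.0  # assumed registry/image
          resources:
            requests:
              cpu: "100m"          # sized down for constrained edge hardware
              memory: "128Mi"
            limits:
              cpu: "250m"
              memory: "256Mi"
```

The packaging format is exactly the one developers already know from the data center and cloud; what changes at the edge is the sizing and the awareness of constrained, sometimes isolated, environments.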
Starting point is 00:33:34 They're percolating in the other direction as well. And to be honest with you, I can see the way that Kubernetes is being used at the Edge as being sort of a model for how it might be used in the data center in the future, for example. And I can see the same with some of the approaches to networking at the edge, as we heard at Edge Field Day 1. I could see some of that stuff coming into data center networking
Starting point is 00:33:56 or even client networking, maybe cloud. I don't know, we'll see. And I think that it's going to be a really interesting space. We're just at the beginning of this. So thank you all so much for joining us this week for this special crossover episode of Utilizing Tech and the On-Premise IT podcast. If you've enjoyed this conversation, please do subscribe to both of those podcasts. I promise we're not going to do a ton of these crossover episodes; this was just a special one, to give you a flavor of what we talk about on the other side. You'll find both in your favorite podcast application. Before we go, though, where can we connect with you and continue this conversation? Ned?
Starting point is 00:34:34 Best way to find me is on LinkedIn, Ned Belevance, or you can go to my website, nedinthecloud.com. Yeah, you can find me on my website at knut.net, K-N-U-D-T.net. Social media is usually bknutson and, of course, LinkedIn. I'm on Mastodon at ghostinthenet at hackyderm.io. Still haunting Twitter at ghostinthenet. And you can find me on LinkedIn as well. Happy to carry on this conversation with anyone who wants to. And as for me, you'll find me at sbasket on most social medias,
Starting point is 00:35:04 as well as the host of Utilizing Tech and sometimes On-Premise. And of course, you'll find us every week with the Gestalt IT News Rundown, which you can find in podcasts, as well as on YouTube at Gestalt IT Video. If you enjoyed this discussion, please do subscribe to our shows. Subscribe to both Utilizing Edge, which is going to be called Utilizing Tech after this because we're going to be moving on to a different area of tech, as well as the On-Premise IT podcast. As I said, you'll find those in your favorite podcast application
Starting point is 00:35:33 as well as on YouTube. Also, consider giving us a rating or review. That really, really helps. It's nice to hear from our audience. And, you know, do share this podcast with your friends, by your favorite means, on your favorite social media networks. You can follow Utilizing Tech on social media at Utilizing Tech on X/Twitter, as well as on Mastodon. You'll also find a dedicated site, UtilizingTech.com, for that podcast. As for On-Premise IT, this podcast,
Starting point is 00:36:07 you will find at gestaltit.com slash podcast. If you're looking for previous episodes, or of course, you'll find those as well on YouTube and in your podcast player. And you can connect with us at Gestalt IT on most social media networks. Thanks so much for joining us. It's been a lot of fun. Tune in for the Edge Field Day event. You'll find video of that at techfieldday.com as well as on YouTube at Tech Field Day. And thanks for listening. We'll see you next week.
