Utilizing Tech - Season 5: Utilizing Edge - 05x14. Considering the Diversity of Edge Solutions with Brian Chambers and Alastair Cooke

Episode Date: July 31, 2023

Edge computing comes in many forms and brings many challenges, including bandwidth limitations, network reliability issues, and limited space. This episode of Utilizing Edge brings Brian Chambers, Alastair Cooke, and Stephen Foskett together to discuss the state of the edge in 2023. Industries like retail, multi-tenant environments, and industrial IoT find practical applications, but defining the edge remains an ongoing exploration. Implementation varies, from repurposing existing technologies to adopting modern approaches like containers and functions as a service. The debate between virtual machines and containers continues, driven by organizational comfort. Despite constraints, edge environments offer greater control and accountability. The future promises more innovation and adoption, cementing edge computing's significance in the tech landscape.

Hosts:
Stephen Foskett: https://www.twitter.com/SFoskett
Alastair Cooke: https://www.twitter.com/DemitasseNZ
Brian Chambers: https://www.twitter.com/BriChamb

Follow Gestalt IT and Utilizing Tech:
Website: https://www.GestaltIT.com/
Utilizing Tech: https://www.UtilizingTech.com/
Twitter: https://www.twitter.com/GestaltIT
Twitter: https://www.twitter.com/UtilizingTech
LinkedIn: https://www.linkedin.com/company/Gestalt-IT

Tags: #UtilizingEdge, #EdgeDiversity, #EdgeComputing, #EdgeSolutions, @UtilizingTech, @SFoskett, @DemitasseNZ, @BriChamb, @GestaltIT

Transcript
Starting point is 00:00:00 Welcome to Utilizing Tech, the podcast about emerging technology from Gestalt IT. This season of Utilizing Tech focuses on edge computing, which demands a new approach to compute, storage, networking, and more. I'm your host, Stephen Foskett, organizer of Tech Field Day and publisher of Gestalt IT. Joining me today for this special mid-season check-in, Alastair Cooke and Brian Chambers. Welcome to the show, Alastair. Thanks, Stephen. It's a joy to be here. It's been a while since I've been on one of the episodes, since I'm not the only co-host on the show. Nice to be here with Brian as well. Yeah, Brian, it's nice to see you again. Yeah, good to see you guys. Alastair, it's been a
Starting point is 00:00:42 while. Great to be back and catching up mid-season to talk about some of the things that we've learned so far. So I'm looking forward to it. Yeah, and I think that that's kind of where we're going to go today. You know, we've recorded a few episodes. We've talked to many of the companies that we saw at Edge Field Day earlier this year. And, if you're paying close attention, we've talked to a bunch of the companies we probably will see at Edge Field Day later this year. And we've talked to a bunch of other folks from the community who are deeply interested in this, including a few
Starting point is 00:01:17 episodes recently featuring our friends, Alison Klein, Gina Rosenthal, Andrew Green, Roy Chua. And all of these episodes have really sort of, well, we've hit on a lot of different topics in there. I guess, let me just throw this to you guys first. I don't know, Brian, what's the big takeaway that you've had so far from this season of utilizing? Yeah, that's a great question. Probably the thing that has come up the most times and that resonates the most with me is we've spent a lot of time talking about the constraints that exist at the edge. There are many, right? They're bandwidth related, they're network reliability related, they're bandwidth related, their network reliability related, their footprint related, environmental related, etc, etc. In spite of those constraints, I guess the fact that there are still a lot of people who are charging forward into this kind of frontier and who see a lot of
Starting point is 00:02:14 value in building edge solutions. And, and in the process of doing so, it's been really interesting to see where the paradigms that we're used to in the cloud or even data center worlds in the past have been similar or sometimes identical, even the same technologies existing in both places. And in some cases where people are taking ground up approaches and doing something completely new and different using new and emerging things. So it's been interesting to see, I guess, that kind of juxtaposition of the old and the new coming together at the edge. I always think it's interesting to hear how the things we've done in the past fit together to lead to what we're doing now and in the future. So this sort of anthropology or the archaeology of the technology, seeing that reuse of the technologies we had on premisespremises and then in the cloud, seeing how you do something similar but not quite exactly the same at the edge. And I think one of the things I've taken from this is the series of episodes and the diversity of
Starting point is 00:03:16 conversations and vendors and expertise that we've had is that the edge is a very diverse thing. It isn't a single characteristic. There isn't a single defining characteristic for it for all of the use cases. So the edge that Brian has is very much focused on retail and very constrained environments. Other times the edge really means getting close to your customers in a multi-tenant environment. And we see lots of variation in that. I'm always intrigued by seeing the solutions that people come up with, the ways people use technologies that weren't the way most people are using the technologies or potentially the way the vendor thought that the technology was going to be used.
Starting point is 00:03:57 I hope we get to see some more of that as we carry on through the series. Yeah, it's been interesting because as we predicted in episode zero of this season, you know, what is the edge is probably one of the big questions that is going to come up. Well, guess what? It came up. It came up a lot. You know, we talked about, I think, the retail and restaurant edge a lot. Thanks, Brian. We've also talked a lot about other things. We've gone into networking. We had an episode about the near edge and the network edge, as well say, Al, kind of bringing, you know, basically the tear down and build from scratch versus trying to have continuity. It's a big picture. It's a big place, right? Yeah, there's a lot of diversity in the approaches customers take.
Starting point is 00:05:02 I've said before the dirty secret of the edge is that often the thing that you're running at the edge is an old Windows machine, and that that drives the technologies and the things that you're actually going to deploy to those physical edge locations. But that's not always the case. Sometimes you have the joy of building from scratch a whole new application that's going to fit to how you're doing business and sometimes you have to do that because of what the edge means some of the the edge use cases are are a truck uh driving around with some logistics components attached inside that truck or unlikely to be running a whole bunch of windows servers or as i saw at one retail environment a long time ago os2 warp servers 10 years after that product got discontinued there is just this long
Starting point is 00:05:46 tale of tech that affects a lot of people but there are also people who are bravely striking out and abandoning all of the sunk investment and saying i need to build it from you i need to build using containers or i haven't yet seen people running functions as a service at the edge at least not on a platform they built themselves that That is something that the AWS Snow family will do for you. So there is still that huge diversity of how people are approaching building applications for the edge. I definitely agree with that. I think one of the things that would be a key takeaway, as we've talked to a lot of people and reflected on a lot of these solutions is, I mean, we have our edge environment in restaurants, but you don't really want to do that unless you absolutely have to. You want to be putting all of your workloads in the places
Starting point is 00:06:35 where you have the fewest concerns. And the edge is really kind of one of those places where you probably have the most concerns, the most things you have to be responsible for to manage to deal with. So it's kind of like a suboptimal solution from an overhead and a management perspective, but sometimes justified by the potential business value it brings. But I think there's a lot of cases that are emerging where people are doing things at the edge that are like your Cloudflare workers type edge or your CDN pops, your different solutions like that. And actually maybe moving some of the compute, some of the business logic, some of the data storage out of just like
Starting point is 00:07:08 a data center region or a cloud region into some of these like closer to the user locations and actually doing things there. But I do agree, like I don't hear people doing that on platforms they've rolled themselves. They're doing it on top of either the hyperscale cloud providers or the other major solutions like Cloudflare and such that I mentioned. So that's been interesting to observe as well. Yeah.
Starting point is 00:07:29 And that's especially true in networking and edge network applications. I think that there's definitely a lot of that stuff happening sort of like we talked about in the near edge. In other words, at the edge, sure, but not at the far edge, not on premises, but, you know, at the edge of the network and at network points of presence and so on. And as you say, I think that it's possible that we could see more and more applications inhabiting those sort of remote data centers, you know, the Equinix kind of model, and as well as the Cloudflare model, which, I mean, where does Cloudflare live? I don't know, everywhere. You know, and it's,
Starting point is 00:08:12 and that's the power of it, right? It's that it's, it does live everywhere. And Al, you know, to the point about function as a service, sure. I mean, you know, you think about Cloudflare workers and things like that. Very, very powerful stuff that doesn't exist and certainly doesn't exist in the data center. I think you could classify that as edge, right? Would you? I would definitely consider functions running very close to users as edge and Cloudflare. Like to say that they're within 10 milliseconds of 90-something percent of the population, that number changes periodically with them. But they are close to your users,
Starting point is 00:08:51 which is, I think, one of the most fundamental definitions of Edge. So yeah, Cloudflare is an interesting Edge. And it gets away from some of the sort of constraints that concern Brian, that the compute resources sitting in a trusted, controlled location rather than sitting out in a near public location like a quick service restaurant. And so you can see that there's some huge benefits to being in those more secure, more regulated, more easy to automate locations, places that are much more cookie cutter deployment than even you can get into a quick service restaurant.
Starting point is 00:09:26 Because Brian, I don't imagine every single one of the restaurants has exactly the same services and tin in it and that you can deploy exactly the same applications to every location that you have. But there's this sort of idea of these near edge platforms such as Cloudflare or running things Lambda at the Edge. I'm somewhat AWS biased because I teach their training courses, but also things like the AWS Wavelength Zones where they're putting the AWS services inside cell points of presence for 5G networks.
Starting point is 00:10:00 These are all elements of Edge where the constraints of being on premises are not causing you so much grief, yet you're getting nice and close to your users. You know, Al, you bring up a really good point that I love what you said, that the far edge is probably the worst place. I don't know, I'm paraphrasing you, but the worst place to be running this stuff, because so many of the conversations we've had at Field Day and here on the show have been trying to overcome those worst place problems like security, like reliability, like cost, like bandwidth and interrupted communications and power and zero trust provisioning and all these things that we're, if you kind of look at it from that perspective, the reason we're doing these things is because it just is such a bad place to be running this hardware. And yet, that's where we kind of have to run it if we have to solve for some of those things. And so it's almost like a self-fulfilling prophecy, right? I mean, if we have to stay up, even if connectivity goes down,
Starting point is 00:11:08 and if connectivity might go down, ergo, we have to run it at the far edge, right? Is that the kind of thought process that you think that drives this stuff to the far edge, even if that's not really the best place to be running it? Yeah, I think that's exactly right, Stephen. So in our case, for example, which I can speak best about, it's been about business continuity and making sure that we can run the things that are critical at any time. And so that's what drove us to do that. Again, I'll say it over and over again, we didn't really want to have to do that. It was just something that we needed to do based on the types of solutions that we wanted to deploy.
Starting point is 00:11:43 The more and more we can take advantage of the CDN points of presence or the AWS wavelengths or those types of services or the cloud itself, the better. So that's our default and that's where we want to run things if at all possible. But there are cases for us and we've heard a lot of stories from others, whether it was here or Edge Field Day. I always think of the remote oil field use case where things need to run, but there may just be no connectivity. So these great solutions that are really close to most users, these are some of the users that they're not close to and they have no options. So they have to have some sort of on-prem solution to sustain their business. And so they're investing in these things to make that happen. So I think that's what drives people there. Not because it's the best place to run a workload
Starting point is 00:12:30 or it's the most fun thing to do, or all the problems are solved and it's just turnkey. I think you're driven there by business need, and then you have to solve a bunch of problems to make that work. And that's why we see a lot of these solutions emerging. So I would have maybe thought it was a niche problem a little bit, but it seems like we've seen a lot of cases where, you know, commercial vendors are building solutions for these reasons, you know, and then companies are adopting them because they feel this sense of need to not be entirely connectivity dependent for their solutions.
Starting point is 00:13:03 So that's kind of what I'm seeing overall. I think it is absolutely a niche product that there's a series of specific things that drive you to requiring an edge solution and then not all the same for all customers, but that there are so many of these different niches, you know, ships at sea or public access kiosks photo printing there's a whole collection than that require a paradigm that's not well served by the the cloud plus on premises and that's what's driving vendors into seeing an opportunity to build a platform that will work across multiple of these different use cases these nations and make it easier to actually deliver it to the relatively hostile location of the fire edge or to
Starting point is 00:13:49 deliver services and more near each locations that it can lower the latency if your users don't all come to one place our fire edge tends to be predicated on your users come to a relatively small number of locations that by the the food at the restaurant they don't buy the food at the restaurant. They don't buy the food as they're walking down a country lane. Whereas the near-edge solutions tend to be your users are all over the place and you need to deliver a service close to them. And so that seems to be one of the sort of separations of use cases.
Starting point is 00:14:21 And we see platforms that are really well suited to FireEdge and quite different platforms, are really well suited to FireEdge and quite different platforms, much more data center-like and cloud-like platforms that are being delivered into NearEdge. And spoiler alert for a future episode here, another use case that we're going to be talking about is media and entertainment, specifically, you know, production, and the fact that there, it's not about business continuity, as much as it is just about throughput. And how do you get data from point A to point B. And there's a bunch of different solutions for that, you know, bulk data.
Starting point is 00:15:02 I've got a couple of companies that we've been talking to that are doing some very, very cool things that are applicable for media, for oil and gas exploration, for, you know, you mentioned ships at sea, autonomous driving, things like that, all sorts of really cool use cases where, you know, yeah, it's got to be at the edge and connectivity is important, but it really is kind of a throughput question more than anything. So turn the page and talk about that because that's come up again and again in our conversations to the point that when we talked to a company like VMware, I think that the conclusion was you got to be able to run VMs at the edge
Starting point is 00:16:00 because you just can't start from scratch. Is that true? I guess first, are VMs inherently linked to this adaptive mode versus containers and cloud functions that might be more starting from scratch? Is that logical? Is that true? Is that necessary? I don't know. Al, you brought it up, so you want to dive in there? So on recent, well, last year's briefings and writings, I upset VMware by describing building things in virtual machines as being a legacy approach. And there's a negative connotation to legacy,
Starting point is 00:16:39 but there's also a very positive connotation to legacy. It's what we normally refer to as production. For the majority of organizations, production applications run in virtual machines whether they're on premises or in the cloud. And so there's a huge sunk investment of quantifying how your business operates written into these Windows applications. And one of the hardest things when you have to move to a new platform is if you have to rewrite those applications. And whenever you're moving to something like functions as a service or using fully rich services from a cloud, you can only do this by rewriting your code. Containers seems to have been the last time we were just able to repackage our code and ship it as it was.
Starting point is 00:17:20 So I think VMware is right. There are a huge number of locations where the business value is delivered by sitting on top of the existing business value being delivered by virtual machine-based applications. But if there isn't a virtual machine-based application that will deliver the business value, I don't think many organizations are starting from scratch and writing that business application as virtual machines. They're writing containers and functions. They're writing smaller units of code rather than having big monolithic code bases. So it's the innovator's dilemma. If you've sunk a lot of effort into getting a lot of value out of one piece of technology, switching to a different piece of technology is going to remove all of that value and that differentiation in market. So we definitely see this continuing to use virtual machines
Starting point is 00:18:10 where you've already got virtual machines, but starting from the beginning, you possibly aren't gonna be using virtual machines. Yeah, I tend to agree. I think it will depend a little bit on the customer and the solution. So if people are doing a completely greenfield approach, if they have the luxury of doing that, I think they're probably more inclined to go the route of something like containers or something even smaller, WASM, intending like functions or things like that. But I think the reality is that the majority of organizations are probably not finding themselves in those situations.
Starting point is 00:18:46 They're probably starting with something. And what we're calling edge might even be a modernization effort in a lot of places, you know, to try and take something like really, really legacy and bring it to something, you know, that gets some of the benefits of the cloud. But maybe not all of the ones that could potentially be achieved, maybe not going all the way to containers. So I do think companies that already have a virtual machine, you know, like strength internally, and then maybe already have virtual machines running in some sort of edge location are probably going to keep using virtual machines in a lot of cases. And I think that's probably perfectly fine. You know, assuming the ability to manage them in those places is there.
Starting point is 00:19:29 But I think Greenfield, people are probably not going to go that route quite as much because there are some benefits to containers. It's the way people are used to developing in the cloud. So I think that's going to be what the Greenfield ones do. But I think there's going to be a hybrid of both and probably some new emerging things over the next several years. And again, all of this is contingent on what do we mean when
Starting point is 00:19:48 we say edge do we mean like the far edge and like a restaurant location or do we mean like you know a cloudflare worker which is already super modern and can kind of take you know a function and just run it um you're not going to see vms there i don't think but you probably will see a lot of them in like the far distant edge locations for some time to come. I think there's a really important aspect that you touched on in there is the organizational comfort that adopting new technologies is not always natural to an organization. And so often continuing to use the tools that the teams and the developers and the management structures and all the things we're used to working with is much easier for an organization and sometimes that
Starting point is 00:20:32 that even drives down to a financial thing of virtual machines tend to run on platforms where we buy them and run them for three five seven years more transient workloads may run on platforms that you pay per execution, per runtime, rather than for multiple years. And so, again, that difference between RPX and CapEx-oriented is often significant. Lots of non-technical requirements, lots more that are more about how the organization operates and its comfort. Yeah. And then, I mean, let's not forget that these are not mutually exclusive paradigms. You can run, you know, K3s with containers inside of a virtual machine.
Starting point is 00:21:11 But they can coexist as well. So I would expect you to see a lot of people who take that hybrid approach of maybe they keep running VMs at the foundation so that they can do VM-based apps, but then they can also start to introduce, you know, new paradigms if they make business sense and if they need them to bring containers and other types of things along. I think the tipping point
Starting point is 00:21:30 is going to be when you see vendors bring solutions that people want to buy off the shelf and put in edge locations that actually assume a container is the unit of entry as opposed to a virtual machine. And I haven't seen that yet. I don't know if you guys have, but I haven't seen anything like that tip of the scales. I have, I've seen that with things like HiSell where their standard deployment is a Kubernetes cluster. And so they're expecting you to run just containers and the standard saying, right?
Starting point is 00:22:01 The future's here, it's just not evenly distributed. Some people are running all containers everywhere. And so there's platforms that don't support virtual machines that don't care about. But I think the majority of organizations have a much longer legacy. And so they are still getting value out of those legacy platforms that they built
Starting point is 00:22:22 using Windows tools, sometimes Linux tools, in virtual machines in the past. One of the things that came up actually on the last episode when we were talking about dark data is, well, this interesting paradox for me when it comes to edge that, in a way, edge environments are naturally somewhat better controlled than data center or cloud environments, simply because they are the domain of the business, because they have to be like, like, kind of connecting the dots here. So let's see if this makes any sense to you. So, you know,
Starting point is 00:23:00 if edge is really not the best place to deploy things, I would personally say that, you know, probably cloud would be the best place, data center. You know, Edge is really not where you want to deploy things. The only reason you're going to do it is because you have to. You have to because of the needs of the business. It's not going to be like IT is going to be like, you know, oh, man, you know where we should run this. You know, no, they're going to be like, oh man, you know where we should run this? No, they're going to absolutely not. But the business is going to say, look, either the volume of data is too great, or our needs for business continuity and high availability are too great, or our users are too distributed, or whatever. There's a reason, there's a compelling reason to deploy things here,
Starting point is 00:23:40 so we've got to do it. What that means, and this is kind of, like I said, this came up during the dark data discussion, but it's also come up all season long, is this idea that it's not a free for all. It's not like a Wild West. It's not like your desktop where people could be running literally anything at any time. And it's completely unconstrained. It's, you know, the only things that are going to be running there are things that need to run there, which means that there is greater control or at least greater accountability
Starting point is 00:24:09 in terms of, you know, this line of business said that we need to run this application here. This third party provider said they need to run this application here. You know, we have decided we need to collect this here. And for, because of that, it actually is a much more controllable and predictable environment, which means that we actually could have more greenfield in the edge than we do in other areas, simply because it's really only going to be three or four or five business units, three or four or five applications. It's what they need to run. I don't know. Does this make any sense to you guys at all? I think so.
Starting point is 00:24:47 And I think it comes through from what we saw at Edge Field day one with mega networks, that they were very strong on the whole. Everything is about governance. Everything is about control. Everything is about security. And whilst they didn't yet extend out
Starting point is 00:25:03 into the compute layer, that was simply making sure the network layer was very tied down. That approach that this is a scary, difficult place, so we must be very careful about it. We can't be complacent. We must look at all of the details. We must be sure that we're delivering business value because this is relatively a high risk activity. I think there is something in that, that this is because it's such a hostile place and constrained
Starting point is 00:25:26 place, we give it more attention. Whereas if I'm just deploying new virtual machines on my data center virtualization platform, I'm not too concerned about it. I'm not going to think so deeply about it. I'm just going to casually deploy a new virtual machine, which is what I'm going to do this morning once I finish here. Yeah, I think it probably forces a greater degree of management than what you're forced to do in the cloud.
Starting point is 00:25:49 Maybe as we talk about far edge at a remote location and in cloud, you don't even have to care about how big your container image is in the cloud. It's 17 gigabytes. Who cares? In a far edge scenario, I mean, 17 gigs should really scare you because if you're talking about bad connections, maybe that are slow, like might take you weeks to actually successfully download that thing, if ever, to be able to run it and meet the business needs. So I think you govern things that maybe you can take for granted or even be sloppy on in the cloud. You got to be very precise about when you do them in an edge environment. And then, of course, the things you guys mentioned about security and it should probably scare people a little bit that they potentially have their business data sitting in a very remote location or in an office somewhere that it could be taken.
Starting point is 00:26:40 And there's no physical security to stop that, those kinds of things that hopefully drives a lot of thinking about, you know, better encryption solutions and, you know, things that maybe you get parity with in a data center or cloud environment, you know, through some other means. But it forces you to maybe think about that stuff differently, which maybe can actually make it a, you know, a better managed environment, even though I don't think that necessarily gives you a different outcome. I think what we're saying is actually it makes you do more work and pay more attention to it to get the same benefits that you get most of the time from the cloud, but you have to do it. So, Brian, while you're getting scared about things, are you scared about Intel deciding they're not going to build any more nooks and that they're going to hand NUCs over to other vendors?
Starting point is 00:27:27 That was a day of great sorrow for the Intel NUC. Very sad. Yeah, not super concerned. It looks like that's become a little bit of an open spec, for lack of a better term. I know there's been a lot of news about Asus taking over and delivering those. So I think the, the Nook is going to live on, you know, which is great. Cause I think a lot of people have liked them for you know, solutions like ours that are super lightweight or just prototyping and
Starting point is 00:27:59 things like that. So I'm glad they're going to live on. I'm sad it won't be the Intel Nook anymore, but not really scared about it. Glad to see that there's going to be some continuity and those are going to be available elsewhere just from a business continuity perspective, for sure. But yeah, I think it'll be interesting to see how that evolves and how it goes forward. What about you? Well, I've had collections of nooks in the past and have moved towards things that are a little more powerful in my shed quarters at home. But, yeah, the nooks have always been interesting. I do still have one sitting in the bottom of a rack
Starting point is 00:28:42 in the shed quarters. But what I've seen is a proliferation of similar designs, so it's not that Intel needed to necessarily be the leader in producing these things. We've seen lots of designs coming out of the minor vendors that are more compact and providing a lot of compute resource in a relatively small space. I've definitely seen a resurgence in the idea of using these small volume PCs for hobbyist kind of applications where Raspberry Pis got short. So, yeah, I'm not concerned that Intel is stepping out of this market as a problem.
Starting point is 00:29:21 They've defined a market and now the rest of the market is going to carry on producing it. Yeah, I would second that, Alistair. I think that Intel's first contribution was to show that a small form factor device made sense. And it certainly does. You know, I mean, the greatest thing about the NUC
Starting point is 00:29:42 is, you know, it's small, it's low power, it's cheap, and it's a real PC that runs everything you need to run. I mean, fantastic. The second contribution, though, and I think that this should not be overlooked, and they deserve a big pat on the back for it, is the continuity of support and sales. The best thing about the NUC in my mind is that it's not some, you know, weird company nobody's ever heard of that might go out of business or might drop the product weight. Anyway, no, it's that it, you know, the thing has been supported. They've got BIOS updates. They've got patches. It doesn't run a bunch of bloatware and weirdness. Basically, it is a really solid, supported, useful platform. And Intel has never been shy about sharing
Starting point is 00:30:34 technical details, specifications, drivers, updates, etc. I mean, it's all widely available, it's all easy to access. And for me as a user, that's the thing that I loved about this platform. But I think that the coolest thing, though, is that Intel has showed the world that there's a market for these things. And so they kind of don't need to do it anymore because essentially the whole point of the business unit, in my mind, was to show the world that you could make a product like this. And then the world made a product like that. And so I think they've handled it very professionally in terms of saying, okay, we're not going to be making these anymore. You know, here's the design. You know, Asus, you can, you know, kind of take on the support for this. You know, other companies, you're free to make your own designs like this.
Starting point is 00:31:22 You know, we're going to continue to support what we support. And hopefully those other companies have gotten the message that it's not just about making a mini PC. It's about making a mini PC that is industry standard and well supported. And that, I think, is the killer thing. I mean, Brian, I think, is that what you were attracted to by the platform? Yeah, I think you nailed it. And I mean, like generation to generation, there's like a high degree of consistency too. It's not some radically whimsically different thing or form factor. There's not, like you said, not a lot of bloat,
Starting point is 00:31:53 not a lot of extra stuff you just have to deal with. It's just kind of, I mean, actually I think it's simplicity is the key. It's about as simple as you can get. And I think that's what people love because I mean, we have our use case for it, but I hear tons of nooks behind digital menu boards and kiosks and different things like this. They just need to run an application and do it reliably. And I mean, I think they've been phenomenal at doing that. So yeah, I think you hit exactly the reason that it's been attractive to enterprises, even though it's sort of a, you know, more or less feels like a consumer grade device. It's been attractive to organizations because it gives them something they can build on top of easily and they get the results they
Starting point is 00:32:34 expect in terms of the tech and the organization behind it and the way that they've showed up. So I think that's been awesome and hope that continues. Yeah, I really hope it continues. I hope I'm not off base in saying that companies have learned that lesson. I'm a little worried that I am because some of the big vendors of products out there, there's a lot of abandonware out there where we're not going to produce a BIOS update or there hasn't been an update for this thing in 10 years or whatever. I don't know. It hasn't been out that long.
Starting point is 00:33:01 You know what I mean? I hope that the vendors are hearing that that people need consistency and they need supportability and they need to not hide hide things behind paywalls and so on. But we'll see. I guess the cat's out of the bag on that one. Yeah. The other thing, of course, that I loved about it is that they used industry standard components. You know, they were very, very strong proponents of that. So, you know, in terms of expandability, but also in terms of, you know, what network chip is this and, you know, things like that. It just it was the standard stuff that works. So looking forward to the second half of the season, looking forward to another Edge Field Day event, and of course, looking forward to a future without the NUC. What are you looking at for the second half of 2023, Brian?
Starting point is 00:33:50 Yeah, I'm really interested to see, we all hear about AI every day, especially LLMs and such. I'm really interested to see what we learn about AI use cases at the Edge. Like, are they real? Are they really coming? Are they really about AI use cases at the edge. Like, are they real? Are they really coming? Are they really just cloud use cases
Starting point is 00:34:08 that a user uses somewhere? Or are there real ones that maybe move out from cloud data centers into less resource environments, whether that's sort of midterm edges or like far edge solutions like what we have at Chick-fil-A. So I'm curious to see that. I hear a lot about it. Drivers like computer vision,
Starting point is 00:34:25 voice as a new interface for people to interact with applications. And again, we go right back to the same questions of you could do it in the cloud, but the continuity question comes up. Are these things that are customer facing and in the critical flow? Do you have a good network
Starting point is 00:34:40 and other things you can depend on or do you end up running that locally to try and make sure you deliver the best possible experience to customers and such? So I'm really curious about that. Another completely different realm for the same thing would be the whole self-driving car world. There's compute there and it's living at a disconnected or semi-connected edge in a lot of cases. So curious to see a lot of those things and how those continue to develop. So I think that whole AI thing, is it hyper? Is it real? That'll be fun to dig into more and talk more about. Yeah, seeing some real business value being delivered with AI at the edge would be something that I'm not sure that we're confident is there yet. And it would be nice to
Starting point is 00:35:21 hear from some customers and people who are actually receiving that business value. I'm also interested in what technologies do you need in order to get that business value because we know things like the driverless cars use dedicated hardware in order to deliver the AI using things like the Jetson Nano from NVIDIA and I'm interested to see just whether that means that all of the AI components are going to be on their own special separate platform or whether there is going to be a more general purpose platform that can deliver that AI at the edge. I'm always interested in strange ways that customers have used products to solve their business problems because
Starting point is 00:36:00 it all has to tie back to that business problem as we were talking about before. You only run your applications out at the edge because it solves business problems, it brings business value. I'm interested in seeing how that business value can be delivered in unusual and innovative ways. What I hope we'll see over the progression of the series is also some looking at the interface between IoT and Edge, the autonomous devices that are generating data and generating insights, or at least that we can generate insights further from, things like cameras that are doing AI analytics,
Starting point is 00:36:35 face recognition actually inside the camera, and then feeding back into a more central system because, of course, you don't just have one camera. You might have 30 cameras at a site. And so there needs to be a coordinating intelligence as well as the individual intelligence on the cameras that kind of integrated system of a collection of IOT devices plus some edge devices possibly even we can throw into the mix some near edge to make it easier to get out to your fire edge that
Starting point is 00:37:04 integrated solution, that platform for building your future, I think is an interesting part of where edge is going and where we're seeing vendors building. Yeah, I would agree with you all on those. I definitely am interested in seeing where these things go. Another thing I want to point out that I'll be watching, especially short term, is the level
Starting point is 00:37:26 of interest from hyperscalers and service providers, network service providers in this market. I'm talking more and more to the cloud and network companies that you know, the sort of familiar names in the industry who are actively, aggressively rolling out products targeting edge environments, targeting exactly what you all have talked about here. We, in the second half of the year, are going to see conferences from Google, from Amazon Web Services. I guess I don't need to show of hands how many people think that AWS is going to introduce more edgy stuff at reInvent this year. I think we can all agree they probably are. They're aware of this market. I also think that they're, I don't want to say scared, but they realize that they
Starting point is 00:38:18 need to have a big play in this market too, because it is sort of exploding around them. There's a lot of, you know, the competition from the network service providers is huge, along with the traditional service providers and telco companies is huge. And the competition from new ideas at the far edge is huge. And I think that Amazon especially realizes that they need to play there, as does Microsoft and Google. So I definitely think that that's something that we're going to see a lot more of in the second half of this year and going forward as well. And of course, I'll be keeping an eye on what happens next with the post-NUC world. You know, I mean, certainly people are talking about ARM and and RISC-V, along with x86 platforms from companies, you know, Dell, HPE, Lenovo, Supermicro. So many companies are doing so many cool things with the hardware as well.
Starting point is 00:39:17 So, I mean, on the one hand, you've got hyperscalers. On the other hand, you got hardware. I think that's what makes this thing interesting. And of course, you know, self-driving cars and NVIDIA Jetsons and all sorts of crazy things. So really an interesting space. I really appreciate y'all joining me on the podcast this season at Edge Field Day earlier this year and hopefully later this year. And just generally publishing and speaking about this really interesting world of IT. So thank you both for joining us. As we wrap this up, tell us where can we connect with you?
Starting point is 00:39:58 Also, what are you interested in? And what are your key topics for the second half of the year? Alistair, why don't you go first? Sure. You can find me online as demitas.co.nz. And I'll be at VMware Explore next month. Anyway, that is coming up awfully fast. And looking forward to catching up with people in the community there.
Starting point is 00:40:23 My interest continues to be around how people solve problems at the edge and I have some thoughts about that that I really need to get out and get published. It's been a little while since I've published anything on the blog, so I'll take an action out of this, Steven, to get some more things written. How about you, Brian? What's new in your world? Yeah, well, where folks can find me, first of all, Brian Chambers on LinkedIn. I think you can find me there on Twitter as well, B-R-I-C-H-A-M-B. So if you want to find me there or still writing, my Chamber of Tech Secrets sub stack, which has actually really been a lot of fun. Been great to challenge myself to come up with something to talk about once a week and then try and write something thoughtful. So that's been
Starting point is 00:41:10 good. That's at bryanchambers.substack.com. So yeah, I'm interested in a lot of things as it relates to the edge. I think it's very similar to what we talked about. I'm really interested in the problem of observing the the area around you um so think about for us in a restaurant how do you really know operationally what all is happening and there's a bunch of interesting things there with computer vision or lidar or other tech like that and uh really interested in exploring that problem more and seeing how does edge support that or is it less needed? Can we do things without as big of a footprint
Starting point is 00:41:47 even as we currently have? So I think there's some interesting things to think about there as it relates to the Edge through the second half of the year. So that's what I'm interested in right now. Yeah, thanks. And Alistair, I'll be at VMware Explorer as well. I'm also going to some other things,
Starting point is 00:42:02 some storage conferences and Flash Memory Summit, Storage Developer Conference. I'll probably be at reInvent. And I'm going to be using this all to continue to learn and grow. You can find me online at sfoskit. I'm pretty active on Mastodon right now. So look that up if you want. That's been my chosen post Twitter slash X platform, along with, of course, LinkedIn.
Starting point is 00:42:30 And I would love to catch people there. So thank you very much for listening to Utilizing Edge, which is part of the Utilizing Tech podcast series. If you enjoyed this discussion, we would love to hear from you. Please reach out to us on your favorite social media site. Oops, sorry about that. Yeah, I'm still on the Twitters, LinkedIn, whatever, email,
Starting point is 00:42:51 carrier pigeon, whatever you got. We'd love to hear from you. Also, please do give us a rating, give us a review, subscribe in your favorite channel. You can also find us as well on YouTube, where we would love to hear from you there too. This podcast is brought to you by gestaltit.com, your home for IT coverage from across the enterprise.
Starting point is 00:43:11 For show notes and more episodes, though, head over to our dedicated site, which is utilizingtech.com, or you can find us on Twitter and, yes, Mastodon at Utilizing Tech. Thanks for listening, and we will see you next week.
