Utilizing Tech - Utilizing Edge 05x05: The Near Edge and the Far Edge with Andrew Green
Episode Date: May 29, 2023

The edge isn't the same thing to everyone: some talk about equipment for use outside the datacenter, while others talk about equipment that lives in someone else's location. The difference between this far edge and near edge is the topic of Utilizing Edge, with Andrew Green and Alastair Cooke, Research Analysts at GigaOm, and Stephen Foskett. Andrew is drawing a line at 20 ms roundtrip, the point at which a user feels that a resource is remote rather than local. From the perspective of an application or service, this limit requires a different approach to delivery. One approach is to distribute points of presence around the world closer to users, including compute and storage, not just caching. This would entail deploying hundreds of points of presence around the world, and perhaps even more. Technologies like Kubernetes, serverless, and function-as-a-service are being used today, and these are being deployed even beyond service provider locations.

Hosts: Stephen Foskett: https://www.twitter.com/SFoskett Alastair Cooke: https://www.twitter.com/DemitasseNZ
Guest: Andrew Green, Analyst at GigaOm: https://www.linkedin.com/in/andrew-green-tech/
Follow Gestalt IT and Utilizing Tech: Website: https://www.GestaltIT.com/ Twitter: https://www.twitter.com/UtilizingTech LinkedIn: https://www.linkedin.com/company/1789
Tags: #UtilizingTech #EdgeComputing #UtilizingEdge @UtilizingTech @GestaltIT
Transcript
Welcome to Utilizing Tech, the podcast about emerging technology from Gestalt IT.
This season of Utilizing Tech focuses on edge computing, which demands a new approach to networking, storage, compute, and more.
I'm your host, Stephen Foskett, organizer of Tech Field Day and publisher of Gestalt IT.
Joining me today as my co-host is Alastair Cooke.
Hi, I'm an independent analyst and trainer and based here in beautiful New Zealand.
And Al, you and I at Edge Field Day and on previous episodes of Utilizing Edge have had quite a lot of conversations about what you call the far edge.
Today, we're going to dive a little bit into the near edge.
So talk to me a little bit
about those definitions in your mind. Well, this came from writing about infrastructure solutions
at the edge, in particular, hyper-converged infrastructure. And it became apparent to me that
the Edge isn't the same thing to everybody. And I had vendors who would normally tell me about data
center infrastructure products telling me about how these were edge products.
And I had other vendors with really specialized hardware that you wouldn't put in a data center telling me that these were edge products.
It became clear that there isn't just a single edge.
And in my mind, I had the separation of those two kinds of products.
So I had the idea of a near edge as something that's closer to your central data center
and is more data center-like.
So it's got power protection, it's got environmental controls,
but you probably don't own it.
So it may well be a co-location
in somebody else's point of presence.
Whereas the far edge is actually somewhere you own
and that you're putting your infrastructure into,
but it's nothing like a data center. In previous episodes, we've seen things like Brian talking about Chick-fil-A and
having edge deployment into quick service restaurants. This is very different to deployment
into data centers. And so I definitely saw completely different solutions for those two
different problems. And as you say, we've been talking mostly in the podcast and at Edge Field Day about the things that go into your own data center or
your own non-data center locations, your stores, your gas stations, your trucks.
But that's not all that Edge is. No, in fact, I think that there are a few different definitions
that we've seen in the industry.
And certainly there are a few different categories of products.
And not all the products are for quick serve restaurants or oil rigs.
A lot of them are, you know, network edge, as you say.
And one of the people that really knows this topic is our guest today, Andrew Green, who is a research analyst at GigaOm, like yourself,
and who focuses on edge. So welcome to the show, Andrew.
Hi, Stephen. Hi, Alastair. Thanks for having me.
So talk to us a little bit about your focus at GigaOM and your own personal interests
in edge and your reaction to what Alastair just said about the near edge versus the far edge.
Sure. Thank you. So my name is Andrew Green. I'm a networking and security research analyst at GigaOm. I'm also the director of Precisem.co. In terms of edge, I was kind of pushed towards it. So it
was really a natural evolution of getting there. Within GigaOm, we've really defined a couple of
reports for edge, the main one being edge platforms, which is essentially a take on an
evolved CDN. It started off as an evolved CDN and then we matured the space into its own solution.
So it is matching the definition of far edge, like Alastair mentioned. And from that scope, what they usually define as edge, and it really covers
wide distances at this point, is whatever can get you workloads in 20 milliseconds
round trip, and in terms of distance, that actually equates to roughly 1,000 miles.
So for example, if I am sat here in the UK,
and I have a server in Germany that I'm talking to that might be 1,000 miles
away, and I still get 20 milliseconds roundtrip, I will still classify that as edge. And the
reason for the 20 millisecond threshold is because that's the point when you can actually
physically feel that something is delayed or not. And that's very obvious for anybody who plays instruments.
Like whenever I was playing and recording something,
if you have a delay of more than 20 milliseconds, you can't actually play.
It feels awkward. It feels a bit weird.
So that's the reason for the 20 millisecond definition.
And that's usually applicable across all domains
that require that real-time interaction.
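(As a rough sanity check on the figure Andrew cites here: light in fiber travels at roughly two thirds of its vacuum speed, so a 20 ms round-trip budget puts a hard ceiling on distance. Below is a minimal sketch; the fiber factor and routing overhead are assumptions for illustration, not figures from the episode.)

```python
# Rough bound on how far away an "edge" location can be for a 20 ms round trip.
# Assumes light in fiber at ~2/3 of c, and that real paths are longer than the
# great-circle distance (routing overhead factor) -- both are assumptions.

C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3         # fiber's refractive index slows light to ~2/3 c
ROUTING_OVERHEAD = 1.5       # assumed: fiber routes are not straight lines

def max_edge_distance_km(rtt_ms: float) -> float:
    """Upper bound on one-way distance for a given round-trip latency budget."""
    one_way_s = (rtt_ms / 1000) / 2
    fiber_km = one_way_s * C_VACUUM_KM_S * FIBER_FACTOR
    return fiber_km / ROUTING_OVERHEAD

print(f"{max_edge_distance_km(20):.0f} km")  # ~1,330 km, about 830 miles
```

(With no routing overhead the bound is about 2,000 km, roughly 1,240 miles, which is where the "about 1,000 miles" rule of thumb lands once real-world path indirection and queueing are accounted for.)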
So I'd characterize that 20 millisecond delay
as being about the near edge.
So it's further away from the actual user
than my definition of far edge.
Far edge, the person that we're targeting
is actually standing close to, and we're down to a few microseconds of latency.
So this content delivery network, and then evolving services beyond that, is for me very much what the
near edge is about. It's locations where you're putting relatively heavyweight applications that are used by a diverse collection of clients.
And so it really does feel like that's a huge use case
for a lot of organizations that are delivering
over the internet or over a 5G network now,
delivering a service to their end customers.
And that latency absolutely is a killer.
You can't cure latency
other than by putting the compute closer
or the service closer to your users.
You can't buy lower latency
the way that you can buy higher bandwidth.
And a common mistake
people make when doing system architecture
is to think you can cure
all of your performance problems
simply by getting more bandwidth.
But the speed of light,
that propagation delay in the electrical signals is a reality that
you can't buy your way out of.
You've got to distribute your application to suit that.
Yeah, that's exactly right.
And hand in hand with this also comes geographical distribution.
So at least from a CDN perspective, or it can be data centers for cloud providers, or
it can be the communication service providers, anything that has a distributed infrastructure.
You can deliver services globally.
So for example, if I have some points of presence over in North America and have some over in
Oceania and then in South America, it's all kind of a unified edge fabric. And it all depends on the vendor's ability to deliver services closer to the end user
within that 20 millisecond threshold that I talked about.
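(A toy illustration of what "deliver services closer to the end user within that threshold" can mean mechanically: estimate latency to each point of presence and pick one inside the budget. The PoP list and the latency model below are invented for the sketch.)

```python
import math

# Hypothetical points of presence: (name, latitude, longitude).
POPS = [
    ("london", 51.5, -0.1),
    ("frankfurt", 50.1, 8.7),
    ("new-york", 40.7, -74.0),
    ("sydney", -33.9, 151.2),
]

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in km."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * 6371 * math.asin(math.sqrt(a))

def pick_pop(user_lat, user_lon, budget_ms=20.0):
    """Closest PoP that fits the latency budget, or None if nothing is near enough."""
    name, lat, lon = min(POPS, key=lambda p: haversine_km(user_lat, user_lon, p[1], p[2]))
    rtt_ms = haversine_km(user_lat, user_lon, lat, lon) / 100  # assumed: ~1 ms RTT per 100 km
    return name if rtt_ms <= budget_ms else None

print(pick_pop(52.2, 0.1))  # a user near Cambridge, UK -> "london"
```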
Yeah, it does, I think, depend on the perspective as well, Al, whether you're talking about
near edge, far edge, near to whom is, I guess, the question.
But I think what you're getting at there is definitely something that we've been hearing about from many different companies. You know,
you mentioned CDNs, you know, Akamai, Cloudflare. I was also talking to a company,
EdgeGap, that's talking about deploying gaming closer to players
to overcome exactly the same thing
that you're talking about here,
that perception, that feeling
that if latency is too high, then interactivity drops.
And it's almost like an uncanny valley, isn't it,
with applications that as soon as you get past
a certain threshold of latency,
it's no longer real time. It just doesn't feel real
time. And that's your 20 milliseconds, right, Andrew? That's definitely it. Computer games are
actually a huge use case for this because it's essentially the definition of having to interact
with something in real time. And we see real world deployments of this today. So we're not talking
about something that's a few years down the line. It's happening today.
And to be fair, especially with
computer games, whenever you had to pick a server that you were
playing on, and the one that was closest to you was very busy, so you
had to pick one that was far away, like maybe across the ocean or something like that,
you were always at a disadvantage. And today it's not the case anymore. So I think it's a huge win in that industry.
So how is it not the case anymore? I mean, it can't be just that people are playing with local
players. There's got to be technical solutions that are going out. And I know that that's
something that you're really focused on. So tell us, how do people bridge this edge? How are people getting
past this 20 millisecond threshold? In terms of what used to happen maybe 15, 20 years ago,
you used to have some servers that were in major metropolitan areas, but if you're further away,
you just didn't have access to it at a latency where you could actually play competitively.
So this time with the distribution of points of presence, I mean, for example,
especially in North America and Europe, but typically across the world nowadays,
you have something that might be just a few hundred miles away from you.
In terms of what can you actually deliver, each of those points of presence now
essentially behaves slightly like its own cloud entity, so you can have
compute and storage on there. So you don't necessarily just do caching like you used to do
with content delivery networks. Now within compute and storage, you obviously have a multitude of
technologies that you can use.
You can have bare metal at the edge, you can have virtual machines, you can have Kubernetes,
you can have functions as a service.
So the type of technology that you want to use at the edge really depends on the types
of use cases.
And the types of use cases that I've seen most popular are for web performance and application
delivery.
So for example, can I run modern website architectures?
What can I do with server-side rendering, personalization?
A lot of companies are interested in doing A/B testing, and those are really use cases that are applicable
through those new types of compute that are available at the edge.
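(To make the A/B testing use case concrete: here is a minimal sketch of experiment routing in an edge function, written in the shape of a CloudFront Lambda@Edge viewer-request handler, which comes up next. The cookie name and bucket paths are illustrative, not from the episode.)

```python
import hashlib

def handler(event, context):
    """Sketch of sticky A/B routing at the edge; bucket/cookie names are made up."""
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})

    # Reuse the experiment cookie if the viewer already has one (sticky buckets).
    bucket = ""
    for header in headers.get("cookie", []):
        if "ab-bucket=" in header["value"]:
            bucket = header["value"].split("ab-bucket=")[1].split(";")[0]

    if bucket not in ("a", "b"):
        # First visit: hash a stable client attribute into a bucket.
        client_ip = request.get("clientIp", "")
        digest = hashlib.sha256(client_ip.encode()).hexdigest()
        bucket = "a" if int(digest, 16) % 2 == 0 else "b"

    # Route each bucket to a different origin path; the origin itself stays unaware.
    request["uri"] = f"/{bucket}{request['uri']}"
    return request
```

(The point is that the decision happens at the point of presence, one hop from the viewer, instead of costing a round trip to the origin.)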
And that compute might be something as simple as the Lambda@Edge component that AWS has
added to CloudFront or Cloudflare have their functions as well, but it might also be that you're building your own infrastructure,
as you say, building out an infrastructure
to run a Kubernetes cluster at each of these distributed edge locations.
Given that they're within 1,000 kilometers of a user,
then we're probably talking about hundreds of these data centers
around the world rather than thousands or just tens of them.
So in terms of the scale of this,
this is a relatively large implementation.
So something like Kubernetes does suit it quite well.
As you say, Functions as a service,
we really like running serverless components
for these scale out applications.
That seems like a really good fit
to the sort of use cases of accelerating
applications and essentially delivering the application much more locally to the user.
Yeah, yeah, absolutely. And within this, each of those lighter points of presence
actually has, you know, limitations in terms of what you can do. And in terms of where you
actually want to deliver the service, it might be like a very remote point of presence
on an island where the service might rarely be accessed.
If you have a persistent virtual machine
or even bare metal there,
it might be just wasted money after all.
So having something like serverless
and functions as a service, where you just spin up the service,
deliver whatever you have to and then spin it down,
it might be much more cost effective.
So those edge compute technologies
really have real use cases today.
But I've also seen in the market
that when people talk about the edge,
they go well beyond what's being delivered today and talk about stuff like remote surgery, connected cars,
and all of those use cases that are pretty far out there.
And I made the analogy that it's a bit like thinking of use cases for a brick.
So, for example, I can use a brick to crack a coconut or do whatever with it.
But that doesn't mean it's the most efficient way
of doing it and what you're supposed to do with the brick, right? So let's think about where these
things are being deployed. And I think again, there's sort of a, I don't know, a frontier here
where on the one hand, you are deploying them in globally distributed service provider locations, as
Alastair said, maybe hundreds of locations even around the world. And then there's, I guess,
another completely different step, like a quantum leap, where you would deploy these things
everywhere, you know, very, very close to users in literally thousands, tens of thousands,
hundreds of thousands of locations, right? And then there's even another quantum leap,
where this stuff is deployed locally, in either retail or industrial locations, or even in the
hands of end users. And I can see each of these different states being interesting,
offering distinct benefits. What's real, though? Is it possible to deploy this stuff, for example,
on every block in every city around the world? Is it possible to deploy them in every 5G
point of presence?
Or is it something where, at this point, we're talking about deploying in hundreds of locations around the world at the premises of a service provider?
That's a great question, Stephen. And to be honest, I think it all comes down to economics and financials, right? So you can have those types of compute and storage capabilities very close to the end
user.
I mean, by definition, edge is you're looking at the topological edge of a network.
So that can be your home router.
It can be, like, the first hop in your last mile provider, for example.
So those edges can be essentially deployed anywhere.
And now it's a matter of what do you actually want to use it for and what makes sense financially.
So for example, if I have like very intensive compute storage transfer requirements,
like big data analysis,
the type of AI and machine learning we've all heard of and love.
You don't necessarily want those distributed everywhere, because that doesn't make sense.
But having that near edge, where you have the processing locally, means you don't transfer, like, terabytes
of data across a thousand miles, which might not be cost efficient.
You might just want to do it locally, have that distributed as much as it makes sense.
But for example, with the communication providers, considering the internet is a commodity at
this point, they need to
invest in something. So I wouldn't necessarily be surprised if we have every radio mast
enabled with some sort of edge compute capability. What to do with it at this point,
I'm not entirely sure. Would it be more cost efficient to do, for example, all the network
intelligence, like network slicing and monitoring,
on the edge at the mast rather than doing it centrally? I'm not entirely sure, but it's
definitely something that can happen.
We are, though, seeing real world use cases for these distributed networks for
delivering gaming and for delivering media. We're seeing telcos who want to deliver video on demand to their customers
who are putting the compute really close in that 20 milliseconds round trip distance from
the customers.
And some of it's driven by the 20 millisecond round trip, but some of it is about that economics
of transferring large amounts of data.
And that's one of the reasons why we also see far-edge deployments, things like doing
video surveillance in your store.
You don't really want to stream that video back into some central location to do analytics
to it, but you want to retain the video and you want to do some local analytics, maybe
for fraud detection on site.
So there are absolutely those separate sets of use cases.
And whilst latency is a big factor, cost
of data transfer, that bandwidth that you're paying for to get more data transfer, that's
expensive particularly if you're going out to hundreds and thousands of sites.
So there are a lot of real world current use cases for putting compute where data is generated
or getting data closer to the users.
And that's where things are definitely delivering value right now.
I think there is a huge future capability
around having a more generic platform,
particularly at the near edge in those data center-like locations,
that is more cloud-like.
And we're seeing more of that distributed cloud
turning up out of normal cloud providers.
Andrew, are you seeing more cloud-like services being delivered
in these distributed locations where it's on-demand consumption,
or is it still very much about a bespoke application
being delivered into those locations?
Yes, it's very much cloud-like behavior.
And essentially, it really comes down to the scalability of a solution. And I really
love the video processing at the edge example. You might want to do, like, human detection, and you
also need to keep in mind stuff like the increase in data generation. So for example, at the moment, I'm not really sure
what the resolution for surveillance cameras is these days,
but I wouldn't be surprised to see it increase
to like 4K and 8K, right?
So that means that you need to have adequate capabilities
to not only store, but also process the data at the edge.
And this is where the cloud-like behavior needs to come in.
How can I optimize those locations such that, as we mentioned
previously, I don't have a bunch of resources that sit idle, but I also have
the scalability on tap. And for example, you can borrow concepts
like cloud bursting, where you can have
some core capability that you have to deliver.
And if there's a spike in demand, even if it means,
you know, forfeiting the 20 millisecond rule,
you can just offload some of the workloads to the cloud
or to another location and get it back and still essentially retain some sort of performance.
But having that scalability is really difficult.
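(Andrew's cloud-bursting idea reduces to a very small dispatch decision: serve from the point of presence while it has headroom, and knowingly give up the latency guarantee under a spike. A sketch; the capacity figure and the send_to_cloud helper are invented for illustration.)

```python
import queue

EDGE_CAPACITY = 100  # assumed: concurrent jobs the point of presence can hold
edge_queue: queue.Queue = queue.Queue(maxsize=EDGE_CAPACITY)

def send_to_cloud(job: str) -> None:
    """Hypothetical helper: ship the job to a central cloud region."""
    pass

def submit(job: str) -> str:
    """Keep work at the edge while there is headroom; burst to the cloud when full."""
    try:
        edge_queue.put_nowait(job)
        return "edge"        # stays within the 20 ms budget
    except queue.Full:
        send_to_cloud(job)   # forfeits the 20 ms rule, but the service stays up
        return "cloud-burst"
```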
And another thing that I saw from vendors, and one of the new metrics that I defined in the edge
platforms report, is multi-environment networking and compute orchestration. So for example, what I mean by that is if you have some sort of edge provider,
they can integrate usually by APIs to spin up, spin down, and configure other compute resources
in other environments. So for example, I might use Cloudflare at the edge
and then do some sort of processing there,
but then I also need to have an AWS EC2 instance spun up somewhere
to run maybe some application that I imported.
Would Cloudflare be able to orchestrate that,
to give the developer a single pane of glass or a CLI
or some sort of IaC integration
to spin up that AWS environment
such that they can communicate with each other?
You can organize that cloud bursting.
You can make sure that there's seamless connectivity.
You don't have to mess about with firewalls
and still do it securely.
So it's definitely a lot of work.
And I think edge platforms have an opportunity now
to have this overlay fabric controlling
not only their own infrastructure,
but also other types of infrastructure,
including your own on-premises infrastructure,
colocation hosting, private clouds, and so on.
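(Mechanically, the "spin up compute in another environment over APIs" step Andrew describes is not exotic. A hedged sketch of the EC2 half of his example using boto3; the AMI ID, instance type, and tag are placeholders, and a real edge platform would wrap this behind its own control plane and handle the networking and security concerns he raises.)

```python
import boto3

def launch_offload_instance(region: str = "eu-west-1") -> str:
    """Spin up an EC2 instance to host a workload that doesn't fit at the PoP."""
    ec2 = boto3.client("ec2", region_name=region)
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="t3.medium",         # placeholder size
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "edge-offload"}],
        }],
    )
    return response["Instances"][0]["InstanceId"]
```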
Yeah, this is an interesting concept because what I think we're getting at here is the
idea of basically an abstracted edge platform that would span from the near edge to the
far edge.
At least that's what it seems like to me.
Or am I off base? Because as you
mentioned, certainly state of the art in security cameras now is probably 4K, with 8K coming. You
know, recently at the OCP Summit, we saw the introduction of a new edge router
that can handle, you know, massive bandwidth, because they're talking about having literally hundreds or thousands of 4K cameras deployed at these remote locations. Clearly,
that's going to require local processing. But also, you know, this is Amazon. And so
they're talking about integrating that near edge with the cloud, and having sort of a unified
platform. And as you mentioned, services like Cloudflare,
does this mean that we're going to have
some sort of new platform for running these applications?
And if we do, do we have that
or is that still being developed?
I think that we are very early days,
but that seems to be the direction that we're going in.
For example,
somebody like AWS that also have their own CDN, not to mention all the globally distributed data
centers and the very comprehensive portfolio of products. They have a lot of power to do stuff.
It's going to be a matter of orchestration as well. There is also a note about edge routers, the actual routing appliances
that can host compute such that they can host some sort of application. The technology already
exists now that you can have your firewall hosted on the edge router. But there is no reason that routing appliance,
which is by definition what the edge is (it's the routing appliance at the edge; that's why they're called
edge routers), can't actually have the compute and storage capability to support those types of
use cases.
essentially virtualization of the edge for a while. What I
think we're seeing is a shift towards them being a general
purpose tool that you can run
multiple applications on, not a special purpose tool. The other compute that's interesting
at the edge is the actual cameras themselves. A bunch of the security camera products are actually doing AI functions directly on the camera.
So you don't even store when there's no change in the video.
You actually only store the segments where there's change in video.
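(That on-camera "only store when something changes" behavior is, at its crudest, frame differencing. A minimal sketch with OpenCV; the thresholds are made up, and real cameras typically use a trained model rather than raw pixel deltas.)

```python
import cv2

CHANGE_THRESHOLD = 25     # per-pixel intensity delta treated as "change" (assumed)
MIN_CHANGED_PIXELS = 500  # changed pixels needed to count as an event (assumed)

def changed_frames(path: str):
    """Yield only the frames where something moved; everything else is dropped."""
    cap = cv2.VideoCapture(path)
    prev = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            delta = cv2.absdiff(prev, gray)
            _, mask = cv2.threshold(delta, CHANGE_THRESHOLD, 255, cv2.THRESH_BINARY)
            if cv2.countNonZero(mask) >= MIN_CHANGED_PIXELS:
                yield frame  # worth storing or sending upstream
        prev = gray
    cap.release()
```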
And that, again, shifts what data is coming into the edge location and gets that processing closer
and closer to the data generation. Andrew, are we seeing that unified platform that we all dream of
to give us, heaven forbid, a single pane of glass for managing this entire thing?
Not at the moment. And to be honest, it seems that the further we go towards the edge, the more we go back in time
where all the data is going to be processed locally, and then you just have to integrate the
outsourced infrastructure services into it essentially.
But what I have noticed is that some of the edge platforms that I featured in the recent GigaOm report are integrating both far edge and near edge in their offerings, though they don't all do that.
So I defined three types of deployment models.
So you can either have your existing points of presence, or on-demand points of presence.
So upon customer request,
can the vendor actually spin up a point of presence
somewhere near to that location?
Or can the vendor support either via dedicated hardware
or a virtualized hardware,
the services that they deliver for that customer?
So for example, I can have purpose-built hardware
shipped to the customer,
they install it in whatever location they want,
and they can have those types of services
that include application delivery
and then load balancing and so on,
on that hardware appliance,
or it can be a virtual appliance,
which can be installed on an all-purpose server,
and it still integrates with the wider edge platform offering to have,
you know, content delivery, video streaming, it can be whatever.
Yeah. It's really exciting to think about that because we've talked about this a
few times, Alastair, where, as you said,
local processing almost acts as a filter.
So you have a tremendous amount of data that goes
through a very hyper-local filter, local to the camera or sensor, and then the interesting
stuff is down sampled, and sent to the next level for processing. And then that's down sampled and
sent to the next level for processing and so on. And it really is one of the fundamentals of distributed computing, right?
And also, I mean, at the risk of, you know, getting tomatoes thrown at me for bringing up an
old podcast topic, the rise of tensor processing and machine learning processing is just amazing.
And the technologies and the tools that we heard about in seasons one
through three of Utilizing Tech, companies that are developing TensorFlow processing on ARM chips,
on x86, BrainChip, for example, that's trying to do this with low powered chips, all these companies
that are trying to basically build machine learning based filters for all of
this data, in order to make it possible to process this. I'm sorry, I'm just getting excited.
I guess I had to bring up AI, I had to bring it up after all of this. But I
think you're right. I think that there is a lot of work being done with this.
Yeah. And one of the concepts I had was the idea that the edge was about being a data refinery,
that where data is being generated at the edge, we improve the value of the data as close to where
it's generated as possible and then pass it on for the next phase. And I think that's one of the
things that tends to characterize the far edge, where it's at a location we own and where we've got our cameras that are generating the data and sending it back into central.
Where I think near edge feels the other way, that the data is being generated in some central, often cloud location and then being pushed out to that near edge location.
That's just something that struck me today. Now, there's always a hybrid,
because if the far edge location is a gas station
and there's advertising and video advertising being delivered,
then there's data flying outwards.
But the refinery aspect sort of comes back in the other direction,
which is interesting.
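(Alastair's "data refinery" framing maps naturally onto a staged pipeline: each tier keeps the raw data local and forwards only a smaller, higher-value summary. A schematic sketch; the event shape and stage logic are invented to show the flow, not taken from the episode.)

```python
from dataclasses import dataclass

@dataclass
class Event:
    camera_id: str
    kind: str          # e.g. "motion", "person" -- illustrative
    confidence: float

def far_edge_filter(events):
    """Tier 1, on site: drop everything below a confidence floor."""
    return (e for e in events if e.confidence >= 0.6)

def near_edge_aggregate(events):
    """Tier 2, nearby PoP: collapse event bursts into per-camera counts."""
    counts = {}
    for e in events:
        counts[e.camera_id] = counts.get(e.camera_id, 0) + 1
    return counts

def to_central(counts):
    """Tier 3: only the refined summary crosses the expensive WAN link."""
    return {"summary": counts}

raw = [Event("cam-1", "person", 0.9), Event("cam-1", "motion", 0.3),
       Event("cam-2", "person", 0.8)]
print(to_central(near_edge_aggregate(far_edge_filter(raw))))  # {'summary': {'cam-1': 1, 'cam-2': 1}}
```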
You know, that's actually a great point, because
if I think about how we initially defined
this edge platform report from a content delivery perspective, that's exactly what you were saying.
You're delivering content from some sort of like central server, like it might be a website, it might be video, it might be images, it might be whatever.
So you're delivering that. But at the same time, when you have data generated on location, and then you need to process it and send it somewhere,
let's say to the cloud or to some of your infrastructure that's hosted somewhere else,
then you need to have that processing the other way around.
At the risk of opening up another can of worms,
I'm thinking about the security aspect of it.
For example, the more data you generate,
the more you have to store.
But then how long do you store that data?
What's your retention policy?
If you have some sort of attack
that's happening on location,
would you actually risk throwing that data away,
let's say after three months, if it was infected, and not be able to work back to see what was the impact on the business?
I came across a recent IBM report that said it's about a 273-day average for a threat to be identified. So I'm wondering, if you're generating all this data on the edge, how much of it do you
store because it's going to be more and more data?
So what's your take on it?
I definitely think less and less storage is going to be what's going to happen.
We're going to collect more data.
We're going to store less of it.
We're only going to store the interesting bits.
What do you think, Al? Well, I think the reason we've stored lots of data in the past is
we had no idea of the value of that data. That's what drove the explosion of big data. As we start
applying AI and machine learning to it, we find the actual value in the data. And I think you're
right that we will start throwing out the data of unknown value when we can actually identify value in data. And that idea that all data is equal and all data should be kept forever,
because we don't know what value it has, we need to shift away from that. It just doesn't
continue to work. With you saying that, it also leads me to think that we can also apply AI and machine learning
for threat detection at the edge, such that if we want to store data, then we can as a
business choice.
But if we don't want and we risk losing that information, then at least we can apply some
sort of real-time threat detection at the edge, such that when we get rid of the data,
when we discard it,
it's not going to be a business risk as well.
Yeah, and that's actually one of the things
that machine learning is really good at,
is threat detection and determining, basically,
what to focus on and what to ignore and what to skip.
And that was one of the things
that came out of a lot of our discussions earlier.
And I agree with you.
I think that that's really something that is going to be very, very valuable
and in fact necessary because of this flood of information. So before we go, I want to kind of
return to the first question that I had, which is, Alastair, you've been talking about near edge and far edge. We've been kind of dancing around this with a lot of conversations so far about building a lot of far edge stuff.
Now that we've had a chance to talk near edge and to talk about this continuum, do you have a different opinion on this or has this just cemented that opinion? I think for me, what I think I learned today was that in business,
you always say, follow the money; at the edge, follow the data. It's all about getting that
data close to your users. And there's different definitions of getting that data close to your
users or the processing close to the data. I think that's what I learned today.
Andrew, how about you? Thank you so much for joining us.
What's your summary?
What's the takeaway message for folks listening to this?
Thank you for having me, Stephen.
And I think the biggest realization is the one that I had just now
about either pushing data from the edge or pulling data from the edge.
And those use cases between near edge and far edge
and how they can,
you know, essentially have different use cases. It's not only an architectural concern, but it's
also a business concern, a security concern. So whether you push data or you pull data from the
edge they're completely different systems and they should be treated accordingly. Absolutely. And I think that we
do also need, as we've kind of been dancing around in here, some unification in terms of
the platform, in terms of orchestration and operation, deployment, security, all of these
questions are going to be coming up. And I know there's a lot of companies working on that.
And I'm really excited to see what the two of you at GigaOm and the rest of the industry
come up with in terms of structures, descriptions, overviews of all of these solutions.
Those of you listening, thank you very much for joining us. Where can those people continue this
conversation with the two of you? Andrew, I'll give you a chance to go first. Where can we find you and continue this conversation?
So you can find me directly at precisem.co
or you can send me an email at andrew.green
at precisem.co
where we deliver technical content writing
for Enterprise IT.
And you can find me, Alastair Cooke,
as @DemitasseNZ on Twitter,
or demitasse.co.nz for my own thoughts and writings
about this. You can also find me in person at VMware Explore very shortly, or a few months'
time. And I definitely suggest checking out some of the vBrownBag and Build Day Live videos we've
created around data center infrastructure primarily, but maybe some edge infrastructure coming soon. Also, I think I don't want to put the cart in front of the horse,
but I think it's pretty likely that the three of us may be involved in Edge Field Day going forward.
So I hope that we have all of us involved in that and that people will be able to see us there.
You can learn more about that at techfieldday.com. And of course, you can find more
from me by going to gestaltit.com and checking out the on-premise IT podcast or our weekly news
rundown. Thanks for listening to Utilizing Edge, part of the Utilizing Tech podcast series.
If you enjoyed this discussion, please subscribe in your favorite podcast application and consider
leaving us a rating or a nice review. This podcast is brought to you by gestaltit.com, your home for IT coverage from across the
enterprise. For show notes and more episodes, go to utilizingtech.com or connect with us on
Twitter or Mastodon at Utilizing Tech. Thanks for listening, and we will see you next week.