Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 05x06: Achieving High Availability at the Edge with StorMagic
Episode Date: June 5, 2023

Although everyone wants high availability from IT systems, the cost to achieve it must be weighed against the benefits. This episode of Utilizing Edge focuses on HA solutions at the edge with Bruce Kornfeld of StorMagic, Alastair Cooke, and Stephen Foskett. It might be tempting to build the same infrastructure at the edge as in the data center, but this can get very expensive. Thinking about multi-node server clusters and RAID storage, the risk of a so-called split brain means that not just two but three nodes must be deployed in most cases. StorMagic addresses this issue in a novel way, with a remote node providing a quorum witness and reducing the need for on-site hardware. Edge infrastructure also relies on so-called hyperconverged systems, which use software to create advanced services on simple and inexpensive hardware.

Hosts: Stephen Foskett: https://www.twitter.com/SFoskett Alastair Cooke: https://www.twitter.com/DemitasseNZ

StorMagic Representative: Bruce Kornfeld, Chief Marketing and Product Officer at StorMagic: https://www.linkedin.com/in/brucekornfeld/

Tags: #UtilizingEdge, #EdgeStorage, #EdgeComputing, @StorMagic
Transcript
Welcome to Utilizing Tech, the podcast about emerging technology from Gestalt IT.
This season of Utilizing Tech focuses on edge computing, which demands a new approach to compute, storage, networking, and more.
I'm your host, Stephen Foskett, organizer of Tech Field Day and publisher of Gestalt IT.
Joining me today as my co-host is Alastair Cooke. Welcome, Al.
Hi, Stephen. Nice to be back talking to you
today. It is great having you from the other side of the world. Now, we both got into IT a while
ago, and we've both seen quite a lot of water go under that bridge. It's interesting. Edge is quite
a bit different from data center or cloud IT, don't you think? I think it draws from both of them.
And it is definitely, you can't just transplant from either of those locations
and expect everything to work the same at the edge.
But fundamentally, we come back to what is the business problem you're trying to solve?
And how can we do that with the least effort and least risk?
And yet, some things in enterprise tech, some of the data center concepts that you and I have spent our whole career working on, things like high availability and continuous availability, that's a little bit more difficult to accomplish.
And some people, I think, are maybe throwing up their hands and saying, I'm not sure that we can have this. Maybe in edge environments,
we need to make things disposable
and run it on a single node instead of multi-node
or things like that.
It's a real challenge.
Yeah, I think there's a mix here: there are some very cost-conscious places
where there really isn't valuable data
or services that are really critical, where an outage
of five minutes or an hour a week is inconvenient, but it's not business critical. But I think there
are some edge use cases where that does become really critical. Imagine you have an outage at
your favorite gas station and you can't fill up for a week.
That seems like a problem.
I mean, even not being able to get service for five minutes
is probably going to lead customers to go to a different station.
And so there can be some business impact,
even at the places that we think of as being relatively low cost,
low-value data, but actually the service delivery there is important.
Well, absolutely. I mean, I need my Slurpee and my hot dog too. I mean, don't forget that.
But one of the things that occurs to me is that one of the reasons that we don't have some of
these things, I think people would want high availability at the edge. I think it's just
too hard to achieve in some cases. It takes too much hardware, too much software, too much cost.
And I think that that's the reason that some people aren't implementing it. But that's
what we're going to talk about today on Utilizing Edge. Our guest today is an old friend of mine,
Bruce Kornfeld, who is Chief Marketing and Product Officer at StorMagic. And one of the things about
StorMagic is that they're trying to tackle this high availability question at the edge.
So welcome to the show, Bruce.
Thanks, Stephen. It's great to be here.
Thanks for having me. Looking forward to a lively chat.
Well, it's always a lively chat with you.
And I am really interested in diving into this because, like I said,
one of the challenges for high availability is the idea that you need to have basically redundant
hardware, you know, and redundant hardware is expensive. So if I tell you, you need to have
three nodes instead of two, or if I tell you, you need to buy 33% more storage, 33% more memory,
33% more everything, that's a huge problem when it's multiplied by a thousand nodes, right?
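To put rough numbers on that, here's a quick back-of-the-envelope sketch in Python. The per-node cost and site count are purely hypothetical placeholders for illustration, not figures from the episode or vendor pricing.

```python
# Back-of-the-envelope: what a third quorum node costs across a large fleet.
# All figures are hypothetical placeholders, not vendor pricing.
node_cost = 5_000   # assumed hardware + software cost per edge node
sites = 1_000       # assumed number of edge locations

two_node_fleet = 2 * node_cost * sites
three_node_fleet = 3 * node_cost * sites

print(f"Two-node clusters:   ${two_node_fleet:,}")    # $10,000,000
print(f"Three-node clusters: ${three_node_fleet:,}")  # $15,000,000
print(f"Cost of the quorum nodes alone: ${three_node_fleet - two_node_fleet:,}")
```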
Yeah, listen, at StorMagic, you know, we talk to customers about this all the time.
And a lot of times they just cannot cost justify the extra hardware and software needed to achieve
the high availability. And they work with their organization to say,
sorry, you know what,
we can't, you know, the budget isn't there. So we're going to have a single node. We're going
to have a single server. And if that thing goes down, here's the impact. There's going to be
downtime for four hours, a day, a couple of days, don't know. But there are cases that have to go
that route, but we're seeing a lot of cases go the other way: because technology is becoming less expensive, there are ways of achieving both.
I think there's a really interesting piece also around efficient use of the storage because there are a few solutions around that do this cluster of servers and turning local storage into shared storage, which is the most cost-effective
way of doing this. But some of them put quite a lot of overhead. And of course, the same story
as having that extra third node, having an extra 20% of resources in every server that you've got
at 10,000 sites, that gets to be really expensive. So I think there's a need for solutions that
actually fit to the small scale.
Yeah, typically what we're seeing is that a lot of IT professionals are used to what they've worked on in the past, which is typically they built a big data center or a small data center that has dozens, hundreds, thousands of servers and lots of storage. And the inclination is,
let's just do the same thing at the edge.
And that's a big problem
because typically you need at least three servers.
And yeah, using the internal storage,
that's the whole software defined storage thing.
And there's a lot of solutions for that out there.
And they all do it fine. And they all provide the high availability, but typically it requires at least three, sometimes
four in order to ensure that a node can go down. So, you know, a lot of times it comes down to the
cost equation and the management bit, because, you know, typically these data-center-designed software-defined storage or HCI solutions, use whichever term you want, are designed for big enterprise, and the edge presents a lot of challenges that make those solutions hard to fit.
And mostly it's cost.
I mean, it really comes down to cost.
Yeah, that's my feeling, too. It really does come down to cost because, you know, I mean, a lot of people listening here, you might not be storage nerds.
I'm a storage nerd. But if I say RAID to you, the first thing you're probably thinking of is waste, that it's extremely expensive.
You know, oh, I got to buy twice as much,
I got to buy three times as much. You know, if I say high availability, the first thing you're going to think of is waste. Oh, I got to buy multiple systems, I got to buy multiple servers,
I got to ship them all out there, I got to power them, I got to keep them running. And they're not
doing anything. You know, why do I have to have all this wasted, you know, idle hardware out there
just in case something goes wrong.
And honestly, as much as it pains me as a data center guy to say this, you're not wrong about that.
RAID is overhead. It is waste. High availability, you know, having an extra system out there.
It absolutely is. And you need to think about that.
You need to think about the cost and you need to think about the benefit
of deploying this stuff
and whether it's worth it, right?
Yeah, yeah, completely agree.
And a lot of customers still,
they cost justify it.
It makes sense for their business
to have the RAID.
I mean, RAID is very, very common, as you know.
But the good news there is the cost of the physical storage has come down so much that, yep, instead of
three drives we're going to have to buy four or five. But that cost isn't a major one. It becomes
more significant when you're talking about another whole node, a whole other server with five drives in it, et cetera.
The technology that we deliver that helps in this case allows for a two-node solution instead of three. Most everyone else requires three, and that's a huge cost when you have
dozens, hundreds, or thousands of sites. And the technology is just, it's kind of simple. Al, you and I have talked
about it in the past, but it's a remote witness, because you still need three nodes in
order to have a proper HA solution. And I don't think the point here is
to argue that into the ground, but I'm pretty sure that most IT people would agree that you need three to have a very
highly available cluster or quorum. So that's, I think, the key that, again, to the IT audience
that we have, I think that it's important to understand, yes, if you have a cluster of, well,
if you have a cluster of one node, then you don't have a cluster, I guess. You have a node, and if
it goes down, it goes down. If you have a cluster of two nodes, and you run everything on one of them, and if it
dies, then it runs on the other one, that's all well and good, but it's really hard to make that
highly available, because what happens if you snip the connection between those two nodes, and both
of them think that they are the node? That's when things go wrong. And that's
what Bruce is talking about here. The reason that we IT people always say you need three nodes
is exactly that. They call it a split brain or something like that, where essentially you've got
two nodes that used to be coordinating and now are no longer coordinating. And both of them are like,
I'm in charge here, you know, Alexander Haig.
And they're all running off saying,
you know, hey, listen to me, I'm doing my thing.
I'm making the decisions.
And that's a big problem because things can get corrupted.
You can have torn database accesses, for example.
You can have a client that's talking to this one
and thinks that he's just charged my Slurpee.
And this other one is like, nope, no Slurpee here. And suddenly you're losing money. It's a huge problem. And that's why
most cases you can't have a two node cluster. You need a three node cluster.
And again, that's the thing that StorMagic is focused on. That's the thing that a lot of
companies are focused on trying to figure out how can we not have to have three nodes.
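To make the quorum idea concrete, here's a minimal sketch in Python of the general majority-vote rule (a generic illustration, not StorMagic's actual code): with three voters, a node may keep serving only while it can reach a strict majority, so two partitioned nodes can never both stay active.

```python
# Minimal quorum sketch: two data nodes plus a tie-breaking witness.
# Generic illustration of majority voting, not any vendor's implementation.

VOTERS = {"node_a", "node_b", "witness"}

def may_serve(me: str, reachable: set[str]) -> bool:
    """A node keeps serving storage only if it sees a strict majority of voters."""
    votes = len({me} | (reachable & VOTERS))
    return votes > len(VOTERS) // 2  # need at least 2 of 3

# The link between the two nodes is cut; the witness breaks the tie,
# so exactly one side survives and split brain is impossible.
print(may_serve("node_a", {"witness"}))  # True:  node A keeps serving
print(may_serve("node_b", set()))        # False: node B stops serving
```

With only two voters there is no tie to break: both isolated nodes would see exactly half the votes, which is why a plain two-node cluster cannot safely decide who is in charge.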
Yeah, there are technologies in order to do that
without putting that third node on every single site. And that's where we've architected our
software-defined storage solution. It's been architected for this environment, for the edge,
in that we designed it so you don't need that third node on site. You can have the third node
in the cloud. You can have the third node in your data center.
And the way that it basically works is it's a simple little VM that acts as the third
node for lots of sites, up to a thousand sites for one little VM.
So that one VM acts as the quorum node for up to a thousand clusters.
And that's simply how we do it.
And it is pretty amazing what we hear from customers because when they hear about our solution,
they're blown away that,
oh, so I can do this with two.
It's reliable.
I do get the real HA with proper quorum.
I just don't have to invest in that third physical server.
And that's pretty much the way it works.
I think there's another aspect to that that I really want to highlight is the one-to-many
relationship between the central resource and the clusters.
Because a lot of the data center solutions that have been adapted to remote office, branch
office use case, you have one-to-one kind of mappings.
And that works just fine if you've got half a dozen or a dozen sites.
But when you're getting to hundreds and thousands of sites,
that mapping of a witness that's a one-to-many becomes absolutely crucial.
I don't want to have to run 10,000 witness VMs in my data center
because I've got 10,000 sites.
I'd rather only be running 15,
20 of them. Ideally, I'd find a cloud solution for running those, so I don't even have
to have them in my data center. So there's a whole element around the manageability for large numbers
of clusters that becomes very different to what data center infrastructure and ROBO kind of use
cases are used to when you come out to that vast scale of edge.
And that's where I see some of the differences
between enterprise technology and cloud technology
kind of start to play into this discussion
of what's the right thing for your edge.
Yeah, and if you think about it, I mean, I completely agree, Al.
If you think about it, the way that systems have been designed
for years and years and years has been for data center and cloud. That's where the money's been.
That's where vendors go. Let's go get as much market share as we can selling to cloud and
large data center. And the technologies that exist are awesome and they work great,
but they're dependent on either a three node or some of them do have witnesses, but these
witnesses are in the data path. And that's the technical thing I wanted to mention here is that
the core of the reason why we're different is that we're proud to say that we don't do network
RAID or erasure coding, however you want to put it, there are different ways of doing it. But these data-center-class, cloud-class storage and HCI solutions, they scale across hundreds of nodes.
So they have this complicated erasure coding that works for the data center. Once you try to do that
at the edge and then do a remote witness, you can, but that remote witness is still in the data path.
So now you're sending all of your local IOs, all of your credit card transactions, all of the
information from the wind farm, whatever it is, it's not just staying local, it's also having to
go to the cloud and back. So that's just not cost inducing. I don't know if that's the right word,
cost enabling either, right? So with the way that we do things, it's a simple synchronous mirror locally.
So the data is really fast, really local.
And then the remote witness is just a heartbeat, right?
We're just checking, are you alive?
Are you alive?
Are you alive?
And if we don't get an answer, then we know how to coordinate things from the cloud.
That's simply put, that's how we do it.
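Here's a rough sketch of that pattern in Python, hedged as a generic illustration rather than StorMagic's implementation: the witness never touches the data path, it just records heartbeats, and because state is keyed by cluster ID, one small witness can arbitrate for many sites.

```python
import time

# Generic sketch of a heartbeat-only witness: it sees liveness, never data.
# Writes stay on the local synchronous mirror; the witness only breaks ties.
# Hypothetical illustration, not StorMagic's implementation or API.

HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before a node is considered down

class Witness:
    def __init__(self) -> None:
        self.last_seen: dict[tuple[str, str], float] = {}

    def heartbeat(self, cluster_id: str, node_id: str) -> None:
        """Called periodically by each edge node: 'are you alive?' traffic only."""
        self.last_seen[(cluster_id, node_id)] = time.monotonic()

    def surviving_node(self, cluster_id: str, nodes: tuple[str, str]) -> str | None:
        """If exactly one node is still heartbeating, grant it the cluster."""
        now = time.monotonic()
        alive = [n for n in nodes
                 if now - self.last_seen.get((cluster_id, n), float("-inf"))
                 < HEARTBEAT_TIMEOUT]
        return alive[0] if len(alive) == 1 else None  # None: no tie to break

# One small witness VM can track many clusters by ID:
witness = Witness()
witness.heartbeat("store-042", "node_a")   # node_b has gone silent
print(witness.surviving_node("store-042", ("node_a", "node_b")))  # node_a
```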
There's another aspect to this too.
And I think that, again, a lot of solutions
that are kind of brought from the data center to the edge
are kind of over-engineered.
There's a lot more hardware in there.
You know, my first edge environment that I was exposed to was a retailer,
and they had a fiber channel SAN with a fiber channel switch
with a multi-controller storage array,
and they put these in retail stores.
And it was, let's say, fairly expensive.
As you've heard, if you've been listening to Utilizing Edge this whole season,
and if you tuned in for Edge Field Day, a lot of companies are, well, let's say they're not doing
that. What they're doing instead is they're using software, what's called software-defined,
basically software, running on commodity hardware. You heard Bruce just mention hyper-converged.
The idea is that instead of having specialized hardware to
do things like controlling storage devices, you have software that does it on standard PC stuff.
We've talked about our Intel NUCs quite a lot, but of course there's a lot of other
x86 type and ARM type PC stuff that's deployed out there. And in all these cases, essentially,
the result is that you have an inexpensive piece of hardware that you're deploying out there.
And in some ways, this makes it easier to deploy redundant hardware, because essentially,
if you're deploying a redundant device that costs $50,000,
well, then that's a big expense. But if it's a $500 device, then it's a lot less of an expense.
Now, it's still an expense, because you may have to buy 1000 of them. But it's not quite so bad.
And I think that that's another aspect here. Most of the products that we're looking at here are either software or incredibly inexpensive hardware.
Yeah, I completely agree. And I would say that this may not be unanimous agreement across the table here, but I would say for edge use cases, and I'm thinking small sites, this is a small site,
they've got 10, 20 VMs they need to run. The days of the three-tier architecture are gone.
They're not going to buy a couple of servers and fiber channel and an external SAN. I mean,
believe it or not, we still see customers running in that way. And when we show them, guess what?
You can take a couple of servers. These days, I'm not an investor either way, but I'm hearing
AMD is doing quite a good job with performance these days, but it doesn't matter, Intel or AMD.
Take two servers, super high power. You can get plenty of storage in them. You don't need the expense of
that third thing, that SAN, to manage, to pay for. And by the way, it's still a single point
of failure, right? You still have three, yeah, you have two servers and you have this thing,
it can still fail. And then both servers are gone, right? So I'm saying, you know,
the long-winded way kind of to say that hyperconverged,
what you're talking about, software-defined,
two servers using the internal storage,
that is here today.
It's not everywhere at the edge,
but it's definitely heading in that direction.
So Bruce, you talked about edge sites
that have half a dozen, a dozen virtual machines.
What do you see in those virtual machines? Because we hear a lot
of noise from the everything's-containers-and-Kubernetes part of
the world, but I suspect your experience of what a lot of
customers are running at the edge is not all the latest and
greatest, because although the future's here, it's not evenly
distributed. Are you seeing customers that just want to run containers
that are looking at highly available solutions?
Or are you seeing what I'm expecting
as a much more mixed population of applications at the edge?
Yeah, that's a really good question, Al.
So yeah, so I say this.
Primarily what we still see
is it's a virtualization world at the edge.
Hypervisors are everywhere at the edge.
It's mostly, I was going to say Microsoft, it's mostly VMware.
Microsoft is doing a great job, but it's some Microsoft.
And then KVM is making a play, right? There are some end users that are technical enough,
happy with open source,
and they'll go with a Linux KVM build out there.
But containers are not mainstream at the edge yet,
but we definitely are talking to a lot of customers about it
because as you both know,
containers are very popular in the cloud
and in the data center.
So guess what?
Software developers, they don't think about the infrastructure at the edge.
All they know is the next retail application is going to be a container, right?
So the IT folks, the infrastructure team that's building the next generation edge, they know that they have to be ready for containers.
And in fact, we do have customers doing it today. And typically the way we see it running today is they're simply running containers inside
of a VM.
And that's a perfectly fine way to do it.
The purists will say it's not the best way.
And long term, everything should be containers.
So you don't have the extra layer of the hypervisor.
We'll get there eventually.
But for now, and I would say at least the next
five years, you're going to see a mix of both. And we're definitely going to see more and more
containers, I think. I mean, anyone who's actually experienced this, I mean, from my perspective,
whether it's on-prem, cloud, or edge, I don't want to run things not in containers because of the benefits that containers give you in terms of being able to create the exact correct software environment for whatever the application is.
And it's known good.
I'm looking at you, Python.
I'm looking at you.
You know, everything that I run is I convert it into a container.
And I'm seeing the same with, you know, at the edge with Kubernetes.
I mean, we talked with Brian and with some of the other companies about this as well.
You know, Kubernetes gives us a common language, a common concept for deploying applications,
whether they're on-prem in the cloud or at the edge.
So Kubernetes is going to be at the edge, but it's not going to be used at the edge the way it's used in the cloud. It's
going to be used at the edge as more of a deployment and a language for applications, in my opinion.
You think so, Al? Yeah, absolutely. So fundamentally, I see Kubernetes or containerization, not necessarily Kubernetes.
I have mixed feelings about the overhead of running Kubernetes at each site.
But definitely containers is the mechanism for software distribution.
I think that's the thing that Docker got really right with containers.
And now as I'm building applications, I'm choosing to package them as Docker containers, and I find that so much easier than having to install dependencies on every server when my application needs to run.
So I think, yeah, increasingly as new applications are being built, they're being built in containers.
But returning to Bruce's point, there is always a really long tail of applications that were built before containers.
And until there's a mass extinction event that cuts off that long tail, we're going
to see them.
We may see a decrease as the cost for running them continues to increase and then containers
become more cost effective.
And I think I'd love to see more of these edge platforms doing more integrated containers.
And the last time I talked with Bruce was for the GigaOm Radar report on hyperconverged infrastructure, and they were in the Radar
for edge hyperconverged infrastructure. There was sort of a separation between VM-focused and
container-focused application vendors in there, and it was an interesting thought about which is best
for which use case. Yeah, the other thing I'd say about containers, if you take this another step, is we talked about container usage for cloud and data center, and it's going to make its way to the edge eventually.
Think about, though, the edge isn't everything, obviously.
There's another group we haven't even talked about today,
which is small businesses, small and medium business.
They might have one site.
They might have five.
They have the same pain points that we've been talking about today.
They want high availability.
They want to be able to run all of their local apps with 100% uptime.
They don't have budgets.
They don't have IT people at their locations.
Containers are probably a long way away for them, right? Because they just don't have the skill set
to do it. They don't have an IT team up in a data center somewhere pushing containers to them.
So just another example that I probably see the container world, it's going to take a while to
be pervasive everywhere, particularly in the SMB world. That's probably going to be even longer, I would think.
Yeah. And particularly in the edge world, I'll point out too, because that's the thing,
as long as you need one virtual machine, then you need a hypervisor at the edge.
And there are so many applications, just like we saw in the VMware space in the data center.
One of the superpowers that VMware brought and now modern hyper-converged and other virtualization platforms bring
is the ability to basically take that old thing and run it on this new thing. And that's even
more true in the edge where you may have an application that not only is it not containerized,
it's not really intended to be run on a virtual machine, but you can run it on
a virtual machine because it's none the wiser. And so I feel like that's actually a reason that
we're going to need virtualization at the edge, even though we're using containerization for more
modern apps. Yep. Completely agree. Completely agree. Yeah, something that you mentioned just jumped into my head about using existing infrastructure.
And that's another thing that we see is edge, I'll just say it, edge is cheap.
They can't afford an extra $500, let alone $2,000 per site because it adds up fast.
So a lot of times when they're making
this HA decision, can I afford it? Can I afford it? The thought process has always been, okay,
I got a single server and I can think of a lot of customers that we've had this conversation with.
Got a single server. Can I afford to go to three? That's a big jump. Can I even afford to go to two?
That's a big jump. One of the things that we do because of the way we do things,
synchronous mirror, it's not erasure coding. We're allowing customers to cluster different
types of servers, different brands, different models, different ages. It doesn't matter as
long as they can run the same version of the hypervisor. So there's another example of keeping customer costs down is they can leave their server in production, roll a new one in, cluster them together, and away they go.
They've added HA for the cost of one server instead of two or three.
So that's another little benefit of going with a two-node HA solution at the edge.
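As a trivial sketch of that constraint (with hypothetical field names, not StorMagic's actual checks), the clustering rule Bruce describes boils down to matching hypervisor versions rather than matching hardware:

```python
# Hypothetical pre-flight check for mixing server generations in one cluster:
# in this sketch the only hard requirement is a matching hypervisor version.
def can_cluster(node_a: dict, node_b: dict) -> bool:
    return node_a["hypervisor_version"] == node_b["hypervisor_version"]

old_server = {"model": "Gen9", "age_years": 4, "hypervisor_version": "8.0u2"}
new_server = {"model": "Gen11", "age_years": 0, "hypervisor_version": "8.0u2"}
print(can_cluster(old_server, new_server))  # True: brand and age don't matter
```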
So you might see that fit with budget cycles where one of the two nodes
gets replaced every two years or three years, and you can just do this rolling thing rather than
having to do the whole bulk update. And of course, you're then doing it as a failover between two
nodes, you know, failover from the oldest node to the newest node, retire the old one, put in a new
node. That could be very cost effective for spreading
that cost over the years, particularly here in New Zealand, where we like a low cost solution.
And so we'll retain servers for five to seven years, particularly in the provinces where I am.
Yeah, we may actually see a situation now to that point where companies are basically going to use
the stuff until it falls over and then dispose of it.
And in a situation like that, you're almost guaranteed not to have the same hardware on
both nodes, because it's almost impossible to imagine a situation where five years from now,
you're going to be able to buy the same node. Something's going to be different.
Now you are heading towards the cloud economics of run it until it fails and replace it when it fails.
Or maybe a little ahead because you probably want to have continuous availability through that failure.
Are you seeing that, though, Bruce, that kind of situation where companies are basically running it until it falls over and fails over and then replacing it?
Yeah, we do see some of it. I would say it's more on
the smaller company side. Our large customers that have thousands of sites, they have pretty
tight protocols. But I would say four years, five years is pretty common in those environments even.
But I would say, yeah, we do have customers that get every day that they can
out of a server. Yeah. So you see some out there for six or seven, but I'd say for the big ones,
you're not going to see it go past five. That's what I've seen.
As we're heading towards the close, I want to really bring up the idea that it continues to
be build the right solution for the problems that you have.
And so this idea of having a highly available solution as being really cost effective
just adds another tool you can use to solve the problems for the organization with the right piece of technology.
Sometimes the right technology is a single node that will fail and will have an extended outage
because that fits the business risk.
Sometimes that risk is unacceptable and so you need a multi-node cluster,
maybe two nodes.
Sometimes you need something bigger.
The edge isn't just a single solution for a single problem,
and I think knowing the possible solutions is really important.
Bruce, have you got anything to bring this one home as well?
Yeah, no, I would just say, first of all,
thanks for including me in this conversation.
As you can tell, I'm quite passionate about this topic,
particularly for Edge.
I see nothing but growth of Edge computing.
I would even go so far as to say
that we're starting to see cloud applications
and cloud decisions coming back to Edge because they're realizing the cloud is very, very expensive.
So I see this problem getting worse.
It's getting bigger.
There's going to be more customers, more opportunity to think about how do I run applications locally at the least cost with high availability.
So I love talking about this stuff.
I'm glad you brought me on here.
We will be hopefully continuing this conversation
at Edge Field Day coming up in July,
which we're excited about.
But yeah, it's been fun.
So thanks for having me on.
Yeah.
Well, it's great to have you.
And I have to say too,
I think it's interesting, you know, in the discussions
that we've been having here on utilizing Edge, as well as Edge Field Day, and of course, the
roundtable discussions that we've done as well on this topic, it occurs to me that Edge is sort of
a forcing function for us to reevaluate some of the decisions we've made, just like cloud was,
just like the, you know, the intense focus on automation and everything as code in the cloud forced us to reevaluate a lot of the things that we were doing in the data center. Edge is forcing us to reevaluate a lot of the decisions that we might make. And I think that we're coming up to a much more interesting solution and one that so far, a lot of technologies are kind of trickling into the edge from cloud and from on-prem data center. But I think over time, some of the lessons
that we're going to be learning here are going to be trickling right back out. And we may be
starting to say, you know, maybe we shouldn't do this or do that. You know, maybe we shouldn't
invest in this, you know, high-end piece of hardware. We can do it in software. I mean,
that's kind of one of the messages of cloud as well. So it really is an interesting conversation and I really appreciate
you hopping in here, Bruce. Before we go, where can we connect with you and continue this
conversation? Well, I love to chat with you, Stephen and Al, anyone in the industry. We will
be attending Edge Field Day where it's going to be a nice open
format broadcast live. So we hope that we can take questions from the attendees in the room,
plus anyone else listening. So let's have some fun with Edge Field Day in a couple weeks,
couple months, six weeks, something like that. Yeah. Yeah.
Al, how about you? Where can people connect with you?
You can find me online as DemitasseNZ,
both on Twitter and also at demitasse.co.nz on the intertubes.
I'm going to be at VMware Explore in Las Vegas
later in the year.
And if you're in Europe,
you can catch me at VMware Explore in Europe
much later in the year too.
And as for me, of course,
you can find me here at Gestalt IT.
You'll find me at Edge Field Day.
Also, of course, we do have our Tuesday
On-Premise IT podcast,
as well as our Wednesday news rundown.
So thank you for tuning in for Utilizing Edge,
part of the Utilizing Tech podcast series.
If you enjoyed this discussion,
please give us a subscription.
You'll find us in all your favorite podcast applications
and maybe check out gestaltit.com,
your home for IT coverage from across the enterprise.
But if you are interested in show notes,
and we've been doing a lot more expansive show notes
for this season,
head over to utilizingtech.com.
You'll also find us on Twitter
and Mastodon at Utilizing Tech.
Thanks for listening, and we will see you next week.