Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 06x08: Operationalizing AI with VAST Data
Episode Date: April 8, 2024

We are at a turning point, as AI has matured from theoretical experimentation to practical application. In this episode of Utilizing Tech, Neeloy Bhattacharyya joins Allyson Klein and Stephen Foskett to discuss how VAST Data's customers and partners are making practical use of AI. Data is the key to successful AI-powered applications, and VAST Data supports both unstructured and structured data sets. Neeloy emphasizes the interactive nature of AI application development and the flexibility required to support this. He also discusses the need for structured data to support LLMs and the challenges of keeping these up to date and synchronized. One of the biggest issues in deploying AI applications is the complexity inherent in these systems. That's why it's heartening to see companies working together to create integrations and standardized platforms to make AI easier to deploy. Collaboration is the key to making AI practical.

Hosts: Stephen Foskett, Organizer of Tech Field Day: https://www.linkedin.com/in/sfoskett/ Allyson Klein: https://www.linkedin.com/in/allysonklein/

Guest: Neeloy Bhattacharyya, Director of AI/HPC Solutions Engineering at VAST Data: https://www.linkedin.com/in/neeloybhattacharyya/

Follow Utilizing Tech Website: https://www.UtilizingTech.com/ X/Twitter: https://www.twitter.com/UtilizingTech

Tech Field Day Website: https://www.TechFieldDay.com LinkedIn: https://www.LinkedIn.com/company/Tech-Field-Day X/Twitter: https://www.Twitter.com/TechFieldDay

Tags: #UtilizingAI, #AI, #Data, @VAST_Data
Transcript
We're at a turning point as AI has matured from theoretical experimentation to practical application.
In this episode of Utilizing Tech, we bring on Neeloy Bhattacharyya from VAST Data
to discuss with myself and Allyson Klein how customers and partners are making practical use of AI.
Welcome to Utilizing Tech, the podcast about emerging technology from Tech Field Day, part
of the Futurum Group.
This season of Utilizing Tech is returning to the topic of artificial intelligence, where
we will explore the practical applications and the impact of AI on technological innovation
in enterprise IT.
I'm your host, Stephen Foskett, organizer of the Tech Field Day event series, and joining me today as co-host is Allyson Klein. Welcome to the show.
Hey, Stephen. How are you?
I am great. I am so glad to have you back. You've been here on Utilizing Tech quite a
few times over the years, and it's fun to have you back for an episode of Utilizing AI.
I know. I can't wait to talk to our guest
today and to explore the practical applications of AI. It's fantastic. Well, that's really what
it's all about, isn't it? And as I said in preparation here, the reason we called this
Utilizing Tech was because utilizing implies making practical use of. It's not just using.
It's not just, you know, oh, here's some technology,
let's figure out how to play with it. It's making it practical. It's making it useful
and making it operational. You know, I think that one of the things that I've been really
enjoying and watching across the industry is that we've been talking about AI for a long time.
Lots of the largest data center operators on the planet have been engaging in training
algorithms for a long time. But what I think we're seeing right now is organizations that are beyond
the Silicon Valley ilk actually taking advantage of this technology and really discovering what it
takes to start implementing and integrating AI into their
IT operations.
And I love this part of the technology curve because you end up with all sorts of interesting
use cases.
Yeah, and that's, I think, one of the things that came to me from AI Field Day, which we
did back in February.
We had a bunch of presenters, obviously some great technology companies, but we also had
some great partners and some great end users.
And one of the highlights of that event was a presentation about how a greenhouse company is using AI to literally pick the freshest strawberries.
And it's like, that's just a cool application, you know.
And one of the companies that presented there as well was Vast Data. So we got a little sneak preview there. And then earlier in March, we got a showcase,
we call it, which is basically a special field day presentation where Vast Data brought in a
bunch of partners to talk about the different ways in which they are making practical use of AI. So for that reason, we decided to, for this episode,
to invite on Neeloy Bhattacharyya from VAST Data to help us learn how partners and
customers of Vast are making practical use of AI, and therefore, to help us understand how the whole
market is making use of this technology. Thank you very much, Stephen and Allyson.
It's awesome to be here today.
As you said, Neeloy Bhattacharyya from VAST Data,
responsible for our AI and HPC solutions engineering function.
So it's really cool, right?
I get to talk to our customers,
understand what they're trying to do,
and then find partners,
try to build an ecosystem for how we can help our customers solve problems faster, more effectively, more efficiently,
all those kinds of things. So yeah, Vast has had a lot of success in the AI and high-performance
computing space. And as you've mentioned, we're just getting
to that precipice of customers really starting to practically use AI as a part of their business.
And we're experiencing a tremendous amount of growth alongside of that. So it's been a fun time.
So Neeloy, when Stephen told me that we were going to be interviewing VAST Data,
I got really excited because I've been following you guys for years.
And I've been following you first in the HPC arena.
And I'm glad that you're on the show because you transcend both of those spaces, which is really natural for those of us who get into the infrastructure that drives AI, especially AI training. But can we just start
with why those two topics come together and what has come from the HPC arena to define
the architectural underpinnings of AI? Yeah, it's actually really cool, right? Because you can draw AI as an overlapping Venn diagram, basically,
between what HPC does and what is needed from an enterprise standpoint, right? So from the HPC
side of the house, we get GPU-based workloads, very mathematical functions, right? Lots of matrix multiplication, things like
that. We also get sort of the characteristics that led to the first generation of parallel
file systems, right? We get things that require a lot of read of a large amount of data,
parallel writes for, you know, checkpointing models along the way. So we get a lot of the technical
characteristics, if you would, from the HPC side of the house, right? That the same things that can
simulate stars can create generative AI models, right? And certainly the work that Google did in
the Transformers paper really revolutionized how we thought about neural networks.
So that's the scientific side of it.
Now, for AI, though, and especially for business uses of AI, we get a plethora of enterprise use cases and enterprise challenges that also need to be addressed, right? In HPC, you run experiments in essentially a sequential manner, right?
You block time on a system and you go run experiments on it for that period of time,
leveraging 100% of the cluster.
You get summers off in HPC, right?
A lot of people go away for the holidays and so they use that to
life cycle pieces of the solution. When it comes to enterprise, the world is very different,
right? You can't practically afford to block off large portions of your GPU farm.
You have things like preemption, right?
Certain business SLAs dictate
that something's more of a priority
than what we were doing before.
You have the need for a whole bunch of challenges
on the data side.
Enterprise data lives in a hundred different places, especially today,
right? It lives at the hyperscaler, it lives in SaaS providers, it lives in warehouses and all
of these places. Compare that to an HPC data set, which is generally speaking, well curated, right?
And then you have all the resiliency and uptime and SLO, SLA considerations, because if you're going to be leveraging as an enterprise, you're going to leverage AI within your workload, within your business.
You've got to meet service levels, right? And you've got to make sure that your data is being protected.
You've got to make sure that you know exactly
what the AI system said, right?
AI is non-deterministic.
And in business, that's not a good thing, right?
That the fact that AI can say one answer one day
and a different answer the next day,
you're going to be held accountable
for both those answers.
So you've got to track all of that.
And so, yeah, we picked up a lot from HPC,
but there's a bunch of additional needs and factors
that enterprise have brought into the equation
when it comes to AI.
I wonder if you can talk a little bit more
about the specifics of the workload for machine learning
specifically around AI that you're seeing supported on your solution
with your customers, with your partners.
What we've been talking about a lot this season
of utilizing is sort of the different facets of AI
because there's the training,
there's the refinement of models,
there's inferencing, there's all sorts of areas.
What are the different workloads and characteristics of those things?
And what do they need to support them?
Yeah, absolutely.
Right.
And look, I'm not at all being self-serving when I say that a lot of AI starts with data.
Right.
It really does. You could look at any number of papers,
of social media posts, anybody who's a leader in the AI space will tell you
the differentiating factor when it comes to AI is not the model or how you were able to optimize
its training time. The differentiation really comes down to the data
you fed it and what it learned from, right? And so the first portion of the AI workload
is that data preparation step, right? Taking data from the business, taking public data,
taking third-party data, and preparing it to use with your AI systems.
Now, whether you're using that for fine-tuning, whether you're using that for retrieval
augmentation when it comes to inference, maybe you're using it for evaluation. You could use
it for a bunch of different cases, but it all starts with data and preparing that data.
And that's another thing that I don't think everybody necessarily fully grasps.
There isn't one formula or one library for preparing your data, especially when it comes
to multimodal data, right?
When you're talking about words and sounds and speech and video, there are millions of ways to prepare that data.
And the only way you kind of figure out the right way to prepare the data is to experiment,
right? Is to use that data, try training models with it, try using it in inference and seeing
the outputs that you get. So it starts with data and preparing data.
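[Editor's note: the open-ended data-preparation step described above — where chunk sizes and strategies are knobs you tune experimentally — can be illustrated with a minimal, generic sketch. This is not VAST-specific code; the chunking strategy and parameters are arbitrary examples of one of the "millions of ways" to prepare text data for fine-tuning or retrieval.]

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split a document into overlapping character chunks -- one of many
    possible preparation strategies. chunk_size and overlap are exactly the
    kind of knobs you tune by experimenting: train or retrieve with the
    output and inspect the results."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        piece = text[start:start + chunk_size]
        if piece:
            chunks.append(piece)
    return chunks

corpus = "enterprise data lives in a hundred different places " * 20
chunks = chunk_text(corpus, chunk_size=200, overlap=50)
print(len(chunks), len(chunks[0]))  # -> 7 200
```

In practice you would iterate: vary the strategy, feed the chunks to training or retrieval, and judge by the model's output rather than by any property of the chunks themselves.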
The second phase is, as you indicated, model training.
And it depends on the use case and depends on the customer,
but it's some combination of foundational models
because you're in a highly regulated region or country
where you've got to have very tight controls
on everything that goes into that model. Or it can be fine tuning because you really don't care
how it learned to speak English. You know what I mean? Like you could start with a baseline that's
learned some basics and then you can fine-tune on top of that. The one other thing I'll say about model
training is that many times when we depict these drawings and these diagrams, and VAST does the
same thing, we depict this as a linear process, right? Prepare the data, train the model, then go
to inference on it. That's also not necessarily the case, right? So part of the experimental nature of AI
is that you can actually fine-tune a model for the worse,
not necessarily the better, right?
So unlike code, you take code, you find a bug, you fix the bug.
Yeah, you could create another bug,
but you're pretty sure that first bug you went to fix,
you actually fixed, right?
When it comes to AI models, it isn't the case, right?
You could fine tune it.
You could change your weights or your hyperparameters.
You could do a bunch of things and you could actually find out that the model that you
created was worse than your original model at doing the tasks that you set out to do.
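[Editor's note: the point that fine-tuning can make a model worse implies gating every new checkpoint behind evaluation on the same task set. A toy sketch of that gate follows — the "models" and accuracy numbers are made-up placeholders, not any vendor's API.]

```python
def pick_better(candidates, evaluate):
    """Score every model variant on the same held-out task set and keep the
    best -- a fine-tuned model is only promoted if it actually beats the
    baseline at the tasks you set out to do."""
    scored = [(evaluate(m), m) for m in candidates]
    best_score, best_model = max(scored, key=lambda pair: pair[0])
    return best_model, best_score

# Toy stand-in: "models" are dicts; "evaluate" is a fixed accuracy lookup.
models = [{"name": "base"}, {"name": "tuned-v1"}, {"name": "tuned-v2"}]
accuracy = {"base": 0.81, "tuned-v1": 0.78, "tuned-v2": 0.84}
best, score = pick_better(models, lambda m: accuracy[m["name"]])
print(best["name"], score)  # -> tuned-v2 0.84; tuned-v1 regressed and is rejected
```

The key design point is that the baseline stays in the candidate pool: if every fine-tune regresses, the original model wins.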
And so then the third part is, yeah, the tasks that you set out to do, which is ultimately
inference of some sort, right?
And people think of inference, especially in this whole generative AI world, as strictly
chatbots or speaking to the model. That is certainly one form of inference,
but especially as customers, you know, really start to adopt AI into their business processes,
inference is very much multimodal, right? It's manipulating a video. It's, you know,
creating speech. It's, you know, automating operations in business with a level of creativity to it.
One of the biggest challenges with traditional process automation is if the parameters of the
world change, if the website changes, if the business process changes, whatever,
you basically have to retrain your process automation. Well, when you combine
process automation with generative AI, now you're creating more intelligent automation entities
that can actually adapt to changing scenarios. So yeah, the inference piece of it is obviously
where customers, your true end customers, your end users, get to interact with your models.
And that's where latency SLAs matter. That's where audit matters. That's where all of those
pieces become most evident. You know, it was really interesting to listen to your team's
presentation at AI Field Day when you walked through some of this, because I think one of the things that became apparent is that the distributed data that many organizations have isn't necessarily
completely organized. They don't always have their acts together or know exactly how they want to apply their data to AI training.
And VAST had a really interesting new platform concept about how to bring that together.
Could you comment a bit on how you are working with organizations to get their arms around their data and whether it's in the
cloud, at the edge, in the data center, how to form a common foundation to go tackle this AI
opportunity? Yeah, it's interesting, right? We used to exist in a world where data was all about transactions
and a system of record to store transactions. And then we moved into this analytics world,
right? We've been there for whatever, the last decade, a little bit more than that,
maybe, right? Where we're aggregating data into data lakes and data warehouses and lake houses and so on and so forth.
A downside of that evolution is we've unfortunately picked up this behavior
that says you have to move your data into a location first
before you can start to extract value from it, right?
So organizations all over the world have spent tremendous amounts of money
moving data into certain locations
where they can analyze it.
And what we've found in talking to our customers
is especially for AI,
where it's going to need to be trained on and interact
with the entire corpus of enterprise data, that's just not practical, right?
And so one of the things that VAST has created is this concept that we call VAST Data Space,
right?
And what VAST Data Space enables you to do is it enables you to stand up clusters essentially anywhere, right? It could be at a hyperscaler, it can be at a warehouse,
it can be, you know, with a SaaS provider, wherever it is. And all of those
entities can participate in a shared global namespace, right? So now, wherever your GPUs live,
or even CPUs, right, whether you're doing serving or inferencing or training or whatever, whichever
aspect you're doing, those GPUs and CPUs now have knowledge of all of this data, right? And so that the data scientists can start to experiment with that data, you know, prepare
it, train small models with it, try using it for RAG or whatever purpose they want to
use without having to pre-move any data.
And then as, you know, as your jobs grow, what the platform is able to do is shuffle
that data around in the background. One of the things that we announced, you know, recently
is this partnership with Run AI. Run AI does a lot of work around scheduling of GPU resources, CPU resources, and memory. Basically,
you feed it your jobs, and based on business priority and resource availability, et cetera,
it can trigger jobs. Well, by folding in and integrating with the VAST Data Platform,
we're now able to factor data into that equation, right?
So now not only are you able to run your workload anywhere
and access whatever data you need to access,
but for those scenarios where data needs to be moved,
Run AI is able to help us prefetch that data
so that by the time the job starts running,
the GPU and CPU utilization can be maximized.
So yes, we have the global namespace.
And then, like I said, on top of that, we're building a bunch of integrations and things like that to make that data space more usable.
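[Editor's note: the data-aware scheduling idea described above — run where the data already is, otherwise prefetch it before the job starts so GPUs aren't idle — can be caricatured in a few lines. This is a toy model of the concept, not the actual Run AI or VAST product behavior.]

```python
def schedule(job, clusters):
    """Prefer a cluster that already holds the job's dataset; otherwise pick
    one with enough free GPUs and mark the dataset for background prefetch,
    so data is in place by the time the job starts running."""
    local = [c for c in clusters
             if job["dataset"] in c["datasets"] and c["free_gpus"] >= job["gpus"]]
    if local:
        return local[0]["name"], False   # data already local; no prefetch
    capable = [c for c in clusters if c["free_gpus"] >= job["gpus"]]
    if not capable:
        raise RuntimeError("no cluster can run this job")
    target = capable[0]
    target["datasets"].add(job["dataset"])  # simulate background prefetch
    return target["name"], True

clusters = [
    {"name": "cloud", "free_gpus": 8, "datasets": {"logs-2024"}},
    {"name": "onprem", "free_gpus": 16, "datasets": {"images"}},
]
placement = schedule({"dataset": "images", "gpus": 12}, clusters)
print(placement)  # -> ('onprem', False)
```

The real systems weigh business priority and SLAs as well; the sketch only captures the locality-versus-prefetch trade-off.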
One of the other aspects of the VAST data platform is the database, which is essentially a conventional database or
even a data warehouse. How is that used in conjunction with AI applications and what's
the relevance to very structured data for AI? Yeah, lots of different ways, right? So
foundationally, the database and our ability to store tabular data form the basis of the VAST Catalog, right? So the VAST Catalog is critical from a security and audit standpoint, right? If you're going to be using all of this data, you have to know who is using it, when they are using it, you know, what code version they manipulated it with?
You know, all of those types of things.
So there's a tremendous amount of tabular data around the way that data on the platform itself is being used that we leverage the vast database to store.
Then when it comes to the various phases of the AI process, it starts with data preparation,
right?
And Spark is a very common tool that is used for data preparation.
The traditional way to feed data into Spark is through very large files, generally in Parquet
format, right?
With some metadata associated, etc. Well, with VAST,
we have an integration with Spark where we're able to not only store that data in a much more
granular, tabular format, but also minimize the interaction or the movement of data between the platform and the compute that is hosting Spark,
right? So one of the things that we announced recently is this integration between Spark,
the RAPIDS framework from NVIDIA, and the VAST database, right? And so now Spark is able to
take a look at a complex query or complex set of
operations, figure out which pieces are computationally intensive, push those over to the
GPU, and figure out which pieces are data intensive, and actually push those down to the
VAST data platform, right? So if you have to scan a billion rows, you're not actually
shuffling a billion rows back and forth. The billion rows are being scanned on the VAST
data platform, and then only the resulting records that are interesting are
being returned to the application layer. Another big area where tabular data is very important
is the inference piece of it, right? The model serving
bit of it. As we mentioned earlier, for many reasons, legal, compliance, audit, updating
training, updating evaluation data, you want to keep a copy of all of the interactions that are
happening with your AI model, right? The prompts, the responses, the source data that was fed in,
who asked for it, when they asked for it,
all these various attributes.
And the easiest way to store that is in the VAST database.
And especially when it comes to inference,
because model serving is, generally speaking,
going to be done in a pretty distributed manner, right? It's going to be done over a lot of
different sites. Now, in a traditional approach, if you're having to keep, you know, JSON files
or CSV files or tables in each one of these locations and continually aggregate them over
and munge them and process them, that's a huge nightmare in and
of itself, right? So with the Vast database, each one of these instances can write to their local
instance of the Vast data platform. And then just through the nature of our product and the
integration between global namespace and database, we can now query that data in a unified fashion without, once again, having to reshuffle that data around.
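[Editor's note: the audit pattern described — each serving site logging prompts and responses locally, then querying them as one table — can be sketched with SQLite standing in for the VAST database. Purely illustrative: the table layout and column names are invented, and the real platform exposes this through its own database, not SQLite.]

```python
import sqlite3

# One in-memory table standing in for the unified view; each "site" writes
# its own interactions, and a single query reads across all of them.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE inference_audit (
        site TEXT, ts TEXT, username TEXT, prompt TEXT, response TEXT
    )
""")

def log_interaction(site, ts, username, prompt, response):
    """Record every prompt/response pair. Because model output is
    non-deterministic, this log is the only authoritative record of what
    the system actually said to whom, and when."""
    conn.execute("INSERT INTO inference_audit VALUES (?,?,?,?,?)",
                 (site, ts, username, prompt, response))

log_interaction("edge-eu", "2024-04-08T10:00Z", "alice", "status?", "all clear")
log_interaction("edge-us", "2024-04-08T10:01Z", "bob", "status?", "degraded")

# Unified query across sites -- no manual aggregation of per-site files.
rows = conn.execute(
    "SELECT site, response FROM inference_audit ORDER BY ts").fetchall()
print(rows)  # -> [('edge-eu', 'all clear'), ('edge-us', 'degraded')]
```

The contrast with the "JSON or CSV files per site" approach is that the aggregation step disappears: writes stay local, reads are global.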
Now, you guys have been on a tear.
And if we are going to evaluate companies by the company that you keep, you've made an incredible series of announcements with industry leaders.
You've also defined almost a new vision for how we should be thinking about platforms of the future.
Can you give me a sense of where you think VAST is going?
And, you know, from AI Field Day to GTC,
talk a little bit about the other announcements that the company has made
that really form the foundation for that future that you're headed towards.
Yeah. So, you know, I actually joined the company pretty recently, right? I joined it towards the
end of last year. And one of, you know, so you join a new company, you talk to everybody,
hey, what's going on? How's it going? What are our challenges? All of this. And one of the things that I learned is that one of our biggest challenges is the complexity
of assembling all of these systems together, right?
Starting with the physical layer, right?
The compute, the network, the storage, all of those pieces,
but then compounded by all the various software pieces that have been layered on top of it,
and so on and so forth. So between Run AI and our continued work with NVIDIA, we do a ton of work with NVIDIA, but what we just recently announced is
the BlueField-3 integration, as well as the Spark RAPIDS work that we talked about.
And then Supermicro, right, who's also been on a similar tear as one of the leading providers of
GPU-based servers, not only for training,
but for inference, for data prep, and so on and so forth. So our vision is simplicity,
essentially, right? As customers try to incorporate AI into their day-to-day operations,
into every aspect of their business, how can we make it easier and simpler for them
to do that? We've had the benefit of learning a lot from converged and hyper-converged and
hybrid cloud and big data. And we're very fortunate that we're able to roll all of those
lessons together for the purposes of AI. And then, you know, it's funny, we have all sorts
of conversations about where we're headed. And the reality is, and one of the things I love about
VAST is we are extremely, potentially sometimes to a fault, customer focused, right? So as far
as where we're headed, we're going to continue to listen to our customers and see what more we can do to simplify their world.
Right. So anytime there's complexity, whether it's around, you know, movement of data or access to data or, you know, like we talked about with Run AI, matching up compute resources with the right data resources.
We're going to continue to try and simplify their
world. Later on this year, we're going to be launching Data Engine, right? It's actually
already in beta with customers. And one of the capabilities under Data Engine is going to be a
Kafka endpoint, right? Classic example of simplifying. Why would you want somebody to have to stand up servers to accept streaming data only to write it down to the data platform?
Why not just give them an endpoint that they can stream data to directly? Right.
So that's just one example. But that's essentially what we're doing. Right.
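[Editor's note: the Kafka-endpoint idea — stream records straight to the data platform instead of standing up ingest servers — might look like the following from the producer side. The endpoint address, topic name, and record shape are all hypothetical; the actual Data Engine endpoint details aren't specified in this conversation. A stub producer is used so the sketch runs without a broker.]

```python
import json

def serialize(record: dict) -> bytes:
    """Encode a streaming record as JSON bytes for a Kafka-style endpoint."""
    return json.dumps(record, sort_keys=True).encode("utf-8")

def stream(records, producer, topic="sensor-events"):
    """Send each record to the endpoint. With a Kafka-compatible endpoint on
    the data platform itself, no intermediate ingest servers are needed."""
    for r in records:
        producer.send(topic, serialize(r))

# Against a real broker this would use the common kafka-python client:
#   from kafka import KafkaProducer
#   producer = KafkaProducer(bootstrap_servers="data-platform:9092")  # hypothetical address
# Here, a stub producer just collects what would have been sent.
class StubProducer:
    def __init__(self):
        self.sent = []
    def send(self, topic, value):
        self.sent.append((topic, value))

p = StubProducer()
stream([{"sensor": 1, "temp": 21.5}, {"sensor": 2, "temp": 19.0}], p)
print(len(p.sent), p.sent[0][0])  # -> 2 sensor-events
```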
We're going to continue to work with our customers and we're going to try and simplify their worlds more and more. Because at the end of the day, if we can simplify their world, it drives adoption of
technology. It means that they can deliver more benefit to their end customers. And that's where
the real win comes from at the end of the day, right? If you can make it easier for somebody to grow food or provide healthcare,
build, you know, build cars and build things that we need to make our lives work,
that's really the goal overall of any of this, right? Not just technology for technology's sake.
Well, that gets back to the thesis statement at the beginning, right? I mean, it's not about
just using AI. It's not
about, you know, how can we make this thing run? How can we do cool stuff with it? It's ultimately
about how do we do something useful with it? How do we, like you're saying, how do we grow better
food or build better products or whatever? Now, I have to say, I'm very interested as somebody with
a background in storage and also somebody like Allison, who's watched Vast from literally the very beginning.
I mean, I have to say that one of Vast's first presentations, in fact, maybe the first presentation was at Field Day, Tech Field Day, back in February of 2019, I think it was.
And so it is very cool to see the company grow.
It's also cool, though, to see a company with a background in storage that realizes that storage is not the end of the game.
In fact, storage is not even the game.
Storage is a necessary function.
And then you build things on top of it.
So I'm very eager to see where you guys go with, for example, the data engine, because it occurs to me that that's really where the storage industry has to go,
you know, irrespective of where AI needs to go. So, you know, sort of editorially, I find that
interesting. But overall, I think that it's been interesting as well to see how the product has
evolved to support the specific needs of these applications.
Do you think, I guess, kind of stepping back from it,
do you think that the inferencing market is going to be bigger than the training market?
And the training market, admittedly, is huge.
Yeah, no, absolutely, right?
So in very layman's terms, until you do inferencing,
there's no real user interacting with the model,
right? All you've done is create a model at that point, which is a feat in and of itself, right?
These are multi-billion parameter neural networks, which are far from trivial to do, but it's not
done any real work yet. In fact, the opposite. You've just invested a whole lot of time and energy and cooling and all of this, right, to build this mathematical model. So yeah, absolutely,
right. Inferencing and really what's very cool about the inferencing market, and like I said,
we've learned a lot, fortunately, from hybrid cloud and all of these other predecessor technologies along the way,
is inferencing is going to happen at all different points, right?
Inferencing is certainly going to happen in the data center.
Inferencing is going to happen in your device, right?
We know there's quite a few vendors out there that are very bullish on inferencing at the device.
And we've seen some cool examples of that working. But then there's gonna be a whole lot of inferencing
that is happening at the near edge, if you would, right?
At the points in between the centralized data centers
and the device, points where data can be aggregated,
but you can also get more computationally intensive
than you could, say, at a device itself, right? So yeah, unequivocally, the inferencing market
is going to be tremendous. I can't even guess, but many orders of magnitude bigger than the
training market. One of the things that VAST in particular is focused on is building partnerships at the
inferencing level. We actually had a model serving showcase at GTC where you got to see some of the
partners that we're working with from an inferencing or model serving standpoint.
So yeah, stay tuned to continued announcements and continued partnerships
in that area as we are very, very focused on helping enterprise users really extract value
and get value from AI. Well, obviously a tremendous amount of development from vast data.
One question that I've got is, where do you see enterprises
in terms of their own adoption curves? And as we head further in 2024,
what are you expecting across various vertical markets in terms of adoption? Do we end the year with AI as a vibrant and fully integrated entity inside IT organizations as a whole?
Are we seeing a multi-year process by which different companies adopt in different parts
of their businesses?
Yeah, no, absolutely.
Look, it's going to be a multi-year journey for sure, right? And there's just a lot of intricate business processes
that have to be updated and evolve to embrace AI.
And then there's also all of the regulatory and governance challenges
that come along the way.
What we've seen so far in the market
and Deloitte actually did a survey at the end of last year that validated
this: most of the adoption of AI has been on the efficiency and cost-savings side of the equation,
right? How can I do my job a little bit faster? Can ChatGPT create a graph instead of me creating
it? If I'm a creative and I get writer's block, can ChatGPT help me unlock? Those kind of use
cases where the availability of AI at the end of the day is optional, right? Because you could
still do things the old way. You still could write it yourself. You still could create that table
yourself or do that function.
As people start to get comfortable with that, we're going to start to see AI incorporated into the actual business processes and offerings that are customer facing.
Right. So you're going to see that, you know, initially we see that in customer service type functions, right, where you're talking with the agent and the agent may be a hybrid type agent.
But you're going to see that folded into innovation. We've seen that in life sciences, right, where companies have been trying to leverage AI to do things like drug discovery and so on and so forth. So we're going to see more of that incorporation of AI
to improve the offerings companies have,
not just their operations, right?
And through that, that's where the SLAs matter.
That's where the criticality of your data matters.
And so, yeah, no, we're in a multi-year journey.
I'm sure there's going to be bumps along the road. There's going to be points where people get frustrated. But I think compared to any other technology trend, what we're seeing in the market is a large amount of adoption, and also people working with each other to say, yeah, OK, this may be challenging, but there are other ways you can do this. So a lot of collaboration in the industry as well,
which is great. And I think that's going to help sort of minimize that,
you know, that slowdown that you typically get with any sort of technology adoption.
Thank you so much, Neeloy, for joining us today and for emphasizing the importance of collaboration
among all of the
various companies who are duking it out in this space. I think for me, that's the takeaway message
from all of these presentations at AI Field Day, our discussions on utilizing AI, and of course,
what we saw at GTC. As we wrap up this episode, though, where can people connect with you and
continue this conversation on artificial intelligence and other topics? Yeah, I know. Thank you so much for the time, guys.
So vastdata.com is the easiest place to learn more about VAST. And I'd certainly encourage
everybody to follow both VAST Data and myself on LinkedIn. I tend to push out some nuggets
whenever I get them. So I look forward to connecting with any of the audience.
How about you, Allison?
What's up with you?
Stephen, thank you so much for having me on as your co-host today.
Everyone can find me at thetecharena.net
and of course as Allyson Klein on LinkedIn.
Please reach out.
You'll see me writing about the tech space from data center to edge.
All right.
And as for me, of course,
you'll find me here on Utilizing Tech every week.
You'll also find me on the new Tech Field Day podcast.
That's the new name for our Tuesday podcast.
And of course, at Tech Field Day events,
we've got a lot more coming up soon.
Thank you very much for listening to Utilizing AI,
part of the Utilizing Tech podcast series.
You can find this podcast
in your favorite podcast application and on YouTube.
If you enjoyed this discussion,
please do leave us a rating and a review
since that's a great way to boost the podcast
and please share it with your friends.
This podcast is brought to you by Tech Field Day, home of IT experts from across the enterprise, now part of the Futurum Group. For show notes and more episodes, head over to our
dedicated website, which is utilizingtech.com, or find us on Twitter or Mastodon at Utilizing Tech.
Thanks for listening, and we will see you next week.