Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 07x08: Deploying AI Data Infrastructure in the Datacenter with Ariel Pisetzky of Taboola
Episode Date: July 22, 2024. As practical applications of AI are rolled out, they are increasingly being deployed on-premises at scale. We are wrapping up this season of Utilizing Tech with Solidigm focused on AI Data Infrastructure by discussing practical deployment considerations with Ariel Pisetzky, VP of Information Technology and Cyber at Taboola, in a discussion with Jeniece Wnorowski and Stephen Foskett. Companies like Taboola are built on data and have been deploying AI-driven applications for years. Generative AI brings new capabilities but is part of a spectrum of solutions that leverage data to produce results for customers. As applications mature, many companies are looking to bring them back on-premises, and this trend will likely accelerate given the cost of AI infrastructure as-a-service offerings. Owned infrastructure can also deliver beyond expected lifespans, representing a potential windfall for businesses that can continue to use depreciated hardware. This is especially true of large flash drives, which have proven much more reliable than initially predicted. Although it is tempting to buy the biggest, fastest infrastructure to extend the lifespan of equipment, Pisetzky recommends focusing on equipment that is flexible and can be repurposed in other ways in the future. Server storage is unique in that it is easy to upgrade and replace in place, even hot-swapping drives, and large drives have a very long lifespan.
Hosts: Stephen Foskett, Organizer of Tech Field Day: https://www.linkedin.com/in/sfoskett/ Jeniece Wnorowski, Datacenter Product Marketing Manager at Solidigm: https://www.linkedin.com/in/jeniecewnorowski/ Guest: Ariel Pisetzky, VP of Information Technology and Cyber, Taboola: https://www.linkedin.com/in/ariel-pisetzky/ Follow Utilizing Tech Website: https://www.UtilizingTech.com/ X/Twitter: https://www.twitter.com/UtilizingTech Tech Field Day Website: https://www.TechFieldDay.com LinkedIn: https://www.LinkedIn.com/company/Tech-Field-Day X/Twitter: https://www.Twitter.com/TechFieldDay Tags: #UtilizingTech, #Sponsored, #AIDataInfrastructure, #AI, @SFoskett, @TechFieldDay, @UtilizingTech, @Solidigm,
Transcript
Discussion (0)
As practical applications of AI are rolled out, they're increasingly being deployed on-premises at scale.
We're wrapping up this season of Utilizing Tech with Solidigm, focused on AI data infrastructure,
by discussing practical deployment considerations with Ariel Pisetzky of Taboola.
Listen in and get some practical tips on how to deploy AI data infrastructure on-prem. Welcome to Utilizing Tech, the podcast about emerging
technology from Tech Field Day, part of the Futurum Group. This season is presented by
Solidigm and focuses on the question of AI data infrastructure. I'm your host, Stephen Foskett,
organizer of the Tech Field Day event series. And joining me from Solidigm today is my co-host,
Jeniece Wnorowski. Welcome to the show, Jeniece. Hi, Stephen. Thanks for having me back. You know, this is the end. This is the end
of this season of Utilizing Tech. And I thought that it would be sensible, and I think you thought
the same thing, to end with a practical application, end with somebody who is really building AI data
infrastructure out there in the real world.
Yeah, I couldn't agree more. We've talked to lots of different ISVs, hardware vendors and the like,
but it's really exciting to talk to an organization that hasn't just been looking at AI over the past
couple of years, but really been looking at it for the past 10 years. And how are they doing things differently now? I'm excited to talk about or understand how they're looking at on-prem versus off-prem cloud solutions.
And yeah, we have an actual end customer, so excited about this.
Cool. Well, let's waste no time and bring them in. So Ariel Pisetzky, VP of Information Technology and Cyber at Taboola
is our guest today. Ariel, welcome to the show. Tell us a little bit about yourself.
Thank you very much for hosting me. Well, I've been in IT for many years now. I think my bragging
rights go back to AS1680, which is one of those first 2000 ISPs on the World Wide Web,
as we called it back then. We even said security. We didn't say cyber when I started out. And now,
really many years later, with physical data centers, cloud, so many things that one could
not have imagined back in the day, and having a lot of
fun providing services to many, many readers out there. Yeah. So Ariel, let's just dive right in
on this. So you're obviously a content delivery network provider, and we want to talk about AI.
Can you just give us a little bit of insight about how you are utilizing AI today in your workloads and applications?
Sure thing.
So I'll give a few words about Taboola because Taboola is not a brand name.
We're not a consumer-facing product, so many people might use our products on a daily basis but not be aware of them. We are a content discovery platform, and that means that we reside on many of the publishers
that you read on a daily basis and love and receive content from. And we are that place where
advertisers go to turn users into paying customers. So we actually provide that matching service
between advertisers and publishers and content. Now, bringing all that together, we eventually serve over 4 billion web pages a day.
And that means that we recommend over 40 billion different articles and kind of, I'd say,
pieces of content for you to consume at any given moment. Now, we do this with high levels
of personalization. What does that mean?
That means that when you are on any given website and you are browsing, reading, you are immersed in the content, then we will provide those content recommendations for
the next thing that you can read, the next article, the next content that you may like.
Bringing that content to you without actually
knowing who you are, without you logging into our service, without you providing us any specifics
about yourself, not your name, not your age, nothing about your gender, that is the kind of
essence of using AI, specifically training and inferencing in large scale and in real time. Over the
past year with the emergence of generative AI, even a bit more than a
year, we have been also taking advantage of LLMs and different generative AI
technologies to provide additional tools for editors and for advertisers to
curate the, I'd say, article name, some of the article itself,
and imagery that you might get on any given day within your beloved websites.
It is interesting that people, when they think of AI, they think of Gen AI. They think of
everything that's happened in the last 12 months, basically. But that's not the whole picture of AI.
In fact, companies have been deploying systems based on artificial intelligence concepts for a long, long time.
I'm a particular proponent, for example, of expert systems, which have existed literally since I was born.
And the whole world of leveraging data to build productive applications, I mean, that is the story of the 21st century, really.
Everything that you've talked about in terms of what Taboola does, in terms of how it impacts its customers, how those of us out there in the real world interact with it, is basically the story of IT in modern times. And I'm glad to hear you
say and recognize and claim, you know, yeah, this is an AI application as well.
And generative AI is a new direction, but it's not the only direction. And yet, it is an interesting
way to go. So I guess talk to us a little bit more about the ways that data feeds
your businesses, because I think that your business is all about data. That's so true. So
data is actually the basis for training. What you get when you see 4 billion web pages a day
is these massive amounts of data that are written somewhere, in our case, to physical servers and physical drives,
Solidigm drives that we utilize on-prem. And we'll talk a bit more about that and the additional tech
at the low level in a moment. Keeping at the AI level, that means that this data, if you don't do
anything with it, if you don't try and, I'd say, infer any knowledge from it or create knowledge from it. It's just inert data.
It's just inert bytes or bits even that just sit there with no value. So the place that IT comes
to create value for the business is taking data, even if it's vast amounts of data, and turning it
into knowledge and turning it into business-relevant knowledge.
So creating that understanding of, oh, wait, we've been providing recommendations for 15 years now.
Somewhere seven, eight years ago, we said, wait, there is a better way to provide content recommendation
by trying to understand what is actually interesting for people. And we
say, we're sitting on all this data. How do we train? How do we look? How do we put this data
into different boxes? So that's like the beginning of the training. What are the signals that we are
getting from this data and how can it impact and affect our kind of service moving forward?
Then of course, there is natural language understanding
and natural language processing,
which is a big part of AI.
Today with LLMs, it's a totally new field,
but even seven years ago and moving from then
and until last year when LLMs kind of came out to our lives,
that was something, that was a hard problem to solve.
You needed to understand, as a service provider for us, as a service provider for publishers: how do we recognize the article that we reside on? How do we recognize the user coming into that specific article? And what is the relevant article where that user's browsing arc is going to end? And that has been done with AI on different levels,
and today more and more with LLMs. All of that is happening on-prem, because storing those vast
amounts of data and actually processing them and creating value out of them is something that when
you own the data, and data has a certain level of gravity to it, when you own the data and you can process it, then you have a lot of advantages. You don't pay per operation. You don't pay for deletes.
I mean, you pay maybe in performance.
There is a payment there, but it's not hard cash.
Once you bought your drive, that is it. And then the use of that drive over time is a bonus.
Thinking about accounting, sorry to bring the pencil pushers into the room, but if you use the drive for over three
years, oh, it's not even on the books anymore and it's free. So free compute, who can't love that?
Who can't love that? And you talked a little bit earlier, Ariel, when we first started discussing
this on-prem versus off-prem, and there's a lot of organizations out there who are having the same kind of debate.
So can you tell us a little bit more about
how you are, you know, working with AI on-prem
and, you know, how that might be different
from what others might be doing in this space?
Yes, yes, that's a, well, the story there is actually fun.
I came to Taboola years ago to cloudify our operations.
And somewhere in that cloudification story, we started noticing costs and how we as IT impact those costs and how we as IT can do better for the business.
And what does on-prem actually mean?
And when you optimize and look at the use of data, at the use of CPUs,
at the merging of those two, and over the past year or even a few years, the merging of CPUs,
GPUs, and data, you suddenly find that working on prem has its extreme advantages.
So when I talk about optimization, I'm talking about multiple layers
of optimization. First, understanding that when you're looking at a data center, you're looking
at compute, you're looking at data or storage, and you're looking at networking. Data itself can be,
of course, data at rest or can be data within a database service. Now, in any one of those levels, you have multiple layers of
optimization using CPUs in a, I'd say, higher capacity, making sure that all of your GPUs and
CPUs are fully utilized, understanding what layers of optimization exist there. One great story is that we looked at our CPUs
and we found that we can better utilize our CPUs
by updating our code in various ways
from different libraries coming from Intel
or coming from NVIDIA,
or that for GPUs, of course,
or coming in from other vendors
and then really better utilizing our CPUs.
Then when we better utilize our CPUs,
we of course have the drives themselves.
The drives themselves today,
the NVMe SSDs,
the NVMe interface and the SSDs
provide really so much performance.
And when you understand the geometry of the drive and start to think,
I'm not only buying space, but I'm buying specific types of space, space with different
drive geometries that fit in different places. If you are able to optimize that as well for your
read size, you suddenly get this boost of performance where you can do so much more with your on-prem hardware,
your on-prem investment in CapEx.
So we really love to see how we, year over year, optimize our use cases for storage,
for CPUs, and for network, and bring them together to a place where our developers now
have so much raw power at their fingertips
that they just do not want to go to the cloud for many of the day-to-day operations.
At the age of GPUs, where you need to push a whole lot more data into the GPUs, controlling
your storage layer and controlling your network layer also provides you with multiple opportunities for
optimization. One great example for anyone out there listening, think of data compression.
Data compression costs you with CPUs and saves you on the network. Let me just clarify there: if you compress,
the CPUs will work harder and the network will work lighter, and you'll have more storage space on
your drives. The flip side there, if your CPUs are the most expensive component in your data center,
why send CPU cycles to something that isn't business oriented? Why compress?
Get the right type of storage that you need. Get enough bandwidth. The bandwidth in the data center
is literally free. When you have that level of free usage of bandwidth, you can really,
really utilize your drives and push a whole lot more data into GPUs.
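The tradeoff Ariel describes can be put into rough numbers. Here is a minimal back-of-the-envelope sketch; the compression speed, compression ratio, and link bandwidth figures are wholly hypothetical assumptions, not measurements from Taboola's data centers:

```python
# Back-of-the-envelope model of the compress-vs-don't tradeoff.
# Every number here is an illustrative assumption, not a measurement.

def transfer_seconds(size_gb: float, link_gbps: float) -> float:
    """Seconds to move size_gb gigabytes over a link_gbps link."""
    return size_gb * 8 / link_gbps  # bytes -> bits

def job_seconds(size_gb: float, link_gbps: float,
                compress: bool, ratio: float = 3.0,
                compress_gbps: float = 1.0) -> float:
    """Total seconds for an optional CPU compression pass plus the transfer."""
    if not compress:
        return transfer_seconds(size_gb, link_gbps)
    cpu_time = size_gb / compress_gbps                        # CPU works harder...
    wire_time = transfer_seconds(size_gb / ratio, link_gbps)  # ...wire works lighter
    return cpu_time + wire_time

# With a 100 Gb/s top-of-rack link, shipping 100 GB raw takes 8 seconds;
# compressing first saves a little wire time but burns far more CPU time.
raw = job_seconds(100, 100, compress=False)
packed = job_seconds(100, 100, compress=True)
print(f"raw: {raw:.1f}s  compressed: {packed:.1f}s")
```

With cheap bandwidth the raw transfer wins by an order of magnitude, which is exactly the "why compress?" argument; flip the assumed numbers (a slow WAN link, a fast compressor) and compression wins instead.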
It really is an interesting use case situation because I think that it pushes back on a lot
of the story that people have bought into over the years about cloud.
You know, I think that there is certainly a valuable use case for cloud.
I think that everybody would agree to that. But I am hearing more and more
companies that are looking at deploying cloud technologies on-prem and deploying now in the
future AI technologies on-prem. Once applications are rolled out, once they're fleshed out, once
they're, you know, sort of have an understanding of the level of hardware that they're going to need to support those applications.
Because as you say, you can buy equipment, you can deploy equipment internally.
And I think there's another aspect that you kind of mentioned there too.
Once things are depreciated, you can continue to use those past their expected lifespan.
Maybe they're not in production.
Maybe they are.
Are you seeing that there's a much longer lifespan
with modern hardware and specifically modern storage
than you expected?
So drives have become so much more reliable over the years.
And again, when you work with a good vendor,
and Solidigm is a great example of that. When you work with a good vendor, and Solidigm is a great example of
that, when you work with a good vendor that provides you drives that do not fail beyond the
expected MTBF, you're getting a good bargain for drives that maintain their value beyond their
three-year depreciation. So you get this really, I'd say, great deal out of using hardware
on-prem. The cloud, of course, is relevant. We in IT have not done a good service to many of our
kind of business counterparts over the years. We have provided services too slow. It took too long
to install things. We did not embrace automation on time. The hyperscalers have taught us so much
about how we, as the regular Joes of IT, should be providing services within our data centers.
So the cloud is great for DR. The cloud is great for testing. The cloud is great for production
loads that vary and have peaks that are maybe short-lived. But for businesses that are always on, they should probably be always off the cloud.
So if you have something, a workload that is always on, it should be always on-prem.
That is where I would take it. So Ariel, given that notion and that thought,
is there any benefit to efficiency by being always on-prem and is Taboola doing
anything specific to address some of the energy issues that are coming about with AI?
Oh, yes. So when you're on-prem, of course, efficiency suddenly becomes, well, your problem.
When you're in the cloud, then you get the great carbon footprint
of the clouds that are in renewables. You get wonderful e-waste management by the clouds and
so on and so forth. So when you are on-prem, you need to kind of control your own destiny there.
Anything and everything on the e-waste side and anything and everything on the kind of energy side, that is clear.
But then when you look at AI specifically, which is a huge energy hog because these GPUs and these CPUs are running way hotter than they would normally for other activities, you want your storage to be as efficient as possible. So when you have SSDs instead of spinning drives, that's, of course, a no-brainer. But when you look at the larger capacities out there that still provide amazing performance in terms of the level of IOPS that you can get, and their thermal footprint doesn't warm up your data center, and their energy levels peak only when you
are at full write mode and so on, but you have varying levels of energy that you can also manage
through the NVMe interface, should you so choose. All of those provide you with the ability to,
A, optimize, and B, provide a great service, yet still keeping a very, very eco-friendly footprint.
One of the things I've heard from infrastructure managers that are considering buying equipment
for on-prem is that it makes sense to buy the biggest, baddest hardware because then it'll
have the longest lifespan. In other words, you buy the bigger drives, not the smaller ones. You buy
the faster CPUs, the faster GPUs. Even though they cost more today, they'll have a much longer lifespan and
you'll be able to get much more out of it. Do you agree with that approach? No. So a short answer,
but here I'll give you the longer version. Thank you. That's a wonderful question. Thinking of
what you need is really what the business needs. Are there big, bad, amazing servers that we buy?
Yes.
Do we always buy the biggest, baddest servers out there?
Absolutely not.
We try to balance a lot of our servers into something that is a, I'd like to call them
Lego block mode, where we buy big servers, but not very big servers.
And then these servers have a life cycle. So they can be a server that is driveless at the
beginning, and it's CPU intensive, maybe it even has GPUs in it. Two years later, I have now a
better CPU or a better GPU. And in terms of energy and optimization, this server is not a good
AI server for me anymore. But lo and behold, it has drive bays. Now I can use it as a storage server,
and I can get that CPU that isn't the latest and greatest to do great things around managing all the data that is in that server.
So Lego blocks really, which is also why I'd say so many of us love Lego,
their versatility, the fact that their lifespan is so long
and that you can use them in different models and in different scenarios
and build with them different things.
That is what is so enticing for us to buy servers that are
somewhere in the middle, not too big, not too small. And then for specific use cases, go wild.
Ariel, one of the things I was going to ask was being on-prem and having to deploy across,
multiple environments. It's pretty taxing to have some of your IT guys go
out and have to rip and replace drives, right? And, you know, on the notion of buying, you know,
not just the biggest drives or the fastest or even, you know, that same case with your servers,
how do you feel solid state technology helps to improve how you do business, you know, at the edge or, you know, within your
on-prem infrastructure? Is there a benefit of solid state storage versus hard disk drives,
in your opinion? Okay, of course, yes. So having solid state drives at the edge is super important.
I'll talk about that in a moment. And then, of course, just to talk a moment about Taboola.
We have our front-end edge data centers
and we have a back-end data center
where we run all the training
in the back-end.
Yet the inferencing that I spoke of,
the hyper-personalization of our service
happens at the edge.
It cannot happen without data.
It happens on data, on SSDs at the edge. So the closer you
are to the edge, the faster you will be able to provide service, and the faster you need the
drives, the servers, and the service to be. Putting all that in perspective in terms of
operations and IT operations. So I'll go back to the Lego blocks I spoke of earlier.
The idea, if you don't go biggest,
baddest, but you go good and exactly what you need, and good being the Solidigm seven-terabyte
drives, as an example, where you're not going all the way to 60 terabytes or 30 terabytes,
you're going at a nice moderate size, and then you spread them in the servers, but maybe you don't use them because they're Lego blocks, and this server is now a CPU server.
But once it's installed, because the installation cost might be your prohibitive cost, you can then down the road change roles for that server, and the drive is already in there.
So that is just one example.
We also, of course, try to optimize by holding drives in our kind of front-end operation center,
and then we ship them out when we need them and have remote hands install them. So there are multiple options to be had here.
If you have the automation stack to manage it,
and that is a big part,
we wrote our own automation stack
in terms of data center automation,
not in terms of compute automation.
We use Kubernetes for that.
So we have the agility to provide storage
where we need it, when we need it.
And again, the beautiful thing with Solidigm
is the connection to the OS and the tooling that is provided.
And we can manage the drives remotely through the OS,
providing us with all the serial numbers
and asset management information that we need
to do this in a responsible way.
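The asset-management workflow Ariel mentions, pulling serials and capacities through the OS, might look roughly like this. The JSON below mimics the shape of what nvme-cli's `nvme list -o json` can emit, but the field names, device paths, serials, and sizes here are illustrative assumptions, not captured output:

```python
import json

# Hypothetical sample in the shape of `nvme list -o json` (nvme-cli);
# all device paths, serials, and sizes are made up for illustration.
SAMPLE = """{"Devices": [
  {"DevicePath": "/dev/nvme0n1", "SerialNumber": "SN-0001",
   "ModelNumber": "Example 7.68TB NVMe SSD", "PhysicalSize": 7680000000000},
  {"DevicePath": "/dev/nvme1n1", "SerialNumber": "SN-0002",
   "ModelNumber": "Example 7.68TB NVMe SSD", "PhysicalSize": 7680000000000}
]}"""

def inventory(raw_json: str) -> list[dict]:
    """Reduce a controller dump to the fields asset management cares about."""
    devices = json.loads(raw_json)["Devices"]
    return [
        {"path": d["DevicePath"],
         "serial": d["SerialNumber"],
         "model": d["ModelNumber"],
         "capacity_tb": round(d["PhysicalSize"] / 1e12, 2)}
        for d in devices
    ]

for drive in inventory(SAMPLE):
    print(drive["path"], drive["serial"], drive["capacity_tb"], "TB")
```

An automation stack like the one Taboola describes would feed output of this shape into its inventory database, so remote hands only ever need a slot number and a serial.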
It's interesting that you bring that up because, you know, now that you mention it, storage is one of the only data center components that's actually
upgradable and replaceable in the field easily. I mean, certainly you could take the server apart
and swap out the memory, but then what are you going to do with the old memory? You know,
same with the CPUs, same with the network cards. I mean, I guess network cards can be replaced, but storage is by far the easiest thing to replace and upgrade because in many cases
you can even hot swap. I don't know if you would want to do that, but you could probably do that
in a server. And, you know, and as you said, that would let you upgrade things on the fly and match
the capability to the requirement. There's another
aspect too, and that I want to emphasize here on this because, I mean, I'm a storage nerd.
One of the coolest things about storage too is that capacity influences longevity to a great
extent. And that if you have sufficient capacity, especially with flash, if you have sufficient capacity, then that can greatly
extend the lifespan of that drive because of wear leveling. Essentially, the drives are going to,
you know, they can only handle so many writes. But if the drive is big, then so many writes
spreads out across a lot more cells, which means that that drive's lifespan can
be a lot longer. And I think that we've seen that in production. As you mentioned, once drives get
into the terabyte, multi-terabyte range, I mean, certainly I imagine a 60 terabyte drive is going
to have a very long lifespan. But once drives get into the, you know, seven terabyte or something
like that, that's an awful lot of data in order to, quote, wear that drive out.
And most people are never going to hit it quite that hard because, of course, reads don't affect it.
It's only writes.
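Stephen's wear-leveling point lends itself to simple arithmetic. A sketch with hypothetical ratings; the capacity, DWPD, and warranty figures below are assumptions for illustration, not any vendor's specifications:

```python
# How long a drive's rated write endurance lasts at a constant write rate.
# All ratings below are hypothetical examples, not vendor specifications.

def lifetime_years(capacity_tb: float, dwpd: float,
                   warranty_years: float, writes_tb_per_day: float) -> float:
    """Years until the rated write budget (TBW) is consumed."""
    tbw = capacity_tb * dwpd * warranty_years * 365  # total rated terabytes written
    return tbw / writes_tb_per_day / 365

# Writing 2 TB/day into a 7.68 TB, 1-DWPD, 5-year-rated drive:
small = lifetime_years(7.68, 1.0, 5, 2.0)
# The same workload on a 30.72 TB drive with the same rating lasts ~4x longer,
# because wear leveling spreads the same writes across 4x as many cells:
big = lifetime_years(30.72, 1.0, 5, 2.0)
print(f"7.68 TB: {small:.1f} years  30.72 TB: {big:.1f} years")
```

At these assumed numbers even the smaller drive outlives a three-year depreciation schedule several times over, which is the "free compute" windfall discussed earlier.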
Are you seeing this as well, that bigger drives are more reliable?
So amazing, amazing question.
I have multiple things to say.
Non-political, but I'll start by taking a leaf out of "yes, we can." Hot-swappable? Yes, we can. Yes, we do. It is amazing to see when you control the drives and you have good, reliable drives and you're buying drives with the level of endurance that you need. As you mentioned, it doesn't have to be crazy levels of endurance, because you're at the multi-terabyte size,
and even at an endurance of just one drive write per day, not crazy high endurance levels,
you can write a lot of data. And if you have the software and you have the automation in place,
you can monitor the media state and you can
monitor where that drive is. And we saw over the years that we've been using the drives that we are
not wearing them out and we should not be afraid. We can really use drives for a long time, they are sustainable. They are not prone to failures.
And doing good IT also means that you have clustering and you have N+1, I'd say, architectures.
Today, it's almost a no-brainer.
You don't have to pay exorbitant costs for fancy licenses or storage providers.
You can do your own storage, and software-defined storage today has built-in redundancy that is excellent. It's beyond good enough and provides you with solutions that are cheap,
effective, and deliver a ton of performance for the business.
So let's move up the stack a little bit from storage.
So clearly, storage is something that we're all passionate about,
but also something that gives a lot of flexibility.
But talk a little bit more about the rest of the infrastructure
that you're building there on-prem.
What do these servers look like?
What is the GPU, CPU network, and all that?
So when you're looking at GPU servers, just as an example, you spoke earlier about these big servers.
So GPU servers, some of them are these nice, slim 2, 4 GPU servers, which are, I'd say, specialized for data crunching.
We can talk about them in a moment, Spark specifically.
And then you have these huge, humongous six- or eight-U servers that host eight or ten really large NVIDIA GPUs.
And these GPUs are data hungry.
So if you look at the two types of GPU servers,
let's start with the really big ones.
When you're looking at a huge GPU server
hosting multiple GPUs,
it really needs to pull in a lot of data.
To pull in all of that data,
it's going to come with the kind of huge network cards, the 100-gig network cards, and multiple ports of that going into the top-of-rack switch.
And it's going to be pulling a whole lot of data
coming in from your HDFS
or other storage solutions,
CIFS or whatever you're using.
And then you would need some scratch space locally
to be able to cache it locally
because bringing it off the network
will get you
to one level of performance. But if you really want to harness those GPUs to their fullest and
use all of that capital that you spent on GPUs and making sure that those GPUs are fully utilized
100% of the time, then you would like to see a, I'd say, local cache, fully flash, with a lot of IOPS that can serve those NVMe to
NVLink or NVMe to PCIe, depending on what type of NVIDIA system you're using, and really feed those GPUs,
and then bring that data back and feed it out. When we're talking about our data processing GPUs,
we, as Taboola, have spoken about this quite vocally.
You can find it on our engineering blog.
We have switched from using, in many cases,
CPUs to GPUs, the smaller GPUs actually,
let's say the A30 and the A40 family,
to crunch data in Spark format.
So when you're looking at crunching data
at large quantities,
instead of having 10 CPU servers running separately,
having maybe much less scratch space
and doing each one their own thing,
you suddenly have one server
doing the job of 10 modern servers,
pulling in a whole lot of data from data storage,
crunching it and preparing it for some type of report
or some other type of,
I'd say Spark slash SQL processing.
And that we do on GPUs as well,
having only 25 gig connections for those servers, but again,
having NVMe locally so that while the GPUs are active, you can pre-read as much as possible
from the central storage and then provide space that the GPUs can work with locally
to optimize their utilization.
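The pattern Ariel describes, staging the next batch onto local NVMe while the GPUs chew on the current one, can be sketched as a toy producer/consumer pipeline. Everything here is a stand-in: real pipelines use frameworks like Spark or NVIDIA's data loaders, and the bounded queue merely plays the role of finite local scratch space:

```python
import queue
import threading

# Toy prefetch pipeline: a background thread stages batches from "remote
# storage" onto "local NVMe scratch" (a bounded queue) while the consumer
# ("the GPU") works on the current batch. All names are illustrative.

scratch = queue.Queue(maxsize=2)  # bounded: local scratch space is finite

def prefetch(batches):
    """Producer: copy each batch from central storage to local scratch."""
    for name in batches:
        scratch.put(f"local:{name}")  # stand-in for a remote -> NVMe copy
    scratch.put(None)                 # sentinel: nothing left to stage

def consume() -> list:
    """Consumer: pull staged batches and 'feed the GPU' with them."""
    fed = []
    while (batch := scratch.get()) is not None:
        fed.append(batch)             # stand-in for a GPU compute step
    return fed

producer = threading.Thread(target=prefetch, args=(["b0", "b1", "b2"],))
producer.start()
result = consume()
producer.join()
print(result)
```

The bounded queue is the key design choice: it applies back-pressure so the prefetcher never stages more than the scratch space can hold, while still keeping the next batch ready the moment the consumer finishes the current one.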
And that reminds me of one of the discussions we had earlier this season,
talking about keeping these hungry GPUs fed.
That's really the whole ballgame, because that's the most expensive component out there.
And of course, as you mentioned as well,
GPUs can be used in other areas of infrastructure.
We've talked about that this season. You know, it's interesting here, Ariel, everything you've
said, I think really does kind of summarize what we've talked about all season long here on
utilizing tech. And it's been so great to hear a practical real world use case for all the things
we're talking about, especially an on-prem use case,
because I think that increasingly people are going to be looking to that. They're looking
to deploying their own AI data infrastructure on-prem, and the lessons that you've brought here,
I think, are very valuable to them. So given that, thank you for joining us. Where can people
connect with you? Where can they continue this
conversation? Where can they learn more about the lessons that you've learned? You can find me on
LinkedIn. I'm always happy to continue this conversation. And there's the Taboola Engineering
blog where we provide additional information about our infrastructure and what we do. It's
much more down to the technical details. So if you want to know about running,
let's say Volcano on GPUs and MIGs on GPUs,
so virtualizing our GPUs to really optimize
and squeeze more out of them.
Or if you want to read more about how we're doing
optimization around MySQL and SQL servers in general
with storage, with CPUs.
It's all there and we are always happy to share.
Well, thank you so much for that.
And I really appreciate everything you've had to say here today.
Thanks for joining us.
Jeniece, I'm sad to say that we're wrapping up the season with this episode.
Let me get your call to action. Where
can people learn more about Solidigm? Where can they continue the conversation of how flash
storage can be transformational for their IT infrastructure? Thank you, Stephen. It's been
such a pleasure being able to sponsor this series, and we're excited to continue throughout the year
sponsoring other activities that you're going to be doing,
you know, things like AI Field Day and Storage Field Day and many other things.
You do just a great job tapping into our audience.
But to continue this conversation further, anyone can go to solidigm.com, where we have a multitude of stories, solutions, and also details around the myriad of drives
that we have to offer.
So thank you so much for the opportunity.
Excellent.
And actually I'll call out too, Solidigm presented at our AI Field Day, brought in partners,
just a tremendous thing.
Use your favorite search engine, look for Field Day,
look for Solidigm. You'll see great presentations, a lot more information about that as well.
And of course, we've had many other kinds of companies present at Field Day over the years too.
And so you can look for those too. Thank you so much for listening to this episode of Utilizing
Tech. You can find this podcast in your favorite podcast application, as well as on YouTube. If you enjoyed this discussion, please do leave us a rating,
a nice review, maybe send us some feedback.
We'd love to hear from you.
This season was brought to you by Solidigm,
and, of course, by Tech Field Day, part of the Futurum Group.
For show notes and more episodes, head over to our dedicated website,
utilizingtech.com, where you can find the entire season in order.
You can also contact us on X, Twitter, and Mastodon at Utilizing Tech.
Thanks for listening to the season.
And thank you, Ace and Jeniece, for co-hosting as well.
We look forward to our AI Data Infrastructure Field Day event
along with our AI Field Day event coming soon.
Check out techfieldday.com to learn more information about those.
And maybe if you'd like
to participate, reach out to me, Stephen Foskett. I'd love to hear from you. Thanks for listening,
and we will catch you next season with another exciting episode of Utilizing Tech.