Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 08x08: Spreading Standards and Ideas with Dr. J Metz
Episode Date: May 19, 2025. Standards bodies don't just set technical specifications, they also drive greater understanding of topics like storage. This episode of Utilizing Tech, brought to you by Solidigm, features Dr. J Metz, Chair of SNIA and the Ultra Ethernet Consortium and technical director at AMD. Over decades of advancement, enterprise storage has evolved to be a central requirement of modern systems, especially where AI is concerned. And storage is a lot more than just capacity: It's about performance and power, connectivity and throughput, availability and reliability. Standards bodies like SNIA and UEC help raise awareness of these requirements while also setting standards to address them. Technological developments made in one area often have unexpected applications in others. This makes it even more important to foster open communication to enable ideas to spread. Guest: J Metz is the Chair of both the Ultra Ethernet Consortium and SNIA, as well as a Technical Director at AMD. You can connect with J on LinkedIn or on X/Twitter. Learn more about the Ultra Ethernet Consortium on their website. Learn more about SNIA on their website. Learn more about AMD here. Hosts: Stephen Foskett, President of the Tech Field Day Business Unit and Organizer of the Tech Field Day Event Series; Jeniece Wnorowski, Head of Influencer Marketing at Solidigm; Scott Shadley, Leadership Narrative Director and Evangelist at Solidigm. Follow Tech Field Day on LinkedIn, on X/Twitter, on Bluesky, and on Mastodon. Visit the Tech Field Day website for more information on upcoming events. For more episodes of Utilizing Tech, head to the dedicated website and follow the show on X/Twitter, on Bluesky, and on Mastodon.
Transcript
Standards bodies don't just set technical specifications,
they also drive greater understanding of the topics
in enterprise technology like storage and AI.
This episode of Utilizing Tech, brought to you by Solidigm,
features Dr. J Metz, chair of SNIA
and the Ultra Ethernet Consortium
and technical director of AMD,
talking about how we can spread standards
and ideas around the industry.
Welcome to Utilizing Tech,
the podcast about emerging technology from Tech Field Day,
part of the Futurum Group.
This season is presented by Solidigm
and focuses on advanced topics like AI at the edge
and related technologies.
I'm your host, Stephen Foskett,
organizer of the Tech Field Day Event Series,
and joining me today from Solidigm as my co-host
is my old friend, Scott Shadley.
Welcome, Scott.
Hey, Stephen, good to see you again.
I know we recently saw you in person, actually,
at a recent Field Day event.
That was awesome.
Great to be here again.
Yeah, it's good to see you as well.
And as I said, you and I go way back.
One of the areas that we've both focused on for a long,
long time is storage.
And those of you outside the storage industry
may not know it, but there's actually a really nice community
in storage.
A great group of people who love to get together at events.
For example, the Storage Developer Conference
is one of my favorite nerd fests.
FMS, the old storage conferences, and so on.
And one of the coolest things about that
is sort of the cross-pollination that happens
when you learn about what other companies
and other people are working on
and you try to bring that into your world.
Yeah, absolutely. I mean, a perfect example of that is I have chosen my attire appropriately, as I
now sit on the board of directors of one of those groups, SNIA.
And it's been an interesting way to utilize to your point what's going on in the market
around all of us working to drive innovation forward.
But we all know that we can't just do it as a one-off.
We can't be unique or independent in certain ways and aspects of that.
And so being able to tie all that together and bring it in as a conglomerate of people
and companies to present to our customer base makes the existence of all these technologies
more possible.
Yeah, it's been really great having, you know, I think people, when they think about standards
bodies, they think about the technical aspect of standards bodies rather than the sort of
evangelical and spreading the gospel aspect of standards bodies.
SNIA is one of the organizations that is really,
I think, spreading the gospel of storage far and wide.
It's not about, you know, totally looking inward at,
you know, how can we make storage better. It's about how can we bring storage to the
world. And that's why we have brought our old friend, who happens to know a thing
or two about SNIA, Dr. J Metz, to join us here on the podcast. Welcome to the show, J.
Thank you very much.
Very happy to be here, thank you.
So tell us a little bit about yourself.
Well, I am a technical director for advanced storage
and networking strategy for the data center group
inside of AMD.
And I am also, as you said, I'm the chair of SNIA,
have been for about five years now,
and I'm also the chair of the Ultra Ethernet Consortium. So I've got kind of hands and feet in both areas of the networking and the
storage and the storage networking and so on and so forth. So that's what I do. That
and wear hats.
Yeah, I was going to say, of course, he's wearing the traditional J Metz hat attire
that he is known for far and wide. As Stephen mentioned, you go to an SDC event, and if you see J without his hat,
you don't know who he is.
So if I take off the hat, it's a way of being incognito, you know.
I would never ever make fun of Superman, you know, for taking off his glasses,
because I take off the hat and people just don't know I'm there.
It's a fascinating event.
And now speaking of not knowing that you're there, that's one of the things that people
take advantage of, if you will, as far as what this conversation is about, which
is how we participate in these standards and these different consortiums and what it's
really doing for the market and for our customers. And a lot of the time, those folks don't really
see what this behind-the-scenes work does. So to Stephen's point, the evangelical part of it, or the communication piece of it, is something that you and I have spent a
lot of time doing together within SNIA, and now you as part of UEC as well.
Yeah, it was always kind of interesting because, you know, from the outside looking in, you think that
this is just this one big monolithic entity that's kind of solving a particular problem.
But it winds up being much more involved than that, with different, you know, perspectives and different attitudes and different thoughts and ideas and
creative venues.
But it winds up being particularly interesting for those of us who get
involved in this sort of thing, because there's always a new problem that
has to be solved. You know, nothing is ever really finished.
So we get some pretty nuanced approaches to solving problems that get both big and very
small in different areas.
That's a good kind of summary of storage.
One of the things that makes storage interesting is the fact that it is very big and very small
and very fast and very slow and very high powered and very low powered.
I mean, storage is a lot of things.
And you know, all three of us really have been
discussing all of these aspects of this technology for so long.
And yet I think, again, there's not as much understanding of the nuances of storage.
But that's changing now that we're starting to have greater and greater demands.
I mean, AI drives greater use, or greater collection of data.
It requires data to make it work.
AI-based applications are very data-driven applications.
And data collection means storage.
It drives storage capacity, it drives storage performance,
it drives bandwidth, it drives connectivity
and networking and IO.
And these are all things that groups like SNIA
and the Ultra Ethernet Consortium are trying to evolve.
Right, I mean, have you noticed,
and I'm gonna ask you a leading question here, have you
noticed that these things are ever more in demand in the modern world?
What we've been doing in storage is suddenly really important.
Well, I think the short answer is yes, of course.
The longer answer is a little bit more involved because the three of us and those who are storage-oriented
people, we've got a certain intuition when it comes to what we mean when we use the term.
For those that are maybe software-oriented or compute-oriented or hardware-oriented,
they may not see storage the way that we do intuitively. They may see it as capacity inside
of a drive or a hard drive in and of itself. That is storage after all. But the real element that we
think about when we talk about this is that it's not just about how it's stored.
It's about how you get it back. How do you move it? How do you preserve it? How
do you make sure that it is there when you need it, at the
right place where you need it? How do you make sure it is the exact thing that you thought you were going to get?
And so what winds up happening is that what we consider to be storage, other people would call
memory, other people would call networking, other people would call software, other people would call
backups or archiving, other people might actually look at it and say, you know, it's management.
All of these things depend upon how you take the diamond and look at the
different facets, and you can see storage reflected back at you at any given point in time.
But you know, what I think we try to do in SNIA and what we're trying to do in UEC and
some of the other organizations that we work with is that we're trying to say, look, if
you say that you do this, or you need this kind or that kind of bandwidth, or you need this kind of latency, what do you need
it for?
Well, you need it for getting data from one place to another place.
What does that entail?
And the further down that rabbit hole you go, the further there is to go.
It's sort of like saying, what is the length of the English coastline?
As you start to drill down to the beaches and the actual individual grains
of sand, you start to realize that it is an ever-increasing fractal distance
of information that you're going to have to try to consume.
Storage is the same way.
If you want to talk about moving a bit from here to there,
what does that actually mean?
Does that mean you have to go through buses?
Does it mean you have to go through buffers and caches and NAND or, you know, high bandwidth
memory?
What does it actually mean?
You know, what are the channels that you use?
What are the networks that you use?
All of these make a difference, and it makes a difference at a really small level and it
makes a difference at a really big level.
So for instance, you know, I know recently you had a conversation with Gary Grider.
He deals with really big systems, right?
At that level, really enormous systems.
I mean, he's got petabytes of RAM, for example.
And then at the same time, you've got these AI models that are starting to approach the
trillions of parameters.
And the average layperson doesn't understand what that means in terms of having the material
to be able to do this.
If I've got a one trillion parameter large language model, for example, I need around
32 terabytes of RAM just to be able to hold it.
Right?
Well, no processor has 32 terabytes of RAM, which means you've got to be able to have
a lot of processors together working together as one unit.
Well, how do you do that?
The data has to move from one place to the other,
and you create little tricks, right?
You create parallelism and little tricks.
That means that I can have things going at the same time,
but what do you do when you do that?
Well, you change the nature of the data movement.
Where once big pieces of data moved all at once,
it's now big pieces of data moving all at once
and little pieces of data moving all at once
to let everybody know where the big pieces of data are.
So you're moving the needle in a number of different areas at the same time,
just by small little tweaks in what you're trying to accomplish, and storage
has to embrace all of it.
And so that's why what we do inside of SNIA is so interesting because we have to do all of it
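J's back-of-envelope figure can be sketched numerically. This is a rough editorial illustration, assuming roughly 32 bytes of memory per parameter (weights plus optimizer state and working buffers, one plausible way to arrive at his 32 TB figure; the real multiplier depends on precision and training setup), and the `gpus_needed` helper is hypothetical, just to illustrate the "many processors working together as one unit" point:

```python
import math

# Back-of-envelope memory estimate for a large language model.
# Assumes ~32 bytes per parameter (weights plus optimizer state and
# working buffers); the real multiplier varies with precision and setup.
BYTES_PER_PARAM = 32

def model_memory_tb(params: float) -> float:
    """Approximate memory footprint in terabytes."""
    return params * BYTES_PER_PARAM / 1e12

def gpus_needed(params: float, accelerator_memory_tb: float) -> int:
    """Minimum accelerator count to hold the model in memory at once."""
    return math.ceil(model_memory_tb(params) / accelerator_memory_tb)

print(model_memory_tb(1e12))      # 32.0 TB for a one-trillion-parameter model
print(gpus_needed(1e12, 0.192))   # 167 devices at a hypothetical 192 GB each
```

No single processor holds anywhere near that much memory, which is exactly why the data movement tricks described above become necessary.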
Yeah, and if you look at it from that perspective
I find one of the unique pieces of that is, like, look at the history of Solidigm and where the market's come from.
We had the introduction of EDSFF and the small form factor, which is now SFF within SNIA.
Its whole purpose is to help with connectivity, what the box looks like and things like that.
And we've got new systems and solutions that are being utilized by even the fastest AI clusters.
Now they're using a unique form factor designed and developed out of something
like SNIA and led by a bunch of these different companies paying attention to
what you're talking about. I have a very dear friend in the virtualization world,
and I may not have used this example previously, but he still talks about
storage as chips, and we're talking SSDs that sit at 64 terabytes.
And to him, it's still just a chip
because he's not a hardware guy.
And so that's where we, as these groups of people
who really know what he's talking about,
can help evolve that ecosystem
without him really having to change his definition
but just solve this problem.
Well, I don't wanna cause any face palming,
but I think a lot of people still think of storage as disks, or maybe even as tape. And of course, those are
still relevant technologies. And I think that if you ask somebody in the know about storage,
they will absolutely stand up for disk and tape. But they will also say, you check out
what we're doing here, you know.
And as well they should, right? Because the thing is, like I said at the very beginning, we understand and intuit what it actually means.
But when I talk to the people at UEC, for example, or I go across the aisle to OCP, or if I talk to people at the Linux Foundation,
I have to choose my words more carefully, because the shortcut of saying "storage"
doesn't exist with them, right?
They're incredibly intelligent people,
but coming from a different perspective.
So it's really important that we allow ourselves
the luxury of learning how to communicate the proper terms
in a way that's going to be received correctly.
Right.
So, when I want to talk about the data for AI, or if I want to talk about the data for HPC, that gives me the ability to have a level
common ground when I'm talking topologies and networks, or if I'm talking
about bit error rates at the physical level,
or if I'm talking about in-network collectives at the software level.
Because by being able to redefine the question, you have a lingua franca that you can use
to be able to associate what they want with what you can provide and vice versa.
Yeah, and you actually bring up a very interesting comment about the cross-pollination,
right?
So, I sit on the SNIA board alongside yourself, but I also participate in NVMe and a few of the other things like OCP and whatnot. And not only do
you have to choose the moniker, the terms you use, but you also have to be careful about overshare
and undersharing stuff because there's all these nuances. Just like you have NDAs between companies,
you have relationships between consortiums. And until something's officially official,
you actually have some unique aspects of how managing all of this stuff comes into play. And being able to address
that, drive across those barriers, and drive solutions is kind of a unique aspect
of what we get to do from that side of our day job, if you will.
Yeah. And I think it's absolutely critical. I mean, you know, last month we had a joint
session between UEC and SNIA, right? We had, you know, face to face between the two different
organizations. We did technical symposia individually. And then in the evenings,
we had Birds of a Feather where we could kind of combine and work. And then, of
course, we had the regional SDC where the materials should be available online
soon if they're not already.
And the whole purpose here is to make sure
that the work that's being done can be extended
into other areas so that people don't have
to reinvent that wheel.
UEC is one of those places in particular
that is specifically going to hurt in the data area,
if it's not careful.
And this is not necessarily just a SNIA solution either,
but the SNIA group needs to be able to understand
the proper framework under which the work
that they're doing sits.
And so mixing the two initiatives together
is really important.
So, for example,
in AI we have multiple network types.
We've got a front end network, which is your general purpose network.
And that's where most of the storage lives.
Then you have a purpose-built network for AI, right?
And that's what we call a backend network.
It's like the 32 terabytes of RAM that you need in order to get the system to work
for the model. But since you need the data on one network,
but the data exists on the other network,
you wanna make sure that you have the best
and most efficient ways of transferring the data
or in the future, you wanna have the data on the network
that's doing the processing, which right now it isn't.
So that's a fundamental aspect of AI
that most don't even realize.
Right.
They don't realize that the data isn't even where the processors are.
That just doesn't exist.
So you have to go through all kinds of unnatural acts to get the data into the
proper network so that the GPUs and the TPUs and the accelerators can actually
process it.
And then if you're doing a lot of that movement, it makes sense that the
organizations that are focusing on the AI network and the organizations that are doing the storage
talk. And that's the whole reason why we're doing what we're doing.
Yeah, that's a very interesting point. And as I mentioned, we
recently saw Stephen in person, and one of the aspects of the presentation that we brought
forward was working with a partner to highlight how we can leverage storage to actively replace aspects of the memory required
to drive some of those language models.
There's trade-offs.
Do I spend a fortune on the RAM, or do I use my storage and accept a little bit of
those trade-offs and things like that?
Those are things that these organizations can't take the time to think about if they're
busy thinking about that true backend that we're driving with the consortiums and the work that we're doing to get data from here to there, whether it's AI at the edge or AI in the data center or rack to rack. All that kind of stuff plays a very big part of what we kind of drive with all these efforts that we're doing.
I've always been a big fan of one plus one equals three, and I've used that phrase before. But if you look at a particular technology
like we've got inside of SNIA, there are two in particular that come to mind. One
is the Smart Data Accelerator Interface, and that is the ability to use hardware
to do data movement from one memory location to another memory location.
And in and of itself, it is a very interesting approach to solving problems, especially in a highly abstracted area like something as a service.
The other thing that's really important is computational storage, right?
Where you've got the compute processing power next to the storage device itself.
In and of itself, it is an interesting technology.
Now, let's switch over to UEC, for example.
I've got a processor that needs to run an awful lot of data and I need to move the data
in, but the data has different types.
It's got object types, it's got parallel file systems, it's got block.
And not all are being used at any given point in time.
But, like in the LANL situation,
they managed to figure out a way to avoid having to move
exabytes of data at any given point in time
for their iterative cycle inside of their workload
by doing processing right at the drive itself.
In UEC's terminology, we could do the same thing,
because the parallelism involved
really means that I've got to send data to another processor to be processed.
But if I wind up with that same principle of having the compute next to the data, I
now shut down the need for the bandwidth, the need for the latency, the need for the
network connectivity at that level.
I can actually reduce the need for it.
You can't eliminate it completely, but you can reduce the need for it. Because these things
are getting so big that you find yourself at the limits of physics. We need more efficient
ways of doing things. So what's that? That is the memory movement model, the processing
near the data itself, and the proper network directly connected into the
processors: one plus one equals three.
So each in and of itself is great; combined together, it's even more powerful.
Getting those two groups to come together and have those conversations has to start
somewhere.
And so that's what we did last month and we're going to continue to do this in the future,
not just between UEC and SNIA, but with organizations like OCP,
like IEEE, UALink, OFA, all of these different groups and organizations, and that's just some
of them.
They all have a vested interest as we move forward in solving these massive problems
to be able to be aligned.
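The one-plus-one-equals-three idea above, compute sitting next to the data so the network carries results instead of raw data, can be sketched with a toy calculation. The numbers below are hypothetical, chosen only to show why drive-side processing reduces, but does not eliminate, the bandwidth demand:

```python
# Toy illustration of computational storage: compare the bytes that
# cross the network when a filter runs on the host versus at the drive.
# All numbers are hypothetical, for illustration only.

DATASET_BYTES = 100 * 10**12   # 100 TB resident on the drives
SELECTIVITY_PCT = 1            # the workload only needs 1% of the data

def bytes_moved_host_filter(dataset_bytes: int) -> int:
    # Host-side filtering: every byte traverses the network first.
    return dataset_bytes

def bytes_moved_drive_filter(dataset_bytes: int, selectivity_pct: int) -> int:
    # Drive-side filtering: only the matching fraction crosses the network.
    return dataset_bytes * selectivity_pct // 100

moved_host = bytes_moved_host_filter(DATASET_BYTES)
moved_drive = bytes_moved_drive_filter(DATASET_BYTES, SELECTIVITY_PCT)
print(moved_host // moved_drive)   # 100: two orders of magnitude less traffic
```

The point of the sketch is the ratio, not the absolute numbers: the same workload still happens, but far fewer bytes contend for the network, which is why the storage and networking groups need to design together.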
Fortunately, as of right now, there's a
willingness and an inclination to do so. You know, it's interesting that you talk about that.
If I had taken out all of the proper nouns from what you just said, you just described
the edge AI use case as well, where we have limited bandwidth, we have to move processing closer to the data,
et cetera.
I mean, and this to me, J, this is the thing that has been so remarkable about going to
industry standards bodies and events and seeing what they're developing.
It's the maybe unintended, but always in fun and surprising cross-pollination of ideas.
I think when the next generation storage form factor designs were set, no one could have
predicted how that would transform the design, the physical design of edge servers.
And yet it did.
And it is rapidly changing the entire edge industry,
along with technologies like DMA technologies
and computational storage and so on.
I think that those were defined
before the AI use case arose,
or at least the AI training use case arose.
And yet, look at that. That's pretty
useful over there, you know, and that's something that makes this all happen. It's like magic
happens when you have different people with different areas of expertise all communicating
openly and all saying, hey, wait a second, we did something really like this back in
the day to solve this other problem. And now we have a similar problem here.
What if we, for example,
move compute closer to the collection and storage of data?
What if we offload things?
What if we develop protocols that allow you to have,
for example, hierarchical memory?
That came from supercomputing,
developing NUMA supercomputers,
and then suddenly that's the core technology
that enables CXL, or the core concept at least
that enables CXL.
It's so interesting to see how these things
kind of spread like fire from one spot to the other spot.
And I think we're definitely seeing that
with HPC and AI right now, right?
Yeah, there are a lot of characteristics of both that cross over,
but enough of a difference to make it interesting. One is not exactly a complete
overlap of those Venn diagrams on the other, but there's, like I said, enough of a
cross-combination to
really start to challenge people into solving those problems.
I think, you know, when we get into what's going on in the
future, the very cool part of this is that much of what we want to do has been
solved in the past. It's at a different scale, it's at a different level, but it involves
two basic fundamental concepts. One is the margin for error is much smaller. And as a
result, very small increments of problems can create much larger amplification problems
for systems.
Because of the tightly coupled nature of the way that these components work,
if one goes down, the whole thing goes down.
So it's very delicate, very sensitive.
So creating robustness and reliability in there is something of a challenge.
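J's point about small problems amplifying can be made concrete with a simple availability calculation. This is a hypothetical sketch, not a model of any real cluster: it only shows how per-component reliability compounds when a tightly coupled job needs every component up at the same time:

```python
# Why tightly coupled systems are delicate: a job that needs all n
# components up simultaneously sees their availabilities multiply.
# Numbers are hypothetical, for illustration only, assuming
# independent failures.

def system_availability(component_availability: float, n: int) -> float:
    """Probability that every one of n required components is up at once."""
    return component_availability ** n

# A 99.9%-available component looks fine in isolation...
print(system_availability(0.999, 1))                  # 0.999
# ...a job spanning 100 of them fails to find them all up roughly 10% of the time...
print(round(system_availability(0.999, 100), 3))
# ...and a job spanning 10,000 almost never sees them all up at once.
print(round(system_availability(0.999, 10_000), 6))
```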
But the other part of it is time.
And we tend to forget that as fast as we are,
we are still dependent upon other things to be able to
feed us information or we have to feed them information.
And sometimes that information is at a higher level.
It's at a control plane level or it's at a management level.
Sometimes it's the data plane itself.
But the factor of time plays a huge role in what we actually need to do.
And that's where the bandwidth comes into play.
That's where the latency comes into play.
The ability to get something or send something very quickly
also means that you now have to run the risk of being idle
while these other things are doing their own thing.
So the whole system has to rise.
All the boats have to lift.
The water has to be able to lift all boats.
It's not good enough to say that I've got the world's best GPU or the world's best CPU or the world's fastest RAM
if I can't get data in and out of it fast enough,
or if I can't get the other piece of the equipment on the other end of the wire to send or receive fast enough.
And we're talking about
wildly disparate components. We're talking about network interface cards. We're talking about, you know, GPUs.
We're talking about TPUs and so on and so forth.
And they're not all equal.
They don't all have the same performance characteristics.
And they're not designed to.
So you've got a lot of variability and a lot of ambiguity when you start off with
a very specific and demanding kind of workload, whether it be HPC or AI that
all has to work together in concert.
You can't be playing, you know, Bach's Brandenburg Concerto
in one group and then, you know, Beethoven's Fifth in the other at the same
time.
It just doesn't work.
You know, two beautiful pieces of music, just not at the same time.
So you want to make sure that everything is working together in that concert. And that is where, you know, you have to think outside of your own myopic world.
And I don't mean that pejoratively.
I mean that actually quite admirably, right?
We work so closely on the stuff that we do that it's often easy to forget that what we do influences other people.
And if you don't actually work with them, they're going to go off and do their own thing.
They're going to create yet another standard.
They're going to create another way of doing things.
And then someone's gonna do something that's gonna take off
and then all of a sudden,
everybody's gonna flock over to that.
So if you want your systems to work
and you want your approaches to work over time,
you've gotta rethink the significance
and the influence of what you're working on
in order to be able to make sure that not just you but the people that rely on you are going to be
able to accomplish this over time. That's a very valid point, and I think it's
interesting because everything you're talking about comes back to Stephen's point. I
can be right here in the core data center, and all those problems exist in
one version of a world. And if I move out here to the edge, it all still exists.
And then getting between the two becomes even more
of a unique challenge.
So it's always kind of crazy to see that.
And your concept about the amorphousness and things
like that reminds me of going way back into the past
to a tape-based storage war, for those that are old enough
to remember, Betamax versus VHS, for example.
That's a perfect example of kind of what
we're trying to prevent
here, both with SNIA, to Stephen's point about the broadcast of the evangelical side of things
versus the nuts and bolts of being myopic and building the really cool thing. But the really
cool thing lost because they weren't playing well in the marketplace, and the rest of the ecosystem
didn't play along with them. So it's interesting to kind of always keep that in mind
about past, present, future,
and there's always going to be a fun new thing to work on.
It's just where and how far away from today's shiny object
of AI or how far physically it is edge versus data center
in this whole play of things that we're working on.
Great.
With that said, J, what are you most looking forward to?
What developments in the industry are you looking forward to
now that we have this monster business driver of AI?
Where are we going?
I am most looking forward to,
and this is more of a selfish me thing,
I am looking more towards extrapolating the stuff that
we are doing inside of these different things from a technical perspective and marrying them
to the ethical and moral implications of accomplishing these goals. I do think
that we have a tendency as technical people to forget that what we do matters. And what I do, I do because, in order to preserve those
guidelines, you should have the most efficient, effective, accurate data when you need it,
where you need it, so that you can make informed decisions.
And so I think the infrastructure is not divorced from those questions.
It's not a software problem.
It is an everything problem.
And so I am looking forward most
to being able to have those kinds of conversations,
where the work that we do is fully understood
up and down the stack, all the way to the end user,
so that people can feel more comfortable
with using the technology without having to fear
some sort of AI, you know, AGI moment of, you know,
what's the word it's called? The anomaly?
Singularity.
The singularity. There you go. That's the word I was looking for. And I honestly think
that now is the time that we need to have those conversations, and I'm looking forward
to having those.
Yep. And I'm going to just go out on a limb and say, if you're interested in those conversations,
they're happening in the standards bodies. They're happening where people can get outside of their companies and out there talking to
each other.
Again, I'll just put in a little plug.
My favorite tech event is the SNIA Storage Developer Conference because it is so open
and nerdy and fun and engaging.
There are of course lots of other ones.
I mean, you mentioned OCP,
which is always a lot of fun every single year.
Super computing is always a lot of fun every year.
There's a lot of great conferences out there.
And anytime you get people together to share ideas openly
and to really explore what this all means.
What you're gonna find is that people
are not just technologists.
They're not just trying to push the .1 or .3 version
of the specification.
They're interested in having the big picture conversations
that J just described.
In fact, he and I have had big picture conversations
with Scott, you know, sitting in the lobby of the hotel
at these conferences.
This is what it's all
about. So I urge our listeners to get involved in these things. Before we go, J, tell us a little
bit: how can people get involved in some of the standards bodies and events that you go to?
Well, if you're interested in working on SNIA related material, the data related material,
I encourage people to look at snia.org as a starting
point, or contact myself or Scott. You know, Scott is the
chair of the communications steering committee for SNIA, and
so he's responsible for communications. You can find me
on LinkedIn, you know, with J Metz, I am wearing a hat. Or
actually, am I wearing it? I think I'm wearing a hat inside
my picture there. I may not be.
I know. What the heck? What am I thinking? The Ultra Ethernet organization's website is ultraethernet.org, and there's a lot of material there under the news section. As of right now,
we are just on the precipice of finishing up the 1.0 specification. We're
gonna have a lot of material educating people on what that means, how it works,
how to implement it. And so there's a lot of material in both organizations that
are coming down the road in the next six months or so that you can find out a
great deal of information on these topics. How about yourself Scott? What's
coming up for you? Yeah, so it was great to do the Field Day. We had the SNIA event,
so we have some future events coming up, as well as planning for SDC.
And if you're looking to connect with myself, as J mentioned, you can find me through SNIA. I'm also on
the former Twitter as SM Shadley and Bluesky at SM Shadley, as well as LinkedIn.
Feel free to drop a line and we'd be happy to talk.
Thank you very much.
And as for me, you'll find me at S. Foskett
on most social media networks, yes, including the ex-Twitter
as well as the Bluesky and the Mastodon.
And I would love to find y'all there.
You, of course, can find me as well on the Utilizing Tech
podcast, the Tech Field Day podcast,
Tech Strong Gang every Tuesday, and the Tech Field Day rundown some Wednesdays when I don't
relinquish the chair to my friends Tom and Al.
So thank you very much for listening to this episode of Utilizing Tech.
You can find this podcast in your favorite podcast applications.
Just look for the words, Utilizing Tech,
as well as on YouTube as a video if you want to see what we look like.
If you want to see J's hat.
If you enjoyed this discussion, please do give us a rating or a review.
We would love to hear from you.
This podcast was brought to you by Solidigm,
as well as Tech Field Day, part of the Futurum Group.
For show notes and more episodes, head over to our dedicated website, which is, not surprisingly,
utilizingtech.com, or find the show on X/Twitter, Bluesky, and Mastodon at Utilizing
Tech.
Thanks for listening, and we will catch you next week.