Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 08x05: Bringing IT to Healthcare and Research with PEAK:AIO and Solidigm
Episode Date: April 28, 2025

AI applications have unique requirements for server infrastructure, so a new platform is required. This episode of Utilizing Tech features Mark Klarzynski of PEAK:AIO discussing their AI-specific software-defined storage platform with Jeniece Wnorowski of Solidigm and Stephen Foskett. With a background in enterprise storage, the PEAK:AIO team evaluated the needs of AI users with a goal of delivering a simple and integrated solution that could scale to support the most demanding applications. The company began working in healthcare to support distributed applications before finding similar use cases in research and manufacturing. Rather than focusing on advancing technology and then finding a use case, Mark advocates focusing on the needs and possibilities and bringing technology to solve these problems. Edge servers are constrained in terms of power, cooling, and cost, and this requires new thinking, as well as new software and hardware approaches, to continue to progress.

Guest: Mark Klarzynski, Cofounder and Chief Strategy Officer at PEAK:AIO

Hosts: Stephen Foskett, President of the Tech Field Day Business Unit and Organizer of the Tech Field Day Event Series; Jeniece Wnorowski, Head of Influencer Marketing at Solidigm; Scott Shadley, Leadership Narrative Director and Evangelist at Solidigm

Follow Tech Field Day on LinkedIn, on X/Twitter, on Bluesky, and on Mastodon. Visit the Tech Field Day website for more information on upcoming events. For more episodes of Utilizing Tech, head to the dedicated website and follow the show on X/Twitter, on Bluesky, and on Mastodon.
Transcript
AI applications have unique requirements for server infrastructure,
so a new platform is required.
At the same time, we have to start with the fundamentals.
What does the user need? What does the application need?
Instead of just thinking about what the technology can deliver.
That's the subject of this episode of Utilizing Tech,
featuring Mark Klarzynski of PEAK:AIO,
Jeniece Wnorowski, and myself, Stephen Foskett.
Welcome to Utilizing Tech, the podcast about emerging technology from Tech Field Day,
part of the Futurum Group. This season is presented by our friends from Solidigm and focuses on AI
and the edge and other related topics. I'm your host, Stephen Foskett, organizer of the Tech
Field Day event series, including AI Field Day and Edge Field Day.
And joining me from Solidigm as my co-host today
is Jeniece Wnorowski.
Welcome to the show, Jeniece.
Hi, Stephen, thank you for having us.
It's good to be back.
It is good to be here as well.
And we have been focusing all season long
on the sort of unique application requirements
for AI servers, for edge servers,
the fact that these are somewhat different
than what we found in the conventional data center.
Yeah, and the world is pretty wrapped up right now
around enterprise and all things power and cooling,
but it's really interesting to take a look at
how are organizations deploying AI truly at the edge?
And so we're delighted to have with us here today,
Mark Klarzynski from PEAK:AIO,
who's gonna talk a little bit about the unique use cases
that they deploy at the edge.
Welcome to the show, Mark.
Why don't you introduce yourself quickly?
Thank you, Stephen, and hello, Jeniece.
I'm Mark Klarzynski from PEAK:AIO.
I'm the founder of PEAK:AIO, and I have the good fortune
to have worked with Solidigm for a few years now
with a real focus on AI, predominantly
in that incubation period and the edge case where it's developing more and more.
And hopefully we can discuss some of those exciting
and up and coming current and emerging use cases.
So Mark, tell us a little bit more about Peak AIO
specifically, what is it that you're building?
So Stephen, let me jump back a few years,
you know, pre-COVID, which we've almost forgotten now.
I was, I've been in storage, as you can see,
I've gone past the gray stage
and I've been in storage for 35 years now.
And I was happily consulting within the NVIDIA channel
at the time.
And at this point AI was beginning to take off.
This is way before ChatGPT and some of the early pioneers,
which were the obvious use cases like health care, et cetera.
They were moving ahead and pioneering some amazing projects.
But the challenge was that while Nvidia had made this new amazing ecosystem
and this new market completely,
the rest of the infrastructure hadn't really caught on.
And so, the solutions were going out,
but they were really not performing maybe quite
as well as they should do,
because everybody had really rebadged
traditional IT products
and turned them into AI branded products,
but they weren't really working for a whole bunch of reasons, technical and, you know, use case bits.
So we actually started PEAK:AIO when we realized that there was a need. This was not just a new market
demanding a completely different level of performance with a different use case; it needed a whole different range
of ecosystems and infrastructure.
Why would we expect data storage that's
been developed for enterprise use to suddenly work in an AI
use that's completely the opposite?
So we really focused on developing AI storage
to accelerate the use case at the time.
And we were really fortunate to work
with some of those early pioneers
in the healthcare and beyond to allow us to,
for the first time probably in my lifetime in storage,
where we didn't design something
and then told the market what they needed.
We actually listened to the market and asked, hey, what is this new thing?
And what challenge do you have? And the challenges were just so fundamentally different.
We just simply started afresh, luckily enough to work with Solidigm,
and we built from that Solidigm foundation upwards to deliver what they needed
and it's actually what they needed to achieve the best out of that AI roadmap. So Mark, with that,
thank you for that introduction. I just want to follow up and ask, so is it software that you
guys do or how is it that your solution is vastly different than others on the market today?
Yeah, it's purely software. Now, there's nothing necessarily new about software defined storage, as we call it.
What is different in our case is we took a step back and we said, hey, you know, when we were making software-defined stuff some 20 years ago, we had hard drives,
we had 10 gig NICs, we had a bunch of 20 year old technology. Today we have amazing NVMe from you,
we have amazing networking from others and wonderful off-the-shelf servers that, you know,
have the power that we would have only dreamt about. So in this case, what we did is we said, well, let's not take everything we know, let's just take everything that's available and put it together
and get as close to the hardware as we can to make it work in a way that the user needs it.
Nothing more. No smoke, no mirrors, just deliver exactly what the user needs. And the advantage of that one was that it's a different level of simplicity, which
is exactly what an AI user needs because they're often
a clinician, a doctor, a professor, a biochemist,
and not an IT specialist.
But also, we're so close to the hardware that, I'd love to say it was an amazing strategy, but that meant
that when you guys brought out generation five, we just doubled in performance because we were
basically taking what you delivered and making it usable. So that's what separates us. We take
off-the-shelf hardware and turn it into hyperfast, AI-focused data acceleration.
Now, when you say AI focused, you know, what exactly do you mean?
I mean, is this for the big AI supercomputers
in the cloud or is this for your doctors and engineers
and so on in the field?
Originally it was the doctors and engineers in the field.
As AI has moved and projects have become more mainstream
then they are getting bigger and they're becoming more known.
But one of the largest challenges, Stephen, actually,
I mean, it's obvious when you know it,
but if you think, if we just simply think about
what we had before AI. What we had was an enterprise customer who may have a thousand machines: 500 of them were probably mobile, you know, laptops; 200 of them were probably workstations; 10 servers, you know, an email server, a database server. So thousands of connections from thousands of machines,
none of them demanding ridiculous amounts of performance,
maybe one or two,
but all of them just wanting a decent amount.
On the opposite side, you had HPC,
or still have HPC that generally would have millions of cores
over thousands of compute nodes.
And so you've now got storage here
that's delivering tremendous performance
to millions of cores over thousands of nodes.
Whereas suddenly AI came along and Nvidia said,
well, hey, we've got a million cores and two machines.
Well, we've never seen that.
We've never had one machine demand that much performance
and be able to sort of basically take the entire performance of storage over protocol.
And just, you know, we were able to deliver that. But generally in the past, that would be over so many machines.
AI fundamentally changed the way we delivered data.
And in the beginning, yes, to be fair,
there wasn't the giant super pods that we see today.
Everybody was learning AI, right?
Nobody really knew what they were doing.
They just knew they needed it.
The amount of conversations I had that were saying,
yeah, we're going down the AI path
and it'd be, well, what are you doing?
Well, we don't know, but we know we need AI.
And everybody did that.
I think it was probably some years later before,
you know, it became obvious that there was a use case
in just about every vertical.
So really at the beginning,
it was very much smaller clusters of one
or two of what we call DGXs and HGXs, which are sort of the NVIDIA servers.
And that would really be a professor and his team, or a company that was testing some AI projects,
or even an HPC company that was trying to work out how to use GPUs. And so they certainly were a lot smaller,
but they still demanded that amazing performance.
So really the difficulty was we'd always had performance.
The advantage we had was that we were able to deliver that
to many machines and it was aggregated.
Suddenly having that demand and being able to deliver it
to one or two machines was really challenging.
Yeah, and you mentioned Mark, you know,
being customer centric and really getting in
with the customer and listening to what they have,
you know, what their challenges are.
And I think you're right, not everybody gets access
to the big, you know, DGX servers and,
let's be honest, not everybody can get their hands on a GPU right now, right?
So there's lots of challenges.
But you know, your solution being that it's true edge, right?
It has all that power that you mentioned of some of the big super pods, but you're putting it as close to the patient,
if you will, and the physician in a hospital environment. So let's take like an MRI use
case. I heard you guys once say, someone coming out of that machine, before they even tie their
shoelaces, is able to get their results. Tell us a little bit about what enables that? What
does that look like?
Yeah, I mean, we were really fortunate that one of our first encounters in AI was with
a large university in the UK, King's College London, and associated universities.
And they were really focused on what they called AI, value-based healthcare.
Because if you think about healthcare,
there's an advantage at every level to the patient,
to the government, to the, in our case in the UK,
the National Health Service, the local authorities,
to the insurance companies,
there's an advantage in diagnosing or getting,
providing a better pathway or outcome quicker. It saves costs,
it saves lives, it saves staff. And so we were really, you know, blessed to have worked very
close with these guys and they were doing such tremendous work. And I can remember actually in
one of the lectures that one of the gentlemen was giving, he actually said the overall goal was,
let me try to get this right, it's not verbatim so I apologize to George,
but the overall goal was for them
to be able to collect the collective intelligence
of every radiographer in the entire world that
has the knowledge of every rare disease,
as well as every other MRI scan output, and be able to put it into a little box and into a
model. So regardless of where you went for an MRI, that could be in the middle of California,
Sacramento, or it could be in the outback of Wales in the UK, you will get exactly the right person
looking over your MRI and being able to make an instantaneous decision. Now, the
first difficulty with that is you then run into ethics: is that correct, should
that be right? But actually, if you twist it around a little bit and say, well, actually,
certainly in the UK, and I suspect it's worldwide, we have a shortage of radiographers. Not many people grow up, you know, in school today wanting to be a radiographer.
It's not on the curriculum. And so this isn't to replace that.
This just helps that workload. So, for instance, generally,
the decisions today are often put in three categories.
Someone is run into an MRI, and one of three things happens:
it sees what it is pretty sure is a problem that needs real investigation,
it's not sure, or it doesn't see a problem.
And in reality, the not-sure and doesn't-see-a-problem cases will still go to a radiographer.
A human should still see that,
to double check it, obviously. However, if it does see a problem with a degree,
you know, of confidence, why not take it straight to the next step? Why wait in that waiting list
for the radiographer to look at it and just take them straight to the consultant that's going to
overview? Yep, this really is a problem. We're going to start some treatment.
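The triage routing Mark describes can be sketched as a tiny decision rule. This is purely an illustrative sketch: the category names, threshold value, and function are my assumptions, not part of any real clinical system described in the episode.

```python
# Hypothetical sketch of the triage routing described above: only scans
# the model is confidently flagging skip the radiographer queue and go
# straight to a consultant; everything else is still human-reviewed.
# Category names and the 0.9 threshold are illustrative assumptions.

def route_scan(finding: str, confidence: float, threshold: float = 0.9) -> str:
    """Route an MRI result to the next step in the care pathway."""
    if finding == "problem" and confidence >= threshold:
        return "consultant"    # confident finding: start treatment sooner
    return "radiographer"      # "not sure" / "no problem": human double-check

print(route_scan("problem", 0.95))     # confident finding
print(route_scan("no problem", 0.95))  # still reviewed by a human
```

The key point of the design is that AI never removes the human from the uncertain cases; it only accelerates the confident ones.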
So we were fortunate to be involved
in the early trials of that and also the development
of something called MONAI.
Basically, Professor Sebastian and his team started this,
and it's become a world-standard open source framework
adopted by NVIDIA.
Because prior to this time, and I don't want to prolong this too much,
but I remember doing some work and we looked around the world and at that moment,
there was something like 450 individual healthcare AI projects,
all doing something similar, but none of them shared data,
none of them had anything in common.
They were all their own teams doing their own work.
So MONAI was created to make
a common operating system, almost, for the hospitals.
So it gives a basis that
then the individuals can write their projects on top of it, which means as we move forward
that there's a common framework that will allow every hospital to in some way interact and gain from each other
even if they're not sharing data.
You know, it's interesting that to hear you speak because what you're saying is, I think, something
we don't hear all that often in tech generally and in AI specifically, which is that you're
starting with the application, with the use case, with the need, rather than starting
with the technology.
You have the basis in technology, but you're saying, first, let's think about how this is going
to be used, what it's going to be used for, how will it benefit people. This is such a
contrast from what we are hearing in popular culture about AI, you know, the explosion
of generative AI apps and chat bots and so on. I think the biggest criticism of most
of that is that people are not doing what
you're doing. People are not saying what are we trying, what is this technology for, what are we
trying to achieve, and how can we achieve that? And instead they're saying, wow, this is cool,
how can we push this further and further and further to do something? And also everything
you're talking about is very much not chatting with a large
language model. Now it could be, that could be one of the tools you're using. But again,
AI applications and productive AI applications, especially at the edge, as we heard about
when we talked to Nature Fresh Farms and how they're growing tomatoes with AI, you know,
the stuff that you're doing is a completely different world.
And it's really refreshing.
You know, how do you bring technology to the problem and vice versa?
And how do you avoid that sort of irrational exuberance for such cool, fun technology as generative AI?
You know, that's actually a really good question at many levels because that was one of the big learning
curves for me because clearly as I said before,
I've spent many years in IT and storage
and most businesses, me included in previous storage
companies, what we tend to do is we tend to,
we think somebody like me designs what we believe
is the next generation.
We do that in stealth mode, then we launch it, and then we go out and evangelize to everybody why they need it.
The strange thing is, is when we came to AI, everybody, you know, every vendor did pretty much the same things.
You know, you need this to make your AI go faster and better, better return on investment, all the things.
And yet they got to the professor and he went, I don't understand you and I don't need you and I don't want that at all. All I want to do is
solve ABC. And what was really refreshing, I suppose, when you get to my age, it was actually
the first time where the market was so new. It was so at the edge that nobody really knew where it was going to go and still
don't today. We still get surprised by some of the outcomes. And so I initially sat down
when we were trying to work out what was going wrong with storage as we had it at the time.
And I can clearly remember some of the early conversations with some universities that were doing some really pretty cool work, and I remember saying, okay, we're involving two DGXs here, how are we going to
deal with the storage, and they genuinely turned to me and said, what do you mean by storage,
because they were looking at this as a problem and that specific problem was a medical one again
where patients who had given birth on a Friday,
if it happened to be over five o'clock when the doctors had gotten home, they had to wait
till Monday to determine whether or not their baby had a problem, a particular type of problem,
because only the doctor ran that test. And so the clinician was saying, hey, this makes no sense,
we should just use AI to do this. Yet he had absolutely no understanding of IT,
didn't want any understanding of IT.
He just had a problem and a bunch of tools
that could probably make it work.
So it was refreshing, and that's the question,
to actually not tell the market what they need
and have IT people waiting for you to give them the next generation, but
to actually have a market saying, look, we need this to solve this problem. And A, that's
refreshing and B, it's just a lot more fun because you're doing some, we've spoken a
lot about medical, but we, you know, as you know, we've worked a lot with the
Zoological Society of London and that was just an amazing project because that's dealing
with real life worldwide conservation of animals and to see the impact that they are making and
the ability that AI would help them regenerate what were almost extinct birds
and slowly build back populations or keep control and
help the growth of populations that are slowly dying out. But by having the
ability to analyze data and see trends in data that was just not possible
before, they can run through scenarios that allow them to create conservation plans that we
would never have been able to do before.
So really, Stephen, this has been an eye-opener for me,
but really a great eye-opener because for once in my life,
this isn't about making a company make more profit
or run that bit faster or be more productive, which is all excellent
and very important.
This is actually, the outcome is something
that makes you smile often.
I couldn't agree more, Mark.
I mean, the ability to save the hedgehog,
as we've talked about before, right?
Or look at the ecosystem and the patterns of that,
adorable little animal,
right, is just something amazing. And you're giving those researchers so much more of an advantage to do
so, right? So when we're talking about the London Zoo project, you guys were utilizing, I think, I'm forgetting the exact server and I apologize.
But we populated that server with a bunch of 122 terabyte SSDs.
And tell us a little bit about what did that do for that particular researcher?
What was interesting on this, which is, I mean, this is something again new.
If you look at traditional IT,
people have data centers.
They have, they've got nice cooling power
and have racks and everything you need.
London Zoo had an old office that they put a rack in.
And it was an NVIDIA, maybe two NVIDIAs, as I'm trying to remember now,
DGX H100s, and they pretty much used up every bit of electrical power that that office could
deliver, because it isn't a data center. But they needed an immense amount. I mean, if you
imagine that they're doing worldwide projects of camera traps, footage
of every tiger and every lion and everything, every
shark that's out there, the immense amount of data
that they have.
So they needed to get petabytes of data,
but they had no power.
So without Solidigm's real high-capacity drives,
we just could not
make this work.
And there's a sort of a bit of a funny story on this one, because what we don't
realize is, you know,
with power comes heat, which means cooling.
And in that case, they actually had to build like a refrigerator on the outside of this
office block, which was right next to the water
buffaloes, the Chinese water buffaloes I think. And actually the noise that this made,
they had to relocate the Chinese water buffaloes while they installed all
this. So the implications that this has on normal people that are using normal offices
to do really remarkable things,
actually has taken a lot of technology.
And you can't just go in there with the age old approach
of just putting a lot of storage in.
So, you know, we had to get
that immense amount of petabytes in about 4U
and deliver a tremendous amount of performance
so that they could train
these models and learn from these.
And on a serious side on that, you know, they're only just beginning to realise what
they can do with it because prior to this, one of the things you do, if you think about
a, you know, a camera trap that you often see on TV, it takes photos of animals,
anything passing. They generally run this through an application first that removes anything
human-like: a picnic table, a car, a human, a ball or whatever, so that you end up with an image that's
hopefully got an animal on there. Prior to this box, they could do something like three a minute,
I think it was.
That's what their server would do.
After this storage and the Solidigm drives, et cetera,
they could do something like over 1,000 a minute.
So now what they can process, now they've
suddenly got those images.
Now they're beginning to learn what they can do.
And just as an example,
and I know we were talking earlier, I know the hedgehog is not a native in America, but
in the UK, we grew up with hedgehogs. Everybody had one in a back garden just transiently
roaming around. Now you rarely see them. And if you think about it, we've urbanized
everything, we've got roads everywhere.
Nobody's got much grass in the gardens nowadays
because of the parking.
Hedgehogs can't get across a bypass
to get to another hedgehog.
So they're inbreeding.
They're not mixing.
The colonies are getting smaller.
And so they're beginning to use things like AI
to actually be able to, when they get permission
for new developments, they use AI to develop pathways for hedgehogs to be able to mingle
as they always did do, yet still allowing us to urbanize and to move forward.
Now you take that to India and the tigers over in India, they know every single tiger by its pattern.
So they can recognise that within a millisecond, no matter what. If one, unfortunately,
ends up in a road somewhere, which hopefully it never does, they know exactly where it came from.
And so what that will end up doing, which is fundamentally a small rack in an office, is changing wildlife
around the world.
And we're still learning.
We're still seeing what their next challenge is.
Now they can do all these images.
What do they do next?
And how do they deal with these?
So they're going worldwide, and they're
opening that service over to, you know,
conservation experts around the world.
It's quite amazing to be involved in
and to see the results of something so small
but yet so big in its impact.
When you mentioned the hedgehogs
and you said they're using AI,
I pictured hedgehogs using AI.
Yeah, exactly in the way there.
Ha ha ha.
But to be honest, it shouldn't just
be researchers in the ivory tower using AI.
Now, it probably shouldn't be hedgehogs.
But it should be everyone doing tasks
that should be able to use this.
Yeah.
One of the exciting things that I'm seeing in the AI space
is an explosion of, I guess you could call it open source.
They call it open source in many cases,
even though maybe it's a different type of thing.
But just open science, open development, open applications,
catalogs full of models.
Again, sort of refuting the challenge that's thrown at AI sometimes,
that it's just a bunch of chat bots and they're just getting better and better and all you're
trying to do is burn down the rainforest and make a super mind. It's not like that at all.
These researchers can go and they can benefit from each other's work. They can go to a conference
and learn about how an image recognition model is able to recognize
tigers by their stripes. And somebody else in some other place could say, well, I'm looking
at sharks and they have distinctive patterns as well. I wonder if we could use the same
visual model. And similarly on the, you know, the hardware and software side, people saying,
you know, how can we leverage this technology in another area? That's what makes this whole
edge space so interesting too, is because edge is
fundamentally a world of constraints. It's not an
unconstrained data center where you have, you know,
acres and acres of ultra high performance servers. No,
this is, as you said, the building next to the water
buffaloes
and we've got to figure out how we can deploy
servers in there that can do the task that we need
in this location without just destroying everything.
And in a way, I find that positive
because when people are faced with constraints,
they tend to come up with novel and interesting solutions. When they're faced with no constraints, they tend to just burn
everything, right? You tend to just like turn it all the way, turn it to 11. If it only
goes to one, you have to figure out how to make it work. And is that your experience
kind of trying to deliver these solutions in those environments and those edge environments?
I'm becoming more so, and again, one of the products that we've been working with Solidigm on
over possibly the last year or so is related to this.
When we started PEAK:AIO, we realized that what we needed to do was deliver six times the performance
in a sixth of the space with a sixth of the power
consumption. So we did that, and we did that because that's what the labs and that's what the people
and the users demanded. It wasn't because they didn't want to spend the extra money,
which often helps, but it was because they simply couldn't power it.
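It is worth spelling out the arithmetic implied by that "six times the performance in a sixth of the space with a sixth of the power" claim. The 6x figures come straight from the statement above; the multipliers below are just the claim worked through, not independently verified numbers.

```python
# Efficiency multipliers implied by "6x performance, 1/6 space, 1/6 power".
# The three 6x factors are taken directly from the claim in the episode.

perf_gain = 6.0      # 6x the performance
space_saving = 6.0   # 1/6 of the rack space
power_saving = 6.0   # 1/6 of the power draw

perf_per_watt = perf_gain * power_saving  # 36x performance per watt
perf_per_u = perf_gain * space_saving     # 36x performance per rack unit
print(perf_per_watt, perf_per_u)          # 36.0 36.0
```

In other words, the claim amounts to a 36x improvement in performance per watt and per rack unit, which is why it matters in power-constrained edge sites.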
And if you look at the way GPUs are going now,
a new GPU server probably takes 14 kilowatts.
That's a tremendous amount of power.
And I think I saw a statement not so long ago
by Jensen saying that,
he can see a time when every data center
has a mini nuclear power plant. I mean, that's scary,
right? And one of the things we've been starting to work on, I'm likely just about to announce,
is an extension of something we call Apex Drive, which will actually enable us to save 50% power
on the NVMe drives themselves. So if you're talking about one drive,
that's not so significant.
But if you start talking about a lot of the GPU service
providers, those that are stimulating
a lot of the inception, the new starts, the incubation,
they've got thousands and millions of these NVMe drives.
And if you can save 12 watts a drive,
that's a significant amount of power, cooling, carbon.
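The 12 watts per drive figure Mark cites compounds quickly at fleet scale. A back-of-envelope sketch, assuming that 12 W figure applies uniformly; the 100,000-drive fleet size is my illustrative assumption, not a number from the episode:

```python
# Back-of-envelope fleet savings from shaving ~12 W per NVMe drive (the
# figure cited above). The 100,000-drive fleet is an illustrative assumption.

def fleet_savings(n_drives: int, watts_per_drive: float = 12.0):
    """Return (MW saved continuously, MWh saved per year)."""
    watts = n_drives * watts_per_drive
    mw = watts / 1e6
    mwh_per_year = mw * 8760  # hours in a year
    return mw, mwh_per_year

mw, mwh = fleet_savings(100_000)
print(f"{mw} MW continuous, roughly {mwh:,.0f} MWh per year")
```

At that assumed scale the saving is on the order of a megawatt of continuous draw, before counting the knock-on cooling reduction.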
And so it is, you know, it's almost the opposite
of what we've always done,
which is we've always had space and room.
And so, you know, when you want to go bigger and faster,
you just add another thing in.
Just add another widget, and it goes faster and everybody's happy.
Take away that space, take away that power and, you know, only allow you to turn up to
number one.
You've got to innovate.
You've got to say, actually, how do I get where they need it to be, but I can't turn
this up to two.
You know, I've got to stay at one.
So something has to get better.
That in many ways has been
the interesting part of our journey.
Although we've had, you know, software
technology that does similar things for
the last 20-plus years, we've had to scrap
much of it, which is disappointing to
me because I wrote it. But it's actually because that would have taken us to seven,
not one.
And now when you've got to get one, you've got to get closer.
Now, the advantage is you've got superstars like Solidigm
who are doing most of the work for you.
And I know most of the storage community
will probably hate me for this, but storage is,
you know, me included, we've lived on smoke and mirrors for the last few decades.
We've generally lived on, somehow we do witchcraft.
We convert these drives that you don't want to know anything about and they're not really
intelligent and we make them work for you magically.
But the reality is, most of the work over the last decade has been on that NVMe side.
You know, as Solidigm have done the work,
all we really need to do, we don't need witchcraft,
is just make them work for the user.
So we've got the advantage that everybody,
I think also related to your open source analogy,
it's in many ways the same with the hardware.
If we truly don't hide and try and disguise everybody's contribution, then collaboratively we can all create a better solution.
If we acknowledge the advanced nature of Solidigm, use it for what it is, and don't pretend it's anything other,
then we make a better product. When we start adding smoke and mirrors and coming up with
cool names and every other way that we can think of marketing it, we just create confusion and
proprietary solutions that are taking us away from what the new world needs. Now, the enterprise space isn't going away.
The HPC space isn't going away.
But AI and GPU workloads, that's a completely new market
and probably the largest technological shift
that I've seen since the day of the personal computer.
I can remember sitting looking at a personal computer thinking,
and what would anyone want one of
these on their desk for? Until I saw, you know, WordPerfect or whatever it was in them days.
And you know now it's the same with AI, it's the most significant shift I've ever seen
and it's time for vendors to stop trying to do it alone and trying to create proprietary solutions and their own standard and only their value. You know it's about collaboration now and
for the greater good of the movement. Wow, that is a great way to end the
discussion I think. I think we should stop there and that was amazing a lot of
good detail. Thank you so much for the examples. You know, we are delighted
to have you here today, and having gone through this level of detail with PEAK is eye-opening. And I'm a big believer in your
organization. And after talking with multiple customers, I'm excited to see where Peak will go into the future. But why don't we
tell the audience though, where can folks learn more about your organization?
Some of you may have just seen me at GTC, over with Western Digital and
yourselves. Other than that, we're usually around at a lot of shows,
but just visit us at peakaio.com.
And I'm over at LinkedIn; with a name like Klarzynski,
you can find me.
And I think what is pretty obvious is,
even though I'm the founder,
I'm still passionate about what we learn every day.
So if you're a user and you really want to get into it
and you want to influence
what we're designing, reach out to me.
That's great.
Thank you so much, Mark.
And thank you also, Jeniece, for being part of this conversation.
And everyone else, thank you for listening to this episode of the Utilizing Tech Podcast.
You'll find this podcast in your favorite podcast applications as well as on YouTube. If you enjoyed this discussion, please do consider leaving a rating and a nice review.
We would love to hear from you.
This podcast is brought to you by Solidigm and by Tech Field Day, part of the Futurum Group.
For show notes and more episodes, head over to our dedicated website, utilizingtech.com,
or find us on X/Twitter, Bluesky, and Mastodon at Utilizing Tech. Thanks for listening and we will see you next week.