@HPC Podcast Archives - OrionX.net - @HPCpodcast-91: David Kanter of ML Commons on AI Performance Measurement
Episode Date: November 8, 2024
Special guest David Kanter of ML Commons joins Shahin Khan and Doug Black to discuss AI performance metrics. In addition to the well-known MLPerf benchmark for AI training, ML Commons provides a growing suite of benchmarks and data sets for AI inference, AI storage, and AI safety. David is a founder and board member of ML Commons and the head of MLPerf benchmarks.
Transcript
Are you attending SC24 in Atlanta? Lenovo invites you to visit booth 2201 on the show floor at the Georgia World Congress Center from November 17th through 22nd, 2024. In the meantime, visit lenovo.com slash HPC to learn more about Lenovo's HPC solutions.
We were well positioned to be seen ahead of the curve.
And so over time, ML Commons has grown.
We've now got over 125 members.
Is there an overall pattern or trend that you're seeing in the totality from the benchmark results? I assume other than simply more powerful, faster.
In the case of MLPerf, the data set stays the same, so using a larger and larger system becomes ever more challenging and really stresses the ingenuity of the whole community to come up with different ways of partitioning the problem.
From OrionX in association with InsideHPC, this is the At HPC podcast. Join Shahin Khan and Doug Black as they discuss supercomputing technologies and the applications, markets,
and policies that shape them. Thank you for being with us.
Hi, everyone. Welcome to the At HPC podcast. I'm Doug Black of Inside HPC with Shahin Khan
of OrionX.net. And today we have with us special guest David Kanter of ML Commons. David is a
founder, board member, and the head of MLPerf for ML Commons, where he helps lead the MLPerf
benchmarks. David has more than 16 years of
experience in semiconductors, computing, and machine learning. He was a founder of a microprocessor
and compiler startup. He was also at Aster Data Systems and has consulted for NVIDIA, Intel,
KLA, Applied Materials, Qualcomm, Microsoft, and others. So David, welcome.
Pleasure to be here. Thanks for having me on the show.
Great to be with you. So why don't we start with a quick profile of ML Commons and its mission?
Yeah. So ML Commons was started to house the MLPerf benchmarks. And our mission is making AI better. And what that really means is how do we make AI faster and more capable, as many of our
MLPerf benchmarks help with?
How do we make it more efficient, as MLPerf Power does?
As well as how do we make it safer?
And so we have a new effort in AI risk and responsibility that's sort of focused on that
last piece. And so I think all of those things are what guide ML Commons. We are a consortium, we're a
partnership of industry, academia, and increasingly civil society and other folks, all really focused
around the goal of making AI better for everyone. Okay.
When I looked up background information on you all, I was a little surprised the organization
dates back to 2018 when there was still talk, there were still worries about another AI
winter, which I think is a thing of the past now.
So tell us how ML Commons kind of came into being and how large your membership, how it's
grown.
Absolutely. So we started not even as ML Commons, but as MLPerf focused around the need for
industry standard, open, fair, and useful benchmarks in AI. And originally that was
what brought us together. And the initial effort would become
MLPerf training. And so that turned out to be, I think in many ways, a success beyond
our imagination and in terms of the people and organizations that it really rallied together.
But we knew from the very start that we needed a foundation or a consortium to really house it and act as the
custodian of MLPerf. And so that is MLCommons. Now, MLPerf started in 2018, but MLCommons was
formed sort of mid-2020, early 2020, actually. So I was the founding executive director and the party that we had to kick off the launch of ML
Commons was slated for the third week of March, 2020, which is really much more famous for other
events that went on. I'm afraid so. Yes, indeed. So it's funny you say that there were concerns
of an AI winter, because I don't think within our ranks anyone was really worried about that.
Well, clearly you had a vision for AI.
Yeah, I mean, I think we were well positioned to be seen ahead of the curve.
And so over time, ML Commons has grown.
We've now got over 125 members of all stripes, and it's just fantastic.
That's excellent. David, let's take us through the benchmarks as you've organized them. I know
in our pre-call, I was saying that my view of benchmarking is like trying to measure a full
athlete and you can have specific races that they compete in or have a decathlon, or you can have a measurement
for a single muscle or a combination of muscles. And AI is everywhere. So choreographing what's
being measured for what reason is all over the map. You alluded to security, alluded to HPC.
Would you just take us through the whole set and the whys and wherefores of each?
Absolutely. Yeah. So MLPerf is really a family of benchmarks and they're all full system benchmarks
in many respects. Well, MLPerf storage, maybe not so much, but broadly speaking, they're a set of
full system benchmarks. And if we were to start at the top, I'd say
there's MLPerf Training, which is the one that started it all. MLPerf HPC, which is sort of a
variant of training focused on scientific problems and data sets. Then also within sort of the data
center genre, MLPerf Inference, and that has both a data center and
an edge flavor. But those are both focused on the inference side, generally higher power.
Then stepping down, we've got MLPerf Mobile, which is looking at inference in the context
of mobile devices. MLPerf Tiny, which is sort of inference for microcontrollers and very low power.
And then sort of the two newest or three newest members of the family, I'd say MLPerf Client,
which is looking at, sort of, inference for PC-class systems.
And that is slated to be released soon.
Very excited about that.
MLPerf Storage, which came out pretty recently in the second iteration.
And that's a little bit of a departure in that sort of looking at training as a workload,
but then trying to isolate out the storage aspects.
And then sort of last to round it out was MLPerf Automotive, which is focused around how can we take the magic of MLPerf inference
and bring that into the automotive context, right? Because as we all know, cars are getting much more
intelligent and the capabilities there are quite impressive. And a lot of that is powered by AI.
You also mentioned safety benchmarks. What is that about? Yeah. So our AI risk and
responsibility group, that's another part of ML Commons. They're focused on how can you measure
the output of AI models, so sort of quality or qualities of AI models to help us get a grasp on
sort of risk and responsibility, right? Is the model saying the
things that it should? Is it not saying the things that it shouldn't? You've got a chat bot to help
someone with customer service at Pepsi. It shouldn't really have an opinion about what brand
of car is best, at least up until Pepsi buys a car company. Then I would expect its loyalties to shift accordingly. So is it more than a hallucinometer, basically? I mean, I think
hallucination is sort of one aspect of that. I'm probably not the best person to dig into the
details of the AI risk and responsibility, but I think that is potentially one aspect that might
be interesting. I see. Sounds like generally just the kind of bias that might interfere with
proper advice to the user. I think it's a great area, by the way. So I applaud you for doing that.
And I think it's a very hard area.
Absolutely. Yeah. I mean, and one of the things I'd say is sort of a unifying theme of all of
these is when I look at ML Commons and what we do, we're really good at building and we're
really good at measuring and we're really good at AI. And so it's sort of, I think the intersection
of those three really helped to capture sort of our sweet spot in some ways.
Now, how do these benchmarks get created, formulated? What is the governance model for
these? Ooh, that's a great question. So each of the benchmarks originates out of a working group.
And like many open organizations, anyone can join the working groups.
Our members are on six out of the seven continents.
Anyone in Antarctica would be welcome to join.
Six out of seven is good.
Seven out of seven is better. And those decisions are made in the working groups through a process we like to call grudging
consensus. It's not quite consensus, but we want to make sure that no one leaves with a grudge.
That's great.
Obviously, some benchmarks that you all do date back further than others, but are some,
could we say more
popular or influential among the group? I love all of my children equally.
They're all special. But I would say if you look at it by number of submissions, I mean, certainly
the most are in MLPerf inference and then maybe training. And I'd have to think about it from there. Some of them
also, like MLPerf client is going to be a slightly different animal and has a very different sort of
submission and publication process. So it's hard to even compare. But certainly I would say MLPerf
training and inference are extremely popular. And I was really pleasantly surprised by MLPerf
storage. We got, I think, 13 or 14 submitting organizations, which is really great for the second iteration
of a benchmark.
Yeah, that is great.
What is the Storage 1 measuring and how does it do that?
And I'm motivated when I ask this by the question of, are the storage layout patterns
and access patterns and a correlation between the kind
of hardware configuration and technology and topology, are those settled enough for a benchmark
to be able to be illuminating?
Or is it still like a fluid situation?
That's a really good question.
Well, let's start at the start.
So MLPerf storage, the focus and the goal here,
almost all of our benchmarks are formulated to answer a question. And the question is better
and faster, right? And for MLPerf power, of course, efficient. And we want to produce things
that are fair, that are reproducible, and that are going to be useful to help buyers and sellers make decisions
as well as to help engineers design the next generation of systems.
So that's kind of the goals.
MLPerf Storage looks at the workload that is AI training and then zooms in on the loading
of data that occurs prior to computation. And the need in many ways arose,
as I think even as far back as three or four years ago, I had heard reports of folks buying
really large clusters of accelerators and then discovering that their storage systems actually
couldn't really keep pace and couldn't saturate the compute side. And so that was the first clue that, oh gosh,
there's a real problem here and one that we can actually help to address. And the idea of MLPerf
storage is to model the data ingestion that you need to feed a given set of accelerators.
And so when you look at the results, which can be a little confusing, but it's typically formulated in terms of how many accelerators are being fed and what kind
of accelerator, right? Because the geometry of a given accelerator will influence how much data
is in a mini batch that you need to feed in, right? And so the actual process that we're measuring is start to finish. We've got some data for a given workload, and maybe it's a 3D volume that's
one to 200 megabytes. Maybe it's an image that's a couple hundred kilobytes. How do we feed in
these batches of data off of permanent storage into the host memory of a sort of emulated compute platform. And so that's sort of what
you're measuring. And then you had some really good questions digging in a little bit deeper
about access patterns and so forth. And a lot of that is going to be implementation specific.
For instance, TensorFlow and PyTorch, their default storage engines tend to handle data quite differently.
And there are also custom storage backends for PyTorch that are different from the default.
And so ultimately, how the workload expresses itself will be subject to all of that.
And there is this degree of configurability there, in part because I think one of the
core principles of MLPerf
is to not be opinionated about the right implementation. When we started, PyTorch and
TensorFlow were sort of the most common training frameworks out there, but Paddle was widely used,
CNTK was a thing, MXNet. And so when you're faced with a situation like that,
you have to admit that they're all perfectly viable options.
And so it's sort of best left to the submitter
to pick what they think is the right one.
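To make the shape of what MLPerf Storage measures a little more concrete, here is a minimal sketch in Python of timing the data-loading path in isolation: read serialized samples off permanent storage into host memory and report throughput, with no model compute at all. The directory path and plain-file layout are hypothetical stand-ins; the actual benchmark is built on a far more elaborate workload generator, so treat this purely as an illustration of the idea.

```python
import os
import time

def measure_ingest_throughput(sample_dir: str, epochs: int = 1) -> None:
    """Time how fast samples can be pulled off storage into host memory.

    Illustration only: the real MLPerf Storage benchmark models specific
    training workloads; `sample_dir` here is just a hypothetical directory
    of serialized sample files.
    """
    paths = [os.path.join(sample_dir, name) for name in sorted(os.listdir(sample_dir))]
    total_bytes = 0
    total_samples = 0

    start = time.perf_counter()
    for _ in range(epochs):
        for path in paths:
            with open(path, "rb") as f:
                data = f.read()  # the storage-to-host-memory step being measured
            total_bytes += len(data)
            total_samples += 1
    elapsed = time.perf_counter() - start

    print(f"samples/sec: {total_samples / elapsed:,.1f}")
    print(f"GB/sec:      {total_bytes / elapsed / 1e9:,.2f}")

if __name__ == "__main__":
    # Point this at a directory of sample files to get a rough number.
    measure_ingest_throughput("/data/unet3d_samples")
```

The samples-per-second figure from a loop like this is the kind of number that then gets compared against what a given accelerator would consume for the workload.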
Smarter creates cooler HPC.
If you're attending SC24 in Atlanta, you can learn how.
Attend Lenovo's new innovation forums on November 19th and 20th.
Open to the public, these forums are designed to showcase Lenovo's solutions that help enable
outcomes, deliver insights, and solve humanity's greatest challenges. Topics include GenAI,
liquid cooling, genomics, weather, and more.
... the benchmark, the storage benchmark results.
And as a journalist, what I would like to see would be standings like
baseball standings or football standings. Here's number one, two. But what I'm seeing really is,
let's say instead I was a buyer. I'm a purchasing person, IT manager, CIO, CTO. What I really need
to do is dig into what is the issue I have, the challenge I have,
the need I have, and really start to match up what the different vendors are showing us in terms of,
say, number of simulated accelerators, all sorts of these different factors that come into play
that drives the result that they get and what is appropriate for that particular buyer.
Am I on the right track here? Yeah. So, you know, I'm going to go pick one at random here just to dig into, right? Yeah.
So let me dive in here. So let's talk about Weka, say row result 58, a WEKApod: you can feed 24 A100s for the 3D U-Net workload, right? So now
3D U-Net, to decode that, is a neural network that looks at medical volumes. I think they're
of brains and then picks out the cancerous and good cells. But from the standpoint of storage, I think the key thing
is that you're loading these 3D volumes that are one to 200 megabytes, and each batch of data is
going to be some number of volumes, right? And the volumes would be randomly fetched off the drive.
And what this result is saying is that Weka system can saturate 24 A100 class accelerators, right? And so that's how you read
it. And so it kind of gives you, and it's a very unique kind of benchmark, right? Like most storage
benchmarks you've seen are probably more IOPS and gigabytes per second, right? Yeah. Yeah. And so
this is just phrased in a slightly different way. Now, one of the reasons that it's not organized like the top 500, which is sort of a rank
ordered list, is it's not clear how you compare them.
And in fact, as ML Commons, we don't really want to be in the business of comparing, right?
Because again, to pick two things, I'm going to pick
two adjacent things here. Submission 58 is a single client Weka system. 44 is a direct attached NVMe
SSD from Micron, right? And it's not necessarily clear that those things are directly comparable
and in which situations they would be, right? Now,
certainly there are some situations where they are comparable, but there's many where they might not
be, right? As a benchmark, MLPerf storage supports both block file and object, right? And so you've
got a wide variety of implementation choices there. So you could have direct attached storage,
you could have network attached storage,
you could have hyper-converged storage. There's so many different options in how you do your
storage. And we ultimately wanted to make a tool that would work for all of those. And being that
inclusive sort of necessitates the intellectual honesty to say that you can't actually make a rank ordered list.
Also, we're a neutral playing field.
So we don't really want to get into the situation of making direct comparisons ourselves.
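One way to read a result like the one above, where a system can feed 24 A100-class accelerators, is as a throughput-matching exercise: the storage system's sustained samples per second has to keep each emulated accelerator busy above some utilization threshold. Here is a rough Python sketch of that arithmetic with placeholder numbers; the actual per-accelerator consumption rates and the utilization threshold are defined by the benchmark per workload and accelerator type, so nothing below is an official figure.

```python
def accelerators_saturated(storage_samples_per_sec: float,
                           accel_samples_per_sec: float,
                           min_utilization: float = 0.90) -> int:
    """Rough count of accelerators a storage system could keep busy.

    Illustrative only: MLPerf Storage defines the real per-accelerator
    consumption rates and the required utilization per workload; the
    numbers used below are placeholders.
    """
    # Each accelerator must be fed at least `min_utilization` of its demand.
    return int(storage_samples_per_sec // (accel_samples_per_sec * min_utilization))

# Hypothetical example: storage sustains 1,200 samples/sec of 3D volumes,
# and one emulated accelerator consumes about 45 samples/sec for this workload.
print(accelerators_saturated(1200.0, 45.0))  # -> 29
```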
So all of this would help explain why I get many press releases from different vendors
saying they came out on top on the MLPerf benchmarks.
And really what they're doing is showing a particular way that they excelled. That's exactly right. Yeah. It's a very flexible tool. I mean,
to give you an example, even in MLPerf training, where every single result is expressed in terms
of time to train and smaller is better, not all submissions are really making the same point.
Like I think there was a really lovely submission a while back
that many people sort of missed the importance of it. And it was a submission using JAX,
which is a newer training framework that's very easy to use for researchers. And it was really
the point of the submission was that JAX as a training framework is very close in performance to
TensorFlow. And so from a marketing standpoint, I think the point they were making is, hey,
JAX is so easy to use. It's so user-friendly, but it gives you the performance of TensorFlow.
And the name of the game was not getting the biggest, the best score and tuning it to the utmost. The name of the game was showing,
hey, this is close enough. It's pretty good. And your productivity will be way better, right?
And so if you were to simply look at the time to train numbers, you'd come to potentially a quite
different conclusion. Well, I mean, at the end, the real benchmark is what you use. So if you
already are using JAX, then it tells you how that performs.
That's right.
Then if something else is faster, it's going to be relevant, right?
That's right. But on the other hand, if something else is 10x faster, yeah, that might tell you something very relevant.
That's right. That's right. Well, that still kind of poses the question of what does it take to switch to it? And the switching cost comes in.
100%. And again, that's sort of the area where, you know, organizationally, we don't have an opinion on those things, and we'll leave it to the vendors to make those points. Now, these benchmarks, like in the old days, the TPC benchmarks, and it looks like they have an AI version too now, they were audited and certified, and it was part of the process. Is that a requirement here? Are they self-certified, like HPL really is? So for anything that's published on our website,
all of our benchmarks have a peer review process
where when you submit the results, you and your closest and friendliest competitors,
customers, researchers, et cetera, who also submitted, get to look at your results.
And one of the cardinal rules of ML Commons is that these benchmark results should be
reproducible, at least as reproducible as we
can make them. And by the way, on the training side, we have submissions that are absolutely
massive that have been through this process, right? So in MLPerf HPC, we had some submissions
on Fugaku, which at the time was the world's number one or number two supercomputer. Some of the MLPerf
training results have well over 10,000 accelerators in them. And in many cases, those have been
through the peer review process and people may have attempted to reproduce those results and
make sure that it's correct as part of the submission and review process.
Got it. Got it. That's excellent. Now, with some of the benchmarks, you can vary the size of the data set to put your
system in its best light.
And certainly these big systems want a big problem to solve.
Yes.
Is that part of the deal?
Do you have like different sizes?
Well, actually, this kind of leads to a whole question about data sets.
I know that you provide some data sets, and I think that sort of regulates that part and
makes it apples to apples.
Can we talk about just the data sets by themselves and also the varying size of them?
Sure.
So on the training side, so the training benchmark is a strong scaling benchmark.
And so it's actually fundamentally different than LINPACK HPL or
Top500, right? And so for listeners who don't know, right, with Top500, the problem size expands
with the size of your computer. And I think you can basically pick the problem size you like.
With MLPerf training, the data set is the data set and you are training to a given accuracy target.
And so that's strong scaling.
So actually with a lot of LINPACK results, most people are able to get 80, 90% peak flops for whatever the size of the system is.
And that should generally be manageable because it is weak scaling by increasing the size
of the dataset.
In the case of MLPerf, the dataset stays the same. So using a larger and larger system becomes ever more challenging
and really stresses the ingenuity of the whole community to come up with different ways of
partitioning the problem, right? So that gives rise to different data parallelism, tensor parallelism,
sequence parallelism, and pipeline parallelism strategies
that all play a role in training. We're starting to see some of that in MLPerf inference as well.
So where the performance might require multiple nodes and then how you partition it. MLPerf
storage is a little bit different. That is actually a weak scaling benchmark where you
increase the data set size based on the scenario that you're trying to model.
And so that would have different characteristics.
Very well.
I have never been a fan of that vocabulary, strong scaling, weak scaling.
But I know it's a standard.
And for our listeners who are like-minded with me and don't like it, you're going to have to go spend some time on Wikipedia and other articles to wrap your head around which one means what. And also Amdahl's law and Gustafson's law
and things like that. But yeah, there are different ways of getting good performance.
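For readers who want to pin the vocabulary down, here is a small Python sketch of the two classic formulas behind it: Amdahl's law for strong scaling, where the problem size stays fixed, as in MLPerf Training, and Gustafson's law for weak scaling, where the problem grows with the machine, as in LINPACK or MLPerf Storage. The 95% parallel fraction is just an illustrative parameter.

```python
def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Strong scaling: fixed problem size, so the serial part caps the speedup."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_workers)

def gustafson_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Weak scaling: the problem grows with the machine, so speedup grows almost linearly."""
    return (1.0 - parallel_fraction) + parallel_fraction * n_workers

# Hypothetical workload that is 95% parallelizable.
for n in (8, 64, 1024):
    print(n, round(amdahl_speedup(0.95, n), 1), round(gustafson_speedup(0.95, n), 1))
```

The contrast is the point being made above: with a fixed dataset and accuracy target, ever larger systems run into the Amdahl-style ceiling unless the community keeps finding new ways, data, tensor, sequence, and pipeline parallelism among them, to shrink the part that does not scale.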
Yes. David, what does it typically cost to run these benchmarks, including equipment,
personnel, time, reporting, et cetera? Do you have an idea about that?
That's a good question. I mean, some of it obviously scales up with the size of the system.
I don't want to hazard a guess of how much it actually costs to run MLPerf HPC on Fugaku.
I think it is public what Fugaku costs.
Well, maybe we can ask how long it typically takes to run these benchmarks? Yeah, it varies based on the system.
I think for a lot of the training benchmarks, which are probably the most expensive to run,
many of them are running in an hour, some of them under a minute.
Properly.
In principle, theoretically speaking, in a couple of hours, you could go run it and
you're done, modulo whatever storage setup it needs. Sure. I think one of the challenges for a
lot of large systems is things like preparing the system are non-trivial, right? So clearing caches,
making sure that there's no failures in the system and that they've all been cleaned up.
But yeah, I mean, the runtime of the benchmarks is designed to be relatively manageable.
Well, I had imagined hours.
Yeah. I mean, if you were running on a really slow system, there are some benchmarks that
could take hours and hours. That's like if you're running MLPerf inference on a Raspberry Pi.
Right. There we go.
Again, that is the cost and downside of providing a great deal of flexibility.
So David, let me ask a very unfair question, possibly. Is there an overall pattern or trend
that you're seeing in the totality from the benchmark results? I assume other than simply
more powerful, faster performance, but are there other trends that you're seeing?
Gosh, that's a good one. I mean, I think one of the most interesting things in terms of trends
is just the rate of progress we've made over time has been pretty spectacular. And I think
the easiest to illustrate that with is MLPerf training because it's the oldest. And again, the first round of submission was December 2018.
And I think if you look over time, there's a great graph that I have that I use in a lot of
my public talks that shows sort of what's the performance of the best result over time. And I superimpose
that against Moore's law. And if you look at that sort of timeframe, and if you make the assumption that every incremental transistor linearly adds performance, right, which may not be fair, but, you know, that's a generous assumption, right? In that same timeframe, I think Moore's law performance is up by maybe four or five, six x, and the MLPerf performance is up by somewhere around 50x.
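As a back-of-envelope check on those round numbers, treating them as figures from the conversation rather than official MLPerf statistics: roughly six years from the first training round in late 2018 to late 2024, and a roughly 5x gain from Moore's-law-style scaling versus a roughly 50x gain on the benchmark, imply very different compound annual rates.

```python
# Back-of-envelope: compound annual improvement implied by the round numbers
# mentioned above. These are illustrative figures, not official MLPerf data.
years = 6  # roughly December 2018 to late 2024

for label, total_gain in (("Moore's-law-style", 5.0), ("MLPerf training", 50.0)):
    annual = total_gain ** (1.0 / years)
    print(f"{label}: ~{annual:.2f}x per year")
```

That works out to roughly 1.3x per year versus close to 2x per year, which is the sense in which the benchmark results have been outrunning Moore's law.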
And so I think one of the things that's most interesting to me is what it says: when you look at MLPerf training and all the tools that you have to attack the problem, we're obviously using so much more than just pure brute force scaling, right? We're using data sets and optimizing them, how they're sorted.
There's new silicon fabrication technologies that give us the advantage of Moore's law.
There's new architectures, dedicated matrix multiply units.
There's software optimizations, new algorithms.
There's just so much that goes into it. And I think one
of the things I'd say that to me is constantly very impressive is the degree to which there's
just a huge amount of ingenuity, and everything's on the table, for really driving this level of
optimization. And so I think that's a really fun and exciting story. Yeah. Is there any
geographic pattern in terms of submissions? You mentioned Fugaku. That's kind of cool to have.
And of course, most of the infrastructure vendors are based in the US. Nevertheless,
how global is this? There's a huge amount of participation in Asia and in the US, and then some in Europe and some in Africa as well. So
we've got a good global balance, but I think if you were to look at the most active regions,
it wouldn't be surprising. Yeah. Yeah. By the way, David, early in our conversation,
I mentioned 2018 AI winter. And as I think back on it, you're right. I think AI, the AI winter talk, I want to
say kind of died out two, three or four years before. But did you ever, and your colleagues
ever think there'd be this explosion we're seeing now back in 2018, how AI has just been taken over
and is so dominant? Yeah, there definitely were people who were of that view for sure. Absolutely. One of the people who helped found MLPerf is a gentleman named Greg Diamos, who is actually one of the people behind the early work on neural scaling laws, which pointed very much in the direction that we found ourselves. I would say
I was a bit surprised by the degree of uptake in ChatGPT, right? Like, again, these neural
scaling laws, the general trend that more data produces ultimately, in some cases, qualitatively
different results, that's been known for a long time. I've been aware of that for a while. But I was a little bit surprised when it became sort of common
dinner table talk. And the things that I know the greatest about are semiconductors, machine
learning, computer performance, energy efficiency, all of those things. I think when we started out in 2018,
a lot of them were kind of niche topics and suddenly every single one of them is in the
spotlight. So it's actually been truly a delight. Yeah. It doesn't look like it's slowing down at
all and it's just going to keep getting better and better. There is some concern about, are these AI
companies making money? Are they going to run out of runway? Where's the ROI
for this? As you move to the enterprise, are they going to have a good ROI for the projects,
et cetera? But I think all of those conversations are not predicting any slowdown. They're just
predicting, like, ironing out wrinkles, basically. Yeah. I mean, I certainly hope that there won't
be a slowdown, but I, again,
when I look at some of the absolutely fundamental changes that have gone on in society, things like
the internet, I lived through the dot-com boom and bust. Yeah. And that certainly could happen again.
But I think the thing that's clear to me is just the impact and value and just tremendous capabilities that
AI brings to the table. Like one of the things that, you know, someone, I was at a party and
someone asked me, why do I do what I do? And first of all, it's really fun. I work with fantastic
people, but then there's sort of the, what's the impact on society angle? And one of the points I made is that if I just look at something that I have day-to-day
experience with, I am not a perfect driver.
I'm pretty good, but not perfect.
And the reality is that if you live in San Francisco and you have the opportunity to
ride in a Waymo, it's an incredible experience.
And I fully believe that things like autonomous vehicles will
ultimately be safer and more efficient than human driven vehicles, right? There's plenty of humans
who get DUIs. There are humans who get distracted; I've gone through stop signs before because I was distracted.
It happens to all of us and to err is human. And of course, computers make mistakes too. Absolutely. But they
don't get tired. They don't get angry. They don't get drunk. Every so often, they absolutely have to
be rebooted. But when I think about the potential, there's tens of thousands of people who lose their
lives on the road every year. If we cut that down by an order of magnitude, that would be
just a tremendous gift to society.
Yeah, that's right on.
That's right on.
All the autonomous vehicle vendors will get sued, but that's another problem.
Well, actually, the legal aspect of who gets sued, if there is a fault, is very real.
But I think the roads will be safer if there are no human drivers at all, if it's just
entirely machines. And I think we will maybe get there someday, but it's happening.
That's for sure.
Yeah.
Yeah.
Or, you know, if nothing else, you know, even giving computers really the ability to help out, I think could be fantastic.
Right.
Like super, super augmented driving.
Yeah, exactly.
Yeah.
And I mean, the reality is there's just so many things
that a computer and a human together
are so much better at than either on their own.
Yeah, for sure.
Yeah.
Okay, David.
So why don't we end with the question
looking out ahead for ML Commons, ML Perf, what's next?
Yeah, so we've got a pretty
exciting time teed up. I mean, when I look at sort of where we're growing, well, in the short term,
right ahead of Supercomputing 2024, we're going to release MLPerf Training results. And I've
already seen them. They're pretty exciting. I hope you guys will join us for the briefing or at least
looking at the results. And so near term, that's one of the most exciting things.
Then looking out a little bit further, the release of our MLPerf client benchmarking
app, I'm pretty excited about.
And then really taking us into new territory, whether it's MLPerf Automotive, the next
generation of storage, or MLPerf Networking.
All of these are potential areas where I think we just have a fantastic
opportunity to help make the world better. So we're going to look forward to the new MLPerf
benchmarks that you mentioned and we'll look for them at SC24. Are you doing anything special at
SC24? I hope to be there. And there's going to be both an MLPerf Birds of a Feather session and a Future of Benchmarking
Birds of a Feather session.
I think they're both on Wednesday.
And so if I'm there, I'd love to see anyone else who's interested in the topic.
Those are great topics.
I look forward to being there with you.
That's great.
Fantastic.
All right.
Thank you, David.
That was great.
See you guys later.
Take care.
That's it for this episode of the At HPC Podcast.
Every episode is featured on InsideHPC.com and posted on OrionX.net.
Use the comment section or tweet us with any questions or to propose topics of discussion.
If you like the show, rate and review it on Apple Podcasts or wherever you listen.
The At HPC Podcast is a production of OrionX in association with Inside HPC.
Thank you for listening.