@HPC Podcast Archives - OrionX.net - @HPCpodcast-73: Peter Ungaro – Industry View
Episode Date: November 2, 2023

We are delighted to have a rare opportunity to catch up with none other than Pete Ungaro, long-time luminary and admired leader in HPC/AI. In this episode of Industry View, we cover many topics including the Cray journey, the HPE acquisition, the opportunities and challenges of AI, and the geopolitics of high tech.

Audio: https://orionx.net/wp-content/uploads/2023/11/073@HPCpodcast_Pete-Ungaro_Cray-HPE20231102.mp3
Transcript
It became pretty clear that the opportunity in front of us was bigger and bigger, especially as the world of AI started to come into play overall. And all along, we have been talking to many different companies throughout our journey about partnerships and such, and HPE was one of those. And those discussions just started to turn into acquisition discussions, which is what we did at Cray.

And what we're finding is that more and more applications need that kind of infrastructure. And I think AI and large language models and all of those things have just brought a whole other set of applications that need that kind of an infrastructure.

We were more about uniqueness and doing something different than about just huge scale at low cost.
From OrionX in association with Inside HPC, this is the At HPC podcast.
Join Shaheen Khan and Doug Black as they discuss supercomputing technologies and the applications,
markets, and policies that shape them. Thank you for being with us.
Hey everybody, I'm Doug Black. Welcome to the At HPC podcast. I'm with Shaheen Khan.
Shaheen, great to be with you again.
Delighted to be here, Doug.
Yeah, and we have a really special guest with us today, Pete Ungaro, long-time HPC luminary,
former CEO of Cray, and then became SVP GM at HPE after the HPE Cray acquisition.
Welcome, Pete. So glad to be with you today.
Hey, guys. Thanks for having me on. I really appreciate it.
Really delighted. Delighted that you made it happen, Pete.
Maybe we could just start off, Pete, if you could kind of catch us up a little bit on what you're doing, what you've been doing since leaving HPE in 2021.
Yeah, sure. I had a great time at HPE after they acquired Cray, which was in September of 2019. I was there about a year and a half and ran different parts of that business, including, of course, the HPC business and HPE Labs and AI and a bunch of other areas. And then I decided to step away, and mostly I have been doing advising and consulting with a number of different companies, from public companies to startup companies, including some quantum companies and everything in between, as well as being a board member at SourceCode.

Now, Pete, something you led was really the whole Cray journey. I remember when you took over Cray, it had just spun out of SGI;
it was early days of that. And it had no IP, not a whole lot there. And then, 10 or 15 years later, HPE buys it for well over a billion dollars. So that indicates a whole lot of work that I'd love to get a glimpse of.

Yeah, it was without a doubt the most challenging and also the funnest part of my career. You know, I had come from IBM;
I was at IBM for almost 15 years before I went to Cray. And so coming from a huge company to a pretty small company, just over 1,000 people at the time, was a huge shift for me overall. But one thing that we had was just an incredible history, of course, at Cray, but also incredibly
smart engineering talent at the company. And I was really excited about starting
that whole journey with some really smart engineers. Coming from the sales side of things,
I always felt like if we had great products, we could do great things. And so that was really
what encouraged me to join there and got me really excited about that journey.
And then pushing ahead, I guess, you know, a decade, HPE came along.
Tell us a little bit about that acquisition and how that all came together.
At Cray, we had to do a pretty major turnaround. We were struggling financially when I joined the company, and we were struggling to figure out our technical direction. And once we got that going and we really honed in on where our future was going to be, in this whole new world of building high-performance infrastructures for supercomputing, HPC if you want to call it that, and AI,
it became pretty clear that the opportunity in front of us was bigger and bigger,
especially as the world of AI started to come into play overall. And all along, we have been talking
to many different companies throughout our journey about partnerships and such, and HPE was one of
those. And those discussions just started to turn into acquisition discussions about what the combined companies could do that neither individual company could do on its own. It became a really exciting opportunity, both for taking the technology to a much broader
space, leveraging the incredible talent that we have within the Cray team,
not only in the R&D engineering team, but across the company,
and leveraging the reach and buying power of a company like HPE.
So it became a pretty exciting opportunity to think about bringing those two companies together.
And what could you do from there?

At the time, this was in the backdrop of Cray basically being four out of three with all
the exascale deals that were going on around the world, certainly in the US. And it was a little
unexpected if you were more than a couple of steps away from it that, oh my God, Cray is like winning
every one of them, really? So that must have played
a role in becoming suddenly very attractive, because HPE was also pursuing those deals very heavily. Maybe that whole exascale journey is another aspect of this.

We can maybe say, Shaheen, that we were maybe a little bit victims of our success at the time. We had built some really incredible technology, and we had partnered with Intel on the Argonne exascale system.
We had independently won the exascale systems at Oak Ridge and Lawrence Livermore at the time.
I think we were three for three at the time, just to clarify.

There was a fourth one.

And so, you know, it became, I think, a really interesting opportunity. Because if you think about the overall size of the company that Cray was, racking up basically about a billion dollars of wins for a company that wasn't doing a billion dollars in revenue became, you know, really interesting, I think, from a financial perspective, in terms of what a company like HPE could bring to the table that a smaller company just couldn't do financially itself.
I think we had our own game plan, so we were ready for it, but it became, I think, a lot clearer and
a lot easier with the HPE partnership.
And all of this was also when COVID had hit.
So that must have totally...
Yeah, COVID was right after.
So we ended up combining the two companies in September and then COVID happened, let's say March of the following year.
So it was really a challenge. I think it made the integration much more challenging, having to do that inside of COVID restrictions versus, you know, how you would normally just get a lot of people together in the same room and work things out. So definitely, I think COVID added to some of the challenges.
And I always think about how I would have run Cray in COVID times, but I never had to do that. HPE did an amazing job,
I think, of managing the COVID situation around the world.
Very complicated supply chain too, right?
Oh my gosh, crazy. I mean, it just became absolutely crazy during that time, especially,
you know, at Cray, we were bringing out a brand new product. So virtually every part of that new offering
was brand new. And so we had to start the supply chain almost again from scratch.
And the HPE team was a huge help in being able to get supply. Just because we could buy in way more quantity, we could leverage that in a way that we probably couldn't have at Cray.
So Pete, given your history in HPC and your view of the whole landscape, let me throw this out to you. If I were to name a theme, a very powerful theme that's run through HPC, defined broadly, over the last, say, 20 years, this century, it would be increasingly moving from niche into something of a mainstream, having broader and broader impact, so that HPC is less exotic. I'm curious what your view would be.

Yeah, I think about it a little differently.
While I agree with what you're saying, the way that I think about it is more from an architectural
perspective. I think there's only a few different ways to really develop systems for different things, right?
So, for instance, if you have a big, huge database, you want to run that on a very large SMP-based system.
And so you could build very large SMP systems to do that, like they have at HPE or Oracle and such. If you have applications that run on a single server and can easily be distributed across many, many servers, and you don't need to communicate much between those servers, then you build out scale-out infrastructures. I would say the cloud is the best place to go look for amazingly optimized scale-out infrastructures.
And then for applications that, you know, are not so easily split up over many, many nodes, where you have to communicate between all those processors a lot, you have to build much more highly interconnected architectures, which is what we did at Cray.
And what we're finding is that more and more applications need that kind of infrastructure.
And I think AI and large language models and all of those things have just brought a whole other set of applications that need that kind of an infrastructure. And so, yeah, I think HPC or
supercomputing style infrastructures are the ones that handle those kinds of applications the best.
And so you're seeing a lot more play or a lot broader market for those kinds of machines,
which is a huge premise of why HPE acquired Cray, right? There's a huge growth in that area of applications.
And so, you know, you need a whole different technology.
I think it's a big reason why NVIDIA bought Mellanox. It's very, very similar: to be able to build out those kinds of systems and those technologies.
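To make Pete's taxonomy concrete, here is a minimal sketch (our illustration, not anything from the episode) contrasting the two distributed patterns he describes, assuming Python with mpi4py. The scale-out workload needs no inter-node traffic, while the tightly coupled one synchronizes every iteration, which is exactly where a fast interconnect pays off.

```python
# Contrast of distributed patterns; run with: mpiexec -n 4 python patterns.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Scale-out style: each rank processes its own shard, no communication at all.
local_data = np.random.rand(1_000_000)
local_result = np.sqrt(local_data).sum()  # embarrassingly parallel

# Tightly coupled style: every iteration requires a global reduction, so
# wall-clock time is dominated by interconnect latency and bandwidth.
x = np.random.rand(1_000_000)
for _ in range(100):
    local_dot = x @ x
    global_dot = comm.allreduce(local_dot, op=MPI.SUM)  # all ranks synchronize
    x /= np.sqrt(global_dot)  # normalize the globally distributed vector

if rank == 0:
    print(f"{size} ranks; the per-iteration allreduce is the coupling cost")
```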
Now, one thing that I know you put a lot of emphasis on at Cray was building the software part of the business. And by the time HPE acquired it, it really was a big competitive advantage. I'd love to hear more about how you went about that, and the evolution that must have happened with the convergence of the software stack between commercial and HPC as well.
Yeah, I would say, you know, when I came to Cray, a huge focus of the software was just supporting the next generation hardware.
And so it was kind of subservient to the hardware in many, many ways.
And one thing that we did was really change that, really flip that, almost. So the software became the huge driver that carried over across machines, and you could bring applications forward and have a really interesting environment. And over time, that software shifted to be able to work in more distributed infrastructures and, you know, to leverage some of the great work that was
going on on the cloud and those technologies within an HPC stack, and also be able to open up
so that you can run these different types of applications like AI in this infrastructure.
Because when it started, you couldn't. I remember we did many projects with some of our customers just trying to run different, even HPC-style, applications on this software infrastructure, and it wasn't very conducive to it. So it took a number of iterations over a dozen years to be
able to really morph the software into something that was flexible and really highly optimized and took advantage of
the underlying technology. You know, a huge bunch of the software today, especially in the commercial
space, doesn't really take full advantage of the underlying hardware technologies. Like if you can
pass messages very fast, well, then in what ways can you use a different algorithm to optimize your application
on the machine? You can do very different things, right? And so how do you build a software
infrastructure that really takes advantage of that and speeds up applications so that you don't have
to scale them out so high to get the same performance level? That's a huge price-performance advantage. About the most energy-efficient computing you could ever do is to run on fewer processors and get the same performance, right? So we really worked on things like that, which, you know, I would say brought the software infrastructure a huge step forward.
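A back-of-the-envelope illustration of Pete's point (our numbers and model, not his) in Python: with a fixed per-node communication cost per step, doubling per-node efficiency through better software can match or beat running on twice as many nodes, at roughly half the energy.

```python
# Toy strong-scaling model: compute shrinks with node count, communication grows.
def time_per_step(nodes, work=100.0, comm_per_node=0.05, node_speedup=1.0):
    """Time for one step: work divided across nodes (boosted by software
    efficiency) plus a communication term that grows with scale."""
    return work / (nodes * node_speedup) + comm_per_node * nodes

baseline = time_per_step(nodes=64)                      # plain stack, 64 nodes
optimized = time_per_step(nodes=32, node_speedup=2.0)   # 2x efficiency, 32 nodes

print(f"64 plain nodes:     {baseline:.2f} time units/step")   # ~4.76
print(f"32 optimized nodes: {optimized:.2f} time units/step")  # ~3.16
# Same or better performance on half the hardware: the price/performance and
# energy advantage Pete describes.
```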
So looking at systems now, advanced systems, there's some talk in HPC circles that we're really looking for
new technologies that will kind of break this dense, hot system kind of logjam. Maybe a new,
you know, optical interconnect, an idea like that, or the maturity, say, of quantum where
certain workloads can be offloaded to augment classical HPC. Do you have views on that? That is, is there a technology under development, possibly coming online any time in the near term, that could sort of push the whole field forward?
I think that there's a number of technologies that show promise in different areas.
And many people have been working on optical interconnects; those have been worked on for many, many, many years. But the way that I think about it is, if you think about data
centricity and being able to manage your data, I think that's going to be done on a very traditional
style system just because of all the data management infrastructure, all of that software kind of needs all of that environment.
And so what we're talking about is how do you speed up the actual application or parts of the application by maybe offloading them to processors or technology that can really speed things up from a processing perspective, whether that's a quantum system,
whether that's an AI accelerator or, you know, an FPGA. There have been many different technologies out and about working on that, and many cool startups trying to find application niches for their technologies.
The way that I think about it is that these things will be hanging off
a more traditional infrastructure, which is why I think it's so important to think about the
integration of these things. A lot of people, when they talk about quantum computing,
think about standalone quantum computers. And I think that's not really viable. I don't see that
that's really the workflow that people are going to use, because of the data side of things and how managing the data is the most important piece of all of this. These things will complement a more traditional, let's call it, infrastructure, whether that be super high performance, like what we were doing at Cray, or whether that be a cloud infrastructure, you know, a scale-out distributed infrastructure that's being built, or some combination of those two things. So I think
about these technologies as not coming in and just completely changing the game, but really impacting different applications or different pieces of the workflow,
and not really changing the whole workflow itself. So it'll be kind of an incremental process.

Technologies will be brought into the mix as they mature, but the classical core will kind of remain in place?

For people doing a broad set of applications,
absolutely. If you're somewhere that, you know, is running one application, well, then you can
build a more specific infrastructure for that application, right? Or if you run a huge amount
of one application or one specific algorithm, yeah, you can definitely optimize for
that. But most people on these infrastructures are running a very broad set of different things,
whether that be a major company or a national laboratory or university, you know, they're
typically running a broad set of things. And so in that case, yeah, I do feel that's the path. I mean, how many government
projects have we had over the years to really try to come up with a massive change? And at the end
of the day, it's always ended up being more, how do we make the existing infrastructures better
and just increment on that?

It reminds me, many years ago, there was an effort called
integrated heterogeneous processing,
where you had all these modules that were good at one thing, and then the challenge was to
integrate them. So it's like a symphony that you orchestrate into running that app,
and then recompose to run another app. Maybe the era for that is coming.
Yeah, I think about that a lot. I think that's how it goes: we, you know, make these integrated with the overall workflow so that you could carve off those applications, ship them to a quantum system, let's just say in this example, and then bring that back into your overall workflow and manage your data together, so it's not an island unto itself, right? And so I think that's the way that things will move going forward. And we'll see more and more heterogeneous
computing. I mean, we see it, you know, with GPUs and CPUs, right? GPUs kind of started out that way. And now, you know, the software has gotten better and we've been able to make them more mainstream, especially for certain applications, AI being a great example of that, right?
Yeah.
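As a thought experiment, the orchestration Pete describes might look something like the toy sketch below (entirely hypothetical API and names, ours): the classical system owns the data, carves off a suitable stage, ships it to a specialized backend, and folds the result back into the shared workflow.

```python
# Hypothetical workflow dispatcher: classical core, specialized offload stages.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    name: str
    fn: Callable[[dict], dict]
    backend: str  # "classical", "quantum", "ai_accelerator", ...

def run_workflow(stages: list[Stage], data: dict) -> dict:
    """Dispatch each stage to its backend; the data stays managed in one place."""
    for stage in stages:
        # A real system would submit to a scheduler for the target device;
        # here every backend just runs locally for illustration.
        print(f"[{stage.backend:>14}] {stage.name}")
        data = stage.fn(data)  # the result folds back into shared state
    return data

pipeline = [
    Stage("ingest + clean", lambda d: {**d, "clean": True}, "classical"),
    Stage("optimize subproblem", lambda d: {**d, "opt": 42}, "quantum"),
    Stage("train surrogate", lambda d: {**d, "model": "v1"}, "ai_accelerator"),
    Stage("analyze + archive", lambda d: d, "classical"),
]
print(run_workflow(pipeline, {"dataset": "sim_output"}))
```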
You mentioned interconnect and NVIDIA acquiring Mellanox.
I remember Cray, of course, always had its own interconnects in a really big way, even going back to the vector processing.
And then I think you had some IP and technology that was sold to Intel,
and you went on to design Slingshot. And that's the interconnect that is now running these
exascale systems. Why not use Mellanox? What was the thinking about let's do it ourselves in-house?
Yeah, I mean, Mellanox has incredible technology, right? What we found was for the very, very tightly
integrated infrastructures that we wanted to build, we felt like there were some technologies where we could do better than what Mellanox was doing and build a more unique infrastructure, while also using Mellanox technologies. I mean, we built many
standard, you know, kind of more commodity-style clusters with Mellanox technology; that was a huge part of our business overall. But for really highly scaled-out systems,
we could do some unique technology that made that even more special and more unique. And at Cray, if you're going to buy
something that you could buy from everybody else, that wasn't really going to be where Cray was
going to be hugely successful, right? And so we really needed to build some unique technology and
have some unique capabilities that nobody else had. Slingshot was a great example of that.
And it's also a great example of what I talked about with making our software more mainstream. The same thing with our interconnect, right? We moved to an Ethernet-based interconnect, which everybody had and which was, you know, much more interesting to a much broader scope of customers and people than the kind of very unique technology that we had before, which wasn't broadly used and broadly built
into infrastructure.

So is it fair to say that the design center for what you needed to do
needed to be way at the high end and what was designed for commodity would just not scale so
easily when you went that high up? Is that a fair statement?

It was my personal feeling that we were
going to be successful if we built something that
was differentiated and unique. And there were many, many companies out there that did, you know,
commodity technology. HPE was a good example of that, right? They had a huge HPC business with
commodity technologies. And why would you come to Cray instead of HP? Well, there are certain reasons
that people did and, you know, we would have some customers there, but that was never really going
to be a huge part of our business. And it's not a very high gross-margin business. So it doesn't
really bring a lot to the bottom line of a company. And so you had to really think about the business
side of it too, as we brought that all together. So we were more
about uniqueness and doing something different than about just huge scale at low cost.
Right. Now, with your long-term involvement in the success of the exascale project in the U.S., with the three systems, one online, two being installed, and we just observed Exascale Day, I'm curious if you have thoughts on DOE's strategy for the next generation of leadership-class systems. They're trying to move away from monolithic toward modular, although we're still talking enormously powerful systems. But can you share some thoughts on the direction DOE's going?
I'm not as close to it as I used to be, obviously, but I would say that the path that they're
going down, as I understand it, makes a lot of sense.
It's very similar to what we're talking about here today, which is having some different
technologies as parts of the system that you would integrate in to optimize certain application
areas that aren't doing so well on the standard technologies
that you can get today. So I think it makes a ton of sense to me. But the whole challenge is going
to be how do you integrate that and make that work together in a very tightly integrated way.
And to do that, you need to build a software infrastructure that can handle all of that.
And that's a huge part of what we were trying to build at Cray.
And I think that more and more, that's really a huge piece of what's going to determine
who's going to win the next round of multi-exascale or zettascale machines going forward is, you
know, who's going to build that kind of an infrastructure.
Another thing I really loved about your Cray journey was the comprehensiveness with which
you went after these things.
Interconnects, we will do it.
System, we will do it.
Flexible architecture, choice of technology, we will do it.
The other thing that you did was storage, right? There, also, you started building more and more IP.
I'd love to hear that part of the journey as well.
We probably did too much at the end of the day. But what we really were focused on was, I think, being really close to our customers. And we really tried to listen to what their issues were. And more and more, as we built out these systems, the storage
infrastructures became huge issues in these machines, right? Like how do you have a parallel
file system and manage data across this whole machine? And this became a massive, massive
endeavor. And, you know, we just didn't feel like there were a lot of good offerings out in the market that were doing that, that were integrated with what we were doing. And so,
yeah, we decided to jump into that game too. And that became a really important part of our
business overall. And actually a really successful part of our business was building out, you know, Lustre-based storage infrastructures, and thinking about archival and things like that, through partnerships and some building of our own. And a lot of integration work, you know, was a huge part of
what we did.

You mentioned types of HPC architectures and how some are very well suited for AI. I'm just curious about when the whole ChatGPT thing blew up last November. I mean, we saw some indications it was coming, certainly from Google and LaMDA. But I'm just curious if it came as surprising for you personally as it did, I think, for the rest of the world.
Well, you know, at Cray,
we were building systems for customers in that environment,
something we can't talk about too much because we did it all pretty quietly. But we were involved
in that at a pretty early stage. So it wasn't as earth-shattering. But what was earth-shattering
was, I think, just how quickly it all got picked up because we really hadn't seen that.
You know, even if you think about it, there just weren't the applications. I mean, people were doing deep learning models and stuff, and that was really kind of struggling to get picked up quickly. It was fast-growing, but still not huge.
But then ChatGPT came on board and just how quickly people embraced it and started to
use it.
That was stunning for me, for sure.
It was on fire.
Absolutely.
Yeah, yeah, yeah.
But that it was happening and what they were doing and such wasn't so stunning just because
we kind of knew, right?
We were involved in that.
We were building some systems for customers in that space and such.
So another topic I want to raise is HPC at the edge, the so-called edge. I know that HPE,
of course, had edge to cloud, which continues to be a big mantra over there. And there's a notion
of a mobile edge and near edge and far edge. And, you know, are we talking about door sensors or
big clusters that are sitting out there? And of course, the other angle is that these scientific instruments are themselves
becoming more and more capable in terms of computing.
And how do you see HPC at the edge
or really stuff that sits outside of a data center?
Yeah, I think it's really important to define the edge
because a lot of what people call edge computing
is still in data centers.
And I don't really think about that as edge computing as much, but it technically is. But when you have more and more capability out there with,
whether it be scientific instruments, whether it be just radars and sonars and just
automobiles and planes and everything's becoming a computer, right? So the closer you can be to computing
that data, the better you're going to be, right? And you can offload huge amounts of what needs
to go back into the data centers by computing out on the edge. So I believe edge computing is going
to be bigger and bigger over time. One of the things that I was really excited about in my HPE journey
was being able to lead the edge group along with the HPC and AI groups. So you can see how those
things could come together over time. At SourceCode, we have a great edge computing program, and we're doing the same, really thinking about how we take computing to the edge, and serious computing at that, not just super lightweight computing but, you know, GPUs and really highly capable systems, all the way out to mobile deployments, whether that be, you know, in harsh environments or just closer to where the data is being initially brought in, and then offloading data centers to do things quicker. So I think that this is going
to be a bigger and bigger thing. And I think, you know, the whole autonomous vehicle piece of things
is going to even blow up edge even more. I mean, a car is mostly a computer these days, right? So I think it's
going to be a bigger and bigger part of our future. Something I'm very excited about.
I think every wheel of the car is a computer.
Pete, you also mentioned energy at some point, and power, cooling, all of that also continues to be a big deal, especially when you scale it up to Shasta levels. What do you see happening there? Is liquid cooling ever going to get standardized and common? What are the dynamics and customer challenges you see there?

I really believe liquid cooling is going to become more
and more important as we go forward, just because the power of these, you know, the wattage of these
processors and GPUs and such is just getting
higher and higher and higher, right? And so in order to efficiently use that, you have to end up
cooling it in different ways than just air, you know, blowing air as fast as you humanly can
across the motherboard, right? So I really believe more and more that there's going to be a consolidation of data centers
to ones that can handle liquid cooling, and the ones that can't will have less high-performing infrastructures in them. And there'll be plenty of those.
You know, lots of the enterprise style infrastructures don't need all of that
capability. And so you'll be able to put them in these lower performing kind
of data centers. But I think in the future, you're going to see more and more data centers moving to
liquid cooling and just so that you can take advantage of the new technology. Otherwise,
you have to clock things down to be able to run them. And who wants to spend 100% of the money for a new GPU but not be able to run it at 100% of its capabilities? And pretty soon we're getting to where, if you put a few of those into a machine, you're going to have to liquid-cool it whether you want to or not, if you want to run it at 100%. So that's kind of, I think, just where we are. The financial story there, you know, the total cost of ownership of liquid cooling, is a huge win already for customers. And so it's really about having data centers that can handle it, and adoption.
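Pete's "pay 100%, run at 70%" point reduces to simple arithmetic; here is a quick illustration in Python with entirely hypothetical prices and derate factors (ours, not his).

```python
# Back-of-the-envelope cost per unit of delivered performance.
gpu_price = 30_000.0   # assumed accelerator price, USD (hypothetical)
air_derate = 0.70      # assume air cooling forces ~70% of peak clocks
liquid_derate = 1.00   # assume liquid cooling sustains full clocks

cost_per_perf_air = gpu_price / air_derate        # $ per delivered perf-unit
cost_per_perf_liquid = gpu_price / liquid_derate

print(f"air:    ${cost_per_perf_air:,.0f} per perf-unit")     # ~$42,857
print(f"liquid: ${cost_per_perf_liquid:,.0f} per perf-unit")  # $30,000
print(f"penalty: {cost_per_perf_air / cost_per_perf_liquid - 1:.0%}")  # ~43%
```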
Do you agree with the statement we've been hearing, that the data center of the future will look increasingly like a supercomputer?

That's a great question. I think at least a
portion of the data center,
you know, it gets down to the applications that you're running. Again, if you have applications
that are easily distributed over a broad number of systems or nodes or processors, and they don't
need to communicate a lot with each other, you don't need to have that kind of an infrastructure.
You would waste money by building that kind of an infrastructure. But if you're running
applications that do need to communicate a lot, and whether that be a large language model,
whether that be, you know, science and engineering style application that you're scaling up or
whatever, well, then, yeah, it's definitely going to go in that direction. So I would say yes, that's right for those kinds of applications, for people that need to run those kinds of applications.
And of course, most people run a mix, right?
So you have part of your applications that need that kind of capability and part of them that don't.
And so, you know, you want to either run them in different data centers,
maybe you run the ones that don't on the cloud and you have, you know, your own high performance
infrastructure for those that do or more and more, I think over time, you're going to see
cloud guys building more high performance infrastructures. It only makes sense,
right? Just because those applications are becoming more and more important and more and more broadly used.

Shifting to the AI debate: will these technologies serve humans for a longer period of time, or will they make us obsolete and turn us into a subspecies, where it's an existential threat and a Darwinian mistake? Where do you land on that discussion?
You know, I really think that overall, it's going to be a huge positive. But what I'm really proud
of is that the companies that are out in front of the AI space and building these large language models and stuff
are the ones also banging the drum that,
hey, these could be used for bad stuff
and you can have issues here.
And so them realizing that upfront and pushing for that,
you know, that's really great.
I mean, you know, the OpenAI guys are in
front of this. I mean, it's super good to see because there's definite negatives, right? You
can come up with many, many scenarios about how it can be used negatively. So overall, I'm an
optimistic guy. So maybe that's part of it. But overall, I think it's going to be a huge positive
in the world, but it's definitely
going to have issues. And I'm super glad that people are getting in front of this. One of the
most impactful articles I ever read was by Bill Joy, who was the CTO at Sun. He wrote this article about the potential dangers of robotics and, you know, these technologies that we were building, many years ago now, a couple of decades ago now, and biotechnology and things like that, genomics, you know, what could come of that.
And I just think that it's really important to have people like him at the time, and now like the OpenAI guys. And you see this with everybody jumping
on board about safe use of AI technologies and such. I just feel like that's what you need is
people that are really driving this out in front, also explaining very clearly about the issues that
it can create and that we need some governance around this stuff.
And the government's going to have a lot of challenges in dealing with this. But it really makes me feel good that people are talking about it and not just ignoring it and trying to take the most financial advantage of it.

Right. That's a great thing that's happening right now. Are you referring to Bill Joy's article, "Why the Future Doesn't Need Us," or something like that?
Yeah, I can't remember the name of the article, man. It's been, it's got to be, I don't know, 20 years or maybe more.

It was, yeah, maybe more.

Yeah, I hate to say anything more than 20 because it makes me feel really old. But yeah, it was an article, I think it was published like in Forbes or Fortune or the Wall Street Journal.

Wired. I think it was in Wired.

Oh, Wired. Yeah. Maybe it was in Wired. Yeah.
I just felt like it was a great article about, hey, if you take this further and we don't really think about things, you know, there are ethical implications to a lot of this stuff, and we need to be thinking about these things, right? And I think that's
what people are doing now with these large language models and AI is, hey, these things
are amazing, but there's also some ethical issues that we got to be dealing with and we got to start
thinking about and get in front of. So I feel good about that.

Speaking of governance, Pete, one of the final questions is geopolitics.
Definitely playing at the levels that Cray does, it was part and parcel of everything. What's your
perspective on what's happening with trade wars and geopolitics and the state of the world and
how technology is playing such a critical role, and more, with all of the stuff that we've talked about?

Well, I think I need to have a drink before I get into it.
I mean, look, it's clear that technology is playing more and more of a role in how the world
is playing out, right? And it's huge. It's going to be bigger and bigger. And, you know, that's where I think a lot of the
challenges of the future will be is because of what technology means to GDPs of virtually every
country on the planet, right? And so I think it just makes things more challenging. I think it
makes the world a much smaller place, which is both great and difficult.
And so, you know, there's just a lot of challenges there that I won't get into. But I feel like
there are good people trying to really go after these things. But the systems that we have, and by systems I mean political systems, not computer systems, are built from 100 years ago, right?
And so it's very challenging, I think, for them to deal with how fast things are moving
and be able to just manage that and deal with that on a global basis, right?
That's well said.
I think Shaheen and I have been asking you to answer all the big questions and solve all the big problems, but a really interesting conversation.
Much appreciated. Thank you, Pete. Such a delight. Hey, thank you guys. You know, this community has
been a huge part of my life and I have so many friends around the world in the HPC community.
It's just, it's great to reconnect with everybody. And thanks for inviting me today.
Awesome. Thank you.
Take care, guys.
That's it for this episode of the At HPC Podcast. Every episode is featured on insidehpc.com and
posted on orionx.net. Use the comment section or tweet us with any questions or to propose
topics of discussion. If you like the show, rate and review it on Apple Podcasts or wherever you listen. The At HPC Podcast is a production of OrionX in association
with Inside HPC. Thank you for listening.