@HPC Podcast Archives - OrionX.net - @HPCpodcast-75: Rick Stevens, Mike Papka – Argonne National Lab (ANL)

Episode Date: November 10, 2023

As SC23 approaches, we were fortunate to catch up with Rick Stevens and Mike Papka of Argonne National Lab for a wide-ranging discussion. In addition to an update on the Aurora supercomputer and the TOP500, we also discuss the need for and challenges of building a national exascale capability, developing teams and bench strength, the risks and opportunities of AI for science and society, the trend towards integrated research infrastructure (IRI), and what's next for the exascale initiative.

Audio: https://orionx.net/wp-content/uploads/2023/11/075@HPCpodcast_Rick-Stevens_Mike-Papka_ANL_20231110.mp3

Transcript
Starting point is 00:00:00 So yeah, the country needs to be doing more of this and it needs to be doing more of it in the open. I think that's the other piece, right? While there are companies, like internet companies, doing it, the broad community doesn't learn so much from those exercises. And so if we're going to get to the point where we really are a high-performance computing nation, it needs to be second nature to stand up large-scale systems. From DOE's standpoint, IRI is really looking at the investment that is made within the user facilities across science. This is not just the computing centers, but the light sources, the neutron sources, and the like, figuring out how they can work together. I want to encourage our listeners to go look that up. Those are episodes 15
Starting point is 00:00:44 and 16 of this podcast, and they're wonderful. Thank you, Rick. Ultimately, we have to shift from trying to detect fake media towards authenticating real media. And this is actually mentioned in the executive order: that authentication, cryptographically secure authentication of real content, is going to become a priority, has to become a priority. From OrionX in association with InsideHPC, this is the @HPCpodcast. Join Shaheen Khan and Doug Black as they discuss supercomputing technologies and the applications, markets, and policies that shape them. Thank you for being with us. Everybody, welcome to the @HPCpodcast. Shaheen, great to be with you again. Wonderful to be here. Really excited.
Starting point is 00:01:30 Yeah, we have a couple of great guests today. We have Rick Stevens. He is Argonne's Associate Laboratory Director for the Computing, Environment and Life Sciences Directorate, and he's an Argonne Distinguished Fellow. Great to have you back with us, Rick. You're also a professor, I should say, of computer science at the University of Chicago. And we also have Mike Papka, Senior Scientist at Argonne. He's Deputy Associate Laboratory Director, again, for Computing, Environment and Life Sciences, and Division Director for the Argonne Leadership Computing Facility. Welcome to you both. Thanks. It's great to be here. So we have a lot of areas to cover. I think given that we're moving right into SC23,
Starting point is 00:02:15 I know a lot of people are interested in an update on the Aurora exascale system. Please inform us where things are going. Okay. Well, as probably many people know, we finished the install, the kind of technical milestone of having the machine installed with all the nodes and so forth, kind of at the end of June. And since then, the team's been busy working on bringing the machine up, stabilizing it, debugging. It's a very large machine with 10,000 nodes, 80,000 network endpoints, over 60,000 GPUs. And it takes a long time to bring a machine like that up. It's the largest network, the largest Slingshot network, that's been built by HPE. And of course, it's the largest collection of new GPUs from Intel. It's quite large, even compared to the GPU machines out in industry.
Starting point is 00:03:06 And so as a result, I mean, we're just making steady progress on bringing it up. We're about four months into that process, and we expect it to take another four or five months. But we have been able to run a number of science applications, and those will be discussed at SC next week. And we've also made a submission to the TOP500 as a sign of making progress. And you'll hear about that next week. So, you know, I think the team has been working just super hard. The development effort over the last couple of months has certainly been running around the clock to start to bring the machine up as fast as possible so we can get to production science. I don't know, Mike, you want to add to that? I think you covered it. There's an extreme complexity in deploying one of these systems.
Starting point is 00:03:48 And while we've had science apps on the smaller Sunspot system since December of last year, we're seeing science apps running now more at scale on Aurora, along with the TOP500 entry. I mean, I think we're really starting to demonstrate where Aurora is going to go. One thing we can comment on is that every time you bring up a big system, and we've done this over the years as our lab colleagues have too, you always face challenges, both with new technology and new technology at scale, and often with teams that have done some of this before, but maybe not at the same scale. And so partly it's a team-building exercise, engineering, and so on. But tools like LINPACK are super valuable in stress testing the machine
Starting point is 00:04:31 and having very concrete, well-understood targets for testing. And I think it's turning out to be very valuable. As has trying to run things like large language models at scale and some of our really compute-intensive applications. Each one of these stresses the system in a different way, identifies problems, and drives us towards a better system. That work will go on for many more months, but we're seeing progress and are quite happy with a lot of the results that we have.
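To make the point about "well-understood targets" concrete, here is a toy, single-node sketch of the idea behind LINPACK-style stress testing: dense linear algebra has a known operation count, so achieved FLOP/s is a hard number you can compare against the hardware's peak. This is emphatically not the real HPL benchmark used for TOP500 submissions; it is an illustrative numpy snippet, and the matrix size is arbitrary.

```python
# Illustrative only: a dense matmul with a known FLOP count makes a
# simple, concrete stress test. Real TOP500 runs use HPL, not this.
import time
import numpy as np

def measure_dgemm_gflops(n: int = 4096, repeats: int = 3) -> float:
    """Time an n x n double-precision matmul and report GFLOP/s."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        _ = a @ b                      # about 2*n^3 floating-point ops
        best = min(best, time.perf_counter() - t0)
    return (2 * n**3) / best / 1e9     # standard GEMM operation count

if __name__ == "__main__":
    print(f"sustained: {measure_dgemm_gflops():.1f} GFLOP/s")
```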
Starting point is 00:05:08 I think that's a key point that Rick's making there that's sometimes lost on folks too. While we've had test boxes in, and we've even had the multi-rack Sunspot system, Aurora is considerably larger. Even if we've been running on previous hardware and previous software, it's a change. So you guys are really building a new national muscle, in my view. I mean, DOE in general. Do you think we have enough exascale system projects to really build that capability, that expertise? Because, you know, as you've heard me say, this is not about meeting a deadline. This is about building that skill set capability for the nation. And you're going to have to do it a lot to get good at it. Is three, four enough?
Starting point is 00:05:40 Absolutely not. No, not enough. I mean, even if you add what's been built out in industry, where there's, again, maybe another half a dozen machines at this scale, it's not enough. You know, we need hardware companies that can both produce the hardware and have the skilled teams to build out the systems and bring them up. And that's an engineering skill that you only learn by doing it. You can design processors all day long in your office and not really appreciate what it takes to bring 60,000 of them to life in the same machine. So yeah, the country needs to be doing more of this and it needs to be doing more of it in the open. I think that's the other piece, right?
Starting point is 00:06:17 While there are companies, like internet companies, doing it, the broad community doesn't learn so much from those exercises. And so if we're going to get to the point where we really are a high-performance computing nation, it needs to be second nature to stand up large-scale systems. Absolutely. Where do you see exascale going next? Exascale, of course, in the US, in the DOE, was an initiative that had both the Exascale Computing Project, which invested in software and applications, and the rest of the
Starting point is 00:06:43 initiative, the ECI, that was investing in these platforms. As we go forward, the plan seems to be to roll future upgrades into the normal way of doing business. There isn't necessarily a follow-on post-exascale program that's focused as much on the hardware. There are, of course, efforts to construct a large-scale AI initiative in the country that would have hardware as a component, and that could be viewed, at least in some way, as a logical follow-on to exascale. Because one of the goals of that initiative is to improve energy efficiency for computing by, say, another factor of 100 over 10 years. That's critical to continuing to make progress post-exascale towards the next targets: 10, 100, 1,000 exaflops.
Starting point is 00:07:28 We need to dramatically improve power efficiency. But I think one of the things that's happening, of course, is that for scientific simulations we need high precision, 64-bit and so on, while for AI we don't need that high precision, right? We can get by with 16 or maybe even 8 bits for training and maybe as low as 4 or less for inference. And so even going forward, some of the big machines that we might deploy for AI wouldn't necessarily be the right platforms for simulation. So I think there needs to be a fresh look at what we're really going to need in terms of the balance of platforms for simulation versus AI going forward, and then factor that into the strategic R&D plans, whether it's DOE, NSF, or other agencies.
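A quick back-of-the-envelope illustration of the precision point: the same parameter count costs very different amounts of memory (and, roughly, bandwidth and energy) at different precisions. The model size below is hypothetical, chosen only to make the arithmetic tangible.

```python
# Illustrative arithmetic only: bytes needed to hold one copy of model
# weights at various precisions. The 70B parameter count is made up.
BITS = {
    "fp64 (simulation)": 64,
    "fp32": 32,
    "fp16/bf16 (training)": 16,
    "int8": 8,
    "int4 (inference)": 4,
}
params = 70e9  # hypothetical 70-billion-parameter model

for fmt, bits in BITS.items():
    gib = params * bits / 8 / 2**30
    print(f"{fmt:>22}: {gib:7.1f} GiB of weights")
```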
Starting point is 00:08:17 So if you look at Aurora, CPU, GPU, and this whole notion of AI for science, I think of HPC as tending to be CPU, AI as tending to be GPU, and the integration of the two. Well, I wouldn't describe the science as CPU. I mean, that was true maybe 10 years ago. But what's happened in the last 10 years is the steady migration of core scientific application code bases towards GPUs. I mean, that was the big goal of the Exascale Computing Project, right? It identified about 25 application domains, all told over a hundred codes, that were moved
Starting point is 00:08:49 from whatever they were running on prior to the current platforms towards GPUs. And of course we know from other systems around the world, right, that there's a huge push towards moving scientific codes to GPUs. So I think that's already happening. I think there could be some additional acceleration. And as the US exascale platforms,
Starting point is 00:09:11 which are all GPU-based, become available for allocations, we will see more of the community move faster towards GPUs for scientific codes. I think the real issue is that for AI, even what you might need in a GPU is different from what you would need for simulation. Yeah. You've talked a lot, Rick, about AI for science. Now, the DOE's nationwide-scale project, I guess, the Integrated Research Infrastructure, the IRI, it's really a
Starting point is 00:09:38 pooling and collaboration of scientists at the labs, but also the HPC resources at the labs working together. Could you kind of update us on that whole vision and strategy? I'm going to let Mike do that, since he's been very involved in the IRI. Great. From DOE's standpoint, IRI is really looking at the investment that is made within the user facilities across science. So this is not just the computing centers, but the light sources, the neutron sources, and the like, figuring out how they can work together. If you look even just at Argonne with the upgrade to the Advanced Photon Source,
Starting point is 00:10:14 these instruments are going to be producing ever-increasing amounts of data. They have been for their entire existence. And what we're seeing now is really a need such that you can't decouple them from each other. And so the IRI looks at how you take the investments that ASCR has made in computing infrastructure, and the networking with ESnet, and bring those to the other user facilities in terms of enabling them. There's a lot of work to be done there technically. These are traditionally
Starting point is 00:10:40 batch-mode systems that now need to have a component that can respond to the immediate needs of the instruments. But it's also a lot of work in kind of social engineering, trying to bring in this idea of distributed control. So you're sending your data over to be analyzed somewhere else and you're relying on that. I think there's lots of work to be done in this space, both technically and socially. And of course, tying it back to AI, all this is very rich data for AI for science, and how does that play into the story?
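The shift Mike describes, from purely batch-queued systems to ones that can react when an instrument fires, is essentially a scheduling-policy change. Here is a deliberately minimal sketch of that idea; the job names and the two-level policy are hypothetical, and real facility schedulers (Slurm, PBS, and the like) implement far richer preemption and reservation mechanisms.

```python
# Minimal sketch: an urgent lane for instrument-triggered work ahead
# of ordinary batch jobs. Job names and the policy are illustrative.
import heapq
import itertools

URGENT, BATCH = 0, 1          # lower value is served first
tie = itertools.count()       # preserves FIFO order within a class
queue: list[tuple[int, int, str]] = []

def submit(job: str, priority: int) -> None:
    heapq.heappush(queue, (priority, next(tie), job))

def next_job() -> str:
    return heapq.heappop(queue)[2]

submit("climate-simulation", BATCH)
submit("beamline-reconstruction", URGENT)   # the instrument just fired
print(next_job())   # beamline-reconstruction jumps the batch queue
```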
Starting point is 00:11:25 That's brilliant. One way I see this is that there is definitely the exascale thread that we talked about, and that's Aurora, the testbed, science at scale, the future of that. Then there's integration. And like you just mentioned, the instruments are themselves becoming a participant, a more active participant. But then there's also integration across multiple sites, taking advantage of whatever specialization they have that might be used. And then, of course, there's AI. And then there was the congressional testimony, Rick, that you participated in, which I listened to. There is the executive order that President Biden issued just a few days ago. There is autonomous discovery, which we talked about when you were last a guest of the show.
Starting point is 00:11:54 I want to encourage our listeners to go look that up. Those are episodes 15 and 16 of this podcast, and they're wonderful. Thank you, Rick. And then, you know, we also talked about how the injection of AI into science can really change things in multiple dimensions. Like, I remember you mentioning that if I can do something that is less accurate, but I can do it a million times, maybe that gives me a different path into science. Let's talk about the AI part here now and how
Starting point is 00:12:23 the DOE in general and Argonne in particular can help the nation put its arms around this big thing. Well, it's a big question. So obviously, since 2019, when we ran these summer workshops asking the question about what the big opportunities in AI for science are, right, where we had 1,300 people participate, we've been asking this question: what should we be doing to advance science via the technologies associated with AI? I mean, that's kind of one way to frame the question. And of course, during that time, we got lots and lots of ideas, targets, hundreds of targets of open science problems where AI maybe
Starting point is 00:13:02 could contribute. It turns out that after the pandemic we kind of did the same exercise again, in the summer of '22, last summer. And we asked the questions a bit differently. We had seen the progress that was being made with large language models in the fall of '21 and the kind of rapid progress since, and it became pretty clear that in the future, it's not likely that the way AI will impact science is for every data set to have its own AI model or machine learning model. That, of course, can still happen, but you end up with millions of models, one for every specific problem and data set, and that's not a unified kind of solution. But instead, what we're seeing is this progress towards foundation models, not just language-oriented foundation models,
Starting point is 00:13:45 but models that are trained on large volumes of data across some domain, whether it's biology or materials science or weather or whatever. And these models kind of learn a space. They learn a data distribution or an event distribution in some scientific domain. And you can then use these models for many different downstream applications, right? Designing proteins or predicting weather or whatever. And of course, in the language model context, they're even more generalizable. But what's missing today in models like ChatGPT-4 and others is that they haven't been deeply trained on scientific data sets, right? They've been trained only on a small piece of the science literature and on the general-science subset of literature that might be on the internet. But there are billions of data sets,
Starting point is 00:14:30 petabytes of data from the collective efforts of the scientific community that are not yet available to train these models on. And we're going to have to unleash that data, organize that data, construct versions of that data that can be useful for training at scale, right? So these are going to be very large, many tens or hundreds of trillions of tokens worth of data. And we need to build models that not only understand language and mathematics and so on, but understand the deep scientific details, whether it's rare genomes of bacteria in the soil or the relationship of a climate model's physics module to the output of that simulation, and can connect all of these things together. And we've been thinking about those kinds of questions, right? What are
Starting point is 00:15:15 the handful of really large frontier-scale models, not the machine Frontier, but frontier meaning the leading edge of AI, that you could build that are directly applicable to advancing scientific challenges or making scientific progress? So that's what we're thinking about. And of course, it ties to autonomous discovery. If you couple those kinds of AI systems to robots in labs, where they can manipulate instruments and samples and so on, you have an automated lab. Or, as Mike was saying, if you have an AI system that can now connect to all kinds of facilities via common operational APIs and data APIs, now you have AIs that can help plan and execute, carry out experiments, maybe faster than humans or better than humans in some ways. And you can use these AIs for open questions, you know, designing new materials for energy applications or quantum applications or whatever. And you can use these kinds of models to improve scientific codes, maybe really reduce the burden of porting science codes to new architectures, like old science codes to new GPUs or current GPU codes to new accelerators, right? So these are some of the application domains, but ultimately we
Starting point is 00:16:21 want models that can help us plan and execute experiments, advance theory, and really amplify the scientific skill set that human scientists have. That's what we're really trying to do. There's a lot of concern about where consumer-facing AI might introduce various kinds of risks, deepfakes and misinformation and so on. We're starting to see that a little bit today. And of course, those are problems we have to take care of. But there's a huge opportunity for AI to accelerate science and research, to bring social progress, and to bring economic progress basically to the whole planet. And we need to be thinking hard about how to do that in a responsible way, but in a very rapid way.
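Rick's foundation-model argument, one model learns a domain-wide representation that many downstream tasks reuse, can be sketched in a few lines. Everything below is a stand-in: the "encoder" is a random projection and the two task heads are hypothetical, meant only to show the shared-representation pattern rather than any real model.

```python
# Minimal sketch of the foundation-model pattern: one (expensive,
# frozen) encoder learned over a domain, reused by many cheap heads.
import numpy as np

rng = np.random.default_rng(0)
W_frozen = rng.standard_normal((32, 16))  # stand-in for pretrained weights

def foundation_encode(x: np.ndarray) -> np.ndarray:
    """Stand-in for a pretrained encoder whose weights stay frozen."""
    return np.tanh(x @ W_frozen)

# Two different downstream tasks reuse the SAME representation.
protein_head = rng.standard_normal((16, 1))  # e.g., stability score
weather_head = rng.standard_normal((16, 3))  # e.g., regime classifier

batch = rng.standard_normal((4, 32))         # a batch of domain data
z = foundation_encode(batch)                 # computed once, shared
print((z @ protein_head).shape, (z @ weather_head).shape)  # (4, 1) (4, 3)
```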
Starting point is 00:17:10 I think the challenge there really is the risk-reward tradeoff of AI. On the one hand, you want to avoid any sort of Darwinian mistakes, but you also don't want to get in the way of progress, both scientifically and for national competitiveness. How do you square that, really? Well, you have to understand the risks, and you have to evolve the technology at the same time you're evolving your understanding of the risks and how to manage those risks. Everything we do has risks. We build a highway system and there's risk, right? People crash. We build an airline system, an air traffic system, and there's risk, right? But we get good at managing those risks. We dial the risk down to where the benefit overwhelms the risk. And that's pretty much where we have to go with AI as well, right?
Starting point is 00:17:51 But we have to get good at it. We can't be afraid of it. When we electrified the country at the turn of the century, right, we figured out how to get progress from electric lighting and electric machines and so forth without it destroying us. And I think we have a similar thing going on here. A lot of the risks are theoretical. That doesn't mean they're not real. It just means that they haven't been realized. And we need to approach the problem with the right amount of humility and wisdom, right? But we've got to get on with it. I mean, there are challenges that the planet faces that we can make good progress on if we do the right thing.
Starting point is 00:18:30 There's really no way to do it without doing it, right? Could I ask too, Rick, about the way you spoke about this tremendous compute power, but also the data, so much data being generated, and then, in a collaborative framework, the IRI, you're talking about the combination of large language models and bringing science into that. I was at the HPC User Forum two months ago, and there was talk about that emergent notion with large language models: that there are conclusions they draw or answers they give that have surprised the developers of these models.
Starting point is 00:19:06 Rick, Mike, do you have thoughts about that, that we're going to be combining so much power and so much data? New things might come out of this that nobody involved in these projects really expected. Well, emergent behavior in large language models is a very active area of study, and we shouldn't be surprised. I mean, anybody who's raised a child gets surprised, right? As their cognitive development happens, they can do something that they couldn't do before. And it's somewhat similar, right? We know that scale matters a lot. And this was the big bet
Starting point is 00:19:40 that OpenAI placed, right, before anybody else was willing to do it. They bet on scale. It's turned out to be a pretty good bet. Others have been more conservative, but they're now also pushing scale, and of course better data helps as well: scale and better data. But we can take more of an analytical view. I mean, we know models will learn skills, and we know that you can combine skills to create new skills. And this is what will happen in these models. It's not magic. I mean, it's something we can actually study through experiments with the models and, to some degree, mathematically and logically. So we should expect that. In fact, what we want, right, is for these models to be able to do things that are useful to us and useful in general. Now, it also means that occasionally
Starting point is 00:20:26 we will underestimate or misestimate what a model can do. And it means that, you know, you have to be, I don't know, eyes wide open, which was the term that Dave Turk was using at the AI hearing; I think it's a good one, right? You kind of have to be fully aware of what experiments you're actually doing, both real and unintended experiments. But having said all that, I mean, there's really no other way forward. We can build such models, and they're turning out to be useful, and we're really gearing up on how we're assessing these models and how we can assess risk and trust
Starting point is 00:21:00 and reliability and so on. And we have to accelerate on the evaluation component just as much as we're accelerating on building the models, right? Right now we have pretty good tests for things like toxicity and truthfulness, in some sense, in models, because the academic community has been really working hard on that. But in the scientific community, we don't have the level of effort that we need in building the evaluation tools that have to accompany the construction of large data sets. So for every large data set that you might want to use to train a model, you've got to have an evaluation mechanism, first of all, to verify that the
Starting point is 00:21:38 model is actually learning what you want it to learn and that the skills that derive from that data are what you expect. But we also need to start understanding something that doesn't happen with humans: it's not possible for one human to read a million science papers and integrate across that knowledge space. But it is possible for a large language model to read a million science papers and integrate that knowledge. What we don't have right now is a good way to understand how you might evaluate an entity that can read a million science
Starting point is 00:22:10 papers and build or create some new insight across that entire collection. And so we need to get better at that. We need to get better at understanding the implications of aggregated knowledge.
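As a concrete illustration of Rick's point that every training data set should ship with an evaluation mechanism, here is a deliberately tiny sketch of a held-out-probe harness. The probes, the scoring rule, and the stand-in model below are all hypothetical; real scientific evaluation suites are far more involved.

```python
# Minimal sketch: score a model against held-out probes to check that
# it learned the intended skills. Probes and model are placeholders.
from typing import Callable

def evaluate(model_fn: Callable[[str], str],
             probes: list[tuple[str, str]]) -> float:
    """Fraction of held-out probes answered as expected (exact match)."""
    hits = sum(model_fn(q).strip().lower() == a for q, a in probes)
    return hits / len(probes)

probes = [  # hypothetical domain probes with expected answers
    ("What is the SI unit of force?", "newton"),
    ("Which element has atomic number 6?", "carbon"),
]

toy_model = lambda q: "newton" if "force" in q else "carbon"
print(f"probe accuracy: {evaluate(toy_model, probes):.0%}")  # 100%
```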
Starting point is 00:22:39 Excellent. Right on. Rick, you mentioned deepfakes, and it came up in the testimony as well. Yeah. And you had some good ideas on how to go about it and just chip away at the problem. Would you mind speaking to that? I know it's a very common topic that comes up. Well, there are two sides to deepfakes. One is the detection of deepfakes or fakes. Deepfake is a funny term, but it's basically false data or false information generated by models. Of course, if it's simulations that we're doing on purpose, we consider deepfakes to be really good, just saying. But if it's something that is otherwise substituting for actual real footage or images or something like that, then obviously we want to be able to detect it where we can. Now, it's easy to detect bad deepfakes, but it's going to get harder and harder to do that detection as models get better. And so I think ultimately we have to shift from trying to
Starting point is 00:23:17 detect fake media towards authenticating real media. And this is actually mentioned in the executive order: that authentication, cryptographically secure authentication of real content, is going to become a priority, has to become a priority. And that has to be something that the community buys into, of course, whether it's from government or the private sector or whatever. But if you are generating content that you want to ensure is real and can't be faked, then it's going to have to be deeply authenticated. And so I see the future evolving in both of those directions. We'll get better at detecting fake content, and we have to get the whole community onto some kind of standard for authentication. So then you have
Starting point is 00:24:00 content that is known to be valid. Yeah. Yeah. I mean, there are several ways of doing it. But yes, having a chain of trust all the way from the original photons or audio bits, right, all the way to the consumer. And there are several ways to do that with encryption, with keys that get signatures off the media, and then with some kind of blockchain-like structure or something to ensure in the ledger that this is a real thing. And people could then validate that by public-key kinds of queries. So it should be a completely automatable process, but it does change things. And of course, it shouldn't really be unexpected that we have to do that. I mean, even without AI, I think we would want to move in that direction anyway.
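A minimal sketch of the sign-at-capture, verify-anywhere idea Rick describes: hash the media, sign the hash with the creator's private key, and let anyone validate it with the public key. This assumes the third-party Python `cryptography` package; real provenance systems (C2PA-style content credentials, ledgers, key management) are far richer than this.

```python
# Minimal sketch: authenticate real media by signing a hash of the
# content at capture time and verifying downstream with a public key.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def sign_media(priv: ed25519.Ed25519PrivateKey, media: bytes) -> bytes:
    """Sign the SHA-256 digest of the media at the point of capture."""
    return priv.sign(hashlib.sha256(media).digest())

def verify_media(pub: ed25519.Ed25519PublicKey,
                 media: bytes, sig: bytes) -> bool:
    """True if the media still matches the capture-time signature."""
    try:
        pub.verify(sig, hashlib.sha256(media).digest())
        return True
    except InvalidSignature:
        return False

key = ed25519.Ed25519PrivateKey.generate()   # lives in the camera
frame = b"raw sensor bytes from the original photons"
sig = sign_media(key, frame)
print(verify_media(key.public_key(), frame, sig))             # True
print(verify_media(key.public_key(), frame + b"tamper", sig)) # False
```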
Starting point is 00:24:48 Right. Yeah. And in a world that is rife with conspiracy theories and people who embrace them, this is something we really have to get our hands around. Yeah. And conspiracy theories, you know, didn't originate with AI. They've been around for a long time. And all the pictures you remember from high school of fake UFOs, those weren't created by AI, right? But I think there's another thing going on, which is that AI is an amplifier of human skills or human intentions. And of course, simulations are also that, right? But simulations are actually rather challenging for most people to use. And so because the barrier is pretty high, they're almost always used for positive things. But AI lowers barriers to adoption while amplifying human capabilities.
Starting point is 00:25:30 And so in many ways, what we're seeing is the implications of relatively small numbers of people having very powerful tools and what they choose to do with them. And this is not a problem with AI. This is a problem with humans. We really have to get smart about understanding and managing humans' use of powerful tools, because that's essentially the future that we're going to have. Maybe this is a question for Mike. We had a very popular episode about the AI testbed that Argonne runs, with all the various chips that you bring in and take through their paces. Would you mind giving us an update on how that's going and how you see that project
Starting point is 00:26:10 evolving? Yeah, sure. It touches on probably every topic that we've discussed today, from just our general interest in AI for science and wanting to understand where this new hardware can enhance our efforts in this area, to the questions about post-Aurora. We have the ALCF-4 project team in place. Where do AI accelerators fit into this story? Of course, we have the CPU-GPU combination that was discussed. Is this another component of the future system, to just try to produce science faster? So we have our existing systems in place. We've scaled a number of them this year. Everything, with the exception of our Habana systems, is now available to the general public.
Starting point is 00:26:54 We're talking to newer vendors about how they get their gear in place, and starting to test it with kind of the same approach that we've taken elsewhere: small systems in ones and twos, moving up to systems that we think are scaled for science. I think you'll hear some really good announcements at SC about things that we've done, not only in the AI space but, really back to Rick's earlier comment, GPUs are delivering science now. Well, it turns out that there are interesting things you can do on these AI accelerators that aren't just AI, much like the path from graphics and video to general compute on GPUs. We will see similar activities in the AI accelerator space. So we'll keep pushing them. Fantastic. There are multiple threads that we haven't touched on. One is just staffing and recruitment and retention.
Starting point is 00:27:49 The other one is geopolitics, which is a little bit related, but not entirely. And then there is just technology basically driving geopolitics these days like it never has. So on the staffing front, of course, it's not clear what's new to say there. I mean, you've got the public sector and the private sector kind of competing with each other. And I think it's the case that we need to make sure that we have a balanced workforce in both. We're advocating for public projects that are at the right scale, that attract people, and with science targets that are very interesting for a lot of people, as a way of creating a talent pool and a workforce pipeline, right? That of course also benefits industry, but it's about having a diversity not only of people in the pipeline, but a diversity of technologies and a diversity of approaches to building systems. I think a world where things collapse to only half a dozen, you know, big companies is
Starting point is 00:28:41 probably not the future that we want, for various reasons. So there's that issue, right? And there is a global component to that workforce. It is interesting to see technology playing such a huge role in global tensions. And I'm not sure what else we can say about it. I mean, personally, I tend to think in terms of the very long term; I don't know so much about the short term. So I think for the next 20, 30, 50, 100 years, right, we'll have to sort this out. I mean, a future world that's balkanized in a hard way doesn't make any sense to me. Right.
Starting point is 00:29:15 So we have to get past the barriers that are driving the current behavior. The AI community has, by and large, been really open in terms of publishing things, with the exception of some of the big companies now not publishing what they're doing, but historically it's been very open and collaborative across international boundaries. You know, we hear and see great papers coming from all over the world, and there's hope, right, that there is actually a desire on the ground to collaborate and build stuff. A lot of open source. Yeah, open source and open models and so on. But there's also a need to think about what happens if different groups have access to technology
Starting point is 00:29:55 and decide to use it in adversarial ways. And I think that's also dominating people's thinking at some level, right? How to think about global security when powerful AI is now on the scene. And that's a different problem than we've faced in the past, right? We've had global security challenges in the past with different kinds of technologies, but they were usually pretty observable and pretty limited in terms of their applicability. Whereas AI is not like that. It's relatively permeable and it is not so observable, right? I can run it on a machine that nobody else can really see. And it's usable by lots of players, including very small groups,
Starting point is 00:30:39 and it's very general purpose. So I would say that that's another set of AI risks we have to manage. I personally think those are a bit more important than some of the consumer-facing AI risks that people talk about, but they're much more challenging to address currently. By the way, speaking of geopolitics and national competitiveness, and thinking about your remarks about the IRI, where would you place that on the world stage as far as research infrastructure at scale? Are there other efforts in other regions, maybe Europe or China, on a similar scale to what's envisioned here? I think in Europe there are efforts. And, well, IRI as a term, I would say, is fairly new, but this isn't new to DOE science, right? So you look at efforts with CERN,
Starting point is 00:31:26 and the complete build-out of ESnet is integrated research infrastructure, right? I mean, enabling the connection of the various labs together is integrated research infrastructure. The effort goes all the way back to work led by Argonne in grid computing, connecting things together. So it's happening and it's going to happen. It's going to continue to happen. Science is a global enterprise. It needs to be connected together. And so we are seeing stuff in Europe, maybe less so, or less visibly, in China,
Starting point is 00:31:55 but we're trying to be aware and connect where it makes sense. Excellent. Thank you. Really delighted to have this opportunity to catch up and hope to do so again. And look forward to seeing you at SC23. We'll be tracking everything we talked about with interest. Thanks. Thanks so much. We'll see everybody next week.
Starting point is 00:32:12 Thank you. Bye-bye. That's it for this episode of the At HPC podcast. Every episode is featured on InsideHPC.com and posted on OrionX.net. Use the comment section or tweet us with any questions or to propose topics of discussion. If you like the show, rate and review it on Apple Podcasts or wherever you listen. The At HPC Podcast is a production of OrionX in association with InsideHPC. Thank you for listening.
