Big Compute - How to Tell Your Family About HPC

Episode Date: December 21, 2021

For those working with high performance computing in any capacity, sometimes talking about it with your family can be a little… confusing. And with the holiday season upon us, many of us will undoubtedly be asked by well-meaning family members, “What’s going on with work?” So today, we figured – rather than bore the non-technical with technical jargon, why not just talk about some of the awesome ways high performance computing is changing the world? In this episode, we revisit the 2020 Big Compute conference talk by Barry Bolding of AWS about just that – how HPC makes our lives better. So when you’re sipping eggnog with the family and the question of work comes up, you can brighten their eyes instead of putting them to sleep.

Transcript
Starting point is 00:00:00 I have not yet found something that was sous vide that was not good. I don't even know what that means, sous vide. Hi, everyone. I'm Jolie Hales. And I'm Ernest DeLeon. And welcome to the Big Compute Podcast. Here we celebrate innovation in a world of virtually unlimited compute, and we do it one important story at a time. We talk about the stories behind scientists and engineers who are embracing the power of high-performance computing
Starting point is 00:00:40 to better the lives of all of us. From the products we use every day to the technology of tomorrow, computational engineering plays a direct role in making it all happen, whether people know it or not. Ernest, I know just how to start this episode. Oh, is it about Saturnalia? Or is it Festivus? Saturnalia? What's that? Well, Saturnalia is the actual historical thing that Christmas was stolen from.
Starting point is 00:01:12 Are you serious? Yeah, Festivus is a Seinfeld thing. Right, Festivus for the rest of us. A Festivus for the rest of us! In college, I had a friend whose family actually celebrated Festivus. It was hilarious. How do you even celebrate Festivus? You put up the Festivus pole,
Starting point is 00:01:27 and then I know that the major thing was the airing of the grievances. The tradition of Festivus begins with the airing of grievances. I got a lot of problems with you people. That ironically came from Saturnalia, because Saturnalia was like an ancient Roman holiday celebrated around Christmas time, basically after the winter solstice. And it was one where they kind of suspended rules or laws for a short period of time. And one of the traditions was being able to tell people off because there were no ramifications for what happened during Saturnalia.
Starting point is 00:02:00 And then as soon as it ended, well, you were back to normal civilization again. That sounds like a horrible idea. I am not a fan of that at all. It is a terrible idea, but there must be some like cultural reason that they did it. I just, I don't know. I don't know what it was. I'm glad we don't do that here, though. I think some people do that here every day.
Starting point is 00:02:19 So they don't really need a holiday. No offense, but this holiday is a little out there. I just can't believe it's already December. And it's not just December. It's like the middle of December as of this recording, which is insane. And to all of our listeners, of course, happy holidays. Merry Christmas. Happy Hanukkah. All of the holidays. May they bring you happiness and joy and all of the cheesy things that we like to say because we mean them from the bottom of our hearts, Ernest. Absolutely. But do you have any plans to see family this holiday season at all? Not during the holidays. I do have my parents coming up to visit early January to spend some
Starting point is 00:02:56 time with our daughter, their granddaughter. But aside from that, no, I will not be traveling. Our daughter is still not old enough to be vaccinated and is not going to tolerate the mask. Well, the mask is one, but even if we took the alternative, which is a 24 hour, three day drive back to Texas, there's no way at her age she can do that. So my wife and I just told the family, look, we can't travel this year. Come to us. Yes. So come to us if you want. Did you see any family for Thanksgiving? No, same thing. We actually spent Thanksgiving by ourselves for the same reason. Yeah, we had a pretty low key Thanksgiving as well. Just my husband, the toddler, the dogs and me. I mean, we literally got one of those Costco pre-prepped turkey dinners,
Starting point is 00:03:38 which was awesome, by the way. I totally recommend this because I don't cook at all. Oh, yes. Oh, yes. It was great. It sounds like both of us had a pretty chill Thanksgiving. And I mean, the reason that our Thanksgiving was so chill is because we typically go to Utah to see family for a week or so every Christmas, which we're going to do again this year. And as I was thinking about that upcoming trip, I envisioned the inevitable conversations that you have, you know, with family around dinner tables or hanging out in the living room or whatever. And part of holiday festivities is, of course, catching up on what everyone's been up to, right? Like how the kid's
Starting point is 00:04:15 been doing or in our case, the kid. Also, you know, what projects are you working on? How is work going? All that kind of stuff. And it's the work question that makes me kind of laugh or smirk at least a little bit because honestly, a lot of my family still doesn't really know what it is that I do for work. I mean, they know that I, of course, host podcasts and create videos or write blogs and whatnot in some kind of technology field, but the term high-performance computing has really never been a part of their vocabulary. So they don't really understand what that means necessarily.
Starting point is 00:04:52 It's just not a part of their field. I mean, does your family know the term high-performance computing? Have they done much with that? I knew where this was going when you were talking about it. And the answer is, for me, a double no. No one in my family does anything that is even related to the tech field. Don't get me wrong. They do a bunch of things. My sister's a nurse, right? I have. Right. Yeah. My dad's a doctor. Right. They do all kinds of other things, but they are not at all tied to
Starting point is 00:05:18 the tech industry. Okay. All my parents know is that I work with computers and that I'm constantly dealing with hackers. That's it. That's the extent of what they know. So they have no clue about what high-performance computing is or the cloud or why it's important to have cybersecurity for high-performance computing in the cloud. None of that is like anywhere within their realm of existence. So you and I are pretty much in the same boat on that, it sounds like. My family, not a clue.
Starting point is 00:05:45 Like, for example, you mentioned your mom. So my mom, she's a professional pianist. She's incredibly good at the piano. And she's also a semi-pro pickleball player. Okay, so nothing that has to do with technology at all. And let me just paint a picture for kind of what she looks like. She's this upbeat, kind-hearted, really happy, fit woman with long, dark hair who looks and basically acts significantly younger than her age, right?
Starting point is 00:06:11 And a day pretty much doesn't go by where she is not seen wearing some kind of like sparkly sequins, zebra stripes, or bright, colorful ensemble of some kind. It's just a part of who she is. When she's on the pickleball courts, she's always in these crazy, bright, sparkly, insane outfits. What is not part of who she is, is high performance computing or any technology beyond like a simple smartphone or something that a consumer, an average consumer would use. That's just not a part of her world, right? And she actually just flew here to Southern California last week with her new boyfriend to visit me and my family. And when she tried to
Starting point is 00:06:52 explain to her boyfriend what I did for work, I overheard this. She literally told him that I work with computers, quote, in the sky or something. And I know she's not like an idiot, but that was really funny because it took everything in my soul not to just like spit take laugh out loud at my mom's face. She's just that unfamiliar with cloud computing, you know, let alone HPC. She hears cloud and she just automatically goes to the sky. I mean, these are some of our family members, right? They have no idea. So I got to thinking, this holiday season, how could I help someone like my mom kind of understand what my job really is? Like, what is computational science and engineering and high performance computing in the cloud? What are these technological terms to somebody like her?
Starting point is 00:07:40 What would they mean? And I think the best way for those of us who work in this field to explain it is like not by trying to outline the tech itself and explain every detail. But instead is to talk about all of the awesome things HPC is doing in our lives that nobody really thinks about. Those kind of everyday results that people can relate to, you know? Yeah, there's kind of an adage in engineering, which is don't tell me what you think you want. Tell me what you're trying to do. Yeah, there you go. Kind of like that. Right.
Starting point is 00:08:11 And the reason is because I, as an engineer, have at my disposal probably a dozen things that can solve your problem potentially. But if you come to me and say, I want this specific tool designed this specific way, I can obviously design it and give it to you. It may not be optimal. But it may not solve your problem. But if you come to me and say, hey, look, here's what I want to do. I want to be able to have a single socket that I can put over any size bolt and turn it. Then I would say, I can design the exact kind of socket you need for that. Yes. You want to explain what something is doing or what the goal is, as opposed to the engineering that gets you from nothing to that goal. Right. Exactly. So today
Starting point is 00:08:51 I figured since we're right up against the holidays, let's fly through a list of cool ways that computational science and engineering through high performance computing are actually changing the world so that if any of our listeners are sitting around at a holiday gathering and the subject of work or school comes up, they can help people kind of understand not just what HPC is, but more importantly, perhaps how it's making our world awesomer. Is awesomer a word? I don't know if that's a word. I think it is. It's a word. But I mean, who knows?
Starting point is 00:09:23 I declare it a word. I think it is, but I mean, who knows? I declare it a word. But I would say that, yes, that is an excellent idea to help everyone kind of explain what this industry is that we're all in. Cool. I'm glad you're on board. And as I was kicking around this idea of describing HPC to family, I was actually reminded of a talk that I saw given at the 2020 Big Compute Conference. Today, high performance computing is everywhere and companies are innovating in every design phase, every space. The speaker was Barry Boulding, who is director of global go to market for high performance computing, autonomous and quantum computing at Amazon Web Services, which, oh, my gosh, I almost needed to like take a nap saying that title. It's so like
Starting point is 00:10:06 advanced. But anyway, you can tell by Barry Boulding's crazy, insane, awesome title that he's seen a lot of cool stuff done in high performance computing. And the first example that he mentioned in his talk is how they design their crafts in their coffee makers so they don't break as they're not as fragile. They can handle the hot and cold temperature fluctuations. Coffee makers. Ernest, I don't know. I know you're not a big alcohol drinker. Are you a coffee drinker, though? Not at all. That's another one. I can't stand it. My wife drinks coffee all the time, but I can't. I was going to say, like, you could totally convert to my religion and it would not be
Starting point is 00:10:41 a transition for you. It would not be a problem for me at all, because obviously I don't like the flavor. The other thing is that I get too jittery if I have caffeine. Oh, yeah. So I don't drink caffeine either. You're just not a caffeine guy. Right. I see. Yeah. Well, then this is probably a really bad example for me to start the episode off with, because you don't drink coffee. I don't drink coffee. Never had a cup in my life. My family doesn't drink coffee. So I can't really talk about a coffee maker and have them be like, oh, I get it. But maybe somebody out there, one of our listeners, does drink coffee, I'm assuming. I bet you there are plenty of them.
Starting point is 00:11:13 I think they vastly outnumber us. I think you're right. So if any of our listeners out there do have a family that drinks coffee, you can tell them that computational simulation through high performance computing helped design that coffee maker so that it could handle the temperature fluctuations. Or you can even simplify it more like for my mom and say something like massive computers use math to quickly figure out what kind of materials they can use to build a coffee maker that doesn't break when it gets really hot. So they don't have to guess and then, you know, build a hundred different kinds of coffee makers and see which one works best. is taking something that had to be done by hand many, many, many times at extreme time and cost
Starting point is 00:12:07 and just doing it inside of a massive computer farm in a matter of minutes or seconds or sometimes hours, depending on the complexity of it. So instead of having to build prototype after prototype, you can have a digital prototype that's done much faster. The fuel that you use in your jet aircraft or your cars, the batteries that you design, the weather that comes through on your cell phones every day is coming from high-performance computing simulation and modeling. So there's a lot of examples here. Fuel, batteries, weather,
Starting point is 00:12:38 all from computational simulation via high-performance computing. The simulations that are done to calculate the risks on the financial portfolios by the investment banks is a high-performance computing. The simulations that are done to calculate the risks on the financial portfolios by the investment banks is a high-performance computing problem. Which is kind of crazy to think about because think about the amount of data they must have to go through if they're in need of HPC. Right, and that's actually, you know,
Starting point is 00:12:58 this is one of the areas where I think we're starting to see a lot more HPC engagement, specifically cloud HPC, is in the financial market. Before the last five to 10 years, this industry was dominated by humans having to pick stocks that would perform certain ways in portfolios and attempt to try to beat the index, which over time, nearly none of them have been able to do. However, in the last five to 10 years, because of high-performance computing, an algorithm can now do the job better than that person can. And so you've seen the rise of these, what they're calling fintech companies that are backed by massive amounts of cloud HPC power, if you want to call it that, to essentially say, we no longer need humans to do this. As a matter of fact, having a human do this is actually a liability. Let us put an algorithm on it to get the most efficient extraction of profit or whatever the case out of your portfolio. So yes, that is another area that a lot of people don't think about it.
Starting point is 00:13:55 But HPC, specifically cloud HPC, is changing the face of banking in modern times. Where every night when the market closes, they need to do that risk calculation and get it done by the next morning. in modern times. some type of inflation hits, and suddenly the portfolio risks that they're calculating are much more complex. And they need access to far more resources than they've been planning for. And what do you do if you need more resources? They're all in the sky. So those types of questions, what could I do if I had access to a million cores, is not a question that you have to be at a major multinational to ask anymore. It could be an engineer who only wants access to those million virtual CPUs for 10 minutes, for 15 minutes.
Starting point is 00:14:55 And that's not an immense budget issue. And Barry talked about how whenever they had a potential customer who was primarily using their on-prem system at the time, they would then do a deep dive with them into every aspect of the types of calculations they're doing and say, Well, if you weren't hindered by constraints, what could you do? And those reimagining processes go into operational issues. We talk to them about all the security and the government and compliance issues, cost management, sort of those nitty gritty details that you have to deal with. And it used to be those were the big hurdles that customers had. But every day these begin to drop away.
Starting point is 00:15:36 In fact, with this talk given almost two years ago, a lot of security concerns about the cloud have faded away since this time, to the point where there is evidence to suggest that the cloud might even be safer potentially. Yes. And big bold statement here. Right. But from somebody who's been in cybersecurity for 25 years at this point, the cloud is absolutely safer. Yep. There are caveats to that. Right. You have to understand the shared responsibility model and how to properly secure stuff from your end. But yes, in general, the level of safety you have in the cloud is better than trying to manage the same resources on-premise. It used to be when I got into this business, I worked for a long time at Cray and IBM. Selling on-prem systems. And we were good at predicting what technologies customers might need when they're making an investment for three to five years and helping them to mitigate the risks associated with those three to five-year investments. But those
Starting point is 00:16:37 technology innovations, the engineers today shouldn't need to worry about making those bets. They shouldn't have to make a bet on a technology for three to five years. They should be able to make a bet on their application, on their science. And the best technology available whatever day of the year it is should be at their disposal to be able to solve those problems. And that's really the powerful flexibility. It's the agility that cloud provides. And that brings us to one of the many cloud-based examples of how high-performance computing is affecting our world. One of the first examples we had was with Western Digital.
Starting point is 00:17:16 Which I imagine everyone here knows Western Digital. I myself have used dozens of their external hard drives over the years, a couple of which have actually melted down on me and caused me much pain and panic. But that was a long time before cloud HPC was even a thing. So maybe I can blame old technology for those days. I mean, come to think of it, I actually haven't had a serious issue with one of their drives in about a decade or so.
Starting point is 00:17:41 So maybe cloud HPC does actually have something to do with it. I think so. I too, as a matter of fact, my main drive now is a Western Digital. I just got one Black Friday, 14 terabytes. What? Yep. A 14 terabyte? Oh man, I wouldn't even dare. That's like so large. It is, but Black Friday deals were not good this year in general. However, this one thing was, and that's why I bought it. It was 50% off. You're like, I am going to get something for cheap, even if it's a 14 terabyte hard drive.
Starting point is 00:18:12 They had a workflow where they were doing design for their disk drives, where they're doing electronic modeling of their disk drives. And this typically was a simulation they did on premise, took 20 to 30 days. And we reimagined the problem with them. And we really looked at trying to distribute those jobs out across all of the resources and capability that the cloud provided. And they were able to take that simulation and modeling problem and compress it down to eight hours. And this is not an unusual situation. Hey, the faster they can make hard drives that rock, the happier a video editor like me will be, which should clearly be their main objective, obviously.
Starting point is 00:18:56 I mean, to make me happy. Well, exactly. That's if the company doesn't exist to make the customer happy, then the company will eventually not exist. This is true. But I mean, like, just me. Yeah. And preventing hard drive meltdowns is just the beginning of what Cloud HPC can do.
Starting point is 00:19:17 A customer called Maxar that is doing weather forecasting, and their goal is to beat NOAA to the weather forecast every day. Because bad weather can really mess up, for example, drilling rigs and refineries. So the sooner oil and gas companies know the forecast, the sooner they can act to protect their equipment. And the value in that is that their customers get access to predictions of storms or the impacts of severe weather at a faster pace than the rest of us do. And they can make business decisions based upon that.
Starting point is 00:19:51 Traditionally, many industries have relied on free weather reports generated by an on-prem supercomputer operated by NOAA or the National Oceanic and Atmospheric Administration, which have been really helpful, honestly. But on average, their weather predictions take about 100 minutes to process global data, probably because of their limited resources, right? They're using an on-prem system. And anyone who has lived in a place where weather can change dramatically minute to minute, every minute counts, especially if it affects your business. Yes, this is exactly true. We know this in Texas because in Texas, we can go from freezing in the morning to boiling hot in the afternoon, back to freezing overnight.
Starting point is 00:20:30 So originally Maxar in this situation, they considered building their own on-prem system, right, to compete. But then they realized that they needed a cloud environment to build a cost-effective solution that would give them the quick turnaround they needed. And in this case, they found that with AWS. So when you see your family, we can mention that weather forecasting is often done through high-performance computing. And Barry says that a number of companies have traditionally used
Starting point is 00:20:58 exclusively on-prem systems, and we have called them server huggers.. You ever heard that? Yeah, I've actually heard that term. So for those who don't work in the cloud world, if you're married to your on-prem system and you don't want to give it up, you might be a server hugger. I'm just saying. That's funny. But a lot of these server huggers, as we say, they've started to kind of reevaluate their circumstances, right? Especially as their hardware ages and it needs continued maintenance and they're confined to like longer time to solution scenarios than perhaps a competitor might be. They're beginning to ask the question, what do I need to do to have more agility, to be able to have more access to technologies as they evolve rather than to make those long-term risk-based acquisitions of
Starting point is 00:21:46 technology that may or may not pan out. And that includes access to GPUs, to FPGAs, to the best software, to new services as they evolve, the incorporation of AI and ML into the workflow. Yes, absolutely. I think at the end of the day, all of this comes down to a financial equation, is it worth it for us to keep putting out all of this capital expenditure into on-prem systems that admittedly, they are yours, you have access to them 24-7, you can do whatever you want with them. However, if you have the extended resources of cloud HPC, then the amount of time you need per run is drastically reduced. So the equation of on-prem versus cloud HPC becomes a lot more balanced. And then once you start hitting the threshold where you don't have enough capacity on-prem, that thing tilts significantly toward cloud HPC. And beyond weather, another company that uses cloud high-performance computing that might excite some fans is Formula One.
Starting point is 00:22:50 They gather so much data from their cars, they want to be able to do machine learning on the fly while their cars are driving. They want to be able to access that data, do simulations, and then couple that with training models. And Formula One uses cloud HPC in a number of different ways. I mean, for one, they're using it to redesign future F1 cars, as well as utilizing machine learning to maximize use of the giant amount of data that they're collecting constantly from sensors on their cars. And another thing that they're doing that I thought was really cool has to do with pit stop data. Now, typically Formula One pit crews
Starting point is 00:23:31 can change all four tires on a car in less than two seconds, which I actually ran the numbers is 1,050 times faster than I can change one tire. So I don't think F1 will be recruiting me anytime soon. I would agree with that. But even with a tire turnaround time that is so small, the slightest bit of time lost, either driving in or out of the pit stop area, can mean the difference between
Starting point is 00:24:00 victory or defeat in that race. So Formula One is using sensors both on the track and on the cars to use live timing data to generate graphics that then show up on TV for the fans. And these graphics visually and numerically compare how two different drivers shown side by side in like a split screen kind of setup, how these drivers entered and exited the pits. And basically it compares and contrasts them and shows you who did a better job. And there are a lot of other ways that Formula One is using cloud high performance computing to, as they say, bring them into the future, which you can actually read more about on the AWS blog, which we'll link to
Starting point is 00:24:43 in the episode notes. Yeah, it's amazing everywhere that cloud HPC is becoming a thing now. Like we know about the obvious ones. For example, when you're talking about Formula One, the engineering of the cars themselves from the aerodynamics to the material science for the tires and the frames and whatnot, as well as the engine research, all of that. Right. We totally get it. Right. It's obvious, right? It's obvious. But when you start getting into like efficiency around the pit or... Yeah. Isn't that crazy? That's super crazy. And it's great to see that it's,
Starting point is 00:25:11 rather than looking at it in terms of simulations and whatnot, what CloudHPC is, it is the way to solve data problems. And fundamentally, all of these things we've talked about are data problems. And that's where CloudHPC really shines. One question that I think every engineer would love to ask is, what could I design if I had unlimited resources? If the resources sitting in my data center closet, if the resources sitting in my small company data center, or the resources sitting in a massive data center owned by the multinational corporation that I work at, what if those constraints were no longer there?
Starting point is 00:25:51 And I wasn't having to worry about, am I going to be able to do this simulation? Does it fit in the technology that we bought five years ago or three years ago? That's a very liberating question. And so at AWS and in the cloud, you have access to immense resources. And we've begun to talk to a class of customers that are asking us, how can I rethink the simulation and modeling workflows and really begin to break those bounds of constraints.
Starting point is 00:26:29 From supersonic jets to personalized medicine, industry leaders are turning to Rescale to power science and engineering breakthroughs. Rescale is a full-stack automation solution for hybrid cloud that helps IT and HPC leaders deliver intelligent computing as a service and enables the enterprise transformation to digital R&D. As a proud sponsor of the Big Compute Podcast, Rescale would especially like to say thank you to all the scientists and engineers out there who are working to make a difference for all of us. Rescale. Intelligent computing for digital R&D. Learn more at rescale.com slash BC podcast. On the other side of the automotive industry is...
Starting point is 00:27:09 In the autonomous vehicle space, it's really interesting. That's an area that's going to explode. Today, most of the autonomous vehicle companies have on the order of 500 cars on the road, maybe 1,000, maybe 100. I'm willing to bet there are probably a lot more now, two years later. They're gathering data from those automobiles. So there's a data ingest issue. They need to be able to move data off of those cars.
Starting point is 00:27:34 They need to do that potentially while the cars are driving, or do they bring them in and park them and bring the data off of the cars? So there's an ingest, egress question they have to solve. They then have to solve a simulation problem and a training problem. So the first is they get the data off of the cars. They retrain their models. So they basically take all the new data. They retrain the models.
Starting point is 00:27:59 Now, you don't want to feed that model back directly into the cars immediately because you don't know. Maybe now with the newly trained model because you don't know. Maybe now with the newly trained model, you don't recognize that dog crossing the road quite as well as you might have before you retrain the model. So they literally send these into simulations, millions of miles of simulated driving. And that's a really beautiful, embarrassingly parallel problem. It's literally you're simulating millions of cars driving on random roads. And so we can expand that simulation as much as we want. We can run that on 100,000
Starting point is 00:28:32 cores. We can run that on half a million cores and get that done very quickly. Do the analysis. Say that the new training set is good. Send that new training set out onto the cars and the cars can keep driving. And you can see how much of a bottleneck that could be if you're not able to get that newly trained model out into the automobiles as quickly as possible. And honestly, we should probably do an entire episode on autonomous vehicles because there really is a lot to explore here. But autonomous vehicles couldn't really exist without high-performance computing. I mean, just as Barry mentions, all of the systems that make autonomous vehicles, or AVs as they're abbreviated, all the systems that make AVs work demand multiple sensors that
Starting point is 00:29:17 have to be run through simulations in order to keep the car operating correctly, let alone safely. Simply put, autonomous vehicles have sensors and cameras placed all over them in order to evaluate the world around them and then be able to navigate that world correctly. These cameras outline moving objects and obstacles as well as measure speed and measure distance. And thermal cameras detect objects when it's difficult to see, like in the dark or through bad weather. Shadows and sun glare have to be accurately assessed. So these cameras and these sensors collect data that not only helps the vehicle know how to drive, but this data is also fed into simulations
Starting point is 00:29:58 to help future autonomous vehicles navigate more accurately, if that makes sense. This is especially important during this transitional period of autonomous vehicles, like with semi-trucks, where we're starting to see more of them on the roads, but they're not yet the norm yet because they're still evolving. Without the use of high-performance computing, there would only be enough compute to simulate and evaluate like a single image from one of these cameras or sensors at a time. But with high performance computing, autonomous vehicle cameras can feed videos consisting of multiple image frames, right, into a simulation and then
Starting point is 00:30:31 make calculations to increase the performance and safety of that vehicle. Those simulations can then help teach the autonomous vehicles how to better use their sensors and cameras to continue to navigate the world. So, I mean, according to a blog written by Ansys, a large simulation software company utilized by probably many of our listeners, they said weeks of calculation are reduced to minutes thanks to HPC. The less time needed to reach simulated results from video instead of single-frame images, the faster autonomous vehicles can learn how to operate safely without a human driver. Yeah, this is true. I mean, I can tell you right now, like, I have a Tesla, you know, I'm a huge fan of Tesla, but I think that term Full Self-Driving is a little
Starting point is 00:31:15 bit misleading and the technology is not quite there yet. And I would agree in general that a lot more is needed specifically in the realm of cloud HPC as well as onboard technology to help figure these things out faster. But I will say this, this is inevitable. This is not going to go away. This is not going to be something that's unattainable. Definitely within our lifetimes, probably within the next five to 10 years, we will have fully autonomous self-driving vehicles of every kind. I really think you're right. I know it's coming. And again, cloud HPC is a huge reason why that is going to come faster. Because yes, it is true that you can absolutely do all this on on-prem, but the scale that you can reach in
Starting point is 00:31:56 cloud HPC means that the companies that adopt cloud HPC for training their models for self-driving are going to be so much faster to market than the ones that are relying on on-prem technology. Yep. I completely agree with you. And since you drive a Tesla, I have to ask, do you ever use fart mode? I've done it before because I found it in the menus. Like I was bored one day, sitting and waiting in a parking lot. But the minute I did it, my wife looked at me and was like, really? This is what you've decided to do with this car? And I was like, I just found it.
Starting point is 00:32:35 I was like a nine-year-old little boy or something when I rode in my brother's Tesla and he turned on fart mode. I was crying. Nothing has raised my opinion of a human being as quickly as fart mode did for me with Elon Musk. Okay, moving away from the ground to the skies, Barry also mentioned another example that our listeners are already familiar with. Boom Supersonic? That's the one. And in this talk, Barry said that aircraft design is just one of the fields where engineers are really changing the way high-performance computing is done. I just want to say how amazing it is that these small engineering firms, and Boom's not that small, but they're certainly competing with the Airbuses of the world, the Boeings of the world,
Starting point is 00:33:18 and they need to be able to do these simulations and models in the most cost-efficient way, the most agile way possible. They need to be able to take their applications and fit them to whatever architectures are available. We've done about 66 million core hours of computing, mainly through Rescale. That's Blake Scholl, CEO of Boom Supersonic, at the same Big Compute conference that Barry spoke at back in 2020. And I imagine they've added quite a few more compute hours to that total since then. And in case you missed our episode about Boom Supersonic, they're a Denver-based company with a goal of basically cutting flight time in half by way of bringing back supersonic commercial passenger aircraft.
Starting point is 00:34:03 And I say bring back because technically Concorde was a commercial supersonic airplane, but there are actually very few similarities between Boom's model and Concorde. We're starting from a blank sheet of paper, re-envisioning not just the airplane, but also what it will be like from the moment you walk onto the aircraft to the moment you step off. And they could start with that blank sheet of paper thanks to high-performance computing. In the old model, in the old on-premise model, we were making predictions about the infrastructure, three- to five-year predictions.
Starting point is 00:34:34 And then we were forced to fit our applications to whatever infrastructure we ended up acquiring. So if we made a bet on how much GPU we were going to need or how much CPU or how much FPGA we were going to need, we were basically tied to that bet for a long period of time. And we need to escape that model. We need to enter a model, a world where we're fitting every single application in our portfolio to the infrastructure that is most optimal. Today, Boom has orders for supersonic jets from major airlines all over the world. And since Boom Supersonic bet on cloud HPC,
Starting point is 00:35:15 they were able to run insane amounts of simulations over a short period of time and basically beat out competitors. Like literally, competitors shut down and they no longer exist, right? Right. Had they decided to instead invest in an on-prem system when they started out, I would guess it definitely would have slowed them down and today's picture would probably be quite different. Absolutely. It would be very different. Their speed to market would be much slower.
Starting point is 00:35:43 And in addition to Boom and some of the more obvious examples of industries that could take advantage of high-performance computing in their research and engineering, like aerospace or automotive, you know, we're always thinking about those, other industries are looking more to HPC in large numbers. Like we talked about financial, right? FinTech, that's one example. And another example is life sciences. I went into a major pharmaceutical a few weeks ago, and we go in and we're talking to this pharmaceutical company.
Starting point is 00:36:12 You'd think it would be about drug design or about genomics or about Nextflow or one of their applications that they're doing. No, we were having a discussion about CAE, about engineering, because they were doing drop testing and device modeling. It wasn't a big part of their HPC environment, but it was significant. It was an engineering workload that we're very familiar with, very similar to the types of models using third-party applications, codes like OpenFOAM or Ansys, Altair, applications from many of these software vendors, or Siemens, who's talking later with their STAR-CCM+ application. That was the list of applications they were running. And this small engineering team needed to have the best equipment, the best infrastructure to run on. And here they are part of a pharmaceutical company.
Starting point is 00:37:03 And it's good to point out that this conference and Barry's talk took place in February of 2020, which if you think about the timeline was right before, you know, a significant global event. Easter. I think the scientific term for that is the Rona. Yes. The Rona shutdowns. COVID-19. We didn't know it at the time of this talk, right? But a global pandemic that has affected many lives over what has now been a two-year time span was just on our doorstep.
Starting point is 00:37:45 At the time of this conference, the first confirmed case in the United States had just been discovered, actually, in Washington state, but it was not yet known to have spread across the country in any way. And man, what an insane time that was. I remember just weeks after the conference, standing outside on a main multi-lane street in busy Southern California, and everything was dead quiet. No cars, no people. It felt really post-apocalyptic, you know? And it was the same here. It was completely dead. And if you've ever, I mean, I'm sure LA is the same, but if you're familiar with Bay Area traffic, to go out on the road and not see anyone.
Starting point is 00:38:16 Not a soul. That is super weird. It was, I mean, I think every person listening to this podcast could honestly share in that eeriness that swooped in during those weeks. Right. The initial first lockdown, especially. And of course, our hearts go out to everyone affected by the harshness of the virus. But one thing does remain certain. Had this pandemic occurred only a couple decades earlier, we would not have had the same means to develop vaccines, therapeutics, or even understand the virus nearly
Starting point is 00:38:46 as quickly as we did in 2020 and 2021. I mean, to have multiple brands of vaccine available within a year's time of discovering a completely new virus is just unprecedented. Absolutely. To this day, even though I've been part of this industry for a while now, it was amazing to watch the global response from the scientific community to this. And then obviously the tech community to support them and the speed at which this was done was just unbelievable. But this is one of those scenarios where you have the perfect storm, right? You have the will, you have the ability, you have the infrastructure, and it just kind of all syncs at the same time and you get an amazing outcome like we had. Yep, and I think we're really blessed because of that.
Starting point is 00:39:28 And I mean, to your point, high-performance computing played a big role in advancing scientific research on the virus, right? Ultimately influencing vaccine development. Dozens of major companies associated with high-performance computing, including Microsoft Azure, Google, AWS, Rescale, NVIDIA, AMD.
Starting point is 00:39:45 I mean, the list goes on and on. They all joined forces to give free HPC resources to scientists and researchers working to combat this virus that was affecting the world. Many of those resources were put to use on spread modeling or even indoor particle spread, right? Showing how COVID particles traveled indoors in various environments when we were trying to understand that more. In fact, the first episode of this podcast that you and I did together, Ernest, was about that very thing.
Starting point is 00:40:15 Right, absolutely. I remember we interviewed Jiarong Hong of the University of Minnesota about some of the work he was doing using high-performance computing. Specifically, they were studying how COVID particles would spread in a classroom, in an elevator, and in a grocery store. And I remember being so fascinated by how critical the placement of the air conditioning vents was. Those particles are very small, so they are very airborne. So they travel along the trajectory of airflow. For example, in the classroom setting, if the teacher was at the front of the classroom
Starting point is 00:40:46 and there was like an air outflow vent in the very back of the classroom, then that air vent would pull the COVID particles across every single student in that room, which is kind of freaky. Yeah, and this is one of the things that I think bothered me the most about the response to the pandemic
Starting point is 00:41:02 and the response to all this is you had people like Jiarong who had been doing some amazing work. And if you just looked at his examples, it was pretty easy to tell that there were two factors at play here. It was essentially the density of COVID particles per given, let's just say, cubic meter of space. So if you're inside, that density is much higher versus outside. And the other thing is the movement of the air. Yep. Air movement was huge. Right. So if we had actually paid attention to someone like Jiarong, we would have said, hey, y'all can go outside and stay six feet away from each other and do whatever you want.
Starting point is 00:41:36 Yep. You're not going to have a problem. Just be careful when you're inside, wear a mask, whatever the case is. I realize that, you know, I'm preaching from hindsight, right? Looking back. But it's difficult knowing that the answers were there, coming out of scientists, and we just couldn't get it right for the life of us. Yeah. The messaging and the science weren't always on point. That's for sure. Another example of that I remember is plexiglass, right? Jiarong talked about how they were studying plexiglass, and he told us that plexiglass typically made the problem of COVID spread worse because it often caused the particles to spin around. And they had done these simulations on it using HPC that showed this, right? So these particles would be stuck behind plexiglass and trapped in place, just spinning around. And so the plexiglass was really only protecting against like big direct spit particles, like a salad sneeze guard might. Right. But other
Starting point is 00:42:31 than that, it was actually making things worse. So all these schools, you know, were putting up plexiglass between desks and grocery stores were putting them up and whatnot. But since scientific discovery was happening in real time, I remember watching this plexiglass be installed at all these locations around the country, including at the presidential debates. And I remember thinking, hey, that plexiglass between Trump and Biden was just unscientific. I mean, well, then again, maybe those two are big-time powerful spitters or something like that. And plus, a lot of what we see in politics is just for show anyway. And maybe public perception was that that was the right thing to do, even if science didn't say it was. You know, it just was interesting because science was moving so quickly, public messaging just could not match it at the same time. So it was hard probably for
Starting point is 00:43:18 scientists working in the field to see so much bad information being told to the public. Yeah. And it's still happening today, right? And that's kind of the sad part is that I think the science itself has been excellent during this entire endeavor. I totally agree. And I think the information coming out of the science is good. But yes, it was great to see, you know, the work of Jiarong and others where they kind of put the science in perspective and said, here is where this matters. Here's how this affects schools. Here's how this affects supermarkets.
Starting point is 00:43:49 Here's how this affects elevators and buildings that some people are in every day. And that was just like the environmental side of it. And then there was also the side about the research for potential cures and therapies. Exactly. And along those lines, we spoke to Jerome Baudry of the University of Alabama in Huntsville, who is using high-performance computing to basically sort through hundreds of thousands of natural chemical compounds from plants around the world, looking for some that could potentially be used in therapeutics fighting COVID-19.
Starting point is 00:44:20 Our simulations are based on models to calculate how much a given pharmaceutical will be happy or not to stick to a given protein from the virus. And then one of my favorite episodes was an interview with Rommie Amaro of the University of California, San Diego. Do you remember this? I do. She used a huge chunk of one of TACC's most powerful supercomputers to study and simulate the COVID-19 spike protein, ultimately learning why it was so stinking good at infecting people. It basically tries to hide itself from your immune system. And the way that it does this is by cloaking itself in a shield of sugar. And so by sort of covering all of its bad viral bits, I'll call them, then the human immune system doesn't sense that the virus is in your system.
Starting point is 00:45:20 Instead, it just sees this sort of sugary coating and says, oh, nothing to worry about. I'm going to, you know, look for other invaders, you know, in your body. In fact, Rommie and her team ended up winning a Gordon Bell Award after our episode was recorded, which is like the Nobel Prize of supercomputing, for this very research that was also shared with scientists around the world. And then those scientists use this data to influence decisions with like vaccines and therapeutics. And none of their research would have been possible without access to a lot of high-performance computing resources. Absolutely. And I'm actually curious, you know, to see what she's done since then with a lot of
Starting point is 00:45:55 this stuff. Yes. So maybe we'll have to reach out and have her back on. So, I mean, it's clear just in the case of COVID, a lot of the advancements out there have roots in high-performance computing. And we're frankly lucky to have had that technology in place when this horrible global pandemic came knocking on our door. Right. In fact, shout out to our friends at HPCwire, a news publication that covers all things supercomputing. They actually have this published timeline called The History of Supercomputing versus COVID-19 that I thought was really interesting. It shows, from virus modeling to investigating the lab leak theory, how high-performance computing has played a role in all of this. So we'll link to that article in our episode notes on bigcompute.org in case you want
Starting point is 00:46:41 to check it out. So while life science has experienced a high-performance computing usage boom since the beginning of 2020 especially, researchers have been using HPC to look at obviously more than just viruses. AWS was working with the Fred Hutch Cancer Research Center, and they have a department that's doing work on microbiomes, basically the organisms that share our body and how those can affect cancers. And they have huge amounts of biological samples, they have huge amounts of customer patient data that gives them
Starting point is 00:47:14 insights that they can start investigating with respect to mapping disease to biological ecosystems that live within our bodies and being able to look whether there are effects on treatments, whether a particular treatment is effective for one individual and whether that's influenced by the microbiome that they have versus the treatment for another individual. In other words, HPC is bringing us one step closer to real personalized medicine. I mean, Ernest, can you imagine a future day where you walk into a doctor's office and they know how to treat you specifically based on your individual
Starting point is 00:47:51 body, not just for diseases or cancers that might crop up, but in preventative manners, giving you perhaps the exact supplements needed to maybe lessen the chances that you develop these diseases in the first place? I mean, how awesome would that be? Awesome. And I believe that it's coming sooner than we think. So that's, yeah, absolutely. That'll be, I think, one of the crowning achievements of medical science. Yes, yes.
Starting point is 00:48:14 I hope it happens in my lifetime. I really do. And organizations like the Fred Hutch Cancer Research Center are collecting and analyzing biological samples to be able to answer questions about why certain treatments work best for certain people. And as you can imagine, these are huge data sets where they're trying to map two disparate pieces of information together and look for insights. And they would do this on their in-house platforms. And they estimated initially that this was just too big of a project.
Starting point is 00:48:44 It would take seven years to analyze all the data. And working with AWS, they were able to unleash and unconstrain their thinking from the problem of their infrastructure and just think in terms of the problems that they wanted to solve. And by using the exact infrastructure they needed, mapping their applications to the appropriate technology, they were able to, in seven days, do the simulations that they had projected would take seven years. That elasticity really unbounds the problem.
Starting point is 00:49:16 And they don't have to buy that infrastructure for five years. They only need it for the seven days. And then they've solved that part of the problem. And they can come back to it later if they need to. But literally, they're able to unleash and unbound their thinking. High-performance computing is literally helping human health in so many ways. On this podcast, we've also talked to creators of heart implant devices that were developed through computational engineering using cloud high-performance computing. We recently spoke to an engineer who is using cloud HPC to quantify damage to the human brain in football players
Starting point is 00:49:51 and then using that data to create safer football helmets that better protect against CTE and other injuries. And then outside of life sciences, we've mentioned so many others. For instance, our friend from NASA who uses computational simulation to predict Martian weather. And also the recent Vertical Aerospace episode about eVTOLs, flying electric vehicles developed, again, on cloud HPC. Yes. And then there was also the company Sensatek that is using cloud HPC to design and develop
Starting point is 00:50:22 turbine fan blade sensors to prevent random midair explosions, which is kind of important. And about a year ago, we also spoke to a young engineer who is using high-performance computing to simulate tsunamis in hopes of pinpointing the cause of the 1908 earthquake and the tsunami that took place in Messina, Italy at that time that basically wiped out an entire town. And I guess the point of all of this is that if you happen to dabble in engineering or high-performance computing in any way, and it somehow comes up at the family dinner table or around the fireplace or something, there's actually a lot you can say that won't make HPC seem boring because it really isn't.
Starting point is 00:51:06 That's right. I think there's many different areas that all of us encounter on a daily basis, whether it be products, services, or just things in general that have to do with HPC or were designed via HPC that everyone can use as examples. Right. And while some of the cases we've talked about have involved on-premise computing, right, the majority have involved cloud computing in some way because cloud simply allows for nearly unlimited scale and speed. That's right. And I think that's the key difference, right, is unlimited scale and speed with some secondary benefits around financials and security and some tertiary benefits around just not having to manage all that mess. Every one of these HPC customers, these companies that are out there, they have hundreds of use cases and applications. And cloud frees them up to be able to fit the application to the best infrastructure. This team doesn't have to make a prediction about technology.
Starting point is 00:52:01 They don't have to be technology experts. They don't have to be technologists who are predicting whether GPUs or CPUs are best. They want to do the science and solve that problem. So we want to live in a world where every application gets run on the most innovative infrastructure. Rescale, who's a part of this conference, is a company that is designed to answer that question and help customers get to that answer of how do I get to the best infrastructure for my application and do that efficiently. Little shout out there to our presenting sponsor, Rescale, who is a partner to cloud service providers like AWS, Microsoft Azure, Google Cloud, and Oracle. And in other words,
Starting point is 00:52:45 those cloud services can be accessed and used to run simulations on the Rescale platform. So we're used to living in a box. That's the traditional on-premise world. We live in a box. We fit our mindset to that box. And we want to move out of that, where every day is different, where tomorrow you can be redesigning engines, and the next day you can be doing simulations of structural analysis, and the next day you can incorporate machine learning into your models, free of constraints. And the engineer is free to engage and dream about what types of applications they need and about what types of scientific problems they can solve. Freeing engineers to simply solve problems. That's what access to insane amounts of compute can really do. And where engineers are free to solve problems, innovation inevitably follows. Right. And not only does it follow, I'd like to point out the concept that I keep coming back to, which is the feedback loop. Innovation feeds itself and just creates this amazing
Starting point is 00:53:53 groundswell of innovation, if you want to call it that. And I wish we had more time to dive into more ways HPC is changing the world because, I mean, we're really only scratching the surface, but hey, that's what this entire podcast is for, right? I'm sure we'll have a lot more great examples to share in 2022, which is just around the corner. For now, hopefully you have some good examples you can bring to the family dinner table this holiday season. And in the meantime, you can find Barry Bolding's full talk on bigcompute.org, where we'll also post notes and links for this episode. And if you want to help us out,
Starting point is 00:54:28 leave us a five-star review on Apple Podcasts. Yep. I have repented of my anti-Apple Podcast review demeanor and also encourage such action. Don't forget to use MFA and 3-2-1 backups. Stay safe out there and have a very happy holiday season. We'll talk to you next year. Wow, next year. Holy cow, that's so crazy. I know. Festivus for the rest of us. Thank you.
