Gooday Gaming Guests - Super Computers and Beyond

Episode Date: December 24, 2024

We have come a long way but where we go next is the most exciting....

Transcript
Starting point is 00:00:00 Alright, so we're going to do an early morning supercomputer thing I just read about here. It's pretty interesting. Alright, let's see if I can find it now. Did I just lose it? Ah, here it is. Alright, so it says: what is a supercomputer? So let's find out what a supercomputer is. When you think of a supercomputer, what
Starting point is 00:00:28 do you see in your mind's eye? A gigantic mainframe like ENIAC? A sinister emergent intelligence like HAL? Buildings full of GPUs? Or
Starting point is 00:00:43 maybe a single massive chip with a thousand cores? The question of what constitutes a supercomputer technically depends on what technologies are cutting edge in any given era, but it's all about thought put. No, throughput. The term supercomputing has a long history. It was recorded around 1930 to refer to the statistical machines at Columbia University, with the mental power of 100 skilled mathematicians. So that was sort of the first supercomputer. However, the idea of a supercomputer first crystallized around distributing a workload
Starting point is 00:01:33 across more than one chip or system. At a high level, a supercomputer is exactly what it sounds like, namely a computer so much more powerful than a desktop as to deserve the super appellation. What makes supercomputers interesting isn't just how powerful they are in relation to smaller, cheaper computers, but the level of innovation and engineering required to bring them into being in the first place. Early parallel processing. The first computer generally regarded as a supercomputer is the CDC 6600, designed by Seymour Cray and launched in 1964. It shows a picture. That's pretty cool. That's pretty neat. So the CDC 6600 mainframe: the venerable CDC 6600, an icon then and now. Um, it was running at 10 megahertz
Starting point is 00:02:53 thanks to reliance on transistors built from a brand-new material: silicon. So that would have been the first silicon computer. Pretty cool. The system cabinets were laid out in the shape of a plus sign to minimize wire distance and maximize performance. It would later be considered a forerunner of the RISC (reduced instruction set computer) design, thanks to its reliance on a more minimal instruction set compared with other chips of its time and its emphasis on clock speed as a means of achieving higher performance. Machines like the CDC 6600 and the later Cray-1 were single-processor systems by
Starting point is 00:03:48 modern standards. But extracting maximum performance from the hardware of the day meant dealing with two familiar villains: heat and lag. As transistors got smaller, they packed in closer together, which came with a corresponding difficulty in offloading the heat from all that electricity. Again, it's all about that manipulation of electricity, but then that gives off heat. So once we get into other forms of energy, we'll be in a much better place. Meanwhile, expanding chip designs grew in
Starting point is 00:04:26 complexity at the cost of latency. To deal with this, supercomputer builders began integrating system cooling directly into the computation arrays and changing designs to limit wire-length-related delays. So the wires are shorter.
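To get a feel for why wire length matters, here's a quick back-of-the-envelope sketch (my own illustration, not from the article): even at the speed of light, a signal only travels so far in one clock cycle.

```python
# Rough sketch: how far can a signal travel during one clock cycle?
# Signals in real wiring move slower than light in a vacuum; ~2/3 c
# is a common rule-of-thumb figure (an assumption for illustration).

SPEED_OF_LIGHT_M_PER_S = 3.0e8
SIGNAL_SPEED = SPEED_OF_LIGHT_M_PER_S * 2 / 3  # ~2e8 m/s

for clock_hz in (10e6, 80e6, 3e9):  # CDC 6600-era, Cray-1-era, modern CPU
    cycle_time_s = 1 / clock_hz
    reach_m = SIGNAL_SPEED * cycle_time_s
    print(f"{clock_hz / 1e6:>7.0f} MHz -> signal travels ~{reach_m:.2f} m per cycle")

# At 10 MHz a signal can cross ~20 m of wire per cycle; at gigahertz
# speeds that budget shrinks to centimeters, which is why layout and
# wire length became (and remain) critical design constraints.
```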
Starting point is 00:04:56 Pushing the envelope with computing performance has always demanded that manufacturers invest in meeting profound challenges. One of the hardest: wiring everything up and getting it to work. In 2013, Roy Longbottom, a retired UK government Central Computer Agency engineer, compared the Cray-1's performance against machines like the Raspberry Pi. Huh, interesting. Longbottom wrote:
Starting point is 00:05:26 in 1978, the Cray-1 supercomputer cost $7 million, weighed 10,500 pounds, and had a 115-kilowatt power supply. Wow. It was by far the fastest computer
Starting point is 00:05:41 in the world. The Raspberry Pi, first gen, cost around $70 (CPU board, case, power supply, and SD card), weighs a few ounces, uses a 5-watt power supply, and is more than 4.5 times faster than the Cray-1. I want to see what the next generation of Raspberry Pi will be, like an AI Pi or something they'll probably call it. Right? I would think. Or just a whole other sort of AI board that's kind of like a Raspberry Pi. That's what I'm thinking. More for consumers, that is.
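Just to put Longbottom's figures in perspective, here's a quick sanity check of the ratios (a sketch I'm adding; the input numbers are only the ones quoted above):

```python
# Ratios implied by Longbottom's 1978 Cray-1 vs. first-gen Raspberry Pi figures.
cray1_cost_usd, pi_cost_usd = 7_000_000, 70
cray1_power_w, pi_power_w = 115_000, 5   # 115 kW vs. 5 W power supplies
pi_speedup = 4.5                         # the Pi is quoted as 4.5x faster

print(f"Cost:      {cray1_cost_usd / pi_cost_usd:,.0f}x cheaper")              # 100,000x
print(f"Power:     {cray1_power_w / pi_power_w:,.0f}x less power")             # 23,000x
print(f"Perf/watt: {pi_speedup * cray1_power_w / pi_power_w:,.0f}x better")    # 103,500x
```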
Starting point is 00:06:18 Anyway, Longbottom more recently revisited this comparison with the updated Pi hardware, writing in 2020 that the Pi 400's average Livermore Loops, LINPACK, and Whetstone MFLOPS (I can't say all those words) reached 78.8,
Starting point is 00:06:42 49.5, and 95.5 times faster than the Cray-1. The Raspberry Pi 5, which I have one of, actually, presumably improves on this further thanks to the shift to the Cortex-A76 from the Cortex-A72 and the increased clock speed. I haven't really done anything with my Raspberry Pi, but I will, though. Decades of advances in everything from interconnect technologies to semiconductor manufacturing nodes have allowed general-purpose consumer computing hardware to offer performance hundreds of times faster than the fastest early supercomputers. Today, for example, smartphones have thousands of times as much computing power as the mainframes that powered Apollo 11.
Starting point is 00:07:44 I still believe we never went to the moon. It makes no sense that we haven't been back there since then, so I'm sticking to that one. I'm not really a conspiracy theory guy, but that one I'm gonna stick with, only because there's no evidence that we've been there, and why haven't we gotten back there? Modern supercomputers synchronize data across hundreds of thousands to millions of GPU and CPU cores. Oak Ridge National Lab's supercomputer Frontier is powered by AMD EPYC chips and AMD Instinct GPUs. Right now, the world's fastest computer is the Frontier exascale supercomputer, housed at Oak Ridge National Lab in Tennessee. It still owes some of its design heritage to the OG Cray systems. Frontier combines 9,472 AMD EPYC 7A53 chips (606,208 cores) with 37,888 AMD Instinct MI250X GPUs (8,335,360
Starting point is 00:09:07 cores) across 74 cabinets. The array is wired with some 90 miles of fiber-optic and copper cabling and cooled with four 350-horsepower pumps.
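A quick check of that core-count arithmetic (my own sketch; the per-chip figures of 64 cores per EPYC CPU and 220 compute units per MI250X are assumptions I'm using to make the totals work out, not numbers from the episode):

```python
# Frontier's headline totals, derived from per-chip counts (assumed).
cpus, cores_per_cpu = 9_472, 64    # 64 cores per EPYC CPU (assumption)
gpus, cus_per_gpu = 37_888, 220    # 220 compute units per MI250X (assumption)

print(f"CPU cores:         {cpus * cores_per_cpu:,}")   # 606,208
print(f"GPU compute units: {gpus * cus_per_gpu:,}")     # 8,335,360
```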
Starting point is 00:09:22 Cooling the system with water instead of air allows the system architects to pack components much more densely. So they cool them with water. It's pretty wild. A comparison between the Cray-1 and the Raspberry Pi 5 illustrates
Starting point is 00:09:37 how the tremendous problems of scale and cooling that once required advanced engineering, painstaking component placement, and other extraordinary measures (over 80 Cray-1s were eventually sold, at prices of $5 to $8 million in 1977) were not only solved,
Starting point is 00:10:04 but solved so comprehensively that a single-board computer with 100 times the performance of the Cray-1 sells for 0.00019% of its launch price. With respect to the problem of wiring everything up, there's much more going on than just the physical difficulty of wiring together control boards or server racks. Improvements to software and overall operating costs have been critical to
Starting point is 00:10:37 the success of supercomputers, but they also trace the lineage of supercomputers as they have evolved into different types that solve different problems. What are supercomputers used for? Supercomputers are at their best with workloads that cannot run time- or cost-efficiently on smaller computers because of bottlenecks. Sometimes it's a question of wrangling a zillion variables. Other problems demand extreme precision
Starting point is 00:11:12 with both very small and very large numbers. For example, weather forecasting and earth science simulation have scaled up beautifully to run on supercomputer-sized systems. The oil and gas industry has made extensive use of these systems to model the physics of seismic waves and predict the location of fossil fuel reserves. Designed by Intel and Cray Inc., the Aurora computer at Argonne National Lab
Starting point is 00:11:47 will investigate the physics of fusion in hopes of making it not just energy-positive but a practicality. Supercomputers have also been used for economic modeling, to
Starting point is 00:12:03 conduct pharmaceutical research, and to virtually test nuclear weapons without requiring the kinds of real-world atomic detonations various countries carried out through the early 60s. The advent of GPU compute in the mid-2010s significantly increased supercomputers' performance in general and helped drive further improvements in model complexity. Better supercomputers have been credited with improved weather forecasting in the United States, and AI may deliver further gains. A report published in Science in November 2023 highlighted how a machine learning model known as GraphCast broadly
Starting point is 00:12:53 outperformed conventional weather forecasting. The IBM Blue Gene/P supercomputer Intrepid at Argonne National Lab is powered by more than 160,000 processor cores but cooled using only normal data center air conditioning. The systems that populate the TOP500, a twice-yearly updated list of the most powerful computers in the world, may or may not be dedicated to AI.
Starting point is 00:13:30 But companies like NVIDIA and AMD are actively drawing on some of the same pool of knowledge to design the most powerful servers, built around hardware like the AMD Instinct MI300X or NVIDIA GB200. For better or worse, AI is ascendant, and as it elbows its way onto center stage, it has pulled along with it new types of supercomputers specifically designed for the large language models that power chatbots like ChatGPT and Microsoft Copilot. The AI systems reportedly assembled by the likes of Google, Microsoft, and OpenAI rival the more conventional supercomputers on the list maintained by the TOP500.
Starting point is 00:14:31 So these ones haven't really been named quite yet in that group. Most recently, Elon Musk's AI startup xAI announced that its Colossus supercomputer is about to double in size. In a later post, Musk added that Colossus is powered by some 200,000 NVIDIA H100
Starting point is 00:14:56 and H200 GPUs, all housed in a single huge commercial building of nearly 80,000 square feet. Imagine the energy I think that has to take. Supercomputers are deeply parallel by design; their architecture allows them to coordinate many functional units working on the same task. Conventionally, supercomputers are massive installations that can take up
Starting point is 00:15:27 entire warehouses. If one monk can copy 10 pages of text per day, how many pages could a whole scriptorium of monks produce in one day? But there are many different ways to implement the idea of parallel processing. Decades of work have gone into the development of operating systems, message-passing interfaces, network standards (including everything from garden-variety Ethernet to InfiniBand), data fabrics, and the creation of advanced network topologies that allow exascale systems like Frontier to bring to bear, in practice, the enormous computing resources they theoretically command.
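Here's a toy sketch of that scriptorium idea in Python (my own illustration, not from the article): split a pile of pages across several worker processes and combine the results.

```python
from multiprocessing import Pool

def copy_pages(pages: range) -> int:
    """One 'monk': works through a chunk of pages and reports how many were done."""
    return sum(1 for _ in pages)  # stand-in for real work on each page

if __name__ == "__main__":
    all_pages = range(10_000)
    num_monks = 4

    # Split the manuscript into one chunk per monk.
    chunk = len(all_pages) // num_monks
    chunks = [all_pages[i * chunk:(i + 1) * chunk] for i in range(num_monks)]

    # Each process works on its chunk independently; results are combined at the end.
    with Pool(num_monks) as scriptorium:
        done = scriptorium.map(copy_pages, chunks)

    print(f"{num_monks} monks copied {sum(done)} pages")  # 4 monks copied 10000 pages
```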
Starting point is 00:16:08 Cluster computing. Where conventional supercomputers often consist of many processing units grouped together in one server room, cluster computing deploys multiple discrete physical computers that may or may not be in the same building. When those computers are built from hardware sold at retail, such as the racks of OEM NVIDIA GPUs powering crypto mining startups, a computer cluster may also unlock a different title: Beowulf cluster. The original Beowulf cluster was built in the 1990s as a response to a deluge of data from NASA's swiftly multiplying systems, data that started piling up at NASA's Goddard
Starting point is 00:17:12 Space Flight Center in huge rooms full of storage tapes that had to be indexed, mounted, and requested to access. It was simply too much, so the team there developed their own supercomputer cluster in logical self-defense. Another effort, from a U.S. national lab, used 70 Linux-powered PS2 consoles networked together for physics calculations. Oh, that's interesting. The emergence of Beowulf clusters is an important moment in the history of supercomputing.
Starting point is 00:17:48 Not so much because it marked any great technological advance, but because Beowulf clusters combine common mainstream hardware, like PlayStations, with free and open-source software. So you're using technology that's not necessarily made for it, but it can be used for it. As the original Beowulf HOWTO stated: Beowulf is not a special software package, new network topology, or the latest kernel hack. Beowulf is a technology of clustering computers to form a
Starting point is 00:18:27 parallel, virtual supercomputer. So these are kind of like virtual. Every system in the top 500 relies on elements of this approach. Distributed computing. This is a cool one. I like this one. In a physical sense, supercomputers are often buildings full of server racks. A single system might be split into multiple nodes, but supercomputers are mostly installed in a single physical location. Distributed computing systems, in contrast, can operate across vast distances, including across different countries. Cloud computing platforms like Microsoft Azure even offer so-called serverless architectures, billing clients for the resources their workloads consume. The various nodes of a supercomputer are designed to work in concert to solve a problem, but if latency isn't an issue, it's not so important to have all those compute nodes in the same physical place.
Starting point is 00:19:40 A distributed computing system delegates a part of its workload to each of the nodes in its network. Cloud computing takes that one step further, moving the entire workload from the edge, i.e. a computer or device connected to the internet, into the cloud, a data center that offers compute time as a service. Distributed computing systems aren't used for real-time data analysis; work being done by System 1 in North America wouldn't be connected to an immediate result from System 2 in Japan. One of our favorite distributed computing projects is Folding at Home, which distributes the work of investigating protein folding across participants' desktops and laptops, discreetly crunching biochemists' calculations while those systems were otherwise idle.
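As a rough sketch of that work-unit model (my own toy illustration, not Folding at Home's actual protocol): a coordinator hands independent chunks to whichever nodes are free, and because no chunk depends on another, results can come back in any order.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_work_unit(unit_id: int) -> tuple[int, float]:
    """Stand-in for a volunteer node crunching one independent work unit."""
    time.sleep(random.uniform(0.01, 0.1))  # nodes vary in speed and availability
    return unit_id, unit_id ** 0.5          # placeholder 'result'

# The coordinator doesn't care about order: units are independent, so
# latency between distant nodes doesn't block the overall computation.
with ThreadPoolExecutor(max_workers=8) as nodes:
    futures = [nodes.submit(process_work_unit, uid) for uid in range(20)]
    for future in as_completed(futures):
        unit_id, result = future.result()
        print(f"work unit {unit_id} returned {result:.3f}")
```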
Starting point is 00:20:50 During the early days of the COVID pandemic, interest in Folding at Home surged, leading to a number of stories about how, in terms of raw computing muscle, the F@H network was now more powerful than the entire
Starting point is 00:21:06 top 500 supercomputing list. I can't wrap my head around that one. Folding at Home. Wafer-scale computing. We've come a long way from the single-core processors of the late 90s and early 2000s.
Starting point is 00:21:22 Desktops, laptops, and smartphones all have multi-core processors these days. Generally, more cores means a chip can successfully juggle more work at one time. So what if you just crammed more and more cores into a chip? What if you could eliminate internal latency as a bottleneck?
Starting point is 00:21:43 Are more cores always better? Taking the idea to its extreme, wafer-scale computing uses massive, dinner-plate-sized multi-core processing units engraved onto a single slice of silicon. So it's just bigger. The picture here shows Cerebras' most recent wafer-scale offering. Wafer-scale computing is almost the opposite of a distributed supercomputer because of its monolithic, highly parallel design. Wafer scale is most useful for workloads that involve doing one thing many times. The wafer's sandwich structure involves an interface layer designed to minimize latency as vast quantities of information travel through the gigantic wafer. Cerebras bills its most recent wafer-sized processor, the WSE-2, as the fastest supercomputer on Earth.
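"Doing one thing many times" is the data-parallel pattern. Here's a tiny sketch of the idea (my own illustration; a real wafer-scale chip runs this across hundreds of thousands of hardware cores in lockstep, not a Python loop):

```python
# Data parallelism in miniature: apply the SAME operation to many tiles.
# On wafer-scale hardware, each core would hold one tile and all cores
# would step through the same kernel simultaneously.

def kernel(tile: list[float]) -> list[float]:
    """The 'one thing': e.g., scale and clamp every value in a tile."""
    return [min(x * 2.0, 1.0) for x in tile]

wafer = [[i / 10 + j / 100 for j in range(4)] for i in range(6)]  # 6 tiles of 4 values
results = list(map(kernel, wafer))  # same kernel, many tiles; no tile depends on another
print(results[0])  # [0.0, 0.02, 0.04, 0.06]
```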
Starting point is 00:22:55 How high is the supercomputer ceiling? Even at the top of this red-hot market, some acknowledgment of thermal realities has been made. NVIDIA is said to have recently canceled plans to develop a dual-rack, 72-GPU GB200 system because customers couldn't handle the power density required. Power consumption clearly cannot continue to increase forever. So we have to change up the energy. Frontier may have broken the exa-
Starting point is 00:23:36 FLOP barrier, but it also reportedly consumes more than 21 megawatts of power, compared to a relatively modest 115 kilowatts for the Cray-1. Modern systems are literally orders of magnitude more efficient than computers of four decades past, but they consume orders of magnitude more electricity anyway.
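To see how both of those can be true at once, here's a rough calculation (a sketch; the ~160 MFLOPS Cray-1 peak and ~1.1 exaFLOPS Frontier figures are commonly cited numbers I'm assuming, not from the episode):

```python
# Rough performance-per-watt comparison (assumed headline figures).
cray1_flops, cray1_watts = 160e6, 115e3         # ~160 MFLOPS peak, ~115 kW
frontier_flops, frontier_watts = 1.1e18, 21e6   # ~1.1 exaFLOPS, ~21 MW

cray1_eff = cray1_flops / cray1_watts           # ~1.4e3 FLOPS per watt
frontier_eff = frontier_flops / frontier_watts  # ~5.2e10 FLOPS per watt

print(f"Frontier draws {frontier_watts / cray1_watts:.0f}x more power")       # ~183x
print(f"...but is {frontier_eff / cray1_eff:,.0f}x more efficient per watt")  # tens of millions
```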
Starting point is 00:23:56 AI might eventually point the way to circuit optimizations and new approaches to computing that currently elude us. But absent some kind of fundamental breakthrough, supercomputing faces a difficult road to zettascale, if such a road exists at all. AI has thrown power consumption issues in the data center industry into sharp relief, but concerns about the direction of supercomputing power consumption predate the current surge of interest in artificial intelligence. According to a 2020 article from Data Center Dynamics, the megawatt-scale power requirements demanded by machines like Frontier may have represented
Starting point is 00:24:56 a path that most scientific computing centers can't afford to pursue. In a sign of how much market dynamics have changed in just a few years, the DCD article suggests that cloud HPC may be a more affordable way for customers who want supercomputing performance to access it without paying the high on-site cost of the actual supercomputer itself. Makes sense. If you've paid any attention to the blizzard of AI PC information dumped on the market lately, we have seen companies arguing that the future of the AI PC will reduce AI costs
Starting point is 00:25:43 by moving workloads out of the cloud data center and back onto local devices. That seems like the opposite of what it was just saying. At the most cynical level, this reflects the tendency of industry experts to see trends moving in whichever direction most favors their respective clients' pocketbooks. Still, there is something to the idea that workloads are broadly in motion, simultaneously moving outward across larger systems and networks to take advantage of colocation and efficiencies of scale, while also hunting for local environments that minimize latency, power, and data movement. Sixty years ago, Seymour Cray's stubborn refusal to be satisfied building mundane computers
Starting point is 00:26:38 literally revealed what computers are capable of doing. As a field, supercomputing has helped unlock our most fundamental understandings of the universe, from the behaviors of atomic nuclei to the intricacies of drug interactions and protein folding. I don't know what protein folding is. I don't know what that is. These machines push the boundaries of human knowledge, and continuing to improve them challenges the computer industry to scale existing technologies and repeatedly pushes it to invent new ones. Supercomputing is exciting because it invites us to consider
Starting point is 00:27:18 what computers can accomplish if we throw caution, budgets, and power bills to the wind. That's not always a great idea, but it's undeniably led to some pretty incredible things. That was a long one. That was fun, though. I like that one. So I kind of got something out of that: supercomputing in a different way.
Starting point is 00:27:41 It's always about the latency. I like the wafer scale, the WSE-2 from Cerebras. That's pretty neat. So it's just exciting to see where it's gonna go from here. So that's my little computing for the day. Later on, I'm going to do my Sega boots: the Sega Genesis 1601, and then I'll do the 1631, and then the Sega Genesis 3. We'll do those as a boot-up later on. Alright.
Starting point is 00:28:20 So you guys have a good rest of your Christmas Eve. And I'll talk to you in a little bit. Alright. Bye.
