@HPC Podcast Archives - OrionX.net - HPC News Bytes – 20251222
Episode Date: December 22, 2025
- High Frequency Trading race to save nanoseconds
- AI federated learning at DOE labs
- Perspective from "AI nobility" who are also "AI doomers"
Transcript
Welcome to HPC Newsbytes, a weekly show about important news in the world of supercomputing,
AI, and other advanced technologies.
Hi, everyone. Welcome to HPC Newsbytes. I'm Doug Black of Inside HPC, and with me is Shaheen Khan of
OrionX.net. A classic enterprise application of HPC is high-frequency trading, in which
investors use supercomputers to achieve stock trading speed advantages measured in milliseconds.
But now, traders on a European futures exchange are shaving that time frame down to nanoseconds,
a million times finer than a millisecond. A story last week in the Wall Street Journal looks at Germany's
Eurex exchange and a dispute in which a group of traders has used garbled data to save
3.2 billionths of a second. As the Journal explains it, the controversy is about an arcane
technical maneuver in which high-speed traders bombard Eurex with useless data. The idea is to keep
their connections to the exchange warm so they can react fractionally faster to market-moving
information. The controversy became public when the Paris-based trading firm Mosaic
investigated why its formerly thriving trading business suddenly fell by 90% in 2022.
The Journal quotes the founder of Mosaic saying, an arms race is okay, but you must use legal
weapons. Well, program trading, algorithmic trading, straight-through processing, and finally
high-frequency trading, HFT, are part of the spectrum of the discipline to execute financial
trades faster and faster. You have financial trading companies at one end and market data at the
other. What connects them is the network and the level of optimization that goes into that is just
unreal and has been for decades. There's a very good paper from earlier this year that covers this
very well. It's in the International Journal of Computer Science and Network Security and it's
titled, quote, The Impact of High-Speed Networks on HFT Performance, end quote. It is
accessible online. In this case, the optimization goes all the way into network packets that tell
the market a trade request is coming, thus keeping it warm, only to decide whether or not to actually
issue the transaction depending on market data. Like you said, it's down to milliseconds or less.
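To make the warm-connection maneuver concrete, here is a minimal Python sketch of the idea. Everything in it is hypothetical: the gateway address, the filler message, and the partial-order framing are illustrative stand-ins, and production HFT systems use specialized exchange protocols and FPGA or kernel-bypass networking rather than plain TCP sockets.

```python
import socket
import time

# Hypothetical exchange gateway endpoint, for illustration only.
GATEWAY = ("gateway.example-exchange.test", 9001)

def open_warm_connection() -> socket.socket:
    """Open a TCP session and disable Nagle's algorithm so tiny
    packets are transmitted immediately rather than coalesced."""
    s = socket.create_connection(GATEWAY)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s

def keep_warm(s: socket.socket) -> None:
    """Send throwaway bytes so the path (NIC, switches, gateway
    session) stays hot and the first real packet isn't penalized."""
    s.sendall(b"NOOP\n")  # hypothetical filler the venue discards

def speculative_order(s: socket.socket, order: bytes, should_fire) -> None:
    """Pre-send all but the last byte of an order, then decide at the
    last instant, on fresh market data, whether to complete it."""
    s.sendall(order[:-1])      # the exchange sees a request 'coming'
    if should_fire():
        s.sendall(order[-1:])  # the final byte makes the order valid
    # If should_fire() is False, the partial message is never completed.

if __name__ == "__main__":
    conn = open_warm_connection()
    for _ in range(100):  # heartbeat loop; cadence is hypothetical
        keep_warm(conn)
        time.sleep(0.5)
```

The point of the sketch is only the two-step pattern the Journal describes: constant filler traffic to keep the connection warm, and a final, data-driven decision on whether to complete the trade.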
As the Department of Energy's Genesis mission rounds into form, there have been announcements
about participating organizations. The number is approaching 30, and we're seeing some of the
strategies that may be used to construct the planned cooperative array of supercomputing resources
at various locations around the country in support of their AI for science mission.
One of them is a federated learning AI model project led by Sandia National Laboratories
that the lab calls a significant milestone in advancing AI for national security.
Over the past year, Sandia, Los Alamos, and Lawrence Livermore Labs, known as the Tri-Labs,
have been building a federated AI model as a pilot project, and they now have a prototype.
Federated learning trains a model that is shared across devices in multiple locations
without moving the raw private data, with the goal of creating a smarter collective global model.
The project used an open-source federated learning framework called NVFlare,
and as you can tell from the name, it's contributed by NVIDIA.
The distributed learning proceeds in parallel at each lab and is broken up into phases or epochs.
And after each one, the labs share the updated weights, but not the data itself.
And the weights are averaged together to form a single model for the next epoch of training to begin.
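As a concrete picture of that weight-sharing loop, here is a minimal Python sketch of federated averaging. It illustrates the general pattern only; the toy training step and placeholder datasets are hypothetical, and the Tri-Lab pilot's actual NVFlare setup is certainly more elaborate.

```python
import numpy as np

def local_training(weights: np.ndarray, private_data) -> np.ndarray:
    """Stand-in for one epoch of training at a single lab, on data
    that never leaves the site. A dummy gradient step for illustration."""
    gradient = np.random.default_rng().normal(size=weights.shape)
    return weights - 0.01 * gradient

def federated_round(global_weights: np.ndarray, site_datasets) -> np.ndarray:
    """One federated epoch: every site trains locally (in parallel in a
    real deployment), then only the updated weights -- never the data --
    are shared and averaged into the next global model."""
    updates = [local_training(global_weights.copy(), d) for d in site_datasets]
    return np.mean(updates, axis=0)  # federated averaging

# Three sites (e.g., three labs), each holding data that stays local.
datasets = ["site-A data", "site-B data", "site-C data"]  # placeholders
weights = np.zeros(10)  # shared starting model
for epoch in range(5):
    weights = federated_round(weights, datasets)
```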
Continuous and federated training is one of the next stages of AI and critical for achieving high-quality learning and therefore inference.
The ability to use classified datasets for training without sharing them is itself a critical requirement
for government and commercial applications. Tech media mostly concerns itself with how AI technology
is evolving and advancing with less focus on the possible negative ramifications of AI as it approaches
artificial general intelligence and superintelligence levels of performance. Certainly and naturally,
we don't hear concerns from AI vendors whose mission, of course, is full speed ahead.
But there is a serious group of technology thinkers, analysts, and academics, deeply troubled by the
possibility of dystopic AI. This group has been labeled the Doomers. And MIT Technology Review
has published a lengthy article on the Doomers perspective and how they've adjusted their
outlook as AI progress has apparently stumbled a bit of late. We're referring to the
somewhat disappointing reception given to GPT-5 from OpenAI, whose CEO had said in 2024 that
AGI may arrive this year and superintelligence by 2030. At the heart of the Doomer view is a call
for technologies, measures, and policies for greater AI safety before the industry
develops systems it can't control. From their perspective, AI's slowed progress merely
means the world has more time, a bit more time, to come to grips with AI. I'll quote
a characteristic statement from the Doomer group. Yoshua Bengio, winner of the Turing Award and
chair of the International AI Safety Report, said, quote, the overall landscape for AI governance
and safety is not good. There's a strong force pushing against regulation. It's like climate
change. We can put our head in the sand and hope it's going to be fine, but it doesn't really deal
with the issue. He later added, like many people, I had been blinding myself to the potential
risks to some extent. You're excited about your work and you want to see the good side of it.
That makes us a little bit biased in not really paying attention to the bad things that could
happen. Even a small chance like 1% or 0.1% of creating an accident where billions of people
die is not acceptable. It's the global question of our times, and it's an important article
because it highlights how seriously these views are held in the industry.
AI is already impacting businesses and societies.
Some people believe that policymakers should make sure
short-term impacts of AI are covered
before focusing on medium and long-term issues.
Short-term impacts of AI, like deepfakes,
clever cyber attacks, job elimination, or an AI bubble,
get a good amount of attention these days.
But all aspects of AI policy are important,
and this article discusses the long-term
impact. AI is not done yet, of course, and progress needs to continue for both business and
social benefits. It's not easy to manage risk in the face of something that is unknown and
transformative. It leads to the evaluation of how any unexpected negative consequences are
managed versus how unexpected positives are enjoyed. For example, Nobel Prize winner Geoffrey
Hinton said that his focus has been on the longer term threat, quote, when AI gets
more intelligent than us, can we really expect that humans will remain in control or even
relevant? End quote. There's also a lack of an accepted framework with probabilities
of something happening or not and timeframes needed for mitigating a threat. UC Berkeley
Professor Stuart Russell says it, quote, isn't about imminence. If someone said there's a four-mile
diameter asteroid that's going to hit the Earth in the year 2067, we wouldn't say, remind me
in 2066 and we'll think about it, end quote.
Now, it has been said that logic is always on the side of the naysayer, but on the positive side,
the article says, quote, most people I spoke with say their timelines to dangerous systems have
actually lengthened slightly in the last year, an important change given how quickly the policy
and technical landscapes can shift. Let's hope for that. All right, that's it for this episode.
Thank you all for being with us.
HPC Newsbytes is a production of OrionX in association with InsideHPC.
Shaheen Khan and Doug Black host the show.
Every episode is featured on Insidehpc.com and posted on OrionX.net.
Thank you for listening.
