@HPC Podcast Archives - OrionX.net - Analyst Roundtable: AI, Social Media, Bitcoin, Quantum – OXD34
Episode Date: December 29, 2025
Analyst roundtable covering the big ideas in technology that are changing the world, with Adrian Cockcroft, Stephen Perrenod, Chris Kruell, and Shahin Khan. In this episode: - Agent swarm coding update - AI bubble? - Australia social media ban - Modelling Bitcoin bubbles and volatility - Supercomputing 25 Conference (SC25), TOP500 - European supercomputing - Q2B Conference, quantum computing modalities - RISC-V in servers
Audio: https://orionx.net/wp-content/uploads/2025/12/OXD034_ART-10_20251228.mp3
Transcript
Is this AI thing a bubble?
And it wrote me some SOPs that worked like first time.
It was immediately useful, and now the way I evaluate a repo in my benchmark is an SOP.
Instituted, legislated a ban on social media access for children, for teens under age 16.
where you can look at the data center from above and it looks just like a motherboard.
The whole point about Bitcoin, the reason why it has a power law, is actually two power laws
multiplied together. One is Metcalfe's law of networks and the other is the power law growth
in the adoption. Welcome to the OrionX Download podcast. Join Shahin Khan, the OrionX analysts
and special guests as they discuss and simplify the big ideas and
technology that are changing the world. Thank you for being with us. Welcome to another session
of the OrionX analyst roundtable. I have to mention every time that it spells ART because we do
the art and science of all the buzzwords that are out there and the technology trends. With me as before
are Adrian Cockcroft, Chris Kruell, Stephen Perrenod. How are you, gentlemen? Hi, thank you.
We're covering all the major time zones today, scattered all around the world. So without
further ado, which is what we usually do: Adrian, you've been working on AI-aided coding,
vibe coding, whatever the terminology is right now, and it seems to be making significant progress,
so please catch us up. Okay, so since the last time we spoke, I gave a talk at QCon San Francisco
called Managing a Swarm of AI Agents for Fun and Profit, and it went very well. It got one of
the highest audience feedback ratings that I've ever seen, like a 99% positive rating.
One person thought it was average.
So, you know, I've got to go find that one person and find out what that one person
didn't like about it.
But, you know, I think I pushed a bunch of buttons that people like.
It was sort of very practical but also sort of leading edge,
and it sort of tied into a lot of people's interests.
So that was a good session.
During that conference, Google and OpenAI released new versions of their coding agents,
and later that week, Anthropic released a new version of Claude.
Benchmarking that new version of Claude,
it did the same task in about a quarter of the time and a quarter of the number of tokens,
which is roughly what they were saying.
The speed up is not just a few percent.
It's like they're collapsing the amount of thinking required to get an outcome.
And it just seems to be better.
So that's just like an average month in the world of AI coding.
The tooling is getting better faster than you can keep up.
Coming out of that, I wrote a blog post, a write-up on that.
And I also created a benchmark, as an organization on GitHub called brazil-bench,
where I've put a standard task, which is to teach an AI
agent about Brazilian football by feeding a knowledge graph into it and then being able to ask it questions.
So that's the task, and then I'm running that task against multiple different
tooling, and you can consider it a repeatable task, basically. That's the idea. So I've been playing around with that,
and I'll do a little bit more on that, but that's been most of the sort of coding things I've done in the last
month. It was an interesting conference; I met a whole bunch of good people, and it's a good conference for
developers to sort of track where things are happening in the space.
Do you characterize this as a major trend?
Yes, the rate of change isn't slowing down.
We're now seeing second-order effects because the coding agents are now good enough
that they're actually able to build new things really quickly.
So there are people like Reuven Cohen, who came out with yet another new thing in the last few weeks,
which seems really useful.
It's a replacement for the pgvector
plug-in for Postgres, but it's called RuVector, and he's written it in Rust, and it's out there.
And the next thing I have to do is go play with that.
It's basically a graph neural network running on top of the database, or it sort of replaces the
graph database side of things.
So it's just, you know, a brand-new thing turns up that looks useful because he was able to
write it in a few days.
Normally that would have been a much bigger development.
So we're seeing that sort of thing happen, and we're seeing more of the tooling
getting faster and better pretty quickly.
So the problem with that, which is part of the subject of the talk,
the fun and profit part, was you have to be playing with this
and exploring and experimenting on projects that aren't production,
whatever you call non-production, like pet projects.
That's the fun part.
You've got to have some things that you can play with
to try and figure out what works and what doesn't work
because the boundary of what is working is moving back really quickly.
So you have to be experimenting on the other side of that
boundary with things that may not work. And once you find something that does work
as a sort of repeatable pattern, then you go to the for-profit bit; you can actually
use it to go build something. So without that playing around and spending a good
amount of time experimenting, you're going to get stuck in a dead end. You're
not going to be tracking what's happening in this space. So that's sort of the way
I was sort of trying to push people in that direction.
One observation I've had is that there is a how-to
to doing AI; simply because it's AI, the expectation that it is easy to use and
it's just going to work doesn't work out. You need to know how to do it. So the answer to the
question of, are you telling me I don't know how to use AI, could very well be yes. You kind of do
need to know how to do it. Do you see that? Is that valid? I think you need to know how to manage
developers; that is the skill set that I'm finding is the one that really drives successful
outcomes. Because the human developers do most of the stupid things that the AI agents do as well.
So you have processes wrapped around that that make you have a good outcome. And that's basically
what it looks like. Chris, you were going to say? Yeah. So, you know, Adrian, you bring up a good point
that people need to be playing with these things all the time, right? Because of that, the rapid rate of
change. Do you see that rate of change as being a barrier in moving into production or getting
utility out of AI with what people are trying to do in the enterprise, for example.
Yeah, we've mentioned that before. I call it enterprise indigestion. The rate of change is too
high for enterprises to absorb and deploy new things. And the people that are able to keep up
with the rate of change here are startups and the, whatever scale, digital natives, or
whatever you call the sort of Netflixes of this world, who are actually deploying these things
rapidly and figuring out platforms for deploying them successfully, and building or figuring out where
it works. Yeah, there's that catch-22, though, of, and maybe, maybe the big guys
just get pushed to the sidelines by these disruptors coming along, the smaller, more agile
organizations, but there's that catch-22 of you can't stay away from it, but on the other
hand, what you're deploying on a larger scale is immediately out of date. So that's an interesting thing to
follow, especially as we see the business models, right? Anthropic pushing more toward enterprise than
consumer, ChatGPT doing a bit of both kind of thing. I'm not sure where Gemini lands, but
part of what you said is that the skill set required is managing developers, not being a developer.
That points to this tool essentially replacing developers. And if you are
a developer, you're not going to mingle well with it. And if you are a manager, then it's easier,
because you just replace them all with the agents. Does that work, or do you still need
an actual developer skill set profile in the organization somewhere? I think you do. And there's
an analogy that I have in my talk as well, which is that if you go back to cloud, we had what I'd
call a ticket-and-click-it operations person. You send them a ticket and they click buttons in
VMware to make something happen, or, you know, they go and install a thing in the
data center, and that was the operations cycle.
And nowadays we have like DevOps and SREs and some of them are the same people that retrained
themselves.
So they know how operations works.
They have probably, you know, physically carried a machine into a data center at some point
in their past, but they don't need to do that.
Now they just need to understand what's involved in that.
So you have enough memory to have, like, the architecture of the solution
in your head, but the operation of that is now happening with automation in cloud.
So we called that no-ops for a while, but some people didn't like the idea of the word no-ops.
So I said, well, maybe what we have coming up now is no devs, because I don't actually,
I have developers now on call.
If I want five developers, I type something into Claude Code, and five developers appear,
and they run for half an hour, and they're done, right?
I don't have to go and hire five developers.
Developer as a service.
Yeah, the developer capacity is available on demand.
The question is, can you manage that to an outcome given that they're effectively off the street developers without a lot of context on your project?
So then the question is, can you feed them the context?
That means they come in now and they are productive.
But the onboarding process effectively of a new hire has to be compressed to the point where that is automated.
So we're just automating that part of the flow.
And then the question is, do you get an effect, a good output from that?
And there are parts of the problem-solution space that are quite reliable, like building relatively simple Python scripts.
I'm getting 100%. It just works. If it doesn't work, then I'm surprised.
And then there's other things that are more complex, which are sort of beyond the boundary of what currently works.
And, you know, there's everything in between.
So, as that boundary moves back, then the thing is,
can I automate the management of these agents?
And I call that no man, if you like.
Because if I can automate the low-level management away, I don't need a line manager.
I have a management agent that is watching over these agents and making them do the right thing.
So I'm experimenting a little bit with some tools around that.
So what Amazon came out with was an SOP tooling for agents that's basically standard operating procedures,
and it's a framework for building these.
I pointed Claude at the blog post about it.
It read that.
It went, oh, I see how this works.
and it wrote me some SOPs that worked like first time.
It was immediately useful, and now the way I evaluate a repo in my benchmark is an SOP.
Then it generates the same output, and I've used it on three or four repos now,
and it comes out with the same structured output because the SOP tells it what the structure should be.
So now you've got repeatable tasks, but still with the ability to innovate within that repetition based on what it sees.
So the innovation isn't a fixed recipe, but it's guided by this document that basically says you must do this, and you should do this, and you should think about these things.
Anyway, that came out in the last month: the Strands SOP tooling from AWS.
I tried using it.
It worked first time.
You know, that's kind of the world we're living in now.
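As a concrete illustration of the SOP idea Adrian describes, here is a minimal sketch; the SOP text, the EvalResult fields, and the run_agent callable are all hypothetical, and this is not the AWS Strands API.

```python
# Minimal sketch (not the AWS Strands API): an SOP-driven, repeatable agent task.
# The SOP text, the result fields, and run_agent() are hypothetical placeholders.

from dataclasses import dataclass

REPO_EVAL_SOP = """
Standard Operating Procedure: evaluate a code repository.
You MUST: list the languages used, summarize the build/test setup,
and score documentation quality from 1 to 5.
You SHOULD: note anything unusual about the dependency graph.
Return your answer as JSON with keys: languages, build, docs_score, notes.
"""

@dataclass
class EvalResult:
    languages: list[str]
    build: str
    docs_score: int
    notes: str

def evaluate_repo(repo_url: str, run_agent) -> EvalResult:
    """run_agent is any callable that sends a prompt to a coding agent and
    returns parsed JSON; the SOP keeps the output structure stable across
    repos while leaving room for the agent's own judgment within it."""
    raw = run_agent(prompt=f"{REPO_EVAL_SOP}\nRepository: {repo_url}")
    return EvalResult(**raw)
```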
Amazing, yeah, absolutely.
So while all of this big trend is happening, there's also
increasing conversation anyway, if not actual worry: is this AI thing a bubble?
Are we just getting too passionate about this?
What's your take on that?
We talked about this last month, where I saw there were some things written by Paul Kedrosky.
And I think those were amplified, basically along the same lines
but with a bit more detail around them, by Ed Zitron in a blog post that's making the rounds right now.
and it's basically saying, is
Nvidia an Enron? And, like, maybe they
aren't an Enron, but what are they,
and what are they doing
that is dodgy? And there's a bunch of
things in there that
ask a few
questions, and most of it's okay,
but the thing that I picked up from it that was
the big question was:
Nvidia says that it shipped
a certain number of Blackwell chips,
Blackwell GPUs. But if
you say that each of those should be
running somewhere, then they would be consuming, say, two kilowatts per chip to be running effectively,
because it's over a kilowatt just for one Blackwell, plus the rest of the machine,
so it's of the order of two kilowatts per Blackwell. And he says that if you go looking
at the deployed data center capacity of everybody, and the energy supply, he can't find
the energy that would be needed to actually be running that number of Blackwells. So then he postulates that
maybe they're just sitting in a warehouse somewhere, and they've been bought because somebody
said, I need to buy all these, but not actually deployed yet, because the data center capacity
doesn't exist to deploy them yet and the energy required isn't there yet. So that was a new thought,
that maybe there's some actual overbuying; Nvidia has sold the chips to somebody, but they're just
sitting somewhere, they're not actually out in the world. And of course, you know, Rubin's coming
out, so those Blackwells will be obsolete maybe before they're turned on, even.
I don't know if that's a thing or what you guys think about that.
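A rough sketch of that energy argument: only the roughly 2 kW per deployed Blackwell comes from the discussion; the shipment count and identifiable-power figures below are made-up placeholders, not reported numbers.

```python
# Back-of-the-envelope check of the "where is the power?" argument.
# Only the ~2 kW per deployed Blackwell figure comes from the discussion;
# shipped_gpus and available_gw are hypothetical placeholders to show the math.

KW_PER_DEPLOYED_BLACKWELL = 2.0   # ~1 kW for the GPU plus the rest of the machine

def required_gigawatts(shipped_gpus: int) -> float:
    """Continuous power needed if every shipped GPU were actually running."""
    return shipped_gpus * KW_PER_DEPLOYED_BLACKWELL / 1_000_000  # kW -> GW

shipped_gpus = 5_000_000          # hypothetical shipment count
available_gw = 6.0                # hypothetical identifiable data-center power
needed_gw = required_gigawatts(shipped_gpus)
print(f"needs ~{needed_gw:.1f} GW vs ~{available_gw:.1f} GW identifiable")
# If needed_gw far exceeds available_gw, some of those chips can't be running yet.
```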
Is that credible, or does it make sense?
I think it makes sense.
I wonder whether, did he give any indication,
whether it was more data center build out per se or just even worse,
the availability of electricity to power the data centers?
I think he was looking at the power consumption as the main driver
and just trying to find where that could possibly be,
and just not finding the major supply.
The people who have enough capacity deployed to take a water-cooled rack,
that's a relatively limited number of places to look.
And he was looking at what he could find about the capacity there.
So I think it's a good question.
I think over the coming months, it will become clearer what's really happening there.
But that's one of those things that's kind of keeping the bubble inflated,
this kind of keeping the price high.
And if there was actually a glut in the market,
then the price would come down.
So that's sort of the thing that tends to blow up a bubble,
when supply actually starts to exceed or meet demand, right?
So the question is, can demand stay ahead of supply
enough to keep the price high?
Right. I guess the worry is whether there's a glut of GPUs suddenly,
and then everybody has as many as they want,
and then the price drops, et cetera.
And that would be the deflation of whatever we
would then call a bubble. But I see this as unlikely. I believe it is highly unlikely that organizations
as rigorously managed as these big cloud providers are would put themselves
in a position of having such a magnitude of overbuying, warehousing them and not actually
selling them on the open market to get rid of them, and just sitting on them long enough
for this to be an issue.
I feel like Nvidia is showing the receipts.
They've got their revenue.
People are paying them.
That money is real.
The chips that are getting shipped are real.
Their roadmap is real.
Like you said, Ruben is coming.
All of this stuff is going to be past generation.
They're still useful.
But if you're a big player with enough wherewithal to hoard them,
you are also the kind who's going to want the latest.
So it doesn't hang together for me.
I feel like, you know, there is obviously a lot of passion and, you know, what is the right
word, exuberance in the market. But I think if we get anything at all, it might just be like
a little bit of a hiccup rather than anything more; this does not look like a bubble to me.
So clearly we have a refresh, you know, refresh cycle on the horizon. That'll help
maintain some helium in the balloon. One of the things that occurs to me is that the lens
that we're viewing quote-unquote AI through is too narrow.
And yes, you can point to Nvidia as helping prop up
whatever slice of market you want to talk about.
But the real action, and all of us come from hardware backgrounds,
we know that the real action is how this stuff gets used.
And so those trickle-down effects in the software and agent and et cetera world
are really where AI is going to bear
a lot of fruit in the coming years.
So I just, I don't know that we take
a holistic enough view of AI
when we talk about a bubble.
There's been another signal here.
I'm not sure exactly how to interpret it.
There's now a DRAM shortage.
And the news was that, is it,
Micron is shutting down Crucial
because they don't have enough supply to supply people.
Consumers.
And they're going in...
The way I understood it, and I did not look too deeply,
was that they were redirecting their capacity towards servers,
and that the power issue has led to server providers and the Nvidias of the world
to look at cell phone technology memory because it is way lower power.
And if you redirect those to servers,
you've now suddenly got a lot of higher-margin capacity
versus lower-margin capacity.
That's my take on it.
And there's a big set of orders that are coming for it.
The price is going up,
and that's why we have the memory shortage.
The other thing I've seen is that because everybody's kind of seen
that they have to wait a long time to receive what they order,
that they're placing orders in advance of themselves needing it,
and that's going upstream.
it's moved past chips to memory, to motherboards, to actual systems.
So they're going upstream to lock up capacity for the next, like, you know, 18 months.
And then they're kind of sitting pretty.
And that is also a good competitive move if you have the wherewithal to do it.
But it all rests on your confidence that the demand is going to be there, right?
Yeah.
And then the other side of this.
Sorry.
Yeah.
Well, I'm sorry.
But we're seeing also a technology shift taking place, right?
So we've got DDR5 and LPDDR5X, et cetera, coming online.
That's it.
LPDDR is what I mean.
Right.
So that speaks a little bit toward your point about moving toward server and higher up the tree types of implementations.
So I think we're seeing that at the same time that people are trying to figure out how to build what they're trying to build.
Okay, so as an indication of it not being a bubble, everybody's doing their own AI chip now.
Anybody who can is.
Obviously AWS, Google, Microsoft, Meta; and Marvell has parlayed that into a big growth business
as it co-designs with these guys and fills in all the blanks that they might have in terms of networking.
And Nvidia has been more liberal in licensing their NVLink technology,
to accelerate that.
So, Adrian, you tracked re:Invent, as you normally do, the AWS conference,
and they talked about their next generation AI chips and CPUs as well.
Do you want to say a little bit about that?
Yeah, just briefly.
I didn't go in person this year, but I watched the keynote, the main keynotes,
where they announced most of what they were doing.
And the takeaways from that: their own chip is called Trainium.
And the technology that they've targeted up to now has been one step behind Nvidia,
so they're not competing directly with Nvidia for capacity.
And they're aiming at something that looks like about half the performance per chip,
but less than half the power, and less than half the cost.
So what they can say is, overall, use twice as many chips,
but it costs less and it uses less power.
So that's been their sort of strategy, so that they don't get into competing for the same wafer capacity kinds of issues.
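As a quick illustration of that positioning: the one-half performance ratio and the "twice as many chips" conclusion come from the discussion; the specific power and cost numbers below are made-up placeholders.

```python
# Illustration of the "half the perf, less than half the power and cost" positioning.
# The 0.5 performance ratio comes from the discussion; the power and cost values
# for the alternative chip are hypothetical placeholders.

baseline = {"perf": 1.0, "power_kw": 2.0, "cost": 1.0}   # reference GPU, normalized
alt      = {"perf": 0.5, "power_kw": 0.9, "cost": 0.45}  # hypothetical "half-ish" chip

chips_needed = baseline["perf"] / alt["perf"]   # 2.0: twice as many chips for the same work
total_power  = chips_needed * alt["power_kw"]   # 1.8 kW vs 2.0 kW
total_cost   = chips_needed * alt["cost"]       # 0.9 vs 1.0
print(chips_needed, total_power, total_cost)
# Same delivered performance, somewhat less power and cost, and no fight
# over the same leading-edge wafer capacity.
```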
So that was Trainium 2, and Trainium 3 seemed to be following that path.
And the outcome from that is that they do have a very large capacity up and running.
They were talking about that.
The main customer they talk about is Anthropic.
So a fair amount of the capacity that's being used to run
Claude is running on Trainium, and they're learning how to roll
this out in very high, very high capacity.
They say they have more gigawatts of this than anyone else, but, you know, whatever.
They then announced that Trainium 3 is now shipping, they previously announced
that, I think, a year ago, and they announced the specs for Trainium 4, which is in a more aggressive
feature size, you know, they're going for a more aggressive process and, you know, more capacity,
and it looks, you know... So then, again, I think they're following the same strategy. But the thing
that was new, that I hadn't seen before, was they licensed NVLink from Nvidia so that you'll be
able to use Trainium 4 in the same environment, the same sort of circuit-level environment,
that the Blackwell and Rubin chips are using. So
it becomes part of the
Nvidia chip level ecosystem
because NVLink is how
Nvidia glues together its own chips.
I think that's a pretty significant change
because I hadn't seen
NVIDIA licensing that
previously, but there was definitely
talk a year ago that
AWS was going to figure out how to get
more closely integrated with NVIDIA,
so that was sort of the pre-announcement,
if you like, and this was the actual,
okay, we've licensed this
and it's in our next-gen chip.
They had done that for their GB10 chip,
which is in their little desktop Spark box,
if I remember correctly,
to MediaTek, was it?
And so I think what they're doing with NVLink
is pretty significant strategically,
because while they're licensing it,
which is, okay, that's kind of new.
They also announced something called NVQLink
for connectivity between
a traditional classical computer
and a quantum computer with GPUs inside that would be used for running error-correcting
code, which can be very computationally demanding and by definition needs to be real-time.
And putting that inside and positioning it as a connectivity thing rather than an error-correcting
thing was very interesting.
The idea of becoming more of a standard in the fabric space is quite interesting.
There were a lot more announcements from AWS,
mostly in the software stack and various things they're doing with partners.
So they're diving in, and trying to get AI to be digestible by enterprises
is probably the best way of looking at that.
Similar to Trainium, obviously Google was the first to do their own AI chip, called the TPU,
for Tensor Processing Unit, and they're on their seventh generation of that.
The latest one that they did, called Ironwood, also comes in its own rack.
And there were rumors that some of the neoclouds, as well as Meta, were in conversations with Google to get access to those systems.
Sort of the implication was, like, on-prem.
It's not clear whether that's the case or not.
For the neoclouds, of course, it'd be on-prem for the neoclouds.
But then my take is that the neocloud emerging as a customer class really is what the news is there:
Google does not need to become a traditional server provider.
It can just be a new cloud provider.
Like, you need the data center here, data center as a system,
like I've been saying for the past 20 years, right?
We had slides of that when we were at Sun,
where you can look at the data center from above,
and it looks just like a motherboard, right?
I mean, everything is going to eventually look like a motherboard.
I think that's another rule that we can talk about.
But anyway, that became pretty interesting, and people were wondering, is this the beginning of real competition for NVIDIA?
I don't think so.
I think it's been carrying on exactly the way things have been.
NVIDIA has always had competition.
Certainly the cloud providers doing their own AI chips consumes a bunch of AI workload that otherwise would have gone to mostly Nvidia and AMD and such.
And in the meantime, AMD is becoming stronger and stronger.
They got a deal going with OpenAI, so they got that endorsement.
I mean, if you're doing AI, OpenAI and Nvidia are as good an endorsement as you can get, right?
So Intel got it through Nvidia and AMD got it through OpenAI.
Google is also renting data centers from CoreWeave that have Nvidia chips in them to bolster their data center capacity.
So everyone is after everything everywhere, and they're all trying to build it up.
Again, going back to Ed Zitron's and Paul Kedrosky's case: the revenue needed to sustain the level of build-out right now is not feasible. People would have to be spending a ridiculous amount of money in a few years' time to fund what people have on their books. Basically, when that happens in a market, what it means is that when the bubble bursts, some amount of that just doesn't happen.
There's basically overbooking of revenue and capacity, because people are booking more than they really need, because they don't know, right?
So it collapses down to something sustainable at some point.
And like with every bubble, you can point at a bubble, but you can't say when it's going to burst, right?
That's the thing.
You can't time this.
It might be tomorrow, it might be in five years, but you can point at a bubble and say it has to burst at some point.
Well, I think that reduces the growth rate,
but it is not going to be a problem, primarily because this stuff isn't evergreen.
The stuff does get aged and it does become old; as much as you can use them, and maybe you do use
them, the power and cooling and all that is getting better and better.
So I think that that will... I mean, in the scenario where a whole lot of people,
a whole lot of companies who are buying or using GPUs, can't sustain it
because they're not profitable,
and their funding source demands that now,
I think even if they all stop,
that capacity isn't going to just turn over.
Here's the counter-argument to that.
If you say that the current generation
versus a generation in three years' time,
two or three years' time, is, say, four times more powerful
at the same power, right?
It's sort of doubling per year or something like that.
And you're currently
depreciating your GPUs over six years, which is how common hardware is depreciated, over six years.
That's what it's going on the books at, you know, at the hyperscalers.
In a few years' time, you are power limited.
It doesn't make sense to be running an old chip that consumes that much; it makes sense to just shut it down, turn it off, get rid of it, and put in the new one.
It doesn't make sense to run the old ones, because they consume too much energy and you're energy limited.
So there's a built-in thing, which means that every two or three years
it actually makes sense to just get rid of the old stuff completely.
And so when you're depreciating over six years, there's no useful value left in it.
You need to be depreciating it over three years, and even that's probably a stretch, right?
It's not going to be economic to run these things.
So when you're power limited, that drives that behavior.
And so the question is how quickly we can end up not being power limited
or what other constraints you have.
And this is true across the board.
Even in embedded edge devices, you're always power limited, so the new guy is going to win.
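A toy version of that power-limited replacement argument: the roughly 4x-at-the-same-power improvement over two to three years and the ~2 kW per GPU come from the discussion, while the power budget and throughput units below are placeholders.

```python
# Toy version of the power-limited replacement argument. The "~4x more
# performance at the same power in two to three years" ratio and ~2 kW per GPU
# come from the discussion; the 10 MW budget and throughput units are made up.

POWER_BUDGET_KW = 10_000        # hypothetical fixed data-center power budget
KW_PER_GPU      = 2.0           # roughly the same for old and new generations

old_perf_per_gpu = 1.0          # normalized throughput of today's generation
new_perf_per_gpu = 4.0          # ~4x at the same power, a couple of years out

gpus_that_fit  = POWER_BUDGET_KW / KW_PER_GPU
old_throughput = gpus_that_fit * old_perf_per_gpu
new_throughput = gpus_that_fit * new_perf_per_gpu

# With power as the binding constraint, keeping the old chips plugged in costs
# 4x the energy per unit of work, so the rational move is to swap them out well
# before a six-year depreciation schedule would suggest.
print(old_throughput, new_throughput, new_throughput / old_throughput)
```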
Let's move to another topic, and that was what was being live streamed by the BBC last night
when Australia's social media ban went into effect.
And Chris, you were tracking that.
What is your take?
Yeah, yeah.
Yeah, so first let's talk about what it is.
So, it took effect December 10th, which, you know,
in some of our time zones was December 10th,
and it's yesterday in Australia.
But so they've instituted, legislated, a ban on social media access for children,
for teens under age 16.
And they're putting the onus of enforcement on 10 named companies.
Now, this is a dynamic list, they've said it's a
dynamic list, but it's, you know, the usual suspects, right? We're talking about
TikTok, Facebook, Instagram, Threads, Twitter or X, YouTube, Reddit, Kick, Twitch, and
Snapchat. Those are the named platforms that need to enforce this ban. Now they
also explicitly excluded platforms such as Discord, Google Classroom, Lego Play, LinkedIn,
which, I'm sure, big deal, but just, you know, I mention Discord,
or WhatsApp, YouTube Kids, Pinterest, Roblox, Lemon8, which came out a few years ago as an alternative
to TikTok when the TikTok ban was a live thing.
It's a whole other topic.
Steam, Steam Chat, Messenger.
So they segregated the social platforms going after the big guys.
And again, the onus is on the social media companies.
And, you know, how are they doing this?
Some of them are being a little hazy on how they're determining what the age
is. Sometimes it's self-reported, or it's a combination of self-reporting with
behavioral characteristics. There's a company out there called k-ID, which has
built their business around validating the ages of account holders. So the
people affected could download their content from these platforms before their
access is cut off. The live stream
on the BBC, which I was surprised to discover, said that, you know, a lot of people still
had access. The actual shutdown is trickling out. And the platforms will, in almost every case,
freeze the account until that person is determined to turn age 16. And no new accounts would
be able to be formed. So that's the essence of what's taken place. And, you know, I've been
thinking about this topic for a while, because, you know, a lot of people have been talking about
the damaging effects of social media and the addiction, right? And as we were joking before we came
on live, um, recording, this addiction and these effects are not limited to teenagers and tweens
and, you know, younger kids, but the focus has been on the harmful effects on children.
And so, you know, countries such as Australia, which is being closely watched by other countries, you know, they don't run into First Amendment issues because they don't have a First Amendment.
So it's a little easier for them to legislate that sort of thing.
You know, these things have tried to come up here in the States, but, you know, the First Amendment basically prevents the state or the
federal government from legislating access, but there's nothing preventing companies from themselves
self-imposing age limits. But at the end of the day, the companies are profit-motivated,
and advertising and addiction drive their top line and bottom line.
And it's an important demographic, so they have entire departments focused on the care and
feeding of that particular market segment and keeping them engaged.
Yeah, absolutely. And in the adult world, we hear about terms like social media detox and
those kinds of things. And people talk about the benefits of unplugging and so forth.
Fasting. Yeah, fasting. Touch grass, right? So, I mean, my take on this is that I think there should be
some form of age gating. I think that companies should take on more responsibility in monitoring who has access
and so forth. But it's not totally reliant on the companies themselves. There's a social aspect.
There are various schools and school districts that have instituted no phone policies, in part
because of the social media access during the day. But at the end of the day, I think it's down
to parents and schools and other aspects of society to help create a cultural norm where social media
is pushed a little bit to the background.
I don't know, is the horse out of the barn?
Perhaps, but I think it's a challenge.
But at the end of the day, we should be paying attention
to the damaging effects.
Steve, you want to chime in here?
Well, I find it interesting that AI is not on the list.
I don't know if there were any comments about that in any of this.
I haven't seen any commentary about AI with regard
to this limitation, but we know that AI is starting to manifest some dangerous behaviors
for some teenagers who are, you know, befriending it, right, in the wrong way. Yeah, so it's
quite notable as an omission, and I just wonder whether they're starting to look at that as
well. So people are thinking about it, talking about it, and yes, there have been cases where
AI gets blamed for suicides, actually. So that's clearly a dangerous step. My perspective on this
is that the legislation, and talk of these kinds of legislation, comes a little bit more around
peer pressure and groups and so forth, and the naming and shaming and the peer pressure to
belong. I think that's sort of the first order of business. And these kinds of things have been
talked about for years. So that conversation is further along than the presence of AI in our
culture and in our lives. So I wouldn't be surprised if AI does get pulled into it, but yeah,
that's more on an individual level. There's also the business value of knowing your customer
and the age verification is valuable. So there's pressure from the social media companies that
want to verify age; you know, it's valuable to them to have people that are verified, because
one way of doing it is showing you have a credit card in your name, for example. So now you've
got a billable mechanism, right? But that proves that you're old enough to have a credit
card. And there's a bunch of things like that. But I did have one person who was on X
get a message saying they're asking me to prove my age, and he says, well, this X account was
a Twitter account that I opened over 20 years ago, so I obviously am over 20 years old.
Why are you asking me to prove that I'm over 16 on a 20-plus-year-old Twitter account?
Well, that does lead into, you know, how effective can this sort of ban be?
And kids are going to find their ways around it.
I mean, you know, there have been age limits on drinking, for example.
And we all know.
We were teenagers once, and we know that any
motivated youngster is going to find their way to doing what they want. But I
think... but it does reduce it. Yeah, this has given parents a tool, for
example. Hey, you know, I think you're just training kids to use VPNs, and the
ones that don't have parents helping them do that are going to end up
with the very scammy VPNs, which are much worse than the social media
companies, you know, the free VPNs; nothing's free.
Right. So there are some... I just think that's all.
And then, if AI isn't banned, you just go and ask ChatGPT how to set up a VPN
and find me a social media service that is not banned that no one's heard of,
and they all end up on Mastodon or something.
Exactly. Exactly.
So the interesting thing to me about this is the impact of technology on all aspects of life,
in this case, parenting,
because the discussion that ensues is, is this something that you regulate or is this something
that is the responsibility of the parents? And if you don't like it, then do something about it.
So that's kind of one side of the discussion. The other side is that if I'm talking about
weaponized information and it is an actual weapon, I cannot guard against a weapon.
Both sides need to be regulated, in other words. But it's an indication of how technology is creeping into
all aspects of life. And in this case, I consider it an interesting experiment that Australia is
running. And let's see how this pans out, since they're doing it anyway; regardless of which side
you are on, you are not there to vote for it. And watching what happens would be interesting. I do expect
that it will have an impact. It will reduce it. Those who really want to can obviously find
their way, but many won't. And deciding what to regulate, for whom, and how is tricky
anyway. Sure. And it's a burden that you're picking up for society. So you want to be
careful about it. But when you need it, then you need it. Well, then it's a pendulum, right?
I mean, we'll move from trends of more regulation to less regulation and
back and forth. So let's switch to crypto. Dr. Perrenod, what's up with Bitcoin? It dropped in
price, and for some people that was a surprise, was it?
Well, I think it was a surprise for a lot of people.
You know, within the community, the expectation has been that there would be a big bubble
in 2025.
And it's because the community, the majority of the community, I would say, has believed in
this idea of a four-year cycle.
tied to the halvings.
So the halvings happen every four years,
slightly earlier each time,
because the chain runs a bit faster
than 10 minutes per block on average,
but the difficulty adjustment is always
pushing it back towards 10 minutes.
Every fortnight, it's adjusted,
every 2016 blocks, to be precise.
So there was a halving in 2012,
2016, 2020, and 2024.
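For reference, the fortnight cadence follows directly from the block timing just mentioned; a quick check of the arithmetic:

```latex
% Difficulty-adjustment cadence implied by the figures above:
\[
  2016~\text{blocks} \times 10~\tfrac{\text{min}}{\text{block}}
  = 20{,}160~\text{min} \approx 14~\text{days (a fortnight)},
\]
% and because blocks average slightly under 10 minutes, each halving
% arrives a bit earlier than the nominal four-year mark.
```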
And the peaks in the past kind of looked like they were happening a year after the halvings,
because there were peaks in 2013 and 2017.
But in fact, this whole kind of myth, I call it a myth, others don't accept it yet,
this myth of a four-year cycle, always conveniently ignored the very large bubble that
happened in 2011, well over a year before the first halving, when Bitcoin
had an age of about 2.4 years, well before four years had come around.
They just threw that away and said, let's look at these other two.
And you have a situation where you've got, like, three events.
You have bubbles in 2013, 2017, 2021.
It's not very good statistics to then make an assumption
that these things are on a four-year linear cycle.
But the mythology was reinforced by the fact that you have presidential elections
in those same years.
And in fact, we saw a big run-up with the presidential election in the U.S.,
around that time, because the Trump administration coming in was more pro-crypto
and pro-Bitcoin.
And so there was a run-up, but then a bubble didn't materialize in the rest of 2025.
Instead, we had fairly kind of steady progress.
So if you look at things very broadly, they've adhered to this long-term power law.
And that power law is quite steep.
It's faster than the fifth power of the age of Bitcoin, approaching the sixth power.
It's about 5.7, basically.
So you put things on a log-log plot.
It's a straight line.
So that holds up.
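In symbols, the power law being described (with the roughly 5.7 exponent just mentioned) is simply a straight line on a log-log plot:

```latex
% Core power law for price P as a function of Bitcoin's age t (A is a fitted constant):
\[
  P(t) \approx A\, t^{5.7}
  \quad\Longrightarrow\quad
  \log P = \log A + 5.7 \log t ,
\]
% i.e., a straight line of slope about 5.7 in log-log coordinates.
```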
But what makes it difficult for people to be in the market sometimes is the bubbles have been very, very strong.
The volatility has been very high.
This has been coming down.
Well, I was worrying about that.
this issue all throughout this year, whether we were going to have a bubble of 2025. I did a lot of
analysis on the bubbles in the past. I saw that the energy in the bubbles was coming down
roughly with the reciprocal of the age of Bitcoin. And I separated out the behavior in the
core power law and in the bubbles themselves, and found that the core power law volatility is
pretty constant, actually, and that the volatility was much higher, but coming down, in the
bubbles. But in the last couple of weeks, I've redone an analysis of the bubble timings,
including the 2011 bubble, and then with the realization that we had no bubble in 2025.
And then what you find out is, indeed, it's not a linear cycle. It's a log-periodic kind of a cycle,
is what it appears to be.
And, you know, in order to prove that,
we'll need to see the next one happen
on that sort of log periodic schedule.
But if I tell you that the bubbles occurred at age 2.4,
and then 4.9, and at age 9,
then the next fundamental-mode bubble,
based on the Fourier analysis that yields this log-periodic parameter,
would be at age 18, 18.4.
that's into 2027, approaching the mid-year of 2027.
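A minimal sketch of that log-periodic timing argument, using only the bubble ages (2.4, 4.9, 9 years) and the multiplier of about 2.07 quoted in this discussion, with Bitcoin's genesis taken as January 2009:

```python
# Minimal sketch of the log-periodic timing argument. The observed bubble ages
# (2.4, 4.9, 9 years) and lambda ~= 2.07 are the figures quoted in the
# discussion; genesis is taken as January 3, 2009.

from datetime import date, timedelta

GENESIS = date(2009, 1, 3)
observed_ages = [2.4, 4.9, 9.0]   # years since genesis for past bubble peaks
lam = 2.07                        # multiplicative ratio between successive bubble ages

next_age = observed_ages[-1] * lam                      # ~18.6, close to the quoted 18.4
next_date = GENESIS + timedelta(days=next_age * 365.25)
print(f"next fundamental-mode bubble at age ~{next_age:.1f} -> around {next_date}")
# Lands around mid-2027, which is the forecast being discussed here.
```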
And I've done it both for Bitcoin versus Dollar and Bitcoin versus Gold,
where I first kind of discovered this.
And I will say it's a rediscovery, because long ago, Giovanni,
who was the first one to find the general power law nature,
did some of this log-periodic analysis as well.
And actually, his forecast for the next bubble
would have been, you know, next year; that there would not be a bubble in 2025.
We've got six more years of data to analyze, and both with a Fourier analysis method and with
a wavelet analysis method, and both based on dollar and gold, you get this same
result, which is this logarithmic lambda parameter, as it's called; it's a multiplicative parameter
between successive bubbles of roughly a little over a factor of two. It's 2.07. The 2021
bubble is a little bit of an anomaly because it was a double bubble and weaker.
It turns out, though, you can explain it as the first harmonic of the fundamental mode of this thing.
And this is expected if you have continuous scale and variant behavior,
which the power law is the representation of, if it has discrete things that it impacts or actually that impact it.
So as the market cap of Bitcoin rises to higher and higher levels, and it's risen by more than seven orders of magnitude, it gets whole new classes of investors.
You started out with the cypherpunks, and then you got retail, and then you got the first exchanges, and then you get the first hedge funds, and it's gone up the capital stack to where now it's dominated by institutional investors.
They're the net buyers, through the ETFs and through the treasury companies in particular.
So those are retail buyers underneath, because they're the shareholders of Strategy,
which used to be MicroStrategy, and they're the shareholders of the largest Bitcoin ETFs.
But, you know, this has dominated the net buying, and it's rising as the fourth power of age.
So it's clearly the big driver.
And as this happens, the capital pools are deeper.
You expect the volatility to go down and you've seen that happen.
But it also imposes these sorts of... when you get to these discrete tiers, the ones that lie ahead of it,
if it's going to continue to be successful, would be the very largest companies, like the Mag 7 companies,
and then governments, you know, sovereigns.
And we see sovereign wealth funds already.
So all of this, this sort of hierarchical capital structure in the fiat world that's coming into the Bitcoin space, is what imposes this kind of log-periodic nature.
And this is a phenomenon that's seen, you know, in other disciplines.
So it's in a sense not surprising at all.
So that's why we didn't have a bubble in 2025.
It's my belief.
We had a big down move of about 30%.
since the peak in early October. It's come back a little bit. We dropped into the 80s; it held in the 80,000 range and has moved back up into the 90s. The sudden drop that we had within the last couple of weeks, yeah, there are some conspiracy theories around that, related to J.P. Morgan and Morgan Stanley. Morgan Stanley are the keepers of the MSCI index. And they're talking about
excluding the Bitcoin treasury companies, like Strategy, from that index.
Saylor's gone in to talk to them.
It's not a decision yet.
But I think the decision's going to be made in January, in the middle of January.
So we'll see what happens there.
But it's not huge.
It doesn't prevent them from, you know,
being in the S&P 500 index at some point in the future, for example.
It's just that Morgan Stanley has a set of indices
that are also used around the world.
And J.P. Morgan introduced a structured Bitcoin product that basically caps how much you can
lose and how much you can make for people who are more risk-averse.
And they did that right after this drop happened.
So that's why there's been some conspiracy talk around that.
Any questions before I segue into our latest crypto super report?
If you don't say that 2025 was a bubble, how do you
characterize it? Because early in the year we were at, what, low to mid 80s, I want to say,
um, and we've come back to 90, mid-90ish, um, with a big spike in there. So if you extrapolate that out,
you know, past performance is no indicator of future performance kind of thing, but, I mean,
based on what you've said, you're implying or outright saying that 2027 is probably the next
bubble, and so that implies that we're going to see a growth of, or a value of, a Bitcoin well
beyond the 126-ish that it hit earlier this year. How do you characterize what we've experienced
in 2025 when it looks like a bubble on the chart? Well, it all depends on what kind of chart
you draw, and on understanding how much volatility there is, because the volatility
is about a factor of one and a half,
a little more than that.
And that's the one sigma standard deviation.
So, you know,
if fair price on the power law is 100,000,
if you go to 150,
you're still only one sigma away.
And if you drop to 70,000, right?
So you really have to look at things
in logarithmic terms in order, you know, to decide.
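A quick worked version of that one-sigma arithmetic, using the factor-of-1.5 volatility and the price levels mentioned in this part of the discussion:

```python
# Worked version of the "a factor of 1.5 is one sigma" arithmetic, using the
# price levels mentioned in the discussion (fair value ~100-110k, spot ~92k).

import math

SIGMA_FACTOR = 1.5   # one standard deviation expressed as a price ratio

def sigmas_from_fair(price: float, fair: float) -> float:
    """Deviation from fair value measured in log-space standard deviations."""
    return math.log(price / fair) / math.log(SIGMA_FACTOR)

print(sigmas_from_fair(150_000, 100_000))   # ~ +1.0 sigma
print(sigmas_from_fair(70_000, 100_000))    # ~ -0.9 sigma
print(sigmas_from_fair(92_000, 110_000))    # ~ -0.44 sigma, "less than half a sigma below"
```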
But I would characterize it as waves.
You know, we've had these waves,
and we've stayed close to the power
law, and everything's intact as far as the power law. Yeah, we're a little bit below it, but we're
less than half a sigma, half a standard deviation, below it. You know, fair value now is about 110,000,
and today we're about 92, and that's just not huge relative to Bitcoin's volatility. Now, if we do
have a real bubble, and these bubbles tend to have, like, sharp-peak blow-off tops, and if it happens, say,
around mid-year 2027, as is predicted both by the analysis
versus gold and versus the dollar, you could run up to 300, 350,000 or more,
and that would be in a kind of a blow-off top. But you would expect
prices to pick up considerably, you know, during 2026, particularly late in
2026. And just to get the sort of wave motion
that you get, not including the blow-off top,
it would not be unusual to see
150 to 200,000 by the end of next year.
But a lot of people thought that was going to happen this year,
and there are all kinds of forecasts
that people were putting out.
Actually, Tom Lee nailed it.
He said 126,000.
But all the other forecasts were higher.
They were, you know, many of them were up above 300,000.
I never thought that was in the
cards. I thought we wouldn't reach 200,000, but we could do that next year if this, you know,
next fundamental mode is a reality. Okay, thanks. Well, I knew the science of bubbles was a very
complicated one in real life. It looks like it's pretty complicated for markets as well.
It requires FFTs. That already tells you.
Yes, and there are these things called log-periodic power law bubbles,
and they have seven parameters.
But I remember, Adrian, you were using FFTs for capacity planning analysis,
like, more than a decade ago.
I remember a talk you gave on that.
Yeah, I mean, anything periodic, FFTs are how you analyze the frequency spectrum.
It's just, you're just operating on frequencies instead of over time, right?
Right, right.
I did Fourier transforms as part of my physics degree a long time ago, and I built things on it.
That's right.
That's right.
But when you're looking at something that's periodic, that's where you'll end up.
Even if the period is one.
Well, if the period is one, then if it's repeating continuously, you can just use sine waves or something.
Right.
It's a constant frequency, right?
But it's further evidence that life is good.
Yeah, I mean, if you're looking at distributions of things
and how they're varying, then, yeah.
And then logging, you know, every time I see something
where the elements are multiplied together
to have an effect, you take logarithms,
and then things become added together,
and you can separate out effects that you otherwise couldn't.
So there's a whole bunch of tricks that are used
to make analysis tractable.
Right.
And this is sort of the physicist's toolkit,
right: take logs of it, take FFTs of it.
Okay, does it look simple yet?
Okay, and then you do the analysis and, you know, take your exponents and turn it into an inverse FFT, and you've got this complicated-looking thing, but underlying it is a set of modes and ratios.
So that's, yeah, I always go back to the book from the Santa Fe Institute, Scale, by, is it Geoffrey?
Geoffrey West.
Yeah, there's a lot of good... if you want to see some good examples there, he basically, they take a lot of physical and biological phenomena and economic phenomena and they reduce them to straight lines with very characteristic exponents.
Well, the whole point about Bitcoin, the reason why it has a power law, is actually two power laws multiplied together. One is Metcalfe's law of networks and the other is the power law growth in the adoption. So to first order, you know, Metcalfe's law says the value goes as the square of the number of users,
which you can approximate with the number of wallets holding Bitcoin.
And historically, that's gone up as a cube.
So the product of those is six, right?
When you put the two of those together, you've got a square of a cube, which is six.
So it's gone up as nearly the sixth power in price and in market cap.
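Writing out that composition of the two power laws (V is value or market cap, N the number of users or wallets, t Bitcoin's age):

```latex
% Metcalfe's law times power-law adoption, as described above:
\[
  V \propto N^{2} , \qquad N \propto t^{3}
  \quad\Longrightarrow\quad
  V \propto \left(t^{3}\right)^{2} = t^{6} ,
\]
% close to the fitted exponent of about 5.7 mentioned earlier.
```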
Yeah, excellent.
I'm really delighted with the scientific approach that you and Giovanni and the team are taking towards Bitcoin.
So let's wrap with a quick update on SC25 and, you know, the RISC-V Summit and Q2B,
the conferences that we normally go to.
Adrian was doing other conferences, and Chris, you and I did it last time, so this time I was doing it.
So SC25: attendance slightly below what it was the previous year.
Atlanta had a record attendance, about 18,000, maybe even higher.
This time, I thought it was pushing 17,000,
but I think the official number was slightly below that.
But the number of exhibitors was an all-time high.
They had over 500 exhibitors.
They had to actually open up another area in the convention center to accommodate them
and provided some incentives for people to go there.
But slightly fewer people, larger exhibit area, the conference sessions going strong as they always do,
the floor looked a little bit thinner than it did before.
A lot of the exhibitors are plumbing, right, rather than software.
So you've got physically large things that you're trying to exhibit, right?
How to keep machines cool and all of that stuff.
So I think that that may be driving the real estate of the floor plan,
because you're not just showing off a piece of software.
And also these are industrial companies whose idea of an exhibit is a really large space.
They're not thinking at 20 by 20;
they're generally used to having, like, half an acre.
So the booths are really large,
and of course they've got massive pipes and refrigerators
and, you know, pumps, and there's a lot of that,
and then there's a lot of cooling units. We talked about it last year,
people like, you know, Shell and Castrol.
You're going to redefine HPC as high power computing.
Well, you know, I wouldn't mind adding another power in there,
but performance is a good thing.
But you could see people like Shell and Valvoline and Castrol
that you wouldn't see normally.
So fluid mechanics and all these natural sciences are merging with electronics,
and discussions of what happens when you immerse a wire into a fluid:
in what way does it corrode, in what way does it dissipate?
So all of those become issues suddenly.
And of course, if you're air cooling and you have a leak,
it's okay. If you're liquid cooling and you have a leak, it can be a big deal. Okay, so that was
SC25. No big news. There's a new TOP500. Europe now has its first exascale system, at the Jülich
center in Germany, and Europeans in general are proliferating supercomputers around Europe. They're
sprinkling them about, so they're not as big, but they are numerous. And it's under the European
Joint Undertaking project,
the JU, and Jülich in Germany also starts with Ju, so they kind of key off of that,
and their system is called JUPITER, for example, and other things like that.
In the U.S., it's really mostly driven by DOE, as we were saying in our pre-call,
in part because of all the massive workloads that they need to run, managing energy issues
and nuclear stockpile husbandry and all the other things that they do;
but they have a very vast area of science that they look at.
And then the RISC-V Summit: I couldn't go this time in person,
but I attended a webinar that they had,
so they're going strong as before, a lot of new action.
There are server companies that are doing RISC-V systems;
obviously we know about Tenstorrent,
and those we talked to last year,
Adrian, when you and I went in person,
and there are several others.
There's a company called Ahead that is doing some good stuff.
So we expect RISC-V in the mainstream of computing in short order, let's say.
It's making its way, but it is benefiting from the road that Arm paved as it became bigger and bigger, starting from devices and going up.
RISC-V is doing the same thing, starting in microcontrollers and then going up the stack.
And then Q2B, Quantum to Business, is a conference that is put on by a company called QC Ware, a wonderful company
with a really good culture, and they do a great job with this event. It started in Santa Clara.
They expanded it to Tokyo and Paris. And then next year, it's going to be Chicago, Tokyo, and
Copenhagen. I gave a talk there about quantum and HPC as I have in previous years. They actually
had an entire track on quantum and HPC. So I'm delighted that waving the HPC flag is being
corroborated by everybody else: that HPC is indeed a very suitable
first market, and arguably the main market, for quantum computing, at least in the short term.
It's still not ready, but the vibes are shifting more and more towards applications and usage
rather than just the technology. Now, Bob Sutor, who's a well-known technologist and analyst in the
field, said that he'd counted something like 84 quantum computing companies,
and Bob Sorensen, who's another well-known and well-respected name in the industry analyst community,
said that that's more than all of the supercomputer and mini-supercomputer companies that ever lived.
So quantum computing is definitely a lot more prevalent these days than those other ones were in the past.
I read a headline today, and I won't embarrass the publication that put it there,
but it was saying something to the effect that quantum computing
is reaching its transistor moment.
I have my doubts.
I wonder how you felt.
No, no, it's not.
We're far from a transistor moment.
We have like a half dozen modalities.
As you mentioned also in the comment that I saw you wrote:
between superconducting, which is fast, but it's noisy
and it requires a lot of error correction,
and trapped ions, which are high fidelity, but they're slow,
and then neutral atoms, which are good for error correction but also slow, because, you know, for these guys
you have to physically move atoms and particles around and guide them with lasers, you know, with, like,
laser tweezers. And by slow we mean like one megahertz, right? Very slow. Now, if you're a million
times faster, then that slowness may still be okay. And, you know, that leads into what is a quantum advantage,
and for many people it's not that you're necessarily exponentially faster than
classical; it's that you're simply better in some way. And it could be better because you have
better accuracy, or less energy, or indeed you're faster. Now, less energy is also an issue, because if you
need cryogenics and you need it for, like, five qubits, and now you need a million qubits,
the scaling of the refrigeration starts adding up. So Olivier Ezratty, who is another
very well-respected name in the field, has done some analysis of
just how much energy it's going to require if you want to do quantum computing at scale,
and it could be significant depending on the modality.
So have we reached the cat's whisker radio moment?
That was a nice one, right?
Basically, a crystal and a bit of wire gets you a diode, and that's enough to...
So have we reached that moment?
Have we got to get to that before we can get to transistors?
Well, we should come up with these milestones for quantum computing.
There is such a thing as a cat qubit, however, which is better for error correction because, if I understand it correctly, it sort of intentionally favors one kind of error over the other, because that one it can handle and the other one is harder to handle.
But as a result, it ends up having better fidelity.
And then you've got many more, you know: photonics and, you know, topological, et cetera, et cetera.
Go ahead.
Yeah.
Well, as I was going to say, in our pre-call we also talked about the pace of innovation
versus expectation.
And that's a whole separate conversation.
But I think that's worth thinking about
and perhaps talking about in a future episode.
That is an important topic, and we should do it.
So if you all agree, we can conclude this episode.
Sound okay?
Yeah.
Okay. Well, thank you, gentlemen.
Thank you to our listeners.
And until next time, take care.
That's it for this episode of the OrionX Download podcast.
Every episode is posted to OrionX.net
and shared on social media.
Use the comment section or tweet us with any questions or proposed topics of discussion.
If you like the show, rate, review, and share it.
The OrionX Download podcast is a production of OrionX.
Thank you for listening.
