Utilizing Tech - Season 7: AI Data Infrastructure Presented by Solidigm - 08x07: Building Servers for the Edge with Antillion
Episode Date: May 12, 2025

No matter how we define the edge, the special requirements for use in harsh environments drive unique product decisions. This episode of Utilizing Tech, brought to you by Solidigm, features Alistair Bradbrook, founder of Antillion, discussing edge servers with Jeniece Wnorowski and Stephen Foskett. It pays to start with the intended outcome, defining the solution based on customer needs rather than with the technology at hand. This is especially true at the edge, where unique requirements for mobility, power, ruggedness, and manageability drive novel configurations. When it comes to defense applications, AI is driving greater collection of data at the edge, yet connectivity is often inconsistent, driving the need for more local processing power. Yet current CPUs can often handle inferencing in edge use cases, especially when the rest of the server, including storage, can handle high data transfer rates. Edge computers have always needed more storage capacity, and the latest SSDs can bring incredible amounts in a small form factor. Antillion is also a leader in conduction cooling, bringing liquid- and immersion-cooled devices to market for demanding applications. They are also working to bring disaggregated servers to market using CXL technology, a topic covered in detail in season 4 of this podcast. The edge is all about constraints, and this limitation drives incredible innovation.

Guest: Alistair Bradbrook is the Founder and COO of Antillion. You can connect with Alistair on LinkedIn and learn more about Antillion on their website.

Hosts: Stephen Foskett, President of the Tech Field Day Business Unit and Organizer of the Tech Field Day Event Series; Jeniece Wnorowski, Head of Influencer Marketing at Solidigm; Scott Shadley, Leadership Narrative Director and Evangelist at Solidigm.

Follow Tech Field Day on LinkedIn, on X/Twitter, on Bluesky, and on Mastodon. Visit the Tech Field Day website for more information on upcoming events.
For more episodes of Utilizing Tech, head to the dedicated website and follow the show on X/Twitter, on Bluesky, and on Mastodon.
Transcript
No matter how we define the edge, the special requirements for use in harsh environments drive unique product decisions.
This episode of Utilizing Tech, brought to you by Solidigm, features Alistair Bradbrook, founder of Antillion, discussing edge servers with Jeniece Wnorowski and myself.
It pays to start with the outcome, to think about the constraints, and to build solutions around that.
Welcome to Utilizing Tech,
the podcast about emerging technology from Tech Field Day,
part of the Futurum group.
This season is presented by Solidigm
and focuses on new technology like AI at the Edge.
I'm your host, Stephen Foskett,
organizer of the Tech Field Day event series,
and joining me today as my co-host
is my old friend, Jeniece Wnorowski.
Welcome to the show.
Thank you, Stephen.
It's a pleasure to be back, always.
Always.
It's been a lot of fun.
You know, we've done a couple of seasons here.
We've had you on this podcast and the others.
And every time you bring in somebody interesting.
Now, the most interesting thing
that we're doing this season
is we're trying to bring in folks
who are actually out there doing the work,
implementing, making this stuff happen, you know?
Exactly, and what I'm excited,
I'm excited about this series
and this episode in particular, because you're right,
we do bring in some cool partners.
However, it's not too often that you get someone that can talk about the
edge in a unique and different way. And I'm super excited about Alistair. I'm going to
give it over to him in just a moment to explain exactly what he does and who he is for the
company. But Alistair and his team with Antillion are building a really unique edge use case that can be used across many different industries.
And their vision and their focus about how they get work done and how they work with their partners, I think is incredibly different.
So we're going to learn a lot about Antillion today.
We're going to learn a lot about the edge use cases.
And then we're going to find out, you know, where does AI and storage kind of fit into all this?
Well, welcome to the show, Alistair.
Why don't you tell us a little bit about yourself?
Hey, yeah, great.
Thanks, Stephen and Jeniece, for inviting me today.
Yeah, my name's Alistair Bradbrook.
I'm the founder of Antillion.
I've been going about 10 years now.
Background of Edge, we can talk about what Edge means.
It's an interesting concept of what we all mean by Edge,
but doing this for 20, 30 years actually,
exploring how we collect and manage data
and deliver capability to different types of customers,
both in the healthcare industry
and more recently in defense.
So yeah, that's kind of me and intrigued
to see where we end up today
talking around those sort of topics.
So let's dive into that,
because I think edge is a really interesting
topic, and everybody that we've spoken to within this series and also outside it
has different definitions of what edge is. So I don't know, Alistair, can you tell us a little
bit about what your viewpoint of edge is and how do you define it with your company?
Yeah, that's really interesting, and probably whatever I define will be different to everybody
else's at the end of the day. Everybody has their own view of what edge is, and it
probably doesn't really matter to some extent.
It probably matters to the company who are designing things and the
consumers who are going to use it.
So to some extent, you just need a natural language that works between you
and those customers that you both at least understand or at least can
communicate on.
To us, the edge, well, it could be somebody jumping out of a plane.
It could be somebody leaving a boat and walking through water.
So my background at the moment is all to do with defense work.
So when we talk about edge, we actually have called it deployed for many years.
So the edge is more of a commercial construct that's come across more recently.
But prior to that, we've been talking about deployed comms within the defense industry for a long time because they
have the ability or need to take IT with them. They have to do stuff in these remote locations,
and they often don't have any comms. They don't have any power. They don't have any of the niceties
that you get in a data center or sitting at home now where I'm sitting here, where I have all of the benefit of a great broadband connection and great IT; they have
none of that. So I think that's the challenge for us is that the edge can be right at that
tactical space, where effectively what you have with you is all you have. And then you've
got phases leading up to that. And then the defense world as well as I suppose in telecoms
and other industries, you've got scaling of people and as people scale, then the types of technologies they need naturally scale
with them. And that eventually, I suppose we say, we're kind of not at the edge now, but I don't
know if there's a really clear definition of when am I not at the edge? I mean, are we at the edge
now? I mean, all of us are connected to a data center somewhere,
which who knows?
So I think it's an interesting concept
of what defines the edge.
I'm not sure that, or A, does it need to be defined?
I don't know.
Maybe for me it does because we build hardware
that we say is edge hardware.
So I suppose I need to define it,
but I suppose as a consumer,
you just want it to do what it does for the outcome
that you're trying to get to at that stage. So maybe it doesn't matter so much, but yeah,
it's something I pose every day. But we use the word edge, but I can't define it as well as
probably I should do. Hence, I've just muddled my way through that answer to try and explain
where I think this edge could be. Well, that's what I was kind of trying to get at
in the introduction.
I'll say this, like edge is as edge does.
And the reason that it's interesting to talk to you
about this is because you're not trying to come at it
and say, like, this is the box.
This is where we need to put things into.
You're trying to come at it and say, what do you need?
What kind of servers do you need?
What kind of specifications,
what kind of configurations?
And by kind of coming at it based on the customer demand,
rather than coming at it based on preconceived notions
of what it is, I think, has led to the development of what
you've got.
So maybe it would help.
Do you want to talk a little bit to kind of kick things off about the servers at the edge
that you've got?
Yeah.
Let's go back and explain where we sort of come from, where we are now, and the reason
where we are now, I suppose, is so traditionally in the
IT world and my journey with defense and IT in that mixture, and we can go back prior to that with
my work in the pharmaceutical world where we were kind of collecting data for phase one, phase two,
phase three, phase four trials, which actually is interesting because a phase one trial is
essentially a bedside trial. You are in a very small group of patients. First
dosage of a drug to a human is at phase one. At that stage, you're hoping you're just going to
get some effect, but you don't want to hurt anybody at that stage. And then as you go forward to phase
four, you're looking at mass trials on a global basis. And you can see the challenges there when
it comes to collecting that data. We were looking at this 30 years ago: how do we collect data at each of those stages, but provide it back to the pharmaceutical companies or
the CROs to allow them effectively to submit that data to the FDA. So that's where it came from.
And then when I moved into defense, my first experience of that was actually the first and
second Iraq wars, actually more the second Iraq War, where we provided the comms for the multinational force in the south of Iraq.
And the challenge there was, again, we had this splintering of groups about what type of data and
what type of comms they had, both from fixed headquarters to mobile areas, and obviously a coalition of 44 countries.
It was a very large coalition there.
And essentially what we saw at that stage,
we were carrying Compaq,
I think it was Compaq servers,
Dell weren't really even around when we were there,
we were carrying those things on our backs,
tipping out the dust on the back of a Humvee
to try and make them work.
And generally they did work. And that was an interesting factor: we realized
that commercial comms, even in a desert, in a sandstorm, in high heat, you could actually
coax into working. It wasn't always 100%. It wouldn't hit the metrics that Compaq, or Dell,
or HP would ask you to, but you could kind of get it to work. So that was kind of an interesting thing that we learned. We also learned about portability,
and actually we'll probably come back to this topic. We talk a lot in our company about
portability and survivability, which those two factors are actually quite interesting because
you could just say something's light or heavy. You could say something's big or small. But actually, can you carry it?
Can you move it to the location you want with the means of the logistics you have at that
stage?
That logistics may be you as a human.
The logistics could be a C-130 with a certain size of area you can put it onto because if
you don't go in that area, they're not taking food or
other things they need to take. So the balance of portability is really important to us because we
need to be able to get our equipment and our capability to those locations where it's important.
So we came back and we looked and we assumed that other people would be doing this.
We weren't a hardware company at this stage.
We were actually more into doing software.
We were doing loads of stuff with Computer Associates and BMC, and that's where my background
sort of was in that middle phase.
But we came back and we realized that actually nobody else was doing this.
We were either having very old comms that were very, very dated and very hard to use,
which was the defense world, or we had the stuff that was designed for data centers and there wasn't really much in the middle.
So working with Dstl, which is the UK government's research arm into the MOD,
we said, well, how about we take commercial kit, stuff that, say, Facebook were using or
starting to use in those days, and see if we could make it work for defense? And it becomes a risk balance. The risk is,
well, maybe it's not designed to work as long as some of the other kit, but actually the balance
of that is it doesn't cost as much and it's easier and it's more powerful so we can do more with it.
So we explored these ideas, assuming actually that we wouldn't carry on doing hardware for very long.
We always assumed that either somebody would get better than us or we would
just go back into our software services world. But A, we kind of enjoyed it.
I think we were okay at doing it. You know, we seem to have forged a different type of approach
within the UK and NATO and now into the DOD.
And it started this vision for us to say, could we put data center technology in a ditch?
That kind of was our mantra for a long time.
Could we take...
And sometimes actually I have to say, even to the people I talk to, that just because
we can doesn't mean we should.
Because yes, I could give you an H100 in a ditch, but do you need an H100
in a ditch because there are other challenges that come with that. I can give it to you,
but you've got to power it. You've got to cool it. So actually, it was interesting how
we have to balance what potentially we can achieve versus actually what is logical for
what they're trying to do. And that comes back to your point, Stephen, about outcomes.
And we talk about outcomes quite a lot
because the outcome should really be the driving force
for most IT decisions.
In the end, you should be thinking
about what you're trying to achieve
before you've decided what you've engineered.
However, the majority of people in technology
are either engineers or people who want to be engineers.
So they kind of liked this idea of being involved
in describing the physicality and the technical aspects of it.
And they kind of forget about what they were trying to do,
because they designed this amazing thing.
Like, what were you doing?
Oh, yeah, I just wanted to send an email.
Why did you design this server then
that can send 1,000 emails?
You just wanted one email.
Why did we not design that?
So we're constantly having to bridge that gap between outcomes and what we can do,
and make sure that people take the right choices.
And that hopefully means that they get the best value for money, the best use, the best portability, the best survival.
All of those factors come into a decision.
And actually in the commercial world, you don't generally have to make too many of those decisions.
Maybe cost comes into it and performance, but generally, you know it's a 19-inch rack.
It's going to go in the space.
That may be changing.
I've listened to quite a lot of podcasts recently where we're talking about power consumption.
There was a great one recently with PKIO and Solidigm talking about exactly that factor
of we can't just ignore the amount of energy that it's taking
now to do things. I mean, there was a quote in that podcast I was listening to which said
that the amount of energy going into just running LLMs in the future could be more
than what a whole country like India would use,
and that's kind of mind blowing, isn't it?
That we're going to use that much energy to achieve stuff.
And I think in the end, hardware will be important.
So things like Solidigm
will absolutely be important to innovate.
We will be important because we need to be able
to get the power to it correctly and cool it correctly.
But I also think the actual engineers
who are writing the software are going to be critical
because in the end,
you know, they need to be more efficient and we all need to be more efficient, I suppose,
in that chain because not one person's going to solve this. The question is, do we need another
LLM? I suppose it's a different question, but we are human and we will always want the next,
best, greatest thing. So I think the chance of stifling innovation because of climate is unlikely.
So it will revolve around us having to innovate
to ensure that as we go through that journey,
we consume less power, I suppose,
and generate less heat going forward.
Amazing.
I really love how you connected
what I was initially speaking about your company and your vision and how you look at technology differently and kind of put the human factor first, right?
Because there's lots of innovation out in tech, right? Been in it for a long time. Everyone's looking to one-up so-and-so and be the next best thing. And sometimes you lose sight and
you over-engineer and you start selling things that folks don't even really need, or you're, you
know, oversubscribing on performance or density or power or whatever, right? So I love that you're
kind of keeping that people first, human first mentality and designing and optimizing your
solution to be what is actually needed.
I think that's why you're great partners with us as well, because that's really truly our vision
and why we're just sticking to storage. You mentioned a little bit about what the people
need and what the people want, right? So we're talking a lot about Edge. We're talking a lot
about AI, that big buzzword. And do people really know what they need, right?
Everyone's trying to figure out AI,
they're trying to figure out the Edge.
So, and you brought up power and cooling,
you brought up a lot of good stuff, Alistair.
So, we'd love to get your insight on what you're seeing,
particularly with your solution,
be it that you guys dabble in software and hardware, where are you seeing
AI and kind of the needs of your customer base and how are they integrating all of this
together?
That's a great question, let's unpick it, because there's a lot in there which is really,
really interesting and fascinating.
So AI in the defense, I mean, we can see it playing out in Ukraine at the moment.
The Ukraine war is clearly the beginning of a new way of doing it.
Drone warfare, but drones are driven a lot by IT systems behind the scenes and the way we consume that data.
So actually, that is a game changer in terms of the way that we're going to have to think about how that information is both
collected. So sensor fusion is going to become a really big thing. We need to bring
all of this information in to mobile, let's call them data centers, really big trucks,
consuming both video, audio, electromagnetic, human source, other types of information needs to all be fused in real time.
And not only does it need to be then taking a model that maybe has been learned on a DGX sitting back somewhere, or multiple DGXs, no doubt.
But it's going to have to refine that model because actually warfare is never quite as logical as you would expect it to be.
Yes, there is quite a lot that you would assume was going to be part of the model, but you're going to have to keep refining
that model constantly.
And I think that that's not just, I mean,
every industry is going to have the same challenge.
I suppose the challenge we have is that we've got
to condense that into an object that doesn't have
a broadband connection.
It's probably got satcom.
Even though it's great now with, you know,
Starlink is better than where we ever were before
and it will continue to get better,
but it's still not directly connected
into these cloud ecosystems.
So a lot more of what we need has to come with us.
And we need to disperse and share that
in such a way that copes with not having big bandwidth
to share that within a battle space.
So we're going to have to work out ways of,
how do we share this evolving model
from these different sensors in such a way that everybody
can learn from that?
An example of that would be that you have, maybe you have a tank, a tank's coming from
an area of a forest and a drone picks it up and it will take a photo of that, potentially
it will take a video of that.
And then another drone sees that tank.
Now, if you haven't understood it's the same tank,
because the tank may look very similar, frankly.
So if you don't know it's the same tank,
you now think you've got two tanks.
So somehow the AI actually has got to be able
to put very small markers,
and they're using technology around putting it down
at the pixel level markers to understand the difference
between these objects, and ensure then as that information passes
between the different sensor fusions,
we know it's only one tank.
Because if suddenly we've got two tanks, three tanks,
four tanks, we think we've got a much bigger problem
than one tank.
So actually that's a really interesting example
of where we've got to be able to get better.
That doesn't particularly need any additional learning.
That type of technology is available.
It's just using vision learning, but there will be additional pieces that need
to be generated in real time during that exercise or that battle that you potentially haven't got
the time to send back to your big DGX server, you know, sitting somewhere, because you
can't move the data or you haven't got time to make those decisions.
It's real time decisions we need to make. So I think that's interesting on that side.
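The tank deduplication problem Alistair describes earlier, recognizing that two drone sightings are the same vehicle rather than two vehicles, is commonly approached with appearance embeddings: each detection is reduced to a feature vector, and sightings whose vectors are close enough are merged into one track. A minimal sketch of that matching logic (the vectors and threshold here are illustrative, not Antillion's actual pipeline):

```python
import math

def cosine(a, b):
    # Cosine similarity between two appearance-embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def merge_sightings(sightings, threshold=0.95):
    """Group detections whose embeddings are near-identical,
    so two drones seeing the same tank count it once."""
    tracks = []  # one representative embedding per distinct object
    for emb in sightings:
        if not any(cosine(emb, t) >= threshold for t in tracks):
            tracks.append(emb)
    return tracks

# Two drones report the same tank (near-identical embeddings),
# a third reports a genuinely different vehicle.
drone_a = [0.9, 0.1, 0.4]
drone_b = [0.89, 0.11, 0.41]   # same tank, slightly different view
drone_c = [0.1, 0.9, 0.2]      # different vehicle
distinct = merge_sightings([drone_a, drone_b, drone_c])
print(len(distinct))  # 2 distinct objects, not 3
```

In a real system the embeddings would come from a vision model (the "pixel-level markers" mentioned above), and the merge would also weigh position and time, but the shape of the decision is the same.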
I think what we're seeing, though, like lots of people,
I think most people think they need more GPU
than they probably do,
but I don't think that's unique to our industry.
I think that's been great marketing
from the likes of Nvidia and people to tell us
we all need these things.
And actually, I think we've all forgotten
about maybe there are other ways.
The CPU is still in there.
And sometimes a CPU can be equally useful as a GPU for certain functions; a GPU
doesn't always make it better, you know. And I think actually over time, some
models are actually aligning themselves to say, well, you know what, if you've
got a really good CPU, we can take some of those cores and actually you don't
need a GPU to do certain functions.
So I think we'd need to educate
ourselves as engineers but also educate then our customers downstream to make it clear about when a
GPU or a DPU or an MPU or any of these additional processing units are valid and when actually
keeping it simple is the better option. Because the simpler we keep it, A, it's easier to maintain,
it should be cheaper to buy,
and it's easier to buy up front because you're not on a supply chain
that includes so many different components that, you know,
if you want a GPU, Elon Musk has probably bought it already
and put it into one of his big data centers.
So we're not going to be able to get that for a long time.
And if your system depends on it,
you can't then get that and deliver it out quick enough.
So I think that education is going to be important.
And I think actually that the software engineers will start writing software
that needs less GPU over time for certain functions.
Some things will absolutely need it.
Those core big things that happen, you know, right at the beginning of model
learning, absolutely, you know, you're going to need your Blackwells
and all of that, but I think some things will change.
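The point about software needing less GPU over time is often tied to model quantization: shrinking weights from 32-bit floats to 8-bit integers cuts memory and bandwidth enough that CPU cores can serve many inference workloads. A toy sketch of symmetric int8 quantization (illustrative only, not any particular framework's scheme):

```python
def quantize_int8(weights):
    # Symmetric int8 quantization: map [-max|w|, +max|w|] onto [-127, 127].
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the int8 codes.
    return [x * scale for x in q]

weights = [0.82, -0.34, 0.333, -1.27, 0.6]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# int8 storage is 4x smaller than float32, and the worst-case
# rounding error stays below one quantization step (the scale).
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(q)
print(max_err < scale)  # True
```

Real deployments quantize per-channel and calibrate activations too, but this is the core trade that lets a strong CPU handle inference jobs that would otherwise be reflexively thrown at a GPU.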
So definitely, education is key.
And I think that's key for us as well, actually,
because we can sometimes get a bit like,
oh, wow, let's put a GPU in this.
It'd be great.
Let's just put AI in the title.
And I think we need to also stop doing that
a little bit collectively.
We are seeing storage.
Storage is a big thing.
It's why we partnered so well with Solidigm.
Storage is getting much more prevalent.
You take that sensor fusion challenge earlier, the amount of information we need to store,
but not only store, we need to be able to write it and read it very quickly.
Now we've probably pushed our customer far quicker than they had expected.
I mean, they've been on SCSI, SATA, you know, and the fact is we moved into NVMe,
so all of our platforms are now only NVMe and generally QLC as well, because we want high density, you know.
So we want to get as much into our systems as we possibly can at such a high rate.
So, you know, PCIe 4 at the moment, PCIe 5, you know, coming out. We'll keep moving forward the best we can, meaning that when they do hit those challenges,
then the stumbling block won't be the hardware that they've either got or they've bought into our ecosystem.
It will be the way the software is written to make sure they take the best they can of that.
But we are seeing a demand for more storage in a smaller space.
And actually, that's always been our challenge, our challenge for many years,
leading up to the revolution with the E1.S, the E1.L, E3 I suppose,
but we're more of an E1 house, you know, we love the E1.
It fits our vision so much more than the standard
two-and-a-half-inch concept.
We can talk about why that is the case.
But I mean, we can now get, with the 120-terabyte drives, in a half-19-inch system, and we do a lot of half-19-inch, half-a-rack width, we can get 10 of those E1.Ls in there. So we're over a petabyte of storage in
that. It's amazing what we can now do. And I imagine that we could probably sit here in 12 months' time
and it'll be even more. I mean, it doesn't seem that there's an end to it, there obviously
is an end, like Moore's law when it came to CPUs, but it seems that we're on a journey that
doesn't seem yet to be ending in terms of the amount of storage that the likes of Solidigm
can now fit into the same physical object. But actually what's more
important to me, not only is it physically not changing, which is vital because we want everything
to be as small and as portable as possible, but actually you don't increase the amount of power I
need to drive it. In fact, sometimes you give me less power, which is something that you do as well
as AMD. So if you look at the AMD processors, if you go through the
7000 series and then the second series, the third series, the fourth and the fifth, actually,
when they increase my core count, I don't get a big penalty for power, which is amazing to me, because that means essentially I can give my customer downstream something more performant,
but not then say, just to let you know,
now we need a power station behind you to run it.
I can say, actually, you can do that with less power
and I'm going to give you more cores.
So yeah, to us, the AMD and Solidigm marriage
where we use them is amazing
because we tend to be able to go forward with capability,
but either stand still with power
or sometimes even go back.
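A quick back-of-envelope check on the capacity figure above, assuming the "120-terabyte" E1.L drives are 122.88 TB class parts and ten fit in a half-width chassis (both numbers as described, not a product spec):

```python
# Rough capacity math for a half-rack-width edge chassis.
drive_tb = 122.88          # "120-terabyte-class" E1.L SSD (assumed capacity)
drives_per_chassis = 10    # ten E1.L slots, as described above

total_tb = drive_tb * drives_per_chassis
total_pb = total_tb / 1000  # decimal terabytes to petabytes

# Just over a petabyte in a half-19-inch box.
print(f"{total_tb:.1f} TB = {total_pb:.2f} PB per chassis")
```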
And power to us is heat.
And we need to get rid of that heat.
We do air-cooled systems, like the traditional ones.
We do them in a slightly different way.
Hence, we do a lot of half 19-inch
even in the air-cooled fabric,
so you can build interesting stuff.
But we also do a lot where we have no air cooling.
So it's all conduction-cooled or liquid-cooled
or immersion-cooled technologies. And that's when we have a big challenge because we have got to
get rid of that heat. You know, the movement of heat through those mediums is more challenging
than it is if you just put a load of fans over it. You know even though fans are not efficient,
air is not an efficient medium for conducting heat, you can still
get away with just throwing a lot of fans at it and eventually the heat will go out,
I mean, assuming the ambient's alright. But when you literally condense it down, so we've
been doing conduction cooling with Solidigm drives ever since the E1 first came out. Within
the first week of us getting it, the first thing we did was rip the heat sink off it, put it inside one
of our conduction-cooled systems. And in the defense world, there's a concept where you can pull the
heat out through a VPX system, where you use a way to clamp it in, the copper pulls the heat out,
and then that then is where the heat goes out. And we've used that method in our system, in our conduction systems for five years now, six years now, with amazing
effect. I mean, it'd be nice not to have to rip the whole thing to pieces, but it's the way it
works for us and means that in something, you know, about a centimeter high,
we can get six disks in that fabric.
And they can pull it out
and they can do what they want with it then.
And it becomes effective.
It's a bit like having a Nintendo cartridge.
They have a cartridge of their information,
which then if they need to run very quickly,
they can pull out the data and they can run.
If they need to transport that, then, and it's secret,
they can transport in certain ways that is easier than if it's
50 disks that are just thrown in on the floor.
And you've got to work out, well, which disk
went where, which disk is which,
how am I going to rebuild my system
if I don't know? We deal with a lot of those challenges.
And the E1.S has been the enabler.
We were using M.2s before.
We never really liked the M.2s. We liked the
mechanical format, but hated the performance, and in general they were consumer really,
not really enterprise. E1 obviously gave us the best of both worlds, fully enterprise
and in a format that meant we could do the same job. And E1.L, yeah, I mean, that's just
amazing, isn't it? I mean, the amount of storage on an E1.L,
I know the ruler from Intel,
from previous years, you know, never quite made it.
And I still don't know if the E1 will become as prevalent
as others, because it's a challenge:
it's 300-and-something millimeters deep.
So it makes servers bigger and so forth.
We do it differently though.
We don't see a server, the inside of a server,
as a one-dimensional space.
Most companies see it as a single plane; even in a 2U
system, it's just effectively the same plane,
but with higher objects.
We actually see it as a full three-dimensional space.
We're manipulating objects at different planes
and at different
levels and at different locations to see how much we can cram into the smaller space.
Density is another phrase we use a lot. We're trying to increase the density of everything
in our systems. That's not just storage with Solidigm, but it's everything. It's the amount
of power we can get to it. It's the number of cores we can get, the amount of memory.
So density is critical.
The other interesting thing,
we've been playing around with the concept
of disaggregated systems for six, seven years plus.
It's become more of a big thing now.
Unfortunately, from our point of view,
when we liked the idea,
A, we weren't clever enough to do the inventions
that needed to happen.
We're good at our stuff, but we're nowhere near as clever
as these amazing people who can do these inventions.
So we had the ideas, but couldn't deliver on it
because the things didn't exist.
We worked with Fujitsu and NEC for a while
on using light to move PCIe.
We did a really great piece of work where,
with Samtec, who had a vision of using light to move PCIe, we could disaggregate the PCIe bus remotely.
And what that meant is that we weren't limited then by the density of a single object.
We could increase our density across multiple objects and disaggregate the functions we
needed.
That's become a lot easier now because the likes of CXL have come along.
I mean, I say it's easier. CXL is still quite complicated. It's still a maturing technology.
CXL 1.1, I don't think, has really proliferated as much as people wanted it to.
I think CXL 2.0 is starting to deliver what most people have wanted it to.
But for us, the ability to disaggregate, say, memory, to increase the memory footprint
but without increasing the physicality
of the single object and saying to the customer,
if you do need more memory, yes, there
is an overhead of a bigger subsystem,
but you can make that choice when you need it.
You don't have to compromise at the beginning.
So the reason we did this was because we didn't want people
to have to say, well, my object needs
to be this big because I might want a GPU and I may want more memory and I may
want this.
We wanted to say, well, if you don't know you need it, let's make it as small as possible
and let's allow you to add those capabilities to only those systems at the moment you needed
to deliver it.
And that's really vital for our world because if they don't have to take it, why give it to them?
Because if they take it, they've then got to power it.
They've got to maintain it.
They've got to look after it.
They then also have to potentially buy the same systems
with all of this expensive capability upfront,
when actually they may only need 10 percent of their fleet
to be able to be upgraded to that level.
Now, it probably sounds a bit stupid.
We should just be saying, hey, buy the most expensive system.
That kind of isn't us.
We would much rather they bought the right system
and had the ability to grow or even contract
when they needed to, because actually, it's
not like in a data center where you can just throw it at it.
They literally have to sometimes put it in their backpack
and carry it.
And if they don't have to carry it,
why would I give it to them to force them
to have to go down that route?
So storage will be big for us there as well.
I mean, I know storage disaggregation has been possible
for a long time with NVMe over Fabrics.
But again, I think with CXL,
it becomes an interesting, standardized approach for us:
the disaggregation of everything on that sort of PCIe 5
and 6 bus, we can start doing.
For us, the two big things would be memory and storage,
and maybe GPUs as well, moving those.
So yeah, sorry.
Very long answer to a question you asked earlier, but anyway.
We've covered quite a lot of ground there, actually.
I will point out that season four of Utilizing Tech focused on CXL,
and one of the things that we focused on in there is exactly what you're talking about.
It wasn't about necessarily disaggregation as the goal.
It was about right sizing as the goal, and about flexibility,
and making systems that were not bound by the strict structures of the size of a DIMM.
It's about making things right. And to me, I think that's really kind of the summary of this whole conversation.
It's interesting, isn't it, that
without constraints, people tend to just expand like crazy.
But with constraints, and that's what makes the edge interesting to me,
is that it's a constrained system.
You cannot go beyond this physical size,
or you can't go beyond this power footprint,
or you can't go beyond this cooling,
or we have to make sure that it's rugged,
or we have to make sure that it's able to handle disrupted communications.
Oh, and disrupted communications means we have to do processing locally.
And that means we have to...
And there's this whole chain of thought that happens when you are constrained.
That doesn't happen when you're in sort of a data center or a cloud environment
where you can have as much of anything at any time within reason.
But even now, I mean, look at AI,
it's not even within reason.
It's just, you know, you can have as much of whatever you want,
you know?
When it comes to the edge,
and especially in the applications you're describing,
these constraints make us more creative.
They make us come up with clever, novel uses
for technology that we already have,
like the CPUs that we already have,
and new technology as it comes out, it makes us see new possibilities. And that's what's inspiring,
I think, about this whole story is that we can do more with less if only we just tried. Would you
agree with me about that? I would, yeah, I think that the idea of constraint is amazing.
I love that phraseology you've come up with actually.
And I think that's what drives me actually.
I think that the idea of not having unlimited options and having some constraints actually makes it more interesting.
I don't know why, that's a part of my psychology maybe.
I don't know, but that idea of having restrictions and having to work around them is fascinating.
And you think about where potentially we
want to go as a world.
I mean, not in my lifetime, obviously, or any of us,
probably.
But when we look to go to Mars or other places,
those challenges we've just spoken about,
those constraints are absolutely going to be in there.
When we start going interplanetary, then those challenges are going to be there. Even on this world,
we are going to hit these challenges. We know that. Not maybe within the lifetimes of any of us on
this call, but there will always be, I think there are going to be more constraints. I don't know
where the peak of unconstraintability is in this world of AI, but at some point we will flip over it
and we will be constrained by the amount of power
and so forth.
And I think, yeah, I think Edge,
I think a lot of other industries could learn from Edge,
even those ones that at the moment are unconstrained.
I think they could learn.
Yeah, I couldn't agree more, Alistair.
I feel like, and Stephen, you said this too, like,
is the industry really trying hard enough? Right? Have we really looked at cooling storage?
We're starting to. We've got some new designs that we just placed out onto the market and everyone's
like, well, what is that? That's cold plate? Wait, it's cooled on all four sides. It's not just with
the liquid tube. Wait, it's not dunked in anything. It's not dielectric cooling. What is it, right?
So it's like, I mean, to your point, Alistair,
I think your company and your vision
and what you're doing is so ahead of its time, right?
Because you aren't just looking
at cooling the GPU, the CPU.
You're looking at the overall architecture.
You're looking at how do I make this more seamless,
portable, simple, and again, that human first focus, which I personally really, really appreciate.
So I can't wait to learn more.
I feel like we could talk all day about this and still be really interesting.
So I actually think we should have you back on at a later date, Alistair, and talk more about
the stuff you're doing with PKIO
and some of the other organizations, right?
Because we talked a lot about military today
and we talked a little bit about
the other use cases you guys work in,
but it would be interesting to give our audience
an understanding of how you're doing this differently
for others.
So this has been phenomenal.
I really appreciate your insight.
It's been a great chat.
I've really enjoyed it.
Really, really good.
Thank you very much.
Yeah, thanks.
And I have to say too, this whole story of constraints
and incredible challenges and how people meet challenges
at the edge.
I mean, this is what we've been talking about at Edge Field Day
the whole time through.
I just love this discussion.
We've talked to a bunch of companies in this space
that are doing things like this that are kind of rising
to the challenges that are placed on them.
It's incredible.
So thank you very much, Alistair, for your time.
Thank you so much for joining us.
Before we go, I'm sure that people are gonna wanna
continue the conversation with you.
Where can they connect with you?
They can find me on LinkedIn.
I'm not very good with the whole social thing.
Other people are.
I'm probably from that generation.
Didn't quite embrace it enough as I should have.
But LinkedIn, you can find me there.
You can go to our website, antillion.com, and reach out to us.
Obviously, I'm doing a few podcasts with Solidigm at the moment,
which is alien to me, doing these things, but I'm enjoying them, actually.
I'm not really into the self-promotion discussing thing,
but I love these types of ideas of just talking, which is great.
So, yeah, you know, there are ways to get hold of me,
but I'm not prevalent on the Twitters and, you know,
the other platforms.
Yeah, I'm not great in those areas,
but you can get hold of me.
Well, you wouldn't know it.
Excellent, excellent conversation.
And Jeniece, you know, thanks for joining us here again.
What's new with Solidigm?
Thank you for having me again, Stephen.
This is always fun.
You can find out more about what we're up to on
solidigm.com forward slash AI.
And you might see another sneak peek of what we're
doing with Antillion there as well.
Excellent.
I would love to see that.
And as for me, you know, we've been really
enjoying this season of Utilizing Tech.
You will find show notes, more episodes, and so on
if you just go to utilizingtech.com.
You'll also find us in your favorite podcast applications.
You'll find us on YouTube.
We would love it if you would leave us a comment,
leave us a rating, leave us a review.
It really helps us to direct where
we go with this in the future. This podcast, as we noted,
was brought to you by Solidigm, as well as Tech Field Day, which is part of The Futurum Group.
We can also be found on the socials. Find us on X/Twitter, Bluesky, and Mastodon at @UtilizingTech.
We've got a new episode coming next week. Listen to the whole season.
We'd love to hear from you. Thanks for joining us, and we'll see you next week.