Closing Bell - Manifest Space: Hyperscalers for the Edge, Movable Data Centers, & The AI Arms Race with Armada CEO Dan Wright 8/21/25
Episode Date: August 21, 2025. Armada is a startup focused on movable data center systems for AI computing in remote locations. It all started with a close collaboration with SpaceX, using the Starlink broadband network for connectivity. Co-founder & CEO Dan Wright joins Morgan Brennan to discuss edge computing, partnerships with energy producers and the AI arms race.
Transcript
Armada makes movable data centers.
The startup's modular systems are used for AI computing in remote locations.
Some as small as a suitcase, others the size of shipping containers.
CEO and co-founder Dan Wright calls Armada the, quote, hyperscaler for the edge.
Today is where there's a ton of data being generated on oil rigs, mines, battleships:
terabytes of data every day on every one of those assets.
And there's no good way to process that data in a way that's real-time,
secure at the edge, and so we're enabling that.
Armada counts Founders Fund, Lux Capital, and Microsoft among its investors.
It recently raised another $131 million.
It all started, though, with a close collaboration with SpaceX,
using the Starlink broadband network for connectivity.
Starlink, most people don't know, didn't exist until it launched in public beta in November of 2020,
and now they're in well over 140 countries all over the world.
So increasingly, we're seeing that used as a primary,
but then what a lot of our customers want is they want to optimize multiple forms of connectivity.
And so we allow multiple types of LEO and GEO satellite connectivity, as well as we can use fiber.
On this episode, Armada's Dan Wright on edge computing, partnerships with energy producers,
and the AI arms race between China and the U.S.
I'm Morgan Brennan, and this is Manifest Space.
Joining me now, Armada CEO and co-founder Dan Wright. Dan, it's great to speak with you today.
Thanks for having me, Morgan.
So let's start with, let's start with the basics.
What is Armada? What are you doing?
Armada is the hyperscaler for the edge.
So we are building full stack data centers that can be rapidly deployed anywhere in the world,
co-located with not just data, but also energy, to unleash American AI dominance and make sure that we stay ahead of China.
So when we talk about computing at the edge, you really mean the edge.
Like, what does that look like?
What does it entail?
What are some examples?
Yeah, what's really interesting is when most people talk about the edge,
they mean the edge of the cloud providers' networks, which is only about 30% of the world.
If you think about the cloud, it was built for a world where all of the data was in major metropolitan centers,
but that's not the case today.
So we're building the hyperscaler, the cloud, for the 70% of the world that is not covered today
and making AI available to the entire world.
And you have investors that include Microsoft.
So you are working with some of those more traditional hyperscalers to expand the reach.
Exactly.
What we're doing is very complementary to all the other hyperscalers,
but we're just extending the same benefits of the cloud to this 70% of the world that is currently not covered.
And what's interesting is that today is where there's a ton of data being generated
on oil rigs, mines, battleships: terabytes of data every day on every one of those
assets. And there's no good way to process that data in a way that's real-time, secure at the
edge. And so we're enabling that. All right. So when we talk about something like industrial
AI, for example, this is what you're targeting. Exactly. Industrials as well as defense. And then we
also do a lot of work for emergency response as well. So with industrials, think about large
energy companies. Many of them have oil rigs offshore in the Gulf, in, you know, the North Sea,
let's say, in rural parts of Canada and the Middle East and lots of other parts of the world,
Latin America. And each one of those rigs generates one to five terabytes of data every day.
And we're enabling all that data to be processed at the edge. And then you can send the metadata
back to the cloud using Starlink or any other source of connectivity that you have.
One to five terabytes per day. That's an insane amount of data.
It's a huge amount of data. And most people don't realize that data is actually not being used
in any sort of real-time way today; if anything is being done with that
data, the latency is often three weeks to a month. And when you're talking about the age of AI that we're in now,
three weeks to a month isn't going to cut it. You really need to be able to run applications
in real-time. And then also as you move to robotics, running any sort of AI model, or doing any
sort of automation at the edge, you're going to need to have the compute there on site to enable all
of that. And so you can think about us as like a distributed brain that enables those types of
technologies at all of these sites, both industrial and then in the public sector, again, think about
battlefields, think about ships in the middle of the ocean. We're enabling intelligence at the edge
using all of the data that's generated there. Yeah, so we're talking about dual-use capabilities
here. How do you do it? How do you operate at the edge? So we do it full stack. That's something
that is very unique about Armada. When you think about Armada, we call it the hyperscaler for the
edge because the closest analogy is one of the hyperscalers. Think about what Microsoft,
Google, AWS did 20 years ago.
They built full stack, the hardware with these large hyperscale data centers,
the software with Azure or AWS, and then the AI piece as well.
We're doing the exact same thing, and that is unique.
Nobody is building full stack at the edge, but we are at Armada.
We do the hardware, we do the software, we do the AI,
and it's also a very open architecture so that we can embrace any type of technology
that our customers have already invested
in, whether that is hardware; we're interoperable with Azure Stack, for example.
We can use any connectivity they want, any power they want, and then any AI applications,
whether it's their own AI application, our AI applications, because we have some first-party
AI applications, or third-party AI applications that they want to run at the edge.
We enable all of that.
And we do it in a way that it is totally plug and play, turnkey, because that's what people
want, right?
The people that are on these sites, they're not IT people, they're people that are just
trying to do a job better, and we're enabling them to do their job better, and we think that that
is critical. You can't really do your job in the age of AI unless you're utilizing all the tools
at your disposal, including the latest models, the latest hardware and software, and then also all of
your data. How are your modular data centers being connected? What does the connectivity look like?
I know you, last time you and I spoke, Starlink was in the mix. Is that still sort of the backbone here?
Yeah, so we've been working with Starlink since the start of
the company. What's really interesting is at the time that we started the company, Starlink was being
used primarily as backup connectivity. And increasingly what has happened, and this is not surprising for
people who are familiar with SpaceX, but the technology has advanced extremely rapidly. The service
is getting better because the satellites, the birds themselves are getting better. The terminals on the
ground are getting better. And then the service itself is getting better very, very quickly. Starlink,
most people don't know, didn't exist until it launched in public beta in November of 2020,
and now they're in well over 140 countries all over the world. So increasingly we're seeing
that used as a primary, but then what a lot of our customers want is they want to optimize
multiple forms of connectivity. And so we allow multiple types of LEO and GEO satellite connectivity,
as well as we can use fiber. We support SD-WAN devices in our platform. And today we're managing over
10,000 of these connected assets in our platform and helping them optimize for performance as well
as cost.
That's interesting.
So when you hear about these investments that Jensen Huang and Nvidia have been making
in all kinds of space and space infrastructure companies, this is one of those examples of
how space-based connectivity in this particular case is enabling more of this AI era to take place
on the earth.
Exactly.
I mean, you think about it, November of 2020 being when Starlink launched in public beta,
that is not that long ago.
They spent the first couple years more focused on consumer and then went into enterprise
and government.
But now this technology is available all over the world.
And we think over the next five years, it is truly going to cover the entire world.
And so what you need is the infrastructure to complement the connectivity.
And that's what Armada does faster, better, more cheaply, you know, than anybody else in the market.
Now, you also talked about the importance of power sources.
So what does that look like?
How are you targeting locations and how are you tapping into those power sources?
Yeah, great, great question.
So we actually recently released a white paper.
You know, we just raised $131 million in a strategic funding round with top venture investors:
Founders Fund, Lux, et cetera.
Microsoft, who you mentioned invested again in this round.
They invested in our previous round as well.
So working really closely with Microsoft and then a number of strategic investors like Gladebrook,
Veritin, Pine Grove. And now what we're doing, and we did this white paper, is talking about
how we can unleash American energy and AI dominance. And I say energy and AI dominance because the two
are totally linked. With energy, we have about six gigawatts of stranded energy, largely flare gas,
stranded natural gas. What we're doing is we're working with developers that have land and
energy all over the country, as well as in allied parts of the world, to unleash all of that
energy for AI. An example of this, we're partnering with Bakken Energy, which has a large amount
of stranded land and energy assets, natural gas assets in North Dakota. We're doing the same thing
with Fidelis, which has a large amount of land and energy in West Virginia, as well as Louisiana,
as well as some allied territories. So it's a similar thing where these organizations,
they've done the hard work of getting the land, getting access to power,
and making it available for AI.
And then what we're doing is we're deploying the infrastructure and sort of optimizing that
so that we can get the most throughput when it comes to AI and also make sure that as
we're building, we're building in a way that is very flexible because as we know, the chips
and the cooling are evolving on a monthly basis at this point.
So you want to make sure that you're not only building, but you're building the right thing
at the right time to get the maximum throughput
when it comes to AI.
So you just talked about it.
You just raised another funding round.
What's next for the company?
How quickly are you growing?
We're scaling rapidly.
I mentioned that we have over 10,000 connected assets
in the platform.
We're also operating in over 70 countries
throughout the world.
And what we want to do is scale extremely rapidly
because the AI race that is happening
in both the public sector
and the private sector
between the U.S. and China is happening now.
And we believe that five years from now, you're going to end up in one of two worlds.
In one world, you know, the world's running on the Chinese AI stack, and that has massive implications,
not just for national defense, but also for the economy.
And so we are very focused on making sure that five years from now you live in the world
that we want to live in, which is that the world is running on the American AI stack.
And what that means is we have to continue to invest rapidly in our R&D, have very rapid R&D loops.
So we get feedback from customers to make
sure that our technology is better while also being cost competitive with China.
You know, they are backing Huawei, and they've recently been in the news exporting their
technology to Malaysia and other parts of the world, and they're getting very aggressive
on making sure that they can do that very quickly.
And so we need to beat them to the punch and make sure that, again, as Starlink and other
forms of connectivity are rolling out throughout the world,
and all this energy is being made available throughout the world,
we're a first mover in terms of the cloud, the full-stack infrastructure for AI at the edge.
What you're touching on right now is economic policy that China, quite frankly, has dominated
with emerging technologies over the last couple of decades, and that's this notion of diffusion.
Do you see this administration, the Trump administration, and just policymakers in general,
leaning in to enabling that export and that creation of an American-dominated AI tech stack?
Yeah, I do. I was recently in D.C. for the Winning the AI Race summit, where the president announced the American AI Action Plan. And if you read it, it's very clear that there's a focus on not just the deployment of this technology domestically, although there's certainly that, and not just making sure that we have the best technology at every layer of the stack, from the hardware to the software, the chips, the cooling, the models themselves, but also making sure
that we are exporting our stack, full stack faster than any other company, sorry, any other
country in the world, obviously including China. And so I think just that focus on not just
making sure that we can more rapidly deploy this technology, have really fast feedback loops,
but then also that we are exporting it, that we're making it easy for companies to do that
and beat China, is good, because the reality is, and we see this with our customers every day,
we work with some of the largest energy companies in the world.
Like we're, you know, we had an announcement earlier this year with Aramco and Microsoft as an example,
but we're also working with some of the largest mining companies, manufacturing companies.
What we see is they're either going to work with us or if we're not there, they're going to work with China.
They're going to work with Huawei.
And so we need to make sure that we are not only there, but we have technology that is the best in the world and that is also cost competitive with China.
so that when they make that decision about who they're going to partner with, they partner with us.
I do want to go back to the public sector piece of this.
Are you working directly with the Pentagon and the different services with your technology?
We are.
Yeah, we are working with the U.S. Navy today with Navy Fourth Fleet.
We've deployed our technology with them.
We are going through testing exercises with them.
And if you think about those types of scenarios, increasingly and going forward, conflicts are not going to be resolved
as much with people at the front lines, but increasingly with drones and autonomous technology,
all of those things require massive amounts of data that you have to process locally,
and that needs to be processed in a completely secure way.
And so we're focused on being that sort of forward-deployed infrastructure for all of that
data processing at the edge.
And one thing that's really interesting about the work we're doing with the Navy is that each
one of these, we call them Galleons, these modular data centers, you can think about it like
a node in a distributed private cloud.
And so if one of those is ever destroyed, you can automatically back up to another node.
And so it's not only infrastructure that enables very high power, secure computing at
the edge, but it's also extremely resilient infrastructure.
And given what's going on geopolitically and where things might go in the future, that's
going to become increasingly important, again, given that there are massive amounts of data
being generated by drones and autonomous technology,
and that's only going to accelerate.
Plans in the future to IPO?
How do you think longer term about the trajectory of the company?
I mean, when I look at this,
I look at it in terms of a generational shift.
I think what we're going through right now
is kind of similar to what the cloud went through 20 years ago
or the advent of the internet, but potentially it's even bigger.
You know, AI is changing and will change every aspect of how we live and work, how we, you know, resolve conflicts.
And what you're going to need is you're going to need infrastructure in this 70% of the world that enables that AI to happen and happen in a way that is both cost effective and flexible.
And so when I think about what Armada should be, we want to, again, be the hyperscaler for the edge.
And if we do that, I don't think so much about IPOs or any sort of event.
I think about creating a company that can stand the test of time and solve really big
important problems for the country and for the world.
We're a mission-driven, American company at Armada, and we think more about the
mission.
The mission of our company is to bridge the digital divide.
And so our work's not done regardless of any financing event until we complete that mission.
Okay.
Dan Wright of Armada.
Great to speak with you today.
Thank you.
Thanks so much, Morgan.
Great speaking with you.
That does it for this episode of Manifest Space.
Make sure you never miss a launch
by following us wherever you get your podcasts
and by watching our coverage on Closing Bell: Overtime.
I'm Morgan Brennan.