No Priors: Artificial Intelligence | Technology | Startups - How Capital is Powering the AI Infrastructure Buildout with Magnetar Capital Managing Director Neil Tiwari
Episode Date: February 26, 2026

By the end of 2026, AI capital expenditure is projected to hit nearly $700 billion. The question isn’t who has the best model, but who has the most creative financing to build out AI infrastructure ...and beyond. Sarah Guo is joined by Neil Tiwari, Managing Director at Magnetar Capital, a financial innovator helping the AI industry scale from billions to trillions of dollars in CapEx. Neil explains some of the debt structures used to finance massive GPU clusters, who is taking the risk, and how the industry is maturing. Sarah and Neil also discuss how power distribution, energy storage, and physical materials like steel are the bottlenecks of the AI industry. Plus, Neil gives his take on the future of inference-optimized clouds, and why the market shift away from software and into infrastructure might be an overreaction.

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil

Chapters:
00:00 – Cold Open
00:05 – Neil Tiwari Introduction
00:26 – Magnetar’s Story
01:28 – Why CoreWeave Helped Magnetar Win
06:15 – Scaling CapEx Efficiently
09:02 – Debunking GPU Collateral Risk
11:42 – How Deal Structures Evolve
13:01 – What Bottlenecks Buildout
15:28 – Circular Financing Critiques
17:35 – The Shift from Training to Inference Workloads
23:10 – AI Factories
24:12 – Constraints of the Current Power Grid
28:27 – Sovereign Compute Buildouts
29:54 – Physical AI Capital Needs
32:48 – The Capital Rotation Away from SaaS
36:04 – Conclusion
Transcript
Hi, listeners, welcome back to No Priors.
Today I'm here with Neil Tiwari of Magnetar Capital.
This is a $22 billion alternative asset manager at the center of the AI compute buildup.
We talk about the financial innovation, depreciation of GPUs, and what's next in AI compute.
Welcome. Thanks so much for doing this, Neil.
Absolutely.
You know, really happy to be here.
So you are leading AI infrastructure at Magnetar.
you're at the center of the buildout enabling it, financing it.
For any of our listeners who haven't heard,
can you just explain a little bit what Magnetar is?
Sure.
So Magnetar's been around for, actually, this is our 20th year.
We're an alternative asset manager,
and that can mean a lot of different things.
But we have three primary strategies.
The first one is private credit.
The second one is a venture strategy,
and the third is more of a systematic or quantitative-focused public strategy as well.
And so I think, you know, when people look at us and, you know, why are we here in this moment, especially on building out AI infrastructure, I think a lot of it has to do with kind of our unique lens on helping to build capital intensive businesses and using creative financing, whether it's venture or other structures with unique elements. And I think we're going to talk a lot about that. But to build out and optimize the balance sheets for these capital intensive businesses.
So I remember hearing about you guys originally.
You're the first investor, I think we've ever had on the podcast.
I'm excited about this.
I remember hearing about you and Magnetar initially, and I was like, who's this big owner of CoreWeave?
And also, you know, helping OpenAI with some of their early buildouts.
When did you guys first start looking at the problem and thinking about how to solve it?
Yeah, so we actually, you know, stumbled across the compute problem before it was compute.
We met CoreWeave back in 2021, and that was when they were actually transitioning from mining Ethereum into high-performance compute.
And at that time, it was using the GPU as an instrument to mine cryptocurrencies.
And interestingly, that same instrument could be used for high-performance computing applications.
And the first one was visual effects.
So think of things like movies, Marvel movies, and things like that.
And so they were transitioning at that point between crypto mining into the first kind of high-performance compute use case.
And this is all before AI.
And so we made our first investment before the AI trade started.
But we added a lot of optionality where, you know, we could envision a world where the GPU could be used for a lot of different high-performance kind of computing applications.
I think, you know, AI was on the radar, machine learning was on the radar for us.
But I wouldn't say that we could foresee everything that happened.
We just happened to be, you know, at the right place at the right time.
And we continue to double down as the company progressed and started, you know, shifting into more workloads that were machine learning and kind of AI training base.
Did you have like an existing significant data center investing footprint?
No.
I mean, I think, you know, interestingly at Magnetar, we have invested
across asset classes. So we've done a lot of property investing, real estate investing as an example,
investing in energy. We had an energy business historically. And so a lot of the elements for,
you know, what constitutes a data center, power, energy, land, real estate. You know, we had a lot
of the background in those spaces. I think we were new to compute, right? Like that was a new sector
for us. And so kind of those two worlds merging, you know, we obviously, you know, came up on the
curve on the compute side, but we had a lot of, you know, background on the elements that constitute
what it means to build a cloud. So you guys just really, you were in this company, you saw the
demand and you said, like, it's going to grow and we're going to make this a big part of our business.
Exactly. I think, you know, what was interesting was we made our first investment in 2021.
And then about a year later, we continued to see expansion of use cases for what, at that time,
was called high-performance compute. And then it was kind of towards the end of '22, the whole
AI discussion started. And as we entered 2023, CoreWeave started to train models for OpenAI.
And that's when things really started growing because the sheer amount of compute that was needed
to train an LLM, this was like the first time it had ever been done. And what was interesting was
what kind of allowed them to take advantage of that opportunity was the historical kind of
backgrounds of a lot of the founders were in energy asset management. And when you fast forward to
today and you look in like what constitutes your ability to build a GPU cloud, it's your ability
to manage these highly complex assets. And it fundamentally comes down to access to power and energy.
And so they had these elements with them. They obviously brought on a lot of talent on the cloud
side and to put all these together. And at that moment, it allowed them to, you know, build
very large-scale, reliable clusters for OpenAI and obviously many other customers since then.
And I think the last comment I'll make is what really allowed them to kind of win this market early on
was focus on two things. It was scale and reliability. And I think those were the two things that
are really difficult for a lot of the new entrants since then, because scale has to do with your
access to capital, your access to energy, power data center. And then reliability really had
to do with their ability to manage a giant fleet of GPUs, which is actually quite complicated,
whether it's reliability from GPU failures or software challenges, building a fleet that can
healthily be online all the time at 99.9% reliability is incredibly difficult. And that's
something that they had started back in 2017-2018 timeframe. And they were at the right moment
at the right place with the right technology stack to really build the optimal cloud for that moment.
I've definitely experienced that with our portfolio of companies that are building large
training clusters.
CoreWeave has a reputation for reliability that not everyone has reached.
Can you just help characterize if you fast forward like two and a half, three years now?
Like what is the scale of the problem today?
Yeah.
So if you look at kind of CapEx, right, let's start with that.
So CapEx for AI compute and infrastructure in 2026, you know, at least from the hyperscalers,
is projected to be between $660 and $690 billion.
And over the next several years, you know, that scales to trillions of dollars, right?
And so the scale of the problem is how do you build, you know, that size of CAPEX efficiently?
And I think a lot of that has to do with not only, you know, your ability to have access to those core elements, energy, power, and your ability to
have data center space, et cetera. But I think one of the things that's not talked about as much
is capital, access to capital, and how capital is structured. And what I mean by that is
this is billions to trillions of dollars of CapEx. And just using equity dollars alone is not an
efficient way to scale this. That's obviously a massive dilution. It's not an easy problem to
solve. When we first met, I had like slowly come to this realization. I was like, I don't think we should
take the dilution for the cluster. Yeah, right, exactly. And so that's where I think, you know,
when you and I have talked about like structuring, and I can give a couple examples, if that's helpful,
I think the first one was DDTL (delayed-draw term loan) structures, or SPV debt structures. Think of it as an
SPV. Inside of the SPV is the CapEx, the collateral, which is the GPUs and the contracts
themselves. And so in this example, the actual asset or collateral was not really just the
GPUs themselves. It was really the contracted cash flows from, in this case, investment-grade
counterparties. And so I think the reason... This is the consumer of the compute. The consumer of the
compute, exactly. You know, your Microsofts, your Metas, et cetera, of the world. And I think the reason
that was done is really two-fold. When you look at the scale of the problem, you know,
those particular contracts needed billions of dollars of debt to finance the CAPEX.
You know, obviously for a nascent and new and growing company, that's really hard to raise.
So part of structuring it this way is ensuring that you have kind of guaranteed offtake on the back end to minimize the risk for, you know, debt holders.
And I think that's a lot of what the market got wrong, especially when there was a lot of press about this early on, where it was,
there's billions of debt on these highly depreciating assets, and it's extremely speculative.
And what was oftentimes characterized in the media was these debt structures had GPUs as collateral,
and that's like putting a used car as collateral, which is obviously just going to depreciate incredibly fast.
You know, that's a very risky kind of structure.
And I think what got missed was the GPUs themselves were actually like the
secondary or tertiary level of collateral in those instruments.
The primary collateral was the contracted cash flows from investment-grade counterparties.
It's Microsoft or NVIDIA or somebody like that saying, I'm committed to pay you.
Exactly.
I know you can pay me.
Take or pay contracts.
And they're like five years in length.
So I think that was like one feature that's unique to talk about.
And then the second one really has to do with the debt itself and how it amortizes.
And so, like in simple terms, you know, when you have debt, you have principal in interest,
and you have to pay it off over time. And in these structures, typically the payback period on the
CapEx was roughly two to three years. And the structures themselves, the debt was
four to five years in length, where the entire debt amortized during the period
the debt was outstanding. And so at the end, you ended up with a zero
balance for the debt, and there was no balloon payment or anything that was really due on the
back end. And so the question that often comes up is, isn't that a very risky type of structure
because these things are depreciating incredibly quickly? So I think there's two comments here.
First is on that depreciation question, in these kind of debt structures, it doesn't really matter
because the debt's fully paid off by the end of the debt term against committed
contractual cash flows from investment-grade counterparties.
And then at the very end, the actual upside or residual value,
and I know there's a lot of questions on residual value,
is held by the cloud player in this example, right,
CoreWeave, right, or any others.
And that's a really interesting prospect,
because you can see a world where all of this CAPEX is paid off
incredibly quickly, and there's an opportunity to redeploy it without having to pay for any
additional debt, obviously, against that redeployment.
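The amortization mechanics Neil describes can be sketched in a few lines of Python. All of the figures here are illustrative assumptions (a hypothetical $1B draw, a 10% coupon, a five-year term), not actual deal terms; the point is only that a fully amortizing structure reaches a zero balance with no balloon, so the GPUs' residual value never has to backstop the debt.

```python
# Sketch of a fully amortizing debt structure: the debt pays down to zero
# over its own term, so there is no balloon payment at the end.
# All figures are illustrative assumptions, not actual deal terms.

def level_payment(principal, annual_rate, years, payments_per_year=12):
    """Standard level payment for a fully amortizing loan."""
    r = annual_rate / payments_per_year
    n = years * payments_per_year
    return principal * r / (1 - (1 + r) ** -n)

principal = 1_000_000_000   # $1B of GPU CapEx (hypothetical)
rate = 0.10                 # 10% coupon (hypothetical)
years = 5                   # debt term roughly matching the contract length

pmt = level_payment(principal, rate, years)
balance = principal
for month in range(years * 12):
    interest = balance * rate / 12
    balance += interest - pmt   # each payment covers interest plus principal

print(f"monthly debt service: ${pmt:,.0f}")
print(f"balance at end of term: ${balance:,.2f}")  # ~0: no balloon payment
```

Because the contracted take-or-pay cash flows service this schedule in full, whatever the GPUs are worth at the end is pure residual upside to the cloud operator, which is the point made above.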
How have the instruments changed?
They've changed in several ways where, you know, the first is, and when you look at these SPVs,
I think you're starting to see ways to change the portfolio construction of who can go inside
of one of these debt structures.
And so, you know, early on in the early days, these were all only investment-grade counterparties.
because the space was so nascent, the operators had no experience. And I think now what you're
starting to see is a blend of investment grade and non-investment grade. So like, what does that actually
mean? What that means is, you're seeing these structures with investment grade counterparties like
your hyperscalers and your other corporates that are IG, mixed alongside some of the AI-native companies.
And so think of the AI model companies, the labs, software companies that are building AI
startups, you're seeing those companies get mixed in alongside the IG companies to build a portfolio.
Because now you have, you know, the history that you can do this.
And now you have structures where you can kind of balance the risk with IG and non-IG.
And we're continuing to see that evolve to be able to help finance, you know,
really the model companies and a lot of these startups.
Obviously, that was difficult to do, you know, three or four years ago.
That's starting to become easier as these companies have more runtime and ability to
make the compute fungible.
All our portfolio companies that buy compute tell me it's a supply-constrained
market today.
One, is that true?
And two, when you think about like continuing to grow your business or grow this ecosystem,
like what's going to stop it?
Like, what could slow down a build out?
Yeah.
I mean, I think what's interesting is if you look at like 2023, 2024, we
were very supply constrained, and the supply constraint was chips.
And no one could get access to chips.
Yes.
We bought chips.
We bought chips, right?
Yeah.
And, you know, there was this thought that, okay, there's going to be an overbuild of
chips and then the supply constraints will go away.
Well, you know, fast forward to 2026.
And what we see is, you know, there is obviously more availability of chips, but to build
and operate these, you know, data centers requires people, power, infrastructure,
a lot of these things that have a lot of bottlenecks.
And so actually taking these chips and then making them into useful revenue generating assets is really the bottleneck now.
It's also not clear that there is supply of chips at the latest generation at scale soon, which is how everybody wants them.
Exactly. I think you're starting to see that not only do the high-end players want access to the latest chips, you're seeing, you know, obviously startups want
access to those, and I think it has to do with efficiency.
You know, one of our friends, or one of your friends as well, Dylan Patel over at
SemiAnalysis posted this interesting article last week on inference and inference spend
and inference performance.
And, you know, there's a lot of, you know, jokes made about Jensen math.
And it was interesting because the...
Seems pretty good at math, honestly.
He's actually great at math.
And so going from the Hoppers, the H100 or H200 series of GPUs, into
the Blackwells, there was a claim made that it could be 30 times more efficient.
And I think the data from, you know, someone else showed that it was 90 to 100 times more
efficient in terms of inference performance.
And so I think part of the need to go to these new chips is, yes, more computing power,
but also that it can actually be cheaper to operate.
It's price performance.
Right.
Price performance, exactly.
Mm-hmm.
Yes.
My favorite Jensenism is the more you buy, the more you save.
Exactly.
It's actually true.
Yeah.
Crazy.
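The price-performance point in this exchange can be made concrete with a quick back-of-envelope calculation. All of the numbers below are made-up placeholders, not vendor benchmarks: a newer GPU can cost more per hour yet be far cheaper per token if inference throughput scales faster than price.

```python
# Back-of-envelope "price performance": hourly cost vs. tokens served.
# Illustrative numbers only, not actual GPU pricing or benchmarks.

def cost_per_million_tokens(hourly_cost, tokens_per_second):
    """Dollar cost to serve one million tokens at a given throughput."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_cost / tokens_per_hour * 1_000_000

old_gpu = cost_per_million_tokens(hourly_cost=2.50, tokens_per_second=1_000)
new_gpu = cost_per_million_tokens(hourly_cost=7.00, tokens_per_second=30_000)

print(f"old chip: ${old_gpu:.3f} per 1M tokens")
print(f"new chip: ${new_gpu:.3f} per 1M tokens")
print(f"cheaper per token by {old_gpu / new_gpu:.0f}x despite a higher hourly price")
```

Under these assumed numbers, the pricier chip comes out roughly an order of magnitude cheaper per token, which is the sense in which "the more you buy, the more you save" can be literally true.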
Help me address this criticism around circular financing.
Yeah, I know.
It's obviously the topic du jour.
And I think, you know, the way we see it and frame it really has to do with the demand signals.
And who are the eventual buyers and how is this being used?
And so, at least from what our perspective, we continue to see insatiable demand.
and if you go back to, you know, the previous kind of big tech buildout back in the early 2000s,
there was obviously a lot of fiber that was being built and you had dark fiber, you know,
and overbuild happening.
And I think what you see here is, you know, you don't see any dark GPUs.
No, I've been looking.
Exactly.
Any GPU is used.
Yeah.
And then number two, you're starting to see actual economic value.
So I think last year, enterprise AI had about $37 billion
of total TAM, and it's continued to grow like crazy.
And at least personally, and I'm sure you see this too, but I use these tools all the time.
Continuously.
Continuously.
Incredibly valuable, right?
The actual tokenomics of positive ROI is actually here now, I think, from our perspective.
And so the circularity comment, I think, applies when you're building, you know, speculative compute capacity,
or if you're purely doing vendor financing
and trying to do some type of unique,
some type of revenue-recognition type item related to that.
And that's not what we see.
What we see is financing to support the buildout of demand against use cases
that are very positive in their ROI.
And so our perspective is that that's not a real concern that we have.
And it really has to do with who are the ultimate buyers here.
Ultimate buyers have been at scale, the hyperscalers, they're deploying this at scale,
and the economics are positive when you look at a unit economic basis in terms of deploying
intelligence.
And I think we're at a moment in time where you're really starting to see that.
In my own experience, I have been a heavy AI user for several years.
But reasoning advances and the ability to scale inference, especially around code,
mean I'm up against my max limit all the time in a way that was
not true initially.
How are inference workloads actually growing?
I mean, it's a good demand signal that there is value, but how does that change your business?
Yeah.
So I think one thing that's interesting that we're seeing is, obviously, there's been the shift
from training to inference, you know, over the last few years.
That split continues to grow on the inference side as usable and ROI positive applications
get developed.
I think the first thing I see on the inference side now
is that inference is a lot more complex than initially thought.
And what I mean by that is it's not as simple as you train a model and then it's easy to
inference it.
In certain cases, you can do that on similar infrastructure, but there are issues around latency,
fungibility of that, and really optimizing the cost of your compute on the inference side.
How do you manage peaks of inference demand?
And obviously it's not linear like training, where your GPUs are on all the time, you know, 100% of the time.
And so with inference, you have a lot more variability.
And so there's a lot more nuances in optimizing inference.
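The variability point can be sketched with a toy example. The hourly traffic numbers below are purely illustrative: a fleet sized for peak inference demand runs well below the near-constant utilization of a training cluster.

```python
# Toy sketch of inference demand variability vs. training's flat utilization.
# Traffic numbers and throughput are illustrative assumptions.

hourly_requests = [20, 15, 10, 10, 15, 40, 80, 120, 150, 160, 155, 150,
                   140, 145, 150, 155, 160, 150, 130, 110, 90, 70, 50, 30]
requests_per_gpu_hour = 10  # assumed per-GPU serving capacity

# You must provision for the peak, but you pay for the fleet all day.
peak_gpus = max(hourly_requests) / requests_per_gpu_hour
avg_utilization = sum(hourly_requests) / (len(hourly_requests) * max(hourly_requests))

print(f"GPUs needed for the peak hour: {peak_gpus:.0f}")
print(f"average fleet utilization: {avg_utilization:.0%}")  # well below training's ~100%
```

The gap between peak provisioning and average utilization is one reason optimizing inference cost is harder than it first looks.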
I think the second thing I've observed is inference is definitely a memory problem, a memory-throughput problem.
You know, on the inference side, you know, you have these phases called prefill and decode, right?
and how you optimize that across a fleet of GPUs is actually a unique technical problem.
And then the third is what I would say is distribution.
You know, a lot of times training infrastructure is quite centralized.
What you're seeing with inference is in many use cases, as this becomes more ubiquitous,
you're going to have more and more decentralized inference clusters.
And actually one of my favorite companies is one of your companies, Baseten,
which is really optimizing distributed inference
at scale. And I think one thing that's interesting when you look at companies like that and other
inference clouds is how do you optimize the compute and build out these clusters that could actually
look very different than a training cluster, where a training cluster might be 50, 100, 150 megawatts in one
set of four walls. I think you're starting to see distributed inference, which could be, you know,
four or five megawatts in five separate data centers in different areas, and stitching them
together, right? And that looks very different from a kind of power perspective, and
you know, the software matters a lot more when you're doing like distributed inference. And then
in terms of your question, how it impacts us, I think one of the things that we've been,
you know, focused on is, you know, where we started this conversation with you on financing
compute, that was really obviously, it started with mostly training. A lot of those hyperscalers
are now doing a lot of inference on that same
infrastructure, but these are investment-grade counterparties. You know, it's easier to lend money
to build out these clusters to those customers. I think now that you have this new crop of
inference clouds and application-layer companies that are needing tons of inference, I think the
key question that we're really focused on is, how can we finance the next build, which is
distributed inference? And maybe the last, you know, one or two takeaways: one thing I'm seeing
is, you know, for every application-layer company out there, the highest line item in COGS is
compute. And then the inference companies and inference clouds out there, most of them are
purchasing up compute from either other clouds or unused capacity. And when you look at like margins
for that, you've got like layered margins. And so there's a push to kind of own your
own infrastructure to really drive and increase profit margins, but also it's the ability to
kind of have control of your own destiny. And I think a lot of folks at the application-layer
companies and inference clouds are grappling with how can we build and own and operate
our own infrastructure. And that's something I'm really looking into. I am too. And I think one of
the things that is going to make a big difference in this ecosystem is like can the inference clouds
like Baseten, can they deliver reliability that you would expect from a cloud, like a traditional cloud?
Yeah.
Because the distributed data center operations that, you know, they consume today do not offer that reliability.
Right.
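The layered-margins point a few turns back can be sketched numerically. The percentages here are illustrative assumptions, not anyone's actual margins: when an application-layer company buys inference from a cloud that itself rents capacity, it pays a margin on a margin.

```python
# Sketch of "layered margins": each reselling layer marks up its cost.
# All margin percentages are illustrative assumptions.

def price_for_margin(cost, gross_margin):
    """Price a layer must charge to earn the given gross margin on its cost."""
    return cost / (1 - gross_margin)

raw_cost = 1.00  # underlying cost of serving one unit of compute
cloud_price = price_for_margin(raw_cost, 0.30)        # GPU cloud takes 30% gross margin
reseller_price = price_for_margin(cloud_price, 0.25)  # inference cloud reselling capacity

print(f"cost to the app company: ${reseller_price:.2f} per $1.00 of raw compute")
print(f"owning the infrastructure would cut that COGS line by "
      f"{(1 - raw_cost / reseller_price) * 100:.0f}%")
```

Stacked this way, two modest margins compound into nearly double the raw cost, which is the economic pull toward owning and operating your own infrastructure.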
And the other thing that's interesting is, you know, this is additional reporting from last week.
If you're familiar with Silicon Data, this is Carmen Li's company, they put together a lot of, you know, data on spot pricing and price-per-token performance.
And one thing that I think it was really interesting in an article she published last week
had to do with how two pieces of compute that look identical on paper have wildly different performances,
everything from reliability to cost to speed.
And I think as you distribute, you know, have distributed inference,
how do you mash together very different types of compute and try to optimize reliability?
I think is super interesting.
And that gets to kind of one thing I find really interesting that NVIDIA is doing is this concept of AI factories
and building AI factories, you know, behind corporates and AI companies.
And maybe the way I unpack that is you've got kind of more large monolithic cloud players,
the hyperscalers and the neoclouds that are building large-scale, you know, cloud environments.
And a lot of where I think Nvidia and,
others see this going is, yes, those are going to be important components and those are going to be
huge markets. But corporates, Fortune 500 companies, you know, AI companies that use a ton of compute will want
dedicated AI factories associated with workloads that they run and that they have control
of. And so I think you're starting to see, you know, the early indications of how you finance and
build out, almost think of it as literally, AI factories that sit on-prem with a company that can
operate their workloads.
You're talking about my Mac mini farm.
Exactly.
No, but all joking aside, I think another supporting factor for use of all of the compute we have and can create over the coming years is that power is clearly the limiting factor.
It's easier to get more power in smaller units.
I think that as inference demand is growing, anyone who has
usable compute for inference is going to find a lot of partners for offtake.
Exactly.
Okay. Let's look at the future a little bit while we have 10 minutes.
Let's talk about the macro.
Like, people talk about energy.
They talk about natural gas, the grid, the slowness of nuclear.
Like, what do you think about over the next six or 12 months?
Over the last year, I've been spending a ton of time in the power and energy markets.
and looking at interesting solutions that can help scale power for the gap that we see.
I think a few observations that we've seen.
The first is we do have a power problem, but I think it's a bit more nuanced than a lot of the reporting out there where...
We just can't generate.
Yeah, I think there's actually quite a bit of stranded power across the grid, across the country.
And what I mean by that is a lot of the utilities are built in a way where they're focused on peak power, right?
So they've got natural gas peakers and they're focused on providing peak power for those moments where demand is kind of off the charts.
And that's obviously only for a few days out of the year.
So there's lots of generating assets out there.
The question is they're a bit stranded, right?
And so there's kind of, I look at the power problem as being kind of multiple fold.
The first one is how can you take the power we have on the grid and actually make it usable?
And a lot of that has to do with flexibility and storage.
And so we've been spending a lot of time looking at the energy storage business and distribution.
How can you store unused capacity, peak-demand-shaved capacity, store it, and then distribute it when it's needed?
We made an investment in a company called Torus.
I think I mentioned to you, which is building like this distributed utility layer, almost like this mesh infrastructure to store excess capacity, or store capacity from a variety of sources, and then distribute it
at the time when it's needed. And so I think that's kind of a critical layer that needs to be built.
And the longer term, there is a generation problem, but I think in the shorter term, it's really,
it's more on the distribution and storage. And then the other piece I would say is, you know,
the true bottleneck, at least in the short term, the next six to 12 months, is,
I don't want to use the word simplistic, but it's things like structural steel. It's finding electricians
that can, you know, build this.
You can't get enough.
You can't get enough steel.
You can't, yeah.
This is not something I was aware of.
Yeah.
You can't get steel.
You can't get, you can't find enough electricians to build out, you know, the power
infrastructure, substations, transformers, air chillers.
These are like very specific power infrastructure needed to just get to a point where you can
start to build a powered shell on a piece of land.
And so the bottlenecks in the short term really are people,
equipment. And then the other interesting thing is that on the generation side, what you're seeing
is regulatory, obviously, is a big challenge. And so there's a combination of bring your own capacity.
There's a lot of that that's interesting right now. And so a site that can potentially grow to
50, 100 megawatts might start with only 10 megawatts of grid interconnect. But can you add solar,
natural gas turbines, put these various bring-your-own-capacity kind of pieces of technology together,
to make that site usable.
And so I think a lot of what's being looked at
and a lot of what I'm looking at right now
is really on the bring your own capacity,
at least in the short term.
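The bring-your-own-capacity idea can be sketched as a simple capacity stack. The source mix, nameplate sizes, and capacity factors below are all illustrative assumptions, not a real site design:

```python
# Sketch of "bring your own capacity": a site with a small grid interconnect
# stacks on-site generation to reach a usable firm total.
# Nameplate capacities and capacity factors are illustrative assumptions.

sources = {
    # name: (nameplate MW, assumed capacity factor)
    "grid interconnect":       (10, 1.00),
    "on-site solar":           (40, 0.25),
    "gas turbines":            (35, 0.90),
    "battery (shifted solar)": (10, 0.50),
}

firm_mw = sum(mw * cf for mw, cf in sources.values())
print(f"expected firm capacity: {firm_mw:.1f} MW from a site with a 10 MW interconnect")
```

The point of the sketch is only that a 10 MW interconnect does not cap the site: stacked behind-the-meter sources can multiply usable capacity while the grid connection catches up.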
Yeah, I think if people don't know
the origin story of Crusoe and flare gas,
like it's actually really interesting
as an example of, you know,
there is actually lots of energy.
Yeah.
You know, some energy out there
and you can make much more of it consumable.
Yep, exactly.
A couple topics to hit before we lose you.
New players: how do you think about the sovereigns and what they're doing in their buildouts?
Yeah, I think...
They seem to be able to fund themselves to some degree.
Exactly, right?
You know, you saw the news from India last week.
Obviously, a lot of the news in the Mideast, Southeast Asia.
I think, you know, we're continuing to see that sovereigns view compute and AI, and even we do here in the United States, as a matter of national security.
And obviously, the funding of those
clusters is very different than funding like a private cluster. And so you've got, you know,
government capital that can be used for that. So I think there's two things that, you know,
I find interesting in that space. I think one is who are the partners that are going to build
those, that capacity? And what are the cybersecurity kind of implications and environments for that?
And so those are the two nuances, I think, with sovereigns is they need to find players that can
rapidly scale compute in their countries.
And oftentimes they don't necessarily have these players that know how to build and scale GPU compute.
I think that's a great place for the United States to lean in and help build sovereign ecosystems
around the world.
And then there's a matter of cybersecurity.
And how do you make it into a truly safe ecosystem for those sovereigns?
And so I think there's a lot of work to do still on the cyber side, especially as you look at,
you know, scaling sovereign AI.
What is your thinking on physical AI?
It's another, you know, if it works,
CapEx-intensive build.
Absolutely.
And, you know, maybe I'll just take a second to say
one of the things that we observed from 2010
to like the early 2020s was we were in a very capital-light,
asset-light mode of build.
Like SaaS, you never heard of Magnetar in SaaS, right?
No.
Because it was just purely asset light.
Compute and everything we saw starting in, you know,
'21 is asset-heavy.
That's where you started hearing
a lot more about us. And I think physical AI is actually an extension of that. And so what you're
seeing is part of the reason. I think we all have scars from the 2010s of hardware
companies that did not make a lot of money for us. Part of the scars was it was so difficult
to scale hardware companies, you know, because the software was so difficult to build: you needed
to spend so much money building the hardware that the software was an afterthought. What you're seeing now
is that, now that you have more general-purpose software via AI, it can make the
hardware easier to scale, because you have software that can interact
with more hardware. And so I think the natural kind of extension of what we see is
kind of what happened in the compute markets where you really needed flexible capital,
where it wasn't just equity, it was debt and, you know, a variety of project finance to really
scale CAPEX. You're going to see that same kind of need in physical AI. And it simply has to do
with capital intensity, right? You know, on the compute side,
for like CoreWeave as an example,
they needed billions of capital to scale
that cloud.
And I think whether it's a robotics company
or whether it's a, you know,
a manufacturing focus company, drones, defense,
all of these areas are incredibly capital-intensive.
And then now that you add AI into them,
I think it can help them scale faster, quite frankly,
and capital intensity is still there.
And so there's a moment in time now
where you're going to have to really look
at optimizing balance sheets
for physical AI to really grow and scale.
I think to your point of how the early AI compute contracts were structured,
I learned to be an investor in an era and an environment where robotics was a great way to lose a lot of money for a long period of time.
You remember that too.
Now I sit on the board of two robotics companies, so let's hope that's not true anymore.
But I'd say like it's just a question of capability to me.
Like, you know, whether it's in the home or in industrial settings where, like, it is simply not a good human job or we don't have the labor.
Yeah.
You are going to have, if the products work, I think, investment-grade buyers who are going to have contracts that say, like, we want it.
And you can raise debt against it.
Exactly.
Right.
And so I think actually that that feels of a very similar shape.
Last question for you, because it is so timely.
What do you make of the general capital rotation out of software, the end of software, and it's all
infrastructure labs and AI natives, I guess. Yeah, yeah. It's interesting to see that every day there's another
industry that kind of tanks, whether it's, you know, you saw the wealth advisors tank for a few days,
you saw the consulting companies, you saw payments, real estate, right? I mean, I think
what you're seeing, at least in my view, was that towards the tail end of 2025
and into 2026, there was a big step up in performance of usable
AI. And I think, you know, what Anthropic was doing with Claude, and, like, we use it all,
you know, obviously we use all the models, but, you know, there was a definite step up in performance
in making AI usable. And seeing that it can, you know, truly disrupt these, you know, non-AI-native
industries, I think the reaction and rotation out of each of these names is a bit much.
There are two factors I look at.
One is when you look at valuations as an example,
I think from a free cash flow perspective,
SaaS companies are valued at the lowest they've been in years.
And there's a huge difference between what those rev multiples are today
and what they've been in the past.
And so free cash flow margins have steadily increased for SaaS
as a whole over the last four or five years,
and revenue multiples have stayed, you know, the same or gone down.
And so to me, that's a bit of an exaggeration, because it really has to do with individual names versus sectors.
And that's kind of my take, at least: in all of these sectors, there are individual names that will learn how to maximize their, you know, value using AI.
And there's those that won't.
But what's happening right now is there's, you know, a hammer being hit across all names and not, you know, specific individual names that might not be using it as well.
And then the second point, at least, you know, my view is there are a number of applications that, you know, on paper sound really interesting.
Like, oh, AI, you could just rebuild Slack, or you could rebuild Salesforce, or you could rebuild, you know, X, Y, and Z.
I think, you know, it's not just the product.
It's the way that's integrated across multiple services and systems across the enterprise.
That is a lot more difficult to just replicate than I think some of the public markets are kind of reacting to.
And I do think there's, you know, a fundamental question in addition to
what you said, which I agree with, of, like, does anybody want to rebuild it? Yeah. And, you know,
to your point, within the software sector in particular, there are companies
that are structurally more protected and there are companies that are at
more risk, right? And I think it's as simple as, like, you've got to go select. Yeah, exactly.
This has been so fun. Thanks so much, Neil. Yeah, I really appreciate it. Congratulations on all the
innovation and on building out all the compute. Awesome. Thank you.
Find us on Twitter at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces,
follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode
every week. And sign up for emails or find transcripts for every episode at no-priors.com.
