Big Technology Podcast - CoreWeave: AI Bubble Poster Child Or The Next Tech Giant? — With Michael Intrator and Brian Venturo
Episode Date: January 7, 2026

Michael Intrator is the CEO of CoreWeave. Brian Venturo is the chief strategy officer at CoreWeave. The two join Big Technology Podcast to discuss the company's rapid rise amid the AI boom and the criticisms of its business model. In this episode, we cover what it takes to build so many data centers in such a short time, what happens to CoreWeave if the AI boom flattens out, why the company uses debt to build its infrastructure, and how AI chips depreciate over time. Tune in to hear an in-depth, illuminating interview with the founding team of one of the AI moment's most fascinating and controversial companies. --- Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. Want a discount for Big Technology on Substack + Discord? Here's 25% off for the first year: https://www.bigtechnology.com/subscribe?coupon=0843016b Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Is AI a bubble or the biggest boom of our lifetimes?
The fate of one company, CoreWeave, may tell us everything we need to know.
We'll be back with the company's founders right after this.
Welcome to Big Technology Podcast, a show
for cool-headed and nuanced conversation of the tech world and beyond. We have a great show for you
today because in studio with us are the founders of CoreWeave. CoreWeave CEO Michael Intrator is here
with us. Michael, welcome. Thank you very much. Great to be here. And CoreWeave's chief strategy
officer Brian Venturo is also here. Brian, great to see you. You both are running one of the
most fascinating companies in the AI boom. Everyone has used you effectively as a Rorschach
test to read in their beliefs or insecurities about what's going to happen in this AI moment.
Some people think that you're the poster child for the AI bubble. Others think that you're
perfectly positioned to take advantage of the boom in building that is
occurring as demand goes through the roof.
A couple stats about you.
As of today, the company is worth $42 billion after an IPO earlier this year.
You've built eight new data centers across the U.S. in the third quarter alone,
and the latest reported numbers have you in possession of something like 250,000 of
NVIDIA's GPUs, which are the chips that companies use to run
AI models and grow them, or train them, as they like to say. Let's just start off with this because
it's been a heck of a ride for you over the past couple years. What has it been like being on the
front lines of this AI build-out? Talk a little bit about how it's felt, the speed at which
it's boomed, and what it's taken to do something like build eight data centers in a quarter.
It's exhausting. Yeah. All right. So let's start with that. It's been exhausting.
Yeah, it's, you know, you hit it on the head, right?
Like, it has been incredibly exciting.
It has been an unbelievable year.
I mean, we just, we just IPOed really eight months ago, and it feels like it's been two lifetimes.
The company is moving at incredible speed.
We are building a massive percentage of
the global AI infrastructure that's required to allow artificial intelligence to be what it is.
And when I say massive, it's, you know, like a meaningful percentage.
What's your estimate about the percentage?
Ooh, that's tough.
You know, look.
A lot?
A lot is, you know, we think of ourselves as providing enough of the compute that
we have the ability to be relevant in the debate of how AI is going
to be built and how it's going to run into the future. And so we don't know what the numbers are.
You know, there's lots of different providers of technology. They're being used. There's no real
good way to kind of put your fingers on the data. But, you know, meaningful, right? And that's
an exciting place to be. And it's, honestly, when we talk about this in the company all the time,
it's a privilege to come into work and focus your energy, your creativity,
every day on building a component of artificial intelligence, which is the issue of our time in many ways.
And we get to really sit there every day and pit ourselves against those issues, which is great.
I mean, I have a ball with it.
I'm going to take a shot at this.
Hold on.
Before we move on, I think that that's really around, let's call it, the practical side.
of it, right? And when you're a company growing as fast as we have, where we had maybe 100 employees three years ago and now we have 2,500 employees or so, there's an emotional side of this too, right? And, you know, sometimes since the IPO, we've been under this spotlight in the world of like, what are they doing? How are they doing it? Are they executing or are they doing this? And, you know, internally, we always set the highest bar for how fast can we do something, how high a quality can we do it at? And, you know, as this industry has expanded so
rapidly, like, there are things that happen, right? And, you know, you have weather that impacts
construction or a project. You have a truck that hits a bridge. Like, you have all of these
random exogenous or idiosyncratic things that happen in a supply chain. And then it comes
back to us and it's like, the world is like, wow, you failed. Right. And inside the company
from a culture perspective, it's been so important for us to manage. Like, listen, we're doing
something at a scale no one's ever done before, at a speed no one's ever seen before.
Of course things are going to go wrong, but take perspective, like, see how much we've done, right?
And for our employees, it's if you're moving at a million miles an hour and you hit a speed bump,
it's okay, right? It doesn't change the trajectory of what you're doing. It just, like, it just
provides the battle scar so it doesn't happen next time. Yeah. I can imagine it's a rough and tumble
world trying to build this with very demanding customers, very important technology that you're deploying
and the speed is crazy. I mean, it is interesting
looking at your founding story, you really started working on providing infrastructure for
crypto. Was it like Ethereum mining or something like that? And then pivoted in a very smart way
to this AI moment establishing a relationship with Nvidia. We'll talk about that. That's proven
to be very useful and helpful for you, and probably for NVIDIA as well. And now you're, again, in
hyperdrive building data centers. And the data centers are, if I have it right, largely licensed,
or the capacity is rented out, mostly to the tech giants. I mean, the core customer is Microsoft,
something like two-thirds of the demand, according to your public filings, is Microsoft. But there
are others as well. So, we actually spoke to customer concentration in our last
earnings call, so we can kind of, there's no customer that represents more than 30% of our
backlog. And so we've done an incredible job. It's been a focus of the company, everything
from sales all the way through the build cycle to really begin to broaden the reach with which
our solution touches artificial intelligence. So Microsoft is an important customer and a large,
credit worthy and formidable part of the AI ecosystem at large, but they are, you know, we've done a
really good job bringing on other wonderful clients, wonderful customers that are going to continue
to kind of use our solution as they build their products and deliver them to market.
Okay, and I definitely want to get into customer concentration a little bit, so, but that's a good
preface to what we'll touch on and already some new data to me, so good to hear that.
But I wanted to, again, like, just get into what it takes to build these things, these data centers.
You're assembling them with incredible speed.
So I just want to hear a little bit about, like, on the ground, what does it take to put together these data centers?
So historically, you know, let's say two years ago, we were able to go out and buy capacity or at least
capacity that was much further through the development cycle, right?
They were basically, the shell already existed.
It was a fit out construction process, which means going in and installing, like, the last
pieces of the cooling infrastructure, cabinets, conveyance for all the cabling, all the hundreds
of miles of cabling we have in these things.
But what's shifted over the past year is that now we're doing much more bespoke in-house
design, right, to make sure that we're meeting the needs of what our customer's deployment
is going to be, right?
So it's everything now from, okay, how is the cooling and electrical distribution
designed? How are we ensuring electrical redundancy and reliability? You know, how are we
cooling the air-cooled side of these things? Because you have liquid cooling, there's still a
component of it that has to be cooled with air. Can we pause on that? Sure. These chips run extremely
hot, right? So, cooling, people talk about cooling. For those people who are coming to this for the
first time: to be able to run an AI data center, you've got to be able to cool the chips if you
want to be successful. So this is one of the things that I think the market
misunderstands, right, is that everybody believes that there's some differentiation in
the plumbing of the liquid-cooled data center, right?
That's not where the differentiation lies.
It's all the same pipe and valves and fittings.
Like, everyone's using the same things there.
The differentiation comes after you turn it on and how you control those systems.
Okay.
Right.
And that's what we've done incredibly well as a company, and that we've very consciously not spoken
about externally for the past couple years because it is our secret sauce:
how we provision, validate, and manage those data centers
all the way from the power and cooling infrastructure
up through the GPUs and the servers.
And it's why the most valuable companies in the world,
the biggest AI labs, actually use us to run their most critical training jobs.
Right.
I mean, it's a herculean task, right?
It's important to understand that when you're thinking about the ecosystem, right,
and you're thinking about the different neoclouds that populate it. What's a neocloud?
The worst term ever. I hate it.
Think of it as like, you know, in the common vernacular, you know, everybody knows who
AWS is, you know, Amazon. They know who Microsoft is. They know who Google is. Those are
the hyperscalers, right? You can throw Oracle in there if you'd like. But then there's a class of
providers that can deliver this infrastructure. And, you know, we are the leader among that.
And what is important to understand is that if you took all of the other neoclouds and added their GPU fleets up, we would still be a multiple of all of them combined in terms of the number of GPUs that are up and running and delivered to clients.
And so when Brian is talking about, you know, things that the market is struggling to understand, it's important to understand that what differentiates us, what's allowed
us to be as successful as we have been, is that the software suite that we have built allows
us to take the commodity GPU and deliver a decommoditized premium service that allows
people to extract as much value from this infrastructure as possibly can be extracted.
And that's really what CoreWeave is doing.
And it's why when Brian says, hey, you know, the leading companies in the world and the
leading labs in the world are relying upon us to deliver
our service, that is why. It's because the product that ultimately they receive is the product
that will allow them the greatest probability of being successful at using the GPUs to deliver
the products that their company is building.
Right. So just to put it in plain English, when a company like a Microsoft will work
with you on building infrastructure for artificial intelligence, you've built some proprietary
pieces of the puzzle, like your cooling system, like the software that runs the data center,
and that allows them to get more out of the chips than they would have typically.
Yeah, and the nuance here is that when you build one of these data centers and it has
3,000 miles of fiber optic cabling and it has a million optics that connect into the switches,
like these things all fail, right?
And when they fail, the way that training jobs are run today is if one component fails or one component limits the performance, the balance of the training run is going to be governed by the worst performing component.
Oh, right?
And our entire job is to build the automation, the predictive analytics, the, you know, the machine learning models around saying, okay, we're seeing a problem here.
How do we gracefully handle these things?
So it has the least impact on our customer's jobs, right?
And that's the CoreWeave secret sauce, okay?
It's that we have the world's largest data set of how these things run, how they fail,
and we've built all the recovery mechanisms and the software intelligence to help our customers run these things.
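The dynamic Venturo describes, one degraded component governing a whole synchronous training run, can be sketched with hypothetical numbers (64 workers, one link or GPU running 4x slower; not CoreWeave's actual telemetry or software):

```python
# Illustrative sketch: in a synchronous training job, every step waits on a
# collective (e.g. an all-reduce), so step time is the MAX over all workers.
healthy_step_s = 1.0            # assumed per-step time for a healthy worker
workers = [healthy_step_s] * 64 # 64 hypothetical workers in one job
workers[17] = 4.0               # one degraded component runs 4x slower

step_time = max(workers)        # the straggler governs the whole step
throughput_vs_ideal = healthy_step_s / step_time

print(step_time)                # 4.0
print(throughput_vs_ideal)      # 0.25 -> the entire job runs at 25% of ideal
```

One slow optic or GPU out of thousands drags the whole job to its pace, which is why detecting and gracefully swapping out failing components matters so much.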
Is the demand that you're getting from your customers, you mentioned you know training very well,
is it mostly training the AI models? Because, well, that's what a lot of the infrastructure has been used for,
building, scaling these models, throwing more compute at them, throwing more data, making the
models bigger, and then the idea is that the models get better. So are you seeing most of your
demand in the training side of things? Or has it gone to inference where, like, companies are
actually using the models and deploying them into production? It's a great question. And I think
it talks to the split or this kind of delineation of where the market's been for the last three
years and where it's going. You know, our customer base for the last three years has primarily been
the largest AI labs and enterprises that are building the capabilities of AI, right? And it's now
shifted from the people building those capabilities to the people that want to use those
capabilities to change business outcomes. And this is where all the enterprise adoptions coming from.
You know, one of my favorite services out there is Lovable, right? You go to Lovable,
you can build any app you want. There's a chatbot that helps you go through it. You know, we're
finally starting to see people chain together these capabilities to build real products that
solve problems. And our business for the last three years has really been around the creation
of those capabilities, and has very quickly shifted to include not just the creation of them,
but the deployment of them and their use in business practices. So one of the things that I didn't
expect was that what looked like training two years ago is how inference was going to look
today, right, is that you're still dependent upon highly connected storage, you know, your backend
networks become critical to this because the models are sort of large. So there's really no
difference between training infrastructure we deployed to build those capabilities and what our
customers are ultimately using to serve them. So has inference overtaken training for you?
We serve a tremendous amount of inference. But no. I actually don't know the answer to that.
Really? Six months ago, I would have said
that it was two-thirds training and one-third inference.
It's probably close to 50-50 now.
Okay.
But there's also some of our big customers that they go from, they'll use a
campus for training, they'll launch a new product, they'll have to spill over for inference.
You know, a lot of this is very dynamic and it's been built to be so.
Yeah.
I, this may provide a segue to some of the other subjects that you'll ultimately get to in this
podcast.
But, you know, for me, watching inference,
and understanding that inference is the monetization of the investment in artificial intelligence, is one of the most exciting trends that exists within AI.
And we have a front row seat across the entire cross section of almost every large, important lab that's building this stuff, and watching them increasingly, you know, move from, let's say, one-third inference climbing towards 50 percent,
and at times it's even over 50% of the fleet being used for inference,
you know, is just an amazing indication of the scale of the demand
to use artificial intelligence to serve customer inquiry.
And that means everything.
All right.
One more question about this.
Why does CoreWeave need to exist?
I mean, we're talking about these big companies like Microsoft.
Like, why wouldn't they just build their own data centers?
Why are they licensing it from a third party?
So it's a great question.
There was a void in this market, right?
And how, there's a couple pieces here.
The biggest clouds in the world today are built off the cash engines of peripheral businesses, right?
Google's built on search, Amazon's built on retail, Microsoft was built on enterprise software.
We came pretty much out of nowhere, right?
And the moment in time for us to be able to get
ourselves into this position was driven by crypto, right? You mentioned earlier that we came out of,
you know, Ethereum mining. We were able to leverage the revenue from Ethereum mining to go out
and build and deploy additional scale so that when crypto went away, we had the infrastructure
in place and we hopefully had enough clients that we were at escape velocity,
right? So, you know, we recognized that compute was going to be valuable. We didn't necessarily
really know at the time what it was going to be valuable for.
Like, I don't think Mike and I ever had this idea of like there's going to be
this hundreds of billions of dollars a year in CAPEX for AI.
But, you know, we had the thesis that compute is going to be incredibly valuable.
We wanted to own a lot of it.
And we looked at that compute resource as an option.
Like, and we said, okay, what are the best things that we can do with this?
And that's how we've always approached different business problems, right?
It's like, what is our asset?
How do we monetize it the most effectively?
What's the most valuable way to use this?
So I'm going to jump in here on this, but I want to go back to something that we kind of talk through as we started this, right?
It's that, like, we've built a software stack from the ground up to optimize for the use cases associated with parallelized computing.
We do it better than anyone else.
The reason we exist is because we deliver a fantastic product that is highly in demand.
And incredibly differentiated.
And incredibly differentiating.
And so, you know, we serve the largest players, but we also serve, you know, a ton of other AI companies that are building applications where they have the choice to go and use us or to go and use one of the hyperscalers.
And many, many, many of them choose to use our solution because it allows them to more effectively deliver compute.
And one of the things that's really just lost on this is that there's not an understanding of how fundamental
the change from cloud 1.0 into cloud 2.0 was, as you moved from, you know, sequential computing
into parallelized computing. And when you made that leap, right, from, you know, hosting websites
and data lakes into driving parallelized computing for artificial intelligence, it stands to reason
that a fundamental change in how compute is used will also require a fundamental change in how you
build the cloud to serve it. And we took advantage of that transition to build best in class
solutions. Right. And that's why we exist. So I've heard an argument made that basically the
big tech companies, you know, to build these data centers, they have to forecast demand out
years in advance. It's a massive capital commitment. They are not sure whether it will pay off.
And CoreWeave is useful to them because you're taking the risk and then they will be able to use
your capacity and sort of rent it out, as opposed to having to make these big investments on
their own, and, you know, it's on you if things go wrong. Yeah, look, you know, that is a
narrative. I don't think that actually tracks with the reality of the situation.
I think the reality of the situation is the large hyperscalers are building as fast as they
can. Google went out and just, you know, released
a press release where they're building $50 billion worth of infrastructure while they're still
buying from everyone else they can.
Microsoft is building internally and they're buying from lots of other players.
I feel like that argument is model fitting, right?
It is, somebody's got a preconceived notion of what this is going to look like, and now
they're reconstructing the facts on the ground to fit that model so that they can
say, look, I'm right. But the reality is that I look at it very differently, right? I look at
the way that we built our competitive advantage over, you know, the hyperscalers, the way that we
built our competitive advantage over other neoclouds. And the way that we did that is we understood
that this type of computing was going to be important. And we built the infrastructure and the
software to be able to serve it when the demand emerged.
We did it in a very risk-managed way.
When I look at the future, when I think about the investments that go into building an AI factory,
and I think about how much money is being put into the data center versus how much money
is being put into the compute that goes inside of the data center, I think about the data
centers as being basically an option on being able to provide and be relevant for the delivery
of compute into the future, right?
We take our risk dollars as a company and we invest in the long poles. And the long poles are
really twofold. One is building the best software in the world. And the second one is having access
to the data center capacity to be able to deliver compute when a wave of demand hits this market
that requires you to deliver it. You can't just wake up and say, hey, I want to deliver a gigawatt
worth of infrastructure. What you'd have to do is you have to start years in advance building that
gigawatt of infrastructure so that you're in a position that when your customers say,
hey, I just produced a new way of using AI that's going to require a gigawatt worth of
infrastructure, you're able to serve it. We're going to have a tremendous portfolio of
infrastructure that is going to be able to be deployed into the future. And we're really
excited about that. We think it's a wonderful way to go about building our business. Right. And that's
the question about the bet, right? Is that you're betting that AI is going to continue
to be adopted at a wild rate?
That's not entirely accurate.
Okay, let's hear it.
What we are doing is we are making the majority of our investments by taking long-term contracts
from credit-worthy entities using those contracts as a way of raising money to build the
infrastructure where the demand and the credit and the capital has already been
secured, right? So let's say 85% of our exposure is to deliver compute to investment-grade entities,
or AI labs, or other large consumers of compute, right? The other 15% is our exposure to
long-term contracts to be able to do that exact thing in the future. And that's the way I look
at it. I think it's a much better way to think about how we're taking on risk,
how we're dealing with leverage, and how we're positioning ourselves.
If the market continues to grow, we're in a great position.
If the market stabilizes in and around this, we're fine.
If the market contracts, or there's some new technology,
then we will be left with some portion of that 15%
where we may be in a position where it has to wait for a few years
before the market grows back into it.
And we are fine with that.
We think of it from, and, you know, people have talked about how the founders of this company
kind of look at the world with a different lens because we don't come from Silicon Valley.
You know, we come from the commodity space.
We come from Wall Street.
We think about option value, right?
When we think about compute, we think about what is the option value associated with it.
When we think about the data centers, we think about what is the option value to be able to build
to be relevant in the future.
And that's the way we kind of go about allocating our risk and securing
the contracts that we have in place right now.
Yeah.
And, you know, to speak to one thing here, you talked about if the market contracts,
I think that we would love that, because it presents tremendous opportunity for us.
How?
Right.
I mean, you're in a position where there's going to be distressed assets.
There's going to be consolidation possibilities.
Like, that's when opportunity really comes in.
And, you know, there's a lot of times where we sit there and say, okay, we're looking for
M&A, we're looking to invest in things, but the valuations don't make sense.
And for Mike and I, you know, we've made our careers on waiting for those opportunities and saying, okay, these are the things that I want to buy when things don't necessarily go right for them, right?
And, you know, that's really what excites us.
You know, one of our, one of our other founders last week, he got on the phone with me.
He's like, I love this, Brian.
I'm like, what, Brian?
He's like, this is the one where you start, like, you're so focused on, like, where are the opportunities?
How do I go take things over?
And, you know, I say it to some people every once in a while that I feel like when
there's headwinds in the market, it's actually easier to do this job, right, than when
the tailwinds are kind of blowing at 1,000 miles an hour.
But can I ask, how have you set up the company to make sure that you're not the distressed
asset if the contraction happens?
Look at the construction of our customer contract portfolio, right?
Everybody last year talked about how customer concentration and exposure to Microsoft
is a bad thing, but they have a better balance sheet than the U.S. government, right?
Like, I'm not worried about them performing in their long-term obligations to us.
Like, that's basically the best possible position we can be in.
And we've been super thoughtful about the way that we choose which customers to work with and how we manage the credit exposure so that we're, like, we're certain that the investments we make will be paid back.
And if you look at the people that are providing us the debt to do those projects, like Blackstone, right, they're some of the most sophisticated people in the world.
And for their underwriting committees to come in and say, yes, I want to do this and I want to scale it up as aggressively as possible, like you're telling me you're going to pit some financial analyst against John Gray.
I'm going to go with John Gray.
Yeah. Well, you know, I mean, maybe a second on just like kind of one of the fundamental building blocks of how we have expanded the way we have and how we use debt because I think that's one of the misunderstood components of how you build or how we have built this company.
And so it is really important to understand that we, the way that we build the components is we go into the market.
Let's use Microsoft because we've used them, but there's lots of other clients
you could use, and they're totally interchangeable from the perspective that the structure is still
the same. We go to them and we say, hey, we've got access to this data center. They say, we need
compute. We say, okay, we're going to sign a contract. They sign a contract for five years.
We structure that contract in a way that we can go back out to the blackstones of the world
and we can borrow money from them to go ahead and build the infrastructure
to deliver to Microsoft.
Within the five years of the contracted period with Microsoft, we pay for the infrastructure,
we pay for the OPEX, we pay for the interest, and we earn an enormous margin on the infrastructure.
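As a rough illustration of the structure Intrator describes, here is the arithmetic with entirely hypothetical figures (contract size, capex, opex, and interest rate are all assumptions, not CoreWeave's actual terms):

```python
# Hypothetical 5-year take-or-pay contract used to borrow against.
# All dollar figures in $M and purely illustrative.
contract_years = 5
annual_revenue = 500.0        # assumed contracted payment per year
capex = 1200.0                # assumed borrowed amount to build the cluster
annual_opex = 60.0            # assumed operating cost per year
annual_interest_rate = 0.10   # assumed blended cost of debt

total_revenue = annual_revenue * contract_years
total_opex = annual_opex * contract_years
# Simple non-amortizing interest approximation, just for the sketch.
total_interest = capex * annual_interest_rate * contract_years

# Within the contract term: revenue covers capex, opex, and interest,
# with a margin left over.
margin = total_revenue - capex - total_opex - total_interest
print(margin)                 # 2500 - 1200 - 300 - 600 = 400.0
```

The point of the structure is that if the assumed contract revenue exceeds capex plus opex plus interest inside the contracted term, the lender is repaid from a credit-worthy counterparty's committed payments rather than from speculative future demand.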
So, yes, there is debt.
We're not arguing that.
We believe fundamentally, when you build any type
of infrastructure at this scale, that is the correct way to go about doing it.
The examples run through history, whether you're talking about building a power plant,
building a distribution grid for electricity, whether you're talking about the telephone,
whether you're talking about the steam engine and railroads.
Like, you go throughout history, this is the tool that you use, right?
We didn't invent anything new here.
We just took a tried and true method and applied it to the specifics of depreciation associated
with this asset, of the obsolescence curve associated with this asset, and made the contours
so that it worked in an airtight manner, so that guys like John Gray, or, you know, Blackstone,
or BlackRock, or any of the big lenders could look at it and say, I understand how
they're going to underwrite this.
I understand the risk in this.
I understand that these guys are going to deliver compute to that balance sheet.
They're going to get paid back, and when they get paid back, we're going to get paid back.
So let's lend them the money.
And that's lost on the market.
They think we're running around with this, like, you know, incredible capacity to take on risk.
But that's a really low-risk approach.
Matter of fact, it's way lower risk than saying, hey, we're going to do it on equity,
because we're saving our equity for the long poles that you've got to invest in.
That's where you want to put your bullets.
You want to use the debt markets to deal with a depreciating asset.
It's the way it's done. It's the way it's been done throughout history.
Yeah, by the way, it's great that we're able to have this conversation.
This is what we want to do on the show: take this complex stuff, talk about what the reactions have been in public, speak with the principals, and actually get the story.
So thank you for talking it through with me. And on that note, let's continue.
The argument, I think, that would be made is not that Microsoft isn't good for the money.
The argument would be made that generative AI is still a developing category.
It hasn't really shown the ability to turn consistent profit.
And so the companies that are investing in a big way in it may one day wake up and say, you know, we can't, we don't really want to do that build out.
OpenAI, for instance, let's just use them as an example.
They have something like $1.4 trillion committed to spend on infrastructure.
I think OpenAI might be the only ones that believe that they'll actually spend that
$1.4 trillion, and maybe their investors.
So what do you think about that risk?
Because AI is new and not as predictable as you would have in a different category
financed by debt, therefore it is riskier, even if the credit rating of a company
like Microsoft is golden.
So, a couple of things on OpenAI, because they are the tip of the spear in many ways for artificial intelligence.
They have a franchise that has 800 million monthly users of their product, which is fully one-tenth, one out of every 10 human beings on the planet, logging on to OpenAI.
It's one of the fastest-growing tech products in history.
I use it all the time for everything.
I am addicted to it.
Yeah. And I don't even find it in like a bad addiction way. It's an amazing product. I won't argue with that.
So you've got this product that's out there. And then you have this $1.4 trillion, which I believe has been confirmed by everybody but OpenAI, who would actually probably have issues with that number in terms of how much they're spending, when they're going to spend it, which commitments are options, which are firm, all those kinds of things.
And so I just think it's a, you know, there's an incredible amount of people out there that are talking through how this is going to be done, when it's going to be done.
And I don't think that they necessarily have all the correct information.
That's number one.
Number two is that, you know, you listen to both Brian and I talk about how we think about credit.
We're pretty sophisticated how we think about credit.
We've built our entire careers long before we started this company thinking about risk management and credit.
OpenAI will be a percentage of our credit exposure, just like Microsoft will be a percentage of our credit exposure.
And the way that you manage credit against an unbelievable potential company, but a company that may not have a credit rating strong enough to support
their aspirations, or that may have to tone them down, is you just make them a
limited percentage of your overarching business, and you accept the risk on that while you
mitigate the risk using credit from other companies, like Meta, that we signed a $14 billion
contract with, like Microsoft.
I mean, it's just incredible companies.
And so you just think of them as how much investment grade exposure am I going to take,
how much non-investment-grade exposure am I going to take, and what's the correct ratio,
and how am I going to mitigate that over time? And that's the way we look at it.
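The exposure-ratio idea described here can be sketched as a simple portfolio check. Everything below is hypothetical: the customer names, contract sizes, and the internal limit are invented for illustration, not CoreWeave's actual book.

```python
# Sketch of managing credit mix: cap non-investment-grade customers at a
# fixed share of total contracted revenue. All names and figures invented.

contracts = {
    "Customer A (investment grade)":     14_000_000_000,
    "Customer B (investment grade)":     10_000_000_000,
    "Customer C (non-investment grade)":  6_000_000_000,
}

def nig_share(contracts):
    """Fraction of total contract value from non-investment-grade names."""
    total = sum(contracts.values())
    nig = sum(v for k, v in contracts.items() if "non-investment" in k)
    return nig / total

share = nig_share(contracts)
MAX_NIG_SHARE = 0.25  # assumed internal risk limit

print(f"non-IG exposure: {share:.0%}")
print("within limit" if share <= MAX_NIG_SHARE else "rebalance needed")
```

Under these made-up numbers, the riskier name is 20% of the book, inside the assumed 25% cap, so the rest of the portfolio's investment-grade credit cushions it.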
And what happens if one of these companies, over time, wants to walk away? Let's say
meta says, yeah, actually artificial intelligence, we can develop it much more efficiently,
or Microsoft says, yeah, AGI is actually a decade away, not three years away.
Yeah, so AGI being a decade away, six years away, it doesn't matter. Like the way,
you know, you were asking about, you know, how you run a company in this dynamic environment,
how you run a company that's going through this type of scaling. And I talk about this internally
to the company all the time. We need to be directionally correct. The world is incredibly
fluid. The world is incredibly dynamic. We are at the absolute bleeding edge of a new technology
that's redefining the world. You're not going to get everything right, but
directionally, you have to go ahead and build a company that's moving in the correct ways
to be able to take advantage of this super cycle that's going on. What do I think if meta says,
hey, we're going to, you know, we're not going to continue to invest? That is their prerogative
as a company. But that doesn't in any way mitigate their contractual obligation to us through
the term of the agreement that we went to Blackstone with and said, we're going to borrow money
because we have a firm contract with META.
That's not open to renegotiation.
They can't walk away.
The concept is, and, you know, there was a wave of this that took place, you know, about a year ago.
Microsoft is walking.
Like, what are you talking about?
This is a AAA company.
They don't walk away from anything.
If they make a contractual obligation, that's a contractual obligation.
Even the idea that they would walk away from it is deeply misleading to the market.
Okay. One more thing on debt, then we'll move on. There have been some analysts that have talked about CoreWeave borrowing more money because they spend more money than they take in, structurally. So they borrow to pay interest on the last loan. Any truth to that?
Why don't you talk about how these actual debt instruments are structured from like the box perspective and how the controls around these things are? Like that'll put this to bed.
Yeah. So let's just be done with this. There's a lot of analysts that have
a lot of opinions based on a deeply incomplete understanding of how these are built.
So maybe two seconds on it, and Brian, you can kind of keep me on the rails here.
I'm pushing you off the rails as much as I can, for the record.
Once again, going back to the contract, we did a contract with META.
When we did a contract with META, we go ahead and we signed the deal with META.
We borrow the money from a syndicate of lenders, and then we go and we buy the infrastructure to build that facility.
We run the facility.
When we run the facility, as we're delivering GPU capacity to META, META sends money, but it doesn't come to us.
It goes into what's called a box.
Money flows into the box, and then it goes through a waterfall.
The first thing it does is it pays off the OPEX associated with the power and the data center.
The second, after it's done paying that, the second thing it does is it pays the interest to the lenders.
The third thing it does is after it's paid all of the expenses, is it releases back up to our company.
Also principal.
And principal and interest, so that it completely amortizes within the five-year term
of the contract with meta.
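The waterfall just described can be sketched in a few lines of code. This is a hypothetical illustration of the mechanism, not CoreWeave's actual structure; the function name and all dollar figures are invented.

```python
# Hypothetical sketch of the "box" cash-flow waterfall described above.
# Figures are invented for illustration, not actual CoreWeave numbers.

def run_waterfall(monthly_revenue, opex, interest_due, principal_due):
    """Pay each tier in order; whatever remains flows up to the company."""
    box = monthly_revenue
    paid = {}
    for name, amount in [("opex", opex),
                         ("interest", interest_due),
                         ("principal", principal_due)]:
        paid[name] = min(box, amount)  # a tier is paid only after the ones above it
        box -= paid[name]
    paid["released_to_company"] = box
    return paid

flows = run_waterfall(monthly_revenue=10_000_000,
                      opex=2_000_000,
                      interest_due=1_500_000,
                      principal_due=4_000_000)
print(flows)
```

With these made-up numbers, opex and full debt service are covered first, and $2.5M of the $10M payment is released up to the company; if revenue ever fell short, the lenders' interest and principal would be paid before anything flowed up.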
Like, there's no, like, it's controlled by somebody else.
And the important piece of this is, like,
it's not that it's, hey, we just barely pay off the interest.
The coverage ratio in that box is excellent.
And it can be underwritten at a very narrow spread
based on the risk analysis of the most sophisticated
lenders in the world, right?
They're not lending us this at 22%.
They're lending this at, you know,
250%, excuse me, 250 basis points
over SOFR, right?
Which means basically they're looking at it as like,
this is a low risk transaction to get their money back.
It's not some crazy, you know, you know,
YOLO structure.
It's an unbelievably risk-mitigated structure
that's built to simply go ahead and allow us to build the infrastructure,
deliver it, and then take the revenue.
Now, when you're scaling a company at the rate we're scaling,
it tends to make sense that you're going to be investing all over the place.
And we are.
We're investing in data centers.
We're investing in software.
We're investing in people.
We're investing in, you know,
the companies that we're buying to help us reach up the software stack and provide more value.
We're doing all of those things, which is exactly what we should be doing right now as this space opens up.
Whenever we see an opportunity, we look at it against all the other opportunities are out there and say that one makes sense for us.
It drives the company forward.
The idea that you're at risk from the debt, I mean,
anytime you have debt, there is risk.
I'm not going to argue that point because you have to generate the revenue.
But what are you talking about?
You're talking about operational risk on the GPUs that are in the box.
Right.
You know, one of the things for us and why our spread on that interest rate is compressed
over the last two years is we've demonstrated incredible capacity and capability of delivering
that infrastructure, right?
The first time we did one of these debt syndicates, I got paraded around the whole world
and had to sit with every single underwriter asking me
questions about, like, what are the doors to get into the data center? Like, what is the
floor made out of? Like, okay, guys. There was so much risk around
our ability to operationalize it. That has been put to bed now, where everyone knows that we can do
this and we can do it at scale, right? So our cost of capital is significantly compressed.
I mean, it went from, you know, what was it? SOFR plus 800 to? No, it was SOFR plus 1350, down
to SOFR plus 400, right? Once again, for those who don't understand what that means is:
The higher the interest rate, the higher the risk.
And what you're seeing is the lending market understand that we have the capacity to deliver this infrastructure and that they are willing to lend us money at increasingly lower rates because they look at it as a lower risk transaction.
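To put that spread compression in perspective, here is a quick sketch of what moving from SOFR plus 1350 down to SOFR plus 400 is worth per year. The facility size and SOFR level are invented numbers, used only to show the arithmetic.

```python
# Illustrative sketch of what spread compression is worth, using the
# spreads quoted above (SOFR + 1350 bps down to SOFR + 400 bps).
# The facility size and SOFR level are invented assumptions.

def annual_interest(principal, sofr, spread_bps):
    # all-in rate = benchmark + spread; 100 bps = 1 percentage point
    return principal * (sofr + spread_bps / 10_000)

principal = 2_000_000_000   # hypothetical facility size
sofr = 0.0430               # assumed SOFR level

old_cost = annual_interest(principal, sofr, 1350)
new_cost = annual_interest(principal, sofr, 400)
print(f"annual savings: ${old_cost - new_cost:,.0f}")
```

On these assumptions, 950 basis points of compression is worth $190M a year on a $2B facility, which is why the speakers treat the shrinking spread as the market's verdict on their execution risk.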
Okay.
I have so many more questions and we have only 15 or 20 minutes left.
So let's take a quick break and come back and talk about a few things that I find really fascinating.
That is the depreciation on these AI chips, maybe a little bit about the financing structures,
and then power.
I think we need to talk about power.
So let's do that when we're back right after this.
And we're back here on Big Technology Podcast with two-thirds of the founding team of CoreWeave.
Michael Intrator is here.
He's the CoreWeave CEO, and Brian Venturo is here.
He's the CoreWeave CSO, chief strategy officer.
We talked previously or in the first half about how these
chips run hot. So let's just talk a little bit about the life cycle of these chips. I'm trying to
figure this out. There are two differing opinions. One is that a GPU like the Nvidia H100 or the
GB200 will burn as hot as it possibly can for, like, two or three years and then effectively be
useless, like a meltdown. It's like the life cycle of a car compressed into a couple of years.
The other side of it is that, no, the GPUs can last, but they get less valuable over time because more powerful GPUs come out that are multiples in terms of their ability to do AI calculations compared to previous generations.
So can we just start with like the basic physics of this?
How long did these things last?
So, oh, I'm taking this one.
You're out.
You take the physics.
I'll take the other side.
So last year is when we saw, let's call it, the hyperscalers that were around in the 2010s, so Amazon, Microsoft, and Google, finally retire their Nvidia K80 fleets.
And the K80 was a GPU that was introduced in 2014.
So it was active in their clouds, almost fully utilized for 10 years, right?
And the number of, you know, of changes in architecture and efficiency advancement and performance advancement over those 10 years was massive.
You know, just last week, we entered a multi-year contract to renew Nvidia A100s, which are the GPUs that were introduced in 2021, right?
So we're already going beyond the five-year contract life for GPUs that came out, you know, four years ago.
So the idea that these things burn out in two or three years,
like, it's kind of bunk, right?
And from a physical perspective, right,
within three years, these things are all still under warranty.
So if they break, they get replaced, right?
But from a, like, this is not they run hot.
These things are designed to run hot.
GPUs that we had deployed in 2019 are still running,
still have customers on them.
You know, some of it is customers that are deploying Grace Blackwell with us today.
They're going to use Blackwell for their most frontier or bleeding edge use cases.
They're going to train their biggest models.
They're going to do the things that they need the new.
It's Nvidia's latest chip.
Yeah, it's Nvidia's latest chip.
They're going to do the things that they need the most firepower to do, and they're going to
run their inference on hoppers or they're going to run their inference on Amper, the A100s, right?
Or they're going to run different steps of their pipeline on A100s.
Or they're going to run parts of their pipeline on CPU compute, right?
Right. There's always going to be a use for these different levels of compute infrastructure.
It's just where is the economic value there?
Right. It's not a useful life question. It's where is the economic value in those, right, in that time.
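The "economic value, not useful life" framing can be sketched as a toy model: the chip keeps running, but the market rate it commands declines as newer generations arrive. The hourly rate and decay figure below are invented purely for illustration; they are not CoreWeave pricing.

```python
# Toy model of the point above: a GPU's value is the revenue it can still
# earn over time, not a fixed burnout date. All figures are invented.

def cumulative_revenue(initial_hourly_rate, annual_decay, years):
    """Sum yearly revenue for a fully utilized GPU whose market rate
    declines as newer generations arrive."""
    hours_per_year = 24 * 365
    total = 0.0
    rate = initial_hourly_rate
    for _ in range(years):
        total += rate * hours_per_year
        rate *= (1 - annual_decay)  # older silicon rents for less each year...
    return total                    # ...but it keeps earning, not dropping to zero

# A chip renting at $2/hr, losing 20% of its rate per year, still earns
# meaningful revenue in years four through six.
print(f"6-year revenue: ${cumulative_revenue(2.0, 0.20, 6):,.0f}")
```

The shape of that decay curve is exactly what the term contracts price in: if the buyer commits for five years at a known rate, the decline inside those five years is the buyer's problem, not the operator's.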
And this is where, this is where the questions start to build up. Because, so, the chips run.
We agree on that one. Now I've been taught, so thank you.
The chips. The chips run.
That one off the tip.
And so now the question is when it comes to power, right?
Hold on. Hold on.
I just want to finish this question, and you can answer the last one, but I just want to finish this one, right?
So the question...
No, no, no, no, no. I really do want to hear.
But let me just put this out there, and then you can answer them whichever way you want.
Okay, the old generations of Nvidia GPUs, they're much less powerful than the newest generations.
There's the Grace Blackwell that's out now.
There's a Vera Rubin that's coming out.
And the argument is that these newer chips, even if the H100, the Hopper, can continue running,
are so much more powerful that the value drops, right? Because those H100s are being sold
at $20,000, $30,000 a pop.
The value of those chips is going to be much less because of the power of the newer generations.
And then if you think about it again, if these companies move from training to inference, right,
if, for instance, let's say hypothetically, there's a diminishing return to training a bigger model, then those bigger, those more powerful chips can be used to run inference.
And then a company like CoreWeave, which has hundreds of thousands of the older generation of chips,
is faced with a depreciation problem compared to the most powerful ones.
You got it. So let's go through this a couple of different ways.
Okay.
All right.
I feel like the depreciation narrative is being spun up by folks.
Michael Burry.
Yeah, like, people that don't understand the space.
He's never been in a data center.
So, so like, my theory here is, is it's being spun up by a bunch of folks who couldn't spell GPU two years ago.
And now they are out there as experts on how it actually works.
So let's actually go through the different pieces of it.
The most important tool that I have for understanding what the depreciation curve or the
obsolescence curve of compute is, is not what I think, right?
It's not what, you know, some historic short thinks.
It's what are the buyers, the most sophisticated companies in the world willing to pay for today.
And when they come to me and they put in a contract for a five-year deal or a six-year deal, in what world do I not think that they, who are the consumers
of this, understand that there are new, more powerful chips coming out? Of course they do. They understand
it, but they also understand what their various use cases are, and they are saying to themselves,
I'm going to buy this because I'm going to need it today, I'm going to need it in three years,
and I'm going to need it in five years, and what the use is within my system will change. But
it didn't become useless, it hasn't become obsolete, right? And they know the new stuff's coming,
yet they're still buying it because they know better than someone who doesn't know anything
about how compute is used. My opinions around depreciation are informed by the only
entities that get to vote in my world, which are the folks that are paying for the compute over
time. Those are the guys to get to vote. Everybody else is just looking and guessing, right?
That's number one. Number two is Brian kind of made a point that we just had somebody come back
and re-contract for a term deal, the H100s. A100s. No, the H100s, at 95% of the value
of what they were originally sold for. Once again, not showing this catastrophic depreciation
curve that, you know, has been voiced out there. I just, once again, like, for me, it's about the
data because I need to make the decision to buy this infrastructure or not to buy this
infrastructure. And so I've got to kind of look through the noise and decide, you know, are the big
hyperscalers, are the big labs, are the big buyers of this infrastructure who are looking at this
saying this stuff will be useful for us for the next five years. Let's go out and buy it.
Or should I go and turn to somebody who's never really understood how the cloud works, what a
GPU is, what are the different uses as it moves through from the most cutting edge models
to other uses within their training as they go all the way down through inference to simple
or smaller models? And I think that's the way you've got to look at this thing. It's like,
what are you talking about, man? If Microsoft and meta and the other big buyers are coming in and buying
for five and six years, I don't really think that anybody else really should or gets to have
what I would consider to be an informed opinion on depreciation.
And since I'm selling on term contracts, specifically to insulate my company from the depreciation
curve, right, I know how much I'm going to make because I've sold it to Meta for five years
every hour of every day, and they're going to pay for it every hour of every day,
What the curve looks like inside of that five years? That's already been priced into the deal I did with them.
Sorry, go ahead.
Sorry. Well, I was trying to interrupt you there, because, in addition to the H100s, which came out in 2023, right, we signed a term contract for the A100s at, like, 95% of the original price range, on term, last week or two weeks ago.
Yeah. Like, that's crazy. Those GPUs are already five years old, and that useful life is there.
Yeah.
And everyone is saying, oh, it's not useful.
Like, they have no idea.
They don't actually have the data.
We're sitting on all this data.
We talk to every single one of these customers.
And, you know, one of the interesting things that's happened over the past years,
everyone was saying, well, where are all the enterprises last year?