Big Technology Podcast - Betting on the Future of AI – With Sarah Guo
Episode Date: August 23, 2023. Sarah Guo is the founder of VC firm Conviction and co-host of the "No Priors" podcast. She joins Big Technology to talk about generative AI, where the opportunity for new investment lies, and whether the fears surrounding AI are grounded in reality. She gives us unique insight into what the next round of AI tools might look like, and what the impact on our jobs and lives will be as this tech becomes more accessible and affordable. Stay tuned for the second half where we run down how each Big Tech company is (or isn't) taking advantage of this moment.
Transcript
A leading investor who is actually betting on AI startups tells us where she sees opportunity
and how this promising technology will look inside the next generation of products.
All that and more coming up right after this.
LinkedIn Presents.
Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond.
Today we're going to talk about generative AI.
Where the opportunity is for the next generation of startups, whether this is
actually investable or not and plenty more, including, are these fears of AI wiping out the world
actually grounded in any kind of reality? All right, we're joined here by an amazing guest. Sarah Guo is here. She's the founder of the VC firm Conviction, and she's the co-host of the No
Priors podcast. Sarah, welcome to the show. Alex, thanks for having me. Oh, it's great to have you
here. So a little background for listeners. We were both at CEO Summit that CNBC put on a few months ago.
And I thought you were just far and away the best speaker there.
Just like speaking to this like really important new mode of technology with like actual examples and like, you know, substance as opposed to a lot of the fluff that I've seen out there before.
So it was like, I basically ran after you.
I was like Sarah, like we kind of know each other from Twitter, but please come on the show.
So it's great to have you here.
Yeah, I appreciate that.
It was a fun event.
So let's start with the possibilities and which fields this technology might end up changing.
One area that you're looking at is the creative field.
I'm curious how it plays there.
Yeah, I think the creative field will change both from, like, a tools perspective
and then also from a consumer experiences and social perspective.
So the tools one is pretty direct in that everybody recognizes that these language models
can generate like written content that is at least an interesting starting point.
And I think like DALL-E 2 and Midjourney and like other products have really shown image generation rapidly increase in quality.
But if you think about generating audio and video and imagery that, like, complies with brand guidelines, that you could use in advertising, that you have more control over, that even, you know, surpasses some of the basic research constraints today around resolution and length, like, we're going to see a lot more of that. And I think having more
powerful tools is going to make anybody like capable of creating much richer content. So like,
you know, you could have your, not just your podcast, but an entire movie studio someday. And so
I think that's something we think will both be really valuable and be just like massively
changing for society. Because there's more, there's more creatives than there are people who
know how to, for example, use like Premiere, right?
So do you remember the photo app Prisma?
It basically allowed you to put like an AI layered painting style on top of any photo.
But it really didn't take off because I think people respect like the amount of work that
artists and creatives put into their creations.
And that's sort of what makes it unique.
And AI has a way of homogenizing that stuff.
So eventually every Prisma photo looked the same.
Why isn't that going to happen again, even though we're talking about much more ambitious projects like AI-assisted movies?
Yeah. So I think, you know, in any creative field, you have tools that artists use to create something new, right?
And, like, I have a lot of respect for the Prisma founders, but I think if something is a specific filter where the aesthetic is defined by, you know, a product team or a founding team, there's only so much creative variation you can have, right? And so I think we're going
to get to a point with these tools where it's not so product defined and there's room for
expression of creativity. And so I think we're going to be like amazed by the number of things
people create and it's just going to become part of the artistic suite. I definitely don't think
it replaces like creative direction and artists, video makers, any of it. Yeah, it would be cool for
instance, to just have like the podcast production side of things taken over by AI so I don't
have to spend time editing, you know, and then just being able to interview more or spend more
time on prep. That would be ideal. I feel that way too. But I think one of the things is like,
you know, you can, you can have taste and you can have an opinion, but not have the technical
skills, right? And so I'm sure you have, like, podcasts where you're like, oh, well, that's not how I want it. Especially if somebody gives you the tools today, there's no capability. But I want it to look this way, right?
I want it to sound this way.
And so I think, given people have those opinions,
they won't naturally all aggregate to the same place, right?
Take an example, like, I mean,
maybe something even analogous to the thing you described with Prisma.
Instagram has inbuilt filters.
People still have a huge variation of the types of things
that they want to capture in their photos
and the way they frame and, like, all of that creativity,
including the use of like much more high-resolution tools
has not gone away.
Yeah, well, we could do a full episode on, basically, the fact that filters have faded away on Instagram, again going back to this idea of, like, well, they made everything look good, but in the same way. Anyway, we could spend all day talking about this. Another area you're looking at is, like, assistive agents, which I think is really interesting, especially because, you know, I think people have this idea of an assistive agent as, you know, just, like, buying movie tickets for you, booking hotels. But it actually can be industry specific as well. Do you want to say a little bit more about that?
Yeah, I think you are going to see, like, as a sort of first generation,
assistant products for lots of different workflows, both consumer and enterprise, right? So today,
like, the experience that most people have had is an assistant experience, but, like, where it's
just giving you information back, ChatGPT, right? And that's, first of all, not up to date, and second, not personalized. And then third, it's not doing anything for you.
But from the consumer perspective, like your ability to do things on the web or take action in real life, like that's a big difference, right?
You have an executive assistant, somebody managing your life, like they can be a lot more useful to you.
And so the idea that you have agents that can not just retrieve information, but be like, okay, they're personalized, right? These are Alex's priorities. These are things he hasn't even thought about, but, based on what I know, would help him.
I can take, I can write code.
I can take actions on the web.
I can interact with other things.
Like, I think we're at the very beginning of, like, what a consumer assistant looks like.
And, you know, we're investors in Inflection. I think they're taking, you know, a big step on the personalization front there.
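To make the "agent" idea a bit more concrete for technically minded listeners, here is a minimal, hypothetical sketch of the loop such an assistant runs: the model either answers or asks for a tool, the loop executes the tool, and the result is fed back in. Nothing here is Inflection's or any other product's actual API; the `call_model` stub and tool names like `search_web` and `book_hotel` are invented purely for illustration.

```python
# Hypothetical sketch of an assistant "agent" loop: the model proposes either a
# final answer or a tool call; the loop runs the tool and feeds the result back.
from typing import Callable

def call_model(conversation: list[dict]) -> dict:
    """Stand-in for a real LLM call. This canned version asks for one web
    search, then answers, so the example runs end to end."""
    if not any(msg["role"] == "tool" for msg in conversation):
        return {"type": "tool", "name": "search_web", "args": {"query": "hotels"}}
    return {"type": "answer", "text": "Here are a few hotels that fit your budget."}

# Invented tools the assistant is allowed to use.
TOOLS: dict[str, Callable[..., str]] = {
    "search_web": lambda query: f"(search results for {query!r})",
    "book_hotel": lambda city, budget: f"(booked a hotel in {city} under {budget})",
}

def run_agent(user_request: str, profile: dict, max_steps: int = 5) -> str:
    # Personalization: the user's priorities ride along as context on every call.
    conversation = [
        {"role": "system", "content": f"User profile: {profile}"},
        {"role": "user", "content": user_request},
    ]
    for _ in range(max_steps):
        step = call_model(conversation)
        if step["type"] == "answer":                   # model is done
            return step["text"]
        result = TOOLS[step["name"]](**step["args"])   # take the requested action
        conversation.append({"role": "tool", "content": result})
    return "Stopped after too many steps."

print(run_agent("Book me a hotel for next week", {"budget": "$200/night"}))
```

A real system would swap `call_model` for an actual model API and add guardrails around which tools the assistant may invoke.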
And on the enterprise side, this, I think, is actually going to be a little bit slower because
you really need to, you need to fit these intelligences to workflows, right?
And so we've made some investments here already, but you kind of need these two things to meet
like somebody with deep understanding of the domain, right, be that podcasting or legal or
enterprise analytics. We're investors in Harvey, which is like a legal copilot, or Seek, you know, talk to your enterprise analytics data. And I think there's a lot of nuance in, you know,
how you actually make somebody with that role in a company more productive.
Right. So first of all, I cannot wait to get to the point where I basically type a sentence
and, hey, I'm going on vacation for this week, book some, like, hotels for me. And this is my price range and this is what I want. I mean,
we've been talking about that for a while. But it does finally seem like that's coming into
focus in a way that's like, oh, this might actually happen as opposed to like me typing that
request. And there's a bunch of humans on the other side doing that. But the enterprise thing to me
is really fascinating. I mean, Harvey, I spoke to a couple, to at least one law firm that has that in action. First of all, like, this is the case that all of these doomsayers have been talking about: a bot that can scour documents and summarize them, and you can ask questions to it, and it can respond to you about what you're looking at. And they said, well, paralegals are gone and junior associates are going right to the breadline. We just wrote about this in the Boston Globe, actually, and that's going to be syndicated on Big Technology. But it is just so interesting that, like, the firms that have this implemented (a) find it extremely useful already and (b) they haven't fired a single person because of it.
Yeah, I think that at least coincides with my understanding
of like what happens historically when you make people more efficient, dramatically more
efficient, right? All markets are dynamic. If, you know, there was so much demand for the law when the law cost $1,000 an hour, like, if you reduce the cost of producing those legal tasks to $100 an hour, I think people just do more law, right? And so I actually think,
like, yes, like tools like Harvey are extremely capable and they will do a lot of the work that a
first year associate or a paralegal does. But I think what happens is those people do different
things for law firms and you take on tasks of scales and costs that you couldn't as a law firm
before, right? And so one of the examples I like with Harvey is, you know, they'll have customers
whose end clients want to look at 25,000 sales contracts, but that's not a tenable thing if,
you know, junior people at the firm cost $800 an hour, right? And so either that's like,
oh, like, we're actually going to do deeper M&A due diligence, or we're going to do things we never did before. And, like, the end client gets more value for the same dollar. I think that is much more
likely and more in line with everything we've seen so far in the broad arc of technology
history. But I think it's unavoidable and true that it may cause like short term dislocations
in what people are doing. Right. And it's so interesting because a company like Harvey like at
once makes you think, well, these generalized bots aren't going to be able to handle everything
because Harvey trains on a specific data set on legal files. It might even be personalized law firm
to law firm so you can have, you know, upload or have it train on your discovery and then be
able to go ahead and take a look at it. But on the other hand, it does seem like a capability
that like a more generalized bot should be able to handle just maybe not now, but eventually
like, you just train it on the specific thing. And then it doesn't have to be, like, a legal bot or a medical bot, it just knows. So what's your thought about, like, whether these specialized
bots with specialized data sets are actually that defensible? Yeah, I'm a pretty strong believer in this, right?
My money is where my mouth is.
But I think, like, one of the things that you discover is, like, the tasks are not as straightforward.
And the data is not necessarily in the Internet data set, right?
Like, if you're looking at, like, the Vault top 10 law firms, where, you know, you're dealing with litigation and you want the very best lawyers in the world. Like, unfortunately, the content of how to be the best lawyer in the world to deal with your case is not deeply represented on the internet, right?
And so when you say, like, eventually it seems like these bots may train against
this data, I'd say, one, it doesn't exist, right?
Companies have the opportunity to go collect a lot of that expertise in different domains.
And, like, the firms who are doing this will, like, sort of keep that, you know, for their
end customers, I think.
And two, it's also, like, you know, one of the ways these models work is it's next token prediction today, right? It's like, what is the most likely output given all of the data I have seen, with some instruction control?
And you don't want what like the internet says is the next step in litigation.
You want what the best lawyer in the world says, right?
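For listeners who want to see what "next token prediction" looks like mechanically, here is a minimal sketch using GPT-2 through the Hugging Face `transformers` library, purely as an illustration (it is obviously not the model, data, or product being discussed): at each step the model scores every candidate next token given the text so far, and greedy decoding just appends the highest-scoring one.

```python
# Minimal illustration of next-token prediction with greedy decoding.
# GPT-2 is used only because it is small and public; nothing here reflects
# the proprietary models or legal data discussed above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The next step in the litigation is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits   # scores for every token in the vocabulary
        next_id = logits[0, -1].argmax()   # greedy: take the single most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

The point in the conversation is that "most likely" is defined entirely by the training distribution: if expert legal reasoning is not in that data, no amount of decoding cleverness will surface it.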
So then what are these naysayers saying?
I mean, there are smart investors out there saying that seed and early stage
AI companies are not investable. Can you, like, make their case for them? I mean, I'm curious what you would think is going through their head. Yeah. They should call me. I am interested, because, like, I made a big bet on exactly the opposite. But let me, like, try to. A lot of your own money too. Yes. Yes. I am an investor in the firm. I'm hoping this works. I think the fair version of
the argument is general models will get only more capable over time. And like, you just need to
add more data sets, change the training mix, condition the models afterward to a particular use
case. And all the value is in the model. There's no product to be built on top. And there's, like, one company that rules it all. Right. Who has the foundation model. Right. And everything else is a thin wrapper, I believe the term is.
They love saying that.
Yeah, I don't think that's true at all.
I definitely think that, like, if you just look at, like, volume of companies being started,
I can see where this point of view comes from because I'm like, man, there are hundreds of
companies doing the same idea that is a thin wrapper idea.
So, like, I guess my view is, like, the viewpoint is not wrong because there are a lot of
entrepreneurs that are perhaps not being sufficiently thoughtful about what it will take to succeed.
Because if you are producing a product surface that is nothing but a chat box and you don't own the model, like, that's probably not a good path. If there's very little built IP or, like, depth to the idea.
And should investors be able to, like, have faith in themselves to be able to see that and actually say, okay, this is not that, or this is that?
I do, right?
But I think, like, this is why I'm, like, trying to, like, genuinely listen to the argument, where I'm like,
ah, from a numbers perspective,
there's a lot of garbage, right?
But I think the thing that will be different is like,
if you actually understand how these models work,
if you're collecting unique data,
and I can give you a few examples here of like what I think has been interesting.
And like I don't actually think,
I think natural language interfaces are incredibly powerful,
but I don't think it's the right interface for everything, right?
So, like, the end of software that people are saying? Oh, you know, software is just a bot?
No, you're not really a believer in that, are you?
Oh, my gosh.
I think we're going to have a lot more software because it's going to get cheaper to generate, right?
Right.
But it's not like we're just talking to a bot instead of using Excel.
Is that your perspective?
No.
I think that there are general, like, natural language interfaces that people will use and that are hugely valuable.
But, man, if people are not using core relational database-based ERP systems like
SAP five years from now, like, I am shocked. I will bet my entire fund that that exists
five years from now. Maybe, like, two examples of why I think the, uh, the, like, wrapper theory is not quite right for at least, like, some exceptional companies. We have a security company in the portfolio that is unannounced. And it's a former, like, you know, well-cited OpenAI researcher that knows what they're talking about. And, uh,
His original thesis was like, okay, I'm going to go partner with some existing security companies to collect the data I need to do the fine-tuning on the model I want to do for a specific task, right?
And he goes to all the companies that should have the data or are most likely to have the data, the incumbents, and they don't have the data, right?
So, like, you know, collecting data actually has cost.
You have to store it.
You probably want to try to get some value out of it. Otherwise, why are you collecting it? And it has, like, security and compliance implications. And so a lot of the data that you might use to actually enable some of these experiences and make models better, it has not been collected. So I think people, like, totally over-index on the idea that the incumbents have it. And so if it hasn't been collected, like, where is the foundation
model company going to get that data either, right? Like somebody has to collect it to create
the model. It's like you have to do the work. You have to do the work and it doesn't exist yet, right?
Which is expensive or hard. But if you are clever about it, like, there's a reason that that works, that there is an opportunity.
Until your agent bot can do the work for you. So that's a great point, right? Which is that there is a pretty high labor load when it comes to creating any type of company, and it's not like you can have, like, one company just do all of them right away.
Yeah, I actually think this is the more important point. Like, you just said the thing that's more important than what I said, which is, like, if you look at all the parts of building a software company and what's hard,
understanding customers is hard, building the product that's really valuable
is hard, distributing the product is hard, having it fit into customer workflows and making
them happy and serving them is hard. And, like, I would ask you, like, with love for foundation models and, like, friends at the big research labs, we've invested in these companies, like, they don't have all of that done, right? It is unlikely that one company goes and understands
all of the workflows in the world.
Exactly.
All right, Sarah, one minute before we're going to go to break,
and then after the break,
we're going to talk a little bit about how every big tech company stacks up in this battle.
But, you know, I'm hearing a lot about how, like, this type of technology is the end of the world.
It's good, but I'm not running for my life from ChatGPT.
What do you make of all of it?
Yeah, I struggle to make the logical jump from we have models that now do next token prediction
on language tokens, or we have diffusion models that can generate images, to, like... The loose logic, just to make sure we are understanding a reasonable version of it, is that these models are so powerful that they give single individuals the ability to do things that are very risky that used to be very expensive, right? So this is, like, a computer virus that takes a grid down, or a biological virus that, like, kills people, that used to be very difficult, like, very difficult to design. And then, like, even one step further, the logic is something like, you know, one engineer got over-excited and optimized the model against, like, an outcome of generating paperclips. And then, like, we all, you know, we're in resource competition with the paperclip-generating AI, right? And, like, I can't draw any sort of expected line from here to there, like, where we would not understand. Like, I totally believe that we should be careful about what we can do with these models and have a democratic process around it. And, like, I'm trying, you know, I am deeply committed to AI, like, trying to think
exponentially. But I think that this is a mind virus. I think that it's attractive to think about
these risks because it's interesting. And I'm a little more cynical that, you know, calls for
immediate regulation of a very nascent technology that has horizontal applications,
I think of that as more of a competitive move.
Yeah, fascinating stuff.
Let's go through the big tech companies when we come back right after the break.
Hey, everyone.
Let me tell you about The Hustle Daily Show, a podcast filled with business, tech news,
and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email for its irreverent and
informative takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show, where their team of writers break down the biggest business headlines in 15 minutes or less and explain why you should care about them.
So, search for The Hustle Daily Show in your favorite podcast app, like the one you're using right now.
And we're back here on Big Technology Podcast with Sarah Guo. She is the founder of Conviction, a VC firm that's actually investing in AI startups in their early stages. Believe it or not, it exists. She's also the co-host of the No Priors podcast. Can we go
through quickly, like, do a quick lightning round of like what you think of like the current
efforts that the big tech companies are making in this field? You up for that? Yeah, sure.
I guess I'll be honest. I don't know that I'm like super up to date informed about it.
So I focus more on early stage innovation. Let's riff on it and see what happens. So Apple is rumored to be making an Apple GPT. What do you think about that? I think it's no secret that the experiences around the Siri assistant have really
lagged in terms of what the expectation is now that we have this series of things that are really
amazing from scaled up LLMs, right? And Apple has quite literally the world's best platform
for creating personalized assistant experiences. They have all of your data, they have user
trust and they have, you know, chips on device that are very capable of running, you know,
increasingly efficient, highly capable models. So I would be really excited to see
upgrades in this experience come. Amazon has talked about how they want to be basically
the distribution point for foundational models for companies that want to build upon them
through AWS, and that's going to be their play.
Is that smart?
I think it really relates to the sort of overall Amazon attitude toward open source, right?
Like Amazon, you know, a friend of mine who's an executive at AWS used to say, like,
oh, we love open source, right?
Like, you know, somebody else builds a product and we get to monetize it.
Exactly.
Yeah, we get to make the money from the compute.
Yeah, so I think the knock on Amazon would be that, of the clouds, you know, Google has a research team and a strategy that is very strong here, right? They've got the DeepMind, now the combined Google Brain and DeepMind lab, and, you know, a big suite of generative products. Microsoft has, as you would well know, like, executed extraordinarily well on a partner strategy with OpenAI, and in terms of getting itself positioned as a partner for enterprises adopting AI. And so, like, lots of companies use Azure for AI in a way that it was not as competitive, like, a year ago in the cloud space. Amazon doesn't have a leading
research lab in this domain, right? So I think one part of the strategy is very much, like,
they don't have a choice right now, right? They would need to acquire the talent or develop more
of a strength here. And the other is it fits with their strategy of, like, and I don't think this is wrong, I'm very excited for an increasing number of open source models that are really, really capable for lots of applications. But if you think that's true, or you can work with all of the partners, like, having Amazon be just the infrastructure target for these capabilities is a great play. Yeah. Okay. That's a great point about the research houses. So let's go
into those. Google is interesting. It seems it's had a very weird year. We're going to talk about this
a lot this month on the show, but like initially seemed caught off guard. Now it's marshalling all
of its resources and starting to catch up. It has the products. It has the team. It seems like
it has the will. But it's still kind of like when I think about Google, I'm just like, hmm. So what do you
think? Yeah, I will not pretend to understand the things that hold very large companies back, right? And I have a ton of very good friends that are amazing leaders
that, you know, work at Google today. And I see researchers that work at Google today or used to
work at Google. So it's not like, if they struggle, it will not be for lack of amazing talent and
like resources or even intellectual understanding around like the importance of AI. I think it's just
they have some, you know, I don't think it would be secret that they have organizational issues
around speed.
Yeah.
And so, like, it's their game to lose, in my opinion.
Okay.
Meta's Lama strategy is very interesting.
How do you think that positions them in terms of them building these foundational models
and open sourcing them?
And are you excited about that as a startup investor?
I'm super excited about it.
Yeah.
I think it's great that, like, Meta is an amazing technical organization that has resources and a founder who, like, commits to a point of view, right? And so, like, Mark joining the, like, open source LLM party is, like, a huge boon to the ecosystem.
The struggle that they had, I think it is no secret, is really on the regulatory side, right?
Like if we release models to the world that are really high quality from our research labs,
are we going to be responsible for all of the ways in which people use them? Are we going to be liable?
And like, what are the policy and risk ramifications? The positive for them, of course, is, you know, if they are a big benefactor and, like, they have the power of the open source ecosystem, the research and engineering community, then, like, you know, Llama could become, like, an extremely well understood, very well-supported, dominant technology that they get to leverage internally, right? This is
sort of the reason people open-source core components at these large internet companies. They
want them to, like, be durable, get more robust, be useful internally. And I think it's also
amazing for their sort of strategic positioning and, like, recruiting story, too. So I think it's a great move. I'm glad they've taken that risk, because it's hard to take that risk at a company of that scale. Yeah, and then briefly, Microsoft. It seemed like they owned the world
for a moment. But now everybody else is joining the game. Bing is still Bing. You know, it was
on the top of the world for just a minute, but now it's returned to being Bing and has not really
increased its share at all. And of course, they have it in Office. They're offering OpenAI through Azure. What is your quick take on Microsoft right now? Yeah, I think that, I understand it's interesting to think about the search wars, but, um, I'd say, you know, I think people... It makes you feel smart. No, I'm kidding. No, I'm talking about, sorry, I'm just comparing it to the, um, existential risk thought exercise. Oh, I mean, like, I was very lucky to work with some amazing people at a, um, a search competitor called Neeva. Uh, so, you know,
Of course.
Sridhar is actually going to be on in a few weeks.
Oh, awesome.
Right.
Yeah, they're part of Snowflake today.
But I'd say, like, I like to think about the search wars so much, like, I worked on a company there.
But I'd say, I don't think people understand how entrenched Google is in search, right?
If you look at the data, the tales of Google's demise are greatly exaggerated.
Yeah.
They actually, they have gained share, right?
And so I would say, Google as a company has done this amazing job of creating, like, user behavior of, you know, it's a verb.
It's a habit.
It's incredibly hard to change, you know, a user's point of view on what search is.
And they have all these, you know, partnership agreements from a distribution perspective, like with Apple, that, like, are very difficult to unseat, very expensive to unseat. And so I am like, you know, if I was a
betting woman, and I am, it's my job, I bet on Google in the search wars. But I don't think... Like, I think Microsoft has so much opportunity here across all of its businesses. You can already
see it show up in the financial results of, you know, how they keep the entire product suite relevant
and keep it differentiated.
And, you know, the big initiative that actually has been quite successful for the last, you know, five-plus years is Azure, right? And I think, like, you know, Bing is not an important revenue driver for Microsoft today.
Cloud is.
Right.
And so I think their, you know, their positioning of the company as an AI leader with a number of partners, right? They are equal opportunity, even if they have, you know, a really close partnership with OpenAI. I think that is a great strategic move. And I think they're going to really
benefit from it on the, on the productivity and cloud side. Yeah. I got to ask you about
Nvidia before we go. And, do you have, can you go to five past the hour? Because I want to ask you about the room temperature superconductor. Because I saw you were talking about that.
Yeah. I could go a few minutes past.
I will not pretend to be a material scientist.
But I want to hear what, yeah.
I think that, like, it's important to hear that. Like, all right. So we're going to add that caveat. Let's do the superconductor first and then go to Nvidia. What's your take on, like, how real this could be, and whether we could really see some innovation if it actually is legit?
Yeah, I mean, I'd say, like, we have to reproduce the experiments.
And I am not that physicist, but I really look forward to, um, seeing the real material scientists actually, like, reproduce the data and validate that this is real. Room temperature superconductors have been a dream for scientists and engineers for decades, and I think, like, the biggest reason to think about their impact is that large amounts of energy are currently lost due to the resistance of wires in the electrical grid. And so, like, if you think about all the transmission loss: if the technology is real, we could drastically reduce those losses and, like, get to just more efficient power grids, potentially cheaper electricity. So I think that is the, like, biggest near-term climate impact that people are excited about.
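To put a rough, illustrative number on the transmission-loss point: resistive loss in a line goes as P_loss = I²R, so driving resistance toward zero drives the loss toward zero. The figures below are hypothetical, chosen only to show the scaling, and are not taken from the conversation.

```python
# Back-of-the-envelope resistive loss: P_loss = I^2 * R, with I = P_delivered / V.
# All numbers are hypothetical and for illustration only.

def line_loss_watts(power_delivered_w: float, voltage_v: float, resistance_ohms: float) -> float:
    current_a = power_delivered_w / voltage_v   # I = P / V
    return current_a ** 2 * resistance_ohms     # P_loss = I^2 * R

delivered = 100e6   # 100 MW delivered
voltage = 345e3     # 345 kV transmission voltage
print(f"5-ohm line loses about {line_loss_watts(delivered, voltage, 5.0) / 1e3:.0f} kW")  # ~420 kW
print(f"Zero-resistance line loses {line_loss_watts(delivered, voltage, 0.0):.0f} W")     # 0 W
```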
Yeah. Okay. And, like, part of, you know, why don't we just explain, like, why does room temperature matter? Like, current superconductors need to be cooled down to extremely low temperatures using expensive things like liquid helium, and it's obviously just, like, logistically, environmentally challenging to manage that cooling. So if you remove
that cooling infrastructure, you democratize the technology, right? They're simpler. They're cost
effective. You can use them everywhere. Fascinating. Okay. Last one for you. We'll just end with one
of your tweets. We wanted to do an Nvidia conversation. You said, current gray market for
NVIDIA H100 is a Wild West mess.
Pricing both monthly with three-year commit or hourly is all over the place.
Teams waiting to train, people who overbought trying to make a dime claiming large clusters for marketing.
I mean, what is the story right now in terms of how scarce these chips are?
Yeah.
So, like, true story.
And, like, we manage a cluster of A100 and H100 Nvidia GPU resources, like, you know, clusters in different clouds, for our portfolio companies and friends.
And so I spent my morning being like, okay, we're going to, like, manage people's SSH keys
and deal with capacity utilization across a few different startups, like, who needs H100s?
I do not want to be doing this, right?
I think it is, like, my business is to, you know, invest in early stage companies that we think can be really important
and help them succeed however we can.
I do not want to be managing, like, hardware utilization.
And, like, obviously, we're giving it to people at cost, right?
Right.
But the problem is, like, the overall macro is we're kind of in the,
there's two dynamics.
We're in the, like, on-premise days for GPU hardware,
which is what you need to run these large models, right?
In that, like, people will literally ask you for, like, the, you know, disk and memory and networking spec on your set of chips. And in the traditional, like, web cloud software engineering world, like,
nobody worries about that, right?
Yeah.
Like, Amazon takes care of it.
Mm-hmm.
Google or Azure take care of it.
And so, like, today, we are still thinking about discrete hardware resources and optimizations against those resources.
So it really feels like a huge step back in terms of maturity of the infrastructure
market.
So one issue is just like the maturity of like how we deal with this type of computing, right?
And that's not a knock on Nvidia, right?
It's, like, a knock on, like, okay, the cloud providers. We're investors in a developer platform company for machine learning called Baseten, which, you know, makes all of this look serverless under the hood, right?
Treat it like traditional computing, like scale to zero, make it really easy to deploy
models.
But I think the adoption of much more modern consumption and orchestration technologies is still
really early.
The second dynamic, which is also not Jensen and Nvidia's fault, is just, like, a core supply chain issue: they can't make them
fast enough for the market.
Like, you know, you're building, it's as if, like, every engineering team
in the world decided in a one-year window, like, you know what we need?
We need a supercomputer, right?
Right.
And so, like, you're trying to build these supercomputers, I almost said superconductors, right?
I'm going to confuse the issues now.
Supercomputers at massive scale, but, like, they're physical goods, right?
Like, you need manufacturing and input capacity.
And, like, the world doesn't have it right now.
Like, we're building that capacity up.
And so what is going on in at least, you know, a bunch of technology companies is there's a hoarding dynamic and a gray market of, like, okay, like, you know, what cloud providers can you do deals with, like, what sort of commits and pricing do you get?
But who plays repeat games with the cloud providers or the chip providers, right? Like Conviction. And it's like, that's why we do it, because we don't want people to be blocked on capacity for training and inference, which feels like a crazy thing to say, but is definitely a very real problem.
And so, you know, I'm really excited for the market to mature to a point where you get, you know, serverless abstractions.
on-demand, massive capacity, you have, like, more efficient scheduling and, you know,
Nvidia or other players in the accelerator space, like, can produce enough to meet demand.
And this is one of the reasons, going back to areas of research we're excited about, like,
I do think efficiency is an important area.
Right. Okay. That all ties it together. Super fascinating conversation. Sarah,
thank you so much for joining. Yeah, super fun. Thanks, Alex.
All right. Thank you.
Thanks, everybody for listening.
Thank you, Nick Watney, on the edits. LinkedIn, thank you for having me as part of your podcast network.
And all of you, the listeners, we'll see you next time on Big Technology Podcast.