Software Huddle - AI Incubation and Investing with Rak Garg from Bain Capital Ventures
Episode Date: January 23, 2024

Today's guest is Bain Capital partner Rak Garg. Rak is a super smart guy who's worked as an ML researcher. Then he was in product at Atlassian before moving over to the venture capital side of the world. In this episode, we talk about BCV Labs, an AI incubator and community for AI founders that Rak helped establish. Rak shares his thoughts on the big opportunities he sees in AI and how it's going to impact the world, both in the short and long term, and how BCV Labs is helping support AI founders bring these visions to reality. There's a huge amount of opportunity to automate away a lot of manual tasks across industries like legal, insurance, and healthcare. But of course, there's a lot of complexity with actually bringing this technology to market.
Transcript
You know, I always loved learning, especially about tech. Like I consider myself something of a software historian. And I feel like investing is a really great way to just constantly learn about new areas and spend time with the really brilliant people that I wouldn't have had access to otherwise.
I probably spent two or three hours a day on it for the first couple of weeks just
to see what it could do. Like I tried to sort of prompt inject it in various ways,
see if it would give me unsafe or unsanctioned responses.
The hacker mentality.
Yeah, exactly. You want to kick the tires and see where it'll go.
How do you go about identifying exceptional founders?
When you've worked on something that has really become a standard or has pushed the field forward in some ways,
if you're coming from sort of an EPD, you know, engineering, product, design kind of background,
it's really about have you worked on things that have mattered?
Hey, everyone. Welcome to Software Huddle.
I'm Sean Falconer, one of the co-creators of the show.
And today's guest is Bain Capital partner, Rak Garg.
Rak is a super smart guy that's worked as an ML engineer
and then in product at Atlassian
before moving over to the venture capital side of the world.
And in this episode, we talk about BCV Labs,
an AI incubator and community for AI founders that Rak helped establish.
Rak shares his thoughts on the big opportunities he sees in AI and how it's going to impact the world, both in the short and long term, and how BCV Labs is helping support AI founders bring these visions to reality.
There's a huge amount of opportunity to automate away a lot of manual tasks across industries like
legal, insurance, and healthcare. But of course, there's a lot of complexity with actually bringing
this technology to market. Well, anyway, that's enough setup for now. I hope you enjoy the show.
And if you do, please remember to subscribe and leave a positive rating and review. All right, over to
my interview with Rak. Rak, welcome to the show. Hey, Sean. Thanks for having me. Always a good time.
Yeah, it's great to see you again. How about for those listening that aren't familiar with you,
let's start with the basics. Who are you? What do you do?
Yeah, thank you. My name is Rak. I'm a partner at Bain Capital Ventures,
focusing on AIML and cybersecurity. And my background is that I grew up in the Bay Area.
I've been coding
basically my entire life. Went to UCLA and wound up doing ML research there for a couple of years,
which was a totally different world at the time. But in 2015 and 16, we were working on
computer vision use cases with RNNs. After that, I worked at Redfin, where we were trying to
productize NLP for automations for real estate agents, and then went to Atlassian, where I was on the founding team of Access,
which is a security product that we scaled to thousands of customers over a couple of years.
And then made my way over to Bain, where I've been investing for the last few years.
So I guess you started as a kid coding, and you started your career as an engineer,
and then you moved into products.
So what got you interested in actually moving over to the investment side?
You know, I always loved learning, especially about tech. Like I consider myself something
of a software historian. And I feel like investing is a really great way to just constantly learn
about new areas and spend time with really brilliant people that I wouldn't have had
access to otherwise. And so for me, it was sort of an evolution of building products where even when
I was a product manager at Atlassian and Redfin, I was always thinking about, you know, who can I be
meeting? What other things can I be helping with? And so it was just a way for me to sort of expand
my horizons at first. Now I feel like it's very personally fulfilling for me to support these founders and
take what they have, which is really deep domain expertise, and help them make an impact on
whichever market or industry or community they want to impact.
And do you think you'll ever go back to the other side of the business, being like an operator
at a tech company, or, you know, essentially maybe even back
to an IC role, leading product for some sort of, you know, technology company? Yeah, you know,
partially this is kind of why we started BCV Labs, which is our AI incubator and technical community
in Palo Alto. Our favorite part of the job at BCV is working with founders. And so being pre-seed,
pre-idea, just working at the inception stage helps me
scratch the itch of working with customers, trying to figure out what problems should be solved.
In a lot of ways, it's kind of like being a fractional product manager to some extent,
just in the kinds of problems you learn about, the diversity of customers, the types of customers
you end up meeting. And so being very early stage and working with founders for me is very fulfilling.
But, you know, I never say never.
And so if there's a business someday that, you know, I feel really, really strongly that I should go join, maybe that'll happen.
Yeah, I mean, that makes sense.
Like for me, when I was leaving Google and looking to figure out what I was going to do next,
I, you know, for a brief period of time contemplated the idea of starting another company, but then
realized that my wife was pregnant, seven months pregnant with our second child.
And I thought that maybe it wasn't the best idea.
But the way for me to scratch the itch of sort of being a founder was to join an earlier
stage company where you still get a lot of the excitement of building something from scratch,
but not necessarily all the headache and responsibility that comes along with being
a founder. So I can understand the idea of being involved with these early stage companies that
kind of scratches your itch to maybe be on sort of the operating side of the business without
actually moving over to that side. Totally.
So I want to talk about BCV Labs, but maybe before we get there,
it's been a little over a year since ChatGPT and GitHub Copilot kind of blew up the internet.
And I was wondering, what was your initial thoughts when you first saw those products?
Yeah, so I had different reactions to both of those products. When ChatGPT first came out,
I probably spent, you know, two or three hours a day on it for
the first couple of weeks just to see what it could do. I tried to sort of prompt inject it
in various ways, see if it would give me unsafe or unsanctioned responses.
The hacker mentality.
Yeah, exactly. You want to kick the tires and see where it'll go. I found it really helpful
on these really abstract and creative tasks.
So I had this one where, you know,
ChatGPT came out, I'd say,
fully everybody got access in November.
And so I had this use case during Christmas
where I had to create holiday cards for everyone.
And I have an awful memory.
And so trying to create these personalized holiday cards
for, you know, a very large family was very hard.
And so ChatGPT helped me think of prose.
It helped me say similar things in very different ways that I could contextualize
to the person just by giving it one character trait or one memory of that person
from that year. And so I really fell in love with it for these creative use cases.
Now I use it a lot less.
I use it probably once or twice a week, mostly for work.
And that's for these like blank canvas first mile kind of problems.
So maybe I'm working on a new blog post or a fundraising announcement or, you know, a
script for a webinar that I'm doing or something like that.
I find it easier to edit than to start with something blank.
And so I'll use ChatGPT to create straw men for me that I can go critique and analyze
and change in different ways.
So for me, it really shortcuts that like, you know, you're staring at the cursor on
a blank Word doc.
What do you do in that situation?
It shortcuts that for me.
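The straw-man workflow described here maps naturally onto the chat-message format used by LLM APIs. Below is a minimal sketch that only builds the request payload; the actual model call is omitted so the sketch runs offline, and the helper name and prompt wording are invented for illustration.

```python
# Build a chat-format request for a rough first draft ("straw man") meant to
# be edited, following the common system/user messages convention. The helper
# name and prompt text are invented; a real workflow would send this payload
# to an LLM API and then edit whatever comes back.

def strawman_messages(doc_type: str, bullet_points: list[str]) -> list[dict]:
    outline = "\n".join(f"- {p}" for p in bullet_points)
    return [
        {"role": "system",
         "content": "You write rough first drafts meant to be heavily edited."},
        {"role": "user",
         "content": f"Draft a {doc_type} covering these points:\n{outline}"},
    ]

msgs = strawman_messages("fundraising announcement",
                         ["new funding round", "what the product does"])
print(msgs[1]["content"])
```

The point is just to turn the blank page into something concrete to critique, which is the workflow being described above.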
GitHub Copilot was really impressive.
I mean, it was basically everything I'd been asking for for a long time.
It certainly wasn't a new idea.
I don't know if you remember this company called Dash,
which basically would inject reference documentation into the IDE.
That was sort of a 2017, 18, 19 kind of product.
Kite was another company that was trying to do probabilistic autocompletion,
but obviously I don't think the ML models were quite there yet.
Copilot brought all of that to the level of abstraction
where it was basically no work to start using it.
And I think Microsoft played that very masterfully
where everybody's already on GitHub.
Most people I know use Visual Studio Code.
And so it was very easy to get started with Copilot.
And then it really did everything
I would have wanted it to do at the time.
I personally think Copilot's just scratching the surface.
Like I would like to see it go deeper into infrastructure and architecture,
handle IaC for me.
Why am I the one dealing with Docker, handle all the Docker stuff for me?
And so I think that's eventually what we'll get to with CodeGen
and some of these code completion tools.
But Copilot was really just a force for me
in keeping up with my coding projects.
Yeah, I think you mentioned with ChatGPT,
I think one of the big value adds for a lot of people is it just solves the
blank page problem. For most people, it's easier to edit than necessarily create from scratch.
And then, yeah, I agree. With GitHub Copilot,
I think there's a lot of companies now. You know, JetBrains is coming out with
their sort of version of the Copilot.
A lot of people, you know, Salesforce now has their sort of co-pilot.
Everybody's creating some sort of co-pilot.
There's Microsoft 365 co-pilot.
So I think this is going to become the norm, but we're really just also sort of scratching the surface there.
Like I would love for, you know, they should be able to handle like infrastructure as code. And, you know, you should,
you know, be able to spit out your CloudFormation or your Terraform file and do all these types of
things that take sort of hand coding. And they're kind of like not the funnest jobs to do, and
if we can hand that off to AI, fantastic. And especially if it's something that you're not,
you know, living and breathing every day, it's hard to remember all the little syntax nuances
that you might need to remember,
and it's just going to end up taking you 10 times longer
than it really should
because you're not doing it all the time.
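The IaC and Docker boilerplate being discussed is exactly the kind of output a codegen assistant would produce from a short description. As a deterministic stand-in, here's a toy generator that renders a Dockerfile from a tiny spec; the spec fields are invented, and a real assistant would work from a natural-language request rather than a structured dict.

```python
# Deterministic stand-in for "hand the Docker boilerplate off to AI": render a
# Dockerfile from a small declarative spec. The spec fields ("base_image",
# "setup", "port", "cmd") are invented for this sketch.
import json

def render_dockerfile(spec: dict) -> str:
    lines = [f"FROM {spec['base_image']}", "WORKDIR /app", "COPY . ."]
    for cmd in spec.get("setup", []):
        lines.append(f"RUN {cmd}")
    if "port" in spec:
        lines.append(f"EXPOSE {spec['port']}")
    # Exec-form CMD, serialized as a JSON array per Dockerfile convention.
    lines.append("CMD " + json.dumps(spec["cmd"]))
    return "\n".join(lines)

spec = {
    "base_image": "python:3.11-slim",
    "setup": ["pip install -r requirements.txt"],
    "port": 8000,
    "cmd": ["python", "app.py"],
}
print(render_dockerfile(spec))
```

This is the syntax-nuance chore the conversation is pointing at: nothing here is hard, it's just easy to get wrong when you don't write it every day.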
Yeah, I think the other interesting thing
about code completion as a category
is that coding is very rules-based
and these models have proven to be really good
at rules-based kind of tasks.
And so that means you can get very, very small models
that are really efficient
and you can productize them in interesting ways.
So Microsoft, for example,
I think just last week launched Phi-2,
which is their state-of-the-art sort of code gen model
out of Microsoft Research.
Google had a paper called DIDACT from a few months ago.
And the way the paper makes it sound is they were training this model sequentially on not just code, but on GitHub commit history. And so the model would see how
software was built up. What did you start with? What did you add on in the second commit,
the third commit, and so on? And so it would build software the way that a human developer would build it.
And then even one of our companies, Poolside, has been innovating there. So Poolside has done
a tremendous job of really re-architecting the way developers think about these code
completion tools. So we mentioned BCV Labs, which was recently announced, and it's an AI incubator community
for AI founders.
I guess, what sets BCV Labs apart from other incubators that exist for startups?
And how does it provide kind of like a unique and tailored offering for AI engineers and
researchers?
So at its heart, BCV Labs is a community, and it's a community of vetted researchers,
vetted engineers and founders, and vetted product people.
And from that community, we run various programs that are sort of tailored to, you know, that
person's sophistication or readiness for entrepreneurship.
So, for example, if they're a deeply technical expert and they can't stop thinking about the possibilities of something new,
like retrieval augmented generation, right?
Or if they're a deep domain expert that has just lived a very complicated workflow
and has lived this day in, day out,
and it's just never been possible to automate that workflow until now.
We supplement them in the ways that they kind of need supplementing.
So that research expert might want someone who can connect them to customers
and might want to test whether or not this research technique has a place in the enterprise
or in whatever area or industry they want to target.
And vice versa, the domain expert knows very well that this is a big problem in companies,
but maybe doesn't necessarily have the technical background to go and really sort of, you know, command that opportunity.
So for that kind of person, we help them find their co-founder.
We help them recruit the earliest engineers.
We help them find the first couple of design partners through the sort of global Bain Capital and Bain Network.
Ultimately, running a company is the founder's job and the founding team's job. But our hope is that we can give them a sort of head start and a support structure so that they can focus on what really matters, which is building a product and solving customer problems.
All the other stuff, we can help them with. Then there's someone who is just really high potential
and a rising star in the industry.
For that kind of person,
we might offer them incubation space
in Palo Alto or San Francisco.
They can go meet the other founders that we're working with.
They can talk about new things, new ideas,
new problems to be solved.
But we also assemble cohorts,
very small, sort of dozen people cohorts once a year, where we expose that rising star personality
to the greatest founders. And our hope is that, you know, they'll see that every company looks a
little sort of unclear when it first starts. And maybe that'll, you know, that'll help them take
the plunge. And we just try to connect them to two or three people that they could work with in the future,
two or three people that can become customers of theirs or become advisors to them.
It's sort of like putting all the pieces there so that when they're ready, they have access to all of that stuff.
And then still, there's this third layer of people who maybe just aren't ready to start a company if it's not something they want to do.
But they are very happy in their current role and they want to be exposed to new things in AI.
I've met a lot of people from non-AI companies like Stripe and Canva and so on who have AI teams.
These are really bright engineers who maybe didn't come from research backgrounds.
And they're trying to find ways to leverage AI, maybe inside the company or publicly in their products. And so we can bring in that kind of person to our events, our salons, our demo days or debates. And that just exposes, you know, their company and the way that they build products to these sort of, you know, new innovations and new ideas in AI. And so really, all of it is, you start with this kernel of, let's help people start
companies. And then you go bigger and bigger and bigger. All the while, we keep everybody in that
community, very high quality, very curated. We run really small events. I mean, these are
25 to 30 person events that happen a couple times a month. And they're very, very specifically
themed and scoped. And so with that, you walk into a BCV Labs event
and then you leave with a couple of new ideas
and a couple of new friends.
How do you keep or how do you actually curate the community
so that you are keeping it as a high-quality individual
as part of it?
Yeah, we're all about domain expertise
at Bain Capital Ventures.
We come to every meeting with founders
with a prepared mind
and our bar for founders is similar at labs.
So that means that they've thought
really deeply about something,
whether that's a research technique
or a way to get LLMs into production
at their existing sort of product surface area,
or they've thought about a very specific problem in security or in sales operations or data analytics or something else.
And we interview every single person before putting them on this list.
So someone at BCV has vouched for and met and, you know, felt that someone was really high potential for them to be part of the BCV Labs community.
By doing that, I think it helps people gain confidence
that they're in like company.
They're going to be surrounded by people who think about things
to a similar level of depth that they do.
And then I also think it just helps them
kind of filter out the noise to some extent.
Every event is curated around,
as an example,
we ran one on agents the other day.
And so the only people that were there
had been thinking about agents,
whether it's from a product point of view
or a research point of view.
And so that helps people
just meet other exceptional people.
And then they refer us new people
based on their experience
with our events and our salons.
Yeah, I guess it's kind of similar to the idea of hiring
A players attract other A players, right? So if you keep the
quality bar high, they're going to essentially refer other people that are sort of at the same level
as them or their peers.
How do you go about identifying exceptional founders?
There's a lot of things that can make someone exceptional in our mind.
So part of that is, you know, if you're coming from that research background, have you worked
on projects and papers that really made an impact and really made a difference?
So examples of this are, you know, the Chinchilla paper really changed the way people think
about scaling laws.
The REALM and retrieval augmentation papers created entire new categories in vector databases
and in retrieval augmentation. There are a bunch of eval papers coming out of all of the labs at
Stanford and Berkeley right now that really push our thinking on how to choose models and how to
benchmark these models. And so when you've worked on something that has really become a standard or has pushed the field forward in some ways,
curricular learning or mid-training or other examples of this, we want to meet you and we
want to help you take what you've sort of uncovered and bring that to more people, whether that's
help you build your brand or help you start a company or whatever else. So that's exceptional
in the research category. If you're coming from sort of an EPD, you know, engineering product design kind of
background, it's really about have you worked on things that have mattered? If you're a designer,
have you worked on really compelling products? If you're a product manager, have you been able to
sort of materialize things at the early stage, you know, creating something from nothing? If you're
an engineer, do your peers and, you know, your track record of GitHub projects,
open source contributions, tell the same story as the one that, you know, you want to tell.
And so all of these things kind of go into what makes someone exceptional.
But a lot of it is really, you know, you meet someone, and then you just can't stop thinking
about that meeting for a couple of days, because they exposed us to something new or some new idea, or maybe they said something in a way
where we just never thought about it like that before. And so there's a softer side to it as
well. And we put those two things together and that's what determines if someone is a good fit
for BCV Labs. Are there specific industries or verticals within AI
that you are particularly interested in
or you see having significant growth potential?
Yeah, so I have multiple answers to this.
I think at the application layer,
we're really excited about automating universal workflows
that are really manual or brittle today.
So if we think about, you know,
where does the majority of IT spend go in the world today?
It's actually not on products.
It's on services.
And the majority of IT services are, you know, implementation charges, maintenance charges, data integration and transformation charges.
I mean, there is a lot of business in helping other companies become successful with various products or
various processes. And we think a lot of that, at least lower level work, can be automated by LLMs.
And so literally, you're converting service revenue into software product revenue. And
everything around the ERP industry, I think, is a really good example of this,
where historically, these integrations have been really hard to build. And these systems of record have been very brittle,
because they were designed to be walled gardens. And so historically, it's not been a fun job
to try to build around the ERP. But with LLMs, you can kind of raise up the layer of abstraction,
you can read data from the screen, you can ingest data from the text on the screen.
And then you can manipulate in different ways and send it in different places. And I think that has a lot of potential.
Other examples of this are, you know, handling procurement for B2B marketplaces. If you think
about companies like Faire, or sort of hardware procurement, robotics parts procurement,
a lot of those industries are very, very manual, where you don't often know how something's going to fit together with the rest of your store or your hardware project or something else.
And so we think a lot about, you know, what can we do to sort of automate the procurement process for marketplaces specifically?
And I think at the infrastructure layer, we're just interested in problems that push the field forward, that create better applications that we haven't thought about yet. So an example is audio generation, where, you know, Midjourney has sort of made
images a first-class citizen. Every AI blog post I see, the graphic is generally AI generated.
A lot of AI apps now are using Midjourney for product graphics internally. And so I wonder
if audio will become sort of the next modality. You know,
we've always had to deal with these really robotic kind of tinny voices in software.
What happens if software sounds like you or me? You know, can I go to three meetings at the same
time? Because my voice is cloned and it just asks the one question that I have to ask in that
meeting. Another example is multimodal retrieval. So retrieval today, historically, has been all about text.
Let's get the right text from the right places
at the right time.
Can you do that with all the audio notes
that you have in Gong?
Or all the call records from Otter?
Or all the images and briefs
that your sales team has created?
All of that is really valuable information.
How do you make that accessible?
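The retrieval idea sketched here, getting the right content from the right place at the right time, boils down to ranking stored items by similarity to a query. Below is a minimal text-only sketch using bag-of-words cosine similarity; real systems use learned embeddings over text, call transcripts, and image captions, and the snippet contents here are made up.

```python
# Retrieval in miniature: rank stored snippets (call notes, transcripts) by
# cosine similarity to a query. Plain bag-of-words vectors stand in for
# learned embeddings so the sketch runs anywhere; the snippets are invented.
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

DOCS = [
    "pricing discussion from the q3 sales call",
    "engineering standup notes on the retrieval bug",
    "customer complained about onboarding friction",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    vq = vectorize(query)
    ranked = sorted(DOCS, key=lambda d: cosine(vq, vectorize(d)), reverse=True)
    return ranked[:k]

print(retrieve("notes from the sales call about pricing"))
```

The multimodal version swaps the vectorizer for an embedding model that can encode audio transcripts and images into the same vector space, but the ranking step stays the same.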
And then another one is eval, right?
I think eval is a very unsolved problem.
There's LMSYS, obviously there's HELM,
but a lot of the companies that I work with
have a very hard time replicating eval results
across various models.
And so we think a lot about,
is there a way to sort of maybe generalize
or create a better product experience
around these evaluations?
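A minimal version of the eval harness being described here: run one fixed test set against multiple models and compare exact-match accuracy. The two models below are stub functions standing in for real LLM APIs, which is exactly the apples-to-apples setup that's hard to replicate across providers.

```python
# A tiny eval harness: the same fixed test set is scored against multiple
# "models" with exact-match accuracy. The models are stubs, not real APIs;
# the prompts and answers are invented for illustration.

EVAL_SET = [
    {"prompt": "2+2", "expected": "4"},
    {"prompt": "capital of France", "expected": "Paris"},
    {"prompt": "opposite of hot", "expected": "cold"},
]

def model_a(prompt: str) -> str:
    return {"2+2": "4", "capital of France": "Paris"}.get(prompt, "unknown")

def model_b(prompt: str) -> str:
    return {"2+2": "4", "capital of France": "Paris",
            "opposite of hot": "cold"}.get(prompt, "unknown")

def accuracy(model) -> float:
    hits = sum(model(ex["prompt"]) == ex["expected"] for ex in EVAL_SET)
    return hits / len(EVAL_SET)

scores = {"model_a": accuracy(model_a), "model_b": accuracy(model_b)}
print(scores)
```

The hard parts in practice, which this sketch sidesteps, are non-deterministic outputs, fuzzy matching of free-form answers, and keeping the test set out of the models' training data.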
Yeah, it seems like one of the big themes that you kind of started off highlighting there is,
you know, where are there opportunities to automate where we essentially rely on like human resources to do manual work today?
And, you know, I know, for example, like I think like the legal profession is a really good example of that,
where you have like basically like teams of people that
are just like go find this thing in this obscure like document somewhere because they have no other
means to essentially solve some of those problems. Or, you know, essentially like understanding
and processing a large contract. Or even, you know, filling out paperwork. I know I've talked to some
companies in the drug discovery space that are exploring
using LLMs just for helping fill out some of the paperwork involved with actually getting
through all the stages that you need to get through to actually bring a drug to trial.
And it's not that they won't have a human in the loop, but there's just a lot of stuff
that today you have to rely on humans to essentially take care of. Yeah, definitely. I mean, another example of this is, there's a lot
of industries that historically have rejected software, you know, Silicon Valley companies
would go to hospitals, they'd go to governments, they'd go to insurers and banks, and they'd say,
look at our new software, you should use this, it's way better than whatever you're using.
And all those companies have historically said, no, thank you.
We're very happy with what we have.
We don't want to deal with this.
Even if we dealt with it, it would take us 18 months to start using it.
Right now, what's happening is that you don't have to do that anymore.
And AI software can just sit on top.
It can sit on top of whatever EMR or EHR you're using.
It can sit on top of whatever sort of MGA software you're using or
loan origination software you're using. And it helps the human do much better work, much more
productive work, faster work, because they just have to do less clicks. They're not clicking
around to take data from one place to the other anymore. And at the same time, the IT department,
the company's IT team doesn't have to deal with all the really gnarly sort of migration
challenges that come up with moving away from these historical systems of record. And so I
think what we're seeing emerge is, you know, the 2010s were about take systems of record, put them
in the cloud. That was the Workday story, the Atlassian story, the Adobe story, and so on and
so forth. I think the 2020s are going to be about systems of intelligence that become the way we interact with software.
So leave the system of record alone.
It is where it is.
It is what it is.
Instead, we'll be interacting with it through some other medium.
And that will be these AI apps.
Yeah, I actually saw a demo recently where they were using LLMs to abstract away multiple APIs.
So instead of essentially having to call a specific API endpoint or use a specific SDK,
they would essentially, as a wrapper to that,
describe what they needed from the API
and even the format they needed.
And then behind the scenes,
the LLM is figuring out which API to call
and then reformat the results and stuff like that.
And it was all very much like demo-ware, but that's kind of a glimpse into the future of
what writing software could potentially be.
I don't even need to essentially know what the rest API endpoint is.
I just need to describe essentially the type of data I need and what it needs to actually
be able to do to accomplish my job.
Totally.
I mean, MuleSoft was, I think, like a $6.5 billion acquisition back in the day. And
that was when it could only automate and integrate with existing APIs, right? And now imagine a
MuleSoft built today that can integrate with everything because you never have to know the
API anymore. How much bigger would that company be? I mean, it's mind-boggling.
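The describe-what-you-need pattern from this exchange can be sketched as a router over a registry of endpoint descriptions. A real system would ask an LLM to choose the endpoint and reformat the result; a simple word-overlap score stands in here so the example runs offline, and all endpoint names and payloads are hypothetical.

```python
# Toy sketch of "describe what you need, let a model pick the API". A
# bag-of-words overlap score stands in for the LLM's routing decision so the
# example runs offline; both endpoints and their payloads are hypothetical.

def get_weather(city: str) -> dict:
    return {"city": city, "temp_c": 18}       # hypothetical API response

def get_stock_quote(symbol: str) -> dict:
    return {"symbol": symbol, "price": 101.5}  # hypothetical API response

REGISTRY = [
    {"fn": get_weather, "arg": "city",
     "description": "current weather temperature forecast for a city"},
    {"fn": get_stock_quote, "arg": "symbol",
     "description": "latest stock price quote for a ticker symbol"},
]

def route(request: str, value: str) -> dict:
    """Pick the endpoint whose description best matches the request."""
    words = set(request.lower().split())
    best = max(REGISTRY,
               key=lambda e: len(words & set(e["description"].split())))
    return best["fn"](**{best["arg"]: value})

print(route("what is the weather in", "Lisbon"))
```

In the LLM version, the model also reformats the response into whatever shape the caller described, which is the part that makes the demo feel like magic.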
Yeah, absolutely. What is the relationship or the connection between BCV Labs and Bain
Capital? How is Bain involved with some of the startups that would be part of the incubator in
the community? Yeah, totally. So BCV Labs is inception stage. This is pre-seed. It's really
talented people who haven't yet taken the leap of raising a seed financing for their company. And so it's really the idea or pre-idea stage of investing.
Bain Capital Ventures, so BCV, invests from seed to IPO in a set of core domains
because we're very domain focused.
And for us, that's next-gen applications, it's commerce, infrastructure, fintech,
and then a smaller set of emerging areas like climate, defense, robotics, and so on and so forth. BCV is the early stage venture arm of Bain Capital.
And so Bain Capital is a very large global alternative asset manager that invests in
businesses of all sizes in various industries. And really, this supercharges the network that
BCV Labs provides those very talented people. So actually, I think two
weeks ago, I met somebody who's thinking about building something in the insurance space,
and she was coming out of Facebook AI research. And so the question is, how can she go talk to
the AI people at MetLife and at Nationwide and, you know, so on and so forth. And the Bain Capital
Network really helps us connect that
kind of personality to the people that might become their customers or their collaborators.
So that's sort of the relationship. Bain Capital is sort of the mothership.
BCV is the early stage venture firm within Bain Capital. And then BCV Labs is the pre-idea
inception stage incubator as part of BCV. Do you think right now that because there's so much interest in the space
and there's so much potential that in some ways we might have a little bit,
like, I don't know, like rose-colored goggles when it comes to like,
just, hey, like there's this problem and I know the solution,
I'll apply LLMs to it.
And then there's this other problem and I'll apply LLMs to that as well.
And that might actually prevent us from thinking about alternative solutions that are actually
maybe less expensive, less difficult to actually operationalize and scale and prioritize.
I think you always run the risk of that in any hype cycle.
And we are definitely in a hype cycle for AI.
I think what makes this different from historical AI hype
cycles is if you just walk through sort of the history with me, you know, in the early 2000s,
we had sort of the big data wave and that created companies like Cloudera, you know,
we'd hear words like Hadoop, even Spark kind of came out of that timeframe. And the core goal
there was help companies harness just gargantuan amounts of data. And the problem was there were
very few companies that had that kind of data or knew what to do with it. And so, you know,
you had the software, maybe it wasn't as performant as it would become, but you didn't
have a customer base that really knew how to code or architect systems around these new
products. In the 2010s, we got companies like Databricks and Confluent for data streaming.
We got the whole MLOps wave, companies like Tecton, which we're very happy investors in.
And what happened was that you had these breakouts like Uber and DoorDash and Lyft that were sort of defined by their ML prowess, right?
The better price forecasting on ride share companies or better fraud and risk detection for lending
companies and fintech companies. ML for the first time was sort of a differentiator in product.
And there was a talent crunch. Not enough people could build the specialized models
that would actually create the product advantage. And now in the 2020s, what's happened is anybody
can create images, text, audio, video. Anybody can, any developer really can use the OpenAI API.
And then there's well-documented ways to get it to do what you want,
whether that's guardrails or spit out things in a policy language
or a grammar or something else.
And so the reason we're seeing so much hype,
and as you said, the rose-colored goggles,
there hasn't been a time in history before
when everybody could use AI.
And so we're really seeing this sort of divergence of what are all the things we can do with it.
And what we'll see, I think, at the tail end of maybe the next few years is a convergence of what are the most valuable apps and what are the most valuable use cases for this technology.
And so I think anytime you have this sort of hype cycle, you have to kind of validate, are you solving a real problem for someone?
We certainly do that whenever we interview people for BCV Labs, and obviously when we diligence opportunities and founders.
We spend real time together trying to uncover those use cases.
There are some things that just don't need LLMs, but there are a lot that I think we haven't even discovered yet that could do a lot more with LLMs. Yeah, it's interesting.
I think you made an interesting point with the idea that if we essentially
lower the barrier to entry with using ML models,
we democratize the use of these things,
then suddenly you're opening up the use of them to other domains
that never had access to ML before,
at least not with that really specific use case in mind.
So if I was an ML engineer, I might not be thinking about like, how can I apply this technology to
like, I don't know, geology, or even chemistry or something. But now if I'm a geologist or a chemist
or whatever I'm sort of interested in, I can use these things without having to be super like
technically proficient. Suddenly, I'm going to be able to sort of connect the dots in a new way that no one's ever been able to do before.
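The "guardrails" and constrained-output idea mentioned earlier, getting a model to spit things out in a policy language or grammar, can be sketched as a simple output validator: the model's raw text is only accepted if it parses into an expected structure. This is a hedged, minimal sketch in Python; the schema and the fake model outputs are illustrative, not any particular library's API.

```python
import json

# Expected "grammar": the model must return a JSON object with these fields.
EXPECTED_FIELDS = {"action": str, "target": str, "confidence": float}

def validate_llm_output(raw_text: str):
    """Accept the model's output only if it parses into the expected schema."""
    try:
        obj = json.loads(raw_text)
    except json.JSONDecodeError:
        return None  # not even valid JSON: reject
    for field, ftype in EXPECTED_FIELDS.items():
        if field not in obj or not isinstance(obj[field], ftype):
            return None  # missing or mistyped field: reject
    return obj

# Illustrative model outputs, one well-formed and one not.
good = '{"action": "summarize", "target": "ticket-123", "confidence": 0.9}'
bad = "Sure! I'd be happy to help with that."

print(validate_llm_output(good))  # parsed dict
print(validate_llm_output(bad))   # None: caller can retry or fall back
```

In practice the rejection branch would trigger a retry or a re-prompt, which is roughly what guardrails libraries automate.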
Yeah, definitely.
In terms of the tech community,
how do you actually think about growing and nurturing
the community that you're building with BCV Labs?
Because that's a hard challenge in itself.
Many companies have failed to essentially do that effectively.
Our goal is to stay small for as long as possible.
We cull the list every few months where we really think critically about who's coming to the events, who's engaged.
Do we want them to be more engaged or less engaged?
And we really think about ways that we can encourage people to
meet each other without us. Like that's sort of the goal is that if you're a part of this community,
you come to an event or two, you meet a couple of people that you want to continue the conversation
with, whether or not that involves BCV. And so that's how we are trying to, at least at this
stage of growth, make sure that the quality
bar stays very high. I fundamentally believe if we keep the quality of person that comes to our
event or the quality of company that speaks at our events very, very high, good things will
continue to happen. And so far, it's been going fairly well. We've got a few hundred people that
are very engaged with us. We invite 20 or 30 every couple of weeks to
our offices to hang out with us. We've invested in three or four teams already that have come out of
BCV Labs and come out of these efforts. And so we're just going to continue to keep things small
and then grow from there. In terms of nurturing these companies as they grow, BCV invests from
seed to growth. And we love to support entrepreneurs in every round as they sort of grow and gain breakout scale.
There's the really tactical stuff in the early days: introductions to other companies that can become customers, become advisors, just become these really sort of, you know, pillars of strength for early stage companies who might not have access to those
people otherwise. And so if we can do that at the early stage, and at the growth stage, we can
help the companies bend the curve and inflect, then we can start to, you know, really make this
a full lifecycle kind of thing. But that's how we think about it. Yeah, I mean, I think when it comes to communities
or really any sort of go-to-market effort, it's much better to have
100 people that are super engaged and in love with whatever it is
than have 10,000 that are sort of just there
apathetically loitering and not really involved or engaged.
Because, you know, a hundred great people are going to lead to, you know, the next hundred great people if they're highly engaged.
And from your perspective, probably lead to better, you know, investment opportunity,
better companies and stuff. When it comes to like early stage investment, like trying to make a
decision around, should I, does it make sense to put money into this or not?
What is sort of the framework or thought process that you go through to figure out,
does this make sense? How do you sort of evaluate companies or opportunities at such an early stage?
Because you just don't have a lot of data to go on. Totally. I think the most important part of
that equation is the founder and the founding team.
So do we believe that this person is coming from a set of experiences where they've gained very deep domain expertise?
Are they working with anyone else, whether that's a co-founder or it's a set of founding engineers or something else? Do they have people around them that prove that they can recruit a team, they can hire a team, they can convince other people to come work with them and for them? And then I think once you have conviction on the founder,
you know, we spend a lot of time in person together building with them. Literally,
I go to BCV Labs, you know, every day. And so we're trying to sort of observe,
you know, how do they handle customer conversations? How do they handle the
different people that we introduce them to? All of that is first and foremost, trying to be helpful to them, but also evaluative
in a sense.
From there, you go into the market.
And so I think at the early stage, what we really care about is, is it an existing market
that historically has never been penetrated by software or under penetrated that AI and
various models can now make a very big impact in?
Or second, is it that it's a totally new market?
It's just something we've never looked at before.
It's going to create new apps, new ways of doing things,
and just change the way we interact with software.
I think agents are a really good example of that.
Three years ago, you only heard about agents in science fiction.
Now you can actually have an agent like Multion, for example, order you a burger at the exact
time you like to eat lunch every day, right?
And so that kind of thing is not an old market.
That's a brand new market.
So those are the three things that we validated.
It's the founding team.
Is it a new market?
Is it an existing market that's big enough to be innovated in?
And then from there, it's really about, you know, can they execute?
And have they proven
that they have a
willingness to keep
pivoting and keep
executing until they
find, you know,
something that
resonates?
How do you think
some of these,
you know,
changes that we're
seeing from
the innovations
that's happening
in the space
are just going to
impact, like,
jobs sort of in the short term or the long term. Just even talking about
the agent idea, it'd be great if I had a team
of agents that were just working on my behalf. But
traditionally, that was probably work that might have been done by an admin assistant or something like
that. So it's going to impact certain types of jobs, at least in the
short term. And then I think any technology in the long term generally leads to more job opportunities. But how are you
thinking about the sort of short term versus long term impact to some of this stuff?
Yeah, absolutely. I think the writer's strike in Hollywood is another really good example
of some of the ripple effects of AI. There's where we are today, and then where I think we're headed.
So where we are today is we've seen all of the service providers leverage AI and get ahead of it and use it to
serve more clients or serve the same clients, but better in more deep or more creative ways.
We've seen that with copywriting firms, with publishing houses. We've seen it with marketing
and design firms. And what it's really done is, you know, historically it takes, if you're
working with one of these service providers, it can take a while to scope a project, to do all
the scheduling, to get to a first deliverable, and then you refine from there. Again, that first
segment of a work stream we found has been much faster, at least in the providers that I tend to
work with myself. And so what I think has happened is with the same
headcount, the same firms, the same service providers have been able to serve many more
clients or do many more projects for clients, which I think is a good thing for them and for
the client. I think where we're headed is as these models get bigger and better, because they will
continue to get better. I think we really have to think about, you know, which roles are going to
be able to sort of raise the level of abstraction that they work at.
Should developers really be writing IaC rules?
Or should they be focusing on, you know, what's the next product or feature that we can launch, and solving the sort of hard technical problems? What happens in the, like, the R&D and EPD kind of realm is that, you know, the easy stuff gets automated and you can focus on what engineers would call the fun stuff, or what PMs and designers would call the real work.
And I think in other sort of, like, industries and other roles, we're just going to have to wait and watch what happens.
Right. I mean, I think what's happened is that the genie is out of the bottle and people are learning how to prompt and creatively express themselves with these models.
And that's becoming a differentiator.
So if you use GPT-4 and I use GPT-4, we might get wildly different results because you're probably a much better prompter than I am.
And so maybe that becomes the area of specialization that writers and creative people sort of start to exploit, which is that this becomes a tool like Photoshop or like Sketch or like Figma and less of something that completely automates you
away. Yeah. Yeah. I think, you know, on the developer side or the engineering side,
I always say that, you know, you're hired as an engineer to solve problems, not necessarily
to write code.
Writing code is maybe the instantiation of how you solve the problem,
but the reason you're paid the salaries that you get paid as an engineer is to solve problems.
And I don't think those problems go away just because you have suddenly a co-pilot assistance that's there.
It just actually gives you more freedom to solve more challenging and harder problems, which is high value for a company and also a good way to keep and maintain your job.
Definitely. I think Parker Conrad had a really good quote on this on Twitter where
it was something about remote work. And he was saying, as an engineer,
your job is not lines of code. It's not shipping more code. It's coming to work,
treating the people around you well, creating new ways and new architectures to do things.
That's the role of an engineer. It's not, let's just start shipping code. And so I completely
agree with you. Yeah, I think it's easy to sometimes lose sight of that. And sometimes
even companies create, I think, the wrong sort of, like, KPIs and sets of structures that make us think even more about this, where it becomes like, oh, well, I know I've got to get rewarded.
I'll get my bonus at the end of the year if I write so many lines of code.
But then it becomes like a gamification of what you're actually doing.
This is actually a really good use case for models, which is I've never met an engineering manager that liked the way their company did
career management for engineers. It seems that it is very hard to know when to promote someone.
It's hard to put the packet together. It's hard to know what to index on. You don't want to compare
people based on the lines of code or the number of features, but it's also not an engineer's fault
whether their product generated business impact or not. And so I think AI managers will start to emerge where
they look at not just these sort of DORA metrics, as they're called, but also the quality of what's
been done. Is this person taking ownership of meetings? Are they scoping out bigger and more
interesting tasks as an architect? Other things that we couldn't quantify before that now we can
and aggregate and ingest in
different ways to make these judgments on, you know, how is someone doing in their career?
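The DORA metrics mentioned above can be computed very simply from a change log. A hedged sketch, not any vendor's actual tooling: the event data is made up, and only two of the four standard DORA metrics (deployment frequency and lead time for changes) are shown.

```python
from datetime import datetime

# Illustrative change log: when each commit landed and when it was deployed.
changes = [
    {"committed": datetime(2024, 1, 1, 9), "deployed": datetime(2024, 1, 1, 17)},
    {"committed": datetime(2024, 1, 2, 10), "deployed": datetime(2024, 1, 3, 10)},
    {"committed": datetime(2024, 1, 3, 11), "deployed": datetime(2024, 1, 3, 15)},
]

def deployment_frequency(changes, window_days):
    """Days with at least one deploy, divided by the observation window."""
    deploy_days = {c["deployed"].date() for c in changes}
    return len(deploy_days) / window_days

def median_lead_time(changes):
    """Median hours from commit to deploy (DORA 'lead time for changes')."""
    hours = sorted(
        (c["deployed"] - c["committed"]).total_seconds() / 3600 for c in changes
    )
    mid = len(hours) // 2
    if len(hours) % 2:
        return hours[mid]
    return (hours[mid - 1] + hours[mid]) / 2

print(deployment_frequency(changes, window_days=3))  # deploys per day
print(median_lead_time(changes))                     # hours from commit to deploy
```

The point made in the conversation is that these quantitative signals are the easy part; the quality signals (ownership, scoping) are what models could add on top.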
Yeah, I think evaluation, even outside of engineering, evaluation of employees is probably
a big opportunity, because it is a really hard problem. Certainly, like, no one liked the
performance review structure that was set up during my time at Google.
Everyone complained about it, whether you're a manager or an IC. And there was definitely a
challenge, I think, where you could be a really good IC, but maybe you weren't great at sort of
articulating and championing your own work. And then you sort of get penalized in that system
because you're not good at politically putting together a packet to
emphasize how impactful your work was. So then someone who is maybe not as good, but better at
sort of that part of the job or that sort of gamification of the system ends up getting
promoted past you. So you kind of have these, whenever you're measuring people on something,
there's both positive and negative consequences to whatever it is that you're measuring. Definitely.
One of the companies that I've worked at, there was this feeling that you had to
write blogs to become successful at the company. And so all
of a sudden you'd see people who maybe were strong engineers or maybe weren't,
but they would be pumping out internal content and internal blogs because
those were the rules of the game, as it was described to them or as
they had observed, which is probably not good for the company.
The company would obviously want you to keep building products and driving impact.
And so that I've definitely seen that dynamic play out.
Yeah.
It's like you optimize the things that you measure.
So you should be careful about what you measure, essentially, and especially when it comes to evaluating people's performance.
Now, back to in terms of AI founders, do you think that there's unique challenges to being
a founder of an AI company versus, you know, maybe a non-AI company or traditional sort
of technology company?
Oh, certainly.
I mean, there's three unique challenges that AI founders face
that I don't think traditional founders face right now.
The first is access to compute has never been harder to find.
And I don't see that problem getting better anytime soon.
Every time you look at NVIDIA's quarterly results,
you find that demand is far outpacing supply still.
And there doesn't seem to be a viable solution yet.
There's several experimental approaches using AMD's chipsets.
Their ROCm runtime seems to be working in sort of theoretical settings that I've found.
But I still don't know people who are running workloads on ROCm in production outside of the very large companies. And so if you're a very
early stage founder, finding H100s and soon H200s is a real, real challenge. The way we've tried to
solve this is we provide every founder we work with about a million dollars in credits to each
of the major GPU cloud providers. And so that includes all the big hyperscalers, which is AWS, GCP, Azure,
but also the various model providers,
so Anthropic, OpenAI, Cohere,
and then also other sort of alternative GPU cloud compute providers
like Crusoe Cloud, RunPod, and so on and so forth.
And so that has been a source of acceleration
for a lot of these companies.
The other problem is, even if you get into the queue, how do you get prioritized?
You know, there's a very long waiting list sometimes.
And so we work with the companies to connect them into each of these providers
and maybe help them find ways to, you know, use alternative compute resources
or find, you know, other ways of doing things that maybe they hadn't thought about before.
And that also turns out to be an accelerator for a lot of these early stage companies.
So number one is compute. Number two is there is so much noise in AI, as we've kind of alluded to
in this conversation. You know, if you're building, let's say a co-pilot for recruiting or for sales
or something else, chances are there are over 30 competitors in your space
that are at similar stages to you
that are probably all growing very rapidly.
Part of that is good.
It's good to be in a market that's expanding.
It's good to be in a market
that people are really excited about and interested in.
Part of it is how do you stand out?
And so I think we've got a world-class PR and comms team
that helps these companies tell their story.
We've got a customer
development team that connects these very young companies directly into the right groups at the
right companies. And so in a lot of ways, it kind of just helps you avoid the sort of rat race of
continuous hacker news posts and continuous tweeting and all that kind of stuff. It's very
important to do, but when you're in the very early stages, you've got to focus on product and on solving problems rather than on creating hype.
And again, I think that's the sort of the inception stage to test if something works.
Once you become a seed stage company, marketing and go to market will matter a lot. This is more
just sort of at the early stage. I think the last is talent. I mean, it's really hard to find and recruit these really talented researchers and engineers, depending on the kind of company
you want to build when you're on a startup's budget. And so we found that bringing people
into the incubation space, showing them that it's less about a job and more about a mission,
exposing them to a community that they get access to if they join this company or this founder. All of that has gone a long way in assuaging various concerns that you
might have if you're leaving a bigger company to go join something at the seed stage or pre-seed
stage. But I think those are three really unique challenges that other founders that aren't
building in AI might not necessarily have right now. Yeah, the compute one is a big one. And I think it's a hard problem to solve.
If you don't have the money, even if you have the money,
there's these wait lists that you have to navigate.
It can really slow down your development cycles.
What do you think are some of the big unsolved problems
where AI could have big impact?
Like what kind of like, I guess,
like problem would you love to see a startup tackle?
Yeah, I mean, dude, there's so many.
There's a few that we've been thinking about,
which are, you know, I think there's just so much opportunity
in this problem class of taking data from a bunch of different
places and then unifying it behind one API stream or one event stream. And I think you kind of
alluded to this as well. I think specific applications of that class of problem exist
in defense, like with sensor fusion. You've got a lot of sensors everywhere, various sort of hardware
assets, various software assets, antennas, etc. None of those signals really talk to each other. They're picked up and interpreted by different
entities and different products. How do you sort of bring all of that together to create an alert
map or just draw a signal from some of that noise? I think another application of that is in
unified APIs. I mean, Merge.dev has been doing really well. A lot of companies in our portfolio use them.
How do you build companies like Merge.dev
in various other industries,
like for ERP or for marketing or for other areas?
Because the world's going to run on APIs.
And in some industries, that hasn't already happened,
especially the ones dealing with hardware
or embedded environments.
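The "unify data behind one event stream" pattern described above can be sketched as a thin normalization layer. A hedged illustration: the two source formats and the unified envelope are invented for the example, not Merge.dev's actual API or any real sensor protocol.

```python
from datetime import datetime, timezone

# Two illustrative upstream formats that don't "talk to each other".
radar_reading = {"ts": 1704067200, "kind": "radar", "contact_id": "R-17", "db": -42.0}
crm_event = {"occurred_at": "2024-01-01T00:00:00+00:00", "source": "crm", "record": "lead-9"}

def normalize(event: dict) -> dict:
    """Map heterogeneous source events onto one unified envelope."""
    if event.get("kind") == "radar":
        return {
            "time": datetime.fromtimestamp(event["ts"], tz=timezone.utc),
            "source": "radar",
            "subject": event["contact_id"],
            "payload": {"signal_db": event["db"]},
        }
    if event.get("source") == "crm":
        return {
            "time": datetime.fromisoformat(event["occurred_at"]),
            "source": "crm",
            "subject": event["record"],
            "payload": {},
        }
    raise ValueError(f"unknown event shape: {event}")

# Consumers now read one stream with one schema, in time order.
unified = sorted((normalize(e) for e in [crm_event, radar_reading]), key=lambda e: e["time"])
for e in unified:
    print(e["time"].isoformat(), e["source"], e["subject"])
```

The "draw signal from noise" step (alerting, correlation) then only has to handle one schema instead of one per sensor or SaaS tool.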
But I think that can happen now. And then I think the third has been we're just seeing really interesting, you know, life sciences applications and healthcare IT applications. And so drug discovery is one that we mentioned. I think diffusion models are very well suited to doing drug discovery and novel proteomics research. We've spent a bunch of time thinking about healthcare, which has historically
been really, really hard to penetrate as a new software entrant. Are there things that you can do
to bring AI into healthcare, whether that's patient interfaces or monitoring patient outcomes,
whether that's sort of the back office automation stuff, running a care provider,
maybe it's on the insurer side. So we're thinking a lot about these very verticalized applications of new approaches to AI and ML. Yeah, you mentioned around the world
running on APIs. And we talked earlier about how in a lot of industries, there hasn't really been
an investment in technology. People have tried to bring tech to healthcare for a long time,
and it's just been a non-starter for a lot of companies. And I think when you're in the valley,
it's easy to think everybody's running on Kubernetes in the cloud, and they're all using
Databricks and Snowflake and stuff like that. But the reality is most of the world doesn't
really operate that way. And I was actually in an event last week, and it was an API event.
And I remember there was a leader of a bank
somewhere in Europe talking about how their big innovation
was they were now investing in APIs.
And that's kind of the state of the world
for a lot of companies.
So I think if we can leverage AI systems
to bring technology on top of these,
you know, companies that haven't really been able to really invest in technology, it's going to have a massive impact and lead to a lot of opportunity in terms of, like, innovation, as well as, hopefully, improved experience for consumers and businesses.
Yeah. I mean, we held a dinner with a number of senior bank officials
that work in either security or engineering departments.
And they told us that before they'd have to go find a specialized set
of services and contractors who would basically upgrade mainframes for them,
mainframes written in COBOL or even FORTRAN in some cases.
And now they still have all of those servers,
but they've been able to make in these innovation sprints
OpenAI's APIs work in translating and upgrading
some of these systems,
which like today is really hard to do.
I mean, there aren't many people left
who know how these systems were architected back in the 70s.
And so the fact that ChatGPT can do that for you now,
or maybe other specialized models, is pretty incredible.
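The legacy-translation workflow described above can be sketched as a thin wrapper around an LLM API. A hedged sketch: the prompt wording, the COBOL snippet, the model name, and the `build_prompt` helper are all illustrative, not any real tool. The network call is isolated in its own function so the prompt construction can be shown without an API key.

```python
# Sketch of using an LLM to translate legacy code, as described above.
LEGACY_SNIPPET = """\
IDENTIFICATION DIVISION.
PROGRAM-ID. ADD-TWO.
PROCEDURE DIVISION.
    ADD 1 TO WS-COUNTER.
"""

def build_prompt(source_code: str, source_lang: str, target_lang: str) -> str:
    """Assemble a translation prompt for the model (hypothetical helper)."""
    return (
        f"You are an expert in {source_lang} and {target_lang}.\n"
        f"Translate the following {source_lang} program into idiomatic "
        f"{target_lang}, preserving behavior and adding comments:\n\n"
        f"{source_code}"
    )

def translate(source_code: str) -> str:
    """Send the prompt to a chat-completions style API (needs a key; not run here)."""
    from openai import OpenAI  # imported lazily so the sketch runs offline
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user",
                   "content": build_prompt(source_code, "COBOL", "Java")}],
    )
    return resp.choices[0].message.content

print(build_prompt(LEGACY_SNIPPET, "COBOL", "Java")[:60])
```

In a real migration the model's output would still need review and tests; the hard part, as noted above, is that few people remain who can verify behavior of these 1970s systems.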
Yeah, absolutely.
You know, basically, it may be ChatGPT that keeps holding up
all the COBOL and Fortran systems for the next 50 years,
as essentially we run out of people available that have that expertise.
So as we start to wrap up,
is there anything else you'd like to share?
And if people want to learn more
about what's happening at BCV Labs,
like where should we point them?
Yeah, definitely.
If you're someone who is thinking about new ways
to apply AI research,
or you've been thinking about a specific problem
or a workflow for a very long time, or you just want to come hang out and meet some of the community, you can always stop
by in Palo Alto. Email us at bcvlabs at baincapital.com. And we'll be very happy to spend
some time with you and help you figure out what's next. Awesome. Well, thanks so much for coming on
Software Huddle. I thought this was really interesting, really fascinating to see everything
that you have going on at Bain Capital and BCV Labs.
And I'm excited to see the companies
that come out of this incubator.
Yeah, we can't wait to take the wraps
off some of them, Sean.
We're going to be announcing
quite a few things next year.
So stay tuned.
And always a fun chat.
Awesome.
And yeah, and hopefully
we can have some of those founders
join us on the podcast down the road.
Definitely. All right, man. It was good to talk to you.
Yeah. Thanks. Cheers.