The Prof G Pod with Scott Galloway - First Time Founders with Ed Elson – The AI Company That Codes For You
Episode Date: August 4, 2024Ed speaks with Varun Mohan and Jeff Wang from Codeium, an AI code generator. They discuss the importance of being a lean company, how their product stacks up against competitors and why having a level... of paranoia has been imperative to their success. Follow the podcast across socials @profgpod: Instagram Threads X Reddit Follow Ed on Instagram and X Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Support for this show comes from Constant Contact.
If you struggle just to get your customers to notice you,
Constant Contact has what you need to grab their attention.
Constant Contact's award-winning marketing platform
offers all the automation, integration, and reporting tools
that get your marketing running seamlessly,
all backed by their expert live customer support.
It's time to get going and growing with Constant Contact today.
Ready, set, grow.
Go to ConstantContact.ca and start your free trial today.
Go to ConstantContact.ca for your free trial.
ConstantContact.ca
Support for Prof G comes from NerdWallet. Starting your search for a better credit card? NerdWallet lets you compare over 400 of them. Head over to nerdwallet.com forward slash learn more to find smarter credit cards, savings accounts, mortgage rates, and more. NerdWallet, finance smarter. NerdWallet Compare Incorporated, NMLS
1617539. Scott, have you ever made a significant pivot at one of your businesses?
And if so, how did you know it was the right time to make that change?
Yeah, I'm not sure there is a significant business that's ever been built without something resembling a pivot or iterating the business strategy.
So my first firm, Prophet Brand Strategy, was initially Prophet Market Research.
And we used to go out and do surveys
with the internet and computers and try and find a different way to collect data. And what we found
is that what brands and clients appreciated was the interpretation. So we turned to Prophet Brand
Strategy and we became a consulting firm. At L2, we were totally focused on the luxury space. And
then P&G called us and said, would you ever do this for P&G? And the name of the company was Luxury Lab. And I hung up the phone and said, our new name is L2.
And we pivoted into consumer products. At Section, my online education startup,
we initially thought we were going to be the Netflix of business, that it would be short-form videos for B2B.
But it was going to be so expensive to produce that kind of content,
and so much of that content was available elsewhere,
that we pivoted straight into online education,
focusing on upskilling the enterprise for AI skills.
So I don't think I've had a business
where we didn't pivot.
I think you just look at the data
and you are thoughtful about what are the opportunities, and there's nothing like
facing the enemy. There's just no market research like launching a business and seeing what people
are actually willing to pay for to inform your decision-making. And a lot of times,
clients will come to you and say, we'd love it if you could do this. So you'll get signals from the market. And what I would
suggest is have a solid board and have your kitchen cabinet of people that you can propose
stuff to and bounce stuff off of them. And also talk to your colleagues and your employees. I'm
trying to think if we've done a pivot here. I guess, you know, we'd had our adventures in television. Now, I wouldn't say we're all in on podcasts, but I think we're devoting the majority of the human capital of Prof G Media to podcasts.
So the market is an unbelievable muse and advisor.
And you just want to surround yourself with smart people who can help you interpret the data and make sure that you're not, you know, just speaking to yourself.
It's very hard to read the label from inside of the bottle sometimes.
Welcome to First Time Founders.
One of the most promising use cases for AI is code generation,
that is, doing the work of a software engineer.
Already in the US, an estimated 9 in 10 software developers are using AI coding tools.
My next guest created one of those tools.
And less than two years after launch,
it's already one of the most popular AI coding assistants in the world.
Last year, they had less than 1,000 users.
Today, they have more than 600,000. Now, after raising a $65 million funding round at a $500 million valuation, they're looking to take over the industry.
Next up, compete with the likes of Microsoft and OpenAI.
This is my conversation with Varun Mohan, CEO and co-founder of Codeium, and Jeff Wang, Codeium's head of business.
Varun, Jeff, welcome.
Thanks for having us.
How was the flight? You just got in today, right?
Yeah, we took a red-eye and we, I think, slept one hour each, maybe.
Oh, my God.
And then when we got to the hotel earlier today, maybe we slept another hour.
So we're in perfect condition to do this podcast.
New York hotels are super fun.
You get like this matchbox room.
You know, you're right next to the wall wherever you are.
Yeah.
Perfect.
Well, I appreciate you being here and appreciate you joining despite getting one hour of sleep last night.
It's like two, I suppose.
Yeah, yeah.
Good enough.
So you guys are the second AI company that I've had on this podcast. The previous one I had was an AI for
finance company, more or less replacing bankers, though his argument is that he's not replacing
bankers. I'm just going to start this with the same question that I asked him, which is,
are you guys at Codeium replacing programmers? Our vision is actually to give developers
the ability to dream bigger.
I know that that sounds very vague,
but I think there's one way of looking at it
that we could just go out and replace
like low-skill labor or low-skill developers.
But I think that that's not a very rich idea.
We want to take the best developers
and make them 10 times as leveraged.
And the reason why we think that this is only going
to lead to more developers existing is unlike other professions, there's no limit to the amount
of technology the world can actually consume. Yeah, right. I don't think you would ever just be
like, hey, guys, stop making technology. If we can actually make it so that the next great invention
happens 10 times faster, we will just get 10 times the amount of invention in the world.
So are programmers generally fans of yours, would you say?
I hope so.
Yeah.
I mean, our message is not to replace developers, I would say.
The way Varun actually got me on board was he said like,
of everyone that touches the product, almost half continue using it.
So when you have that, you know there's something there, you know there's some value, right?
And some excitement.
Is that higher within the industry?
I mean, 50% retention,
basically, of the customers.
Do we know what it's like
for other AI tools
or is it too early?
Yeah, I think for a lot of consumer products, if you can get retention above 10% at the long tail, that is considered very good.
That's largely because consumers, unlike companies, if they get bored of something, they throw it away very quickly.
And these products are not collaborative also.
This is like a single player product.
So it's actually very easy to churn off of the product because you aren't chatting with other people.
So it does mean it is providing a lot of leverage and developers stay more in flow state using products like Codeium. But yeah, hopefully we can make it even better.
I mean, you're up against a lot of different tools. I mean, I feel like code generation was one of the first things that people said, oh, AI is going to take over this thing. But I feel like the biggest competitor in your space is GitHub Copilot. What does the competitive landscape look like
in the AI code generation space?
There are reasons why we are able to compete
directly with Copilot.
One of them is that we have full repo context awareness.
So as the user is typing,
the results are highly personalized.
And we're seeing a 30% to 40% boost in accuracy
just having that code base be available
and giving tools for the user to
point to what they're working on and trying to figure out their intent.
So just the quality of our code suggestions is very competitive.
And then the thing that we're really kind of trying to lean in on, though, is our ability
to deploy onto a private server.
And people can host Codeium inside their company or in their work environment.
And for example, companies in the defense, finance, or healthcare space
often can't use Copilot at all. And that's where we are focusing our efforts right now,
but then we have some things down the pipe where we'll just be competitive even on the cloud front
too. What does it look like for a programmer? I mean, it sounds like you're sort of typing in
and then it gives you a list of auto suggestions
for what comes next.
What does it actually look like if you're a programmer?
And please explain it to me as if I'm five
because I don't program, I'm not a coder.
Yeah, so a little bit about like the way software
sort of gets built.
So developers write code in what's called an IDE, an integrated development environment. It's this application that enables you to debug code. So if there are bugs, you could
run it, you can actually see what the errors are, iterate on it, right. And then right before the
code gets pushed into production, it goes through a review process. And other people in the company
take a look at the code and actually review it. And then after that, it goes and it actually gets deployed into production. And it's on a website or whatever,
where end users can actually touch the product in the end. And right now, where Codeium is mostly
focused is in the IDE. So that's where developers actually write the code. It provides value in
multiple ways. So as developers are writing code, it passively starts filling in more and more code.
And because of the fact that we actually do train our own models for that passive AI,
we actually found that around 50% of all software that is getting committed
by a developer is actually generated by Codeium and accepted. So that's the amount of leverage that
just autocomplete is providing to end users. But to add to that, we also provide a couple of other pieces of functionality that is super
valuable despite what level you are as a programmer.
So you can even chat with your code base.
And this probably seems like a very basic piece of functionality, but when you're a
new developer and you're coming into a company and you have millions of lines of code, it
takes a while to actually onboard onto that new code base. And what we're finding is even at the largest enterprises,
the time it takes to onboard onto a code base goes down from four to six months to four to six weeks
with a product like Codeium.
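To picture the passive autocomplete loop Varun describes, here is a minimal, hypothetical sketch. The lookup table stands in for a real model with full-repo context, and every name in it is invented; the point is only the shape of the loop: the editor pauses, the assistant proposes ghost text, and the developer accepts or dismisses it.

```python
# Toy sketch of a passive-autocomplete loop. TOY_SUGGESTIONS stands in
# for a real model call with full-repo context; all names hypothetical.

from dataclasses import dataclass

TOY_SUGGESTIONS = {
    "def add(a, b):\n    ": "return a + b",
    "for user in users:\n    ": "print(user.name)",
}

@dataclass
class CompletionStats:
    shown: int = 0
    accepted: int = 0

    @property
    def acceptance_rate(self) -> float:
        return self.accepted / self.shown if self.shown else 0.0

def suggest(prefix: str) -> str | None:
    """Propose ghost text for the cursor position, given the code before it."""
    return TOY_SUGGESTIONS.get(prefix)

def on_pause(buffer: str, stats: CompletionStats, accepts: bool) -> str:
    """One editor event: show a suggestion, then accept (Tab) or dismiss it."""
    suggestion = suggest(buffer)
    if suggestion is None:
        return buffer
    stats.shown += 1
    if accepts:
        stats.accepted += 1
        return buffer + suggestion  # suggestion becomes committed code
    return buffer

stats = CompletionStats()
print(on_pause("def add(a, b):\n    ", stats, accepts=True))
print(f"acceptance rate: {stats.acceptance_rate:.0%}")
```

The accept/dismiss signal at the end is the same kind of feedback the founders describe using later in the conversation to improve the product.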
I want to focus on how you started this company. Varun, you were a software engineer at a self-driving vehicle company.
So you were kind of working on AI there, I feel like.
And you originally had an idea to start a company not for writing code with AI,
but for something called GPU virtualization.
Completely different company from Codeium.
It also had a completely different name.
The name was Exafunction. Could you explain the first iteration of this company
before it became what is now known as Codeium?
So to add a little color there,
so I graduated from MIT,
worked at this company called Nuro.
It's an autonomous goods delivery company.
And actually a lot of the learnings
that I had from autonomous vehicles
are actually making their way
into the space we're in right now.
And maybe to add some
clarity on that: in 2015, TechCrunch basically wrote, this is the year of AVs. And now, in 2024,
the quote is, is this the year of AVs? And you can see how there are probably going to be a lot
of parallels to generative AI where we are going to severely overestimate what is going to happen
in a year. And one of the cool parts about generative AI is how easy it is to make a demo,
but it is tremendously hard to make something production ready.
And if you make a claim that you are going to get rid of a developer,
that is a massive, massive claim.
And in fact, I would actually argue that is a harder problem than autonomous vehicles.
Because ultimately, if you look at autonomous vehicles,
all you need to do is press the accelerator or decelerator or turn a steering wheel.
Think about the number of different things a developer actually needs to do.
So I actually led a team to build large-scale deep learning infrastructure.
So how do you run these models at scale?
And I sort of left the company with this vision that deep learning and the idea of running these
large models was going to affect many, many industries.
And we had a small team of people.
We had eight people managing upwards of 10,000 GPUs.
We managed close to 20 to 30% of an entire data center.
And we worked with a lot of these
large autonomous vehicle companies
when we started this company, Exafunction,
because our mission was,
how do we make it easier to run deep learning models?
But what ended up happening,
and this is where startups can always get disrupted.
And it sounds silly to get disrupted. We were making seven figures in ARR, but we realized
actually most of the models would probably become these transformer-based models. And these are the
models that underpin the GPTs, the models that OpenAI has. What is a transformer-based model?
So the basic idea is, we now know of prompting, right? You know, you use ChatGPT, you pass in a prompt and notice how it streams tokens one at a
time, right?
That's actually a property of these models that are called transformer models, that they
are what are called autoregressive.
They actually generate one token at a time.
And this is very different than a lot of other classification tasks in the past, where you
pass in an input and it just gives you the entire answer all in one shot.
But this actually like slowly generates the entire thing, sort of one word, one token at a
time. And we started noticing that that was how a lot of models were starting to look
once OpenAI came out with GPT-3. And the truly crazy, beautiful thing about the model is that,
because of the way it is trained, it learns in an unsupervised way. So one of the things that was
very different
about models in the past is you needed a lot of labeled data.
But these models are trained on the entire public internet.
The labeled data is the internet itself.
So because of that, you suddenly got these models
that could take in basically trillions of tokens
of code or text.
And this was not possible in the past,
and this created these new sort of generative models.
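As a rough, hypothetical illustration of the autoregressive property Varun is describing, the loop below generates one token at a time, with each step conditioned on everything generated so far. The bigram table is a stand-in for a transformer; a real model predicts a probability distribution over the next token from the entire context, but the loop shape is the same.

```python
# Toy autoregressive decoding loop. The "model" is just a bigram lookup;
# a transformer attends over the whole context, but likewise emits one
# token per step. Training is "unsupervised" in the sense that the label
# for each step is simply the next token of the raw text itself.

BIGRAMS = {
    "<start>": "the", "the": "model", "model": "streams",
    "streams": "tokens", "tokens": "one", "one": "at",
    "at": "a", "a": "time", "time": "<end>",
}

def next_token(context: list[str]) -> str:
    """Predict the next token from the sequence generated so far."""
    return BIGRAMS.get(context[-1], "<end>")

def generate(max_tokens: int = 20) -> list[str]:
    tokens = ["<start>"]
    for _ in range(max_tokens):
        tok = next_token(tokens)  # each step conditions on prior output
        if tok == "<end>":
            break
        tokens.append(tok)
    return tokens[1:]

print(" ".join(generate()))  # -> "the model streams tokens one at a time"
```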
And in the middle of 2022, we had this business that was a GPU virtualization business. The idea was we made it simpler to run applications on GPUs. And we found out that
most applications would probably be these transformer models. And if all of what we
were doing was running transformer models, we would largely become a commodity because it
would become a race to the bottom, right?
It would be the equivalent of asking,
like they would ask Varun,
how cheaply can you run this model?
I'd say, I can do it for a dollar.
Then they would ask Jeff how cheaply he can do it.
He'd say 50 cents.
And we'd go back and forth until no one made any money.
And this is the commodification of this entire space.
But what we did see was we felt that this technology
would be like the early coming of the internet.
There would be a brand new set of applications that would be created. And we were early adopters of GitHub's product, GitHub Copilot. And we thought that that was just scratching the tip of the iceberg of what the future would look like. And that's where Codeium sort of came about. But it was, you know, as you can imagine, a very rough experience, because we basically said bye to all the revenue that we had, and we had to start all the way back down at zero.
So, I mean, that to me is like the mother of all pivots,
where, you know, not only are you changing the entire business as you know it,
you're also doing it at a time
when things are going really well.
I mean, you said seven figures ARR.
My understanding is that you'd also raise
$22 million for this company.
Like, things are going right.
And then you turn around and you tell all your employees,
actually, scrap that.
We're going to do a whole different thing.
If I were a software engineer who worked at Big Tech
and had quit to go work at Exafunction,
and the CEO told me that,
I'd be a mixture of pissed off, freaked out, concerned.
How did you rally the team and how were you able to make that pivot so successfully?
So I'll say a couple of things about the composition of the team. Largely,
they were people that we knew. And that's actually very important because they would be
people that would go into the trenches with us. They were people that knew the caliber of people
that both me and my co-founder were.
And also on top of that, we picked a problem space
where we were all passionate about it.
And I 100% knew at the time,
there were products like Midjourney that were taking off.
You know, a small team of people,
eight people that were making tens of millions
of dollars in revenue.
And frankly speaking, when we decided to pivot the company, we knew for a fact, none of us were that passionate about image
generation, despite the fact that it is a very cool area. And if we had picked it, our team
wouldn't have had, I guess, the mental fortitude to dig deep enough into the problem
space. And then I guess the third part is that we were able to take a lot
of the infrastructure expertise that
we had as a company to actually go out and build the application significantly faster. We were very
quickly able to train our own models and run them at massive scale. And right now, Codeium is one of
the top five largest generative AI apps in terms of text in the world. And that largely is because
the original composition of the team was these people who are effectively GPU infrastructure experts. But all said and done, everything that I said, it comes down to, you need a very truth-seeking
company. And at the time, even if we were at seven figures in revenue, I did not know how we would
10X the amount of revenue. And we could continue to lie to ourselves and have a slow but certain
death as a company, as the technology commoditizes, right?
And we all run the same kind of models. Or we could just say, hey, there is a high probability
that we will die, but there's a space that we could be very passionate about, and it could be
very big. I think we just decided the latter was the more rational choice. It is very hard to make,
but in retrospect, it was the obvious choice, right?
Did you have any data showing you
that you were going to crash and burn at that point?
Or was it just a hunch?
And part of the reason I asked that
is because if I were your investor,
I would want to be like,
oh yeah, yeah, it's very clear to me
that you guys have to pivot.
We were cashflow positive then.
You just had a feeling.
We had a feeling. It's just
because, and this is a little bit of a curse of being a venture-backed business.
If you're making millions of dollars in revenue, that is not a venture backable business. And if
we can't figure out a path to get that to a hundred, then that is not a business that we
could build. We could continue to keep it as an 8 to 10 person team. We have another
mentality in the company beyond being truth-seeking that we are a very lean company. By the time we
raised our Series B, we had barely spent our seed round. And I think that's just because we don't
think capital is a limiting factor in building a good business. You have to build a great product
that customers love. And that usually doesn't come from just having more money. We'll be right back.
We're back with First Time Founders.
One of the things you mentioned
is that you are training your own models.
I mean, this is an AI application,
but most AI applications that I am aware of,
they're purely building the application layer.
And that is, they're basically using someone else's model,
usually OpenAI's,
and they're tweaking and they're building off of that model and creating their own application.
You guys are different.
It sounds like you guys are, I don't know what you'd call it, full stack AI from the model to the application.
Is that right?
That is correct.
Actually, even at the kernel layer, we've done some, like we've rewritten some code, even at the infrastructure layer, so that we could serve the models on top in an efficient manner.
And the reason we have to do that is because of the latency issues we talked about.
For example, if you are a passive AI and you take, let's say, one second to show the suggested code, people are just going to stop using that.
They don't want to get out of flow state and pause and wait for results. How many other AI startups are doing,
what should we call it, the full stack,
the infrastructure to application?
Are there many others?
I just, I mean,
off the top of my head,
I'm like Anthropic, like OpenAI,
but I guess they're mainly
kind of infrastructure layer, right?
I mean, isn't this super rare?
I think in my mind,
there's maybe a couple of unique things about code
that make it so that you can actually do this here.
And you're totally right.
Most of the large companies that are even successful
are largely built on top of an API.
But we genuinely felt to build a best-in-class app here,
we needed to become vertically integrated.
And for us, it was also not a complex thing for us to do
in that we have the technical talent inside the company
to actually go out and do that.
Maybe one of the unique aspects of code,
and why we can do this here,
is that code can actually be run.
Like let's say I am a legal AI tool
and I'm redlining a bunch of documents.
The only way to know if that is good
is for a human to go in afterwards
and actually take a look at it.
For code, you can actually,
if you make an edit to a code base,
you can actually run the code and validate it is doing the right thing
without a human in the loop at all.
And what that means is there are ways in which you can close the loop
in intelligent ways that you can actually,
if you specialize on that application and you are vertically integrated,
you can build an even better app for code.
And we've taken advantage of this in many, many ways.
And we realized if we didn't do this,
we'd be shooting ourselves in the foot.
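To make the "close the loop without a human" idea concrete, here is a minimal sketch under stated assumptions: propose_edit is a hypothetical stand-in for a model call, and the repo is assumed to already have a pytest suite. The shape is the point: generate a candidate edit, actually run the code, and keep the edit only if the checks pass.

```python
# Sketch of validating an AI-generated edit by running the code, with no
# human in the loop. propose_edit() is a hypothetical model call, and the
# test command assumes the repo already uses pytest.

import subprocess
from pathlib import Path

def propose_edit(path: Path) -> str:
    """Hypothetical model call: return proposed new contents for `path`."""
    return path.read_text().replace("TODO", "done")  # placeholder logic

def tests_pass() -> bool:
    """Run the project's own test suite and report success."""
    result = subprocess.run(["pytest", "-q"], capture_output=True)
    return result.returncode == 0

def apply_validated_edit(path: Path) -> bool:
    """Apply an edit only if the code still runs correctly afterward."""
    original = path.read_text()
    path.write_text(propose_edit(path))
    if tests_pass():
        return True  # the edit validated itself by running
    path.write_text(original)  # revert: no human needed to catch the miss
    return False
```

A legal redline has no equivalent of tests_pass(), which is the asymmetry Varun is pointing at: code is one of the few domains where the output can check itself.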
And I'll give you even a simple example
of how this manifests itself.
Right now, I mentioned this,
Codeium processes over 100 billion tokens of code every day,
which is over 10 billion lines of code every day.
If we pass that through OpenAI,
we would have gone bankrupt.
And there was a recent article from the Wall Street Journal.
And why is that? Sorry, because you'd have to pay for it.
Because it would be too expensive.
Exactly.
It would be too expensive, and not the best for our particular application on top of that.
And if you look, there was a recent article in the Wall Street Journal about how GitHub
Copilot was losing tens of dollars per user per month. And that's actually because even GitHub, Microsoft's product, is not vertically integrated.
They are relying on external models to build their application.
And we view that as, hey, the model and the product and the infrastructure are so critical
to delivering a great experience.
Why would we not have control over every piece of that?
My understanding is you guys don't use the actual data of your individual users. So how is the model getting
trained? Yeah. So we actually do sort of two different things and I'll let Jeff add on how
this affects our enterprises and the customers that use the product. So first of all, we use
permissively licensed code that is available on the public internet and we also attribute it on
generation time. So we actually take sort of copyright
and licensing very seriously as a company.
But on top of that, when we release product
from our user data, you're right.
We don't take the data from our users,
like let's say when they're auto-completing stuff
and copy that code and put it into our training set.
But we can see, hey, users are accepting
these types of suggestions more
and these types of suggestions less.
So we have preferences on what users and humans like.
And that actually informs us to actually build products
that are better and that people are more willing to use.
And then that actually has a virtuous cycle
in that now people are willing to try
more complex things on our product
because the easier things,
they have high confidence that they work.
And suddenly the frontier of what we are able to give the user increases more and more, largely because we have a product that is so beloved. We now have over 600,000 users that use our product.
Yeah, it's unbelievable. I think one thing we might have glossed over in the beginning was that Codeium made a very conscious decision to make the product free for individuals.
Yeah.
And if you think about it, you know, Varun just said Copilot loses $10 to $20 a month per user. That's after they've been paying a subscription
too, right? So having that infrastructure
background, being able to make
it efficient to deploy these models
and then giving it out for free for individuals
allowed us to build this very large user
base, probably the largest user base
for a coding assistant that's free.
The data here, you started out
last year with less than 1,000 users.
By the end of the year, you had half a million.
Yeah, and we have millions of downloads across all the plugins.
And the reason why that's very important is because of what we just talked about.
If we roll out multiple models, if we are changing the temperature or the thresholds here and there,
we are getting so many signals as to what the appropriate settings are.
And I think every hour we're getting like a million signals.
So we can run all these experiments on these free users
of all these models we train
to really, really get the best results
that you could possibly get.
And then only when we've validated,
like, okay, we've trained this model.
This is better than all the other ones we've trained.
These are the settings that make the best results.
Then we could deploy that to our on-prem
or enterprise users, right?
Because after we've deployed it, we can't really get that much more information from it. It can be hosted even in an air-gapped environment. So I think that's a big element of why we are successful at being able to deploy these models. Because if somebody's trying to start from scratch right now, how are they going to know that their model is good? How are they going to tweak the model without a very large user base? So that's part of the secret sauce, having that free user base.
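As a sketch of the experimentation loop Jeff is describing, consider the toy comparison below. The variants and the numbers are invented; the idea is only that each free-tier cohort sees a different model or setting and reports acceptance signals, and the winning combination is what gets promoted to the on-prem and enterprise builds, where feedback is scarce.

```python
# Toy version of picking which trained model/settings to promote, based
# on acceptance signals from free-tier users. All numbers are invented.

from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    temperature: float
    shown: int      # suggestions displayed to this cohort
    accepted: int   # suggestions the cohort accepted

    @property
    def acceptance_rate(self) -> float:
        return self.accepted / self.shown

variants = [
    Variant("model-a", temperature=0.2, shown=1_000_000, accepted=310_000),
    Variant("model-a", temperature=0.4, shown=1_000_000, accepted=290_000),
    Variant("model-b", temperature=0.2, shown=1_000_000, accepted=335_000),
]

# Promote the winner to enterprise / on-prem, where signals dry up.
best = max(variants, key=lambda v: v.acceptance_rate)
print(f"ship {best.name} @ T={best.temperature}: "
      f"{best.acceptance_rate:.1%} acceptance")
```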
Yeah. And it's still free, which is what I find pretty fascinating. And you said, I was just reading your mission statement, you are, quote, committed to having a free tier forever.
The natural next question is, you know, how is it going to get properly monetized? And
how are you going to maintain that free tier into perpetuity? I think people underestimate kind of
the demand for both just the on-prem instance, but also some of the Teams features we add on
the SaaS product. So we have a free individual tier, but if you create a team and onboard
users to it, that is a paid product. And there are things like analytics and actually bigger models and seat management that are
going to be better than the free product.
But we are committed to making the free product the best coding assistant out there, no matter
what.
So whatever other coding assistants come to market, we will make sure our free product
is still the best one.
We want to, you know, disincentivize others from entering, but we want people to
always have the option. We don't want them to
get forced to buy a Copilot, for example. And then maybe one thing to add to what Jeff said:
we monetize enterprises, right? So we have some of the largest Fortune 100s and
Fortune 1000s, with even over 10,000 developers, on our product. And those companies, obviously,
they want security guarantees. They want personalization for many, many repositories
that exist. And they also want support across all source code management tools.
Less than 10% of Fortune 500 companies are on GitHub Cloud.
And that is the competitor that we have.
That is GitHub Copilot.
And they have committed to making their product differentially better if you are on GitHub
Cloud, right?
So we want to take the approach of almost being Switzerland.
We don't care what programming language you write. We don't care what IDEs you use. We don't care where you store your source
code. And ultimately, we also just don't really care what seniority the developer is. We will
provide the maximum amount of leverage there. Whereas a lot of the larger players in the space
are focused on being tied to another brand. And the reason why we don't think that that makes sense is we think AI is such an up-leveler.
We think it deserves to be in a category of its own.
You recently raised $65 million
in a round led by Kleiner Perkins.
It valued you at half a billion dollars.
Congratulations.
What is that money going to be used for?
I think the way we would like to think about it is
we have ways of spending cash,
not only to train models,
but to also make it so that we can build
a better user experience for the end user.
But one of the cool things for us is
our product has such high ROI
that we think that there will be
a real payback period on that.
Enterprises and companies will see enough value there
that they will be able to eat the cost
that we will need to spend upfront. But also on top of that, we want to spend a lot on making
sure that we can become better partners for our customers. We're onboarding some of the world's
largest companies and we're onboarding tens of thousands of developers. That's going to take a
little bit of effort to make sure that we do that properly. One company that we're working with
right now that we have a multi-year engagement with, they account for 0.15% of all developers in the world, just that one company, right? So, you know, I'm a little bit of a different type of
founder in that I do not like the idea of spending money unnecessarily, but if it comes down to we
are doing it because we make our customers more successful and our users happier, we'll do it
any day. How have you guys restrained yourselves in terms of spending? Because, I mean, yeah, the story, the narrative in AI
has been that it is an arms race.
And the thing that I hear about in the venture industry,
or at least what AI founders are being told,
is just go out and raise a shit ton of money,
as much as is even possible.
One, because you just want to develop a war chest.
And two, you want to get the headlines. You know,
you want to be the AI company that's working on code generation, the AI company for finance, whatever
it is. So I guess sort of two questions for me here. One, is that accurate to your experience?
And two, how have you been so, you know, responsible in terms of spending while you see all of these headlines of other companies spending so much money on talent and training their models?
I think Varun mentioned earlier that we run very lean.
And the reason we're able to do that is we hire people that are generalists or ex-founders and they're capable of doing many job roles at once.
So our company size is probably like a little misleading.
It's probably way more effective, not just the infrastructure, but the headcount also.
What is the headcount?
So right now we'll be at about 55 by the end of the month, I think. And the thing is, the people
we hire are able to just slot into different roles almost at a moment's notice. It's like,
oh, we don't even have a marketing team. For example, the product's growth has been organic,
but we do want to do some marketing experiments to make sure that we are getting ourselves out there, just as an example.
And then people, you know, randomly, someone will be like, I have an idea.
Okay, go do it.
And then we all of a sudden have a Google ad strategy.
All of a sudden, we have a bunch of blog posts.
We're on podcasts like this one with you, Ed Elson.
But I think that my point is, part of the function of not spending the money is just
being very, I guess, practical about where the money goes and running lean.
And I think when there is a moment that says like, hey, we need to train a much bigger
model and we need to spend this much money, we're all for it, actually.
We actually are very conscious about what the ROI is of everything we do.
How do you maintain that culture of leanness and what would be your recommendation to other
companies that are maybe not as big as you,
but trying to be?
This is probably the hardest problem that we're trying to solve right now.
Yeah.
Because we want to hire people that are very good, very technical, maybe they're generalists,
like I said earlier.
But it's very hard to hire those people.
So actually, this is probably one of the things we focus on in the next month,
is what is our recruiting strategy?
How do we hire the best people?
Maybe part of that is getting Codium's brand name much more aware.
Maybe it's like a big push of user adoption.
Maybe we're just going to have to be scrappy and be very creative of how we hire people.
For example, I don't know if other companies are just like pinging every ex-founder on LinkedIn,
but we are, right?
So we are trying to scale with creative means.
One other thing is like, as a company,
I think, culture-wise, we have some cultural principles,
and run lean is one of them,
but who would want to say you don't run lean?
So I think, how do you actually live that out?
We are a five days a week in person
company. So we don't do remote work. We don't really do hybrid work either. So people see what
it's like to work at the company. And, you know, until very recently, our CTO was ordering snacks.
And that's not to say that's a great use of his time. It's more just that no one is too senior
to do some work to test a hypothesis. And we don't hire specialists at
the company until generalists outgrow that role. And I'll give you another example of this.
When we ran that GPU virtualization company, even though we were making money,
we never hired a sales rep. And that's not because I don't believe in enterprise selling. No,
we have a great VP of sales now at the company. And I think we might have, in terms of talent
density, one of the strongest enterprise sales teams in the world, actually. But why didn't we hire someone? I just
didn't believe that if we added one new person, I would be setting them up for success. Because
the reality is, if I could not get $1 of additional sales, I cannot expect someone else to get $10.
This is one of those things where we have a mentality of, we try to do it ourselves.
And then we try to eliminate ourselves from the role.
We give away our Legos.
We let someone else who understands the role much better take it over, but we don't do things prematurely.
And I think there's a tendency for people to believe that building a scalable organization is really valuable.
And I do see that you do want to build a scalable organization. But sometimes people get too excited about this notion of org building or fundraising
rather than the idea of having customers, having users, because ultimately people that
join our company don't care about how much money we raised, as long as we're going to survive.
And our customers genuinely don't care.
Let's look at it this way, right?
If you look at a company as big as JPMC that makes hundreds of billions of dollars, to
them, does it make sense if we raised 100 or 200 million?
It all looks like peanuts to them.
It's all like 10 basis points of the amount of revenue that they make a year.
So to them, what they really care about are companies of this size.
Are we the best partner for them?
Are we the best product to them?
And as long as we're laser focused on that, we should do whatever it takes to build that up.
We'll be right back.
We're back with First Time Founders. Sort of a more personal question. I mean, you started this company in 2021,
sort of just as the AI hype was bubbling up,
and you now find yourself at the epicenter of the hottest industry, and you are one of the hottest companies in the hottest industry.
Just a personal question for both of you.
How does that feel?
What has it been like
getting used to being the guy in AI? I told this to the company. I tell everyone,
just get ready to get destroyed. Assume that something very bad is going to happen always.
And this is where us having gone through that pivot is very critical. Things are going very
well for us as a company. A lot of the reason why we haven't spent a lot of money is now we make money, which is apparently a unique property in the space, where most companies talk about vision rather than actually building a product that people use. But I just, I tell
everyone, hey, get ready for something really bad to happen. And this is why it's like very
important that we hire people that are truly in it for the long
run.
And I tell people this when they join the company, I think we could be a company that's
worth over $100 billion.
I think we can.
And that's largely because the total addressable market of what we're building, and the amount
of impact that this can have given how important technology is, could be massive.
But also a series of bad decisions that we make could completely kill the company.
And that will happen very fast. I think this is where us building a very truth-seeking company,
and that actually is very hard because people want to believe that what they're doing is correct,
and they want to embrace psychological safety. And what I tell people is, hey, if you feel
something is wrong, lean into what you think is wrong and tell
everyone. Tell everyone, because we should not have pockets of people that want to report up the chain
and tell their manager, or tell me, that things are going fine. We have a very flat company.
I would much rather hear everything is on fire and have paranoid people at the company than people
who are just happily going to work. And this is why I think startups are so much harder than big companies. It's actually not that, you know,
you're taking a massive risk on the monetary side. You still can make a six-figure salary, right?
These are not people that are living hand to mouth. But the really hard part is the lack of
psychological safety. If a bunch of people at the company make a series of bad decisions,
the entire thing can go to zero. Whereas if you're at Google, you have very little accountability. If your team
doesn't perform well, Google makes so much money that you are a rounding error. You will be
shuffled to some other part of the company. You never really need to deal with the impact of your
decisions and consequences of your decisions. And that's why it's always a little bit of a
funny statement when someone at a big company is like, I work at a startup at a big company.
No, you don't. Imagine the idea of you potentially losing your job every quarter or every month.
Exactly. One thing, the way I think about it is radical transparency. And we have a lot of
conversations within our company of, you know, let's be super transparent about everything. And even in my personal life, I'm like, I'm just going to be totally up front. That's the cleanest, most hygienic way to operate.
Do you ever feel that there could be too much truth? Do you feel that there's a possibility that, you know, if you're
encouraging everyone to
tell the truth, tell the truth, be transparent, tell me everything that they'll kind of go
overboard? I think this is the hardest part about a startup. You know, the thing that
I say is a startup is really hard because you need to be both irrationally optimistic, because
if you're not optimistic, the answer is always Microsoft is going to beat you, right? Right.
It's the biggest company of all time. They have the most capital. They have the
most people. They have the most distribution. Why does any company win? But clearly that's wrong,
right? There are companies that have beaten Microsoft in different areas. We always say this,
oh, they have the most capital. They're going to win. Yes. It's very easy from my perspective to
say that. Exactly, right? And then, you know, I try to think about what Harvard Business School
would say 10 years from now about us as a company. And I tell people this. It's like, say we fail. What they write is,
in a world in which technology was changing, Microsoft had all the distribution in the world.
They had this property called GitHub. It was inevitable that they would win. And because of
that, they won. They won the entire space. And all of these startups, it was a fool's errand.
Why did they even try? And then in the world in which we win, they were going to write: in a world in which the technology was getting disrupted so materially, there was a set of companies, one of which was Codeium, that had such a technological advantage that there was no way a slow-moving gorilla like Microsoft could even compete. So knowing which one they'll write is very hard.
Yeah, exactly.
They will write whatever the future looks like and the history will be written by the victor and no one will know exactly what the pages of the book look like.
Right.
And I think the hardest part about a startup is you do need to be irrationally optimistic
to believe that you can win because by default, if you don't, you will definitely lose.
But then also uncompromisingly realistic, which is that sometimes, actually, there's no point continuing
in a particular direction. This is where I think it comes down to what you just said,
which is that if you are too truth-seeking, and everyone is paranoid all the time,
it could lead to paralysis. And I think there's a fine line there. There's a fine
line. One time we lost a deal, and we actually talked about it at dinner every day for two weeks
in a row, for multiple hours. We talked about the implications for us as a company, and that was very useful.
But if we extended that out to an entire year, we would not go anywhere. So you're totally right.
There's a level here. But what I do feel most companies do is probably err on the side of not being truth-seeking enough. They become too
complacent with what they're doing. And, you know, I think the thing to really think about,
and this is not true at a big company, which is why you really need to think about it at a startup
is you do not win an award for doing the wrong thing for longer. So the sooner you can rip the
bandaid off, the better your company will be.
You will be way happier that you did it. It is going to be incredibly painful for like a week,
but just do it. I love that. You guys have been generous with your time. So we'll
begin to wrap up here. Jeff, I'll ask you this question. What do you think has been the greatest
challenge that this company has faced in the past couple years? I think the pace at which
things switch is really challenging. We relied on some technology features to be the selling point,
and then our competitors would come out with, like, the same thing all of a sudden.
And every time that there's a news release for one of our competitors, we immediately go into
like a code red and go into a conference room, reverse engineer what they're doing. And this is constant. This is like every
month this is happening. The question is like, will this last forever? Like, will we just always
be panicking? And the answer is probably yes, right?
I think for people listening to this podcast, you two are the closest thing to AI experts that we've had.
You're kind of on the front lines of this. What would be your advice to anyone who is working in
the AI industry or who wants to work in the AI industry? And I think that doesn't just have to
be founders, but engineers, product managers, business operators, etc. What's the most important thing that they
should understand about AI right now? There's an interesting property about AI right now,
in that it is actually imperfect. You know, when you use the products, it sometimes says the wrong
thing. And despite that, it is actually very useful in some domains. That's not like anything
in the past. When you used the internet and you
ordered something off Amazon in 2002, it's not like they would ship you something incorrectly.
Or maybe they would do that, but there was no expectation going in that you would buy one
book and you would get a different one. And somehow that is still fine right now, which is a
cool part of the technology. And as it actually gets more perfect, it's going to usher in a
brand new set of applications as well.
But I think the key thing to really think about
is not to take a demo and work backward,
trying to build that today.
So think about what products can you build today
that are actually imperfect,
but still can generate a lot of value.
And that is a lot harder than you would think
because in a lot of domains, if you are
imperfect, like let's say you're reviewing a legal document and it actually completely
reviews the document, but 10% of the time it's wrong. You can't give that to an end customer.
So actually thinking about the trade-offs of how good does the quality need to be to ship your
product, right? How fast does the experience need to be? In a world in which the quality is not perfect, it better be fast. There's no world in which I'm going to wait 24 hours for
something and the quality is imperfect, right? And then if the quality is imperfect and it is fast,
how quickly can I correct it? And these are all important factors of a product.
If you don't hit the sweet spot here, you will have a product that's a cool demo,
but no one will ever use it.
I think this is the most important thing
to really think about.
Adding AI to just any field that exists
doesn't just suddenly make the product usable.
It needs to be a product that is useful in its own right.
I think if you give away a product for free
and everybody keeps using it
and a lot of people keep trying to get it,
then you know there's something of value and there's product market fit.
And I think if you look at a lot of these AI tools
and AI demos,
maybe they're not realizing that.
Maybe they're just building something
that is a cool demo
or it's something that really pushes the limits of the technology,
but it's not actually building something
that people want, right?
Or will find value from.
I think that's maybe our biggest message
to other founders
or other people that are building products in the area: just think, like, if I gave this away for free, is everybody going to want to use it and keep using it? I think that's maybe something people miss.
Do you think maybe that's the model? I mean, I was going to say for software founders, but maybe for all founders: that you should just start with giving the product out for free and see where things take you from there. It worked really well for ChatGPT, right?
Yeah. Yeah, exactly. And it worked well for you.
Yeah.
That's actually an interesting principle that Jeff just said. Obviously, in some scenarios,
the reason why ChatGPT could do that and OpenAI could do that is they own the infrastructure.
And if another company did that, they would have gone bankrupt. So you do need advantages
in particular places to be able to do that. But at the very least, if you did burn money and you gave it away for free,
if no one runs to use the product, you're probably in a world of trouble.
That's a great place to end.
Varun Mohan is the founder and CEO of Codeium.
Jeff Wang is the company's head of business.
Guys, thank you for joining us.
That was awesome.
Yeah, thanks for having us.
Thanks for having us.
Our producer is Claire Miller.
Our associate producer is Alison Weiss.
And our engineer is Benjamin Spencer.
Jason Stavers and Catherine Dillon are our executive producers.
Thank you for listening to First Time Founders from the Vox Media Podcast Network. Tune in tomorrow for Prof G Markets.