The Prof G Pod with Scott Galloway - First Time Founders with Ed Elson – This Physicist Is Building AI Droids
Episode Date: November 2, 2025. Ed speaks with Matan Grinberg, co-founder and CEO of Factory, an AI company focused on bringing autonomy to software engineering. They discuss the long-term future of AI, the role of regulation, and whether or not he's concerned about an AI bubble. Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
Support for this show comes from the Audible original, The Downloaded 2, Ghosts in the Machine.
Quantum computers, the next great frontier of technology, offering endless possibilities that stretch the human mind.
But for Roscoe Cudulian and the Phoenix Colony, quantum computing uploads the human mind with life-altering consequences.
Audible's hit sci-fi thriller The Downloaded returns with Oscar winner Brendan Fraser,
reprising his role as Roscoe Cudulian in The Downloaded 2, Ghosts in the Machine.
This thought-provoking sequel from Robert J. Sawyer takes listeners on a captivating sci-fi journey,
a mind-bending must-listen that asks,
what are you willing to lose to save the ones you love?
The Downloaded 2, Ghosts in the Machine.
Available now, only from Audible.
Welcome to first-time founders.
I'm Ed Elson.
Seven and a half billion dollars.
That is how much money has poured into AI coding startups in just the past three months.
And it's not that hard to see why.
Across the industry, developers are embracing generative AI to speed up their work.
It's efficient, it's impressive, but it's still under the careful watch of human engineers.
Well, my next guest wondered if AI could do more.
What if it could handle routine tasks like debugging or migrations on its own?
What if it could be autonomous?
To turn that idea into reality, he launched an AI startup,
which uses agents to handle the mundane work that developers would rather skip.
With a $50 million investment from Sequoia, JPMorgan, and Nvidia,
his company is reshaping the future of software development.
This is my conversation with Matan Grinberg,
co-founder and CEO of Factory.
All right, Matan Grinberg.
Thank you for joining me.
Thank you for having me.
How are you?
I'm good.
We should probably start off by saying,
we go way back.
We do, indeed, yes.
We're friends from college.
I knew you back in college when you were studying physics.
You were a budding physicist.
I mean, just for those listening,
Matan was basically the smartest guy I knew
in college and then you go on and you're, I know you were getting your PhD in physics and then
eventually you tell me, no, I'm actually starting an AI company. And now here you are and you're
running one of these top AI agent startups, figuring out how to automate coding. Let's just
start with like, how do we get here? How do we go from Princeton physics? Going to be a physicist
and then now you're an AI person. Yeah. So obviously that was a,
not the arc that I think I was expecting either. Probably goes back to 8th grade, which is why I got
into physics in the first place. Spite is a very big motivator for me. And in 8th grade, my
geometry teacher told me to retake geometry in high school. And I was like, screw that. Like,
what? Like, I'm good at math. I don't need to do that. And so in the summer between 8th and 9th
grade, my first order on Amazon ever was textbooks for algebra two, trigonometry, pre-calculus,
calculus one, two, three, and differential equations.
A true nerd.
Yeah, exactly.
And so I spent the whole summer studying those textbooks.
And going into freshman year of high school, I took an exam to pass out of every single one
of those classes, so I had credit for all of them.
And then I went to my dad, and I was like, what's the hardest math?
And he said string theory, which is actually physics, it's not math.
And I was like, okay, I'm going to be a string theorist.
And then basically for the next like 10 years of my life, that was all I really cared about.
I didn't really pay attention much to anything about like finance, entrepreneurship, like
anything like that.
Went to Princeton because it was great for physics, then did a master's in the UK,
came to Berkeley to do the PhD.
And at Berkeley, it finally dawned on me.
Wait a minute.
I was just studying for 10 years, like 10-dimensional black holes and quantum field theory
and all this stuff, originally because of this, like, spite.
And obviously, I came to love it,
but I realized that I didn't really want to spend my entire life doing that.
Taking 10 years to realize that is a little bit slow,
but I had a bit of an existential crisis of, you know, like, what is it?
What should I do?
Almost joined Jane Street in a classic, like, ex-physicist, like, what should I do?
Decided not to because I feel like that's the thing, like, you know,
once you go there, you kind of don't move on from that.
So I ended up taking some classes at Berkeley in AI,
really fell in love in particular with what was called program synthesis.
Now they call it code generation.
And the math from physics made it such that like jumping into the AI side
was relatively straightforward.
Did that for about a year and then realized that the best way to pursue
code generation was not through academic research, but through starting a company.
And so then the question was like, okay, well, I know nothing about entrepreneurship.
I've been a physicist for 10 years.
What should I do? And this was just after COVID, but I remember on YouTube, in my recommended
algorithm, I saw a podcast on Zoom with this guy whose name I remembered from a paper that I wrote
at Princeton. This guy used to be a string theorist, but it was a podcast, and it was like,
a Sequoia investor, like, talks, you know, everything from, like, crypto to physics. And I was
like, what the hell is this? And I remember watching the interview, and the guy seemed relatively
normal, like had social skills, which is rare for someone who had
published in string theory.
That was the other interesting thing about you is you're kind of a social person who's also
this physics genius, which again is quite rare.
So you found someone in common.
Yeah, so found someone who was like, okay, you know, maybe there's, there was someone else
who has this similar background.
And I remembered the name correctly, and so I looked him up and saw that he was a string
theorist who ended up, you know, getting his degree, then joining Google Ventures, being one
of the first checks into Stripe, then one of the first checks into, like, SpaceX, on the way
he had built and sold a company for a billion dollars to Palo Alto Networks. And I was just
like, this is an insane trajectory. So I sent him a cold email. And I was just like, hey, I'm
Matan. I studied physics at Princeton, wrote a paper with this guy named Juan Maldacena,
who's like a very famous string theorist. And I was like, would love to talk. And,
That day, he immediately replied and was like, hey, come down to our office in Menlo Park.
Let's chat.
What was supposed to be a 30-minute meeting ends up being a three-hour walk.
And we walk from Sand Hill all the way to Stanford campus and then back.
And funny enough, on the walk, so we realized that we had a lot of very similar reasons for getting into physics in the first place,
similar reasons for wanting to leave as well.
And this was in April of 2023, so just after the Silicon Valley Bank crisis.
And also very soon after the...
the Elon Twitter acquisition.
And after the conversation,
he was basically like,
Matan, you should 100% drop out of your PhD.
And you should either join Twitter right now,
because if you voluntarily go to Twitter, of all times now,
that's just badass, it looks great, you know, on a resume,
or you should start a company.
And I knew what the answer was,
but didn't want to, like, corrupt what was an incredible meeting.
So I was like, okay, thank you so much.
I'm going to go think about it.
Good advice for meetings.
Don't give your answer right away.
Yeah, yeah.
Take some time, come back.
Yeah.
And so, crazy thing, the next day,
I go to a hackathon in San Francisco.
In this hackathon, I run into Eno.
We recognized each other at this hackathon.
We're like, oh, hey, like, you know, I remember you.
We ended up chatting and realizing that we were both really obsessed
with AI for coding.
And then that day, we started working on what would become Factory.
He had a job at the time.
I was a PhD student, so I could spend whatever time I wanted on it.
And over the next 48 hours,
We built the demo for what would become Factory, called up Shaun, and I was like, hey, I was thinking
about what you said.
I have a demo I want to show you.
And so we got on a Zoom.
I showed it to him.
He was like, this is all right.
And I was like, all right.
Like, I think this is pretty sick.
Like, I don't know.
And he's like, okay, would you work on it full time?
And I was like, yeah, 100%.
And he was like, okay, drop out of your PhD and send me a screenshot.
And I was just like, fuck it.
Okay.
So I go to the, like, Berkeley portal, like, fully unenroll and withdraw.
I didn't tell my parents, obviously.
Send him a screenshot and he's like,
okay, you have a meeting with the Sequoia partnership tomorrow morning,
like be ready to present.
Wow.
So now, back by Sequoia, you just raised your series B.
You are one of the top AI coding startups,
but there are a lot of AI coding companies.
We spoke with one a while ago,
which was Codeium, which eventually became Windsurf.
It got folded into Google in this kind of controversial situation.
Point being, there are people who are doing this.
What makes Factory different?
What made it different from the get-go, and what makes it different now?
Our mission from when we first started is actually the exact same that it is today,
which is to bring autonomy to software engineering.
I think when we first started in April of 2023, we were very early.
And what I've come to realize is that, and this is kind of a little bit of a trite statement,
but being early is the same as being wrong.
And we were wrong early on.
in that the foundation models were not good enough to fully have autonomous software development agents.
And so in the early days, I think the important things that we were doing were building out an absolutely killer team, which we do have.
And everyone that we started with is still here, which has been incredible, and having a deeper sense of how developers are going to adopt these tools.
So that was kind of in the early days.
And I think something that we learned, that still to this day I don't really see any other companies focus on,
is the fact that coding is not the most important part of software development.
In fact, as a company gets larger and as the number of engineers in a company grows,
the amount of time that any given engineer spends on coding goes down.
Because there's all this organizational molasses of like needing to do documentation
and design reviews and meetings and approvals and code review and testing.
And so the stuff that developers actually enjoy doing, namely the coding,
is actually what you get to spend less time on.
And then these companies emerge saying, hey, we're going to automate that
one little thing that you sometimes get to do that you enjoy; you don't get to do that anymore.
So your life as a developer is just going to be reviewing code or documenting code, which it just,
I think, really misses the mark on what developers in the enterprise actually care about.
And I think the reason why this happens is because a lot of these companies have, like, in their
composition, the best engineers in the world graduating from, you know, the greatest schools,
and they join startups.
And at a startup, if you're an engineer, all you do is code.
And so there's kind of this mismatch in terms of empathy of what the life of a developer is.
Because, you know, if you're a developer at one of these hot startups, yes, coding, speed that up, great.
But if you're a developer at some 50,000 engineer org, coding is not your bottleneck.
Your bottlenecks are all these other things.
And with us focusing on that full, like, end-to-end spectrum of software development,
we end up kind of hitting more closely to what developers actually want.
Microsoft, I know, I think Satya Nadella said something like 30%
of code at Microsoft is being written by AI right now.
I think Zuckerberg said that he's shooting for, I think,
half of the code at Meta to be written by AI.
You're basically saying what software developers want
is not for someone to be doing the creative part,
but they want someone or an agent or an AI
to be doing the boring drudge work.
What does that drudge work actually look like?
You said sort of reviewing code,
documenting code, in what sense is factory addressing that issue?
Even the idea of, like, 30% of code is AI-written, I think it's a very non-trivial metric
to calculate, because if you have AI generate, like, 10 lines and you manually go adjust
two of them, do those two count as AI-generated or not?
So there's some gray there.
You think that they're kind of just throwing numbers out there a little bit?
It's just very hard to calculate.
And so even if you were trying to be as rigorous as possible, I don't know how you come up
with a very concrete number there.
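The gray area he describes can be made concrete with a toy attribution count. The line labels and the two counting policies below are hypothetical, just to illustrate why the same code yields different "percent AI-written" numbers:

```python
# Toy illustration of why "X% of code is written by AI" is ambiguous.
# Each line is tagged (hypothetically) by its original author and by
# whether a human later adjusted it.
lines = [
    {"author": "ai", "human_edited": False},
    {"author": "ai", "human_edited": False},
    {"author": "ai", "human_edited": True},   # AI draft, manually adjusted
    {"author": "ai", "human_edited": True},   # AI draft, manually adjusted
    {"author": "human", "human_edited": False},
]

def ai_fraction(lines, count_edited_as_ai):
    """Fraction of lines attributed to AI under one attribution policy."""
    ai = sum(
        1 for ln in lines
        if ln["author"] == "ai" and (count_edited_as_ai or not ln["human_edited"])
    )
    return ai / len(lines)

# The two policies disagree on the very same code:
print(ai_fraction(lines, count_edited_as_ai=True))   # 0.8
print(ai_fraction(lines, count_edited_as_ai=False))  # 0.4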
But regardless, I think that directionally it's correct that the number of lines
of code that's AI-generated is strictly increasing.
The way that Factory helps, so I guess generally, like, the software development life cycle,
very high level looks like first understanding, right?
So you're trying to figure out what is the lay of the land of our current code base, let's say,
or our current product.
Then you're going to have some planning of whether it's like a migration that we want to do
or a feature or some customer issue that we want to fix.
Then you're going to plan it out, create some design doc.
You're going to go and implement it.
So you're going to write the code for it.
Then you're going to generate some tests to verify that it, you know,
is passing some criteria that you have.
There's going to be some human review.
So they're going to check to make sure that this looks good.
And then you might update your documentation.
And then you kind of push it into production and, you know, monitor to make sure that it doesn't break.
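The life cycle Matan walks through can be sketched as an ordered pipeline. The stage names follow the conversation; the handler functions are hypothetical stand-ins for whatever a team (or a droid) does at each step:

```python
from typing import Callable

# Stages of the software development life cycle, as described above.
SDLC_STAGES: list[str] = [
    "understand",   # map the current codebase / product
    "plan",         # scope the migration, feature, or fix; write a design doc
    "implement",    # write the code
    "test",         # generate tests against the acceptance criteria
    "review",       # human review of the change
    "document",     # update the documentation
    "deploy",       # push to production and monitor
]

def run_lifecycle(task: str, handlers: dict[str, Callable[[str], str]]) -> str:
    """Run a task through each stage in order; each stage transforms the work item."""
    artifact = task
    for stage in SDLC_STAGES:
        artifact = handlers[stage](artifact)
    return artifact

# Usage: trivial handlers that just annotate the artifact as it moves along.
handlers = {stage: (lambda a, stage=stage: f"{a} -> {stage}") for stage in SDLC_STAGES}
result = run_lifecycle("ship mobile feature", handlers)
print(result)
```

In an enterprise, the point is that every one of these handlers can block for weeks on meetings and approvals, which is why the stages outside "implement" dominate an engineer's time.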
In an enterprise, all of those steps take a really, really long time.
because there's, you know, the larger your org, if it's 30 years old,
there are all these different interdependencies.
And like, imagine you're a bank
and you want to ship a new feature to your, like, mobile app.
There are so many different interdependencies
that any given change will affect.
So then you need to have meetings
and you need to have approvals from this person.
And this person needs to find the subject matter expert
for this part of the code base.
And it ends up taking months and months.
And so where Factory helps is a lot of the things
that don't seem like the highest leverage
are what they spend a lot of time on.
So, like, that testing part, or the review process, or the documentation, or even the initial
understanding.
I cannot tell you how many customers of ours have a situation where there was, like, one
expert who's been there for 30 years who just retired.
And so now there's, like, literally no one who understands a certain part of their codebase.
And so getting some new engineer to go in and do that, there's no documentation.
So now that engineer has to spend six months writing out docs for this, like, legacy codebase,
which is, you know, engineers spend years of their lives becoming experts.
The highest leverage use of their time is not writing documentation on existing parts of the codebase.
In this world where an org has Factory fully deployed, that engineer can just send off an agent.
Our agents are called droids.
So send off a droid to go and generate those docs, ask it questions, get the insight as if it was a subject matter expert that's been there for 20 years.
So they can go and say, okay, here's how we're going to design a solution.
Here's how we're going to fix whatever issues at hand.
These droids, your agents that you call droids, I think one of the big differentiators that
I've seen is that they are fully autonomous. They're doing basically everything on their
own. In contrast to something like Copilot, which is by definition working alongside you
to help you figure things out. You guys are saying, no, these things can be completely on their
own, totally autonomous. Literally, you've got robots just doing the work for you. Why is that
the way to go with AI?
At a high level,
so this is true for code,
but I would also say
for knowledge work more broadly.
But for code in particular,
we're going from a world
where developers wrote 100% of their code
to a world where developers
will eventually write 0% of it.
And we're basically changing
the primitive of software development
from writing lines of code,
writing functions, writing files,
to the new primitive being a delegation,
like delegating a task to an agent.
And so the new kind of,
important thing to consider is, you know, you can delegate a task, but if it's very poorly
scoped, the agent will probably not satisfy whatever criteria you had in your head. And so if this
new primitive is delegation, your job as a developer is to get good at, how can I very clearly
define what success looks like, what I need to get done, what testing it should do, like,
what your organization's contributing guidelines are, let's say. And so with this as the new primitive,
the job of the developer is now, okay, if I set up the right guidelines and I tell this agent
to go, it now has the information it needs to succeed on its own. And this is very similar to
like human engineer onboarding. Like when you onboard a human engineer into your organization,
what do you do? You don't just throw them into the code base. You'll say, hey, here's what we've
built so far. Here's how we build things going forward. Here's our process for deciding on what
features to build. Here's our coding standards. So you have like a long onboarding
process, then you give them a laptop so they can actually go and write the code and test it and run it
and mess around with it before they actually submit it. And so we need to do similar things with
agents where we give them this thorough onboarding process. You give it an environment where it can
actually test the code and mess around with the code to see if it's working. And having that
laptop, now it has this like autonomous loop that it can go through where it tries out some code, runs
it. Oh, that failed. Let me go iterate based on that. Now, we don't have, like, fully autonomous
droids yet, but the point is that, giving people access to this, they can set up droids to fully generate
all of their docs for them. So now as an engineer, that's just something you don't need to worry
about, because that's not the highest leverage use of your time. Thinking about instead this
behavior change towards delegation, that's like the kind of biggest thing that we work with
enterprises on. I think delegation is the right word, but it's also kind of a scary word because
delegation implies, I mean, the way that we work today, you delegate to other people whose
jobs are to do all of the things that you're describing. There are some companies that say
AI is going to be your partner and work alongside you, you're saying, actually, no, this is just
going to do the work, i.e. it would replace people. And this is obviously a big debate in
AI, the automation debate, what happens to the four and a half million software engineers.
What is your viewpoint on this automation debate
and the idea that AI is going to take your job?
At a high level, I will say AI will not replace human engineers.
Human engineers who know how to use AI will replace human engineers who don't.
And I think the reason AI will not replace human engineers is because basically there's
like a bar for how big a problem needs to be in order for it to be economically viable
for someone to implement a software solution to it.
And suppose it used to be a billion dollars
and then slowly it's gone down to $100 million or $10 million.
Like these are like the TAMs of the problem
that makes it economically worthwhile
to build up a team of software engineers to work on a problem.
What AI does is it lowers that bar.
So now in a world where before,
you could only economically, viably solve a problem
that's worth $10 million.
Now maybe it's $100,000.
Now maybe it's like large enterprises
can actually make a lot of custom software
for any given customer of theirs.
It means that the leverage of each software developer goes up.
It does not mean that the number of software engineers goes down.
It would mean that if there was only one company in the world that had access to AI.
Because then they have access to AI.
They can use AI while no one else does.
And now they have way more leverage.
So they can beat their competitors while having fewer humans.
But the reality is now is if there are two companies and they're competing,
one has 1,000 engineers, the other has 1,000 engineers.
They both get AI.
So now they have the equivalent output of 100,000 engineers.
they're not going to start firing engineers
because now one company is going to be way more productive
than the other. They'll deliver a better product,
better solution, lower cost to their customers
and then they're going to succeed. So then
this other company is going to be incentivized
then to have more engineers, right?
Yes.
So I think that's one side of it.
I think the other is like
we have really bad
prediction on what we can do
with these tools. Because right now
humanity has only seen
what loosely 100,000 software engineers working together can build.
That might be like, let's say, the cloud providers.
Those are some of the largest engineering orgs, something that took, let's say, 100,000
engineers to build.
We don't even know what the equivalent of 10 million human software engineers could build.
Like, we can't even conceive of what software is so intricate and complicated that it would
take that many engineers to build.
And I kind of refuse to believe that 100,000 is the limit,
that there's no interesting software after that.
Yeah, I'm really glad you brought up the point of,
the danger here is that one company would own all of the AI.
Like the problem isn't value creation.
I mean, what we're describing is technology bringing the costs down
and therefore creating more incentives to build,
more value creation, which can only be a good thing
unless it is in some way hijacked
and you don't have a system of capitalism
where companies are really competing with each other
and forcing each other to iterate,
and also that includes many different players
who can participate in that value creation.
And when I look at the AI space right now,
just as an example,
when we interviewed and spoke with what is now Windsurf,
and I asked the founders this question of,
like, how do you compete with big tech?
And they explained how they're going to do it
and how they're going to take big tech on,
and then what do you know, Google buys them.
And I look at the same thing with, like, Scale AI, which was, you know, one of the biggest AI startups.
Alexandr Wang was this incredible thought leader.
And then, what do you know, he gets, I mean, they get an investment, which turns into he's now an employee at Meta.
And now he's building, you know, like meta social media AI.
And it all seems as though AI is being kind of overridden or taken over by big tech through
these investments, which then turn into these sort of acqui-hires, and it does make me concerned that
all of the power and all of the value is accruing into one place, and it's the same place that
we've had over the past 20 years. So how do you think about that? Do you think about this possibility
that maybe big tech comes in and says, we need your software, we need your people, we're going to
acquire you, and do you worry about that concentration of power and AI? I think it's a very top of mind
thing for people is like, even from the investor side, is it going to be the incumbents that win or
will it be, you know, insurgents or however you want to, you know, the startups that can come
and, you know, kind of claw their way into like surviving without acquisition. I think the answer is
always founder and company dependent. Like, I think some examples that come to mind are like Airbnb and Uber.
These are companies where there wasn't a very obvious gap in the market such that anyone could start
a company like Airbnb or Uber and just, you know, succeed. Like I think, you know,
it took a lot of very intentional and very relentless work in the face of tons of adversity
to actually make those companies viable and successful. And I think in a lot of these cases,
it is the choice of the founders or the companies to either continue independently or proceed to joining
big tech. And I think at the end of the day, it really does depend on, like, how relentless
are you willing to be to actually fight that fight? Because I think both of those acquisitions
were optional.
Like, I don't think they were, like, backs against the wall,
had no other choice.
I think it was, like, for whatever reason,
and I don't know the exact details
of either of these situations,
but it was like, you know what,
based on the journey so far,
let's elect to do this.
Presumably because they were offered so much money.
I mean, when I look at meta hiring
all of these AI geniuses,
and I assume this is probably a concern
for factory in many other AI startups,
what if Meta just hires our people?
And I wonder if it's because
these companies are so
dominant, they have so much money, that they're like, here, here's a billion dollars, and
it's hard to say no to that.
Totally, yeah.
But I think if you went back in time and you offered, like, let's say, Travis Kalanick
a lot of money.
He'd say no.
Yeah.
Because he was, like, that was the mission.
And I think similarly at Factory, we are super focused on people that are very mission-driven.
If you want to make a ridiculous amount of money, you can go to Meta, you can go to one of those places.
The people who have joined our team have chosen this mission with this team in particular.
because of that reason.
And I think that's what it takes, ultimately, at the end of the day,
because we do not want to be acquired.
We do not want to be part of big tech,
because I think they don't have the tools to solve the problem
in the way that we want to solve it.
Yeah, it sounds like what AI needs in order for there to be,
like, real competition is you need a founder
who wants to go to bat and who wants to fight, essentially,
who doesn't want to get, I guess, in bed with big tech.
But, I mean, one of the big,
big themes that we've been seeing with AI recently is, of course, this circular financing
stuff where these companies are investing and then the money comes back to them when they buy
their products. And it's hard to see the competition actually happening when you see everyone
kind of collaborating with each other. How do you think about that and how do people in Silicon
Valley? I mean, you're very tapped into Silicon Valley, Sequoia, one of the top firms, one
of your investors, how do people view that in Silicon Valley right now? And are they concerned
about it? People definitely make a lot of jokes about like the like circular investing and that
sort of thing. I mean, on one hand, I get it because there is a lot of interdependency of all
these companies and there is a lot that they can do together, which I think on one hand is a good
thing. On the other hand, it's a little bit inflationary to some, like, valuations or, like,
revenue numbers or these types of things.
I think on the net, AI will be so productive that it won't matter that much.
But short term, it is a little bit like eyebrow raising, I guess.
But at the end of the day, it's like if you're, let's say, a foundation model company,
you need to get the direct deal with Nvidia because you want the GPUs.
So you kind of, it's just one of those things that you kind of have to do.
And I don't, I guess I'm not sure what an alternative would look like in a dynamic where you
have four or five foundation model companies who are, let's ignore Google because they can
make their own stuff, but who are really competing over the GPUs in order to make the next best
models.
We'll be right back.
Support for the show comes from Shopify.
If you run a small business, you know there's nothing small about it.
As a business owner myself, I get it.
Every day, there's a new decision to make, and even the smallest decisions can feel massive.
What can help the most is a platform.
with all the tools you need to be successful, a platform like Shopify. Shopify is the commerce
platform behind millions of businesses around the world and 10% of all e-commerce in the U.S.
From household names, including Mattel and Gymshark, to those brands just getting started.
That's why they've developed an array of tools to help make running your small business easier.
Their point-of-sale system is a unified command center for your retail business.
It gives your staff the tools they need to close the sale every time.
And it lets your customers shop however they want, whether that's online, in-store, or a combination,
of the two. Shopify's first party data tools give you insights that you can use to keep your
marketing sharp and give your customers personalized experiences. And at the end of the day,
that's the goal of any small business owner, keeping your customers happy. Get all the big
stuff for your small business right with Shopify. Sign up for your $1 per month trial and start selling
today at Shopify.com slash prop G. Go to Shopify.com slash prop G.
Support for the show comes from Gruns. Even when you do your best to eat right, it's tough to get all
the nutrition you need from diet alone. That's why you need to know about Gruns. Gruns isn't a multivitamin,
a greens gummy, or a prebiotic. It's all of those things, and then some, at a fraction of the price.
And bonus: it tastes great. All Gruns daily gummy snack packs are vegan and nut-, gluten-, and dairy-free,
with no artificial flavors or colors. And they're packed with more than 20 vitamins and minerals,
made with more than 60 nutrient-dense ingredients and whole foods.
Gruns' ingredients are backed by over 35,000 research publications, and the flavor tastes just like sweet, tart green apple candy.
And for a limited time, you can try their Groony Smith apple flavor just in time for fall.
It's got all the same snackable, packable, full-body benefits you've come to expect.
But this time, these taste like you're walking through an apple orchard in a cable-knit sweater, warm apple cider in hand.
Grab your limited edition Groony Smith apple Gruns, available only through October. Stock up because they will sell out.
Up to 52% off when you go to Gruns, G-R-U-N-S dot co, and use the code Prop G.
Support for the show comes from Betterment.
Nobody knows what's going to happen in the markets tomorrow.
That's why when it comes to saving and investing, it helps to have a long-term approach
and a plan you can stick to, because if you don't, it's easy to make hasty decisions
that could potentially impact performance.
Betterment is a saving and investing platform with a suite of tools designed to prepare you for
whatever is around the corner. Their automated investing feature helps keep you on track for your
goals. Their globally diversified portfolios can smooth out the bumps of investing and prepare you
to take advantage of long-term trends. And their tax-smart tools can potentially help you save money
on taxes. In short, Betterment helps you save and invest like the experts without having to be an
expert yourself. And while you go about your day, Betterment's team of experts are working hard
behind the scenes to make sure you have everything you need to reach your financial goals.
So be invested in yourself. Be invested in your business. Be invested
with Betterment.
Go to betterment.com to learn more.
That's B-E-T-T-E-R-M-E-N-T.com.
Investing involves risk, performance not guaranteed.
We're back with first-time founders.
In terms of AI legislation, there seems to be a lot of debate right now
on how do you regulate AI?
and California is trying to be a leader in regulating.
What are your views on AI regulation?
Are people going over the top trying to regulate?
Is it warranted?
How do you think about that?
Maybe just to draw some parallels, in my mind,
I view things like climate regulation, nuclear regulation,
and AI regulation to be similar in that they are global
and local regulation doesn't really matter.
Like, for example, pick any one of those three.
If you make rules about in California, you can't have a gas car, or you can't build
nuclear weapons, or you can't build AI in the extreme in California, that doesn't really
matter because that says nothing about the rest of the world.
And if the rest of the world does it, it affects what happens in California, for climate,
for nuclear, for AI.
And so I think for AI in particular, the regulation that is interesting is less, like,
I think California just, it doesn't matter regulating AI state by state, at least at the macro level.
Maybe it's like in terms of usage for like interpersonal things, sure, but in terms of like
training models, the relevant stage there in my mind is the global stage.
And how does it affect like U.S. regulation versus European regulation versus China, let's say,
from what I've seen thus far, the time spent on like state regulation is kind of wasted,
at least as it relates to foundation models.
I think there is a concern probably in Silicon Valley that everyone's so afraid of AI.
I mean, I've seen these surveys that, you know, I think more than half of Americans
are more worried about AI than they are excited.
I guess that's something to philosophically tackle on your end because you're building it.
But then I would imagine that in Silicon Valley there's this feeling of everyone's just too scared
because they've watched all these movies
and they've watched The Terminator,
and so these people are getting too worried about it
to the point that we're regulating
in a way that actually doesn't make sense.
It's pretty interesting.
I think two things come to mind.
So one, there's the classic phenomenon of, you know,
you're a startup, you want no regulation,
then you become big, then suddenly you want regulation.
Yes.
And we've seen that happen with, I think,
basically every foundation model company,
which is always a shame to see.
And then the second, this is more just like a,
comment on the Silicon Valley and some of the culture there. I know so many people who work at the
foundation model labs who don't have savings. Like they just do not believe in like putting any money
in their 401k. They like spend it all because of this like vision of like something's coming.
Wow. Yeah, which is very weird. But then there are equally as many who work at these who are like,
you know, these guys kind of drank too much of the Kool-Aid. It's really important to have these
conversations and think about these things because I think it's actually, it reminds me a lot of
thinking about, like, in theoretical physics,
like, thinking about the big bang and, like, black holes
in the universe.
The first time you think about it, it's kind of like scary,
existential crisis, what is everything?
If we're in such a large universe, nothing has meaning, whatever.
I think thinking about AI, like, getting exponentially better,
kind of leads to similar, like, existential questions.
Like, what are we, like, what value do humans have
if there's gonna be something that's smarter than any one of us?
And then you have the maturity of, like,
wait, intelligence is not why humans have value.
That's not the source of intrinsic value.
We don't think someone's more
valuable because they're smarter. So having these conversations and thought processes is, I think,
very important for both people working in AI and people who aren't. But yeah, there's some pretty
weird people who kind of are like a really, really in the bubble inundated in it and who kind
of get these interesting worldviews of like, you know, the singularity's coming. So I want to, you know,
spend everything that I have now. Yet at the same time, if they think it's not going to be good,
they remain working on it. So these AI engineers who are not saving any money,
they're doing it because they think like the end of the world is coming or because they think
that there's going to be some transformative event that will make them really rich? Like, is it more
of a Duma perspective? It's a pretty big mix. Like, some people think we will just become
in a world where we're like post-economic and just like money will be irrelevant and like for
anyone there's some like base level, whether it's like some UBI type thing or or some have like
the doomer perspective. It's pretty, it's pretty bizarre. It sounds irrational to me. Yes, I would
agree. Okay, you'd agree.
Yeah. And I think that it brings up an interesting thing in AI, which is there's this incredibly transformative once-in-a-generation technology that has come along, and it causes humans when that happens to act strangely.
Yes.
That behavior, not saving while you're building AI because you think that it's going to mean some event that could either end the world or, you know, dismantle the system.
Maybe they're onto something, but to me,
it seems irrational. And I also
think it says something
about the potential of a bubble that
is emerging that a lot of people
in the last few weeks have been
getting more and more concerned about and that more
and more people seem to believe.
I mean, you know, I think
Sam Altman himself said the word
bubble, there have been other tech
leaders who are saying that.
As someone who is building
in this space,
how do you think about that?
Does it concern you or is it something that you're not too worried about?
Obviously, just to be a responsible CEO, I need to have priors that there is some chance
that something like that happens in like the broader economy where, you know, there's some
corrections.
Yeah.
My priors are very low in particular because like the ground truth utilization of GPUs is just
like fully, fully saturated.
Now it would be one thing if we're building out all these data centers for like the dream
of, okay, we're going to saturate this compute
someday, but, like, we are doing that
today, and it's like people are still hungry for
more of that compute.
Now, I think there's a good argument
that a lot of
compute is subsidized. So, like,
NVIDIA might subsidize the foundation
model companies. The foundation model companies
subsidize companies like us and maybe give us
discounts on their inference. And we might
subsidize new growth users, and there's a little bit
of that, I think that's the part
that there's a concern of, like,
actually drawing a similar comparison to Uber,
I don't know if you remember when Uber first came out, rides were super cheap because it was very much subsidized.
VCs were paying for us.
VCs and so the LPs, all the like pension funds were basically subsidizing people's Ubers in a very indirect way.
And like people kind of, you know, sometimes can make jokes about that even as it relates to LLMs.
The reason I'm less concerned is that the ROI is just so massive.
And, like, the productivity gains from, in particular, coding, it's like the fact that we have built Factory with
basically less than 20 engineers, that is something that pre-AI, we just would not have been able to do.
And so I think the leverage that people are getting is what makes me less concerned, and also the speed of adoption.
Like, I think even some of these enormous enterprises that we're speaking with, they missed, like, mobile by like five years.
Wow.
But for AI, they are on it because they know if we have 50,000 engineers, we need to get them AI tools for engineering because of how existential it is.
If there is a correction, and the way I see it is there will be a correction that won't
wipe out AI like some people seem to think, but it'll be similar to the internet.
There's a correction, valuations come down, there is some pain, and then long term you will see
massive adoption and massive value creation. That's just my perspective.
Say there is a correction. Who wins in that scenario? Like, what happens
to OpenAI, what happens to startups like yourself?
Like, who are going to be the winners and losers in that scenario
where we do see some sort of pullback?
So one core principle is Jensen always wins.
So for the last years, Jensen's going to stay winning.
So that's, I think, you know, not going to change.
And why do you say that?
Because he's just at the very base of the value chain?
Yes, yes, yes.
And at the end of the day, like all of these circular deals,
they all come back to Nvidia.
anytime anyone announces, hey, we're doing, like, free inference, that's "free,"
but, you know, someone's paying Jensen at the end of the day.
So I think that's kind of one baseline there.
I think another, and this actually maybe relates to what we were talking about earlier
about, you know, these companies and the acquisitions is as it relates to like startups
and how many there are, there was a period that I think has been dying down at least a little
bit in San Francisco where if you're an engineer who, like, worked in AI for a month,
you basically just get a term sheet stapled onto your forehead the second
you leave and, you know, you show up to a VC, which I think is not good because you don't
get, like, the Travis Kalanicks or the Brian Cheskys in a world where you're encouraged to do things
like that. Like, anytime anyone asks me, like, hey, Matan, you know, I'm thinking about starting a
company, I will always say no, always. Because if me saying no discourages you from starting a
company, then you absolutely should not have done it. And I think, like, there's almost like too much help
and too much like, yeah, you know, go, do it, go start it. Because then it leads to some of these things
we were talking about where the second the going gets tough, it's like, all right, acquisition time.
And this is maybe my localized view because I live in San Francisco and that's like, you know,
what I see more day-to-day than some of like the more macro trends.
But I think the first place we would see a correction like that is in, I mean, coding, for example,
there are like 100 startups in the coding space, you know, perhaps there will be less that are funded
because it's like, hey, you know, at this point, maybe it's not as relevant or, you know,
the Nth AI personal CRM.
Like, that's another one that's, there's been like a million companies there.
the correction might look like, at least at that level,
you know, funding being a little more difficult, let's say.
And then the way that that relates to the foundation model companies
is I think eventually you'll get to a point where they can subsidize inference less,
which just means growth probably slows.
Like OpenAI and Anthropic, their revenue has been, you know, ridiculously large,
but also the margin on that has been pretty negative.
And so it's basically like how long can you subsidize
and, like, deal with that negative margin?
There are, obviously, legendary examples. Uber is a great example. Amazon's a great example, where you can
operate at a loss for a period of time in order to build an absolute monster of a company and then
just turn on margin whenever you're ready. The question is how long can you sustain that? And so
if there were a correction, I think that would affect that. Yeah, it does feel increasingly that
AI, the danger of AI isn't adoption or technology. It's a timing and financing problem.
And, you know, I look at OpenAI and the amount that they're spending. I'm starting to believe
that the AI companies who are going to win
are the ones who
manage their balance sheets the best
and it's really going to be a question of
financial management
because of the thing that you say there where
all of this money is being plowed
in and it
it is a question
of how long can you go
at an operating loss
which you know Uber crushed it
Amazon crushed it
there were many other companies that died
that did not crush it
from that perspective.
So it will be really interesting to see how that plays out.
As someone who is building in Silicon Valley, in San Francisco,
you've built this incredible company that's generated a ton of heat and press.
Like, you are in AI.
What does that feel like?
What does it feel like to be one of the AI people?
Does it feel like you're in some special moment in time?
I'm like, what is it like?
It feels very much like we are still in the trenches
because there is a ton that we want to do
and that we need to get done.
I think for me, the most surreal thing is the team
that we've assembled.
Like, every day coming in person in our office in San Francisco,
it is such a privilege working with,
now we're 40 of the smartest people
that I've ever met in my life.
We're in New York right now.
We're starting to open up an office here.
I think that's where it's a little bit like,
whoa, like we're now, you know,
we have two offices on the opposite sides of the country.
it's more just like, I think it's just really cool to see over the last two and a half years
how dedicated effort can actually like build something that is concrete and meaningful and
some of the largest enterprises that we're working with. It's just kind of crazy to sometimes
stop for a bit when it's not like the nonstop grind to think like this organization now doesn't
have to deal with these problems because of something that we built because of this random cold email
because of this random hackathon that I met you at, you know. I think it's just, um, it's a very
cool, visceral reminder that you can do things that affect things.
Yes.
And if you are really driven by a good mission, you can make people's lives better in
relatively short order.
And I think that's a really empowering thought.
What is something that you think the American population sort of gets wrong about AI and
also about AI founders and the people building this technology?
Most of the world only knows ChatGPT.
Very few people know about like, in San Francisco,
everyone's like, oh, which model is better?
Like, OpenAI, Anthropic, Google Gemini.
The rest of the world, it's just like,
it's basically just chat GPT,
which I think on one hand is interesting.
Wow.
I think on the other hand,
it is really important for basically every profession
to kind of rethink your entire workflow.
And it is, in fact, I would say it's almost an obligation
to, like, basically take a sledgehammer
to everything that you've set up as, like, your routine
and how you do work and rethink it with AI.
For me, this is actually something that's really important
because I'm like the most habit-oriented, like, routine person and like constantly, you know, every few months being like, let me try and see how I could do this differently with AI.
In a way that's not like, oh, technology is taking over, but more just like it makes things more efficient and faster and more convenient.
So I think that's one thing is there is so much time that can be saved by spending a little bit of time to, you know, try out these different tools, whether it's something like ChatGPT or, you know, if you're an engineer, trying out, you know, something like Factory, I think.
regarding AI founders, it's hard to say because there's so many tropes that unfortunately can be
really true sometimes. And sometimes it's even frustrating to me because like I grew up in Palo Alto
and hated startups. Like hated it. Like I grew up like in middle school, we would spend time
like, you know, walking around downtown Palo Alto. And I have a very concrete memory of when
Palantir moved into downtown Palo Alto. There were all these people in there, like, Patagonias with, like,
the Palantir logo. And I remember looking so, like, scornfully at all these people walking by with these Patagonias.
But, yeah, I mean, I think it's maybe actually, I think the thing is less for the rest of the world about AI founders and more like some of these AI companies.
It's really important to leave San Francisco, like exit the bubble.
Like, it's a cliche, but, like, touch grass, go see the real world, because while San Francisco is very in the future, you know, I've taken a Waymo to work for the last, like, two years.
The rest of the world is still, like, kind of how it was in San Francisco five years ago, and I think it's important to have that grounding,
because if you don't leave, and if you don't have that grounding, you could do things like
not put money in your 401k, and things like that. Not that you need to put it in your 401k,
but you kind of get these a little bit warped perspectives sometimes.
That is really interesting.
Does that, I mean, this idea that there is a bubble, it connotes the wrong thing,
but there is this, it's an echo chamber, yeah.
And the fact that you're building and you're saying, you know, we're building these offices in New York
and the thing that is important for AI, and I think it's probably really true,
is to kind of go out into the world and understand like what are some real use cases
where this is really going to provide value for people, not just in your enterprise SaaS startup in San Francisco,
but anywhere else throughout America, does that worry you?
as you go further up the chain of power and command in Silicon Valley,
does it worry you, perhaps, that people at the very top aren't doing that enough?
They're not getting out there and understanding what this technology really needs to be and do for America?
I would say yes, and I think that's also just a very common problem,
just generally as organizations scale or as organizations get more powerful,
the people running those organizations inherently get separated from the ground truth of, like,
let's say, the individual engineers or individual people,
who are going and delivering that product to people. And I think similarly, they lose touch
with their customers as well. I think the best leaders have really good communication lines
towards the bottom. Yeah, towards the people, the customers they're serving or the people who are
like kind of in the trenches, like hands on doing the work. And I think you probably end up seeing
this in results of a lot of these companies, because I think it's hard to be a successful company
if you don't have some of that ground truth. Any good leader, I think, should be concerned about that
and should always be paranoid of like, you know,
am I surrounded by yes-men or am I in an echo chamber
and I'm not getting the real, like, ground truth?
Yeah.
Yeah, so that is something that's a concern.
We'll be right back.
My name is Tom Elias. I'm one of the co-founders at Bedrock Robotics.
Bedrock Robotics is creating AI for the built world.
We are bringing advanced autonomy to heavy equipment to tackle America's construction crisis.
There's tremendous demand for progress in America through civil projects, yet half a million
jobs in construction remain unfilled.
We were part of the 2024 AWS Gen AI Accelerator program.
As soon as we saw it, we knew that we had to apply.
The AWS Gen AI Accelerator Program supports startups that are building ambitious companies using Gen AI and physical AI.
The program provides infrastructure support that matches an ambitious scale of growth for companies like Bedrock Robotics.
Now, after the accelerator, about a year later, we announced that we raised about $80 million in funding.
We are scaling our autonomy to multiple sites.
We're making deep investments in technology and partners.
We have a lot more clarity on what autonomy we need to
build and what systems and techniques and partners we need to make it happen. It's the folks that we
have working altogether inside bedrock robotics, but it's also our partners like Amazon, really all
trying to work together to figure out what is physical AI and how do we affect the world in a
positive way. To learn more about how AWS supports startups, visit startups.aws.
Support for the show comes from Delta.
Owning your full potential starts with recognizing the steps you need to get there.
That might look like adding a training day or two or three,
and that can definitely be a grind.
But if you're actually able to make that mental shift and combine it with action,
you can become unstoppable.
Through a series of small steps all building on each other,
you can reach your destination.
And once you get there, looking back and seeing how far you've come,
feels that much sweeter.
There is always more potential to own.
And Delta Air Lines is always there to help connect you to your full potential.
In 2022, Delta Air Lines became the official airline of the National Women's Soccer League
as part of their commitment to invest in and support equity for women.
It's a cornerstone of Delta's investment to improve the air travel experience for everyone
and help you get to where you need to be.
From season kickoff to the championships.
Support for the show comes from Hims.
Hair loss isn't just about hair, it's about how you feel when you look in the mirror.
Hims helps you take back that confidence with access to simple personalized care that fits your life.
Hims offers convenient access to a range of prescription hair loss treatments with ingredients that work,
including chews, oral medications, serums, and sprays.
You shouldn't have to go out of your way to feel like yourself.
It's why Hims brings expert care straight to you with 100% online access to personalized treatment plans
that put your goals first. No hidden fees, no surprise costs, just real personalized care on your
schedule. For simple online access to personalized and affordable care for hair loss,
ED, weight loss, and more, visit hims.com/profg for your free online visit.
hims.com/profg. Individual results may vary.
Based on studies of topical and oral minoxidil and finasteride,
featured products include compounded drug products, which the FDA does not approve or verify
for safety, effectiveness, or quality.
Prescription required. See website for details,
restrictions, and important safety information.
We're back with first-time founders.
Who is like AI Jesus right now?
Is it Jensen?
Is it Sam Altman?
Is it Mark Zuckerberg?
Like, in San Francisco, who's the guy?
Who do people?
Revea. I mean, it's got to be Jensen.
Like, Sam, you know, a lot of wins, some losses.
Zuck, a lot of wins, a lot of losses.
Jensen, that guy just grinded for 30 years.
I remember when I built a computer at home to play, like, video games on a PC,
I bought an NVIDIA chip.
And in my mind, it was like, NVIDIA, you know,
they're the video game graphic card company.
Now they're the most valuable company in human history with no signs of stopping.
And he just grinded it out for 30 years.
Like, it is the most respectable thing.
He's also the nicest dude, and he has no, like, he doesn't have enemies.
So you've met with him?
I have. He's extremely generous with his time. He also, you know, this guy, like, knows every little detail
about Factory. Like, I don't know how he has the time to do these things, but he is a killer.
He's really good.
When you think about sort of the long-term future of AI, and there was, you know,
for many years it was, AGI is coming, and think about all the things it can do. Um, think about how
it could solve diseases, think about how it could cure
cancer. And then I see, like, erotica GPT, and I see the Sora AI TikTok feed. I'm sort of like,
what happened to the big vision? We're back to sort of Pornhub meets TikTok, but it's got
AI. How do we expand the vision of AI? What is the grand vision for AI?
and do you think it's going to really come true?
Well, so I think on one hand, like, you know, the pure slop that is these, like, AI Sora feeds,
or the one that Meta announced, I think on one hand...
Vibes AI.
Yeah, vibes AI.
I think on one hand, it's very, in a certain weird sense, it is beautiful in that it is just
like pure human nature.
Like, what do we do when we have really good technology?
Like, let's make porn.
Like, that's the first thought.
And in a certain sense, it's like, okay, I'm glad that even though we're generating all
this technology, we're still humans at our core. We overestimated ourselves when we thought
we'd go cure cancer. But on the other hand, there are still people who are doing really great
work. Like one of my friends, Patrick Sue, who runs Arc Institute, they're doing AI for
biotech research and biology. And I think they're doing a lot of really cool work. And maybe this
actually relates to something we were talking about earlier, which is, you know, people kind of at a
first glance might have a little bit of an existential crisis of, you know, intelligence is now
commoditized. So there's now, like, some people are saying, you know,
we both live in a world where if we have children at some point, our children will never
be smarter than AI, right? Like, we both grew up in a world where we are smarter than computers
for at least a period of time. And our kids would never know that world, which is a little bit
crazy because, you know, a huge part of growing up is going to college, becoming really smart
in some certain area. And so I think now we're having a little bit of a decoupling of human value
being attributed to intelligence. But then there's a natural question of like, okay, well,
you know, we were sold this vision about, you know, let's say even the American dream of, like,
if you work really hard, get really good at this one thing, then you'll have a better life.
But now it's like, you're never going to beat the intelligence of this computer.
So what is the thing to strive for?
And I think this actually relates to like the AI porn versus the AI curing cancer, which is,
in my mind, the new primitive or the new, maybe like, North Star for humans is agency.
And which humans have the will? Instead, like, yes, you can, like, hit the hedonism and just watch
AI porn and play video games all day, but who has the agency to say, no, I'm going to work on
this hard problem that doesn't give me as much dopamine, but like because of the will and
agency that I have, I'm choosing to work on this instead.
Wow.
And I think that might be the new valuable thing that if you have that in large quantities,
that maybe that's kind of what brings you more meaning.
Why do you go to agency versus many other things?
For example, you know, you mentioned you're a friend who's working on.
issues in biotech. Maybe that is a question like having the right values or, I mean, not to get
like mushy, but maybe a value would be kindness or a value would be creativity. There are lots of
things out there that you could pick and choose from. Why is it agency in your mind? I guess the way
that I think about it, it's like the agency to go against maybe like the easiest path for dopamine.
Or like the, like the natural, like human nature, like just give me like the good tasting food, the
the video games, the like, you know, easy fun stuff.
And I think maybe part of agency has to do with values.
Like if you value creativity and if you value kindness and, you know,
I think that is something that might motivate more agency.
The agency is basically, at least the way I think about it,
it's like the will to endure something that is more difficult
for maybe a longer term reward,
whether that's the satisfaction of, you know,
bringing this, you know, better health care to people
or satisfying that curiosity.
It's interesting because you say the word agency
and you are building agents
and there's like a parallel there
and it's almost as if
the people who are really going to win
are the people who can have some level of command
and directive agency over these AI agents.
It's the person who isn't just going to do what they're told.
by the guy who controls the AI agent and says, okay, create this code.
It's the person who can actually tell the agents what to do.
And that's the direction that you believe humanity and work should be headed.
100%.
And I think that's also, like, if you think back to, like, the people that you've met in your
life that come across as, like, particularly intelligent or, like, you know, remarkable
in whatever capacity, oftentimes it's not raw IQ horsepower.
Like, you'll note that.
When you meet someone with high IQ, it's pretty
easy to tell. But growing up in the Bay Area, there are so many that are very high IQ, but aren't
that, like, high agency or, like, independent-minded. And I think those are the
people that oftentimes it's, like, really, like, leave a mark when you remember of, like,
oh, like, that person was, you know, maybe they weren't even that high IQ, but they were
very, like, independent high agency. And I think that now is going to be much more important
because, great, you know, you might be born, have a lot of, you know, high IQ. Everyone has access
to the AI models that have this intelligence. So it's not really a differentiator anymore.
The differentiator is, do you have the will to use those in a way that no one has thought of before,
or in a way that's difficult but to get some longer-term task done?
It's really interesting because what you're describing is like, how do you, what can I do that AI cannot do?
And what you were saying is, AI cannot think for itself.
It cannot be an independent, creative-minded creature.
It can be a math genius.
It can solve problems within seconds, but it can't have the willpower to
decide this is what I want to do
this is what is important to me
this is what has value
which I think is definitely right
we have to wrap up here
I just want to note
I saw a tweet
I think from yesterday
that you put out there
and it shows this
competition of all the different
coding agents
so you've got Cursor,
and you've got Gemini,
and you've got OpenAI's coding agent,
and you are number one in agent performance.
That's right.
What does that mean?
What does it mean to be number one?
And how are you going to take that moving forward?
This is a benchmark that basically does head-to-heads of coding agents.
And they use an Elo rating system, so it's like chess, where at a high level you could,
in chess, let's say, have 100 losses against someone that's equal
skill to you, but then you beat Magnus Carlsen, and you can have an incredibly high chess rating.
So this is an Elo rating system where it gives two agents the same task and then it just
has humans go and vote which solution they liked better, like the one from, let's say, Factory
versus OpenAI's or Anthropic's. And we have the highest Elo rating. So in these head-to-heads,
we beat them, which is pretty exciting. I think it's exciting on a couple fronts. One, we've
obviously raised very little money compared to a lot of the competitors that are on that leaderboard. And I think that goes
to show that in a lot of these cases, being too focused on the fancy stuff, like training the model,
doing the RL, doing the fancy fine-tuning, sometimes doesn't give you
the best ground truth on what is the best-performing thing for an engineer's given task.
Benchmarks are very flawed. You know, they're not fully comprehensive of everything an agent can do,
but I think it's helpful when developers have a lot of choices out there to try and say,
okay, well, which one should I use?
This one is nice because it's pretty empirical:
developers see two options and pick one,
and consistently our droids win, which is pretty fun.
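The Elo mechanics he's gesturing at can be sketched in a few lines. This is a generic Elo update, not Factory's or the benchmark's actual code; the ratings, K-factor, and agent names here are illustrative assumptions. It shows why beating a heavily favored opponent (the Magnus Carlsen case) moves a rating far more than beating an equal one:

```python
# Minimal, generic Elo-update sketch (not the benchmark's real implementation).
# Two agents get the same task, a human votes for one solution, and the
# winner's rating rises by an amount scaled to how surprising the win was.

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return new (r_a, r_b) after one head-to-head human vote."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    new_a = r_a + k * (s_a - e_a)
    new_b = r_b + k * ((1.0 - s_a) - (1.0 - e_a))
    return new_a, new_b

# An underdog (1200) beating a favorite (2000) gains almost the full
# K points, while beating an equal opponent would gain only K/2.
underdog_new, favorite_new = update(1200, 2000, a_won=True)
```

Note that ratings are zero-sum per match: whatever the winner gains, the loser drops, so a model that keeps winning votes climbs the leaderboard regardless of who it was paired against.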
Final question, what does the future of Factory look like?
What do you think about when you look at the next 10 years?
Ten years is very hard because AI is pretty crazy
and I think humans are bad at reasoning around exponentials.
I would say in the next few years,
bringing about that mission of, you know,
that world of developers being able to delegate very easily
and just have a lot more leverage.
Developers won't need to spend hours of their time
on code reviews or documentation.
And I think more broadly,
that turns software developers into
more cultivators or orchestrators
and allows them to use what they have trained up
for so many years, which is, like, their systems thinking.
That's what makes engineers so good,
is they're really good at reasoning around systems,
reasoning around constraints from their customers,
from the business, from the underlying technology,
and synthesizing those together
to come up with some optimal solution.
And with Factory, they get to use that to its fullest extent,
much more frequently in their day-to-day.
And I think that is a net good for the world
because that means there will be more software
and better software that is created,
which means we can solve more problems
and solve problems that weren't solved before,
which I think on the net is just better for the world.
Matan Grinberg is the co-founder and CEO of Factory.
This was awesome. Thank you.
Thank you, Ed.
This episode was produced by Alison Weiss and engineered by Benjamin Spencer.
Our research associates are Dan Chalan and Kristen O'Donoghue,
and our senior producer is Claire Miller.
Thank you for listening to First Time Founders from Prof G Media.
We'll see you next month with another founder story.
