a16z Podcast - Patrick Collison on Stripe’s Early Choices, Smalltalk, and What Comes After Coding
Episode Date: February 20, 2026

Michael Truell, CEO of Cursor, sits down with Patrick Collison, CEO of Stripe and an investor in Anysphere, to talk about Collison's history with Smalltalk and Lisp, the MongoDB and Ruby decisions Stripe still lives with 15 years later, why he'd spend even more time on API design if he could do it over, and whether AI is actually showing up in economic productivity data. This episode originally aired on Cursor's podcast.

Resources:
Follow Patrick Collison on X: https://twitter.com/patrickc
Follow Michael Truell on X: https://twitter.com/mntruell
Follow Cursor: https://www.youtube.com/@cursor_ai
Follow our host: https://twitter.com/eriktorenberg

Stay Updated: Find a16z on YouTube, X, and LinkedIn. Listen to the a16z Show on Spotify and Apple Podcasts.

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Transcript
It's interesting to me that we haven't experimented in some sense that much with the paradigm of programming over the past 20 years.
You put those together, you now have the ability to, again, at the kind of level of the individual cell, to read, think, and to write.
And this starts to really feel like a new kind of Turing loop and to have its own sort of completeness.
I think that's a case where the right API design, the right abstraction design, ends up having just quite significant business ramifications.
I think the basic idea of a development environment and not just a text editor is really the right idea.
And that's the thing I want to see a return to.
Patrick Collison wrote his first startup in Smalltalk.
Its development environment let him fix errors mid-request, inspect stack frames, and resume execution.
And he wanted that more than he wanted a mainstream language.
He and his brother chose Ruby and MongoDB for Stripe instead.
Those decisions still define the company 15 years and 45
seconds of annual downtime later. Now Stripe is shipping V2 APIs, rewriting core abstractions
first designed in 2010. It's taken years. Defining the new APIs is the easy part. Making them
work alongside everything already built on the old ones is, as Collison put it, more like an
instruction set migration than a product launch. This conversation, which previously aired on Cursor's
podcast, also gets into why AI hasn't moved productivity numbers, what today's dev environment
could steal from Lisp machines
and Collison's work at Arc
on foundational models for biology.
Michael Truell, CEO of Cursor,
sits down with Patrick Collison,
CEO of Stripe.
Well, it's great to have you.
Thanks for being here.
Thanks for having me.
Great to be here.
I've heard that your first startup
was written in Smalltalk.
Please explain.
I don't know what there is to explain.
It's the best programming language.
Well, I had worked on Lisp
and Lisp dialects before that.
And actually, I'd worked on
Lisp web frameworks.
And when we went to build our first startup,
we first
implemented it in Rails.
And then I found, compared to LISP,
that development process kind of frustrating.
And, I mean, we don't need to get into full details,
but I thought that continuation-based web frameworks
were really the right way to implement web applications.
There's no continuation-based framework in Ruby.
And so, kind of searching around,
I found that there was a good one
that had just been written in Smalltalk.
So I decided to play with it a little bit.
And then I found that Smalltalk
is actually this extremely interesting
development environment
that had a lot of the aspects of Lisp
that I'd really appreciated there,
like a fully interactive environment
with a proper debugger
so that you can edit the code
while in the middle of some web request
or deep in some stack trace or something.
And you could, for example, encounter an error with some web request,
edit the code to fix the error,
and then resume higher up in the stack,
such that the entire web request would just complete.
And so rather than this kind of annoying feedback loop
of having to add some log statements
and do this binary search to find the problem
and eventually deploy a fixed version,
a process that could take an hour,
you could just literally inspect the stack frame
see which variable has the wrong value, fix it, like, you know, jump back up, hit proceed,
and have the whole thing work. Anyway, the point is, in the hunt for this continuation-based
web framework, I realized that Smalltalk, in general, had just a much more powerful development
environment as compared to Ruby, slash, as compared to basically every other mainstream programming
language. And so we decided to, yeah, use it for the company, which in hindsight was,
I mean, I don't know if it was a terrible decision or not. The reason I think one would
think it would be terrible is that it would be, you know, hard to hire people and hard to scale
and, you know, whatever. It wasn't hard to hire people. Or rather, nobody knew it, but it was easy
to teach them. Did they know before they joined? No, no. They learned really quickly. And, you know,
smart people learn languages really quickly. So I don't think that's really a reason not to use a
non-mainstream language. The company didn't work, I think, for unrelated reasons. I think just the
idea wasn't that strong. But we also chose Ruby for Stripe. So I don't know, I think maybe the
gains were not quite as large as I'd thought. And was your Smalltalk
enthusiasm shared by the acquirers of the startup? And what was the dynamic, you know, was there,
like, this blissfully ignorant management that foisted this Smalltalk code base on a bunch of unsuspecting
developers that were then kind of, like, toiling over it? You know, or, yeah, what was the dynamic
between the programmers and management? Sort of, what happened to that Smalltalk code base?
Yeah, yeah, yeah. Does it still live on somewhere? I wish. And I'm 99% sure the answer to that is
no. It, um, it's, the company that acquired us, it was mainly a talent acquisition.
So, yeah, the codebase itself was less relevant.
Okay.
And it was immediately sort of just gone.
Yeah.
Okay, gotcha.
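The continuation-based style Collison describes (Seaside, in Smalltalk, was the canonical such framework) can be approximated with Python generators: each `yield` suspends a multi-step flow, and the framework resumes it when the next request arrives. A toy sketch, not any real framework's API; all names here are invented:

```python
# Toy sketch of a continuation-based web flow: each handler is a
# generator that yields a "page" and suspends until the next request.
sessions = {}  # session_id -> suspended generator (the "continuation")

def checkout_flow():
    """A multi-step flow written as straight-line code."""
    email = yield "page: enter email"            # suspend until user replies
    amount = yield f"page: confirm amount for {email}"
    yield f"page: charged {amount} to {email}"

def handle_request(session_id, user_input=None):
    """Resume the session's continuation with the user's input."""
    if session_id not in sessions:
        gen = checkout_flow()
        sessions[session_id] = gen
        return next(gen)                         # run to the first yield
    return sessions[session_id].send(user_input)

# Three requests drive one flow without any explicit state machine:
print(handle_request("s1"))                      # page: enter email
print(handle_request("s1", "pat@example.com"))
print(handle_request("s1", "$10"))               # page: charged $10 to pat@example.com
```

The point of the pattern is that multi-request control flow reads as one function, with session state living in the suspended frame rather than in a database row or a state machine.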
I've also heard that one of your earliest programming projects
was working on an AI bot written in Lisp.
And it was something like it was a client for MSN.
Uh-huh.
I don't know how you found that, but that is true.
And I heard that you got kind of nerd-sniped by the idea of trying to get it to pass the Turing test.
Yes.
And I'm curious, what did you miss?
You know, why didn't you make ChatGPT?
and, well, maybe a little bit more seriously,
how did it work?
And what was the state of neural networks at the time?
And did you consider using any antecedents
to the technology we use today?
Yeah, so that was the project.
It was a little critter that used MSN Messenger,
which was all the rage at the time.
I guess that puts me, that's like maybe a specific kind of,
you know, sedimentary layer in the chronology
of different instant messaging solutions
and probably dates me quite precisely.
And it was a really simple,
Bayesian next-word predictor.
Like, there was nothing really that sophisticated there.
To the extent there was anything sophisticated,
it was maybe that it used,
like, the training data was the conversations it itself
had on MSN Messenger rather than kind of general text corpora.
And it worked reasonably well.
And, you know, better versions looked a couple of words ahead
and, you know, what have you.
And, I mean, it never really passed the Turing test
where, you know, people have actual suspicion.
They're trying to, you know, exercise this discernment.
But it certainly passed some weaker version of the Turing test
where, you know, they were unsuspecting.
and people ended up having quite lengthy conversations with it.
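The predictor he describes is easy to reconstruct in miniature: count word bigrams from chat logs, then emit the most frequent follower. His training data was his own MSN conversations; the corpus below is a stand-in:

```python
# Minimal sketch of a Bayesian next-word predictor: bigram counts
# give P(next | current), and we emit the argmax.
from collections import defaultdict, Counter

def train(corpus_lines):
    model = defaultdict(Counter)
    for line in corpus_lines:
        words = line.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1            # count the bigram (prev, nxt)
    return model

def predict(model, word):
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]    # argmax over followers

chats = ["how are you", "how are things", "are you there"]
model = train(chats)
print(predict(model, "are"))                 # "you" (seen twice vs. "things" once)
```

The "better versions looked a couple of words ahead" he mentions would condition on the last two or three words (trigrams and up) instead of one.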
And that was part of how I discovered Lisp.
And I remember Paradigms of AI Programming by Peter Norvig
being a really formative book
and had all sorts of interesting approaches there.
It didn't have anything on neural networks, I'm almost sure.
And I never, I mean, I'd read some Marvin Minsky stuff,
The Society of Mind or whatever, on neural nets,
but I'd never really seriously looked at them.
I actually experimented a lot with genetic algorithms.
They were, I guess, more practical on your own computer.
It takes a lot of compute, training a neural net.
So I experimented a lot with genetic algorithms.
And actually, I use Dvorak as the keyboard layout
because it's more comfortable to type on than QWERTY.
As does John, my brother,
so no one can ever use our computers.
But I wrote a genetic, I don't know, optimizer
to figure out what the optimal keyboard layout was.
And it turns out it is, in fact, basically Dvorak,
using a genetic approach.
So I went deep down that rabbit hole,
but I never really played with neural networks.
And I guess that's why, you know,
that, plus probably 70 other reasons,
is why I did not create ChatGPT.
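A keyboard-layout optimizer in that genetic spirit can be sketched briefly. Everything below is an invented stand-in (a toy key grid, a Manhattan-distance cost, mutation-only evolution with elitism; a real GA would add crossover and a better hand model):

```python
# Toy sketch of genetic search over keyboard layouts: a layout is a
# permutation of symbols over key slots; fitness is total finger travel
# over sample text. Lower cost is better.
import random

KEYS = [(x, y) for y in range(3) for x in range(9)]   # 27 slots, 3 rows
LETTERS = list("abcdefghijklmnopqrstuvwxyz ")         # 26 letters + space
TEXT = "the quick brown fox jumps over the lazy dog " * 20

def travel(layout):
    """Sum of Manhattan distances between consecutive keystrokes."""
    pos = {ch: KEYS[i] for i, ch in enumerate(layout)}
    cost, prev = 0.0, None
    for ch in TEXT:
        cur = pos[ch]
        if prev is not None:
            cost += abs(cur[0] - prev[0]) + abs(cur[1] - prev[1])
        prev = cur
    return cost

def mutate(layout):
    """Swap two randomly chosen keys to produce a child layout."""
    a, b = random.sample(range(len(layout)), 2)
    child = list(layout)
    child[a], child[b] = child[b], child[a]
    return child

random.seed(0)
pop = [random.sample(LETTERS, len(LETTERS)) for _ in range(30)]
start_cost = min(travel(p) for p in pop)
for _ in range(200):
    pop.sort(key=travel)                  # rank by fitness (ascending cost)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
best_cost = min(travel(p) for p in pop)
print(best_cost <= start_cost)            # True: elitism never loses the best
```

With a realistic cost model (home row, finger strength, hand alternation), this kind of search does tend toward Dvorak-like layouts, which matches his result.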
There is an old video of you being interviewed,
I think after selling Auctomatic,
where you're asked about Smalltalk.
That's where I found kind of that weird fact.
I think at the time people asked you why,
and one of the things you said was,
I mean, you liked some features about Smalltalk, Lisp-style languages.
And you predicted, and I think that this was circa maybe 2008 or something like that,
that the mainline C-style programming languages
would increasingly borrow ideas from these older programming languages.
And that kind of has been the case in the JavaScript-Python ecosystems.
Yeah.
Do you think that there are any underrated ideas
buried away in kind of older, more esoteric programming languages
that should be borrowed by the mainline?
Yeah, it's been interesting how a lot of the ideas
have been kind of borrowed by the JavaScript ecosystem,
and in a strange way, like through the web inspector, where you have this, I mean,
that's one of the richest runtimes in some sense that people have, you know, general exposure to.
I don't think JavaScript has first-class stack frames. Maybe there's some weird extension or
something where you can get that, but, you know, ECMA script doesn't have that, I'm pretty sure.
First-class stack frames actually let you do a lot of other things for kind of obvious reasons.
So maybe that's kind of too specific. I mean, I think the idea of,
and maybe this is what cursor becomes.
I think the basic idea of a development environment
and not just a text editor is really the right idea.
And that's the thing I want to see a return to.
That's the thing that the Lisp machines had, and Genera.
That's the thing that, to some extent, Mathematica has.
That's the thing that Smalltalk has.
And I think it's just such a mistake that we have ended up
with development environments
where there is such a separation between the runtime,
the text editing, and...
and the environment in which the code runs. I mean, well, the runtime and the place where the code runs can be the same or different,
but there are three maybe slightly, conceptually, different things.
And those three environments, they can all coexist in the same place.
And I find, like, I mean, still to this day, I use Mathematica a lot, not because I'm doing some particularly arcane, you know, symbolic mathematics,
but because it's just a more efficient development environment.
Now, that's going to be a bit less true with LLMs, because Mathematica, you know,
does not support Cursor-style prompted development,
but that I think is the core idea that I wish others would borrow.
And VS Code has been a step to some extent slightly in that direction,
but I think we could take it way further.
And what I'd love to see, for example,
is when I hover over a line of code,
I would like to see profiling information about just the runtime characteristics
of that code or that function or whatever,
I would like to see logging and error information
overlaid. When I hover over a variable,
I would like to see, like,
the most common values that it takes on in production.
These kinds of like just rich, deep integrations.
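The hover-for-profiling idea is not far-fetched: Python's `cProfile` already records per-function call counts and timings keyed by (filename, line number, function name), which is exactly the shape an editor could overlay on source lines. A small sketch with invented example functions:

```python
# Profile a toy workload and read back the per-function stats an editor
# could display next to the corresponding source lines.
import cProfile
import pstats

def slow_sum(n):
    return sum(i * i for i in range(n))

def handler():
    return [slow_sum(10_000) for _ in range(50)]

prof = cProfile.Profile()
prof.runcall(handler)

# pstats.Stats exposes a dict:
#   (filename, line, funcname) -> (call count, total calls, tottime, cumtime, callers)
stats = pstats.Stats(prof).stats
for (filename, line, name), (ccalls, *_rest) in stats.items():
    if name == "slow_sum":
        print(f"{name} defined at line {line}: {ccalls} calls")
```

The missing piece for the experience he describes is plumbing, not data: shipping these keys back into the editor so hovering a function shows its production call counts and timings in place.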
Are you a fan of Inventing on Principle and those talks?
Yes, yes.
I think Bret leans too much...
I mean, I'm a huge fan of Bret, he's just such an incredibly...
Have you been to Dynamicland?
Yes.
Okay.
Yeah.
and have supported it.
So, yes, a fan of Bret.
The place that I've maybe differed
or at least that just resonates with me somewhat less
is Brett is really into this idea of,
obviously of graphical and visual representations for phenomena.
And I think that works very well in certain domains,
like the kind of dynamical systems
that he has demonstrated some of the ideas
with, I think it's often very hard to find such useful, spatial, continuous representations
for arbitrary systems, like for various parts of stripe.
I'm not quite sure what that would be, and I'm not sure, even if we could find it,
you know, exactly how useful it would be.
Maybe it's just me.
I reason much more kind of symbolically and sort of lexically than I do visually and graphically.
It might just be personal preference, but I don't know, the kind of paradigm breaking
that he's been engaged in, I think, is hugely
valuable. Are you going to make a
true integrated development environment?
So we are playing
with ideas around letting
the AI increasingly take tasks into the
background to run its code and react to the
output. And we think that
this should all work well together.
Like, you know, we focused a ton on
in-flow speed and control.
And we think that that's really,
really important for AI: you know, to
give programmers control over everything,
have them understand everything the AI is producing,
also to give them really, really fast iteration loops.
Programmers hate waiting for things.
But in some cases, we think it's now becoming possible to go tell the AI to think for a bit
and then come back to you, and have the interaction be a little bit more like the interaction
with another human being.
And we think you want that all to work well together.
So, you know, the AI can come back to you with 70% of something and then you can bring it into the foreground really quickly,
work with it and then spin it back off to the background.
And, you know, as part of having the AI spend a bunch of time thinking in the background,
to make that thinking useful, you kind of need it to run the code and then react to it,
or else it's just kind of staring at the thing that it wrote and thinking more.
Maybe I'm supposed to be the one answering the questions rather than asking them.
But do you think in five years the main thing that I'm looking at in cursor
will be code or something else?
I think it might be something else.
I think that, and this is a big, big, big simplification,
but kind of when you're defining what a piece of software is,
there's like the logic component,
which is what engineers spend a lot of time on,
of designing exactly how the software works.
There's also, for end-user applications
and things that have GUIs, there's like this visual component.
And I think that there is, you know,
maybe it's going to be us, maybe it's going to be someone else.
There is a future version of the world
where the way you interact with AI
is a little bit less like, you know,
it's a human helper that you're delegating work to
or looking over your shoulder
predicting the next set of things you're going to do.
And instead, it's a little bit more
of an advance in compiler or interpreter technology.
And it could lead you to a world
where programming languages actually change.
And they can start to get a little bit less formal,
they can start to get a little bit higher level,
they can start to be a little bit more about what you want
and a little bit less about how you do it.
And I think that it won't look like a Google Doc necessarily.
I think that there are things you want to keep around from programming,
like the naming of logic somewhere
and then using that in a bunch of other places.
I think that there's also this other element, too, of the visuals
of what a piece of software looks like.
And I think, you know, maybe us,
or maybe some other tool,
but I think there's a world where
kind of direct manipulation of the UI
starts to play a little bit more into it.
But these are kind of far-flung
experimental ideas.
In general, I will say,
and it's not terrible,
but I feel like...
it's interesting to me that we haven't
experimented in some sense
that much with the paradigm of programming
over the past 20 years.
And the many of the things we're discussing here
are from the 80s or the 70s
and there are way more developers, obviously, now
than there have ever been in the past,
but in some sense, the aperture of experimentation there
feels like it's really not that wide.
And again, the JavaScript ecosystem
and a couple of others have done some cool things.
And there's a lot of experimentation at the language level
with Rust and Go and everything else.
But at the kind of development environment level,
I don't know why it is, but maybe it's just too hard
and complicated now, but there's been less than I would have expected.
Yeah, I agree.
And I think...
Maybe this helps...
Something we're working on.
Maybe this explains Cursor's success to some extent
where you guys are the first people
to really take it seriously in quite a while.
Well, I mean, yeah, I think we also benefit
a lot from the why now of like there's now this
great new color to paint with
or a set of colors you paint with.
I think also there's just a ton of lock-in
with programming languages around both the neurons
in your head, of, like, programming languages
are kind of a complex UI for programmers to define
exactly how the computer should function.
And so, you know, people learn languages and those, you know, people don't like to learn that many things.
And then there's also the lock-in of you have a lot of logic sitting around in one language, and you need to maintain that.
And I actually think that that's a pretty interesting... one of our hopes is that, as AI programming gets better and better and better...
One of the downsides of working on professional applications with hundreds of people dealing with many millions of lines of logic is the weight of the code base really starts to weigh on you.
And so the feeling of being in a net-new code base,
where everything just feels effortless, goes away.
Everything's a chore: you have to change one thing here,
it breaks something else there,
and it becomes kind of this big ball of mud.
And making that effortless,
reducing the kind of weight of an existing set of logic,
I think is one of the areas in which AI can make programming better.
Someone said on Twitter today,
maybe it was Andrej Karpathy,
but maybe I'm misattributing that,
and, you know,
too many things to do with
vibe coding get attributed to Andrej,
like, you know, quotes to Churchill or Einstein or something.
But I think it was him.
But this person, whoever it was,
was making the observation that, you know,
it's one thing to be prompting the creation of code,
but another place where AI could conceivably do a lot to help
is in the beautification and the refactoring of code bases.
And you can imagine that, you know,
you're producing all this, you know,
a little bit ungainly, not quite
correctly factored, you know, detritus up front, and then,
nocturnally, this thing comes up behind you and makes it all, you know, beautifully factored.
And the only CS class I ever took was this class from Jerry Sussman.
It was basically focused on, I mean, he called it large-scale symbolic systems, but really
what he was trying to focus on was the idea of creating code bases and environments and
abstractions that were easy to modify.
And there were no assignments in the class
where you write something from scratch.
Every assignment was about modifying an existing system
and thinking about how you could design things
in such a way that those modifications,
and these might be quite deep modifications,
become straightforward.
And I think that's a lovely idea.
Obviously in practice, it's often very difficult to do that
given all the exigencies and pressures
of the things you want to ship today and next week and so forth.
But if you could have an AI...
Often when you're writing this stuff, you realize,
well, I really should be doing it the beautiful way,
but I'm not.
Maybe we could have an AI coming up behind
us to actually do it.
Yes, yes, maybe soon.
One thing that happens to a lot of developers...
A lot of people come to development because they care about building things.
They want to make things happen on the computer screen.
And so then that leads them to coding.
And then something that happens to, you know, a big group of developers
is they eventually realize the software they want to create is so big
that they can't write all of the code themselves,
and they have to go to humans to help them write the code.
And so maybe they then become an engineering manager, director,
or whatever it is, maybe they start a company, right?
And then most of the work becomes not typing code, it becomes coordinating amongst people.
Do you think that there are any ideas from programming that are helpful for the act of kind of programming amongst the organization to get a group of people to build software together?
Interesting.
I think taking APIs and data models really seriously.
If I was to do everything at Stripe again, I mean, there's a million small things that you would do different and even some kind of big things.
But the thing that I think we could maybe foreseeably
and beneficially have done differently
would be to have spent even more time than we did
on APIs and data models.
And part of the reason is, I guess, the Conway's Law effect
of how both of those things end up shaping the organization.
So I guess if you don't deeply internalize that,
then maybe you have
less control over the organizational dynamics than you might otherwise like to have.
But also, I think it ends up shaping not only, I mean, the weak version of Conway's Law is that
it shapes your organization. I think the strong version is that it substantially shapes your
strategy and just your business outcomes. And this isn't exactly maybe a version of that,
but I often reflect on how the iOS software ecosystem, for a very
long time and, you know, plausibly still today, was so much more vibrant and kind of vital and
successful than the Android app ecosystem. And, you know, there's a lot of things that are
different across those two ecosystems. There are now way more Android devices in use, I believe,
than iOS devices. But I think much of the reason that app developers tended to prefer
building their apps on iOS and releasing apps first on iOS,
and maybe the iOS version being better than the Android version or whatever,
is because the frameworks and the abstractions for iOS
were just originally better than the Android ones.
But I think that's a case where the right API design,
the right abstraction design,
ended up having just quite significant business ramifications.
And I think there's kind of a sense that maybe it's not worth dwelling on these things
because everything in technology changes so rapidly,
and whatever assumptions you make,
they'll be obsolete in two years or something.
I think in practice, that's not true.
And that's like, the right API design
and the right abstractions and the right data models
can really endure.
And for the first versions of iOS,
many of the classes that one used
were prefixed with NS.
NS, of course, standing for NeXTSTEP, right?
And so that's a case where the API design
survived for, you know, two decades or more.
And in the case of Stripe, you know,
Stripe is now 15 years old.
And, you know, there were lots of things
that we designed 15 years ago that are still, you know,
in use today, which is kind of good and bad in the sense that they endured, but also we are still,
you know, we are still under the...
Living with their faults.
Exactly.
And so, anyway, that's maybe the thing that I would...
That's the first thing that comes to mind.
In fact, on that final note, I was talking with an engineering leader at, you know, kind of
a preeminent, successful Silicon Valley private company.
And they were talking about how their code base is largely in Scala.
And they said that they like to think of kind of the beginnings of the startup as this big bang moment
where these, you know, tired, overworked, maybe overcaffeinated founding team members
are willy-nilly making these initial technical decisions that then dictate the lives of hundreds of professional engineers in the future.
And that Scala choice was one of them.
And they sort of live with the faults of that now.
But what were the consequential, whether good or bad, initial conditions
of the Stripe Big Bang
that you guys still live with right now?
I mean, I think that metaphor is...
Well, it sounds true to me, is the first thing I'd say.
I mean, maybe it's a little bit of kind of survivorship bias
where, like, the actual statement is the early decisions
that we made that we never changed
are decisions that we lived with,
but there's a kind of tautology there or something.
And there are certainly design decisions
we made pretty early on that are not true today.
So, you know, early versions of the Stripe dashboard
or something were built extraordinarily differently
to, you know, the dashboard today.
And the converse is also true.
So, you know, initially we decided to use MongoDB at Stripe
and we decided to use Ruby at Stripe.
And those are still quite foundational technologies at Stripe.
And, you know, we had to build a lot of, you know,
infrastructure in order to make MongoDB as fault tolerant and as distributed and as durable
and as reliable and everything as we needed it to be, and as it now is. Like, Stripe's critical
API availability last year was 99.99986%, which is 44 seconds of unavailability through the
whole year. Others don't publish statistics that are kind of as granular, but we believe that
is the best in the industry.
And so, you know, everything that our storage team has built, and many other teams,
you know, it ended up really working there.
But that was a quite important critical decision, initial decision.
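As a sanity check, the availability percentage and the downtime figure are two views of one number:

```python
# 44 seconds of downtime in a 365-day year, expressed as availability.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60          # 31,536,000
downtime_seconds = 44
availability = 100 * (1 - downtime_seconds / SECONDS_PER_YEAR)
print(f"{availability:.5f}%")                   # 99.99986%
```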
And, you know, Ruby, similarly, I guess companies sometimes change languages, you know,
along the way.
But I feel like the initial language chosen tends to have a...
I heard there were debates in Stripe about... or, actually, one of our co-founders
interned at Stripe early on.
Or not early on in Stripe's history, early on in kind of our collective personal history.
And he remembers there being documents upon documents about a potential Java migration.
Yeah. So that partly happened. As in, we have rewritten a bunch of key services in Java.
So some services for which, I don't know, throughput in particular is really important.
And if you torture Ruby enough and maybe rewrite parts of it, you know,
parts of some hot paths in C or something,
you can get it to be pretty fast.
But you're often fighting against the allocator
and various parts of even just like Ruby strings
are not that efficient and stuff.
So we've rewritten certain services in Java,
and now we use both.
Did you consider anything other than Mongo,
and why did you pick Mongo early on?
And what was the RFC process, RFP process,
decision-making process
for that?
It was just me and John, so, you know, we were sitting on the couch.
It was like, should we use Mongo?
Yeah, fine.
Did they get through to you with, like, a blog?
Or was it just the reputation of Mongo at the time and open source communities, something else?
I think it was, so I wrote a data store for our prior company, an object-based data store.
And I didn't really like SQL.
I thought there was too much of a translational kind of
mismatch between the domain of the application and that which SQL natively makes expressible.
And so with SQL, obviously, you have to collapse down into a relatively restricted set of
primitive forms, whereas in your application, you might have a concept of, I don't know,
like say in the case of Stripe, of money that doesn't like exactly comport with how the particular
SQL database you're using happens to represent money or whatever the case might be.
And so, yeah, I just had this, like, principled objection to SQL.
I'm not endorsing this or saying it was good, but as this interview shows, I suppose,
I had all sorts of, you know, strange notions about technology.
And with Stripe, we wanted to be a little bit more mainstream, you know,
a little bit less heterodox in our technology choices than our prior company.
And so instead of using small talk, you know, okay, we weren't going to go to Java,
but we went to Ruby, which at least on a relative basis seemed more mainstream,
and similarly, rather than write our own object database,
we went relatively more mainstream and used Mongo,
which still gives a lot of flexibility, you know,
by virtue of being a kind of object data store.
So that's fine.
Everything I've said might disqualify me from, you know,
ever making technology choices for another company, but...
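The mismatch he objects to is concrete: a domain object like money or a card nests naturally in a document store, while SQL collapses it into flat columns that the application must reassemble. An illustrative sketch, not Stripe's actual schema:

```python
# Illustrative only (not Stripe's schema): the same "charge" as a
# nested document (Mongo-style) vs. flattened relational columns.
charge_doc = {
    "id": "ch_1",
    "amount": {"currency": "usd", "value_minor_units": 1000},
    "card": {"last4": "4242", "exp": {"month": 12, "year": 2030}},
}

# The relational shape collapses the nesting into primitive columns,
# and the application reassembles the domain object on every read.
charge_row = ("ch_1", "usd", 1000, "4242", 12, 2030)

def row_to_doc(row):
    cid, currency, value, last4, month, year = row
    return {
        "id": cid,
        "amount": {"currency": currency, "value_minor_units": value},
        "card": {"last4": last4, "exp": {"month": month, "year": year}},
    }

assert row_to_doc(charge_row) == charge_doc  # round-trip reassembly
```

That `row_to_doc` layer is the "translational mismatch": every domain concept needs hand-written mapping code in the relational shape, and none in the document shape.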
Would you do anything differently about Stripe V2?
We haven't talked that much about it publicly yet,
And the answer might be a bit like, you know, the Zhou Enlai quote,
or the Deng Xiaoping one, about the French Revolution.
You know, it's too soon to judge.
And so back in 2022, I believe, we, I mean, to this discussion about data models and abstractions,
we realized that a couple of the core abstractions in Stripe were just not the right
long-term abstractions, and we had to fix that.
And so we designed a bunch of V2 APIs.
Fortunately, we had contemplated the possibility of this earlier at Stripe.
So, you know, most of the, you know, REST URLs that people are familiar with in Stripe are
prefixed with /v1.
They've been prefixed with /v1 since, you know, 2010.
And so then in 2022, we decided, okay, we might, you know, increment the
namespace.
So we designed those new APIs.
They started to ship
this year.
Congratulations.
Thank you.
And we're extremely excited about the functionality
that it's going to enable.
And without getting into the arcana of it,
they will enable things like
historically we have drawn
distinctions and represented separately,
things like end customers,
things like sub accounts,
things like recipients,
for different kinds of payments,
and we're unifying all of those
into being, you know,
into the same kind of entity representation,
which is on some level clearly the right answer
and, you know, makes a lot of sense, and
will, and is already, changing the businesses
of some of our customers,
because they can, you know,
enable their users to do various things
without having to re-enter details
or maybe to bring the same account
across different countries or whatever the case might be.
Anyway, it's been a long journey.
And the reason it was a long journey is,
I guess because it's not that useful
to just define these APIs in isolation.
If we just wanted to define them in isolation,
that's a pretty easy thing to do.
The thing that's difficult is to make them interoperable
with all the existing things at Stripe
and to build translation layers and so forth,
and then to figure out with our customers
what a sensible upgrade path might look like
because we control our code base,
we don't control theirs.
And so it's going to be, I don't want to exaggerate it,
but in certain respects at least,
it feels a bit more like an instruction set migration
for a chip architecture or something
where the instruction set by itself is easy,
but it's all the kind of coexistence questions
that become hard.
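The translation layers he mentions might look, in spirit, like this sketch. All field names and ID prefixes here are invented for illustration, not Stripe's API: an adapter maps an old v1-shaped record into a unified v2 shape, so existing integrations keep working while new code only sees v2 objects.

```python
# Illustrative only: field names and ID prefixes are made up, not Stripe's API.

def v1_customer_to_v2_entity(v1: dict) -> dict:
    """Adapt a v1-style customer record into a unified v2 entity shape."""
    return {
        "object": "entity",
        "id": v1["id"].replace("cus_", "ent_", 1),
        "roles": ["customer"],
        "details": {"email": v1.get("email")},
    }

legacy = {"id": "cus_42", "email": "a@example.com"}
entity = v1_customer_to_v2_entity(legacy)
```

The hard part, as he says, is not this mapping but keeping both shapes live and consistent at once, which is where the instruction-set-migration analogy bites.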
It started to ship this year,
and we're excited about it.
I mean, I guess your question was maybe what lessons we've learned from it.
And do you think there's anything bigger to draw out of that on either projects that are
rewrites or thinking about these kind of decades-long abstractions and how to do that well?
My trite answer to that is to unify everything you can plausibly unify.
How do you test design ideas for V2?
Well, the people designing it... well, I'll give you one other lesson, and then I'll answer that question.
So, just the other lesson, maybe just a bit of a token cliché.
And also, is there some chief API designer who's the mastermind?
And it's one person, it's not some sort of working group?
There is a working group.
There are working groups, but there is also a singular person who understands and is more than anyone else responsible for the whole.
And I think that's necessary.
My other kind of trite exhortation would be to make anything that plausibly could be an N-by-M relationship actually support that, because if you only support 1-to-N or N-to-1 or whatever,
even if it's non-obvious how it could possibly be N-to-M, just inevitably you'll end up needing that, and you'll think, well, you could never have a company that's owned by two different companies or something,
but it turns out that every permutation in the space is in fact eventually explored.
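The N-by-M exhortation is really a schema choice. A toy sketch, with hypothetical names: rather than storing a single owner_id on each company, which hard-codes a 1-to-N shape, ownership lives in its own association list, so a company acquiring a second owner needs no schema change.

```python
# Ownership as an association table (N-to-M) rather than an owner_id
# column on the company (which would force 1-to-N). Names are invented.

ownerships = [
    ("parent_a", "subsidiary_x"),
    ("parent_b", "subsidiary_x"),  # a second owner: no schema change needed
    ("parent_a", "subsidiary_y"),
]

def owners_of(company: str) -> list[str]:
    return [owner for owner, owned in ownerships if owned == company]

def holdings_of(owner: str) -> list[str]:
    return [owned for o, owned in ownerships if o == owner]
```

The "company owned by two different companies" case he mentions is just the second row above.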
As to how to do that well... like, these new APIs,
well, you asked the question, how do we know they're the right APIs:
partly from showing early versions of them to customers,
partly because the people who designed them had spent many, many years
witnessing and living with the shortcomings of the prior version.
So we were kind of coming with strong opinions.
But even the strong opinions,
one can sometimes predict wrongly or extrapolate wrongly
or over-engineer something or whatever.
So I think the cycles of customer validation,
customer feedback are extremely important.
I think it's also very important,
and we did a lot of this to literally write the integrations
that would exist in the new world.
Because, I mean, you really...
I mean, I think Java is maybe an example of, yes, it fixes a bunch of problems with memory management or whatever
that existed with C or C++ and antecedents, but at the cost of a lot of prolixity and overhead.
And in order to kind of safeguard ourselves against inadvertently overengineering things,
we forced ourselves to write a lot of API code specifically describing how we would implement various
business models and flows and so forth,
just to make sure that when you look at it, it feels right.
But I don't want to endorse our approaches too strongly just yet.
I mean, I'm feeling very optimistic,
but, you know, we're, I don't know what fraction,
but 60, 70% done or something, but not like 100%.
And so I don't want to, you know, prematurely declare victory.
How do you, Patrick Collison, use AI?
Well, I...
The main ways are the predictable ones,
where I use LLM chat tools a lot.
And I use them
mainly for answering
kind of factual or empirical questions
I'm curious about.
So for deep-research-style questions.
I don't always use deep research,
and now that the LLMs are getting better
at tool use and just navigating the web themselves,
You don't need deep research as much,
but for answering empirical or factual questions.
I wish they were useful for writing,
but I usually end up dissatisfied with the writing that they produce.
So I don't reuse them very much for that.
And even for editing or grading my own writing, I mean...
Have you seen any improvements as the models have progressed on the writing?
I agree, also.
It's surprisingly generic.
Yes, yes.
I'm trying to prompt it to not be generic.
Yes, yes, yes.
Inserting names of people who...
Yeah, yeah, yeah.
And it just doesn't work.
And so I've been disappointed at the times when I've given it.
People tell me that the base models are better at this.
And it's the sort of normification of RLHF that puts it in some kind of attractor basin.
Yeah, I have not succeeded in using them effectively there.
People say that Claude is better and o3 is better than earlier OpenAI models.
And on a relative basis, that might be true.
but I don't want to sound
you know
self-laudatory here
and suggesting that I'm
some particularly talented writer
I don't think I am. It's just like my personal style
differs from the
personal style so to speak of the models
and you know in some
self-centered way I
when I write I want to use my personal style
so I use them for the factual stuff
a lot and I find them like terrific for that
and even when I'm reading a book I'll sometimes
I've been recently using Grok's voice mode,
and I'll just passively ask questions while I'm reading,
and Grok is just listening in the background,
and the answers are very helpful.
And then I obviously use LLMs for writing code,
typically mediated through Cursor.
So we are interviewing you, Patrick Collison,
as kind of the, well,
if you had to pick the archetype
of a software industrialist.
I feel like you would be kind of straight out of central casting for a number of reasons.
One is that you are running a large software company, a successful large software company.
Two is you started as a programmer and then moved to running the company.
And then three is the company also builds things for developers.
And so it's kind of the intersection of, you know, many circles in the Venn diagram.
And so it's helpful to hear about, you know, discussing kind of experiences with Stripe.
We are also interviewing Patrick Collison,
the moonlighting
economist and student of the world.
And so are progress studies doomed
now that AI is here?
Is there any need for them?
Well, I was going to say I think the need for progress studies
is increased, but again, I don't mean to suggest
that proper-noun Progress Studies
sees increased need,
but I think the kinds of questions
that progress studies tries to answer
are now more pressing
and urgent because I think
the degrees of freedom are increasing. And I think there's some
Panglossian view that AI will just magically solve all the problems, and,
you know, predictions of the future are hard, but one, I don't think that's true. And two,
in as much as we have, you know, evidence to date, I don't think that's been the track record.
So I think that, you know, how we use these things, what kind of decisions we make,
what kind of, you know, considerations and, you know, margins of human welfare, we need to further,
you know, I think all those judgments are going to really matter.
And maybe a critique you could have leveled at progress studies or progress studies style thinking
five years ago is these are all nice questions, but the world is on a kind of foreordained
escalator path to, you know, some kind of teleological outcome.
and I don't think the world feels that way today
or certainly it feels much less that way today than it did.
Because of global affairs or something else?
No, I mean, maybe somewhat global affairs,
but maybe it's a trifecta: first, global affairs writ large.
Second, I think that aspirations and ideals
are becoming contested more actively.
And there's an ambiguity these days in the U.S.
as to what the left and the right even stand for.
And I guess we currently have one party endorsing tariffs
and another party opposing them,
but with the valences kind of shifted, flipped from what one might have expected historically.
And then third, yeah, obviously technology, first and foremost AI,
but also, in our industry, stablecoins,
the rise of China as the preeminent manufacturing power,
in many technologies of the future,
like drones and robots and batteries and solar, you know, etc.
So, yeah, in many different ways, I feel like the future is, you know,
Peter Schwartz has this concept of, you know, the Schwartz window
as the window of, you know, contemplatable futures in, you know,
whatever number of years hence.
And I feel like that Schwartz window,
as of say 2005, as we contemplate the world of 2015,
was fairly narrow and was correctly fairly narrow.
I think the world of 2015 did in fact unfold largely
the way we would have expected in 2005.
And I feel like today in 2025, that window for 2035,
like it feels extremely broad.
So yeah, I think the progress-studies-style questions
are more pressing.
So you were on the record
in saying that people should focus more on the question of
why we don't see improvements in productivity numbers
as information technology increases
and also as more people have started working on science and technology
and more money has gotten into it.
And what do the numbers look like now?
Do we see AI in the numbers?
There was a new paper published on this very recently,
like in the past couple of days,
that I've not had a chance to read. I just saw it today.
So I've at this moment only read the abstract.
Its claim is that one does not, in fact, observe productivity improvements stemming from use of language models.
Now, I certainly can't...
Do you know what they're looking at?
They appear to be undertaking some kind of natural experiment looking at the individual level based on intensity of LLM usage.
But I certainly cannot endorse their methodological rigor, and upon understanding it better, I might be...
I might be either really impressed and find it very credible or horrified.
I don't know.
But that was just the finding I happened to stumble upon today.
I mean, look, overall, GDP growth in the U.S. looks, well, over the last two years,
it's been somewhat better than we expected.
Obviously, we're speaking right now at a kind of volatile time.
We certainly don't see any evidence for exponential takeoff.
And, you know, in as much as we thought that the encouraging
GDP figures that we have seen in the US
for the last two years are attributable to some of these new
technologies, I think you would also expect
to see them in other countries, right? Because these technologies
are quasi-public good. Anybody can
use these LLMs.
GDP growth outside of the US
has not been that encouraging. We're
not living in some massively accelerated
period of economic growth for the world
writ large. And so, you know,
obviously it's early days, but
I think
we're seeing that the diffusion of these
technologies through the economy really takes time and involves substantial complexity.
And maybe just the last point on that is, I believe Jack Clark said this in an interview with Tyler Cowen.
Jack Clark is one of the co-founders of Anthropic. And, you know, Anthropic, to some extent, has, well,
Anthropic has always taken the concepts of AGI and even ASI, I feel, extremely seriously.
And, you know, Dario speaks about this publicly. He's written about it, you know, et cetera.
And, again, Jack Clark, one of the co-founders, he said that he expects AI to
increase GDP growth by half a percent a year.
And I thought that, I mean, I interpret Jack as really an optimist.
And half a point a year is, in fact, a lot of incremental GDP when compounded.
So I'm not saying that that's small, but I think it's interesting that that was his figure.
Yes.
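For scale, the arithmetic behind "half a point a year is a lot when compounded" is just exponentiation:

```python
# An extra 0.5 percentage points of annual GDP growth, compounded.
extra_30yr = 1.005 ** 30   # after 30 years the economy is ~16% larger
extra_50yr = 1.005 ** 50   # after 50 years, ~28% larger
```

So a figure that sounds modest year-to-year is, as he says, a lot of incremental GDP over decades.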
Do you think that, with the form factor that AI is taking in the economy right now,
if we just kind of stretch the line forward, do you think we're going to need new measures
of economic productivity beyond what we have right now?
So, like, assume real productivity goes up,
assume that AI keeps getting better,
it gets kind of deployed in the ways you would expect.
Do you think we'll need new measures?
Or it should show up in the numbers.
No, no, I don't think so.
Like, I'm not saying that GDP is perfect.
I think GDP can be improved,
but in any world where what we generally think of
as the economy is massively enhanced,
it'll show up in GDP, I believe.
When will we be able to program human biology?
I'm very excited about this. At ARC, which is this biomedical research organization, which I was
involved in founding, we're working on training foundation models for biology using DNA and things like that.
We're working on a virtual cell. And generally we're trying... I mean, a thing that
I didn't appreciate until really spending more time in biology is that we, like we humanity,
have never cured a complex disease. So, you know,
one ontology or schema or something of diseases would be: you have infectious diseases,
the flu, the cold, COVID, whatever, and tuberculosis and, you know, various diseases with high
mortality rates. Then you have monogenic diseases, where, you know, there's just sort of one genetic
mutation that is responsible for the disease, like Huntington's. And then you have complex diseases.
And the complex diseases are kind of the residual that are now left after we've cured most of the
problematic infectious diseases, at least in the Western world.
Most cardiovascular disease, most cancers, most autoimmune disease, most neurodegenerative disease, et cetera.
For certain of these conditions, we have maybe treatments that help like statins with cardiovascular
disease, but for none of them can we really say that we've cured it, that we, like, understand
the causal pathways in, you know, meaningful detail, and that, you know, it's just, you know,
we can vaccinate against it or something.
And, I think, this is our hypothesis, you know, could be wrong, is that this is in part
because we don't have experimental and kind of,
maybe epistemic is too grandiose a word,
but kind of epistemic technology that's up to the task.
Like, the pleiotropy of the genes,
in terms of all the different parts of the body
and the systems and the mechanisms inside the cell
that they affect, means there's so much combinatoric complexity there.
And then the environment is such a vast
and difficult-to-quantify thing that
it's really hard to understand, for any of these conditions, you know, the etiology and the dynamics and so forth.
Okay.
Then over the last 10-ish years, I mean, a bit longer, but a lot of the development has happened the last 10 years.
We've gotten three new classes of technology in biology.
For reading, we've gotten much better sequencing technology: single-cell sequencing, single-cell sequencing of
RNA, and those improvements.
At the kind of think level, we've gotten neural networks and deep learning and transformers and everything there.
I mean, they've existed for a long time, but we've gotten the recent improvements in them and the transformer in particular.
And then on the write side, we've seen obviously huge improvements in functional genomics and CRISPR and bridge editing, which is a technology that came out of ARC,
the ability to kind of make very specific directed perturbations in cells.
But if you put those together, you now have the ability, again,
at the kind of level of the individual cell, to read, think, and write.
And this starts to really feel like a new kind of Turing loop
and to have its own sort of completeness.
And, you know, we will see how much this can do against these complex diseases
and whether sort of this systematic approach is up
to the task of shedding new light on their dynamics,
but we are hopeful and excited.
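The read/think/write loop can be caricatured in code. Everything below is a toy stand-in, not a real biology pipeline: "write" perturbs one gene, "read" measures the resulting state, and "think" decides the next perturbation from what was read.

```python
# Toy closed loop: drive every gene "on". All names are hypothetical.

def write(state: dict, gene: str) -> dict:   # perturb: flip one gene
    new = dict(state)
    new[gene] = not new[gene]
    return new

def read(state: dict) -> int:                # measure: count genes that are on
    return sum(state.values())

def think(state: dict):                      # decide: pick the next gene to flip
    for gene, on in state.items():
        if not on:
            return gene
    return None                              # converged: nothing left to do

state = {"g1": False, "g2": False, "g3": True}
while (gene := think(state)) is not None:
    state = write(state, gene)
```

The "completeness" point is that once all three steps exist at single-cell resolution, the loop can run autonomously, which is what makes it feel qualitatively new.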
If we here at Cursor and also others in the industry
are successful in automating lots of programming
as we know it today and replacing it with a form of software building
that's much higher level and more productive
and it's much more just focused on defining
what you would like the software to look like.
If we succeed in that,
who are you long?
People talk about the designers
and how this will be like a renaissance for them
but are you long,
the grad students?
I mean, there are lots of really, really amazing grad students
who are awesome but maybe
are less skilled at making things happen on computers.
But who do you think is the most unexpected beneficiary
of a world where both many more people
can make things on computers,
and also, especially if it's an evolution away
from programming, you know,
the people who are already making things on computers
are much, much, much, much more productive?
I don't have a high confidence answer to that.
There's all sorts of trite stock answers,
like real assets, especially constrained real assets.
Maybe we should be long SF real estate or something
because, you know, it is one of the most beautiful cities in the world
and will be enduringly.
So maybe we should be long the inputs and the ingredients to these systems
because, you know, demand for them will go parabolic.
and so maybe we should be long copper,
maybe we should be long positional goods and celebrities
and, you know, Taylor Swift's music catalog.
There's a lot of, I think, compelling theories here,
but part of what I think is interesting at this economic moment
is the unpredictability and the contingency and kind of sensitivity
to the precise assumptions in the technology trajectory itself.
And the shape that it takes in five or ten years or whatever
I think is going to do a lot to determine the answer to that.
And as I look backwards the last couple of years,
I'm struck by how many predictions have held up, you know, reasonably poorly,
even for people who are on the face of it, you know, extremely well informed.
And so I've asked a lot of people this question,
and I have not heard any answers that are so compelling that I feel like I have conviction.
So we are very happy to be serving Stripe and you guys' mission.
What would you like us to build?
How can we make Cursor better for you?
Either you, Patrick Collison, or you Stripe.
Well, you guys are already making Stripe better.
So keep doing what you're doing would not be a bad outcome from our vantage point.
Cursor has today hundreds and soon thousands of extremely enthusiastic stripe employees who are daily users of cursor.
And they report that it's a very significant productivity enhancement.
We'll wait for the economic numbers.
Well, the economy is pretty big, and these diffusions take time.
Yes, yes.
So, you know, it seems kind of greedy to want more.
Stripe spends more on R&D and software creation than we spend on, you know, any single undertaking.
And so if you're making that process more efficient and more productive,
then, you know, maybe it seems greedy to want anything more.
If I'm being selfish, okay, three things.
Perfect.
The runtime characteristics and integration stuff that we just discussed, I think, would be really valuable.
I think the refactoring and the beautification stuff that, again, we also talked about, I think would be extremely helpful.
And I think really change our degrees of freedom, as in if you could lower the cost of future changes to Stripe and improve the quality of the architecture.
And then third, we really care about, what we call at Stripe, craft and beauty,
and we want our software to be well-designed and pleasant to use,
and pleasant to use not only in the superficial pixel sense,
but also in the deep it works very well sense
and is something you can set up and largely forget about and just trust
or forget about it in as much as you want to.
There's obviously a concern with AI that it leads to the creation of more slop
and more kind of crappy things,
but not more of the best things.
I don't know what it would be
that Cursor would do to ensure that
the world is creating more of the best software
and not just more software,
but I think that's an interesting and important dimension.
So those would be my...
Besides all the obvious things to do,
those would be three suggestions.
Amazing. Thank you, Patrick.
All right. Thank you for having me.
Yes.
Thanks for listening to this episode of the A16Z podcast.
If you like this episode, be sure to like, comment, subscribe,
leave us a rating or review, and share it with your friends and family.
For more episodes, go to YouTube, Apple Podcast, and Spotify.
Follow us on X and A16Z and subscribe to our Substack at A16Z.com.
Thanks again for listening, and I'll see you in the next episode.
This information is for educational purposes only and is not a recommendation to buy, hold,
or sell any investment or financial product.
This podcast has been produced by a third party
and may include paid promotional advertisements,
other company references, and individuals
unaffiliated with A16Z.
Such advertisements, companies, and individuals
are not endorsed by AH Capital Management LLC, A16Z,
or any of its affiliates.
Information is from sources deemed reliable
on the date of publication,
but A16Z does not guarantee its accuracy.
