No Priors: Artificial Intelligence | Technology | Startups - Building AI products at non-AI companies with Emily Glassberg Sands from Stripe
Episode Date: February 8, 2024

Many companies that are building AI products for their users are not primarily AI companies. Today on No Priors, Sarah and Elad are joined by Emily Glassberg Sands, who is the Head of Information at Stripe. They talk about how Stripe prioritizes AI projects and builds these tools from the inside out. Stripe was an early adopter of utilizing LLMs to help their end users. Emily talks about how they decided it was time to meaningfully invest in AI given the trajectory of the industry and the wealth of information Stripe has access to. The company's goal with utilizing AI is to empower non-technical users to code using natural language and for technical users to be able to work much quicker, and in this episode she talks about how their Radar Assistant and Sigma Assistant achieve those goals.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @emilygsands

Show Notes: (0:00) Background (0:38) Emily's role at Stripe (2:31) Adopting early gen AI models (4:44) Promoting internal usage of AI (8:17) Applied ML accelerator teams (10:36) Radar fraud assistant (13:30) Sigma assistant (14:32) How will AI affect Stripe in 3 years (17:00) Knowing when it's time to invest more fully in AI (18:28) Deciding how to proliferate models (22:04) Whitespace for fintechs employing AI (25:41) Leveraging payments data for customers (27:51) Labor economics and data (30:10) Macro economic trends for strategic decisions (32:54) How will AI impact education (35:36) Unique needs of AI startups
Transcript
Discussion (0)
Today, Sarah and I are joined by Emily Glassberg Sands, who's the head of information at Stripe,
which includes data science, growth, machine learning, infra, business applications, and corporate
technology. Emily was previously the VP of Data Science at Coursera, where she led development
of AI-powered products to have personalized learning, scalable teaching, skill measurement, and more.
We're excited to talk with Emily today about Stripe, AI, FinTech, and education.
Emily, welcome to No Priors.
Thanks so much for having me.
Oh, yeah, thanks so much for joining.
So you now lead information at Stripe.
Can you tell us a little bit more about what the organization does,
how it's evolved under your tenure,
and what are some of the span of responsibilities that you're focused on?
Yeah, so I joined Stripe back in 2021,
originally actually to lead data science.
And David Singleton, Stripe's CTO, reached out.
I didn't know a ton about Stripe,
but I knew millions of businesses were using it to collect payments,
which had to mean really interesting data on those businesses
and on a large swath of the economy.
Stripe's clearly helping companies run more effectively
and also in a position to learn from its data
what kind of interventions significantly improve
companies' long-term success.
And in some cases, to actually action those.
Today I wear two hats.
So the first is I support a bunch of different teams
that are together tasked with enabling the effective use of data
across Stripe.
And this includes, you know, from decision-making
internally to building data-powered products,
we've been investing a bunch in foundations,
which includes building out our ML infrastructure
and better organizing our data,
you know, the really sexy stuff,
but also in applications like seeding a bunch of new Gen AI bets
and getting them out to our users.
So that's kind of hat one,
and then second, I'm accountable for our self-serve business.
So a huge number of SMBs and startups
come to Stripe directly to get started.
They self-serve through the website,
and we're really focused on understanding
who those users are, getting them the right shape of integration efficiently,
building product experiences that meet their needs, including as they grow,
and growing the portfolio of products they use.
So for many of our users, it's not just payments, but invoicing or subscriptions or billing
or tax or rev rec depending on what their business model demands.
Yeah, and I guess Stripe for a long time has been doing different things in ML in terms
of traditional ML, you know, I think fraud detection and the fraud detection API that
you all have is one example of that. But you were actually quite early in terms of adopting
LLMs and sort of early generative AI models. Could you tell us a little bit more about how that
came about, how the interest was sparked, and how adoption really took off? I mean, I think it's fair
to say that Stripe isn't first and foremost an AI company. As you know, fintechs, including
Stripe, have long used traditional ML in many contexts, including sort of fraud and risk. But first and
foremost, we're building financial infrastructure for the internet. So Stripe got started by enabling
first really digitally native startups to accept online payments. And then over time,
millions of companies started relying on Stripe's financial infrastructure for a bunch of
different needs, whether that's reducing fraud or managing money flows or unifying online or
offline commerce, all the way to launching embedded financial offerings. And so, as not a kind of
first-and-foremost AI company, we probably, like a lot of people listening to this
podcast, had kind of our "hey, what the heck can we get it to do?" moment a year or so ago
when LLMs really broke through the zeitgeist.
And we were looking at the technical breakthroughs and the product launches all over the
ecosystem with awe, but also honestly a little bit of overwhelm, the sense of, well,
there's very clearly a real opportunity here to better serve our users, but what is it exactly?
And how do we get it off the ground quickly and safely?
So it starts with a story of three engineers who hacked together in three weeks an internal beta for an LLM Explorer.
And the basic idea of LLM Explorer was, hey, let's get a chat GPT-like interface in the hands of the 7,000 talented Stripe employees and really let them figure out how to apply it to their work.
Our leaders all the way up to John and Patrick have intentionally crafted this strong culture of kind of bottoms up experimentation.
And we think a lot about sustaining it internally as we grow.
And with LLMs, it was no different, right?
And so where we started was, let's quickly unlock internal experimentation.
Let's get LLMs safely in the hands of all employees at Stripe.
The enthusiasm was palpable, you know, at Stripe as it was across industry.
We knew the experimentation was going to happen.
And so we really wanted to make sure that we enabled it to happen well and safely.
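Getting LLMs "safely in the hands of all employees" usually hinges on exactly the kind of guardrail Emily mentions later for LLM Explorer: stripping PII out of prompts and rehydrating it in responses. A minimal sketch of that pattern, purely illustrative and not Stripe's implementation (the regex and placeholder scheme are assumptions):

```python
import re

# Hypothetical sketch of the "strip PII, rehydrate" guard described for LLM
# Explorer. The regex and placeholder scheme are illustrative, not Stripe's.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def strip_pii(prompt: str):
    """Replace emails with numbered placeholders; return redacted text and the mapping."""
    mapping = {}
    def repl(match):
        token = f"<PII_{len(mapping)}>"
        mapping[token] = match.group(0)
        return token
    return EMAIL.sub(repl, prompt), mapping

def rehydrate(text: str, mapping: dict) -> str:
    """Restore the original values in the model's response."""
    for token, value in mapping.items():
        text = text.replace(token, value)
    return text

redacted, pii = strip_pii("Draft a reply to jane@example.com about her refund.")
# `redacted` is what actually leaves the building; the response is rehydrated:
answer = rehydrate("Sure! I will email <PII_0> today.", pii)
```

A production system would cover far more entity types (names, card numbers, addresses), but the round-trip shape is the same.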
It feels to me like when people start adopting LLMs, they tend to do it in sort of three areas as an
enterprise. There's what are you doing in terms of external products and how do you incorporate
it? There's how do you use it for internal tools or use cases? And then there's, what are your
vendors doing? You know, if you're using intercom or Zendesk, are they adding it? And if so,
how do you think about that as a company? The third one seems to be each team kind of deals with
it as their vendors bring it up. I found in general, people have tended to follow the pattern that
you mentioned, which is I start off kind of thinking, hey, what should we do externally? And then
they immediately collapse into doing something internally, just so that people get their hands on it,
they try it out, they kind of see what it does and how it works, and they get some internal
efficiencies. And then they start thinking about the external product side of it.
You know, was there anything you did specifically to start to promote that internal usage?
Did you do a big internal hackathon? Did you try other ways beyond sort of the some of the things
that you mentioned in terms of adoption internally just so that you start spreading the thinking and
knowledge about it? I think you're spot on that a lot
of companies, you know, the first place they go in their mind is how can this manifest in our
product, how can we help our users? And then they realize, hey, any one person can't actually
answer that question. We need to be putting this in the hands of folks with a range of different
backgrounds and expertise, thinking about different parts of our product and business to really apply
it. And that's exactly what kind of this built in three weeks beta did. Within days, a third of
Stripes were using it. And you can think of LLM Explorer basically as a front end that supports
multiple models in the back end. So we started just with GPT 3.5 and GPT4, but today we serve
over a half dozen models through the tool. We knew it needed to have certain security features,
stripping PII and rehydrating, et cetera, straight from the start. And we spun up, yes, a Slack channel
and a hackathon and more to help kind of build momentum. We didn't actually need to do much to build
momentum, though: within days, a third of Stripes
were using it. And so from there, we started to look at, okay, what are they
using it for? What do we see in the logs? And the answer was,
Stripes were using it for all sorts of things, honestly. But there was
this opportunity to create more community and sharing in the tool
directly so that they could build on each other's work and weren't sort of
doing that inside Slack. So a simple example:
shortly after we launched the original tool, we set up this little
functionality called presets, and it basically just lets you save and
even share your prompt engineering. Maybe this exists; if not, some startup should go build it for
everybody. And then everyone else at Stripe can like search and upvote and you see what bubbles to the
top. And basically overnight, we had like 300 of these reusable LLM interaction patterns. And they ran
the gamut. But, you know, just an example, like thousands of Stripes still today use the
Stripe style guide, which basically, you know, I don't care if you're a product marketer writing
copy for the website or a sales development rep writing a cold email or like you're an exec who's
preparing for a meeting. You run
your copy or talk track or whatever through this style guide and it returns back to you
the same content in Stripe tone. The enthusiasm was palpable and we had to figure out ways
to harness it and build more of a community around it. The weekly active user count of this
LLM Explorer is still at almost 3,000, which is just shy of half the company using it every single
week. And yeah, for sure, engineers, but also a ton of salespeople and marketers and all those folks.
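The presets mechanism she describes (save a prompt, share it, let colleagues search and upvote) is simple enough to caricature in a few lines. Everything here, the class and method names and the example presets, is invented for illustration:

```python
from collections import defaultdict

# Toy version of presets: save a prompt template, let colleagues upvote it,
# and surface what bubbles to the top. All names here are invented.

class PresetStore:
    def __init__(self):
        self.presets = {}             # name -> prompt template
        self.votes = defaultdict(int)

    def save(self, name: str, template: str):
        self.presets[name] = template

    def upvote(self, name: str):
        self.votes[name] += 1

    def top(self, n: int = 5):
        """Preset names, most-upvoted first."""
        return sorted(self.presets, key=lambda name: -self.votes[name])[:n]

store = PresetStore()
store.save("style-guide", "Rewrite the following in Stripe tone: {text}")
store.save("cold-email", "Draft a concise cold email about: {topic}")
for _ in range(3):
    store.upvote("style-guide")
store.upvote("cold-email")
# store.top() now lists "style-guide" first
```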
So I think there's a lot that the technology can actually do to create the community. And then the
next step is, okay, how do you get it from this like internal prototyping to actually like
enabling also more production grade solutions? How did you begin to look at that data or go from
explore to exploit here? Right. So one of the Stripes I talked to said that I should ask you
about applied ML accelerator teams. I don't know if that's like here or further along in the
funnel of like, you know, your plan to get these new AI capabilities distributed across
the company in real ways? We can talk about it here and we can talk about it later. So the idea
of accelerators is basically ring-fencing one- to two-pizza teams, multiple of them, to get new
AI bets seeded. And one of the accelerators was actually what produced this LLM Explorer. So
it's very hard to just pull three engineers off of, you know, their work building radar to build
an LLM Explorer. So we have this sort of experimental bets funding internally. It's run out of, you know,
by David Singleton, so out of our CTO's office. And it'll basically be like, hey, we'd like to build
a one pizza team and we want to fund it for six months, so relatively durable. And here's roughly
the charter and here's roughly the milestones. But we're going to learn and
iterate as we go. And so actually, this infrastructure is an example of an output from the accelerator.
We have other accelerators that are working on the applied side. So, hey, you know, we know that
given the advances in LLMs in particular, there's way more we can do for our support experience,
both user-facing and also internally for our ops agents. And to your comment earlier, Elad,
for sure, there's third-party solutions we can buy. But is there some, you know, homegrown solution
that can actually be used across a variety of internal applications,
and can we just go build that?
So that's another example of the kind of thing
that our applied accelerators build.
And I think, you know, the applied accelerators aren't
a case where you fund a one-pizza team
and you go hire these people externally.
They're actually opportunities for growth and development
for internal talent.
So the vast, vast majority of folks who join the accelerator
have been at Stripe for many years.
They're doing a rotation onto the accelerator.
It's likely to become their permanent home, but that's up to them.
You mentioned you have some favorite applications that have already like these sort of assistant
capabilities. Can you talk about some of them?
The primary ways we're finding LLMs useful today at Stripe in user-facing applications
is first, automating the writing of code, and then second, accelerating information retrieval.
And both are proving really powerful for our users.
So on automating code, Radar Assistant and Sigma Assistant are two new products
that are in beta and rolling out to all users soon.
Radar Assistant is really about generating custom fraud rules from natural language.
So most folks listening probably have heard of Stripe Radar.
It was one of our first non-payments products.
It's an ML powered product.
It helps identify and block fraudulent transactions.
But then in addition to the core Radar product, which works generally under the hood
without any user-provided direction, we have Radar for Fraud Teams,
which is about letting users write custom rules.
So maybe you know you don't have any customers in a given country
and you want to block any transactions from IP addresses in that geo.
To generate these rules, our users' employees used to have to code up the rules themselves,
but Radar Assistant lets them use natural language to write those rules.
So it's a little thing, but speed matters a bunch in fighting fraud.
You have to work faster than the fraudsters.
And with Radar Assistant, a whole range of people in an organization, from
fraud analysts all the way to less technical folks can implement rules quickly and directly
without having to work through a developer. I actually think that's, and I do want to hear
about Sigma Assistant, I think that's a really interesting pattern that applies beyond perhaps
the fraud world because there are so many, let's say, like just decision engines today that
are some combination of heuristics and then machine learning together. And I think that will
continue and the ability to take natural language explicitly describe policy and have that work
really well with less engineering assistance, I think is going to be useful in like other domains.
Like, you know, could be underwriting, could be fraud, could be other choices.
Totally. And, you know, I think for some of our customers, and this is opening the aperture
in terms of which employees can use solutions like custom radar rules, but for a lot of our
customers, it's allowing them to use these solutions for the first time, right? So think
about the non-technical small businesses on Stripe, a bunch of them. You don't have to be
technical to get started on Stripe. You can use our no-code integrations. You need payment links.
You can use hosted invoices. These are companies who wouldn't dream of coding up custom fraud rules.
They just don't have the developer skills on hand. And so not
needing them, being able to use these tools with just plain English, I think is really powerful.
And more broadly, I really love that democratizing power of generative AI. And it's very
much aligned with our founding ethos.
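To make the Radar Assistant idea concrete: the product maps a plain-English request onto a rule in Radar's rule language. The sketch below fakes that mapping with a regex instead of an LLM, and the rule syntax shown is an approximation rather than an official reference, so treat both as illustrative assumptions:

```python
import re

# A deterministic stand-in for the LLM step, just to show the shape of the
# mapping; the "block if :ip_country: = 'XX'" syntax is an approximation of a
# Radar-style rule, not an official reference.

COUNTRY = re.compile(r"block .*from .*\b([A-Z]{2})\b")

def nl_to_rule(request: str) -> str:
    """Map one narrow phrasing of a request to a Radar-style rule string."""
    match = COUNTRY.search(request)
    if match is None:
        raise ValueError("not understood; the real assistant would ask an LLM")
    return f"block if :ip_country: = '{match.group(1)}'"

rule = nl_to_rule("Please block any transactions from IP addresses in BR")
```

The LLM replaces the brittle regex with something that handles arbitrary phrasings; the important part is that the output is a reviewable rule, not a direct action.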
You were also going to talk about Sigma Assistant.
Sigma Assistant is similar in that it generates code from natural language, but it's in a
pretty different context.
It's actually applied to generating business insights.
So Sigma is our SQL-based reporting product.
It lets businesses analyze and get insights directly from their Stripe data.
And Stripe data is, as we've talked about, pretty interesting.
For most of our users, it's all of their revenue data.
So which customers, where, are buying what, for how much?
Who's retaining, who's churning,
pretty central to a bunch of different decisions the firm has to make.
And Sigma Assistant is all about making sure our customers' employees
don't have to speak SQL to get access to those business insights.
They can just use natural language to ask questions of the Stripe data.
Some of the folks in the beta are asking, you know,
really interesting questions and getting them answered,
you know, from the very basic,
how much revenue did we generate in December,
to, you know, what types of customers tend to be most
delayed with their payments. So we're excited to be rolling that out broadly later this year.
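A Sigma-Assistant-style flow has two halves: ground the model in the user's schema, and make sure whatever SQL comes back is safe to run. This sketch shows both halves under invented names; the schema, prompt wording, and guard are assumptions, not Stripe's code:

```python
# The schema string, prompt wording, and read-only guard below are assumptions
# made for illustration; they are not Stripe's code.

SCHEMA = "charges(id, amount, currency, customer_id, created, status)"

def build_prompt(question: str) -> str:
    """Ground the model in the user's schema before asking for SQL."""
    return (
        f"You write SQL over this schema: {SCHEMA}\n"
        f"Question: {question}\n"
        "Return a single SELECT statement."
    )

FORBIDDEN = {"insert", "update", "delete", "drop", "alter", "create"}

def is_read_only(sql: str) -> bool:
    """Accept only statements that start with SELECT and contain no write verbs."""
    tokens = sql.strip().rstrip(";").lower().split()
    return bool(tokens) and tokens[0] == "select" and not FORBIDDEN & set(tokens)

guarded = is_read_only("SELECT sum(amount) FROM charges WHERE status = 'succeeded'")
```

A real guard would parse the SQL properly rather than tokenize it, but the design point stands: the model proposes, a validator disposes.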
Where do you hope all this technology to be in one or two years? Like how do you think
generative AI will impact your business, your customers, the way you do things? When I step back and
ask, where should we be in kind of three years, five years, I think the vision, the opportunity is
much bigger than what we could do in a year. With a fintech lens in general, I think the current
sort of gen AI advances beg the question of, what does it actually mean to apply generative
AI to the economy at large? You could start with payments optimization. I think folks know that we do a
bunch of back-end and front-end optimizations for payments. Is there some actually new foundation
model built on financial data that would blow the existing conversion and auth and fraud and cost
optimization models out of the water? You know, we can do incremental model improvements today,
and quarter over quarter they drive meaningful bps of uplift.
But it doesn't feel crazy to think that a good foundation model could outperform more traditional
approaches by, I don't know, 100 bps, 200 bps.
So I think just in payments optimization alone, we can ask the question of what might
foundation model look like in that context.
And then I think where it's really interesting with generative AI on all this payments data
is, can we actually become more of the economic operating system for our users?
You can imagine all sorts of ways this could be productized, everything from a dashboard of
insights and recommendations to like an API you hit to get customer level predictions, to like,
you know, turning important business model and personalization decisions, so pricing, recommendations,
discounting, kind of on autopilot with Stripe. We know we can abstract away a bunch of the need
for our users to worry about payments and refunds and disputes. But you could imagine also starting to
tackle those sort of higher order tasks, understanding the value of users and setting the right price
and determining the geo strategy.
And then this is more on a macro level,
but businesses rely on all sorts of economic signals,
CPI for tracking inflation or small business index
for tracking the health of the sector.
And those are very useful for steering business decisions
based on macro trends, but they tend to be quite lagging.
And so this question of can real-time data
speed the time to insight and thus response, I think, is interesting.
So, you know, those are all more sort of future-looking,
but I'm very bullish on a world
where we're able to really holistically help users grow their businesses, well beyond payments,
but built on payments data.
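The macro-signal idea, real-time payments data as a faster CPI-style indicator, can be pictured as nothing more exotic than a normalized rolling index over daily volume. The function and the toy numbers below are invented:

```python
# Invented numbers illustrating a real-time activity index: a trailing average
# of daily payment volume, normalized so the first full window reads 100.

def rolling_index(volumes, window=7, base=100.0):
    """Trailing-`window` average of `volumes`, scaled so the first window equals `base`."""
    avgs = [sum(volumes[i - window:i]) / window for i in range(window, len(volumes) + 1)]
    return [round(base * a / avgs[0], 1) for a in avgs]

daily = [10, 11, 10, 12, 11, 10, 12, 13, 14, 15, 16, 15]  # made-up daily volumes
index = rolling_index(daily)  # rises as the trailing average climbs
```

The point of the "six months earlier" framing is that a series like this updates daily, while official statistics arrive with a lag and get revised.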
Rewinding all the way back to now, you are a year into the exploration.
How do you decide to invest beyond a one pizza team?
Like, does that happen organically in all of the product engineering teams you have?
Does it happen where you, like, at some cadence, look at the usage data and say, like,
oh, these? Top-down, bottoms-up, which of those things do you care about?
Like, do you need to restructure the org to make that happen?
Yeah, it's a great question. And I don't think we 100% have the answer of what's the right operating model, but we've been very conscious and iterative as we're learning. And so so far the answer is both. Like, you know, it's not one pizza team or two pizza teams. It's four of them today. And should it be six or should it be eight or should it be 10? And then in parallel, where can
we really support the vertical teams or the core product organization in adopting LLMs or
generative AI more broadly, directly?
There are a couple of examples, but at Stripe we're very focused on leveraging AI so that
non-technical folks that are our users can do things that they couldn't do before, and then also
so that technical folks can move an order of magnitude faster.
And there are some pretty obvious industry standard ways that we're finding LLMs can automate
the writing of code and accelerate information
retrieval, and we're building those both out of the existing vertical teams and out of the
accelerators. That makes sense. You mentioned, you know, I think it was at least six different
models you're using internally. How do you think about what models to use for what? And do you focus
on RAG, fine-tuning, open source, closed source, time to first token, inference? I'm sort of curious
like what that matrix of decisions is relative to specific use cases and how you ended up with
this sort of proliferation of models, because I feel like the more sophisticated people,
get, the more they tend to have this proliferation happen internally.
So we do have a proliferation of models, but we are not centrally, for example,
like within our ML infrastructure group, super prescriptive about what model individual
applications need to use.
So I talked about LLM Explorer and the presets and sort of that was back in March,
and we very quickly turned that into building an internal API for more programmatic use
of LLMs, right?
We wanted it to be equally easy
and safe for Stripes to build
production-grade systems and services.
There are 60 applications
built on that now, a bunch
internal, but also several
external, and I'm happy to
talk about a couple of them. That's what planted the seeds
for a lot of the product initiatives we're now
investing in more heavily.
We have default
models based on the use
cases, but we also give individual
teams agency to choose based on
cost considerations, latency considerations,
and so on. There's obviously, like, depending on the application, different performance
requirements. And then, you know, there also is this very real question of cost. So, you know,
we're running this infrastructure centrally. But we found that for the most expensive applications,
you know, we do bill them to the local teams. And so we work with them very closely to understand
what makes sense, given the economics of the product, the importance of quality at this stage,
how they're thinking about scaling, what the latency requirements are,
and so on.
We have heard that previously,
it was a little bit overwhelming
for individual teams to figure out
what model to use,
but also to go through
the enterprise agreement
and get the infrastructure
up and running.
So centralizing a lot of that,
I do think,
has sort of economies of scale.
But again, we're not prescriptive
and we do leave agency
to the individual applications
to make those tradeoffs.
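One way to picture "default models per use case, team-level agency to override, costs billed back to the team" is a small routing table plus a usage meter. The model names, prices, and team names below are placeholders, not Stripe's actual defaults:

```python
# Placeholder names, models, and prices; the pattern is the point: defaults per
# use case, per-team overrides, and usage billed back to the team.

DEFAULTS = {"summarization": "gpt-3.5", "code-gen": "gpt-4"}
OVERRIDES = {("support-ops", "summarization"): "small-local-model"}
COST_PER_1K = {"gpt-3.5": 0.002, "gpt-4": 0.06, "small-local-model": 0.0005}

usage = {}  # team -> accumulated cost, for internal chargeback

def route(team: str, use_case: str, tokens: int) -> str:
    """Pick the team's model for a use case and meter the cost to that team."""
    model = OVERRIDES.get((team, use_case), DEFAULTS[use_case])
    usage[team] = usage.get(team, 0.0) + COST_PER_1K[model] * tokens / 1000
    return model

model = route("support-ops", "summarization", 2000)  # override applies
```

Centralizing the table gives the economies of scale she mentions (one enterprise agreement, one integration) while the override map preserves team agency.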
What other infrastructure do you decide
to build centrally?
Right. So another Stripe told me that I should ask you about your internal experimentation and sort of testing infrastructure. And so love to hear about anything new you've built in order to, like, enable teams from, you know, your org or a central org.
Yeah. So, you know, I think it's always a combination of buy and build. And we recognize that there are a lot of great companies building a lot of great ML infrastructure and experimentation solutions.
and some of them are very pointed and some of them are very general.
And, you know, we stitch together where there's a clear external solution
and we build internally where we feel our need is more unique
or somehow very important and not currently satisfied by the market.
Our experimentation platform is one that we've built internally.
We run a lot of charge-level experiments, and latency and reliability requirements
for charge-level experiments are very, very high.
And so building and running that internally has been worthwhile.
But there are lots of cases, Flyte, Weights & Biases.
There's lots of third-party solutions that we lean on as well.
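For charge-level experiments where assignment has to be fast and deterministic, one standard trick (plausibly part of any such platform, though the details here are guesses, not a description of Stripe's system) is to hash the unit ID into an arm rather than look assignment up from a service:

```python
import hashlib

# Deterministic, service-free arm assignment for charge-level experiments:
# hashing (experiment, charge_id) costs microseconds, needs no lookup table,
# and gives the same answer on every retry. Illustrative, not Stripe's system.

def assign(charge_id: str, experiment: str, arms=("control", "treatment")) -> str:
    digest = hashlib.sha256(f"{experiment}:{charge_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]

arm = assign("ch_123", "retry_timing_v2")  # stable across calls
```

Salting the hash with the experiment name keeps assignments independent across concurrent experiments, which matters when you run many at once.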
When you think forward on the directions that the overall financial services industry is going,
and let's put Stripe aside for a second because I think Stripe is obviously a core company
to sort of the internet economy and it touches so many different pieces of fintech and things like that.
where do you think, outside of Stripe, the biggest white space for fintechs employing
AI is? Like, from a startup perspective or even an incumbent perspective, where do you think
this sort of technology will have the biggest impact? It's a great question, and I don't know
exactly what others will do. I think having a really robust understanding of identity, who businesses are,
what they're selling, has always been important.
And, you know, I think often in industry,
we think it's important for marketing or sales
or sort of go-to-market motions.
But it's also super important in fintech.
Yeah, it's important for credit lending decisions,
but it's also important for supportability decisions
and understanding where, you know,
the business does or does not meet the requirements
of a given card network or a given BIN sponsor.
And so I think that that identity piece, like who is this merchant, are they who they say they are, but also what are they? What's their business? What are they selling? And how does that map to this pretty complicated regulatory environment is a really interesting and hard problem that lots of folks are solving in their own ways, but is likely an opportunity.
I think there's almost certainly an opportunity to, you know, whether Stripe does it or somebody
else does it, to make sort of financial integrations way more seamless.
Stripe has a whole suite of no-code products, so you can use, you know, payment links or
no-code invoicing, but how does one actually build a really robust, specific-to-the-user
integration without needing, you know, a substantial number of payments engineers or any
complicated developer work? LLMs are proving that they can be very good at writing code.
We have a couple cases actually where we're already seeing it work, but as the decisions get
more and more complicated, I think there's still a lot of work to do to build the right
integration and to build it well in an automated way. And then I think, as I mentioned before,
some of this layer on top of the payments data, like, okay, you could build solutions that
make payments work better, but payments actually allows you to really deeply understand and improve
the business is pretty fascinating. And you'd have to think about, like, is it a startup
that does that or is it an incumbent that does that, and what's the business model there?
But, you know, if I think about the case of Stripe, you know,
Stripe has the opportunity to be beneficent, right? Incentives are super aligned. The more Stripe can
help its users' businesses grow, the more Stripe grows, and the more the economy grows. And so whether it's
Stripe or someone else using financial data to help businesses be more successful, to grow
the pie, to grow the GDP, I think is really powerful.
It's a really unique data set.
Is there something in that data, the obvious example to me that comes up is Radar, but
otherwise, like leveraging that data and giving it back to merchants in some useful way already.
Yeah, so Radar is a great example.
I think you also see it throughout our payments product.
So maybe the most salient to a consumer, like an end user, not our customer, but our customer's customer, would be something like the optimized checkout suite.
So it's this bundle of front-end payments optimizations.
And it's a lot of little things, honestly, like dynamically presenting payment methods in the order that are most relevant for the customer that really add up in terms of driving efficient checkout experiences for end users and in turn driving up revenue for our customers and growing the internet economy.
And less salient to the end user is this whole host of back-end payments optimization.
So, for example, we use ML to optimize authorization requests for issuers,
basically identifying the optimized retry messaging and routing combinations
to recover a big chunk of false declines, about 10%, so billions of dollars globally.
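The retry optimizations described here boil down to picking, from historical outcomes, the retry combination with the best expected recovery rate. A toy version over invented success rates:

```python
# Invented success rates for retrying insufficient-funds declines; the real
# systems learn these from historical outcomes at much finer granularity.

SUCCESS = {
    ("fri", 9): 0.41,   # e.g. after a common payday
    ("fri", 18): 0.38,
    ("mon", 9): 0.22,
    ("wed", 14): 0.17,
}

def best_retry_slot(rates: dict):
    """Pick the (day, hour) slot with the highest historical recovery rate."""
    return max(rates, key=rates.get)

slot = best_retry_slot(SUCCESS)
```

In practice the rates would be predicted per transaction from many features, not read from a fixed table, but the decision rule is the same argmax.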
And there are very similar applications across,
across a range of our products. So for example, for recurring charges in our billing product,
we use smart dunning to reduce declines. It actually reduces declines by about 30%. You basically
identify the optimal day and time to retry a payment for transactions that are declined, for example,
due to insufficient funds. It's not really easy to know at what day and time sufficient funds
will pop in. And the list goes on. You know, Stripe Radar, which you mentioned, you know, considers
a thousand characteristics of a transaction and figures out in less than 100 milliseconds if each
of the billions of legitimate payments made on Stripe can go through. And so, you know,
those are all payments or payments-adjacent optimizations. But conversion, auth, fraud,
and, though we don't really talk about it, cost optimization, that's another one, are all places where having
that scale of data allows us to create a better experience for the end user, create more revenue
for the business, and grow the economy. So I know that your background in like labor
economics has influenced both your career decisions, like joining Coursera and Stripe and your
approach to data science. Can you like talk a little bit more about like how you think that shapes
you as a leader or even Stripe's approach to like understanding like macro trends and macro data?
The through line in my career, both in academia and net industry, has been using data to understand
how individuals and firms make decisions and in particular to help those decisions be higher quality.
And so, you know, you mentioned labor economics. I've long been fascinated by who gets access
to opportunity and why.
So it started all the way back in college.
I met this playwright in New York.
She told me less than a fifth
of productions on U.S. stages
were written by women
and asked if I could help figure out why.
And as part of that, I ran an audit study.
So, you know, some excellent playwrights donated
four never-before-seen scripts.
I sent them out to hundreds of theaters
and asked them whether they wanted to put it on stage,
why or why not?
And I just varied the pen name.
Like, is this written by Mary Walker or Michael Walker?
And, you know, basically I found
that when purportedly written by a
woman, the exact same script was less likely to be produced. But more importantly, the theater
community cared. Like, they wanted the best plays in production. And so the study spurred awareness
and over time change. And today, half of productions on U.S. stages are written by women. And I think
that early experience showed me how powerful data, especially when you use kind of robust
econometrics and causal inference and actually are getting to the root of the drivers can be
in understanding and improving decision making. And it's why I pursued a
PhD in economics. It's what took me out of academia to Coursera. Coursera at the time had only
40 people, but it already showed the potential to dramatically expand access to world-class learning
and, done right, also downstream labor market opportunities. And that's also a lot of what led me
to Stripe. You know, well before me, Stripe was operating as kind of a beneficent player
in the ecosystem and has been very interested in genuinely helping
businesses on Stripe grow and using data to do that. And sometimes we help by guiding them
and sometimes we help by actually just building the product for them. But that's been kind of a
through line in my journey and a lot of what I love about Stripe. How much does Stripe think about
macroeconomics? So you have this amazing view into the global economy through all the commerce
transactions that are happening on your platform across so many different industries. How does that
data inform how Stripe thinks about different aspects of its business? So, for example, my sense
is that Google, through AdWords and other ad-related products, similarly has a pulse on where spending
is happening or not happening. If it looks like we're tipping into a recession, does that impact
hiring decisions or other things for them? I'm just sort of curious if similar things translate
for Stripe over time. Yes, certainly. We do get rich insight into where the economy's headed,
and we use it to guide our internal decision-making.
I think there's an interesting question we're exploring on,
is there a version of this that we can actually be providing to our users
to help them make decisions and help them grow?
The example of like the CPI or small business index
and can we get that in users' hands six months earlier
so that it's way more actionable is a really interesting question.
And honestly, we're early days there.
But I think as part of thinking about how might we become more of the economic operating system for our users,
it's not just the micro components of, you know, how do you price or how do you personalize?
It is also the macro components of how do you think about the ecosystem that you're operating in
and how can we help you operate more effectively given the macro trends that you're operating in.
That makes sense.
So you don't look at the two-pizza team and say, no pepperoni this month.
You know, she's only...
No, no, no, no.
I mean, I think you know John and Patrick pretty well.
Yeah, no, that would be whiplash.
Stripe is in a very fortunate position to really be in charge of our own destiny
and be able to take a very long-term view in choosing where and how to invest in the business.
And so, no, from the perspective of, like, do we add 10 people or 100 people or 1,000 people?
We're not micromanaging at that level.
That's much less driven, honestly, by the macro on average and much more driven by
where do we see opportunities to serve users, given what's happening.
And I mean, we can even talk about AI users, right?
AI users actually have, there's this whole wave of AI startups and they have fundamentally
different needs than a bunch of the waves of startups before them.
And so that actually begs the question of where do we invest more now to get ahead of those
needs, because we know there's demand.
Yeah, it makes a lot of sense. I guess the last area that we had sort of questions about, given your background and all the amazing things you've worked on over time, is, you know, you spend a lot of time at Coursera, which is really focused on how do you bring different forms of online learning and knowledge to the world. And one of the areas that a lot of people have talked about from a global equity perspective and AI and its impact is education. Yeah. And so we're really curious to get your thoughts on how you view AI impacting education, but also importantly, where will that first substantiate?
Is that a U.S.-based thing?
Is it certain countries or markets?
Is it K-12?
Is it college?
Is it post-college learning?
We're just a little bit curious how you think about, you know,
AI and education and where is it going to be most important in the short run versus long run?
I was at Coursera for about eight years.
I grew from an IC to leading the end-to-end data team.
And through that journey, I was increasingly motivated by building products that were only possible because of the data.
And the first places we started were in the obvious places.
Oh, you personalize discovery of content, you personalize the learning experience, you do more
to scale the teaching experience. But where we moved to relatively quickly was what you would think of
as less education and more labor market, which is how can we use education data to help learners
and companies measure and close skills gaps and get folks into the jobs that best fit their skill
profiles. And so, you know, that's not at all to downplay the opportunity that we have in
AI to make meaningful advances in how, you know, elementary school students learn and make
that learning really customized to them and make sure that there is high quality instruction
in lots of pockets of the world that wouldn't otherwise have it. But I also think there's
this important pull through to the labor market. And, you know, I'm a labor economist by
training. People get education for two reasons. They get it to develop skills, but they also get it to be
rewarded for those skills in the labor market. And that first piece is like how you develop skills,
the learning. And that's really important. And AI can definitely help. But the second piece is like
how you signal that learning out in the market, how you build a credential. And, you know, I did
some work with the World Economic Forum, and we were working a bunch on, hey, could we, with data,
make skills more the currency of the labor market. And Coursera substantially moved in that direction,
including in their enterprise product. And I hope many others will move that direction too,
not instead of, or as a substitute for using it in the learning experience, but just
recognizing that so much of what individuals need from education is that signaling, is that
credential. And I think the best way, most equitable, fairest way to do that is through skill
measurement. Great, that makes sense. As discussed earlier, Stripe has this amazing vantage
point into all these different online businesses and how they're evolving over time.
What are the differences between some of these AI-centric, sort of next-gen
companies that Stripe serves as customers versus what you've seen traditionally in the
e-commerce or SaaS or other areas?
It's a great question.
I mean, we've worked hand in hand with the builders of a bunch of different technology waves
to make sure they have the financial infrastructure they need.
Some of the earliest waves were marketplaces, infra platforms, social media; think kind of the
young DoorDash or Instacart or Postmates or Twilio.
And, you know, those grew up to become some of the largest companies today.
We've grown up with them.
There's also, as you noted, kind of the SaaS wave.
And the current wave is AI.
And in terms of the unique needs of AI startups, probably four notable differences
versus the prior waves.
You know, the first is just at a basic level: unlike a bunch of the past generations
of software startups, we're seeing AI startups have substantial compute costs right out of the
gate, and that's putting a bunch of pressure on them to build monetization engines faster.
The second thing we're seeing is a lot of these startups are seeing global demand for their
products straight out of the gate, right? They're making digital art or music or all sorts
of borderless things. And they want to get that across borders from day one. Third, I would
say, is a lot of subscription businesses. And obviously we see subscription businesses in a bunch
of different contexts, but especially the AI startups that are consumer facing are heavily skewed
toward subscription business models. And then I'd just say fourth, as a corollary of the first and maybe obvious:
because these startups are generally monetizing at a much earlier stage, they're in an interesting
spot where, you know, with very lean teams, they need to operate financially like very real
businesses, right? They need to grow up a little faster than they're sometimes ready. And so
we're seeing, you know, a bunch of adoption of our revenue and financial automation suite
to deal with those differences.
That's pretty amazing. It actually reminds me a little bit of the 70s.
The original vesting schedules were four years because companies would go public within four years.
And so that's where the four-year vest comes from for stock.
And so I think historically companies used to grow up really fast.
And then you look at the initial internet wave and Yahoo and eBay and a variety of companies
became profitable within a few years.
And so it feels like this AI wave is exhibiting a lot of the same characteristics,
and that may just be reflective or indicative of real product market fit,
and an enormous user demand that's almost pent up.
I feel like whenever you have one of those waves,
that's when you see this rapid monetization.
It's happening so fast.
I mean, we saw a massive spike in the number of generative AI companies on Stripe over the last year.
And a bunch of them were two-person teams you've likely never heard of (well, maybe you've
heard of them, but most people haven't) all the way to kind of hyper-scaling startups
with millions of users, like
Otter AI and Midjourney.
We were looking at the list of top 50 AI companies put out by Forbes last year, and noticed
over half were using Stripe.
And oftentimes with top startups, a bunch of them aren't even really monetizing
yet.
And it's striking what share of these AI companies are monetizing and monetizing early and
monetizing fast.
At the foundation layer, yes, OpenAI and Mistral, but also a bunch of companies at the
application layer, Moonbeam for writing assistance or Runway for video editing, which is pretty
remarkable. Emily, this is a great conversation. Thanks for doing it with us. Thank you so much for
having me. Yeah, thanks for joining. Very good to see you and amazing as usual. Find us on Twitter
at @NoPriorsPod. Subscribe to our YouTube channel if you want to see our faces, follow the show on Apple
Podcasts, Spotify, or wherever you listen. That way you get a new episode every week. And sign up for
emails or find transcripts for every episode at no-priors.com.