No Priors: Artificial Intelligence | Technology | Startups - How AI Agents Are Transforming Customer Support, with Decagon’s Jesse Zhang
Episode Date: January 16, 2025. Today on No Priors, co-founder and CEO of Decagon, Jesse Zhang, joins Elad to discuss the future of agentic customer support. Decagon provides AI-powered customer interactions for companies like Rippling, Notion, Duolingo, ClassPass, Substack, Vanta, Eventbrite, and more. Jesse shares the thesis behind starting Decagon, why he sees customer support as the ideal entry point for agentic technology, and what areas of AI excite him most. They also discuss voice-based interfaces, issues with latency in current capabilities, and the connection between young math olympiad communities and today's AI startups. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @TheJesseZhang Show Notes: 0:00 Introduction 0:30 Starting Decagon 3:15 Business impact of adopting agents for customer support and customer ops 8:00 AI infrastructure and models for customer success agents 12:05 Voice-based capabilities and text-to-speech engines 15:00 Combatting latency 16:25 Crossover of math and AI communities 21:12 Exciting areas of AI 25:29 Strengths and weaknesses of agents
Transcript
Hello and welcome to No Priors.
Today I'm talking to Jesse Zhang, co-founder of Decagon.
Decagon is an early stage company building enterprise grade generative AI for customer support.
Founded in August of 2023, their platform is already being used by large enterprises and
fast-growing startups like Rippling, Notion, Duolingo, ClassPass, Eventbrite, Vanta, and more.
Jesse, welcome to No Priors.
Of course. Thanks for having me, Elad.
Absolutely. Maybe we can start a little bit with sort of your background and what Decagon does.
You know, you're a serial founder. You started another company before this that was bought by Niantic.
And, you know, now you and Ashwin have started Decagon and you've been working on it for a while and have seen some really interesting adoption from companies like Rippling, Notion, Eventbrite, Vanta, Substack, and many others, right?
So you've really started to carve out a real space for the company.
Can you tell us a little bit more about what Decagon does, how it works, what the focus is of the company?
Of course, yeah.
So a quick background on me.
I grew up in Boulder, did a lot of math contests and stuff like that growing up, and studied CS at Harvard.
As you mentioned, started a company right out of school.
That company was eventually bought by Niantic, and then I left to start this company.
Ashwin and I met through mutual friends; we officially met at this VC offsite. And when we got together, we were like, okay, the biggest learning from the first company is that you can't really overthink things too much. We started by just kind of obviously
being interested in AI agents. It's very exciting technology, arguably like the coolest thing
from this generation. And we just, you know, talked to a bunch of customers, like the ones you
listed. We, I think, over the years have gotten a lot better at figuring out, you know, how to talk to folks and what questions to ask. And through that process, we kind of arrived at our current use case as maybe what we think is the golden use case for these AI agents, which is customer interactions, customer service. The use case is very tailor-made for what LLMs are good at. And so we started building from there, right? And we still weren't thinking too much about, you know, the vision or anything yet. It's just like, all right, we had a lot of customers in front of us. How can we make it so that they're happy and they really like what we're building? And then that led to kind of where we're at now. I would say right now, as a company, Decagon, we ship these AI agents for folks to use on the customer service,
customer experience side. The thing that's made us special so far is we have a huge sort of focus
on transparency, I guess. So when people use us, especially these larger companies, it's very
important for them that the AI agent is not a black box, that they feel like, okay, even though
LLMs are cool and, like, you know, there are a lot of things you can do with them, that they can see how decisions are being made, what data is being used, how it comes up with answers. And if I want to give feedback, I can, that sort of thing. So currently we're in production with a bunch of these larger folks that have large support teams; pretty much any company that has a sizable support operation is a good fit for us.
That makes sense. It's interesting, because I feel like one of the things that's been really striking over, say, the last year in the AI world is when
the CEO of Klarna posted on X or tweeted about the impact that AI has had on their customer
support or service team.
And Klarna is sort of like a buy now, pay later service out of Europe.
And, you know, his tweet basically said in the first four weeks, they handled 2.3 million customer service chats. The customer satisfaction was on par with humans. There was a 25% reduction in repeat inquiries relative to people. It resolved customer errands or issues in two minutes versus 11 minutes for a human agent. And instantly, they were live 24/7 in 23 markets and 35 languages, because the AI supports so many things. And so, you know, it had a huge impact on that company. And I think they sort of shifted 700 full-time agents to do other work, right, in terms of the impact on Klarna's own organization.
What sort of impact have you been seeing with your customers as they adopt this sort of technology? And how do you think through the lens of, you know, what you're really bringing to these customers and the sort of satisfaction that their own end users have?
It's an interesting way to think about it, which is, you know, all these people are shipping
this use case, right?
It's like, you know, there's a lot of evangelists out there, which is nice.
The Klarna article is awesome.
There's a lot of tailwinds for the industry.
And I think one interesting thing we've seen is that the benefits that people get are all
roughly in the same vein, but different people prioritize different things.
And so at this point, it's not really even that much of a hot take to say that, like, in a couple of years, these agents are going to be super pervasive. People can use them for all these customer interactions. They're going to be everywhere.
And so, to your point, like, what is the benefit? For our customers, it's always the same. It's, one, what fraction of the total work, in this case conversations, can the AI agent do? So how much work is this saving us? And then, two, how much happier are our customers, right? Like, what's the customer satisfaction score, the NPS score?
Those two are often just the leaders by far.
As I said before, different people maybe value each one slightly differently.
And then there's kind of other things like, okay, well, we want to make sure that there's accuracy, right?
Like, if we're in a regulated industry, you know, this has to be very accurate for us.
So those are kind of where the benefits lie.
It's like we're saving a bunch of money.
We're saving kind of time and resources.
But also, on the other side, we're kind of making the customers happier. And so that can lead to higher retention, more conversions, and there's kind of a lot more upside there. It's like you're giving every customer a personal concierge, basically,
in their pocket that they can chat with any time, any language 24-7, and that can be pretty
transformational for a lot of businesses.
Is there any example customer that you can talk about as a case study in terms of the impact this has had, how it's lifted their metrics, the success they've seen using Decagon?
Of course, yeah. So we just did a big case study with a company called Bilt Rewards. Great use case for us. They have a very large user base, growing very quickly. Users are actually using it to either earn points or make payments. A lot of my friends use the product.
And then as a result, because you have a large customer base, like people have questions,
people will have things that they need help on. And so the number of support inquiries basically grows linearly with the number of users. And because they're growing so fast, basically exponentially, that means the number of support queries is also growing exponentially. So when they first started using us, that was, like, the main goal. It's like, holy crap, we're getting overwhelmed by all this volume, can AI help here? And so the thing that ended up happening there was, yeah, within basically a month of starting to use us, they were able to stop scaling their team, and the AI would take over a lot of the automation. It just makes everything very smooth. And then now, basically, we're almost a year in at this point. They've been able to really restructure their customer support team. And again, we published a case study on this where they were able to quantify, like, okay, what are the savings, right? And so far, it's around 65 agents of just, like, headcount saved. So very tangible difference. And then for us, it's also great
because we're able to, you know, provide them the value there.
It's a very easy ROI, but the customer experience is also a lot snappier. And, you know, they get a lot of social media posts, like, holy crap, I just tried, you know, Bilt Rewards' support thing, and it doesn't feel like any sort of AI or chatbot system we've ever used before.
So that makes us happy.
Could you tell me a little bit more about what you've built from a technology and infrastructure perspective? So I guess there are the core models that anybody can access, right? The GPT-4os of the world, the Claude Sonnets, etc.
And then there's all the stuff you've built on top of it
to actually make this work well
for your specific use case
and for customer support agents.
Could you tell us a bit more
about what you all have had to build over time?
Of course.
Like you said, everyone has the same access
to the same models.
We see ourselves very much as a software company.
And we're obviously doing a lot of work around AI
and using the AI models a lot.
But I would argue that most applications nowadays, they're real software companies, and AI models are kind of tools that everyone can use.
And so most of the sort of alpha or most of the special stuff that you build is on top of the models.
It's either the orchestration layer or the software around it.
For us, there's been a big focus on both.
The orchestration layer is kind of how you can use all these different models together.
You probably have evals set up that measure how good each model is at certain things.
You put them together, and the whole goal of putting them together is to mold it around the business logic of the customer.
That's part one.
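To make the orchestration idea concrete, here's a minimal sketch of routing each conversation step to whichever model scored best on an offline eval, wrapped in the customer's business logic. The model names, eval scores, and rules are hypothetical placeholders, not Decagon's actual stack.

```python
# Hypothetical sketch: pick models per step based on offline eval scores,
# and mold the flow around the customer's business logic.

# Offline eval results: step type -> (model name, score on our eval set).
# Both the models and the numbers here are made up for illustration.
EVAL_LEADERBOARD = {
    "classify_intent": ("small-fast-model", 0.94),
    "draft_reply":     ("large-model",      0.91),
    "check_policy":    ("large-model",      0.97),
}

def call_model(model: str, prompt: str) -> str:
    """Placeholder for a real LLM API call."""
    return f"[{model} output for: {prompt[:40]}]"

def handle_message(message: str, rules: dict) -> str:
    # Step 1: classify intent with the best-scoring model for that task.
    model, _score = EVAL_LEADERBOARD["classify_intent"]
    intent = call_model(model, f"Classify the intent of: {message}")

    # Step 2: business logic gates the flow, e.g. refunds always escalate.
    if rules.get("escalate_refunds") and "refund" in message.lower():
        return "ESCALATE_TO_HUMAN"

    # Step 3: draft a reply, then check it against policy before sending.
    draft = call_model(
        EVAL_LEADERBOARD["draft_reply"][0],
        f"Intent: {intent}. Reply to: {message}",
    )
    verdict = call_model(
        EVAL_LEADERBOARD["check_policy"][0],
        f"Does this reply violate policy? {draft}",
    )
    return draft if "violation" not in verdict.lower() else "ESCALATE_TO_HUMAN"

print(handle_message("Where is my order?", {"escalate_refunds": True}))
```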
The other thing you build is just very classic software, right?
You have this AI agent there.
It's all the things I was saying before.
Like, you know, transparency is a big piece. You really don't want this to feel like a black box that's just sitting there answering questions.
And so how can you build all the tooling to see, like, okay, what's the data that the agent's using?
What steps is it taking?
Can I analyze all these conversations that are coming in?
If you have a million conversations, it's like, okay, no one's reading all those.
So how can you make it so that the AI, the LLM can read every single conversation, tell you how stuff's going, find gaps in the knowledge, give you a breakdown of like, okay, here are the big categories you should care about.
There's like a bit of a trend here that's been interesting.
So that's all the software around it that we're building.
And that's typically how it's structured.
And the orchestration layer, I think it's going to be different for every agent, right?
Like our agent versus a coding agent, that orchestration is going to look pretty different. But at the end of the day, you're kind of just building a sort of structure on top of the LLMs.
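As a rough sketch of the "have the LLM read every conversation" idea: a stubbed-out tagging call plus simple aggregation to surface the big categories and knowledge gaps. The tagging schema is an assumption for illustration, not a real API.

```python
from collections import Counter

def tag_conversation(transcript: str) -> dict:
    """Stub for an LLM call that tags one conversation.
    A real version would prompt a model to return structured output:
    the topic, whether it was resolved, and any question the knowledge
    base couldn't answer. Hardcoded here so the sketch runs."""
    return {"topic": "billing", "resolved": True, "knowledge_gap": None}

def analyze(conversations: list[str]) -> None:
    topics: Counter = Counter()
    gaps: Counter = Counter()
    for transcript in conversations:
        tags = tag_conversation(transcript)
        topics[tags["topic"]] += 1          # big categories to care about
        if tags["knowledge_gap"]:           # gaps in the knowledge base
            gaps[tags["knowledge_gap"]] += 1
    print("Top categories:", topics.most_common(5))
    print("Top knowledge gaps:", gaps.most_common(5))

analyze(["Customer: my invoice is wrong...", "Customer: reset my password..."])
```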
Yeah. It seems like we're very early in the days of true agentic stuff.
And that includes the ability to sequence chains of events; it includes certain forms of reasoning.
Obviously, there are things like o1 and other things that have been coming out to start to try and address this,
but we seem quite early in the scaling curves.
What do you think are the main pieces of technology that are missing
to really take you, or your sort of vision, to the next level
in terms of how these agentic systems should work?
Yeah, so one thing we were talking about the other day is there are actually different types of intelligence with the AI models, and a lot of the recent developments with o1 or Sonnet and stuff like that have been around, I guess, quantitative reasoning intelligence. So they've gotten better at coding, they've gotten better at math. And for us, actually, those things help, but they're not the biggest difference maker. In our use case, the type of intelligence that matters the most, we would probably describe as instruction following. You just have a bunch of instructions: can you follow them to a T? I'm sure there are other types as well, but for us, we're excited to see developments in the other areas too. Everyone's saying, oh, there's a plateau happening with, you know, the core models and the intelligence. I think when most people say intelligence like that, they're probably talking about the reasoning capabilities. For us, and the agentic flows that we use, instruction following is a huge piece. Just think about, like, a customer service SOP or a playbook or a workflow or something like that: you just have to be very accurate about it.
And so I know there's research going on about this in the major labs.
And I think that's one thing we're looking forward to next year.
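To give a feel for what "follow it to a T" means, here's a toy, hypothetical SOP encoded as explicit instructions, with a tiny check of whether a reply obeyed them. This is illustrative only, not Decagon's prompt format.

```python
# A made-up support SOP expressed as explicit, step-by-step instructions.
SOP_PROMPT = """You are a support agent. Follow these steps exactly:
1. Verify the message email matches the account email on file.
2. If the order shipped fewer than 3 days ago, ask the customer to wait.
3. If it shipped 3 or more days ago, say you have filed a carrier trace.
4. Never promise a refund; escalate any refund request to a human.

Account email: {account_email}
Message email: {claimed_email}
Days since shipment: {days}
Customer message: {message}
"""

def followed_shipping_rule(reply: str, days: int) -> bool:
    """Toy instruction-following eval: did the reply obey steps 2-3?"""
    if days < 3:
        return "wait" in reply.lower()
    return "trace" in reply.lower()

# Example: a model reply for a 5-day-old shipment must mention the trace.
print(followed_shipping_rule("I've filed a carrier trace for you.", days=5))
```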
One other area that it seems like really touches on customer success and customer support
and sort of user experience is also voice-based support.
And I think one of the things that's a little bit under discussed in the AI world,
because we keep talking about large language models and understanding of text.
And obviously that stuff is crucial to everything else.
But I feel like we almost under-discuss text-to-speech engines and the ability to understand spoken word and then respond with audio, right?
And so there are companies like Cartesia, ElevenLabs, OpenAI, Google, et cetera, who are starting to provide some of these services and APIs. How much of an impact does that have on what you're doing, or is that a separate type of product?
Or, you know, how do you think about the voice component of these things?
Great question.
Huge impact.
So we have customers now trying our voice agents.
And if you just think about our space, right, the overall problem is the same, which is you have a bunch of customers. They have, you know, questions or issues or things they need to talk about.
And the channel really doesn't matter for them.
It's like some people prefer voice.
Some people prefer chat.
Some people prefer email.
Some people prefer SMS or something like that.
And so our job is to handle all of those.
And obviously you start with text, because that's the easier one.
It's like easy to evaluate for the customer as well.
I think just now you're getting to the point where you have big companies that are very
interested in voice and like actually they've seen the results of a text-based agent and
they're like, okay, well, yeah, we should be able to generate voices and do the
same thing for phone calls.
None of this would be possible without the models that you just listed, right, and those companies. So, like, ElevenLabs, OpenAI is doing some cool stuff, Cartesia.
And I think there's also been huge strides this year with those models around like how
realistic the voices sound.
Also, latency matters a lot in our use case, because if you're making a phone call, you expect things to feel very snappy. So yeah, big, big topic for us. And as these companies get better, I mean, we're working with them pretty closely right now on how you can actually, like, build these things well at scale. But as they get better, that's also going to be huge for us to keep delivering these voice agents.
It makes sense.
Yeah, my sense is one of the issues is latency, in terms of, it takes enough time to take an audio stream of somebody talking, translate it into text, feed that into a language model, and then output it as voice again, that it feels like there are a lot of pauses, or people have to kind of wait. And there are different things that people have been trying to do in the background, like streaming partial responses back out, to, you know, try and shorten that latency timeline. Do you feel latency is still an issue, or is it just
solved by integrating voice directly into the models in a deeper way for some of these services?
Or when do you think latency becomes a solved problem for these sorts of application areas?
Latency is a big deal here, of course, with voice models. So nowadays, you have the voice-to-voice models that we're playing around with; OpenAI is doing a lot of work here. There are obviously a lot of trade-offs there. Voice-to-voice, latency is great. Sometimes, though, with these production use cases, you do need the extra computation cycles to, you know, fetch data, do multiple model calls. Or there might be other reasons that you can't do voice-to-voice. And so, okay,
that's one option that you would consider. The other one is the one you described where you're
kind of going through and transcribing, or doing speech-to-text, and then doing all the
computation within text and then generating the voice at the end. That always causes a little
bit of extra latency, of course. And so as you mentioned, a lot of folks have figured out
fairly clever ways to get around that. You can start generating stuff first. In our use
case, you can always do something like, hey, give me a sec. I'm looking up your data. So these
are all things we're playing around with. I think for each customer that we work with, there's
different tradeoffs. And so we're really trying to base what we build on the things that we're
hearing from them and, you know, the sort of priorities that they have.
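Here's a minimal sketch of the speech-to-text, LLM, text-to-speech pipeline with the "give me a sec" filler trick Jesse describes. The stage latencies are simulated placeholders, and the functions stand in for real vendor APIs.

```python
import asyncio

# Stand-ins for real STT / LLM / TTS vendor calls, with made-up latencies.
async def speech_to_text(audio: bytes) -> str:
    await asyncio.sleep(0.3)
    return "where is my order?"

async def llm_reply(text: str) -> str:
    await asyncio.sleep(1.2)  # the slow step: data fetches, model calls
    return "Your order ships tomorrow."

async def text_to_speech(text: str) -> bytes:
    await asyncio.sleep(0.3)
    return b"<audio>"

async def handle_turn(audio: bytes) -> bytes:
    text = await speech_to_text(audio)
    # Start the slow LLM step, but don't leave dead air on the call:
    # if it takes longer than ~0.5s, play a filler line while we wait.
    reply_task = asyncio.create_task(llm_reply(text))
    try:
        reply = await asyncio.wait_for(asyncio.shield(reply_task), timeout=0.5)
    except asyncio.TimeoutError:
        await text_to_speech("Give me a sec, I'm looking that up.")
        reply = await reply_task
    return await text_to_speech(reply)

asyncio.run(handle_turn(b"<caller audio>"))
```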
That's cool. One thing that I think is kind of interesting is the number of companies in the
AI world today that have been founded by people with Math Olympiad or IOI or other sorts of backgrounds, right? And I think you were sort of involved with Math Olympiad stuff in high school.
I think Decagon has actually hosted some Math Olympiad events for the team, which isn't like your typical happy hour.
But there's other teams and companies.
I mean, before that, there was Ramp and things like that. But I think the Braintrust team and the Pika team and Cognition, which launched Devin.
And then you all kind of have that common thread.
Where do you think that comes from?
Like, why do you think this community is now so active in AI?
That's a good question.
I mean, yeah, we're actually all around the same age as well.
So we've known each other since, like, you know, middle school, high school.
One, it's a great community.
For us, yeah, we have a lot of people on the team with math contests, coding contest backgrounds.
I think it's more so that this community was always there.
Math contests have been around for a while, and a lot of super smart kids go through that.
It's also a great way for folks to kind of, you know, get to know each other and get connected
and, you know, build friendships.
And I think the main thing is that now, in the last few years, maybe the last, you know, five, six years, because startups have become a lot more mainstream, a lot of folks in this demographic have gravitated towards startups, as opposed to, you know, traditionally it'd be either academia or quant trading and things like that. So there's just been a big influx of these super smart, super talented people coming into the startup world. And because there's
this community aspect, you know, folks can see what other people are doing and like, you know,
what sort of works and types of companies that people are building.
And, you know, which isn't to say they're all the same, but I think a lot of folks with these backgrounds are now kind of working on startups. And that's why there's a lot of, you know, I guess
progress in the companies that folks have been building. And are there other ways that you all
have been sort of supporting each other through the startup journey? Because I feel like every
generation there's sort of a clique of people who built some of the more interesting companies
who all kind of interact. They provide advice. Maybe they angel invest in each other. Like there's
kind of a thriving community. And every five to seven years, it kind of shifts who it is. And I feel
like, you know, the IOI sort of Math Olympiad community or coding competition communities are
kind of very engaged right now. Is there any formal version of that, or are you all just kind of
informally helping each other? Yeah, I mean, I angel invest in a lot of the companies you just listed.
A lot of their founders are angel investors in our company. It's very informal, obviously. It's just
like, you know, casual friends helping each other. I think the main thing is that with company building,
There's just a lot of surface area, right? As you know, it's just like, how do you hire people? How do you build this thing? How do you structure, like, comp? I don't know.
There's infinite things.
So, yeah, having the other data points is obviously super helpful.
So I hang out with them quite often, play games, play card games.
It's a Chinese version of bridge that I play with a lot of these folks quite often.
And it's just fun, where you just kind of hang out. Everyone's kind of at relatively the same stage of life. And so, yeah, like you said, there is definitely
a lot of camaraderie and help that goes around.
Has coming from this background, from the sort of Math Olympiad community, impacted at all how you think about hiring or your hiring practices at Decagon?
A little. I mean, if someone else has the same background and, you know, has gone through the same contests or programs, obviously that's a pretty good signal, since I have a good idea of what those people have done. My co-founder, Ashwin, also has a similar background. He didn't grow up in the U.S., but in India, and he did a lot of these contests as well. And so, yeah, I think
there's some correlation with people who just as kids just did a lot of this stuff. And then,
you know, now we're all adults. And, you know, there's some sort of, you know, signal there when
you're talking about hiring. But for the most part, there are so many talented people here in SF, whether you did math contests or not, you know, at Decagon and other companies, that I think our hiring process has been more or less the same. It is like a nice sort of trigger
for events, I guess. So when you host these events, you know, when people come out, you can get
a nice community of folks that are interested in the same things. And we're going to be hosting
probably more. And some, not all, of them are going to be, like, contest-based, obviously, like, you know, puzzles and things like that, where you just get a lot of fun engineers and, you know, people bringing their friends, and that's pretty important to us.
And then I guess, for AI writ large, what are you most excited about in the coming years? Or if you were to extrapolate out 12 to 24 months, what are you anticipating most keenly, or what are you waiting for?
So obviously the models getting better is awesome. The models getting better across different modalities, also awesome. We talked about voice; there are also the other modalities, which are also tangentially interesting to us, right? So you talk about, a lot of our customers, you know, have software products.
And so it'd be awesome if, you know, you're asking questions to the AI agent and it has, like, context of your entire screen and all the interactions you've done and stuff like that; that would be great. And you can even go a step further and have it actually, like, help you navigate stuff.
So there's just so much you can do there where you talk about the other modalities or even
just more advanced model capabilities.
Like we've seen the computer use demo from Anthropic.
Probably, in my opinion, not production ready yet.
But as that gets better, there's a lot of cool things you can do there.
So on the model side, that's one thing we're excited about.
On the not-core-model side, I think one thesis we have is, as, you know, the years go by, again, with AI agents, I think at this point it's undeniable that there's going to be a reasonable explosion of them, where they're used on a bunch of different use cases.
I think some use cases will take longer than others, but the value that they're providing is pretty undeniable.
So there's definitely going to be a lot of AI agents out in the world, you know, in our use case, customer service and other use cases.
But one thesis we have is that the nature of the work of, you know, human agents and, you know, people like us is also going to change pretty drastically.
And one of the things that's going to change is that there are going to be a lot more people supervising and editing agents. And so that's something we think about. We're excited for a lot of the sort of innovations there, because right now, a big part of it, like I said before, is we care about letting, you know, the human agents for our customers and their leadership teams go in and make changes and, like, monitor the agents and just have a lot of, like, visibility and control.
And what does that look like, right?
Like, if you compare it to a human, if you're monitoring a human, you can give them feedback in real time, you can be like, oh, no, no, don't do this.
Like, you did this thing wrong.
Like, please do this next time.
When you're doing that with an AI agent, there are a lot of different possibilities, because, you know, they have some things that are different than humans, right?
They're infinitely scalable.
And, like, you can hard-code things sometimes. So that's the other area, probably going into next year, that we're looking forward to.
That's really cool.
And do you view that as a main area of differentiation for you relative to some of the other folks in the market providing customer success and support tooling?
Yeah, right now, that's probably the biggest thing.
The interesting thing about our space, and I think this will probably be true for a lot of AI agent spaces, is that the results are very quantifiable. You're basically taking the agent and benchmarking it against, okay, how good would a human be? How much money is this saving me? How much better quality is the customer experience? And so because
of that, when people evaluate us in our space, it's a pretty quantitative evaluation. You're like, okay, cool, this kind of works. Let me just put you out into production for 1% of the volume and build up from there, and maybe do that with another option. Or, you know, a lot of the old-school companies like Salesforce, this is a very exciting space for them too, so they're going to have alternatives. And then you just benchmark everyone, right? How good are the stats, how good are the metrics, how good of a job is everyone doing? And I think so far we've been performing very well, and the main reason for that is this sort of transparency piece, you know, giving people observability, explainability, and control over the AI. And there's still a long way to go in that field, right? There's so much more you could do. And that's been our specialty so far.
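That "1% of the volume" idea is essentially deterministic traffic splitting. Here's a hypothetical sketch of how a buyer might route a fixed fraction of conversations to the AI agent and ramp it up; the identifiers are made up.

```python
import hashlib

def routed_to_ai(conversation_id: str, rollout_pct: float) -> bool:
    """Deterministically send a fixed fraction of conversations to the
    AI agent, so its metrics can be benchmarked against the human
    baseline on the remaining traffic."""
    digest = hashlib.sha256(conversation_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return bucket < rollout_pct

# Start at 1% of volume; raise the percentage as the metrics hold up.
print(routed_to_ai("conv-12345", rollout_pct=1.0))
```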
That's great. And I've had some conversations with your customers over time, as people who have been trying some of these agents have called me to ask questions about different companies in the space and everything else. And the three things they tend to point out are: you all ship really fast; you're very responsive as a team and company; but third and most importantly, the product just tends to outperform. And so I think that's really been, you know, great to watch over time.
How do you think about the areas where AI agents are going to be successful versus not successful in the short run?
So basically, one of the things that we have been thinking through, and this is something that was pretty big for us when we were first starting out, is that there's going to be a huge variance between, like, the different types of AI agents, how successful they'll be, and, like, how quickly they'll roll out.
Because when we were first starting the company, right, like, we were pretty open to what to build, and we knew that AI agents were exciting. At that point, we didn't even know if there would be any, like, real use cases that would emerge even in the next 12 or 24 months.
But we're kind of exploring.
I think our view is that for the vast majority of use cases right now, there's not going to be real commercial adoption with the state of the current models, because of a bunch of things. So one big thing is that in a lot of spaces, there's really no structure there to, like, incrementally build up. It has to be good, like, almost perfect, off the bat. So if you think about a space like, you know, security or something like that: okay, you have all these SIEMs out there, and it makes sense, there are tons of logs, that's perfect for AI models. But the goal of that job is you need to catch any small thing that happens. And because the models are inherently non-deterministic, it's very hard for buyers to really trust a gen AI solution there, and especially an agentic solution. So I think adoption there is going to be really, really slow, a lot slower than people think, even though people have cool demos and things seem to work; just getting real enterprise adoption can be very slow. So that's, like, one interesting thing we've been thinking about.
And the other side of that is that there are also a lot of spaces where, you know, on the
surface it seems like, oh, AI agents will be perfect here.
But then the sort of follow up is that it's actually not that easy to quantify the ROI
that's happening.
One example I would give of this is, you know, there are a lot of, like, text-to-SQL companies, stuff like that, where you could kind of see it working. Basically, immediately everyone's reaction is, oh, this is cool, but, you know, we're still going to have to have someone, like, monitoring it and, you know, editing it. And so it becomes kind of a co-pilot. Okay, cool. So then, like, how do we measure how much we should pay for one of these agents? It's very difficult, because, like, most teams don't have that many data scientists anyway. And so if you're claiming that you have an AI agent data scientist, it's like, okay, let's benchmark you against a real one. You're probably not going to be able to replace a real one. So I think that's the sort of thing where it's very hard to quantify the ROI. Like, you're saving some people time, but because of that, you know, if I'm a large company, it's hard for me to justify giving you a large contract for this, like, AI agent data scientist. So those are the things that we were thinking through. We weren't thinking it through like that in the moment; in the moment, we were obviously just, you know, asking customers what their willingness to invest in certain things was. But in hindsight, looking back on the last year, that's been a big thing that's been true, which is that the use cases that emerge have to have those two qualities. It has to be something that can be rolled out slowly and doesn't have to be perfect off the bat but is already providing value, right? Like, I think coding agents are a good example of this, where you're going to just, like, section off some tasks for them and they'll do it. And the other piece is the ROI. You have to really be able to easily quantify
the ROI. In our case, luckily, you have these support agent teams and, you know, people track
metrics very closely. So that's something we've been thinking about. I think the takeaway from that is, like, yeah, we're probably more bearish on a lot of these AI agent use cases in the near term. But I think that as the models get better, they'll unlock a lot of new use cases.
Super interesting. Jesse, thank you so much for joining us today.
Thanks, Elad. Thanks for hosting. It's great seeing you.
Find us on Twitter at @NoPriorsPod.
Subscribe to our YouTube channel if you want to see our faces.
Follow the show on Apple Podcasts, Spotify, or wherever you listen.
That way you get a new episode every week.
And sign up for emails or find transcripts for every episode at no-priors.com.