Experts of Experience - How A 2-Week AI Agent Launch Enabled 67% Productivity Gains
Episode Date: September 24, 2025

Description / Show Notes
What if your company could launch its first AI agent in just two weeks? That's exactly what Mollie Bodensteiner, SVP of Operations at Engine, accomplished — and the results are game-changing: $2M in projected annual savings, a 67% lift in sales rep productivity, and CSAT scores above 90%. Mollie shares the real story of implementing AI at scale, why ruthless prioritization matters, how to avoid the "Frankenstack" trap, and why AI should be seen as a growth enabler, not a cost-cutting exercise. Whether you're leading a small team or scaling globally, Mollie's practical playbook will help you cut through the noise, drive adoption, and build AI solutions that stick. Tune in to hear how Engine's agile approach turned imagination into execution — and why trust, people, and culture are still the ultimate differentiators.

Key Moments:
00:00 AI Philosophy & Common Challenges
02:44 Ruthless Prioritization and AI Rollout
08:04 Mollie Bodensteiner's Background and Engine's AI Journey
14:40 AI Implementation and Customer Experience Impact
28:07 AI Agents in Sales and Coaching
34:48 AI in Professional Training and Education
38:06 Human-AI Collaboration and Adoption Challenges
48:34 Ensuring AI Quality and Risk Management
51:45 Choosing and Evaluating AI Tools
01:00:54 Underhyped AI Applications
01:03:00 Lightning Round

Are your teams facing growing demands? Join CX leaders transforming their AI strategy with Agentforce. Start achieving your ambitious goals. Visit salesforce.com/agentforce

Mission.org is a media studio producing content alongside world-class clients. Learn more at mission.org

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Transcript
The philosophy with AI is assume everything's going to change.
We're all obsessed with OPEX and headcount reduction and cost savings,
but that's not the point of what we're trying to do here with AI, right?
The real value is how do you make your existing people more effective?
You guys launched your first AI agent in less than two weeks.
Since then, you've reduced average handle time by 15%.
You're on track to save $2 million a year from this.
And you've increased sales rep efficiency by 67%.
The things that I would have assumed were going to be impossible earlier
in my career or even a year ago, now possible in a few clicks.
Leadership teams that are going to thrive are the ones that continue to ask those types of
questions. Why can't I do something different?
CSAT scores are above 90% right now, which is extremely high for this industry.
I think it's simple. It's stop planning and start doing, right? Let's be honest. In six months
of planning, the tech is going to change 10 times over that your planning is going to continue to
be wrong, right? AI is a growth enabler, not a cost-cutting exercise. So,
we've got to change that perspective.
A mic drop. That was amazing.
Even with the best AI and tech in the world, if people don't trust it, you're not going
to get the adoption.
So, Rose, we've been on several meetings recently where we have, like, what I want to call
like an AI brainstorming session.
Totally.
Where we're with the whole team.
And our team's pretty small, for those listening who don't know.
Mission.org is a pretty small team.
So we've got like five or six voices on a call.
and we're just brainstorming, oh, we could use AI to do this.
We could make an AI agent that does that.
Oh, we need to document our entire production workflow and figure out where we can
automate things and use AI to solve all these problems.
And we just create like a list of a hundred different ideas.
And then we don't do any of them.
Yeah.
Or we are all, all of us are like, okay, I'm going to test this.
You test this.
They're going to test this.
And then we all get excited about conflicting tools, or we all get excited about one particular tool, but it starts to not feel like it meshes with the tasks that we would need it to execute. Yeah. I mean, there's this mix of this
problem, right, where we've got this huge list of ideas that almost paralyzes us. And then we've got
distraction from all these new tools that are popping up that it's like, oh, go try this, go try this, go try
that. And I think there's a place in every organization to be testing out new tools, looking for new
vendors to work with, all kinds of stuff like that. But I see so many businesses just get stuck
in this freeze mode with the AI conversation because of this brainstorming problem of like
my imagination opens the door to every possible idea. And yet that just kind of stops us from doing
anything, which is why I loved our conversation with Mollie Bodensteiner today, who is the SVP of
operations at Engine. She said something that I'm going to like paint
on my wall, which is ruthless prioritization. So picking one thing and sticking with it ruthlessly,
which I know, like, again, Rose, we're a small company. So like, if this is a problem for us
at the tiny size that we are, I can't imagine how much of a brainstorming imagination
problem there is with companies that are 100 people, 1,000 people, 10,000 employees, 50,000
employees. Like, there's got to be so many ideas. How long that task list probably gets, and how overwhelming that would feel, I can't imagine. Yeah. And if you're someone who's trying to spearhead, let's say, an agentic AI solution, and you're like, oh, this is what we're going to do. Like what Mollie did, she started with cancellations. So, you know, Engine, Engine.com, is a work travel company. They help you book travel for work and monitor it and see how spending's going and all that kind of stuff. So if you're like, I want to build an agentic AI tool for cancellations. But then you've got a thousand other employees that are like, oh, but you could do this, and you could add this on there.
You could add that on there. Mollie said, no, shush. We're not doing any of those things yet.
We're just going to do this one tool. And I feel like that set the tone for like our entire
conversation with her about how to think about AI implementation. And she just gave so many
great tips of like the first tip of is ruthless prioritization of just like really honing in on
that one thing that you're going to do. And their ability to do that meant that they were able to
move really quick. And they rolled out their first agentic AI solution in just 14 days.
That is so crazy. I feel like we've had so many conversations that are super cool, super high level, but they're not as practical. It feels more open-ended. It feels like something that's not really a closed loop. It feels like this theory, this idea. But what Mollie brings to the table is, like, question, answer; problem, solution. It's really refreshing in terms of the whole AI conversation.
because it, like you said, it can just feel vast and scary and confusing.
Yeah.
I mean, there are so many opportunities.
So it's kind of like, which use case do you start with?
Right.
How do you decide which one to say yes to, which one to say no to?
Okay, now we've rolled out this first step.
How soon afterwards am I adding on new use cases?
Okay.
Oh, we made this, you know, agentic AI solution.
Do I build right on top of that one?
Or should I, you know, figure out a modular solution to this where like we're building
in parallel to these different solutions rather than everything being combined into one giant
solution. So there's a lot of different ways that you can think about your agentic AI rollout.
And Mollie basically was like, no, don't do that. Don't do this. Don't do that. Do this. Here's my
mistake that I made. Here's the failure that we had. Here's the success that we had. These are the
results that we had. And it's just like a true playbook and masterclass in how to think about rolling out
AI. Like I just loved her opinions. I love the way she explained everything.
and she was just so clear and precise with what to do, what to act on, what not to act on.
I totally agree. We talked so much about change management, how not to curate a Frankenstack,
which was a term that I learned today. Yeah, I mean, I think it can be really hard as a business to
decide which tool to use, right? So that Frankenstack word is this idea that, like, you've got
20 different tools trying to like band-aid solutions together. So as a company, how do you avoid this
problem, especially as tools roll out all the time. How do you avoid the issue of we've now
invested in 25 different AI solutions and now they're all like not working together super
effectively? So she talks a lot about like choosing the right partner for AI implementation,
how to identify vendors, when to bet on things, when not to bet on things. And just ultimately,
yeah, like how to think about building a tech stack in today's AI world when things are
constantly changing. There was something she mentioned about how tech is table stakes, right?
So, like, most of these solutions are 95% similar.
So it's not so much tech that's going to differentiate you as a company.
It's actually the people in your company.
Like, do I trust you as a partner?
Do I want to work with you?
Do I like you?
Do we get along?
Would I grab a beer with you after our conversation, right?
Totally.
So I thought that was a really interesting point about how, like, as technology becomes
more similar to each other, as AI makes it easier for us to develop solutions, it's
not so much about the solution itself that I care about.
it's about the people that I get to work with.
Well, Lacey, I feel like we should probably introduce ourselves.
I'm your host, Lacey Peace, and you're listening to Experts of Experience.
And I'm Rose Shocker.
I produce Experts of Experience.
And you're about to tune into an episode with Mollie Bodensteiner, the SVP of Operations at Engine.
But before you do, please hit that like, that subscribe button, drop me a comment.
Let me know who you want to hear on the show, what questions I should be asking.
Let us know what brands have been impressing you lately.
like let us know who's impressing you, what customer experiences you've had recently as a consumer
that you think we should shout out. And without further ado, here is Mollie Bodensteiner.
Mollie, welcome to Experts of Experience. Thanks so much, Lacey, for having me today.
Yeah, I'm excited. Before we get into your background and what engine is doing and all the
amazing things I know we're going to cover today, I want to tease our audience a little bit with
a few stats that I think might blow their mind. The first one is that you guys launched your first
AI agent in less than two weeks. Since then, you've reduced average handle time by 15%.
You're on track to save $2 million a year from this. And you've increased sales rep efficiency by
67%. Those are some amazing numbers. I can't wait to dive into all of that. But Mollie, I want to
hear from you really quick, just like an intro to you. So we've teased our audience on what we're
going to cover. Who are you? Yeah, absolutely. No, I'm excited. And, like, as you went through those stats, the thing that just jumped out in my mind is, like, we're just getting started too. So, like, we have done this in such a short period of time, and we're continuing to keep innovating, and it's been fun. So I'm Mollie Bodensteiner. I'm the SVP of Operations at Engine, and so really accountable for, you know, all things people, process, data, and technology across this org, which has made this shift in digital transformation and AI so much fun, right? And, again, we're just getting started. That's awesome. And Mollie, you've been at Engine for eight months. Is that
correct? Less than a year? Yep. Less than a year. What did it look like when you came in?
Because I feel like you basically just came in and hit the ground running to, like, accomplish all
these goals that we just covered. Yeah. So coming in, the thing that I appreciate the most about
Engine is like there has been this appetite for digital transformation and innovation. So I wasn't
coming in and being like the change agent per se. But I think just being fortunate enough to come into a team
that was curious about this and then a company that was also supportive of really like starting
to push the boundaries of what we could and should be doing from a tech perspective has made this just fun, right? And, you know, we talked about the agent in 14 days, right? It's not because we had this pressure from senior leadership to launch an agent in 14 days; it was really, how do we put the right focus and really start to, like, test what we can do and
see what happens versus I think you see a lot of companies that are like, yeah, we're going to build this
huge digital transformation and AI strategy, but they're not actually getting anything done. So I think
just the culture at engine of being able to like move quick, innovate, learn fast has definitely
helped. And then the other thing is just where we're at in the evolution, right? Like, basic automation to today's revenue intelligence platforms, and how much just changes, I'd say daily, in this space. Just the things that I would have assumed were going to be impossible earlier in my career, or even a year ago, right, are now possible in a few clicks. That's amazing. That's amazing. So speaking of your early career, could you just guide us through a little bit of your background? I know that you, like, live, breathe, and love RevOps, but talk me through how you got to that point. Yeah. So I, you know, I've been in
rev ops since before it was cool, right? I'd say I truly started my, I mean, that needs to be a T-shirt.
Yeah. I need to get you that T-shirt. And I laugh because.
Like, I go back to, like, my internship in college, right, was working on Access databases in a sales organization.
And one of the things that I did was figure out, like, how do I centralize RFP answers into an Access database so that I don't have to keep populating RFPs manually, right?
So, like, it really, you know, was revenue technology, but, like, it came from, like, just the curiosity of, like, how do I make things better?
How do I make things more efficient?
And how do I use, like, technology to, like, optimize processes?
and improve experiences.
And from there, that just opened up more opportunities
to move into CRM and like market,
you know, I say marketing automation platforms.
Back then, it was like really email service providers, right?
We weren't even doing marketing automation, not to age myself,
but like just really gravitated towards like this revenue technology
that helps not only make go to market more efficient,
but like more importantly,
it like enhances the customer experience and the customer journey.
And I think like that's been like the big hook that's got me into operations.
is like, how can I actually bring, again, together the people, the process, the data and the technology to, like, have direct impact on customer experiences, whether those are internal customers and stakeholders or external ones.
Yeah. What I love about what you're sharing about your background is you sort of come in and you're like, I don't, everything's being done this way, but what if we could change that and do it differently? And I feel like that might be sort of your modality throughout your career is just like, these are the assumptions we're making and how can we make it better?
and how can that impact the customer in a better way?
And I feel like the companies that are going to thrive,
leadership teams that are going to thrive
are the ones that continue to ask those types of questions.
Like, why do I have to fill in this RFP manually over and over and over again?
Why can't I do something different?
And I think a lot of the innovations we've been seeing in the space lately
are the result of that first question of like, why is it this way?
Can't it be that way?
So finding, you know, this company engine that has that culture
that's mirrored back to you is really cool.
So could you tell us what is Engine?
If I go to Engine.com, what am I going to find for those who've never heard of the company?
Yeah, absolutely.
So Engine is a business travel platform, right?
So we're working with primarily SMB businesses on just helping them book flights, hotels, and rental cars for business travel.
We have over a million travelers right now.
And we're supporting, you know, companies of all sizes.
But what really sets us apart is like no contracts, right?
no membership fees, no agent-assist fees. So we're really just helping businesses save time and
money. And, you know, the fun thing about Engine, you know, since 2018, we've averaged 70%
year-over-year growth. We've scaled to over $2 billion in valuation and are at almost a thousand employees. And so it's been a really fun, just, growth experience, not only from the
company perspective, but then you're overlaying like this efficiency and digital transformation on
top of it. Yeah. Oh, man. That's so cool. So kind of a smaller company compared to some of the
large enterprises we've had the opportunity to speak to on the show, but by no means a small
business, right? So how many countries or not countries, I guess countries or states are you
operating in? Is it just in the U.S.? Or are you around the world? So we're located in the U.S.,
but we serve clients all over the world. Yeah. Wow. And with that, how many users did you say
you guys have? We have over a million travelers. Wow. Okay. So you
you mentioned this less than two weeks, this 14-day rollout of your first agentic AI agent.
What did that look like? When was this? This was late last year, right?
Yep. Yeah, late last year. So towards the end of the year, last year, we rolled out Eva, or Ava, depending on who you talk to; the phonetics change. But it's really Engine's virtual assistant,
right? And so one of the things that we did was we looked at our customer success side of
things and said, you know, what, what's our highest leverage opportunity for automation, right? And
what came from the data was hotel cancellations. We had 300 requests daily, which is a lot of
operating overhead, right? So as we looked at our AI strategy and like what was possible here,
we decided to not take the broad strategy and instead like put all of our resources just into a
single use case and a single workflow and build off of that. And so it might not have been, like, our sexiest problem, right? Or even the most technically interesting. But when we look at the volume, right, 300 cases daily, it's costing our team a ton in productivity, as well as just a customer experience that can be simplified much further, right? If I know that I need to cancel a hotel, you know, as a consumer, I'd love to just go talk to a chatbot and get it done, versus having to pick up the phone and call, or send an email and wait for a response. So what we did is we really ran, like, a two-week sprint on this, right? And the way that this was possible
is, and transparently, we worked with a partner. So we worked with a trusted Salesforce partner, especially as, you know, Agentforce was still relatively new on the market at that point. So making sure. Very new at that point. I mean, it had been out for what, like a month at that point? I think that they announced it at Dreamforce last year. Yeah. We were one of, you know, the first companies to just be fully operational, like GA, public, on Agentforce. And so using a trusted partner was super key. But the other part of this is we ruthlessly prioritized. It was, you know, one of those things where, as you start seeing what you can do, it's like, oh, we could do this. We could do this. We could do this. But we kept going back to, this is the scope. This is the scope. This is the scope. To really make sure that we could focus and build the right thing that
then we can build on top of and we have built on top of, but like taking that single use case and
getting that out the door. And I think that that's where a lot of companies I see get stuck. And not even just companies, right? I even see it internally with the team. It's like, oh, we get into this,
like we can keep doing and doing and doing and adding and adding, which is great. But if you don't
ever get like the first thing out the door, you're not learning and you're not figuring out what I'm
going to talk about is like where it's going to fail. So you can start to course correct. Yeah, it's such a
problem. I mean, I've seen in our own company even. We've got a small team and we're doing media
production, right? But I was talking to a gentleman that I would like to work with to help us
produce more automations and maybe, you know, some AI agents that can support what we do.
And I was like, here's an idea. Here's an idea. Here's an idea. You know, like, there's a hundred
different ways that we could do this. And I do think there's still like a little bit of overhype
where, you know, he's like, actually that idea sounds easy, but it's actually not. And so like
sorting through all these different potential use cases is.
so difficult even in our little team that I can't imagine how difficult that is for a team of
almost a thousand employees, like what you guys have, or even a larger enterprise. How do you kind of pause the brainstorming, or at least make it, I don't know, structured in some way, so that you can identify: this is the thing we really want to focus on, and this is the thing we're going to, you know... what did you, how did you phrase it? You said ruthlessly go after... Ruthlessly prioritized, right. And, like, if we wouldn't have done that. I love that.
We'd probably still be building Eva, right?
Like, because we'd keep adding and keep adding and keep adding.
And, like, you know, it's one of those things where it's like good versus good enough, right?
And so, like, really focused on, like, the good enough that we can learn and iterate without sacrificing, obviously, the customer experience.
But, like, getting something out the door to learn is going to move us significantly faster than sitting and trying to live in this world of, like, perfection.
Because realistically, with agents, there's no such thing as perfect.
That's so true. Yeah, I do want to get into that a little bit more. But starting out with this first tool that you guys made, the cancellations, right? I as a customer can go in and cancel and Eva would handle it for me. What was that initial feedback reception like that you got maybe in December or January, you know, the couple months after it had been launched? Yeah, absolutely. So, you know, one of the interesting things is like our actual CSATs from Eva's interactions were higher than like traditional chat, right? Because like agents were,
or customers were actually getting faster response times, right, and like smoother handoffs.
And, you know, one of the things that I think we did really well and have done really well
is like we designed with a customer experience in mind. But like we're up front with customers.
You're interacting, you're interacting with AI, right? Like, I mean, let's be honest. As consumers,
we already kind of know that anyway in most situations, but like being transparent. And like building
that trust, I think was things that we learned through those like feedback loops and like the
customer knows what to expect. But the other big thing that we continue to learn is like where
limitations are, right? And like, how do we make sure we're setting the guardrails of like our
design to make sure Eva knows when to escalate, right? And, like, building off of that, we don't let her struggle through complex scenarios. We make sure she knows the right handoff. But then within that handoff, we have all of that context shift over so that the customer
doesn't have to repeat themselves. And we're not sacrificing, you know, the customer experience
based on like the limitation of the technology. And I think that that was a big, a big learning for
us as we looked at, you know, the CSAT scores and like what was happening. It's like, is Eva really
handing off at the right time and managing this the right way for the experience so that it's frictionless
for not only our customers, but also for the agents that pick it up.
Yeah.
Oh, no, that makes total sense.
We spoke about that in the pod a little bit, is how important it is that if the AI
agent can't handle it, that I don't have to repeat anything that I said.
And it can be handed off really smoothly to a human agent that can do this for me.
So I love that you guys really early on were solving for that and noticed that as a problem.
And I don't think you would have. If you weren't moving at the pace that you did, and being as ruthless with what you were focusing your attention on as you had, you may not have gotten that feedback so early on and solved for it so early on.
So when you guys launched this cancellation tool, after that, what did you do?
You kind of evaluated, how is this working?
When did you start layering on, oh, this is the next function we're going to add.
This is the next way we can service our customers.
Yeah, absolutely.
So I think from there, what we started to look at were like, what are the different topics
that we want to try to cover, right?
So, like, we've now taken this, the single topic here, you know, cancellations.
And it's like, okay, how do we continue to iterate off this?
And, like, one of the things I'd say, like, we learned the hard way in this was, like,
we then had these, like, 15 separate topics that we wanted to take back to, like,
scale on those pieces.
Like, each of them were really focused on, like, narrow tasks, like a reservation change
or, like, an inquiry on a car rental.
And what we learned is, like, that's actually not how our customers think, right?
They don't think in these, like, isolated steps. And so by trying to start to roll those out, Eva really struggled with, like, that
context understanding and like being able to properly route that way. So what we did was we then
took, you know, those 15 steps and we cut those in half. And like that actually changed Eva's
performance by just being a lot smarter on like the design behind how we talked about things and like
going into now more of, like, what is our customers' experience? How do they approach these? How do they talk about those components as we started to look at modifications of reservations, or adding somebody to a room, and those pieces of things?
The other items that we started to build in was like contextual awareness of where the user was
in the experience, right?
So if you're sitting on the sign-in page, like Eva is looking at different things than
if you're sitting on a property, right?
So, like, understanding where you're at in the experience. So if you're on the forgot-password page, it's like, hey, are you having trouble logging in? And, like, we've built in more proactive, yeah, proactive outreach, right, from Eva to try to generate that support. But also looking at, like, okay, you're looking at these properties, what do we know about Lacey? Right? We know Lacey likes five-star properties in New York City, right? Like, how do we help take your historical shopping behavior and help apply that to your current search, right? And, like, helping build that experience based on the data that we have on you. And again, I'm going to say, in a non-creepy way, because I do think you've got to be mindful of, like, we all know. We expect, right? As consumers, like, I expect when I call somewhere, you know my order history. You know what I've done. Like, I don't want to repeat myself, and those types of things. And I want you to use it in the right way, but, like, again, not in, like, the creepy way. Yeah. Yeah. No, I like that personalization, because it's almost like I will have a different experience of Eva, or Ava, however you
want to say it, than someone else using Engine, right? Because it's been kind of curated and optimized
for me. And I want that. I mean, I think about my ChatGPT instance whenever I'm communicating with ChatGPT. And then I look over at my husband's instance, and I'm like, the way ChatGPT talks to
my husband is completely different than how it talks to me. And so it's really interesting. And I
love that idea of bringing that level of personalization into these chatbot interactions that you
might have with different companies. So I think that's really cool. Yeah. And we, you know,
we work with travelers, right? So like who are the business travelers, but also who are the
administrators, right? And so the experience is very different for the person who is, who is traveling
versus the one who's actually managing kind of like the back end operations for the organization.
So being able to identify, again, based on their role and their usage, like, what problem are they likely looking to solve helps us be more proactive in how we engage.
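[A minimal sketch of what the context-aware, proactive outreach described above could look like, purely as an illustration: the page a traveler is on, plus a few known preferences, drives which opening message an assistant like Eva might lead with. The field names, the rules, and the suggest_opener function are assumptions for the example, not Engine's actual implementation, which would presumably pair this kind of routing with an LLM and real profile data.]

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PageContext:
    """Where the user is right now, plus what we already know about them (hypothetical fields)."""
    page: str                               # e.g. "forgot_password", "property_detail", "sign_in"
    preferred_star_rating: Optional[int] = None
    frequent_city: Optional[str] = None

def suggest_opener(ctx: PageContext) -> str:
    """Pick a proactive opening message based on page context and shopping history."""
    if ctx.page == "forgot_password":
        return "Are you having trouble logging in? I can help you reset your password."
    if ctx.page == "property_detail" and ctx.preferred_star_rating:
        city = f" in {ctx.frequent_city}" if ctx.frequent_city else ""
        return f"Want me to pull up {ctx.preferred_star_rating}-star options{city} like your past stays?"
    if ctx.page == "sign_in":
        return "Welcome back! Ready to pick up where you left off?"
    return "Hi! Let me know if you need help with a booking."

# Example: a traveler who historically books five-star properties in New York City
print(suggest_opener(PageContext(page="property_detail",
                                 preferred_star_rating=5,
                                 frequent_city="New York City")))
```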
Yeah, that's great. So fast forward now almost, almost a year. I mean, I'm, like, skipping ahead a little bit; we're almost a year in since the first AI agent was launched. What are you seeing results-wise? I teased it a little bit at the beginning of our episode, but tell me, I want to hear from you. Like, what's the reception like from customers? What progress has the organization had? Yeah, just initial results from these different levels of rollout. Yeah, absolutely. So on the Eva side, right, we've seen that we're pacing towards close to $2 million in savings annually. We've reduced average handle time for cases by 15%. We've improved productivity of our customer service reps by 10%. And, really, CSAT scores are above 90% right now, which is, you know, extremely high for this industry. Usually benchmarks are around 83%.
Yeah. So just, again, continuing to see great progress. And as we add additional use cases,
like, we're seeing more savings, right? And the part of this is like it's, you know, I think when
people talk about like savings of AI, you naturally assume like you're reducing headcount and
you're, you know, cutting jobs. Like, we're not, right? We look at this as like AI plus human components,
but like, you know, back to like not our sexiest problem, but like the one that was the most
operationally time consuming and expensive and a friction point in the customer experience,
has freed up our actual support team to work on higher-value customer inquiries and
needs and allowing us to like be more thoughtful there versus again the 300 cases of just
canceling a flight or canceling a hotel. And you think about job satisfaction and those components.
It's just a no-brainer.
Yeah.
Yeah.
I mean, you can feel it as a customer, too, when you call into a center or you're chatting
even online with a human agent, right, if they're burned out, if they're not.
And being able to remove some of the stuff that's sort of like clutter with these AI agents
does allow that interaction between customer and employee to be so much stronger, so much better.
So have you guys seen the job satisfaction scores go up?
Yeah, absolutely.
Like, our internal, what is it, eNPS, has continued to improve.
Attrition is lower, right?
People are less likely to leave because they have more fulfillment too.
Wow.
So it's been all in all like a really good, good opportunity for us.
And again, as you think about career pathing and growth, right, you're giving more time for
cross training.
You're giving more time to do more of that higher value work for the business and for the customer.
Moving past the service use case, we talked a little bit about sales as well. Could you talk to me about how you're using these tools with your sales reps in addition to
your customer service reps? Absolutely. So at the same time, we were kicking off Eva on the support
side. We started initiatives on our sales side, specifically focused on our new business outbound sales.
And so a couple of the things that we did there were building agents for prospecting. And then we
built some rep coaching tools, as well as call prep and follow-up. And so this, as we've continued to grow, as we talked about, you know, reaching a thousand employees, we've been increasing
our sales headcount, I think, by 400% this year. So we're bringing a ton of new sales reps in.
And, you know, one of the big bets we made as a business is like, how can we use AI to improve
rep productivity? And in Q1, we were able to get a 67% lift in rep productivity. Being able to really streamline, you know, the amount of time reps were spending on research and prep versus the time they spent actually in front of customers. And a lot of that was driven based on
our prospecting agent, which really worked to surface, like, who are the most relevant accounts
that we should be going after? How are they scored? How are they prioritized? Let's source the
right personas that we should talk to from that and actually serve that directly to the reps with the
suggested personalization for outreach. One of the big things that, like, I feel really passionate
about here is, in the world of, like, AI SDRs and all of that, we all see that and we feel that, and we kind of hate it and it's cringey, right? What we're doing is not replacing human judgment, right? Like, we're augmenting it with better data and faster insights so that reps can be more productive in the decisions that they make and the actions that they take, versus trying to automate the rep's job. Yeah. Yeah. We spoke with a woman, it might have been a year ago, on one of our other podcasts, where they're making, like, an AI person,
like an AI rep that you can actually talk to and like how I'm talking to you, we could engage.
And while I don't want to say whether or not like that's the future where we're going,
I hesitate with it because it just doesn't feel like super, I don't know, like there's just
like this uncanny valley there of like, I don't know if I actually want to talk to you.
You're just AI.
There's like no emotional connection.
It is interesting that there are a lot of solutions focused on this of like how do we
replace reps when I just don't think we're there yet.
I think we're still very much in this place of how do we augment people because it's still,
you know, business will always be personal.
It'll always be this person-to-person interaction.
And there's a trust aspect to it, right?
And like, you don't, I think so much of that still is trust.
Yeah.
100%.
I mean, to that point, right, it's like if this AI tool has been optimized to sell to me, right?
Like the whole tool has been designed to sell to me, then I inherently feel like I can't trust
what it's telling me.
versus like a human where they're like, actually, you know, hey, this thing, maybe this isn't
best fit for you or like genuinely, I'd be like, oh, I actually trust you now because you're
being super honest with me. And I don't know that the AI would be programmed to be that honest,
right? So yeah, I think there is just like this gap there. But to be able to take these tools
and support the sales rep, I think is so amazing and impressive. Because so much time is just
spent on, like, recording the call notes or looking up someone's information. And I know personally, as someone who has sat on the other side of sales demos, I actually don't want to see the 50-slide deck you have. I want the, like, 10 slides that are relevant to me. So if AI can help you understand what's relevant to me, then I'm all for that. Absolutely. And again, I think we expect that, right? Now, as consumers, we want that experience, and we've now elevated our buying experience. So as businesses and sales teams, if you're not elevating your selling experience, you're going to miss that mark, right? And if I can save reps from having to go and re-input data from a call into the CRM, because the CRM knows, because it listened to the call, right? Like, that's time savings where the rep can now spend more meaningful interactions on writing the right follow-up and making sure we're delivering on the action items that came from that call, versus the back-office tasks that, you know, are still valuable and need to be done,
but, like, can be done with the support of technology. As we've been hiring, you know, all of these new reps coming in, right, being able to onboard them efficiently, right? So being able to use call coaching and call scoring, and helping streamline even manager productivity and how we help support new hires and identify, again, gaps earlier on, just sets them up for more success too. So, what I wanted to ask you about was this, like, coaching, the way that
you're using AI for coaching. Can you talk a little bit more about exactly what that means?
So if someone's on a call, then they get like an AI generated report that's sort of like,
hey, here's some feedback of how you did. Yeah, we've got a couple different mechanisms for coaching.
So one of which is like actually having more like a simulated kind of call experience, right?
So if I'm trying to sell to Lacey, like, I understand Lacey is, you know, she's my ICP,
she's my buyer at this company. Like I can go do a simulated role play with Lacey.
right, using CRM and like real data, but getting reps just comfortable in having those conversations,
right? It's kind of, you know, the same way you'd role play with, you know, during an onboarding
process with a manager or somebody else, like they're doing it now with AI. And like then getting
a report back of that, right? Like, hey, you didn't have a value-based opener, or you missed, you know, BANT qualification, or whatever comes back from that. So we have that, more of that
like onboarding coaching and like skill refinement. Then additionally within like our call recording,
we have scorecards set up. And we have AI scorecards based on, you know, discovery calls to demo
calls to kickoffs that are looking at, like, are we, you know, and again, I hate being like, are they following the script? Because it's not a script, right? But, like, are they hitting the right... Yeah. Yeah. ...key details here. And then looking at that scoring, but also managers are expected to score calls too, and understanding, like, how are managers scoring, right? Do we have the right expectations across teams and how we're looking at this? But then also being able to report back to reps, like, hey, here's where you should focus and work, and how are we seeing that increase and measuring that over time too. I love that role-play scenario, because I think it would allow people to have so much
more practice, you know. Like, I can just keep practicing, I can keep getting better at it, and I can kind of do it in my own private scenario, versus having to, like, try and fail, try and fail in front
of a live person or my manager. I mean, there's a lot of different applications for that besides
just sales. Like I think about my mom works in contracting and she's trying to get her
unlimited warrant. And part of that process is, you know, going through like an interview process of
how would you handle X, Y, Z scenario? How cool would it be if you had an AI tool that you could just
practice those different scenarios with? Or there's so many educational applications of that as
well with school. Like I'm becoming a lawyer. How would I handle X, Y Z scenario versus just being
a written thing that I answer can be an actual back and forth engagement, seeing how I respond in
real time. I really think that's cool. And I feel like we're just at the tip of the iceberg of how
this can be applied across the organization. And I think about, you know, like SDRs, right?
Like if you're an SDR, like, what's one thing you have to be really, really good at? And it's like
rejection, right? And it's, you know, it's probably so bad to say, but it's, like, blatant rejection on a phone call, right?
And so, like, how do you just start to get really comfortable with that kind of thing?
And it's, you know, a lot of that is, like, it's skill-based training and, like, just going through those scenarios.
And the thing that I like about, like, more of the AI prompt and, like, the role play is, like, you don't know what you're going to get, right?
So it's not like, oh, I'm going to, like, keep using the same thing and it's going to be the same outcome each time.
And I'm just going to get really good at just, like, the same thing.
It's like, we have it where you might get just a total wildcard response, or you get somebody who's more aggressive, or, you know, you get somebody who's going to
challenge you. So, like, outcomes are always different. And, like, that is what they're going to
interact with day in and day out. So just getting them really comfortable there.
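[To make the "wildcard response" idea concrete, here is a small, hypothetical sketch of a role-play simulator that randomizes the simulated buyer's temperament on each run and then scores the rep's opener afterwards. The persona list, the value-based-opener and BANT-style checks, and the function names are illustrative assumptions, not Engine's actual coaching tooling.]

```python
import random

# Hypothetical temperaments the simulated prospect can adopt on any given run,
# so the rep never rehearses against the same buyer twice.
TEMPERAMENTS = ["friendly", "rushed", "aggressive", "skeptical", "total wildcard"]
OBJECTIONS = ["We already have a travel tool.", "Send me an email.", "Why are you calling me?"]

def start_roleplay(seed=None):
    """Pick a random persona for this practice call."""
    rng = random.Random(seed)
    return {"temperament": rng.choice(TEMPERAMENTS), "opening_objection": rng.choice(OBJECTIONS)}

def score_opener(transcript: str) -> dict:
    """Toy scorecard: did the rep use a value-based opener and ask basic BANT-style questions?"""
    text = transcript.lower()
    return {
        "value_based_opener": any(k in text for k in ("save", "time", "money", "value")),
        "budget_question": "budget" in text,
        "authority_question": any(k in text for k in ("decision", "who else")),
        "need_question": any(k in text for k in ("challenge", "problem", "today")),
        "timeline_question": any(k in text for k in ("when", "timeline")),
    }

# Example run: generate a persona, then score a practice opener against it.
print(start_roleplay())
print(score_opener("Hi! Teams like yours usually save hours on booking. What challenges do you have today?"))
```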
100%. Yeah. Yeah. I mean, even from a customer service rep position, that makes full sense.
Like, if I'm a customer service agent: someone comes on the phone and they are hot and they are angry, how do I handle that? Someone comes on the phone, they're crying, how do I handle that? Like, there are definitely a lot of different applications for it. So,
Mollie, you have implemented several different layers now.
There's several different use cases of agentic AI in the organization.
Your team has implemented that.
Your leadership has said, yes, we want to do this.
There are a lot of companies that are not as quick as you guys have been.
You know, they might have heard about agentic AI.
Maybe they were at Dreamforce, right?
And they saw the rollout.
But they are still not yet in implementation phase.
They don't even, they don't have results, right?
What advice do you have for the leaders that are going through what you guys went through
about a year ago. Yeah, I mean, I think it's simple. It's like stop planning and start doing, right?
Like at the end of the day, like just don't worry about it being perfect and a perfect strategy.
Like, worry about just picking one real problem and solving that super well, right? Like for us,
you know, it was setting that, like, 14-day deadline, right? And, like, shipping something
that works and then just measuring everything and iterating. Like you're going to learn more if you say like,
okay, 30-day deadline. We're going to get through this. Like, it's this use case. We're going to put a tiger team together and knock it out. You're going to learn more in 30 days of like true real implementation than you're going to in six months of planning. And let's be honest, in six months of planning, the tech is going to change 10 times over that like your planning is going to continue to be wrong. Right. And like more things are going to come out. And like you're just never going to get anywhere because you're going to be stuck on just trying to get somewhere. Yeah. That's so true.
That's so true. So Mollie, getting into the people side of this, you mentioned earlier,
it's human plus AI. And I love echoing that. I think it's so true. What was it like initially
with your actual teams that were using this tech and implementing it out the gate? Like this,
you know, in November, December, what were your team's responses initially? Were people like super
eager? Was there a mix? Was there fear? And now that you guys have sort of like rolled this out,
what does, how is the team responding? Yeah, absolutely. I think, you know, one of my big, I'm laughing because I think one of my biggest lessons on this is I had it all wrong on how adoption would go, right? And I think, you know, we had really... Yeah, and I'll do... Yeah. So on the service side, adoption went really well, right? Like, they're very eager. Like, I think the value prop of, wow, I don't have to deal with these tickets anymore, like, this makes tons of sense. We, you know, have Copilot, we rolled out copilots, and just the efficiency gains in their day-to-day were so obvious for them that it was like, this is great, I'm leaning in, this is awesome, right? On the sales side, though, which, like, my assumption was they were going to be the more, let's go, this is awesome, willing-to-adopt group, it was very much, I wouldn't say resistance, it was almost like, oh, that's nice, I'm going to keep doing what I'm doing kind of thing, right? And, like, I think part of it was,
like, it wasn't, it wasn't optional on the service side, right? Like, it was required and it was just
like built into how, how the experience works. And on the sales side, it was more, like, it was
optional, right? Like, do you want to use these tools? Do you, like, the call recording and all that
stuff, like, and the onboarding stuff, like, yeah, they had to use that. But, like, the prospecting
features and, like, some more of, like, those types of co-pilots, like, weren't, we weren't saying
you have to use this, right? And, like, it's part of your job. Like, we kind of, like, let them
decide what they wanted to do. And so I think a couple of the things that helped kind of,
like, I'm going to say, like, turn the ship around on this that I learned is you have to,
like, build transparency in this a little bit, right? And, like, with the sales team, there were,
there were kind of two, twofold lessons that I learned. One of which is, like, we have to show
people exactly like how the AI works right to the point that like it was like actually opening up
the prompt and going through the prompt with the sales team right like and it wasn't just like hey
trust me like this is what it's doing it's like this is exactly what it's doing and it's you know
one of those things like I don't actually want to know how the sausage is made but like this is a
scenario where they didn't need to know like how the sausage was made to build their trust and
adopting it right and so we went through like the reasoning they understood like we showed them like
hey, if it's a bad output, right? And it doesn't have this confidence rating. Like, we don't give it to you, right? But, like, we're only giving you the things that we're confident in. And so that was a big piece that we really had to, like, hit home on like, how does this work? And like, what is this doing and why? And I think some of that was just like the trust that we understood like what they were doing to. Right. So like somebody coming in and being like, hey, Lacey, like I can help you do your job better. But I don't do your job is a little bit of like, uh, like, uh, like,
a little bit of a, hey, let's try this approach again, too. So, like, we also had to make it obvious from day one, right? Like, 20 minutes learning a new tool versus saving five minutes, that adoption curve had to be a lot faster. Versus, if I can save you two hours of research in five minutes, adoption will be immediate, even if maybe it's not as good as they think it's going to be, because their quicker ROI was there. So we did do some refactoring of our scoring, and how quick and how easy it was, and where we had it run automatically versus where they could manually run it. So, like, lesson one was showing, not telling, and just continuing there.
Lesson two was: find people with influence and let them be your advocates for this. Um, you know, I do not have influence, right? I am not an influencer, and that is very clearly a weakness that I fully own. I am not going to rally the troops. And so it was really, like, look for who that salesperson is, that influential person who can help. And the good news is, most salespeople are influential because they're sellers, so get them to really help to advocate for you. I think one of the things that we fail at is we tend to pick the top seller as the person that you put as that influence. They're going to be your worst adopters, because they already know what works and what they're doing is working for them. So getting them to try to change, they will be your resistance people more often than not. So, like, don't go directly to the top, right? Look in that middle. And it's usually that person who people naturally trust and go to for advice. They, like, have genuine curiosity. They're willing to experiment, right? But they tend to be your mid-level performers who just want to get better too, but not the ones that are your top performers. You're not going to rock their boat, because they are doing okay right now.
And so I think for us it was, like, find those people, get them in early, right? Give them some level of incentive, too, to get in early. This is one of the things that I think is really important: hey, we're going to try something, it may or may not work, and if it doesn't work, we're not going to punish you for trying, right? So, like, figure out if there's some quota relief or something like that you can do there, because, at the end of the day, you don't know, right? Especially while you're moving quick and trying to find out. So I think those were really big things to drive for. Other things that we've now done is, like, we shout out the wins a lot more, too. Like, okay, great, we built this efficiency, but we're still talking about, like, hey, this was an AI-sourced account that closed, like, way to go, and look what they learned. And, like, when we talk about how we improve velocity and where that's demonstrated, we know, based on the productivity savings, that it's there, but we're continuously iterating on that to keep driving adoption. Yeah. Yeah. That makes a lot of
sense too that you're sharing sort of the story of the win, not just the stat. Because if I hear
X number of productivity gains,
like, as an employee, I might be like, cool, what does that mean?
But to actually have a place where I'm seeing, like, this account, oh, I know that logo closed from this thing.
Like, that is the only way that it's going to, like, concretely sit in people's brains that this actually works and this is doing something that I might be interested in and a tool that I actually want to work into my workflows.
I love what you shared about sellers being, like, way more ultra-critical of, like, the tool and wanting to understand it.
Because it makes sense.
It makes sense, though, that they'd be like, no, like, I'm about to get on the phone with someone.
I need to trust that, like, this is actually the thing they care about, because I don't want to flop on the phone and say something that's completely wrong.
So it makes sense that they want the under the hood look at how this research is being done and what the prompt is.
But I imagine that that's not as easy, though, as just showing it to them, right? Because you could pull up an AI agent right now and you could share it with me.
And if I don't have the knowledge and understanding of how it works, I would still not trust
it because I'd be like, I still don't get what you're doing.
So was there like a level of education you had to do besides just showing under the hood how it
worked?
But actually like, here's what an AI agent is.
Yep.
Here's how prompt engineering works.
Like what did that education process look like for your team?
Yeah, absolutely.
And I laugh and smile because like it is continuous education, right?
And, like, we are constantly doing this too, because things keep refining and keep changing and evolving here. And, you know, you start with, what is a prompt? And I chuckled when we were talking about this because of the fact that one of the things our team does as part of our sales motion is they go to a company's website, right? And they're looking for key indicators that this company might travel. And so that is what they're taught to do as they're prospecting. Well, we have an agent that, like, goes
and does that automatically, right?
So, like, we know what the keywords are.
We've sat in the motion.
Like, we have the agent go, look at the company website and get back, like, a score.
And then the deductive reasoning of, like, why do we think company ABC travels?
Well, here's what we see in the job description.
Here's what we see, like, on social media about them hosting these events, right?
And, like, it's very prescriptive, like, coming back.
And so I still laugh because, like, it was the let's-put-the-person-against-the-robot type of thing, because they were still going to the website. And I was like, why are you still going to the website? Like, what don't you trust about what's coming back from the AI? And it was like, I just, I need to do it myself, you know. And it was like, we had to get to the point of, yeah, do it yourself and come back, and let's look at how long it took for you to go do this. Okay, it took you seven minutes, right? Okay, we had the AI do this; here's how much more it actually captured in 30 seconds versus the seven minutes that you spent here. And so, like, we had to kind of do more of that, like, hey, you're going to get to the same outcome faster doing this. But then on the balance, the part that will get you is it's not always going to be perfect, right? So we did still have to set those expectations again, like, human reasoning. You've got to actually read through it, give it a sniff test a little bit too, right? Especially, as you pointed out, they're going to get on the phone and talk to somebody, so they want some level of accuracy. And that's where we've built in more feedback loops,
and like if AI comes back and like hallucinates, right?
Like we want to have that redundancy tracking and like QA on top of that too
to just make sure we're giving the best possible results.
Because I would rather give no result for something than give something that was not close to
accurate.
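[As a rough illustration of the prospecting pattern described above, looking at a company's public pages for travel signals and returning a score plus the reasoning so a rep can sanity-check it instead of redoing the research, here is a minimal sketch. The keyword list, the weights, and the idea of returning "evidence" alongside the score are assumptions for the example; the real agent would presumably use an LLM and richer sources rather than simple keyword matching.]

```python
# Hypothetical travel-intent signals a prospecting agent might look for on a
# company's careers page, blog, or social posts, with rough weights.
TRAVEL_SIGNALS = {
    "travel required": 3,
    "field sales": 2,
    "trade show": 2,
    "conference": 2,
    "on-site": 2,
    "multiple offices": 1,
}

def score_travel_intent(page_text: str) -> dict:
    """Return a score plus the evidence behind it, so a rep can review the reasoning."""
    text = page_text.lower()
    evidence = [kw for kw in TRAVEL_SIGNALS if kw in text]
    score = sum(TRAVEL_SIGNALS[kw] for kw in evidence)
    return {"score": score, "likely_traveler": score >= 4, "evidence": evidence}

# Example: text pulled (hypothetically) from a job posting.
sample = "Field sales role. Travel required up to 50%. We host an annual trade show."
print(score_travel_intent(sample))
```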
When we talk about quality control, I think there's going to be more conversations about
this in the future.
Like I just, I think it's becoming a huge component of these AI rollouts is that there
is always going to be a level of risk that you accept.
And it's just true with humans.
Like, I mess up.
Everyone messes up.
You know, sales rep goes to a website, makes an assumption, ends up being wrong.
Like, but we are less forgiving of technology than we are of human mistakes. And so this quality control conversation, I think, is one that is important, because there has to be a level of risk tolerance.
You have to accept that there will be some mistakes.
But it's like each company needs to decide within themselves: what is that level? What's the risk I'm willing to take?
So how did you guys kind of determine that?
Like is there,
you said there's like an accuracy scoring.
How do you determine,
like how do you even come up with like the accuracy algorithm of whether or not this is
correct or not?
I mean, I'm going to say, like, you don't, right? Like, you come up with something and you monitor it and you manage it, right?
Like you are still deciding things.
And I think a couple of the things that we,
I'd say did really well when it came to like quality control,
especially like a lot of these are customer facing, right?
So like back to risk tolerance.
Like, my risk tolerance is a lot lower on something that's going in front of a customer than it is potentially on something that's back-end, that I know I have a little more luxury and flexibility with. But quality control has to start in design, right? It can't be at the point of inspection. So when we thought about this, it was, what are the guardrails we're putting in our workflows, right? Especially with Eva. Back to her: she can only access certain data, right? She can only make certain types of changes. She has to escalate when confidence drops below a threshold. She doesn't sit in analysis paralysis. We built that design with that quality control in mind, versus going out and then being like, oh crap, we've got to go back and figure out when she's wrong or going rogue, right? And then the other piece of this is,
the first hundred interactions right had manual human review we're on a 10% ongoing we actually have
I'm going to laugh when I say this but like we have AI checking AI on our on our pieces to help like just
drive like more of those like checks too but like we're still doing manual review and like I hope that
we continue to always do manual review and like maybe that'll drop lower than 10% when we have more
of that confidence but like again tech is always changing things are always changing like we want
that alerting right and that's where like the CSAD and like these metrics are so important because
they also give a signal if something starts to seem off as well so we can catch issues faster and you know
the other thing is like the perfect automation doesn't exist right like I think just like being okay with like that and instead like optimizing for consistently good outcomes versus like perfect ones is like another really important thing to keep keep top of mind as you're doing this design and like sometimes that means like AI is going to err on the set of caution and like escalate more than necessary but like for us that's better than a bad customer experience because like again that's going to the bad customer experience is going to cost us more yeah oh for sure.
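To make that guardrail pattern concrete: escalate when confidence falls below a threshold, then keep routing a sample of the auto-handled cases to human QA. The following is a minimal sketch only, not Engine's actual implementation; the AgentReply type, the threshold values, and the handle_case flow are all illustrative assumptions.

```python
import random
from dataclasses import dataclass

# Illustrative values; a real deployment would tune these per workflow.
ESCALATION_THRESHOLD = 0.75   # below this, hand the case to a person
REVIEW_SAMPLE_RATE = 0.10     # roughly 10% of auto-handled cases get manual QA

@dataclass
class AgentReply:
    answer: str
    confidence: float  # 0.0 to 1.0, however the platform scores it

def handle_case(reply: AgentReply, case_id: str, qa_queue: list, escalations: list) -> str:
    """Confidence-gated response with sampled human review (guardrail built into the design)."""
    # Guardrail: escalate instead of guessing when confidence is low.
    if reply.confidence < ESCALATION_THRESHOLD:
        escalations.append(case_id)
        return "Escalated to a human agent."

    # Ongoing QA: a fixed sample of auto-handled cases still gets human review.
    if random.random() < REVIEW_SAMPLE_RATE:
        qa_queue.append((case_id, reply.answer, reply.confidence))

    return reply.answer

# Example usage with two hypothetical replies.
qa_queue, escalations = [], []
print(handle_case(AgentReply("Your booking is confirmed.", 0.92), "case-1", qa_queue, escalations))
print(handle_case(AgentReply("Maybe try the website?", 0.40), "case-2", qa_queue, escalations))
```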
Yeah, for sure. That makes total sense. Part of building a lot of these AI agents and tools that you, as companies, are investing in is actually selecting the partners you work with, right? But also what platform you're going to use. Are you going to use ChatGPT as the basis of this? Are you going to use Claude? What company are you going to go with to help you build out on their LLMs? Or I guess you could go the extra step and actually build your own LLM, which I know some companies have done, and which is a huge undertaking and not something I would recommend to smaller organizations, for sure.
So when I think about this, I'm like, okay, cool, we've made these AI agents. We've made Eva, we've started to invest in all this stuff. But now there's this underlying structure that is not your company's, right? It's ChatGPT, it's Claude, it's whatever, or maybe it's Google. I get concerned about that, because I think, oh, well, what if ChatGPT, quote-unquote, fails? I don't think it's going to fail. But what if something underneath there doesn't end up working, or they're not moving as fast on innovation as a different company? And now I've built so many AI agents on top of this tool; what happens if it doesn't work out? So I do get concerned about that.
This isn't new, though. It's a technology problem: anytime you invest in new technology, it's possible the underlying framework might break or stop working. But how are you thinking about the risk of investing in just one model that's going to be that source of truth for you moving forward?
It's a great question, right? You know, I have a paid personal subscription to Claude as well as ChatGPT, and I couldn't tell you exactly why I use one over the other. I have some preference based on what I'm doing, but I couldn't even tell you what that is. But to your question, platform lock-in is a true concern, right? And yet there's a bigger risk in trying to hedge too much and ending up with a Frankenstack.
So from the Engine side, we deliberately chose to go deeper with fewer platforms versus being spread across too many. We use Salesforce and Agentforce, and we're betting on that ecosystem, right? The ecosystem, not just the AI capabilities. But then we've got our Claude and our OpenAI, knowing that if they change, let's say, their pricing tomorrow, we can swap our models in and out the way we need to. One of the core things in our design is that we're not having to rebuild all our integrations and all our workflows, because we took that ecosystem approach of building on top of Salesforce, and we have abstraction layers we can leverage. We don't hard-code specific AI models into our workflows; we have interfaces that sit underneath and support the underlying technology. It definitely meant more work up front, and it was a cognizant decision we made to give ourselves flexibility later on, because otherwise, if we have to start over, we essentially have to rebuild. So when we think about modularization and building for scale, that was super important to us.
And let's talk about the hard lessons, right? We learned that lesson the hard way with Eva. Initially we did everything as one monolithic agent, and then when we wanted to start adding new capabilities, we had to do a little bit of rebuilding. So now we've got modular components that we can mix and match. The nice part about that is, as we expand Eva outside of just our main travel platform and look at more of our back end for our partners and suppliers, we can actually start to mix and match those capabilities versus starting over from scratch, which definitely helps too.
And I think the philosophy with AI is: assume everything's going to change, and go in with that assumption. Models are going to get better. New vendors are going to come out. Business requirements are going to change. So make sure your back end can adapt, and think about how you build more plug-and-play. We've invested in APIs and data pipelines that have more openings to connect to different platforms and systems versus locking us in, so that, again, we don't have to touch core infrastructure when we want to make these adjustments.
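Her abstraction-layer point can be sketched as a narrow model interface that workflows call, with vendors plugged in behind it, so swapping providers becomes a configuration change rather than a rewrite. This is a hypothetical illustration, not Engine's real code; the ModelProvider protocol and the stubbed provider classes are assumptions, and the stubs do not call any vendor SDK.

```python
from typing import Protocol

class ModelProvider(Protocol):
    """Narrow interface the workflows depend on, never a vendor SDK directly."""
    def complete(self, prompt: str) -> str: ...

class OpenAIProvider:
    def complete(self, prompt: str) -> str:
        # A real build would call the vendor's SDK here; stubbed for illustration.
        return f"[openai] {prompt[:40]}..."

class ClaudeProvider:
    def complete(self, prompt: str) -> str:
        return f"[claude] {prompt[:40]}..."

# Swapping vendors becomes a config change, not a workflow rewrite.
PROVIDERS: dict[str, ModelProvider] = {
    "openai": OpenAIProvider(),
    "claude": ClaudeProvider(),
}

def summarize_trip_request(text: str, provider_name: str = "openai") -> str:
    """Example workflow step: only the abstraction layer knows about vendors."""
    provider = PROVIDERS[provider_name]
    return provider.complete(f"Summarize this travel request: {text}")

print(summarize_trip_request("3 rooms in Chicago, Oct 12-14", provider_name="claude"))
```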
That's really smart.
And this might be, I don't know, a question for you personally, or just how Engine is thinking about this more broadly. But how do you track, you know, oh, there's this new AI tool, let's try it out, or there's this new startup, this new vendor that could do something really cool? How are you actually evaluating whether that's hype or whether it's something you should pursue? Are you personally just experimenting with stuff all the time? I know personally, I'm constantly like: new AI tool's out, I'm over there trying it out; oh, it didn't work, moving on to the next thing. But it's also really hard, and it can be a bit distracting. And I know we've made investments in AI tools that we thought would help us in certain ways that ended up not working out at all. Or fast-forward two weeks and the platform we were already using now offers the same function, and you're like, well, now I've invested in this thing I shouldn't have. So I'm just kind of wondering how you're evaluating when to step in and say, yeah, let's use that, both personally and then just from a business standpoint.
Yeah, I mean, geez, this landscape of tech is crazy, right? We've looked at, you know, Scott Brinker's MarTech landscape, whatever it's at now, probably 100,000, and AI has just kicked that one to the curb, because there's so much happening all the time. Every week there's a new AI-first startup that's revolutionizing this piece of the rev stack. And when it gets down to it, realistically, these vendors offer about 95% of the same functionality, right? And then there's probably a niche five percent. So to your point, and I'm a tech snob, right, I've got my RevTech review, I love looking at what's new on the market, but the distraction is real. You have to make sure that you're moving forward the right way and not pivoting. Back to what you talked about: we wouldn't have been able to launch in 14 days if we kept trying to touch everything new that came into the market. So for me, before we can start something new, you have to explain what we're going to stop doing or what we're going to consolidate to, and really make that case, right?
The one thing I do appreciate about the market right now, which I'd say probably wasn't the case three years ago, is that most of these places offer free trials, or they have quick demo accounts. And this would be my warning to anyone: if you are buying an AI solution, or considering buying one, and you're not running a pilot on it, you're probably going to get hosed. Full stop. They've got to put their money where their mouth is when it comes to the tech, and you've got to truly be able to pilot it, because that's the only way you're going to know if it works for your business and really be able to evaluate it against the market. That's been one of my favorite things about this: I can go look at, you know, an n8n or something like that and be like, let me just give it a try, see if it can do what I need it to do. There's so much more self-service on that. I'd say the other big thing, though, that has shifted
there's so much more like self-service on that. I'd say the other big thing, though, that has shifted
and like are the way that I buy is like, it's not just the tool. It's now the partner. Right.
so back to like we're buying from people right like the best vendors are the ones that are
going to be like the extension of our team they're going to understand our business they're going
to be responsive when things break they're going to they're going to be builders with you right
and like really invest in your success because like that's what's going to in my opinion it's
not going to be the tech that separates like the winners and the losers in this space because like
the tech is now table stakes it's going to be like the relationship and the business that is going
to really like push, push this forward, right? Because like at the end of the day, like I'm looking at
four call, you know, call analytics platforms, there's going to be feature parity and tech parity
across all of them. It's which one's going to be the better partner. Yep. Yep. And it's not going to
be the AI agent sales rep that you're talking to, right? It's going to be the real person that you can get
on the phone and actually like have a, have a relationship built with them. It's going to be the one
that flew their implementation team to our office to like help do enable.
and configuration and setup and, like, is the one that wants to be at the table and the one
that's pinging you when they're like, hey, look at what our other client built, you guys should
consider doing this.
Like, those are, that's how you're going to win in this, like, vendor landscape.
Like, obviously you have to have the best tech, which, like, now is table stakes, which I think
before, like, maybe wasn't always the case.
But, like, now it's like you have to be the best partner.
Yeah, I love that.
So I've talked a lot on this show about overhype with AI, and I think I've gone pretty deep into the things that maybe aren't actually feasible. So I want to flip the question around a little bit and ask you: what's underhyped? What's something you've seen, that you've implemented, that you've been hands-on with, where you're like, I can't believe these other companies aren't actually doing this yet?
Yeah.
I'd say process intelligence is super underhyped right now. We're all focused on the sexy stuff, like AI writing emails and analyzing calls. But imagine if we can get AI to actually understand our processes and start to suggest optimizations, even at the top-line level of, oh hey, you should try this or that. Where it gets really powerful is at the rep level: hey Lacey, your deal is stalled in stage three because you didn't send the specific piece of content the buyer could really learn from at this moment. That kind of recommendation engine at a more prescriptive level goes such a long way, and then you can aggregate it up into how we optimize the process. Right now I think a lot of RevOps teams are still doing process analysis manually: we're looking at dashboards and trying to spot patterns. And let's be honest, CRM reporting, and even BI reporting, really sucks at this. I'm sorry to be crass, but even Google Analytics, when you look at your website journey orchestration, still doesn't really give you that prescription or help you visualize the experience in the right way. So from my standpoint, if we can get AI doing continuous analysis and recommendations of patterns and insights, that's going to help change the revenue engine. I see some tools on the market starting on this, but it's really about how we do it at scale, embedded at the level you need it.
Yeah. Oh, I love that. That's great. So anyone listening, that was a startup idea, steal that one.
Yeah, take that and build it for me. Happy to give you insights.
Mollie, this has been fantastic. One thing I'm going to do is throw this over to Rose, who's got our lightning round questions.
This is just a quick round of short questions for you to answer as fast as you can. And we'll hold you to that: if you start answering too long, we're going to pause and make you stick with the lightning round. Okay, first lightning round question. What's a common assumption about AI or RevTech that you're actively pushing back on right now?
That AI is going to replace people. People are obsessed with headcount reduction and cost savings, but that's missing the point entirely.
Totally. Okay, go deeper. I actually want you to go deeper on that, yeah.
So I think the biggest assumption, one that I'm not actually fighting at Engine but that I see a lot of my peers fighting, is how they use AI to replace headcount. We're all obsessed with OPEX and headcount reduction and cost savings, but that's not the point of what we're trying to do here with AI, right? The real value is: how do you make your existing people more effective? Our customer service team's not smaller because of Eva; they're handling more complex issues and building deeper relationships. Our sales team's not getting replaced by AI; they're closing more deals because they're spending more time on strategy and engagement versus research. There's that pushback of, how many people's roles can we eliminate with AI? I think the better question is, how can we help our people do their best work? And if you frame it that way, AI is a growth enabler, not a cost-cutting exercise. So we've got to change that perspective.
Oh, mic drop.
That was amazing.
It was not a lightning answer.
No, that was fantastic.
All right.
Number two, when you think about the work you're most proud of this year, what stands out?
Yeah, I think it's just how much we've changed the culture around AI in the short time that I've been at Engine, just eight months, right? Even though we came into a company that was so open to digital transformation, AI still was, you know, a little mysterious and not as broadly adopted. And now it truly is part of our culture, how we operate, and our daily work. We've got office hours, we've got Slack channels. We haven't just implemented AI; we've democratized it. We talk about it, we share our wins, and we've created an environment that's really building an AI-literate organization, regardless of role, where anyone at any level can continue to learn from each other.
That's amazing. All right, number three: what's one mistake or false start you've learned the most from in the past six months?
Assuming adoption would be a lot easier than it actually was, right? I thought that if we just showed people what we could do, the value, they'd be all in. But even if I'm showing you that I'm saving you 10 hours a day, that wasn't good enough. So, really still investing in the change management. I think my lesson was that successful AI implementation is 70% change management, 30% the tech. So even with the best AI and tech in the world, if people don't trust it, you're not going to get the adoption.
Trust and change management have been huge overarching themes, I feel like, across our past few interviews. It's so interesting. I'm not seeing that as much, at least on my LinkedIn feed; I feel like it's more sexy, flashy AI conversations, or maybe some fear-mongering. But yeah, I love that conversation a lot. I think we're going to see a world where enablement is going to be such a big push, true digital transformation enablement, with resources and instructional design and that type of management around it, and it's going to be really interesting. Hey, maybe we'll bring you back for a part two, Mollie.
Yeah, I've got to figure that one out, because I still suck at the enablement side of things.
All right. Last lightning round question; we ask this of every guest that comes on the show. What's one experience you've had recently as a customer that left you impressed?
Yeah, absolutely.
So now I'm, like, obsessed with chatbots, right? Now that we built Eva, they're my favorite thing to mess with. So I recently bought shoes from an online retailer, and when I contacted support, they knew my recent purchase history; I had already kind of been logged in. So it was like, oh yeah, this is what you bought, what do you want to do with it? Okay, I need to process a return. It gave me those prompts because it already knew, and I didn't even have to explain the problem. I didn't have to go in and say, hey, I bought these shoes, and get the what's-your-name, what's-this, blah blah blah. It was, hey, Mollie, we know you're here, we know you just made this purchase. Is there an issue with it? Do you need to do a return? Let's just walk through that. They just knew, right? And from an experience standpoint and a customer-expectation standpoint, I expect you to know. It's like, how did you just make that so easy on me?
What company is this?
It was American Eagle, actually. Which was super impressive.
Shout out, American Eagle. Wow, so cool.
Yeah. Okay. Well, that concludes the lightning round. Thank you, Mollie.
All right, Mollie, well, this has been so much fun. Where can listeners find you, and where can they follow along with Engine's journey as you guys continue to invest more in technology and grow, since it sounds like you're expanding?
Yeah. Find me on LinkedIn; I'm pretty active there. And at Engine, we're continuing to share what we're doing on social media as well. And check out engine.com, especially if you've got a business trip coming up. I'd love to help you travel.
I'm literally planning my Dreamforce trip using engine.com, so I'm excited.
We got you, Lacey.
Awesome. All right, Mollie. Thanks so much. Take care.
