No Priors: Artificial Intelligence | Technology | Startups - Using AI to evaluate employee performance with Rippling’s COO Matt MacInnis
Episode Date: September 25, 2024

In this episode of No Priors, Sarah and Elad sit down with Matt MacInnis, COO of Rippling, to discuss the company's unique product strategy and the advantages of being a compound startup. Matt introduces Talent Signal, Rippling's AI-powered employee performance tool, and explains how early adopters are using it to gain a competitive edge. They explore Rippling's approach to choosing which AI products to build and how they plan to leverage their rich data sources. The conversation also delves into how AI shapes real-world decision-making and how to realistically integrate these tools into organizational workflows.

Sign up for new podcasts every week. Email feedback to show@no-priors.com

Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Stanine

Show Notes:
0:00 Introduction
0:32 Rippling's mission and product offerings
2:13 Compound startups
3:53 Evaluating human performance with Talent Signal
13:19 Incorporating AI evaluations into decision-making at Rippling
14:56 Leveraging work outputs as inputs for models
18:23 How Rippling chose which AI product to build first
20:53 Building out bundled products
23:26 Merging and scaling diverse data sources
25:16 Early adopters and integrating AI into decision-making processes
Transcript
Hi, listeners, welcome back to No Priors.
Today, Elad and I have a spicy one.
We're here with Matt MacInnis, the COO of Rippling,
the juggernaut workforce management platform that unifies HR, IT, finance, and more.
They're launching a new AI product that looks at the work output of employees
and generates performance management signals.
Sound terrifying? Let's discuss.
It's so good to have you.
Thank you for having me.
So I think a lot of our audience will know Rippling or use Rippling.
But for anybody who's missing it, what does the company do?
Yeah, it's an all-in-one platform for HR, IT, and finance.
We do all the boring stuff, but the important stuff to help you run your company.
So we want to eliminate the administrative burden of running a company.
That's all the official language, but most people come to us and say they need payroll and we have payroll.
They need a device management solution.
We have one of those, too.
So we do all that stuff.
The rumor is, you know, many hundreds of millions of dollars in revenue, growing fast.
Anything else you can say about scale?
It's going well.
We've got about 3,500 employees.
We've got tens of thousands of customers using the platform.
So I'd say we're doing something right.
I guess also one of the things that you all have really pioneered is this notion of reintroducing compound startups or bundled products across a suite of different things.
How many different products do you offer now?
And what's the velocity in terms of adding new ones?
We have on the order of, like, 25 unique SKUs
that a customer can buy from us. Products come in different shapes and sizes. And so like
we ship small new things every quarter and then we definitely do like big things every couple of
quarters or so. We're about to ship like scheduling, which, you know, again, sounds unsexy,
but is actually really cool. We shipped an applicant tracking system for recruiting, you know,
tacking these sorts of things onto our HCM suite. We do a lot of this partially because we have so many
founders in the business. We have over 150 people who have started companies that now work at Rippling.
It's like an explicit strategy to go out and give talented entrepreneurs whose business ideas didn't quite work out
(like, hey, hand raised, I've been there) a safe place to land and continue either pursuing what they were interested in or doing something new at Rippling.
And so that's worked out really well for us on the velocity front for shipping new products.
The compound startup thing obviously has been in the zeitgeist a little bit in the valley.
It's just obviously a huge tailwind for us that businesses generally want to consolidate as much of their software onto a single platform as they can.
So we're going to keep pursuing this, keep recruiting awesome
talented entrepreneurs, and ship new stuff all the time.
Makes sense.
And I guess one way to think about your business is almost like, instead of one company
growing at a certain rate, you're like 25 startups all compounding from a smaller base,
which I think is very exciting in terms of the potential upside.
One of the things that I wished I had learned way earlier in
my career as an entrepreneur was just, like, basic corporate finance.
Like understanding an income statement and a balance sheet and how those things play together
and what, you know, investors look at in that context.
And, you know, for the record, I get it now. I think I've
mostly figured that stuff out. But when you look at the income statement for Rippling and
you think about the 25 businesses or just like the major product suites like IT and finance
and spend, they're sort of sub-scale businesses at some level on their own. And then in aggregate,
you have this beautiful top line picture, but from an efficiency standpoint, like, it's okay
today, but there's clearly just going to be this like blossoming of efficiency over time
for us as these different suites all play to one another. In SaaS software, most people don't
totally understand that, like, for a scaled business, the unit economics of your business converge
at the cross-sell motion. Like your new logo sales motion is super important, but as you sell
your products into your ever-growing customer base, like your economics start to look like that
more than the new logo sales motion because you always have a lot more in the cross-sell bucket.
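To make the arithmetic concrete, here's a minimal sketch with purely hypothetical numbers (the transcript doesn't give Rippling's actual figures): as cross-sell becomes a larger share of new bookings, the blended cost of acquiring a dollar of revenue converges toward the cheaper cross-sell motion.

```python
# Illustrative sketch with hypothetical unit economics: blended customer
# acquisition cost converges toward the cross-sell motion as the cross-sell
# share of new bookings grows.
NEW_LOGO_CAC = 1.50    # assumed: $ of sales & marketing per $1 of new-logo ARR
CROSS_SELL_CAC = 0.40  # assumed: much cheaper, the customer is already acquired

for cross_sell_share in (0.1, 0.3, 0.5, 0.7, 0.9):
    blended = (1 - cross_sell_share) * NEW_LOGO_CAC + cross_sell_share * CROSS_SELL_CAC
    print(f"cross-sell share {cross_sell_share:.0%}: blended CAC ${blended:.2f} per $1 of ARR")
```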
And so for us, the compound startup thing also has these beautiful financial dynamics that we have
a lot of things we can sell to our existing customer base over time. And that's helped us a lot
economically. And one of the things that you're here to talk about actually is a new product that kind of
ties together a lot of the other ones in some ways. Do you want to talk a bit more about what that is
and how you all are starting to move into AI? Yeah, I mean, the pendulum swinging toward consolidation
has these obvious surface level benefits, right, of better sales efficiency and customers being
able to save money by not paying multiple sales teams to acquire them. But that's like super basic,
like it's really surface level. Where the magic really comes in is where there's
something common underneath all of these different applications that you're building that
provides you with either a scale advantage or what I like to call kind of like your vibranium
advantage. So you have some sort of superpower at the core of your platform that lets you do
things that other companies just sort of look at and think, like, how the hell did they do that?
Like why are they able to do that and we can't? And for us, it's like our deep understanding
of the employee graph and about employee data. So everything that we build runs on these common
rails of like a deep understanding of data about the employees in your business.
And so the question is, like, what happens when you start to marry all of this data in a single platform?
Like, what's some cool stuff you could do with it?
And then you toss in the question of AI.
And, like, what could a large language model accomplish with this data and this, like, real understanding of its structure and its history?
And that was one of the big questions we started asking ourselves a few years ago and started investing in this new thing.
So there's a new product that we're just releasing called Talent Signal.
It's the ability of this system to read the work product of employees and, like, marry the data
that we have on whom you've hired into your company, at what job level, with, you know,
all the basic data about their job history, married together with the actual work product that
they produce to yield like an insight into how those employees are doing. And, you know, this is
obviously going to be super powerful and useful. And it's sort of this thing that I think everyone
knows is coming at some level, that like AI is going to contribute in some way to evaluating human
performance. And so we knew there was an opportunity here. And that's what Talent Signal
is going to deliver. It actually feels like a pretty big break to be looking at what you describe
as work product because traditional HR and IT systems, they don't necessarily have that work product
data in them. If you've had a job as an individual contributor, you know, for some period
of time reporting to a middle manager, and, like, I did that for a while early in my career at Apple.
And the real sort of crunch point in your relationship with a manager comes around performance
review time where you have some opinion on how you've done and your peers and others around you
have an opinion on how you've done and your manager has an opinion on how you've done. And everybody
gets in a room and after they've written the feedback, you know, they do this thing called
calibration where managers try to hold themselves to a common standard and they all try to
hold one another accountable to a standard way of evaluating against the rubric. But the truth is like
the manager hasn't really sat there and, like, looked at everything you've done,
particularly if this is over, like, a, you know, six-month time horizon or a 12-month time horizon.
They just don't have enough time to do that.
And there's a bunch of, like, really interesting articles written about this in many
different sources. I recently read one in HBR where they talk about the manager vibe.
So like if the manager has a good vibe about an employee and there's ambiguity about their
performance in the review process, then that opens up this massive gaping hole for the vibe
to be the basis of the performance review.
And likewise, if you have a negative vibe on an employee and there's some ambiguity
about their performance, like, then they're going to drive that negativity through that crack,
like a Mack truck. And the question is, like, how do you get around this tendency in these
fundamentally human processes? And the answer is, like, when you go to the source, like,
you bring the facts to the discussion. And so Talent Signal reasons from the work product
only. Like, it doesn't have access to demographic data. It doesn't know your race, ethnicity,
your age, your work location. It just knows this is the source code you wrote, or these are
the customer interactions that you had as a support agent. And then it generates this thing called
a signal that is a stamp, basically, that says that this person is high potential, this person
is typical, or this person is in need of attention. We call it pay attention, but they're effectively
at risk. And it directs the manager to go and spend time with them, and it surfaces all of these
concrete work product examples that the manager can use to go and have like a good coaching
conversation with the employee. Do ICs get to see it? The ICs can see it when the managers let
them. And this is actually a thing that we've debated, like, quite a bit. Talent Signal is not
making employment decisions. It's just giving this independent signal to the manager about how the
employee is doing. A calibrated signal. Yeah, well, it is calibrated. And that's actually really
important. Because one of the pieces of data that we do feed the model is someone's job level. And then
we try to calibrate that actually across all the companies that the data is trained on.
Does it end up showing calibration relative to both the individual company and overall pool?
We don't separate it out.
We give you only the localized version.
And so you tend to see like a pseudo-normalized distribution.
So you see like in a population of like 50 engineers, you'll always see some people who are flagged as being high potential.
And you'll always see some that need attention.
Even if, in the global model, you know, they were all skewed.
Because this is a really good company.
Yeah, yeah, exactly.
You know, because it's not particularly useful otherwise.
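The localized calibration he describes can be sketched as a simple within-company percentile bucketing. This is my construction for illustration, not Rippling's implementation; the cutoffs and scores are assumptions.

```python
# Rank raw model scores within one company's population, so even an org that
# skews high against the global pool still surfaces some of each signal.
from typing import Dict

def localize_signals(raw_scores: Dict[str, float]) -> Dict[str, str]:
    ranked = sorted(raw_scores, key=raw_scores.get)  # ascending by score
    n = len(ranked)
    signals = {}
    for rank, employee in enumerate(ranked):
        percentile = rank / max(n - 1, 1)
        if percentile >= 0.85:       # assumed cutoff for illustration
            signals[employee] = "high potential"
        elif percentile <= 0.15:     # assumed cutoff for illustration
            signals[employee] = "pay attention"
        else:
            signals[employee] = "typical"
    return signals

# All five engineers score well globally, but the local view still spreads them out.
print(localize_signals({"ana": 0.92, "raj": 0.88, "lee": 0.95, "kim": 0.81, "joe": 0.90}))
```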
And this is all stuff that like is part of this early access program that we're doing.
There are a couple of things that I should share about this,
because I think your listeners are, like, going to clue in now, like, wow, the stakes on this
are pretty high.
Like, getting this right is awesome, but getting it wrong sounds kind of dangerous.
We are doing this as part of an early access program, and the way that the product works
is that it generates one signal, one time for one employee at their 90-day mark.
Even people who have been at your company for, like, three years, we can generate a signal,
but we're only going to base it on the first 90 days of their work product.
And the reason that we're doing it this way is because companies that look at this can see,
okay, this thing actually made like a pretty darn good assessment at day 90. And it took us like
12 months to figure out that this person was not a fit for our company or that this person was
going to be an exceptional member of the team. That builds trust in the model over time. And we think
like, I don't know if you guys have ever heard the Overton window concept, right? Like this idea
that people are only ready for a certain amount of change and how they think about a certain
problem. And for us, it was actually really important to contemplate in the design of the product
that we not stretch the Overton window too far. And also like by limiting it to the first
90 days, we get to build trust with the employees, with the managers, and have them kind of
understand the implications of this thing and whether it's accurate for their particular
circumstances. And over time, we can, like, expand how it's applied. These are all the different
issues that we've contemplated as we've gone along. But if somebody's been around for three
years, and we have a 90-day signal, is that still relevant to that person who's been around
for three years? Nope. Highly unlikely to be, like, incrementally useful information at that point.
The idea there is like, hey, here's what we would have said at day 90 for this person.
It's back testing.
Yeah, it's back testing and establishing some level of credibility for the model,
because we've obviously done a bunch of testing with this and thought it was quite accurate.
And it certainly instilled confidence in us that, like, it's a useful signal.
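The back-testing idea reduces to a simple comparison: score each tenured employee on only their first 90 days of work product, then check agreement with the outcome the company actually observed later. Everything below (the records, the outcome labels, the notion of agreement) is hypothetical, just to illustrate the shape of the check.

```python
# (day-90 signal from early work product, outcome the company later observed)
records = [
    ("high potential", "promoted"),
    ("pay attention",  "exited for performance"),
    ("typical",        "stayed at level"),
    ("high potential", "stayed at level"),
    ("pay attention",  "exited for performance"),
]

def agrees(signal: str, outcome: str) -> bool:
    return (
        (signal == "high potential" and outcome == "promoted")
        or (signal == "pay attention" and outcome == "exited for performance")
        or (signal == "typical" and outcome == "stayed at level")
    )

hits = sum(agrees(s, o) for s, o in records)
print(f"day-90 signal agreed with the later outcome in {hits}/{len(records)} cases")
```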
So a lot of the use of it is actually for new employees versus people who've been around
at a company for a long time.
This version of the product, V1, like, as we take baby steps into this, is to do the 90-day
signal for new hires.
And so the more people you hire, the more useful it is.
So high-growth companies, obviously, you're going to get more value out of this initially.
But the sky is the limit, obviously, as this thing evolves, and we all gain more trust in the model.
I want to talk about risks too, but what is the aspiration for how this changes performance management?
For me, the sort of motivating factor here, honestly, it's the bad manager.
If you're an employee and you are working in the bowels of the organization on hard problems,
your manager, a little lazy, doesn't sort of recognize the quality of your contributions,
shows up at that calibration meeting with a better vibe on somebody else, and they get the
promotion.
Talent Signal walks into that environment and slams your work product down on the table
and says, like, what about this?
I can give you a concrete example of an underrepresented profile at Rippling when we were
building this product.
She was an engineer in India who was working on one of our toughest problems, and she was
singled out as a high potential employee.
And she was, in fact, pretty early in her tenure at the company.
And we paid attention to that, and we talked to the manager about it.
And it was sort of an eyebrow-raising moment where she was kind of lifted from obscurity
by the model that was like, I don't know what your vibe is on this person.
But, like, man, they seem to be contributing at a high level.
And here are concrete examples of how they've done so.
So the lazy manager who doesn't, like, represent things the way they ought to,
is held accountable by their manager when they look at the total organization through this tool.
And it does a better job of representing the employee.
And obviously, I can talk all day about, like, lifting people from obscurity, but it also has the team performance impact of signaling that someone needs support.
If they're not performing well, if they haven't ramped well, giving that signal to the manager and the manager's manager knowing about it is hugely valuable to overall team performance too.
So the vision here is, like, to have an independent signal. When I say independent, it's independent of the biases of the manager,
it's independent of, you know, all the noise that sits in the company, and it just cuts at the heart of one vector on this employee, which is their work product, and gives them a chance to shine.
You can just imagine: this is the first time in kind of the recent history of the concept of performance management in companies where there is an orthogonal input that can really upset, with facts, how people are doing this.
What did you learn from dogfooding at Rippling?
We started talking about this internally quite some time ago, and as the product has gotten more mature,
and as we've talked about it more and more with employees, the feedback from employees has been super useful in informing the policies that we set up.
I'll give you a couple of examples. Like, no one's allowed to make any significant decision using the model alone.
So anytime you talk about employment decisions, promotions, that kind of thing, you're not allowed to just point at Talent Signal and say, you know, it said X.
You've got to have your own independent assessment of the inputs.
So it's really, it's, like, the manager reasoning about the work product that Talent Signal points them to.
Talent Signal is like a cheat sheet, but the manager has to do what is fundamentally a human process, which is to evaluate the whole person.
The policies that we've set up internally prohibit blind following of Talent Signal and require the manager to exercise judgment about what they saw in the study.
And like, look, we dogfood the heck out of everything at Rippling.
Parker, our CEO, he runs payroll for the company.
Like every pay run goes through him.
He also approves every expense above $10.
We could talk all day about that.
Sometimes I just eat the $10, you know, like, what's the point?
Of fighting Parker on the expense policy?
Yeah, it was Uber Comfort and not UberX.
But anyway, the AI stuff, you know, he's obviously very close to the development of this.
And the employees have been, I would say, really thoughtfully engaged in balancing being
good sports as dogfooders but also sort of making sure that their own rights are represented
in the development of this technology. One of the biggest objections I can imagine, especially as
you get to evaluating people whose job might be, like, you know, classic middle manager, I make other
people successful, it's about, like, collaboration or focusing people on the right tasks, is that that
is not captured in a concrete work product. What's your response to that? Yeah. I mean, so first of all,
Talent Signal focuses on individual contributors in terms of developing signals. So for sales
people and support agents and individual contributor engineers, it, like, it only does a signal
for them. We haven't gotten into the game of managers yet. That's going to be interesting for us
or for someone to dig into. But there is this question of like, what's it looking at and is it
sort of like, you know, this overlord looking at everything that I'm doing? What we needed to do in the
development of the product was find the highest correlate, like find the best R squared. Like, what
is the input you can give the model that is most predictive of the output, which is,
were they promoted, were they terminated for performance, you know, did they stay at the same level
for a long period of time? Just in general, what was their career outcome in the period studied?
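As a toy version of that model-selection question, with entirely made-up data: code each career outcome as a number and compare how well two candidate inputs correlate with it.

```python
# Hypothetical illustration: which input is most predictive of career outcome?
# Outcomes coded 1 = promoted, 0 = stayed at level, -1 = exited for performance.
from statistics import correlation  # Python 3.10+

outcomes           = [1, 1, 0, -1, 0, 1, -1, 0]
commit_volume      = [40, 12, 35, 30, 20, 25, 38, 15]            # "how much they do"
work_quality_score = [0.9, 0.8, 0.5, 0.2, 0.5, 0.85, 0.1, 0.45]  # assessed work product

for name, xs in [("commit volume", commit_volume),
                 ("work-product quality", work_quality_score)]:
    print(f"{name}: r = {correlation(xs, outcomes):.2f}")
```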
When we did these sort of preliminary studies, the screaming signal was work product.
Was work product. You know, like, if you want to know if someone's a good engineer,
look at their contributions, like look at their source code. And don't just look at, like, you know,
definitely don't just look at how much they do. But like, really reason about the
quality of the code contributions, think about security issues, look at pull requests, look at
comments on pull requests. These foundational models do an excellent job of thinking about source
code and writing source code. And so they're actually really excellent engines for assessing the
quality. That was one of the coolest things, I thought, about seeing the demo: like, when it was
looking at assessment of, for example, like, maintainability, extensibility, right? Because that requires
code reasoning. Yeah, it has an opinion on this and it's able to express it really eloquently.
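A sketch of what such a work-product assessment might look like as a prompt. The prompt shape and the llm_complete callable are placeholders I'm assuming for illustration; the transcript doesn't describe Rippling's actual pipeline.

```python
# Assumed prompt shape for having a foundation model reason about a pull request.
PROMPT_TEMPLATE = """You are reviewing an engineer's pull request.
Assess, citing concrete evidence from the diff:
1. Code quality and maintainability
2. Extensibility of the design
3. Potential security issues
4. Quality of the review discussion

Diff:
{diff}

Review thread:
{comments}
"""

def assess_pull_request(diff: str, comments: str, llm_complete) -> str:
    """llm_complete is any text-completion callable wrapping a foundation model."""
    return llm_complete(PROMPT_TEMPLATE.format(diff=diff, comments=comments))
```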
And then the manager has to go in and use their own judgment. I'll give you another example.
This is an example from a customer who's been using the product.
So the CTO of one of the, like, alpha test companies went in and saw that somebody he didn't think was a very strong engineer was flagged as high potential.
And he was like, okay, like that does not jibe with my priors.
He had priors.
He had priors.
It doesn't jibe with his vibe, right?
Like, it's not my vibe about this employee.
So he goes in and looks at the source code.
He goes, oh, I see what's happening.
He's like, I wrote all this source code.
And you're like, huh?
Like, tell us more.
And he's like, well, this employee has been struggling.
And so I've been spending time with them shoulder to shoulder, like writing code and coaching
them through this stuff.
And like what the model has picked up on is this really high quality contribution that only
happens when I'm sitting next to this person.
And it was like, aha, okay, cool.
So it's sort of like an unknowable misattribution.
How do you think more generally about managers?
You mentioned that you don't currently assess them.
Andy Grove used to always talk about how the output of a manager is the output of their team.
And that's how you're supposed to assess them.
So to some extent you could argue you have some signal
you can aggregate up. So when you look in the product, it does aggregate at the manager level
to show you the sort of distribution of high potential, typical, and needs attention employees.
That part, we're sort of saying to customers, like, you use it informationally to sort of spot
where there might be hotspots, but don't, you know, don't totally judge the manager on that basis.
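The manager-level roll-up described here amounts to counting each manager's distribution of signals, to spot hotspots without treating the distribution as a verdict. A minimal sketch with hypothetical data:

```python
from collections import Counter, defaultdict

# (manager, signal for one of their reports): hypothetical records
reports = [
    ("manager_a", "high potential"), ("manager_a", "typical"),
    ("manager_a", "pay attention"),  ("manager_b", "typical"),
    ("manager_b", "pay attention"),  ("manager_b", "pay attention"),
]

by_manager = defaultdict(Counter)
for manager, signal in reports:
    by_manager[manager][signal] += 1

for manager, dist in sorted(by_manager.items()):
    print(manager, dict(dist))  # e.g. manager_b skews toward "pay attention"
```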
Is that a reflection of hiring or is that a reflection of execution? Yeah. So I guess it's hard
to sometimes tease those things out. Now you kind of get why we call it Talent Signal, because it's like,
it's a signal. It's like, uh-huh, okay, the little yellow light's
going off over here.
So I was just curious, how did you converge on this
as a thing that you're going to do for AI?
Was it a big exploration?
Was it more like, hey, we actually have something here
that we've aggregated data.
This AI seems to be good at interpreting
certain types of data.
I'm just kind of curious how you landed here
of all the things that you could do with foundation models.
We thought about a lot of the obvious AI use cases.
I'm going to zoom out for a sec
and maybe toot the company's horn a bit.
You have a dollar, and there's a bunch of
things you can do with it. Back to this corporate finance topic, like, there's a bunch of things
you can do with that dollar. If you invest that dollar back in the front and out the back
of the machine comes two, like, don't take the two, put the two in the front, get four out the back,
put four in the front, get eight out the back. This is why SaaS software businesses run at such
a deep cash deficit over the course of their early years. Now, if you can't do that because you
don't know what technology you're going to build next, if you don't know how to invest it in sales
and marketing to go and acquire the next customer, if you don't know how to invest it in
R&D to go and build the next product that's going to generate incremental revenue, then you
might do something like a stock buyback. And that means that the most creative idea that you
could come up with, with this cash that your business is generating, is to just, like, juice the share
price. And, like, even worse is a dividend. Because, like, now I can't even do that. I'm just,
like, literally just going to give it back. I don't know what to do with this money. I'm just going to
give it back to you. Like, what would I do with this money? There's, like, such a bad signal on a
company if, you know, the best thing they can think of is a dividend. Now by contrast,
companies like Rippling and many companies in Silicon Valley not only know what to do or think
they know what to do with the next incremental dollar, but they want even more dollars than they have
access to. And so they use equity capital to go out and get a bunch more cash that they can use
to pump in the front of that machine and get even more dollars out the back. You look at some of
the highest performing companies in Silicon Valley and they reach profitability, or,
you know, some of them do at least. It's still centered around one idea or one product that
they have done a really good job of scaling up. And one of the, like, super unique things about
Rippling, and it's, like, so easy for us as a team to take this for granted, is that we have this
massive list of projects that if we were to go build them, they would turn into revenue.
Like we know the next product we want to build and the one after that and the one after that.
And the only challenge is like, can we hire enough engineers and not run out of money?
You know, because we know that in the long run, this is all going to work.
Oh, wait.
I think a common objection in, like, classic, not-Rippling, Silicon Valley, do-one-thing-well type
companies is: it's really hard to focus on that many things. It's really hard to do that many things well.
It's hard to keep it cohesive. How do you teach the sales team that? How do you think about
cohesion? You just work harder. You know, like you just get the right people into the right
jobs and get enough leaders into the business who can deal with sort of the fractal of complexity.
This is also the traditional enterprise sales playbook from the 90s, right? And I think it's almost
like we had an era of 10 years in the 2000s where we forgot about this. And everybody became
single point products. And then there's you guys, there's HubSpot, there's Datadog. Like,
a variety of people have built out these sort of bundled products and the cross-sell motion around a single sort of core either system of record or type of identity or something else.
So, I mean, there's the old saying from Netscape, from the Netscape days where all of innovation is either bundling or unbundling or some variation of that.
So now we're in an era of bundling again.
Yeah, history repeats itself.
History doesn't repeat itself, but it rhymes.
And, like, we're definitely in the rhyming phase of, like, the big platform stories from the 90s.
But Talent Signal doesn't look like bundling.
It looks like something like pretty different.
Well, this is why it doesn't repeat itself, but it rhymes because the technology that emerges in these new situations offers new opportunity.
And so for us, we have all of these things we want to build, but the guiding principle is always what can we alone do?
What can we uniquely do with this new tool?
Vibranium.
We have Vibranium.
We have Vibranium in this underlying platform.
So, AI plus vibranium equals what?
You know, like, what does it yield?
And what I would say about other companies
that are doing AI products
is that for the longest time,
their roadmap sucked.
They didn't know what their next proximal feature
was going to be that was going to generate revenue.
They didn't have another SKU idea
with 100% chance of generating incremental business,
and they kept filling in additional features
that made existing customers happy
and may have given them sort of marginal cross-sell opportunities,
but they didn't have the next big thing
that they could tack on.
AI comes storming into the scene,
and now all of a sudden,
everybody's a freaking AI company
because it's offered them this opportunity
to at least masquerade
as a company that knows what to do
with the next proximal R&D dollar.
We've never had that problem.
And so guess what we didn't do?
We didn't build a chat bot.
We didn't build a co-pilot.
We didn't build any of these surface-level,
obvious capabilities. We're going to build them.
They'll be in there at some point.
Who cares?
It's not going to sell a single extra subscription of software.
We said, we're going to skip that.
We're going to fast forward.
We're going to take these super expensive AI engineers
who are really hard to recruit,
easy to retain because it's such a great place to work, but hard to recruit, and have them build
something that has the chance to be big, where the opportunity cost is, like, for sure, worth
it. Because the opportunity cost of putting them on the chatbot thing ain't there relative
to what we could otherwise be going and building. Are there any other types of new AI products
that are in the pipeline for you all? We've got a bunch of stuff we're working on in the AI world,
but getting this one right is like, you know, we're not like peeling people off of this project
to go work on project number two.
Like, we really want to get this one right out the gates.
There is some new stuff coming from the company
that's not directly AI-related,
but it is about really scaled data,
like super high-scale data.
We've already built this, like, really beautiful data platform
underneath Rippling.
It's kind of like our AWS.
Like, we're going to have our AWS moment at some point
in the next, you know, quarter or so.
But we're here to talk about Talent Signal.
So it sits on this data platform.
And when you want to install Talent Signal
for your engineering team, what you do is you just, you install the GitHub app on Rippling,
and it replicates your source code repository into this secure, you know, well-guarded environment,
and there it's going to do the analysis on the source code. You know, when you plug in Salesforce,
we're replicating a lot of the data out of your Salesforce instance, and that's heavy duty.
I mean, the size of our Salesforce instance is massive. And so really it was about how do we
marry the HRIS data with this scaled data platform, where everything
is really beautifully structured, and in particular, all the employee data is de-referenced elegantly.
In other words, we always know who's who in all of these other systems, and then say,
okay, now, what business problems can we solve with that?
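A toy sketch of that "who's who" de-referencing: resolve per-system identities (a GitHub handle, a Salesforce user ID) back to one canonical employee record so work product can be joined to HRIS data. The field names are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Employee:
    employee_id: str
    name: str
    job_level: int
    github_handle: str
    salesforce_user_id: str

employees = [
    Employee("e1", "Ana", 4, "ana-dev", "sf-001"),
    Employee("e2", "Raj", 3, "raj-codes", "sf-002"),
]

# Index per-system identities back to the canonical employee record.
by_github = {e.github_handle: e for e in employees}

def author_of_commit(github_handle: str) -> Optional[Employee]:
    """Resolve a GitHub identity to the employee graph."""
    return by_github.get(github_handle)

print(author_of_commit("ana-dev"))
```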
And, like, it was so obvious that this was the opportunity because we were seeing inside
of these workflow products.
And, like, GitHub can't do this because GitHub doesn't know who you've promoted.
They don't know who did well.
They don't know who you've had to let go of for performance reasons.
Salesforce doesn't know that either.
And so, like, I'm sure there's going to be really cool, like, code quality evaluation
tools built into many of these workflow systems, but ain't none of them going to know what
happened from a human perspective in the way that we do. And that's why this is kind of our magic
talent. Do you think you need a particular type of culture or leadership to be an early adopter
of Talent Signal? Or maybe, is there a read on that in terms of your alpha partners?
For sure. I mean, look, there are companies that we've engaged on this who have looked at it and said,
like, we're going to not be an early adopter on this one. And of course, we totally respect that.
I have the sense that like AI in the conversation about human performance is 0.1% of the way there.
You know, like there's a lot more to come on this.
You know, I would be mostly comfortable saying that it's, like, an inevitability that LLMs are going to be involved in assessing human performance in many different contexts.
Yeah, you know, it's interesting.
A friend of mine who's a CEO of a public company told me that he sometimes uses some of the chat-related products to talk through employee
issues, where he'll chat and say, hey, I'm trying to work through this thing with an employee.
What are some of the things that I should be doing? How should I think about it? And so you already
start to see sort of glimmers of that future emerging. What do you think are some of the
principles that people should be building against in order to make sure that they're approaching
it in sort of a thoughtful way or to your point, they're not just sort of deferring the decision
to AI. I do think job number one is to understand, like, you can become numb to the impact that
this kind of stuff can have on people's lives. I think if you're, like, if you're not in the AI world
and you hear people like me talking about, like, the risks or AI safety or the ethics,
it sounds weird.
You're like, why are they talking about ethics and risk?
Like, it answered my question about, you know, how to convert half a cup of, you know, oil to ounces,
because most normal people's, you know, experience of AI is as a very benign, friendly,
approachable thing.
But it doesn't take you too long, when you contemplate it in this context, to think about,
okay, so like, you know, if a manager were to run off and make decisions purely on this,
that any hallucinations or misattributions could actually be really consequential to people's
lives. And this is why the way that Rippling is approaching this, right, we're doing this as
an early access program, we're constraining it to the first 90 days. The signal is awesome.
Like, it looks like it's going to be super useful for people. And also, we're very conscious
of the risk of bias that might be amplified or introduced through the whole thing. And so when you
ask about people who want to start using these tools in these kinds of contexts, what advice I
might give them: it's like, number one, you've got to understand the stakes. Number two is like,
even if the system is arguably bulletproof, like you have to go to ground and still do your job
as a manager. You've got to go inspect the context. Don't let the misattribution that I described
earlier, around somebody who got a lot of coaching from their manager, influence your thinking
about them. Just see it for what it is. You know, not just the Talent Signal thing, but just AI more
broadly: see it for what it is. You have to have some, like, understanding of the underpinnings
of these systems in order to be able to judge the quality of their output. And I think it's
probably too high a bar to say that everybody out there who's potentially a user of these
kinds of tools is ready for that. So who is in terms of CEOs or HR leaders or whoever else
is choosing to do it now? It's pretty clear that the companies that have chosen to partner with
us already on this are either reasonably performance oriented, like very interested in finding
new tools to compete. Like I think it's easy to go back to a sports analogy.
where if I could tell you that you're a coach and you've got a team and you're going for Olympic gold and it's a beta version of something that assesses your form on the court or kind of depends on what sport we're talking about, that like you're pretty keen to give it a shot and see if it can help you juice team performance.
And if you're careful and you mitigate the downside risk, like it could give you a leg up in what is a very competitive environment.
There are a lot of business people, CTOs, like the engineering side of this, the sales leaders who are interested in this, I mean, sales is hyper, hyper competitive.
And so if they can get a leg up, this is just like part of the arms race for sales.
And then support teams are so coaching oriented already.
Like, a support team is generally so focused on rubric adherence and, you know, weekly air checks with their employees to make sure that they're communicating the right way about their new product or using the right tone.
They already have these cultures. And so really, I suppose, one of the things we've gotten
right about this is that when we selected sales, engineering, and support as the areas to build
the first version for, those are already organizations that have a culture of competitiveness and a
culture of looking to find the next incremental advantage for themselves. And then I think it rolls
up to the company culture, where some companies have said they're going to wait this round out,
and some of the more hard-edged or, you know, competitive, tight environments, those guys have said they want to play ball.
I think those are also three disciplines where there's also a lot of coaching.
And so there's products like Gong where I've seen people, like, share calls so that people can learn off of each other.
You know, in customer support,
obviously, there's a lot of training. You know, in code, sometimes people pair program.
So it does also feel like the places where the coaching aspect of what you talked about can become really valuable.
Yes, 100%.
It seems like an obvious trigger for the AI pitchfork crowd to, you know.
Yeah, I will say that, like, I'm thankful for the pitchforkers. Like, I'm thankful for the people who are going to hold us accountable and criticize or critique, you know, the quality of work that we're putting out with this product, because it's really easy to sort of inhale your own exhaust and get excited about the potential without necessarily understanding the full picture. And so when someone comes at us and asks hard questions about bias or asks hard questions about, you know, the unintended consequences of
involving AI in decisions this important, we're going to listen and we're going to learn.
Feedback is a gift, like, it's a real thing.
And so I know that there will be some people who raise an eyebrow at, you know, what we're doing.
And then all I can say is, like, we're really committed to learning from them and making sure that we make this a tool that works for everybody.
Great. Thanks so much for joining us today.
I am really glad you guys let me do it.
Thanks, Matt.
Find us on Twitter at @NoPriorsPod.
Subscribe to our YouTube channel if you want to see our faces. Follow
the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new episode every week.
And sign up for emails or find transcripts for every episode at no-priors.com.