No Priors: Artificial Intelligence | Technology | Startups - From Job Displacement to AI Trainers, Brendan Foody on Work in the AI Age
Episode Date: April 10, 2025. On this episode of No Priors, Sarah and Elad sit down with Brendan Foody, CEO and cofounder of Mercor, to discuss the company’s rapid growth and their vision for the future of the labor market. They dive into how AI is reshaping the workforce in real, tangible ways and what skills are worth investing in today. Brendan shares insights on evaluating talent in an AI-driven world, including how models might identify outlier or 10x candidates and even assess “taste.” The conversation also touches on the evolving role of human data, the future of hiring in fast-scaling startups, and whether AI will act as an individual contributor or a data-centric manager. Show Notes: 0:00 Introduction 0:16 Building Mercor 3:00 Identifying outlier talent with AI 9:07 How AI is reshaping the workforce: job displacement & evolution 11:18 What skills should we invest in now? 12:18 Verifiability 13:36 Evaluating models 16:07 What should kids learn today? 17:05 Evaluating taste in talent assessments 18:45 Future of data collection 26:07 Humans’ role in the AI economy 28:53 AI as a contributor vs. a manager 33:03 Mercor’s goals 34:50 Evolution of labor markets 36:00 Hiring advice
Transcript
Hi, listeners, and welcome to No Priors.
Today, we're chatting with Brendan Foody, co-founder and CEO of Mercor, the company that
recruits people to train AI models.
Mercor was founded in 2023 by three college dropouts and Thiel Fellows.
Since then, they've raised $100 million, surpassed $100 million in revenue run rate, and are
working with the top AI labs.
Today, we're talking about where the data for foundation model training will come from next,
evaluations for state-of-the-art models and the future of labor markets.
Brendan, welcome to No Priors.
Brendan, thanks so much for doing this.
Yeah, thanks for having me.
I'm excited to be here.
So you guys have had a wild last six months or so.
There's huge traction in the company.
Can you just talk a little bit about what Mercor does?
Yeah, so at a high level, we train models that predict how well someone will perform on a job
better than a human can.
So similar to how a human would review a resume, conduct an interview, and decide who to hire.
We automate all of those processes with LLMs, and it's so effective, it's used by all of the top AI labs to hire thousands of people that train the next generation of models.
What are the skills and job descriptions that the labs are looking for right now?
It's really everything that's economically valuable, because reinforcement learning is becoming so effective that once you create evals, the models can learn from them and improve capabilities.
And so for everything that we want LLMs to be good at, we need evals for those things.
And it ranges from consulting to software engineering all the way to hobbyists and video games
and everything that you can imagine under the sun.
And it's really whatever capabilities you're seeing the foundation model companies invest in
or even application layer companies invest in, the evals are upstream of all of that.
And are you also helping companies outside of the core foundation models with a similar
type of hiring or is it mainly just focused on AI models right now? Yeah, so actually when we
started the business, it was totally unrelated to human data. It was just that we saw that
there were phenomenally talented people all around the world that weren't getting opportunities
and we could apply LLMs to make that process of finding them jobs more efficient. And then
we realized after meeting a couple of customers in the market that there was just this huge
vacuum because of the transition in the human data market. And that
the human data market used to be this crowdsourcing problem of how do you get a bunch of
low and medium skilled people that are writing barely grammatically correct sentences for
the early versions of ChatGPT. It was transitioning towards this vetting problem of how do you
find some of the most capable people in the world that can work directly with researchers
to push the frontier of model capabilities. But we've still kept that core DNA of hiring people
for roles, human data, and otherwise. And a lot of our customers hire for both.
Do you think all of hiring eventually moves to these AI systems assessing people, or at least all sort of knowledge work?
I think certainly, because we're already seeing on most of our evals that models are better than human hiring managers at assessing talent, and it's still, like, the very early innings.
And so I think we'll get to a point where it'll almost be irrational to not listen to the model, right?
Where people trust the model's recommendation.
And, like, maybe for legal reasons, we'll still have the human pressing the button and making the final sign-off,
but where we just trust the model's recommendations on who should be doing a given task or job
more than we trust the humans.
I guess in any field people say that there's 10x people.
There's 10x coders who are way more productive than the average coder.
There's 10x physicians or investors or you name it.
Do you see that in terms of the output of your models?
In other words, are you able to identify people who are outliers?
Totally.
This is one of the most fascinating things is that the power law nature of knowledge work
frames the importance of performance prediction.
And imagine if you can understand
the kinds of engineers on an engineering team
that are going to perform in the 90th percentile, right?
Or even if you could say,
I know that this person that costs half as much
is going to perform in the top quartile, right?
It frames, like, how you think about the value
that we create for customers
and how you think about, like, the long-term economics of the business,
and it all ties back to, like,
how do you measure the customer outcomes
and really go on them.
And is it a power law, or what distribution is it?
Because people always talk about human performance as a bell curve.
Do you think that's actually true or do you think that's the wrong way to interpret human
performance relative to knowledge work?
It's very industry by industry, right?
Like for you in investing, right, it's like the most power law thing imaginable.
And where it's just like the top handful of companies each decade are the ones that
matter such a disproportionate amount.
And it's the investors that win in those, versus if you're hiring, like, factory workers, right,
it's a much more commoditized skill set.
There is a lot less of a difference.
And I think like software engineering is somewhere in between.
It's definitely very power law, but I don't think it's as power law as, say, like, the handful of best investors in the world.
Do you have a prediction, either because of the distribution of, like, skill level or the measurability, for where you should expect that models are better at evaluation or identification of talent,
beyond, you know, human data first?
Yeah, so it's really everything that you can measure with text.
The models are really good at.
Like, if you can ask questions in an interview and read through the transcript,
the models are superhuman at that across many more domains than one would think.
Like, it's not, it's more domain agnostic than I would have initially anticipated.
I think the things where models are going to be slower is on the multimodal signals
and understanding, like, how passionate is this person about what they're working on, right?
Like, how persuasive are they or good at sales?
And those capabilities will come, but they'll just take a little bit more time.
So that's my mental model for thinking about it right now.
Right.
So, like, if I'm interviewing a candidate from one of our companies and they are saying the right
words about, you know, motivation level, but I don't believe it.
Like, that might be a next level signal if I have any predictive power here.
Totally, totally, exactly.
The other thing is that the models are way better at high volume processes.
And an example is like, say you're assessing 20 people for the same job and you hire those people, you see how they perform, it's very easy to attribute features of each person's background to how they perform.
Right.
It's sort of the stack ranking where you can understand like this person had this nuance in their interview or this person had this nuance in their resume.
And that was the thing that explained how well they performed on the job versus if those 20 people are performing 20 different jobs,
then it's just this mess of figuring out, like, what is causing what things to happen.
It's way more difficult to understand, like, what features are actually driving signal.
And so I think it'll be those higher volume processes that also get automated first.
Is there anything that surprises you about, like, basically, the discovered features in terms of, I don't know, any domain that you are working on today that identifies amazing talent?
That's a very good question.
Or maybe in engineering, because it's relevant for many of our listeners.
Yeah, I think that one of the really interesting things
for engineering is that there's so much signal
about a lot of the best engineers online
that I don't think people properly tap into.
It's everything ranging from their GitHubs
to the personal projects on their website,
to the blog posts that they wrote during college.
It's just that it's bottlenecked by manual processes.
The hiring managers don't have time to read through all this stuff, right?
they don't have time to, or with designers, they don't have time to consider every proposal
or images from someone's Dribbble profile before doing their top-of-funnel interviews.
And so I think one of the things where people are under-indexing on signal the most is the
things that can be found online. But then a lot of the things that can be indexed on during an
interview, like how passionate is this person? Does this person have the skills that it would require
for the job? I think humans are relatively good at,
or at least they're a little bit more adept right now.
Are there hidden signals for other types of domains where there's less online work?
An example that would be physicians, lawyers, you know, there's a lot of other professions
where...
Yeah, there's all sorts of these hidden signals.
Like one interesting one we've seen in the past is that people who are based internationally
but study abroad in a Western country tend to, like, work much more collaboratively or communicate
better with people and it's like they're the kinds of signals that make
sense when you look backwards and evaluate them, but are hard for like a human without having
full context of like everything happening in the market to really understand and appreciate.
And there's often, like one of the most important things, as you can imagine, is just how
intrinsically motivated and passionate are people about a domain. And so looking for signals of
not just, like, on their resume and in their interviews but also online, of, like, what indicates
this thing, right? Like how do we, and it pertains not just to who you hire, but also what those
people should be working on, right? Imagine the nuance between hiring a biology PhD to work on
like, biology problems versus hiring the person who wrote their thesis on drug discovery to
work on, like, those problems and, like, come up with innovative solutions contextual to their thesis. And
there's just so much inefficiency with the way that we do matching the way we use all of those
signals right now. So you're evaluating people. Are you also doing evaluations of the models
relative to the people? Yeah, yeah, of course.
And then, what is your view in terms of the proportion of people who eventually get displaced by these models?
In other words, if you can tell the relative performance and you can look at relative output,
how do you start thinking about either displacement or augmentation or other aspects like that?
I think displacement in a lot of roles is going to happen very quickly,
and it's going to be very painful and a large political problem.
Like, I think we're going to have a big populist movement around this and all the displacement that's going to happen.
But one of the most important problems in the economy is figuring out how to respond to that, right?
Like, how do we figure out what everyone who's working in customer support or recruiting should be doing in a few years?
How do we reallocate wealth once we have, once we approach superintelligence, especially if the value and gains of that are more of a power law
distribution? And so I spend a lot of time thinking about, like, how that's going to play
out. And I think it's really at the heart of... What do you think happens eventually? X percent
of people get displaced from, like, white-collar work. What do you think they do? I think there's going
to be a lot more of the physical world. I think that there's also going to be a lot of, like,
niche skills. What does the physical world mean? Well, it could be everything ranging from people that
are creating robotics data to people that are waiters at restaurants or are just like
therapists because people want like human interaction, like whatever that looks like.
I think all of, I think that automation in the physical world is going to happen a lot slower
than what's happening in the digital world just because of so many of the like self-reinforcing
gains and a lot of, yeah, self-improvement that can happen in the virtual world, but not
the physical one.
Do you have a point of view on, like, what types of skills, knowledge, reasoning are worth
investing in now as a human expecting to stay economically valuable?
So Sam Altman said this thing, when someone asked him this, about how people should optimize
for just being very versatile and, like, able to learn quickly and change what they do.
And I think that resonates a lot because there's so many things that one would think the models aren't good at,
that they get very good at very fast, that I almost think you just need to be able to navigate that quickly.
What are the characteristics of those things that you think models will learn the fastest?
Like if you were to say, here's a heuristic, what do you think of the components of that?
If it's verifiable, for things like math or code that are verifiable, they will get solved very quickly.
So you want a feedback loop or utility function that the model is optimizing against.
For things that aren't verifiable, like maybe it's your taste as a founder, right?
That's much harder to automate.
And it's also a very sparse signal because, yeah, there's just not that much data on it.
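[Editor's note: a minimal sketch, in Python, of the "verifiable" case being described here: when a task has a ground-truth check, such as a math answer to match or unit tests to pass, the reward signal can be computed automatically, which is what lets reinforcement learning optimize against it. The function names and setup below are illustrative assumptions, not Mercor's or any lab's actual tooling.]

```python
# Illustrative sketch: "verifiable" reward signals that RL can optimize against directly.
import subprocess
import tempfile


def math_reward(model_answer: str, ground_truth: str) -> float:
    """Exact-match grading: trivially verifiable, so the feedback loop is automatic."""
    return 1.0 if model_answer.strip() == ground_truth.strip() else 0.0


def code_reward(candidate_code: str, test_code: str) -> float:
    """Run unit tests against generated code; pass/fail becomes the reward."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n\n" + test_code)
        path = f.name
    result = subprocess.run(["python", path], capture_output=True, timeout=30)
    return 1.0 if result.returncode == 0 else 0.0

# There is no analogous function you can run for "taste as a founder" -- no ground
# truth to check against -- which is why that signal stays sparse and hard to automate.
```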
This is a pretty fundamental research question right now.
But like, what do you think are the most interesting ideas about verifiability beyond code and math?
Well, I think that there's ways that you can have certain auto-graders or, like, criteria that humans can apply,
or that models can apply those criteria,
and I'm very interested in
how that will play out over time
and there's obviously a lot of other
domains where models will take unstructured
data, they'll structure it, they'll figure out how to
verify it and it's very like industry
by industry. I think it's going to be hard
for one lab to do everything
there and there's going
to be more specialization
as we progress further
and further and marginal
gains in each industry become more challenging.
How much do you believe in a generalization from the code and math type reasoning and intelligence?
Like, if I'm this much better at proof math, does it make me funny eventually?
Me being the intelligence.
Yeah, I generally believe in it.
But to a certain extent, like, you still need a reasonable amount of data for the new domain and to kickstart it.
But there's going to be a lot of transfer learning.
I think it's very funny when Sarah does proofs.
So I think it all fits.
She gets at proofs.
Yeah, at all kind of.
I actually think being bad at proofs is funny.
Okay, let's talk about evals because you're, you know, working on the bleeding edge of model capability.
There has been this whole sense of what people call an evaluation crisis, around, like, the models are so good and they're somewhat indistinguishable at the fringe of capability today that we don't know how to test them, you know, ignoring all the issues with people
gaming the benchmarks, right? What do you think, like, what right ideas are there about
evaluating models, especially as they become superhuman? Well, I think one of the most important
things is that a lot of the evals historically have been for, like, zero-shot from a model or, like,
a test question, right? That might be academic. When the thing that we actually need to eval is, like,
what's economically valuable work, right? When a software engineer goes to their job, it's so much
more than writing a PR. It's like coordinating with all of the relevant parties to like understand
what does like the product manager want and how does that fit into, you know, the priorities of
each team and how does that all translate to like the end output of work. And so I think we're
going to see an immense amount of eval creation for like agents. And that is the largest barrier
to automating most knowledge work in the economy. Where should people start? Like that feels not
terribly generalizable. So Sierra has something called tau-bench that I think people are trying in
their other efforts here, but it is perhaps more specific to a certain function. Yeah, I think that
people will need to have these by industry and they should probably start with tasks that are more
homogenous, right? Like it's going to be, for customer support tickets, I think that's a great example
because there's like one interface that the customer support agent interacts with. Maybe they call a
couple of tools like accessing the database or reading through the documentation. But it's a relatively
like homogenous uniform task. I think the things that are going to be more challenging,
but also in many cases more valuable, are creating evals for these, like, very, very diverse
tasks, right? All the things that go into making a good software engineer. That's going to be
really hard to do. Like I think it's going to be a years-long buildout for even some of the
verifiable domains, because there's so much that goes into a good software engineer of, like,
how do they have taste for, like, you know, what is the right way to approach a problem,
or what are the products that people really enjoy using?
And I'm really excited for that.
So if you were to counsel people with young kids, say their child is, I don't know, five to ten.
Yeah.
Should their kids learn computer science?
I would probably not push them towards teaching their kids computer science, but I'm not totally against it.
I think that the key thing is, I would encourage them to just, like,
find something that's intellectually stimulating, that they're really passionate about, where they can learn general reasoning capabilities.
And those reasoning capabilities will probably be very valuable and cross-applicable.
I always love building companies growing up and like hustling and doing small things like that.
And I think that is something that could be helpful.
But I am skeptical that like the really valuable thing is just people who can code in five years.
I think it's much more likely, like, the people that have these contrarian
ideas around what's missing in markets and have the taste of what, like, features and nuances
need to go into solving that problem.
You said taste a few times.
Are there signals of taste that you feel like you can discover in any domain?
Yeah, absolutely.
I mean, I think that oftentimes you just want to see the softer signals of how people think
about certain problems, and certain people have intuitions, whether it be like the way they
approach a problem or if they're looking at different like products, how they notice nuances.
Yeah, it's very industry by industry. It's very contextual to the industry, but it's important to measure.
How can you score it? Like, what's the positive feedback loop here?
We've done a variety of things, but oftentimes we will give people like a problem that as closely
as possible mirrors what they would solve on the job. And then we would see how they compare
to other people. And so that helps with scoring it.
Do you ask them for the thought process as part of that?
I know, for example, it's almost like looking at like code reviews or other sort of intermediate work along the way relative to something.
We definitely do.
One thing I've realized about talent assessment is that a lot of people focus too much on the proxy for what they care about rather than the thing they actually care about.
And so ideally, you want to measure the thing that you actually care about.
So if it's that person building an MVP of the product, ideally you have an interview that's like a scope down version of doing that.
The place where you need to use proxies is when it's, like, a longer-horizon
task, where you just want to structure the proxy to get as much signal as possible.
And so that's sort of how I think about talent assessment.
Yeah.
Can I ask this scale of impact question?
So if I think about the very largest employers today, it's, call it, like, low
single-digit millions of employees.
Yeah.
Right?
Or I don't know anything about contractors and Amazon workers and such.
But how many people do you think like will end up doing data collection?
I think it's a huge volume.
I think the reason is that it all comes
down to creating evals for everything in the economy.
I think part of that will be current employees of businesses
that are creating evals for that business
so that those agents can learn what good looks like.
Part of that will be hiring out contractors
through a marketplace to help build out those evals.
But it would not surprise me if that becomes
the most common knowledge work job in the world.
How long does that last?
So effectively, people are being brought on to displace themselves.
This is true.
Is that a six-month cycle?
Is it a two-year cycle?
Like, what is the length of time at which people have relevancy relative to some of these tasks?
There's always, like, a frontier.
So I think the...
Unless it becomes superhuman, right?
Yeah, unless it becomes superhuman.
Yeah, yeah.
It's almost like time to superhuman.
But I had an interesting conversation, which is that, like, you don't even know that you have superintelligence without having evals for everything.
Because it's like you sort of need to understand what is the human baseline and, like, what is good.
It's, like, grounded in this, like, understanding of human behavior.
Yeah, a friend of mine basically believes
in, you know, the Nyquist theorem, which is basically, if you're sampling a signal, like,
you need to be able to sample it at twice the frequency in order to be able to actually extrapolate
what it is. Otherwise, you're not sampling richly enough to know. And so he views that,
that there's some version of that for intelligence. Like, you can tell if somebody's smarter than
you, but you don't know how much smarter because you aren't capable of sampling rapidly enough
to understand it. I mean, so I always wonder about that in the context of superintelligence or
superhuman capabilities in terms of how smart can you actually be since it's hard to bootstrap
into the eval?
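[Editor's note: the result the friend's analogy leans on is the Nyquist-Shannon sampling theorem. A standard statement, for reference:]

```latex
% Nyquist-Shannon sampling theorem: if a signal x(t) contains no frequency
% components above B hertz, it is completely determined by samples taken at a
% rate f_s > 2B, and can be reconstructed exactly from them:
x(t) = \sum_{n=-\infty}^{\infty} x\!\left(\tfrac{n}{f_s}\right)\,
       \operatorname{sinc}\!\left(f_s t - n\right), \qquad f_s > 2B .
% Sampling below 2B loses information (aliasing) -- the rough intuition behind
% "you can tell someone is smarter than you, but not how much smarter."
```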
Well, so I think like when you take it to the limit and you have superintelligence,
what you're saying makes a lot of sense.
But another way I think about it is that if we classify knowledge work in two categories,
one is like solving an end task where it's sort of a variable cost of like you need to do
that repeatedly.
And the other is creating an eval to teach a model how to solve that task, which is like
a fixed cost that you do one time.
It does seem structurally more efficient for work to trend away from the variable cost
of, like, doing it repeatedly towards this fixed cost
of how do we build out the evals and the processes for models to do this themselves.
That said, it all comes down to, like, how fast are we approaching superintelligence, right?
Like, if the models are just, like, getting that good, that fast, then sure, I don't think
we would need humans creating evals very much, but I also then don't think we would need humans
in many other parts of the economy.
And so you sort of need to be thoughtful about the ratio of that.
Does that create an asymptote in terms of how good these things get, or do they start
creating their own evils over time?
I think that they'll play a role in creating their own evals, where they like...
Yeah, where they might come up with certain criteria for what a good response looks like
and humans validate that criteria.
However, I think you often need to ground this in, like, the experts in that particular domain.
Sure.
But I'm just thinking of like Med Palm or something, right?
Yeah.
Med Palm 2, where the output of the model was better than the average physician.
It was basically, like, a health model that Google built.
Yeah.
And they would use physician panels to rate outputs of the model versus individual physicians
and the model did better by far than individual physicians.
At some point, it should do better than the physician panels where feedback from the physician
panel should make the model worse, right?
In other words, if you just RLHF'd off of individual physicians, the model already was going
to get worse.
And so there's a little bit of this question of how much, when does human scoring create
worse outcomes because the humans aren't as good at a task?
Well, I think the models will be able to delineate
between the valuable human knowledge
and the human knowledge that's not valuable.
And that maybe you have doctors that create
like a bunch of evals for this particular task
and the model realizes like, wow, like I see the mistake
that the doctor made on these particular tasks,
but I'm going to ignore them.
And like, here are the things that seem insightful
or the things that I can learn.
And the models will, yeah, use that data and value
that data immensely.
The other thing I'll say is that I think it's easy
to look at these evals
and the rate of improvement on the evals
and just think we're a lot closer
to superintelligence than we are.
But the truth of the matter is, like,
there is a lot between being really good at SWE-bench
and replacing software engineering, right?
There's like all the coordination problems
we talked about.
There's like so much else that goes into that.
And I think that we're just going to need a lot of evals
for tool use.
We're going to need a lot of evals for agents.
And that build-out is going to take a lot longer
than, you know, a couple-year time horizon.
How do you think about incentives
for all of these like expert knowledge workers?
because the opportunity cost for a great software engineer with, like, taste and architectural understanding is a great job at Mercor or another, you know, interesting tech company, versus some of the geo-arbitrage on just basic knowledge work, which does not exist, you know, as the skill level increases over time.
That's true in coding. It's true for physicians. It's true for finance people. Lots of areas where you might want evals and labels.
Totally. I think that it'll definitely become more power law over time, which means that, like, the best people are going to, of course, make an incredible amount of money.
So you think it's more just turn up the dial on, like, what any piece of information is worth from the higher skilled workers?
Yeah, yeah. But you also want, like, the evals at the frontier of what the models can't do. And so it might be that for, like, a very well-scoped problem, like answering a medical question that someone has, you
might need to get, like, you know, the world-class doctor that is, like, one of the
handful of people that's able to be better than the model at that very well-scoped problem. But for, like,
the broader agentic problem of, like, how do we, you know, talk about this case in a way that the,
like, patient is receptive to, how do we then, like, you know, coordinate with this set of tools to,
you know, help to complete the diagnosis and, you know, send whatever emails at
X time. Like, I think for those kinds of things, I still expect that the, like, bulk of the
bell curve, people that are closer to the, like, mean of the distribution, will be able to
contribute for a longer period of time. What do you think is the biggest shift that nobody's
really anticipating that's coming? It could be domain specific. It could be broader.
Well, so maybe I'll answer this in two parts. Because, like, when I think about
nobody, it feels like the bulk of the country has not really come
to grips with how fast jobs will be displaced. And that just feels like a big problem, as I said
before. And I think that we need to stay very proactive as like as a government, as an economy,
etc. Are there certain areas where we're already seeing large-scale job displacement that you don't
think is being reported on? It's definitely being reported on in customer support, in recruiting.
I think one of the challenges is that a lot of this happens at economic contractions when
people get more efficient, get more focused on bottom line. And so I think that a lot of
it hasn't happened yet, but it's going to happen imminently.
And then in terms of things that, like, maybe no one even in, like, San Francisco is thinking
about, which is another interesting part of that problem, is that these agentic evals for
non-verifiable domains are under-indexed on significantly.
Another thing is that people in San Francisco have a tendency to, like, not think critically
about the role humans will play in the economy because they're so focused on, like,
automating humans. And so I think that it's important to, like, think more about that problem.
Like, one thing that I've thought about is that ideally models should help us to figure that
out over time, right? Like, what are the things that people are passionate about? What motivates them?
And maybe it doesn't need to be an economically valuable thing. Maybe it's just, like, a certain
kind of project that they're, like, working on. And I think that people aren't indexed enough on how
humans will fit into the economy in 10 years.
You know, one thing that I feel that I've really,
I really misunderstood or didn't quite understand the scope of,
was the degree to which we effectively had different forms of UBI,
or universal basic income, in different sectors of the economy.
Government is a clear example where there's enormous waste,
fraud, grift, et cetera, happening.
Yeah.
Parts of academia, if you just look at the growth of the bureaucracy
relative to the actual student body or faculty,
big tech, if you look at some of the
sizes, you know, basically, a lot of these things were effectively UBI.
And so to some extent, one could argue that parts of our economy are already experiencing
what you're saying in terms of there's high-paying jobs that may or may not be super
productive on a relative basis.
And so the question is, is that something that we actually embrace as a society, given
some of these changes in displacement?
And if so, where does that economic surplus come from?
Yeah, it's interesting.
I think that as we have better analytics around the value of employees, it seems intuitive
that these companies will, you know, start doing more layoffs, more cuts, et cetera.
Do you think those evals become illegal at some point?
Because it feels like that happened a little bit with certain aspects of merit or merit-based
testing for different disciplines or fields.
That happened with the government in the 70s, where they were removed as criteria.
I'm just wondering if that becomes something that more generally people may not want to adopt
because it exposes things, or do you think it's something that is inevitable economically?
There's definitely going to be pushback, but I think it's inevitable economically
because it's hard to regulate and just like so strongly valuable to companies that they'll move towards it.
I think it depends on what segments of the economy, because some of these are not economically driven already.
They're just not efficient as sectors.
But if you look at healthcare and education, everybody's seen this chart that shows a bunch of industries that have some measure of output per dollar spent.
And you have increasing spend on health care and education and no improved output.
Yeah.
And like that's happened for a long time when there's increase in productivity in many other sectors.
And the answer is there's no economic pressure, actually.
Sure, it's regulated versus unregulated sectors, effectively.
And the regulation is what causes the divorce from economics.
Yeah, also one thing that I think is very interesting is that a lot of people are in the
mindset of AI being really good as an independent contributor when actually it may soon
become much better at being a manager, right?
In like taking a large problem, breaking it down, figuring out how to performance manage people
for what they should be doing.
And this ties into your point around, like, what should we do with all of those unproductive employees?
Because if we have, like, a ruthlessly rational agent that is making the decision there, it is probably going to be very different than a lot of the decisions that have been made historically.
One of our companies asked recently what I would expect an assistant to do that it doesn't do today, right?
And I think the biggest thing is, like, you know, if I give it enough context and some objectives that I'm trying to achieve, I'm not like a particularly organized person.
I have a lot of output, I think, all things relative, but, you know, is it, like, perfectly
prioritized and tasked out and sequenced so I'm not bottlenecked on a particular thing?
No, right?
And I would absolutely expect that the assistant can do that for me.
Well, and it goes to the point earlier, right?
Just tell me.
Tell me what to do for the next three minutes.
We have these models that are, like, incredibly good at math, right?
Like, you give them a test and they can ace the test, but they still can't do, like,
basic personal assistant work, right?
And I think it goes to show that there's still a lot of, like, research and product to
be built out. And, like, how do we actually bridge the gap with what's economically valuable
to complete that end-to-end job that, like, you're willing to pay a human salary for?
Do you think the models are good enough for that? There's just incremental engineering work to make
it better? Or do you think it's, okay, so we actually have model capabilities that you think
would allow us to build certain types of agentic systems versus we need, like...
That are proactive, too.
Or actually, maybe, let me put it this way. I think with a small amount of evals for agents
in various categories, the base model has, like, all the reasoning capabilities.
And the reason you still need those, like, evals is the models need to understand, like,
when they should be using tools in certain ways.
They need to understand, like, how to synthesize information from those tools.
But it's not a reasoning problem.
It's, like, much more this problem of, like, learning each company's knowledge base and, like,
what good looks like in that role.
And so there is going to be some, like, post-training, and I'm very bullish on RFT and everything
that's going to mean.
Can you say more about RFT and explain it for our audience?
Yeah, so basically everyone used to talk about fine-tuning in the context of SFT, supervised fine-tuning,
where you would have inputs and outputs for a model, and the model would learn from those input-output pairs.
But the main issue, and the reason supervised fine-tuning customization never really took off, is that it wasn't very data-efficient.
Like, companies would create a few hundred and eventually try to scale up to tens of thousands or hundreds of thousands of
SFT pairs, but oftentimes wouldn't be able to get a lot of the capabilities that they were looking
for. Whereas in reinforcement fine-tuning, you instead define the outcome that you care about.
So in Sierra's case, like I was talking with them about how they define what like a good customer
support response would look like. In our case, we define, like, what are the key things that
you should identify as a characteristic of this candidate, whether it be that they're passionate
during their interview, they demonstrate XYZ domain knowledge or they worked on this side project
that demonstrated that skill, and then you reward the model for identifying that. So you set the
solution, and then the model can learn in that environment how to get really good at it. And the reason
I'm so optimistic about it taking off is that it's like profoundly data efficient, right? And it
finally makes sense to customize models at the application layer. And profoundly data efficient is actually
like hundreds to thousands of examples, like some tenable number for
an enterprise or a, you know, medium-sized business to think about versus, like, I don't know,
a billion tokens.
Yeah.
Yeah.
Yeah.
Exactly.
And so it'll be very cool.
I think we're going to have these agents that fill all roles that employees currently fill,
working alongside employees.
Human employees will help create the evals.
I also think that, like, contractors in our marketplace will play a large role in that.
It will just be this, like, huge build-out of evals to, you know, create custom
agents across every enterprise.
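[Editor's note: a rough sketch of the rubric-style grading described above for reinforcement fine-tuning: instead of supplying input/output pairs as in SFT, you define the criteria you care about and reward the model for satisfying them. The rubric, names, and scoring below are illustrative assumptions, not Mercor's actual criteria or any specific lab's RFT API.]

```python
# Illustrative sketch of a rubric-based grader for reinforcement fine-tuning (RFT).
# Assumption: the model produces a structured candidate assessment; a grader scores
# it against customer-defined criteria, and that score is used as the RL reward.
from dataclasses import dataclass


@dataclass
class Criterion:
    name: str      # what the model's assessment should identify
    weight: float  # relative importance in the reward


RUBRIC = [
    Criterion("cites evidence of the required domain knowledge", 0.4),
    Criterion("flags a relevant side project that demonstrates the skill", 0.3),
    Criterion("notes signals of intrinsic motivation in the interview transcript", 0.3),
]


def grade(assessment_satisfies: dict[str, bool]) -> float:
    """Weighted fraction of rubric criteria the model's assessment satisfied.

    `assessment_satisfies` maps criterion name -> whether the output met it
    (as judged by an auto-grader or a reviewer). Returns a reward in [0, 1].
    """
    total = sum(c.weight for c in RUBRIC)
    earned = sum(c.weight for c in RUBRIC if assessment_satisfies.get(c.name, False))
    return earned / total

# The data-efficiency claim: a few hundred to a few thousand graded examples like
# this, rather than the tens of thousands of input/output pairs SFT customization needed.
```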
What is most important for Mercor to get done in the next year or so?
So there's two things that we focus on as a business.
I think those will be most important for this year as well as for the next five years.
The first is how do we get all the smartest people in the world on the platform?
And that ties into the supply side of our marketplace, the marketplace network effects,
similar to, like, an Uber or Airbnb, because if we have the best candidates and we're able to
give them job opportunities and understand what they're looking for. The second thing is predicting
job performance. Are you trying to offer anything that isn't comp? Yeah, we are. So one of the
things that we realized is that the average labor marketplace has a 50 to one ratio of supply side
relative to demand side, which means the average person that applies, talks to their friend who
also applied, and neither of them got jobs. And it's almost just this structural part of building labor
marketplaces. The way to actually scale up the labor marketplace to have hundreds of millions of the
smartest people in the world on the platform is to build all of these free tools, such as AI
mock interviews, AI career advice, you know, shareable profiles for people, all of the things that
just create the most magical experience possible for consumers and give that away for free
because it's powered by this monetization engine on the other side of the business. And so that's
a very significant focus for us. I interrupted you. You were going to talk about what else was important.
It's performance predictions. So we get all the data back from our customers of who's doing well for
what reasons. And, you know, how can we learn from all of those insights to make better predictions
around who we should be hiring in the future? And that's the data flywheel that you would find
in, you know, many of the most prominent companies in the world. And I think that the marketplace
network effect is the more obvious one when you look at the business. But I actually believe
that the data flywheel will become more important over time based on a lot of the initial
results that we're seeing. How do you view the labor markets evolving over the very long term?
Well, I think that the largest inefficiency in the labor market is fragmentation, and that a candidate, wherever they are in the world, will apply to a dozen jobs, and a company in San Francisco will consider a fraction of a percent of people in the world, because it's all constrained by these manual processes for matching, right, where they need to manually review every resume, conduct every interview, and decide who to hire. When you're able to solve this matching problem at the cost of software, it makes way for a global unified labor market.
that every candidate applies to and every company hires from.
And I believe that that's not only the largest economic opportunity in the world,
but also the most impactful one.
Insofar as you can find everyone the job that they're going to be passionate about and successful in.
But does that include AI agents?
In other words, the marketplace would be a hybrid of people and agents all competing for labor globally.
I think so because customers ultimately come with like a problem to be solved, right?
And ideally, it's some coordination of how those two fit together.
Given you spend all your time thinking about how to attract high-skilled candidates and determine their effectiveness,
like what advice would you have for people who are hiring and startups and scaling companies?
Early on, it's hard to overstate the importance of talent density.
And just like, there's always a tradeoff between hiring speed and hiring quality.
And you should just, for those early employees, like, always index on quality.
Like you need to be patient and you need to make sure that people are extremely high caliber.
When you're scaling up an org, you obviously don't want to drop those standards, but people need to be a lot more data driven around what are the characteristics of people that actually drive the outcomes they care about.
And it feels like where a lot of the problems happen is when that slips, when it's sort of like this vibes-based assessment that doesn't scale very well, where each hiring manager is doing it in a fragmented way,
and it's hard to enforce those standards across the board.
And so just being very disciplined around, like, what are your hiring goals?
What are the characteristics of people that you know are actually going to achieve the business outcomes you care about?
And how do you measure those things is really important.
I find that almost every great company either hires well, like what you're talking about, or fires well,
which is sort of your phase two.
But I think often they do one of those things really well early.
For some reason, most people don't seem to get both right early on.
I don't know why it is.
I think it's almost like a founder bias or something like that.
And then I feel like over time, hopefully they pivot into both.
Google was a good example of an organization that would always hire well, but couldn't fire well.
It took them a really long time to clean people out, like, literally years.
Interesting.
Facebook, on the other hand, was kind of known for a more mixed early talent pool,
but they were very good at removing early people who weren't performing.
So I always thought that was kind of an interesting dichotomy between the two.
And those were the rumors in the valley when each company was, you know,
tens or low hundreds of people.
You know, now obviously they're all very professionalized in terms of how they do both.
And have their UBI.
Yeah, exactly.
Yeah.
So that's how that was kind of interesting.
Yeah, I think it's, like, just because I mostly think about, like, engineering hiring and
go-to-market hiring and investor hiring, they're all professions that have, like, some
time scale of outcomes that isn't like an hour, right?
And so I think you're always looking for proxies of outcomes for these, like, longer-outcome
jobs.
And I think there's like a really interesting question very related to evals and assessment of like,
well, what are the proxies we're going to discover
for each of these roles? Because I think it's a huge shortcut in hiring, hiring well, not necessarily
firing well. If you can do references, if you can do work trials with engineers, like, you actually
know a lot in the first five days, 30 days of whether or not something's going to work out.
Totally.
And like, you know, I think we're always, I'm always looking for proxies for that.
Yeah. And I think one of the, like, crazy things about the market is that any candidate that
you do a work trial with has probably done work trials with like a lot of other top companies
in San Francisco, but you don't have any of the data on that, right? And obviously there's like
some interesting data like privacy and centralization questions of like companies want that to
be their proprietary knowledge. But I think that market is going to trend towards becoming a lot
more efficient over time or even the references of people, right? Of like those that you don't hire,
theoretically it's beneficial for the top companies to understand the reasons that, you know,
other companies in different markets aren't hiring specific candidates, et cetera?
What do you think companies that attempted some sort of like common generic evaluation,
like the hires of the world in a previous generation, like, got wrong, right?
Because like the theory of like, well, we should have a common application of some kind
or shared assessment has existed but not worked at scale or worked at quality?
I think that LinkedIn centralizes and aggregates the very first layer of the application process
of like, what are the things that this person has done and, like, who are they connected to?
The challenge historically has been that the rest of the process to facilitate a transaction
has not been possible to aggregate and automate.
It wasn't possible to, like, actually record all of these interviews and, like, scalably conduct
interviews of everyone.
It wasn't possible to, like, you know, get all of this, like, data and analyze it properly
on, like, what are the things that go into causing someone to perform well?
And so I think there's just this, like, huge why now that's enabled
by LMs becoming so capable so quickly.
That makes sense.
I think one of the theories that my partner Mike has is around, like, the scalability of
LLMs being able to interrogate humans, of, like, the usefulness of that data in a bunch of
different domains.
Yeah.
And it would be great to see the aggregate of that for hiring.
So my co-founders and I are all Thiel Fellows.
And so we're very passionate about how we could apply LLMs to help identify, like, the next
Thiel Fellows.
And so, like, I often wonder, like, imagine if you could have Peter Thiel,
as a heuristic, interview everyone in the world when they're 18, right? And, like, and maybe
he could go through and, like, meticulously spend time determining, like, you know, who is
actually going to be good at what job. Like, I think we're approaching that world very quickly.
It'll be fun to see how that impacts the labor market, the investing market, and everything
else. That's really cool. Thanks for doing this, Brendan. Yeah, it's awesome. Thanks for having me.
Find us on Twitter at No Priors Pod. Subscribe to our YouTube channel if you want to see our
faces, follow the show on Apple Podcasts, Spotify, or wherever you listen. That way you get a new
episode every week. And sign up for emails or find transcripts for every episode at no-priors.com.