The Changelog: Software Development, Open Source - Principles for hiring engineers (Interview)
Episode Date: February 8, 2022
This week we’re joined by Jacob Kaplan-Moss and we're talking about his extensive writing on work sample tests. These tests are an exercise, a simulation, or a small slice of real day-to-day work that candidates will perform as part of their job. Over the years, as an engineering leader, Jacob has become a practicing expert in effectively hiring engineers — today he shares a wealth of knowledge on the subject.
Transcript
All right, welcome back.
This is the Changelog.
Thank you for tuning in.
My name is Adam Stachowiak.
If this is your first time here, thank you so much for tuning in.
If you haven't yet, subscribe at changelog.fm.
And if this is your fifth, sixth, tenth, hundredth, I don't know.
Thank you for tuning in all these times.
As you may know,
we have a membership at changelog.com slash plus plus. Check that out. And today, Jared and I talk
to Jacob Kaplan Moss, and we're talking about his extensive writing on work sample tests.
These tests are an exercise, a simulation, or a slice of real day-to-day work that candidates
will perform as part of their job.
And over the years, Jacob has become a practicing expert in effective hiring as an engineering leader.
And today, he shares a wealth of knowledge on that front.
And big thanks to our friends and our partners at Fastly.
Our pods are fast to download worldwide. And that is all because of Fastly and their fully configurable CDN.
Check them out at Fastly.com
This episode is brought to you by our friends
at Square. Millions of
Square sellers use the Square app marketplace
to discover and install apps
they rely on daily to run their businesses
and the way you get your app there is by becoming a Square app partner.
Let me tell you how this works.
As a Square app partner, you can offer and monetize your apps directly to millions of Square sellers in the app marketplace.
You can leverage the Square platform to build robust e-commerce websites,
smart payment integrations, and custom solutions for millions of businesses.
And here's the best part.
You get to keep 100% of revenue while you grow.
Square collects a 0% cut from your sales for the first year or your first 100 Square referred sellers.
That way you can focus on building and growing your Square customer base.
And you get to set your own pricing models.
You also get a ton of support from Square.
You get access to Square's technical team using Slack.
You get insights into the performance of your app on the app marketplace.
And, of course, you get direct access to new product launches.
And all this begins at changelog.com slash square.
Again, changelog.com slash square.
We are joined by Jacob Kaplan-Moss, who's a software developer, one of the co-creators of Django, and an engineering leader.
Welcome to the show.
Hey, thanks for having me.
Happy to have you.
I didn't expect that when we had you on the Changelog, we would have you on not to talk about Django.
But that's the way it turned out. We have listeners who have been asking for another Django episode.
And to those listeners, I say, we're working on something, we have an idea, and stay tuned for that.
But this is not that episode, because you've been focused on other things as of late, and
you've been writing quite a bit, 10 posts in fact, a large mini-series on your blog
all about work sample tests.
So that's what we're here to talk about today.
Jacob, maybe share your role,
what you're up to, your background, why the hiring process is something that you've been involved in,
and why it's these work sample tests in particular that you've been doing a lot of deep thinking on.
Sure, yeah. So these days I work at a security consultancy called Latacora. We mostly work with startups, well under 100 employees, usually more like 25 to 50, helping them get their security house in order at that early stage.
So as you can imagine, hiring there is something we help out with a fair amount. The reason I've been thinking about hiring a lot lately is that it's been a running theme of my career. Maybe for at least
the last 10 years, I've been in some sort of hiring position, at times doing quite a bit of
it. At a previous job, I was working at 18F, which is a part of the federal government. And I rebuilt
our entire hiring process there and was probably responsible in some sense for hiring dozens,
if not over 100 engineers.
The hiring manuals I wrote there are still being used, even several years after I left.
So when I made a goal of trying to write more, starting about a year ago,
I knew I wanted to focus on management, engineering management in general,
and fairly quickly,
you know, a theme of hiring emerged. There's just a lot to say and a lot of bad practices and
mistakes like throughout the industry, most of which I've made myself. So every time I think,
you know, okay, I'm done with this series on hiring. I'm not going to write about hiring
again for a while. There's another thread to pull that
leads to another area to think about. There's just a lot there. The other thing I'll say about
hiring and why I find it so important from a management context is it's probably the highest
leverage activity that a manager will engage in. And by that I mean, you know, if you hire well, that person could be on your team for two years, four years, 10 years and could do, you know, a huge amount of work for the company.
And so the investment in time in making sure you make a good hire pays huge dividends.
On the other hand, if you screw it up and you hire someone who is incompetent or outright toxic, you can just absolutely destroy a team. So when I think about being an effective manager,
if all you're good at is hiring well, you're probably better than
80% of managers in the field.
Picking winners and losers, in basic terms, right?
Yeah, I guess so.
Hiring in the tech industry is notoriously bad, or the process is fraught.
You know, there's so many Twitter threads of people who've gone through bad hiring processes
or things that have failed or why this company is bad, why that company is bad.
It's obviously a difficult thing to do well, as you've just stated.
But why do you think it's so hard?
Where do you think the core of the problem is? Is it software companies specifically that struggle, or is this a system-wide thing?
What are your thoughts on if you could whittle it down to what's wrong with it?
I don't think it's just software engineers. I don't really know because this is my industry
and I don't know other ones as well. But I do think there are some things that are distinctive to our industry that make it more difficult. The first part, if you think about it,
interviewing is weird, right? You want to hire someone to write software. So you get on a Zoom
call and ask them some questions about writing software, but that's not actually writing
software. And, you know, we all know people who can talk a great game, but can't actually deliver.
We don't really have good mechanisms for doing, I don't know, tryouts.
Can you imagine if like, I'm a basketball fan, so I want to talk about basketball.
Can you imagine if the Warriors, instead of having someone come work out with the team,
they called them over Zoom and said like, hey, how good are you at shooting free throws? Oh, you're good? Awesome. You should join our team.
But that's kind of what we have to do. The real problem is that when you join a new company to
write software, it can take weeks or even months to become fully productive with the tooling.
You got to learn all the existing code. There's probably some tool you haven't experienced yet.
The CI system was all weird,
and you've got to figure out how to interact with that. It might even take you a week just to get
credentials for your Git repository. So we can't just say, hey, come shoot some free throws for us.
Hey, come write some code for us, at least not directly. And that's the whole theme of the work
sample test thing. And I think we'll get back to that. The other thing I think we have going on within tech
particularly is as an industry, we love to think that we're special somehow. And we so rarely look
at practices from outside the industry. We just sort of go like, oh, no, no, no, software is
different. And we ignore hundreds of years of management theory
because we get computers
and that makes us different.
I can't even finish that sentence.
I don't know what the theory of the difference is.
There's a real disdain for looking outside
the tech industry for patterns or common solutions.
That may have to do with our ethos around disruption,
innovation, creation.
These are things that we strive for.
We're trying to change and do things differently
and think different, as they say at Apple back in the day.
Maybe we just apply that to too many domains
and say, nah, we're going to reinvent this
because we reinvent things.
And maybe that's part of it, at least.
I think a lot in terms of the concept of innovation tokens,
I'm blanking on who came up with this term,
but he gave a talk titled Choose Boring Technology,
which is all about choosing tried and true
and boring technology to base your products on. And, you know, he makes the argument that you only have a certain number of quote-unquote innovation tokens, areas where you can be disruptive and be innovative. And if you try to be disruptive everywhere, you're just going to be, you know, like a chicken with its head cut off. And I think that's true, I think that is totally true. There is a real upside, a good side, to this we're-different ethos, which is just what you described: questioning the status quo and being willing to be disruptive. And absolutely,
we have, you know, technology now that enables us to do things that were just wild a few years ago,
but like, we're bad at recognizing, okay, hiring is not actually one of those places.
Like, why are we trying to be like, this is not an area to be special. This is an area to do the
boring thing.
You'd mentioned your history at 18F, and then obviously where you're at now, and gravitating towards writing these hiring manuals and, you know, the documentation, obviously,
from the Django project,
you've got lots of experience writing the documentation around that as well, I'm sure.
So it's sort of maybe in your blood, but what attracted you to solve this problem or at least
make the mistakes and document how you would not make the mistakes again and potentially
share that with others? Like what, what drew you to, I guess, care so deeply?
Yeah, I mean, it's just what you said. This is the manual to hiring that I wish I had, that would have prevented me from making a lot of the mistakes that I look back on and cringe. You know, I hired someone once who was so bad that the rest of my
team almost quit over it. And like, luckily I was able to do something about it before I lost, you know, five other people because of this bad hire.
But, you know, that one that one hurts.
And and, you know, I'm responsible for a period of time when like everyone on my team hated coming to work and hated their jobs.
And that sucks.
And I don't want to go through that again.
And I certainly don't want anyone else to go through it again.
And so, yeah, when I care about something, I write about it. It's the format I'm most comfortable, you know, expressing myself in. So it's the way that I know how to make an argument and, you know, lay something out. And yeah, so those things came together into kind of the work I've been doing over the last year.
So let's get a definition here on the table for everybody.
Work sample tests.
This was a term that was new to me.
And in fact, at first I thought it was about sampling a part of your software and testing that.
So I thought it was a testing series.
But it's a hiring series.
And so you talk about work sample tests.
Here's a bit from your intro post. You say
that it's clear that actually working together is the gold standard when you're hiring somebody.
What if we found a way to do that in a compact and timely fashion? That's exactly the line of
thinking that leads to work sample tests. So what are they? They are an exercise, a simulation,
a small slice of real day-to-day work that we ask candidates to perform.
They're practical, hands-on, and very close or even identical to actual tasks
the person would perform if hired.
They're also small, constrained, and simplified enough to be fair
to include in a job selection process.
You say work sample tests are a critical factor in effective hiring.
Interviews are not enough.
Hiring without work sample tests risks selecting people who excel at interviewing but can't actually perform the job.
So in there is your opinion about interviewing not being enough. And it seems like the reason
is what you state there is that like, well, now you're hiring for who's good at interviewing,
which is not actually the point. Is that what you're trying to say with interviews not being enough?
Yeah, exactly. Real quick, the term work sample
test, you're right, it's not a super common one within the software industry. And I cast around
for the right term to use when I was writing the series. It is common within sort of like HR and
more old school management stuff. So I chose it over practical exercise or coding homework or other
terms that I considered because if you Google for any of those things, you find a bunch of
random stuff. Coding homework really doesn't work to find discussions of this sort of thing.
Whereas if you Google for work sample tests, you do find a lot of discussion about this from sort of an HR and a management point of view. So it's not the best term, but it's the best one
I could come up with to get me moving on this post. So yeah, to your question, yeah, the thing
is that interview processes are always to some degree a proxy, right? We're doing this exercise
where we want to know if someone is going to be good at,
I don't know, building a Django app, right? We have a job requirement. We know that we need
someone to come in and write the backend to our web application, write a REST API. We know what
tools we want them to use, and we know what professional skills they need. We know they
need to be able to be compassionate in code reviews, and we know that there's going to be a mentorship aspect to the job.
They're a senior person, so there's going to be more of that.
We have this list of things, job duties, we know someone needs to perform.
So we are trying to design a selection process where we measure something that we hope is correlated to those workplace behaviors. We can't just measure the
thing we want directly. You know, some of the reasons I've already talked about, it's too long
to get up to speed. It's too long to measure those behaviors. It would be unreasonable to ask someone
to, you know, really contribute to our code in a substantial way,
potentially even illegal to have them do work for us for free.
So whatever we do, any interview process,
any selection process we design is,
we're not measuring the thing we care about.
We're measuring something else,
and we're trying to correlate it to the thing we care about.
And interviews can do some of that.
I think I'm
pretty good at using interview questions to suss out professional skills like communication and
conflict resolution and that sort of stuff. But conversational interviews really just can't measure
technical skills. They don't measure software development. They don't measure ability to
produce code. They don't measure familiarity with a key piece of technology
that's super important. I can't think of any other way to tell if someone's a good Python developer
than to ask them to write Python and look at their code. Nothing else correlates better than the real thing.
There's so much that goes into being good, in particular at software development. Some would
say that you do most of your programming before you even code, where you think about
the system.
You think about, you know, the design, and you have some sort of domain knowledge, which can play into being better at it, and which comes from experience within actual programming or within a specific team or a new domain.
Like if you've never, you know, sure, maybe you've played with Python, but maybe you haven't
really done a web app with Django before.
Maybe you've done some CLI tools or, you know, some scripts or something like that, but you haven't really, you know, done things with like maybe an API or something like that.
Like, so you're, you're dancing in new waters.
It's super challenging to, I would say, test the skill set.
Maybe what you're really testing for is the potential of a skill set, right? Because you don't actually want them
to necessarily have the exact skills, you want them to be able to gain them
alongside with the rest of the team. Because not all the team began with domain knowledge, began
with all the skills. They had to start somewhere, so it's the potential of having the skills.
Yeah, totally. And that's, you know, it's going to depend on the
job, of course.
Like there are going to be some roles that will have non-negotiable experience that needs
to be brought to the table.
If I'm a non-technical founder and I'm looking for my technical co-founder, I need someone
who can build an app.
I mean, I don't care about the language, but I need someone who can do something.
Right.
In those cases, it's paramount for the skills to be there.
Yeah, or I don't know, if I'm hiring an infrastructure engineer
and we use AWS and I need them to be productive reasonably quickly,
pre-existing experience with AWS might be non-negotiable.
But for most positions, yeah.
A number of times I've hired
people for Python development roles who have backgrounds in Ruby, Go, PHP, Perl, and are just
willing to switch to Python. And in those cases, a work sample test is going to tell me how big
the gap is. They're going to give me a realistic picture. You know, are we looking at, you know, a month for this person to get to where I need them to be, or are we looking at six months? And, you know, that question of whether someone has done this before is super important and really, really difficult to suss out without looking at an actual code sample or something similar.
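To make that gap concrete, here's a hypothetical pair of snippets, ours rather than from the episode: the same small task written with habits carried over from another language versus idiomatic Python. A reviewer can often read the experience level straight off the idiom.

```python
# Hypothetical illustration: both functions work; the idiom shows the gap.

# Style often seen from someone new to Python -- an index-based loop
# carried over from a language like C or older PHP:
def squares_of_evens_v1(numbers):
    result = []
    for i in range(0, len(numbers)):
        if numbers[i] % 2 == 0:
            result.append(numbers[i] * numbers[i])
    return result

# The list comprehension an experienced Python developer would likely reach for:
def squares_of_evens_v2(numbers):
    return [n * n for n in numbers if n % 2 == 0]

# Same behavior, very different fluency signal.
assert squares_of_evens_v1([1, 2, 3, 4]) == squares_of_evens_v2([1, 2, 3, 4]) == [4, 16]
```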
So I'm hearing echoes of my conversation with Paul Orlando.
We had him on the show a few episodes back.
He writes about unintended consequences.
We were discussing Goodhart's law, which is, you know, when a measure becomes a target, it ceases to be a good measure. And he breaks that into two thoughts, and the second thought that he has about it is really what you're bringing up with proxies. So we are trying to decide something that's very difficult, and almost qualitative to a degree, or heuristic, which is: is this person a good hire? And it's like, well, define that.
And then you create, especially let's focus it on the software side.
Are they a good software developer according to the needs of the role?
And it's like, well, how do we actually know that?
And like you're saying, interviews aren't enough.
I don't think you're advocating for no interviews.
Interview plus work sample tests, right?
Exactly, yeah.
But the work sample tests are the proxies because you're not making them do the job, but you're making them do things that are like the job
close enough and you're trying to get it close enough that it actually works. And that's really
the hard part, right? Is like in that conversation, I use the obvious example of like, if you measure
lines of code in terms of productivity, that's like a terrible proxy, you know? And so we all
laugh at it, but people who aren't in the know have done that in the past.
The closer that those proxies get to approximating the truth
or the reality of what you're hiring for,
it seems like the better they are.
And we're going to go through,
you have eight rules for doing these well.
So it seems like the nut of the problem is like,
okay, we need these work sample tests.
But still, even with that knowledge, it's like, okay, how do you do that? Well,
because it sounds like that's really the most important thing is not knowing, well, we need,
we need to have some work samples, but like, how do we go about that? Because there's whiteboards,
there's puzzles, there's all sorts of these, which are kind of like work sample tests, but they're
terrible in practice for many reasons. So what you're trying to put
out there is like what you think are good work sample tests. Is that right?
Exactly. Goodhart's law is such a great, great law to bring up here, because one of the really common and I think really broken practices is to use algorithms exercises, LeetCode-style exercises, again, as proxies for programming skill.
And there's this whole history here, which is sort of hilarious, where back in the day,
Microsoft had this famous interview process where they asked these brain teaser questions.
How many ping pong balls can you fit in a 747?
Why are manhole covers round?
I'm trying to remember all of them.
That one's famous, yeah.
Right. So they ask these questions.
And the idea was, this is a proxy for critical thinking.
But what happened was, people just studied
Microsoft interview questions and memorized the answers.
Memorized the answers, yeah.
And so then Google comes along and Google goes,
well, this is a bad practice.
People just memorized the answers.
We should ask algorithms and data structures questions.
We should make them write code.
Okay, fine.
There's a theory.
I guess you can make the argument that that's correlated to job performance.
Although I have a BA in English and my lack of CS 101 knowledge hasn't been an issue for
me in 20 years in the industry.
But fine, maybe I'm weird.
That's no big deal.
Whatever.
Google comes up with this idea.
They think that it's a proxy.
But then what happens is Google forgets that it's a proxy.
And now if you want to get a job at Google or Facebook or Microsoft
or a lot of the big tech companies,
literally if you have the time to just grind on LeetCode for six months,
you'll probably make it through the job interview.
They're not measuring whether you're going to be good at the job anymore.
They're measuring how much time you've spent grinding LeetCode.
That's pretty good for a new college grad who wants to get a high-paying job, but it's
kind of not great for the company or the teams.
And it's super not great for anyone who's not someone with a ton of spare time to sit in their room and, you know, grind out questions about linked lists or whatever.
You know, someone who has a kid or family or parents to take care of and is trying to switch careers is just not going to be able to compete in a job interview, despite the fact that who knows, they might be better at that job.
So, yeah, a big goal of me writing the series was to try to lay out not just like, LeetCode is bad, this idea is good. That would have been easy, but I wanted to explain why LeetCode is bad and what's better and why it's better so that people
can develop their own work sample tests. I think the big meta-argument is
nothing off the shelf is probably going to work super well for you.
Because...
It's bespoke.
You mentioned these eight rules.
The first one is simulate real work as closely as possible.
The work sample test should simulate what the job is like.
And every job is different,
so the idea of being able to pull something off the shelf that will automatically work
for any job on the planet is probably a fool's errand.
You've got to consider, what do I need this person to do?
And then how can I construct something that is as close a simulation to that real work
as I possibly can?
And if you do that,
you're almost certainly not going to come up with writing on a whiteboard unless the job involves literally writing on a whiteboard.
If you're hiring someone to, you know, teach a class? Yeah, sure, make them write on a whiteboard. That's probably part of teaching, and it might be fine for that.
But if you're hiring someone to write code
and you have this concept in mind
of simulating real work, it's really easy to see why a whiteboard isn't going to work.
This episode is brought to you by our friends at FireHydrant.
FireHydrant is the reliability platform for every developer.
Incidents impact everyone, not just SREs.
FireHydrant gives teams the tools to maintain service catalogs, respond to incidents,
communicate through status pages,
and learn with retrospectives.
What would normally be manual, error-prone tasks
across the entire spectrum of responding
to an incident, this can all be
automated in every way with FireHydrant.
FireHydrant gives you incident
tooling to manage incidents of
any type, with any severity, with
consistency.
You can declare and mitigate incidents all inside Slack.
Service catalogs allow service owners to improve operational maturity and document all your deploys in your service catalog.
Incident analytics let you extract meaningful insights about your reliability over any facet of your incidents or the people who respond to them. And at the heart of it all are incident runbooks. They let you create custom automation rules to convert manual tasks into automated, reliable, repeatable sequences that run when you want.
Create Slack channels, Jira tickets, Zoom bridges,
instantly after declaring an incident.
Now your processes can be consistent and automatic.
Try FireHydrant free for 14 days.
Get access to every feature. No credit card required.
Get started at firehydrant.io.
Again, firehydrant.io.
It's almost like you need a glimpse of the future, their future, right? That's what you're trying to do. But then there's also this aspect that if you're simulating real work as closely as possible, there's a lot of upfront investment and effort in every hire. Which kind of comes back to your earlier point, which is they can be so critical, so crucial. Like, you've had people who wanted to quit their job and hated their job because you hired the wrong person at one point in time in your career.
So it places a lot of emphasis, obviously, on the process.
But there's this intentionality that's required to deliver on point one of this framework of yours.
Point one requires a lot.
I mean, it's not easy, right?
And I think that's
probably why some of these practices persist. It's a lot easier to be like, what does Google do? Cool, I'll just copy it.
Right, what's the easy button?
Yeah, and there's no dancing around it.
Hiring is hard. My rule for myself is when I have an open position from the moment I start
writing the JD until that person starts,
that's like half my time gone.
That's how much work it is to design a process,
to recruit,
to screen,
to interview,
to select,
to keep in touch with them before they onboard,
to onboard them.
And so I think a lot of people, a lot of first-time hiring managers don't know this,
and they think that they can just squeeze hiring in sort of around the edges of the rest of what they're doing. They don't realize it's probably going to be the biggest task they have. And then
also, their managers probably don't know that. So if I go to my
manager and say, cool, I know you want me to hire for this position. I'm going to need half my
calendar back. You know, they're going to tell you no. So yeah, it's hard, but it's also worth it, right? What else are you going to do during those 20 hours a week that will have
tangible, positive impact on your team and on your company's bottom line for years? If you have
other activities that are that impactful, absolutely go do them. But I think most people don't.
I got in a bind when I was at a nonprofit once, and I was younger and, I would just say, more naive to, I guess, just process. And I was telling the founder of the company, you know, what we needed,
because I was feeling the pain as the job doer, the person delivering the work and,
you know, executing. And I needed support to help me do more of my job.
And I'll get less specific just to keep it short.
But I was just like, we need more people.
And, you know, I didn't feel his pain.
I didn't understand the financial constraints, the ability to make money,
the profitability of the company, like things that this person was super aware of
that I was just very less aware of.
And one thing he said to me was just like, if you hire
somebody, you have to be willing to fire them too. And so not that I don't want to hire somebody,
but I want to understand the role that we need to fill before we're just so anxious and fast to
hire. Like I want to make sure I understand what we're hiring them for because they're going to be
here for years, hopefully. I don't want to hire somebody just to fire them because we didn't understand.
And that to me just rang true for so long because I was like,
even to this day, our company is rather small.
Jared and I are the only full-time people.
We just hired somebody for the first time after basically 13 years in business.
I mean, seven or so really making money. But we've been really slow to hire, because I think we don't know how to hire, really. We have desires to grow and do different things, but how do you slow down enough when we have jobs and roles to fill as a smaller company? It only gets exacerbated when you have a much higher scale, you know, a company where you've got 10 or 12 or 50 people, and maybe even more when you come to the enterprise level: just how much time it takes to understand the roles you're hiring for, and how important it is to really understand that, so that you don't hire just to fire.
That is such good advice.
Yeah, I mean, that's the other thing
is firing people sucks.
It does, it's the worst.
I mean, it mostly sucks for them,
but it also sucks for everyone else.
And I've had to fire a couple of times
and it sucks for me too. I got PTSD from it. It's not fun. So yeah, you can also think of getting hiring right as a way to avoid putting yourself in that position, or limiting the risk of being put in that position, for sure.
So as we walk through some of these eight rules or principles, I should state,
as I stated at the top, you've written a
lot on this topic, this conversation will not cover the breadth or the depth of what you have
out there. So links in the show notes if this is fascinating. If you're in similar positions to what
Jacob is, then definitely go read the whole thing. This is like, pretty much a book, maybe you'll
turn it into a book at some point. But we're just going to keep it focused on these eight rules
because we feel like those are digestible and conversable. You mentioned the first
one, simulate real work as closely as possible. And I think we've touched on that enough. If we
look at the second one now, limit work sample tests to less than three hours. You want to unpack
that for us? Why is that a rule? Yeah. So if you think about it a little bit, these first two are
a bit in tension, right?
Like simulate real work as closely as possible. If you follow that as far as you go, you would say,
well, we should ask people to do a lot. If their job involves, if we're hiring a full stack
developer, we should ask them to develop a front end and a back end and deploy it to AWS and set
up monitoring. And suddenly you're asking someone to spend like a week. The real challenge, and I
wrote a whole article about this in designing good work sample tests, is balancing between
predictive value and inclusivity. You want to design a test that is as predictive as possible,
that is going to as closely correlate performance on the test with job performance, eventual job
performance, your predicted job performance. But you also want to design a test that is not going to
accidentally select out people who have children or limit you to candidates who can spend,
who are willing to spend a week on a work sample test. So three hours is the practical limit that
I came up with. And then briefly, the way that I get there: common practice in interviewing, pre-COVID, was for a candidate to fly out and spend a day on site with the company.
Most people agree that that's reasonable.
There will still be some people who can't afford that amount of time,
but there's always going to be a certain amount. It's reasonable for a company to ask for some of your time
when you're interviewing with them.
And we can debate around the specifics,
but I just
started with eight hours. It seems like that's a common enough practice. It's going to be hard
to find a job that requires less time investment from a candidate. And I think more is starting to
get pretty unreasonable. And so when you subtract out all the other stuff that a candidate is going
to need to do for you, you end up being left with about three hours for a work sample test. I've also done a ton of these, and I know that you can create a good work
sample test within that time window. And so it feels like a really practical time window. And I
wanted to be like very specific. It's not a target. Less is better. If you can design a work sample
test that only takes someone an hour, that's awesome. Your interview process will be better for it.
But three hours is, in my opinion, the limit.
If a company is asking you to spend more than three hours, say, writing code for them,
they're being unfair to you.
And you can probably do better.
So is that time period, and maybe this leads us into number three, but is that time period part of the test? Because people work at different speeds; your three hours might be my six, or maybe it's Adam's seven minutes, because he's amazing. But is that part of it? Is the three hours a limit, a constraint, or is it just, well, we think this generally takes three hours, and however long it takes you is fine?
Yeah. So in terms of when they can spend that time, unless there's a reason why time boxing
is super important to your work sample test, and this is rare, I let people do that work whenever they have time.
I almost never set any sort of deadline.
When I assign a work sample test, I say, here's what we want you to do.
Absolutely no pressure. You
know, just let me know when you think it'll be done so I have an idea of what to expect. But if that
changes, no big deal. You know, I've, we're interviewing for a role right now. And one of
our candidates was planning some travel, and then it got canceled because of COVID. And this whole,
you know, this whole situation ensued. And it's going to be a month between when we gave them the work sample test and when they completed it. And that's
totally fine. That's not going to, that's not going to influence my decision in any direction.
It's simply like a fact that has no bearing on, you know, how I, how I'm going to make a decision.
And this is a role that can wait that long. Like, it doesn't need to be filled that fast.
It is, it is.
It might be different if I had some pressure. Right now I don't, but for this role it's fine. In terms of people working at different speeds, I really do consider the three hours to be a time limit. The way I communicate it to candidates is by saying, you know, here's the assignment, I expect this will take less than three hours. So
if you're getting anywhere close to that, I think something's gone wrong. Please, please reach out
and tell us where you are and what's going on. Internally, when I develop these tests, I usually do them myself if it's at all possible. And I usually ask people in that job
or a similar job internally to try them out so that I can validate that this time
is appropriate. And actually, I usually aim for like half that. So like 90 minutes when people
internally are doing it, because they'll be more familiar with the problem space and the types of
tools, again, because it's going to be close to the work. So most of the time, I would say
candidates actually complete these in something like an hour or two. Three hours is more like
an upper limit. I really don't want people spending longer than that.
I think it's unfair.
And it starts to get into, again, what are we measuring here?
Are we measuring something that correlates with job performance?
Or are we measuring something that correlates with free time?
Because I don't want to hire someone who has more free time.
I don't want the candidate with the least responsibilities outside of work.
I want the candidate who will
be best at the job and I don't really care what happens after they go home.
Or stay home.
Stay home, right. Exactly.
Go home or stay home.
Go downstairs, get off the couch, whatever it is.
And their Zoom call. Yeah.
I noticed that in the definition for this, I'm not sure, I guess, what would you call it, the definition, the written version of what it means to use a strict time box, that there's nothing said about compensation.
Did that come into play, or does that come into play at all with any of these sample tests, the compensation of time?
Or do you feel like both the company and the candidate has to invest in the possibility of a relationship?
So I think if it's going to be longer than that eight hours total,
I think compensation is appropriate.
I wouldn't consider asking for more than that without compensation.
However, I'm starting to see a couple of positions
that are offering compensation for anyone taking a work sample test
or sometimes finalists or something like that as a way of
indicating respect for their time and indicating some company values. The example that I can think
of most clearly is the Software Freedom Law Center, which is a nonprofit involved in open
source and free software law, was conducting a search, I believe, for an executive director,
and they compensated everyone who they selected to go to the interview stage. I believe they gave them $500 each. This is an organization that is keenly aware of the
difference between paid and volunteer labor. They're in the open source world where this is
a massive area of conversation right now. And I think, putting words in their mouth,
to some degree, I don't know why they did this, but I think this was a way of sort of living their values of this work is important.
The way we represent that in our society is we give people money for it.
I would like to experiment with compensating everyone who gets to a work sample test phase.
It's not going to be entirely my decision because I'm part of a company and I can totally understand.
I think reasonable people will differ on this. I don't think it's unreasonable to say, yeah, both candidates and companies should invest some time.
And like, the norm is that interviews are not compensated, and I don't know that it's unfair, but I would like to try and see what impact that has on the candidate pool.
And if that moves the candidate pool in a positive direction, I'm inclined to say
that the cost would be minimal compared to, you know, potential rewards there. So this is outside
your framework. But while we're on the topic of compensation, I wonder your thoughts on the idea
or process of doing extended paid trial runs. Like, what if we, maybe it's just with your finalists,
or maybe you're down to a couple options
and you're like, how about you work for us
for two weeks at this set rate
and then we'll make a decision at the end.
Is that something that is fruitful
or are there problems with that idea?
I talked about this a bit in,
I have an article that I titled
something like What Doesn't Work
where I talked about some of the stuff
that I sort of discarded.
And this is an idea that I don't think works most of the time, not because it's a bad idea.
I think it's actually probably the best from a correlation to job performance, right?
Like if you want to know if someone's good at the job, like give them the job for a month.
Right.
Like you'll find out.
The problem is that like if you flip it around and think about it from a candidate standpoint,
I'm not going to quit the job that I have right now to go work for you for a month and then maybe get a full-time offer.
That's just too big a risk.
Maybe if we had universal healthcare, I might feel differently, but we don't.
And so that's a pretty big deal for a lot of people.
Yeah, it's selective towards those who are currently not employed.
Like if you're currently employed, then it pushes you out of the process.
So I've seen situations where it does work, right? An example is I've been in situations where
there's someone who's a consultant who we initially talk to about doing something on
a contract basis. And over the course of that conversation, they go, huh, your company's
really cool. Maybe I don't want to be a consultant anymore. Maybe I'd like to be a full-time employee.
And in those cases where they're already set up to do consulting,
maybe you're already working together,
it makes a ton of sense to say,
awesome, let's feel it out for the next three months.
Let's get you on that contract.
Let's work on this project together.
And if we're both still really happy three months from now,
then you'll just keep doing the same thing and you get a W-2
now. I think that it can be really great. I just think that situations where it's sort of fair to
both parties, especially to the candidate, is rare. So as much as I'd like to use it more
frequently, I've only seen that happen a couple, three times. It's a case-by-case basis. It has to be a pretty specific situation.
Yeah. I think this really leads to why the process of hiring is so challenging because
there's just so many different cases where this could happen. It's like the worst if else statement
ever, or, you know, it's like, it's just so many different paths you could take when it comes down
to how you respond.
Like in this case, let's hire him for two weeks.
Well, that's terrible for somebody who currently has a job, but great if it's an existing consultant and it's not much of a change for them.
It's really just trying it on even further with a different possibility.
It certainly speaks to why hiring is so hard. And every time you have one of those differences from one candidate to another, you have to decide whether that difference matters.
Is this signal?
Is this something that I should include in my hiring decision?
This person I mentioned who needed to take a bunch of extra time on the work sample test,
is this something I should be considering in my hiring?
Or is this something I shouldn't be considering?
And every single piece of information you get from a candidate,
you kind of have to decide, is this signal or is it noise?
And sometimes it can be really, really difficult to tell.
You know, if someone calls into a video interview
and their audio and video really suck,
is that, whatever, fine, working from home is hard?
Probably.
But what if they're in
a position that's going to be primarily client facing where they're going to be spending six to
eight hours a day on video calls with clients? Maybe you need to ask them about their video
setup and whether they would be open to some feedback and some improvement there or just ask
them what's going on. Do you live in a place with really bad internet access? Maybe that's, maybe that's a deal breaker for this job, right?
But you have to be really intentional about, about all of that.
It's really easy to get someone on an interview and be really frustrated by a bad internet
connection and make a decision not to hire them or to, you know, score them lower because
of that internet connection without actually thinking, does this matter?
You know, does this just mean if we hired them, we would need to pay for them to get
a better internet connection?
Or is this actually something? Do they live in a cabin in the woods and literally can't get a better internet connection, right? Every single door has this question: is this correlated, or is it not? And it takes a lot of thinking and
intentionality to ask that question over and over and over again.
This episode is brought to you by our friends at Retool.
Retool is the low-code platform for developers to build internal tools. Some of the best teams out there trust Retool.
Brex, Coinbase, Plaid, DoorDash, LegalGenius, Amazon, Allbirds, Peloton, and so many more.
The developers at these teams trust Retool as the platform to build their internal tools, and that means you can too. It's free to try, so head to retool.com slash changelog. Again, retool.com slash changelog.
And also by our friends at WorkOS. WorkOS is a platform that gives
developers a set of building blocks for quickly adding enterprise-ready features to their applications.
Add single sign-on with Okta, Azure, and more.
Sync users from any SCIM directory.
HRIS integration with Bamboo HR, Rippling, and more.
Audit trails.
Free Google and Microsoft OAuth.
Free Magic Link sign-in.
WorkOS is designed for developers and offers a single, elegant interface that abstracts dozens of enterprise integrations. This means you're up and running 10 times faster, so you can focus on building unique features for your users instead of debugging legacy protocols and fragmented IT systems. You get RESTful endpoints, JSON responses, normalized objects, and single pay-as-you-grow pricing that scales with your usage and your needs. No credit card required.
Again, WorkOS.com. They also have an awesome podcast called Crossing the Enterprise Chasm,
and that is hosted by Michael Grinich, the founder of WorkOS. Check it out at WorkOS.com slash podcast.
So let's hop into rule number four. We talked about simulating real work as closely as possible. We talked about limiting it to three
hours. We talked about being flexible with when they do that, avoid deadlines if possible. Now,
number four is provide as much choice to candidates as possible. Give them the choice of several kinds
of work sample tests, languages, environments, etc. That sounds like more work for you,
the hiring manager, to just come up with as many tests as you can.
Yeah, it does. Yeah, it is. And probably if there's one
quote-unquote rule that has more flexibility than others or that I would forgive people for not
investing as much time in, it's this one. Because yeah, I think in an ideal world,
you'd offer people a few different choices that allow people either to play to their strengths
or that fit better into their personal schedule.
So to get concrete, the ones that I usually offer people
are coding homework, like here's an assignment,
go home and write this.
I offer to look at code they've written previously,
like open source code or a previous job,
or I offer to do a pair programming exercise.
So same as the first one, but instead of like,
go off and do it on your own, let's get on a call together and we'll pair and write it together. That's kind of only
two exercises I need to develop, right? And then kind of two modalities for that exercise. And the
reason for that is I want to give people the ability to get as close as they can to what
working in sort of their optimal environment might be while sort of fitting within what I need to
measure and see from them.
So let's take that open source slash previous work situation.
Obviously, that's one that people would choose if they have that
as something that they can fall back upon, right?
Right.
But most people don't, which is why it's not sort of the default.
It's pretty rare for people to have code that they can actually share with me.
Most people, the code they've written has been for a previous employer, like they legally couldn't show it to me. And it would be,
even if they could, it would be a bad idea, like ethically. It's kind of a nice get out of jail
free card though, or get out of your work sample test free card because you're like, here, I've
already written some stuff. We can just take a look at it. So in the cases where they do, let's say I have an open source Python library that you and I could sit down with. Then what do you do from there?
Are you with them, like poking through the code, asking questions?
Do you have them describe to you what it does?
And they can make excuses for why that function's not as pretty as it could be
because it was under constraints?
Or how does that actually go in practice?
Yeah, so that's a great lead-in to principle number five,
which is that the code is not just like a simple pass fail.
It's the beginning of a conversation about that code. So regardless of whether someone writes
code to my specifications or gives me something that they've written previously, we're going to
then get on a call and discuss it. And I'm going to ask them, you know, what did you do here? Why
did you choose this dependency? You see you use mocks for testing here. Why did you choose mocks?
Are there other approaches you thought about? Did you consider a different testing framework?
Oh, I use PyTest. Do you like PyTest? What do you like about it? What don't you like about it?
Whatever. I'll just kind of dig into the code and ask a bunch of follow-up questions.
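To make that concrete, here's a hypothetical sketch of the kind of submission that invites exactly those follow-ups; the function names and the mock-versus-fake tension are invented for illustration, not taken from Jacob's actual tests.

```python
# A tiny slice of a hypothetical work sample: production code plus a
# mock-based pytest test. It's the sort of code that prompts questions
# like "Why mocks? Did you consider a fake? Why pytest?"
from unittest import mock


def send_email(to: str, body: str) -> None:
    """Stand-in for a real mailer; tests should never actually call this."""
    raise RuntimeError("real mailer called during a test")


def notify_user(email: str, message: str) -> None:
    """The 'production' code under review."""
    send_email(to=email, body=f"Notification: {message}")


def test_notify_user_sends_email():
    # Patching the module-level mailer keeps the test fast and hermetic,
    # but couples it to an implementation detail -- a natural talking
    # point in the follow-up conversation.
    with mock.patch(f"{__name__}.send_email") as fake_send:
        notify_user("a@example.com", "build finished")
    fake_send.assert_called_once_with(
        to="a@example.com", body="Notification: build finished"
    )
```

The interesting signal isn't whether the test passes; it's whether the candidate can defend or reconsider choices like that patch in conversation.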
I think a mistake that a lot of people make is they take the code sample, they run it,
it produces the output they have expected, and they say, cool, pass.
Or they take it, they run it, it doesn't.
They say, nope, fail, reject the candidate.
And I have rejected people who have submitted working code because their code was really
poorly written or worked accidentally, or they couldn't explain what they did or how
it worked.
Worked most of the time.
Worked most of the time, right?
And I've totally hired people who have given me code that didn't work
and were able to explain, oh, yeah, this was the part that I didn't know how to do.
Or, oh, I misunderstood that part of the assignment.
Or, oh, I really wanted to focus on testing and I got kind of distracted
so I didn't finish the rest of it.
Just for having the guts to submit non-working code.
Like, look, here it is in all its glory.
Yeah, I mean, I told them three hours.
And so if they hit three hours and weren't done,
I would expect them to give me not done code.
The analogy that I like to think about is,
I'm not looking for production quality code.
I'm looking for ready to open the pull request, right?
And at the point of opening the pull request, it doesn't necessarily even work, right?
I mean, how many times have we gotten like 90% there
and then gotten stuck and we open a pull request
and we say, all right, I think something's wrong.
I don't know what's going on.
Can someone else take a look and help me figure it out?
And as long as that code is mostly good,
it'll eventually hit production.
We'll just get some feedback, right?
So I kind of think of any sort of work product
that I see as part of a work sample test
as a draft or as like pull request.
Like it's the first version
you might share with colleagues
because the part that the work sample test
doesn't include is the feedback that you get
that makes whatever your work product is really good.
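As a hypothetical illustration of that bar, ours rather than an actual assignment of Jacob's, a "ready to open the pull request" submission might run, cover the main path, and be upfront about what's missing:

```python
# Hypothetical draft-quality submission: it works for the happy path
# and is honest about loose ends, which is the level being described.
import csv
from collections import Counter


def top_error_codes(log_path: str, limit: int = 5) -> list[tuple[str, int]]:
    """Return the most common 4xx/5xx status codes in a CSV access log."""
    counts: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            status = row.get("status", "")
            if status.startswith(("4", "5")):
                counts[status] += 1
    return counts.most_common(limit)

# TODO: didn't get to streaming very large files or malformed rows --
# in a real PR I'd flag both and ask a reviewer how to prioritize.
```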
Another good example, the position I have open right now, the work sample test involves
writing in English this time. And if you look at the writing on my site or the writing that I do
for work, it's well organized. It's mostly free of typos. It's fairly clear. It's cogent.
And you might think that it springs from my keyboard
looking this way, but you'd be totally wrong. My rough drafts are full of typos, confusing,
hard to follow. In almost every case, there's an editing step. Always me doing a few passes
of editing, but often someone else, especially for my work product. I've got a whole team of colleagues that I share drafts with, who give me feedback. So when I'm assigning this work sample test that involves writing,
I'm telling people, I'm not looking for something that would be done here.
I'm looking for ready for feedback from my colleagues level.
And it's the same thing with code. I'm not looking for ships to production. I'm looking for
ready to open a pull request. So the bar is much lower, but I want people
to be, I need to know that people can understand and talk about
and have a conversation about that code,
because that's exactly what's going to happen at work.
I'm going to open a pull request and someone's going to say,
oh, you're using mocks here, I thought we usually used stubs.
Why don't we change this?
And you have to be able to have that conversation.
I need someone who can both write the code
and also then have that conversation with their colleague
and get to the right point.
So in a sense, it's kind of like an interview sandwich with a work sample in between,
because you're talking to them first and then giving them this exercise.
And then it seems like afterwards, whether it's the code review session or whatever it's called,
the way they handle themselves afterwards, maybe the way they receive criticism
or the way they explain even here's where I got stuck and why.
I feel like that has a large bearing on the decision making process as well.
Is that fair to say?
Absolutely.
Yeah.
Especially the getting stuck thing.
Like we all get stuck and someone who, gosh, someone who is really self-aware and can tell
me like, this is how I got stuck and this
is why.
And this is, if this was real work, here's what I would need to get unstuck.
That is like so valuable versus someone who I don't know that about.
Like, I'm going to have to learn over time, like what situations they, you know, they
get stuck in.
Someone who has that level of self-awareness is really,
really valuable. And so, just as I said, interviews aren't enough. I could also say work sample tests aren't enough, right? You need the interview part, too. They go hand in hand. They teach you different things.
If it's not pass/fail, like you said, it's gotta be a conversation, but there's gotta be some sort of awareness of how this person works and an idea of whether they can do it.
Not that they are doing it, but can they do the tasks that you need them to?
Obviously, there's certain criteria that you need to edge upon,
but if it's not pass or fail, it's just got to be more information
on their ability to show up and do and to merge well.
Ultimately, you are going to make a binary decision.
You're either going to hire someone or you're not.
You can't 80% offer them a job.
There's no kind of hire.
There's either a yes hire or no hire.
At the very bottom level, there is a binary decision.
But the way that I get to that decision, or the way that I recommend other people get to that decision, is by considering the totality of the evidence in front of them. It is rare for one poor behavior during an interview to be the reason why you reject someone. It's not unheard of; there are red flags, there are things that someone can do during an interview that are so egregious that it doesn't
matter how good the rest of their behavior is.
But usually, by the end of an interview process, you've got a big old list of strengths and
weaknesses.
And if the strengths mostly outnumber the weaknesses and the weaknesses are stuff that
you feel pretty confident you can help them with, you can manage around, won't be an issue, then you make the offer. And if you look through that list of weaknesses
and you go, you know what, like just not going to happen. This person won't be happy here. This
person won't be able to contribute. This person will be a bad teammate. Then you don't make them
the offer. But you kind of can't know that, you know. To bring it back to the work sample test, the situations where I've made people offers despite submitting bad code have been where everything else was so good that I'm like, you know what, they're bad at Python, and the first thing is going to be a bunch of classes, a bunch of, you know, online stuff, some pair programming, we'll send them to PyCon and pay for a bunch of tutorials. And I just know that
the first six months with this person is going to be working through leveling them up in the
language. But I've seen that they're a great teammate. I've seen that they have tons of
experience writing software in other languages. I've seen that they've demonstrated an ability
to learn. And based on the totality of that,
I'm more comfortable hiring this person
who is going to fit in with the team super well
and bring all sorts of other skills to the table
beyond the Python experience, you know,
versus a candidate who's maybe going to be able to write Python on day one,
but is, you know, less of a good colleague,
worse at communicating, you know, et cetera, et cetera.
I don't have it readily available, but there's a – Jared, I have to mention TikTok.
I'm sorry.
I'm beginning to apologize.
I do a lot.
But there was a TikTok I found, it was of Simon Sinek, who is well-known for the book Start with Why.
And he's got a lot of things.
And I really respect a lot of the stuff that man has to say.
He's really got a way of just looking at the world and making wise decisions,
I would say, and just good at mindset. And I don't have it on hand, but I'm going to put it in the
show notes. But it's something where he talked to how the Navy SEALs evaluate adding teammates.
I'm going to paraphrase what I think it says, and we'll link it out and people can check it out
themselves. But essentially, it wasn't about their skill level necessarily as a Navy SEAL.
It was about their ability to be trusted.
Can you trust this person?
Do you want to work with this person?
And maybe that's like you had said some parts of what you hire upon.
But like, all things being fair and equal, capable, willing, all the things, if you can't trust them, they're not going to be a good teammate.
He's like, it begins at the trust level and the character level: if they have good character, good judgment, and you can trust them to show up, be where they need to be, and have your back when you need them to. Now, obviously Navy
SEALs can be in dire situations, you know, it's a different circumstance, but I think,
you know, potentially as a coder, you might be too. Cause I mean, you might put a bug out there
that does some serious damage to the world in serious ways.
So that was something I thought was really interesting, was just the barometer being around trust as a person.
Sure, yeah.
There have been a number of situations in my career where I've taken a flyer on someone, essentially where I've hired on potential, where I've said: I love this person.
I think they have a ton of the professional skills that will make them a great
teammate. I think they bring a bunch of things that aren't necessarily in the job description
to the table, but they're kind of weak on some of the core things that I need for the job.
But you know what? Like everything else about them is so great. I've never regretted making
a decision like that. I have regretted making a decision where I look at someone and I go,
well, they tick all the boxes on my job description, but I think they kind of might be a jerk.
And I ignore that and I hire them anyway.
And it turns out they are a jerk.
And it doesn't matter how good someone is at software.
If they're not nice, life's too short, man.
Love it.
Don't be a jerk. You'll be infinitely more hireable than if you are a jerk. Seems obvious, but we have to state the obvious sometimes. Speaking of obvious, number six is all about being obvious: don't surprise them. So, tell candidates ahead of time about the work sample test; give them clear instructions when assigning the test. How early do you tell them, and how do you tell them? Is it like on the job description? Is it the first thing you say when they sit down for their interview? How do you go ahead and lay that all out there?
I like to put it on the job
description. Here's what the selection process looks like. You're going to have a phone screen
and then you're going to do a work sample test and here's a just brief description of what it'll be
like and then you'll have an interview with this person and then that person. I put this all on the
job description. Or that could be something that
comes up and they're like, first time you talk to them, you know, you have that initial phone screen,
you decide, yeah, this is worth pursuing. Okay, let me tell you about every step in the process.
I've seen so many times where people interview, you know, for like a week, they get all excited, and then they get this email from the recruiter that's like, all right, as the last step, we'd like you to spend the weekend writing this web application. And they're like, well, God damn it, if I had known you were going to ask me for that, I wouldn't have even bothered with the previous interview steps. I don't have time for that, you know? And I don't understand why companies do that. Like, why would you want to invest the time in someone just to... you're wasting your own time.
Yeah.
I don't understand why this is a thing that I need to tell people, but it is a thing that I see all the time. Just lay it out. Don't be cute, don't be sneaky. Just tell people what it's going to be like, and if it's something that they don't have the time for, that they can't invest the time in, let them make that decision up front.
Yeah, I couldn't agree more.
I mean, just put it out there.
And then if they're still surprised
at the point that you ask for the work sample test,
well, that might be an indicator
that they don't read in detail,
which is obviously something that you have to do in most jobs.
I'm afraid that people think I'm, I don't know,
that I think they're dense or something like that
because I repeat it like three times.
Like I'll, you know, I have it in the job description and then we have, when we have
the phone screen, I tell them like what to expect.
And then like when I send them the work sample test, I reiterate what the expectation is,
you know, like I say it over and over and over again, because it really is frustrating when there's a misunderstanding and someone does the wrong thing.
And it's like, oh man, I just wasted everyone's time.
So I repeat it so many times.
I think I sound like a broken record sometimes.
That's a good thing though.
Honestly, I mean, setting clear expectations is such a good thing, especially in this context, because there's already so much stress on both sides.
Like as an organization, you want to get the person on board.
There's some sort of urgency level.
I'm sure there's always an urgency for hiring,
you know, especially in software development.
And you obviously want to get the best candidate as soon as you possibly can and start making progress on your goals, your mission.
And as a candidate, you know, you don't want to be strung along.
And how many times have we all been strung along by an unclear, ambiguous process that has none of these principles in it at all in terms of respect? Like, to me, these principles scream respect on both sides: respect your candidate, and respect your own organization enough to be willing to prepare and have a process that is clear to you and to them, so that there are no missteps.
I mean, I applaud you for that.
That's super good.
And if this becomes like the rule book, or "the way", in quotes, you know, then at least everyone's leaning on the clarity.
Yeah. And like, if you decide that, you know, you really need a three-day work sample test
and it's just not negotiable, it's what you have to have for this position and whatever,
it may be unfair, but you don't care and you got to
do it. Like, just be upfront. Let me make that decision. Let me, you know, have enough respect,
like you said, have enough respect for your candidates to tell them what's going on and
let them choose whether to, whether to engage with it or not. So number seven, we touched on
briefly already. We can probably just list it here. Test your tests internally before giving
them to candidates. Also, it's pretty self-explanatory. Eight kind of attaches to six
to a certain degree. So let's hop to that one. Offer exercises late in the hiring process. So
while you tell them about them up front, you say that should be late. And you say, I recommend
they be the penultimate step, which is probably just an excuse to use the word penultimate.
Well played. Absolutely. Yes. Which is the second-to-last step, for those who are not familiar with the word,
before your final wrap-up interview
with the hiring manager.
So why so late?
Why not just kick off with it?
Yeah, this is one of the places where, first of all, "as possible" is doing a lot of heavy lifting in that sentence, "as late as possible". Like, sometimes as late as possible is first. Okay. The reason being that it's likely the most time-consuming part of the process. So if there's going to be a situation where you're going to discover an obvious mismatch, again, it's not respectful of everyone's time, it's a waste of everyone's time, to put it earlier in the process. You know,
I've seen some people, for example, where the work sample test is actually sort of the gate to an interview: you don't submit your resume, you just send a code sample, and if that's good, then you get an interview. And I don't think that's a great idea, because there's going to be a
lot of people who you would otherwise screen out much earlier in
the process. You would have a phone screen with them and you would see that they're just like,
obviously not a fit. Or, you know, I had a phone screen with a candidate recently
whose salary expectations were just wildly higher than we could offer, more than double the top of our range. And like, this person may very well be worth that. They probably are,
but it's simply not in our budget. And so I wished them well and offered to help with the rest of
their job search. But if I had asked them to do a work sample test and then discovered we couldn't
afford them, I think everyone would have been a little annoyed. So I just want to make sure that people aren't asking for a bunch of time from someone when it's going to be, in the end, totally meaningless.
Because it takes so much time. It's the time aspect, right? It's not so
much that it's the three hours, it's the expectation of go away, come back, some sort of
investment of our most valuable resource as humans, which is our time, right?
Time away from our families, time away from our existing job, our hobbies, our wellness,
our self-care, our sleep, you know, whatever it might be.
Like, you want to make sure that the candidates are whittled down to the people that you would want to offer a position to if everything checks out.
As you said, the totality of all the things, not just this pass/fail, because it's not a pass/fail.
Yeah. Another way to look at this is, like, resumes are dumb and resume screens are not great. But the one advantage resumes do have is you do it once: you make a resume, you know, for your spring 2022 job search, right? You invest that time once and then you send it out to 30 places. Work sample tests aren't like that.
If every single one of those 30 places asked you for a three-hour time investment before you could even talk to a hiring manager there, you're looking at 90 hours of work.
Are you really going to do that?
I wouldn't.
Unless they were all the same test.
I'm just kidding.
Right.
Exactly. So that's the thing: as much as I think resumes suck, I think they are the best balance of being respectful enough of candidates' time and giving us as hiring managers just a little bit of information, enough to look at a resume and be like, no, you're wildly off base, or, okay, yeah, worth talking. And it takes very little effort from the candidate side to send, you know, that 19th copy of the resume, that 50th copy of the resume, right?
Well, while we're talking about saving time: I should have brought this up during the work sample types section, but I forgot. Now I want to bring it up, make sure we don't miss it. You have this other type of work sample test that you call the reverse code review. I thought it was interesting. Do you want to tell us what that is, and maybe if it works well? You say it's kind of like an edge case kind of thing, but this is where they're reviewing your code, right?
Yeah, this is one of my favorites, but it doesn't apply super
often. But for some roles, instead of saying, hey, you write some code and I'll
take a look at it, you can turn it around and say, here's some code I wrote, take a look at it.
What do you think?
Weirdly, this works really well
at either end of the seniority spectrum.
It works really well for very junior engineers, where asking them to do a lot of code writing might be a little fraught because they don't know very much yet.
And absolutely, when you're hiring junior developers,
or entry-level specifically,
you're definitely hiring on potential, right? You're absolutely hiring based on where you think
they're going to be in a couple of years. And so sitting down with some Python code you've written
and saying, like, what do you recognize? What don't you recognize? Oh, yeah, that's what this is; these are optional static type annotations. Like, did you study static typing in school? What do you think about it?
It can be a really good approach. And similarly, this can be a really good approach for
very senior people where code review and feedback becomes an increasingly major part of that job.
As you get to that level of seniority, the code you write is maybe even less important than the
feedback you give the rest of your team.
And so sending someone your code, some code you've written and saying,
what would your feedback be, can really help you measure both technical ability, because you're measuring how much they know about the ecosystem to give you that feedback.
And also you can measure how they give that feedback, how compassionate they are,
how they talk about the parts that are bad, that sort of stuff.
In many ways, it's like a mini pull request, but in reverse. It's like they're reviewing your pull request, because the code is already written.
Yep. I've actually done it exactly that way.
When I did this in the past, we set up a dummy repo with a web app in it,
and I opened a pull request and invited them to the repo and said,
review my pull request, and that was how we did this exercise.
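A quick editor's note for transcript readers: if you want to try the reverse code review yourself, here's a minimal sketch of the kind of snippet you might hand a candidate. It's our illustration, not Jacob's actual exercise; the function names and seeded bugs are hypothetical, and in a real exercise you'd strip out the comments that flag them. It also sneaks in the optional static typing Jacob mentioned, which gives junior candidates something concrete to recognize and discuss.

from typing import Optional

def add_tag(post: dict, tag: str, tags: list = []) -> list:
    # Seeded bug: mutable default argument -- the same `tags` list is
    # shared by every call that doesn't pass its own list.
    tags.append(tag)
    post["tags"] = tags
    return tags

def find_post(posts: list, title: str) -> Optional[dict]:
    for post in posts:
        try:
            if post["title"].lower() == title.lower():
                return post
        except:  # Seeded bug: bare except swallows KeyError, typos, everything.
            pass
    return None

def paginate(posts: list, page: int, per_page: int = 10) -> list:
    # Seeded bug: callers are meant to pass 1-indexed pages, but this math
    # is 0-indexed, so asking for page 1 actually returns the second page.
    start = page * per_page
    return posts[start : start + per_page]

In a session, you'd ask the candidate to walk through this and flag anything they'd push back on in review, which is exactly the conversation the reverse code review is designed to start.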
And that's good too, when it fits, because it's just so close to the day-to-day practice as a software developer. You're going to review pull requests regardless of which code you're writing, which language you're choosing, which framework you're choosing, or whatnot. It's the part of the flow that everyone on your team is going to be a part of. So it's going to be a natural next step as a team member.
Absolutely. Yeah. My friend Sumana has a really funny story about a time when she was hiring and
they had an exercise like this where they gave someone some buggy code and asked them to enumerate
the bugs in the code. And they knew of 12 bugs in the code, so they expected that a perfect score would be 12 out of 12.
They'd find all of them.
And the person they ended up hiring sent them a list of 13 bugs.
There was actually one in the code that they didn't know about,
and so they were like, oh my goodness,
this person is even better than we expected anyone to be.
It's the hire.
I agree with you on liking that one a lot,
because when it would apply, it really leads to a lot of investigation.
It doesn't even have to be criticizing the code.
It could be just pointing out what they see, particularly like you said.
I love that example a lot because it really just gives that person free rein to ask questions in a lot of ways, too.
That's – Jared and I get this because we get to ask a lot of questions doing our jobs.
That's a fun thing to do, I guess, for people like us,
but it's not natural to everybody.
How to ask good questions to get to that next step,
or why did you do this, or did you know about this module
or framework instead, or whatever it might be,
instead of doing it the way you did it.
And that's a cool way to discover somebody's skill set.
Why is it the edge case?
I understand why it would be great for seniors or entry levels
and maybe better for those groups, but is there a reason why it's not more commonly practiced or
maybe you're just used to the typical way? I certainly haven't tried it that much for
the bulk of roles because I find the default of writing code or submitting something you've written previously
to work so well.
I think the drawback would be that
for most mid-career-ish roles,
I don't care as much about their peer review ability.
I need them, again, not to be jerks,
but I don't need them to be able to catch 13 out of 12 bugs. I'm less interested in
the type of feedback that someone more senior would give. I just want them to be reasonably
competent there. But on the flip side, I am interested in their ability to solve problems
with software. So I think for most roles,
if the job is mostly to write code,
then the work sample test should be mostly involving writing code.
I could also see the argument that maybe critiquing code that exists is slightly easier to BS your way through than producing code. So maybe it's just a slightly less accurate proxy; like you said, especially if the job is writing code, let's test them writing code versus reading it.
But it's definitely a cool option to have in your toolbox
for times where it makes sense.
I love it, I just don't use it that often
because I think you're right on there.
I think you're right that as you're developing as an engineer, your ability to recognize problems outstrips your ability to solve them. You know, we learn what's bad more quickly than we learn, like, what to do about it. And there's another thing that happens when you get more senior in your career: there's like this galaxy-brain moment where you realize that the things you used to think were always bad sometimes aren't, and you realize that all of the best practices you've learned have situations where there are caveats.
Yeah, exactly.
And so you're much more likely to find someone who can tell you that, you know, X is bad in that case, but who hasn't yet realized that it's more contextual and not some sort of rule of the universe or something.
You've got, I think, nine total posts in this series.
Plus the Q&A.
Ten, nine or ten, something like that.
I just counted them and I forgot my count, so I apologize.
I literally just counted them.
Off-by-one error, as we're all programmers here.
Yeah, you're fired or hired.
I estimated 10,000 to 15,000 words written.
I'd love to know the actual word count, but it's definitely a culmination of written work on the subject matter.
Obviously, we only really dove deeply
into the eight rules of fair tests, which is,
you know, one of 10. And I really also appreciated your wrap up and Q&A to sort of summarize it.
And I'm just so thankful that you care enough to write it all down, to maybe correct your own
mistakes and maybe give future Jacob a path forward in the correct manner, but also to share
that with the rest of the world. I know that we always appreciate when people like you do this,
because it helps us, I guess, in the future hire better and to treat people more fairly,
our organization, our teammates, as well as the future candidates of our future endeavors. So
anything else in closing, Jacob, anything else you want to share about the process,
anything left unsaid?
No, I mean, thanks for the kind words. I do hope it helps you and other people hire more effectively and more humanely. You know, that's the goal. It's been really fun getting to put this stuff down on virtual paper. But the thing is, I can measure words written and feel good about the amount of content I've produced, but the thing I can't measure is the impact that this may or may not have. And my real hope is that, yeah, this will nudge hiring practices in our industry, you know, even just a little bit, towards something a little friendlier, a little more humane, and a little more effective.
A little editor's
note here: I put my foot in my mouth here in a second or two, trying to describe to you which episode this is. Lo and behold, I got it off by an entire episode. So it's not actually episode 478, it's episode 479. So if you have any thoughts, anything to discuss, head to changelog.fm slash 479.
So, in the note of a feedback loop for Jacob: hey, if you listen to this show, we do have comments on our episodes. So you now know the future, you know, the number of this episode, 478. So go there.
Links in your show notes.
It's in the show notes. Yeah. Thank you, Jared. There's a link in the show notes to discuss
it on ChangeLog News. That links
to the comments and Jacob will get an email
when that happens. And maybe
it's a question. Maybe it's feedback to say, you know what, I've read this, or I'm reading this.
I'm so thankful. Or maybe it's
a year from now or two years from now when this
episode is out and it's like
further in the future and
you've implemented it and you've seen it in practice in your organization.
Whatever it is, may this be one area of feedback loop for you, Jacob, to rest upon.
I'm sure that your email is pretty public out there or something like that.
Twitter, I'm sure.
But, hey, in the future, if you've listened to this, if you've read all the culmination of this work and you've put it into practice and you've got anything to say back to Jacob, I know we here at Changelog appreciate the feedback loop. It happens so rarely, but when it does, it's usually kind words, and we appreciate those kind words.
So yeah, I'll definitely be paying attention to that. So if anyone in your audience has
questions or comments or hate mail they want to send me through the Changelog entry,
I'm happy to read it.
Good deal.
Good deal.
All right, Jacob.
Hey, thank you so much for your work.
Thank you so much for your time today.
We appreciate you.
Yeah.
Thanks, guys.
Really appreciate it.
All right.
That's it for this episode of The Changelog.
Thank you for tuning in.
Big thanks to Jacob Kaplan-Moss for all his work and all his attention to what he is doing for work sample tests. The writing is deep. You should check it out.
And we're really curious what you're doing at your place of employment. How are you handling
new hires? How are you handling these work sample type tests? Are you doing them at all
at your job? Let us know in the comments.
We'd love to hear from you. And on that note, big thanks to you for listening to the show.
We appreciate your attention. Do us one big favor. If you enjoy the show, share the show
with a friend. That is honestly the best way for you to help us grow our shows. We mentioned Changelog Plus Plus, but really what we want you to do is just share our shows with your friends. That's the best thing you can do to help us. And big thanks
to our friends at Fastly for having our CDN back. Check them out at Fastly.com. And of course,
Breakmaster Cylinder. Those beats are awesome. They're banging, as they say, slamming, as they might also say. Thank you, Breakmaster Cylinder. That is it for this week.
We will see you next week. Game on.