What Now? with Trevor Noah - Hilke Schellmann: Is Your New Boss a Robot?
Episode Date: February 26, 2026
AI isn't just coming for your job, it might already be your manager. Trevor and Eugene sit down with investigative journalist Hilke Schellmann to examine how artificial intelligence has quietly infiltrated the workplace. From hiring software that analyzes your facial expressions to productivity trackers that monitor everything from your writing style to your bathroom breaks, Schellmann explains what these systems actually do, and what they get wrong. Do they eliminate bias, automate it, or just hide it better? And what happens to human work when the algorithm is watching? You won't want to miss this episode.
Transcript
What I have learned by, like, bringing AI into the talent acquisition hiring space,
I learned, like, how bad our old processes are. Like, job interviews? Actually really bad.
Because it sort of filters for the people who are good at talking about doing the job,
as opposed to doing the job.
So we have this, like, confidence versus competence problem.
Like, people who, like, come off as, like, confident,
we often think, like, well, that person speaks so confidently,
they must be really good.
It turns out, like, those are more often than not men,
and that doesn't mean actually they're competent.
So we sometimes conflate the two.
What? Us men?
No.
Never.
What?
So, you know.
Little old us, Hilke?
No.
As always, not all men, but a lot.
Us little old men,
acting like we know more than we do, us.
Come on, Hilke.
Mansplaining.
This is What Now? with Trevor Noah.
You based here? Where are you based?
Yeah, based at NYU, 20 Cooper Square.
I live in Brooklyn.
Oh, what part of Brooklyn?
Greenpoint.
Greenpoint.
It's an old Polish neighborhood.
Why did your voice go down when you said Greenpoint?
You caught me.
Well, it's very different.
I've been in the same apartment for 16 years.
It was beautiful 16 years ago.
It's still kind of beautiful, but the neighborhood is changing a lot.
But isn't it becoming cooler and younger?
Yeah.
Oh, that's what you don't like about it.
Well, I liked that it was, like, kind of Polish, and you'd walk into a store and people would talk to me in Polish, and I don't really know Polish. The only thing I know is, like, one line, "Nie rozumiem po polsku," and that is really bad Polish for saying I don't understand Polish. But I kind of liked that.
Wait, that's Polish for I don't understand Polish?
Well, it's very bad Polish, I was told. But it was enough Polish that the Polish realtor was like, whoa, I've never met a German who speaks Polish. And I was like, well, I just said that I don't speak Polish. And it took me six hours on the train from Berlin to Warsaw to learn this one phrase.
Because Polish apparently is very hard.
So I kind of like that about Greenpoint.
And that's, like, becoming exceedingly rare.
But the upside is, like, the beauty of it now is, like, we have beautiful restaurants.
Yeah.
So that's pretty cool.
Maybe I'm just getting old.
I've never understood why people learn the phrase,
I can't speak your language in another language.
Because you want to be polite.
But just speak your language.
It's a test on yourself to see how much you can learn.
Okay, but now think about this,
think about what this says to the other person.
Yeah.
You've said to them in their language,
you can't speak their language.
To me,
what it shows me is you just don't want to speak my language
because you've learned enough to say you can't speak it
and then you won't learn the rest.
No,
I think that's like being really polite.
No,
you're like visiting them and like,
you want to be nice to that.
You've literally walked up to somebody
and someone came up to you and they were like,
I don't speak English.
And then you're like,
well, you did a great job there.
And they're like,
that's enough for me.
Think about it
We've got enough for them
You're like, no, that's enough for me
That's good
Well, Hilka, welcome to the podcast
Well, thank you for having me
Thank you so much for joining us
This is like, you know, sometimes
And maybe it's confirmation bias
Sometimes you'll see a thing in the world
That confirms the feeling
That you're having and the idea
And like a lot of us will be like,
It's a sign, it's a sign
Literally coming here into the studio today
I saw these posters that are all over New York
It's a little QR code
and it says AI, who are the winners, who are the losers?
And it's a QR code and I don't know what's happening
and there's all these different ones everywhere.
And then they say, is your job next?
Is your job next?
And it's all like ominous.
It feels like it's promo for a movie, but it's not, I think.
So what is on this QR code?
I'm not going to scan a random QR code.
This is how your phone gets hacked.
I'm not going to scan the QR.
I just looked at it and I was like, yes.
I wish I did.
I was like, we're talking to the perfect person today
because you have,
dedicated more time in your life than most people into answering this question, like,
basically, like, who are the winners and who are the losers? So, like, before we delve into it,
if you were to explain to somebody who you are and what your passion is in and around
the topic of AI and how it relates to work, how would you introduce yourself to them?
Oh, wow. I guess I feel like, you know, I'm an investigative journalist, and I have, you know,
I used to investigate all kinds of things, and now I just investigate AI, and I'm trying to understand,
Like, how does it work in society?
And maybe who are the winners and the losers?
But also, like, you know, I really think about like,
where it's changing the world of work.
And I saw it eight years ago starting.
And I was like, oh, I don't know if people are aware of this.
And somebody needs to look into it.
And there was kind of nobody else there who was like looking into it.
So I was like, might as well look into it.
I'm just driven by like sort of curiosity.
And I'm like, what is going on here?
So now it has evolved a little bit.
Like I investigate AI, not only AI in hiring and in the world of work.
I also build AI tools.
I think about like how journalism will be impacted by AI
and how we can maybe save journalism
or a fact-based society
when everything can be generated.
So those are kind of things and questions that I think about.
I love the idea of being an investigative journalist,
doing everything and then focusing on one thing
because then it makes me go,
what was it about this one thing that you thought supersedes everything else?
Like what were the other topics you were covering before this?
Yeah, I mean, I covered, you know, like,
violence against women in Pakistan. I went to Pakistan. I looked at South Asia. I did all kinds of things.
And I don't know. I had, like, one Lyft ride in 2017 in the fall. I was in Washington, D.C., trying to
get from a conference that has nothing to do with AI to the train station. I got in the back of
the car and asked the driver, how are you doing? And he said, I've had a weird day. In the history
of me taking Lyfts, no one has ever said that. And I was like, really? Well, what happened? He's
like, I had a job interview by a robot with a robot. And I was like, what? Job interview with a robot.
And he's like, yeah, you know, he had applied for a baggage handler position at an airport. And he got a call from a robot that asked him three questions. And he was really weirded out. This was in 2017. So we are, you know, light years further down the road of AI now. But I was like, I've never heard of this. So I started looking into it. And here we are. And then I went to a conference. And I was like, wait a second. There are all these, like, AI vendors in HR. And, like, it's being used everywhere, and no one talks about it. And whoop, down the rabbit hole I went. And somehow
it never, it doesn't let me go. I'm thinking about, like, the next four books on AI, the next research studies on AI. It just doesn't, I don't know. I'm very bad at predicting the future. But I could tell that this is, like, a transformative technology that we need to pay attention to. And not only how the technology works, but, like, its societal implications. What does this mean if we use AI in hiring? What is it, what are the
consequences of this. If we use it in journalism, how does our world change or maybe not change
and how does it improve the world or maybe not? And I was surprised that there isn't maybe a
whole lot of improvement as we wish it would be, at least in hiring. So I think that was a little
bit surprising, sadly. When I first saw, like, the first time I went to a conference and somebody was explaining how they do, like, emotion scanning on people's faces, and, like, checking the intonation of your voice to find out if you're going to be good at a job, and, like, the words that you say. And I was like, wow, who knew that, like, facial expressions in job interviews could be predictive of your success at a job? Like, what a whole new way of science.
And then, you know, we trust but verify as a journalist.
So I trusted that information.
And then I went on to verify it and talked to a lot of experts who are like, what?
Emotions on faces? Like, that doesn't exist to predict how good you are at a job. No way. And I was like, oh, that's too bad. Intonation of our voices? We can't really tell what kind of emotions you have. Like, we can sort of make a prediction, but that's not always really the case.
Like, you know, it's kind of like when I'm in a job interview and I say I'm nervous. Sorry, when I'm in a job interview and I smile, a facial emotion-scanning algorithm would say, like, oh, yeah, she's totally happy. She's smiling. And I'm like, I'm fucking nervous. I'm not happy in a job interview. Who in the world has ever been happy in a job interview? So that's kind of like, you know, it is a prediction, but we're using it to, like, sort of select people.
It's just like your intuition. From what I hear, it's like your intuition as an investigative
journalist was basically to say there's something deeper that's happening here. There's a world.
Do you know what I mean? Yeah, totally. And somebody has to look into it. And for some reason,
it just sometimes happens to me who's standing right there. So I have to like take it on.
It's like, you know, when the chairwoman of the Equal Employment Opportunity Commission,
when I was talking to her about AI and hiring, and she's like, yeah, I do wonder, now we have
these, like, one-way video interviews.
And, you know, the companies use the recording, run them through a transcription service,
like speech-to-text transcription like you have on your phone.
And then the AI predicts upon that transcription.
And she was like, I wonder how good the transcription software works for people with accents,
people with speech disabilities.
I'm like, yeah, totally.
And you have, like, a federal agency.
You should totally look into that.
and study that.
And she's like, oh, yeah, I don't know.
And I was like, okay, there's no one here.
So I started to study it with the help of a research team, a computer scientist, a sociology professor.
I don't do this work alone.
But, yeah, so that's kind of the work that I do.
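The accent question she raises, how well speech-to-text works before any AI ever scores the transcript, is usually measured with word error rate (WER): the word-level edit distance between what the candidate actually said and what the software produced, divided by the length of the reference. A minimal sketch, with invented transcripts (this is an illustration of the metric, not any vendor's system):

```python
# Minimal WER sketch: word-level edit distance / reference length.
# The transcripts below are invented for illustration.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate between a reference and a hypothesis transcript."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Dynamic-programming (Levenshtein) edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                           # delete everything
    for j in range(len(hyp) + 1):
        d[0][j] = j                           # insert everything
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / max(len(ref), 1)

print(wer("my strength is attention to detail",
          "my strength is attention to detail"))   # 0.0
print(wer("my strength is attention to detail",
          "my strength is a tension to the tail"))  # markedly higher
```

If a transcriber's error rate is systematically higher for accented speakers or people with speech disabilities, every downstream score built on that transcript inherits the gap, which is exactly the disparity the study set out to measure.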
You know, the more you speak, I realize this is what it sounds like whenever I speak to Trevor about technology.
He knows so much about technology.
I only know how to send texts.
But you send them very well.
Very well.
Sometimes I send pictures as well with those texts.
And an emoji, don't get me started.
But I've actually never heard you talk about AI now that I think about it.
Never.
Because also I don't understand how much of it is in my life.
And I don't also understand how much it scares people.
So I'm even scared to ask people, what is it about AI that scares you?
Because I don't interact with technology that much.
So how would you explain to me what scares people and how much I've been using without
even knowing I've been using?
Yeah.
Well, we use it in everyday life.
Do you have a spam filter on your email?
Nix.
I specifically said to you.
Well, you know, it's like sort of the rise of AI has been everywhere, right?
And it's really, like, software, really, what it comes down to.
It's just sort of, like, maybe software on steroids.
It just thinks better than what we used to have, where we'd say, like, oh, if this, then do this.
Like, we now have self-learning tools that can sort of do translations. From, you know,
we could now be talking in German or French, and an AI
could just translate that in our voices.
And then it can generate that.
So we see it kind of everywhere, moving into everything.
That's crazy.
So wait, you're saying with the technology now, out of nowhere,
we can just go from speaking English and then we just switched into another language.
In real time.
In real time.
I don't know if it works in real time, but we can definitely do it.
That is unbelievable.
And then you speak English and then Eugene speaks French?
You also?
Yeah, I speak also.
These are Zooku get us AI.
Okay, you do not have to emulate the AI.
Ice-rock and nothing.
You know, your book, your book really, I think, shook me up in the perfect ways
because you've written extensively about the world of AI.
And what I wanted this conversation to do, because I try to talk to people like Eugene, funny enough,
who I realize don't have the handle or the passion for tech that I have.
And sometimes I think if you love tech too much, you're just focusing on like the tech side of it.
And you're like, wow, the engineering.
The new tech.
And then when I speak to a person who's not into tech, they just go like, wait, wait, wait.
What does it do for me?
What does it do against me?
And how do I need to think of its role in my life?
And your book really broke it down because one of the first things I noticed about your writing is,
AI is fundamentally going to change what the word job means.
Do you know what I mean?
Like job has constantly had like evolutions over time
Like people used to go like
A job is this and you know like
It meant using your hands
And people like that's not a job
And the first people on a computer
People were like, that's not a job
And then now people go that's not
But fundamentally
From everything I've seen you write
And obviously everything that's happening in the world
It seems like job itself is going to change
What have you found in your investigations
On like how AI is changing
What jobs actually are or aren't
Like in different fields
lawyers, doctors, etc.
Yeah.
I mean, I think we already see
some of it coming down.
You know, we see,
we already see some of the consequences
of like AI infiltrating our daily lives.
We see, like, way less sort of early-career hiring
because I think a lot of times people who use AI
sort of describe it as like, oh, yeah,
I have like a little intern with me who does like a lot of jobs for me, right?
Like they can write code for me.
They can do, you know, you can generate a research report of stuff that I need to know. Like I can generate emails, newsletters, like stuff that I have to write that we maybe were going to give to...
Set my calendar, book my flights.
Yes.
Yeah, all of that. Yeah, totally, totally. It can do a lot of that. You know, we're still thinking
about like, still are looking into like a gentic AI, can it really book the best flight for you
that you want? You know, we're still working on that, but it can definitely help you like generate
research, doing math problems, all kinds of things. So I think we see a lot of companies already moving
towards like, oh, having fewer head counts and sort of like, I worry a lot about like,
how's this pipeline going to break of people doing like early entry jobs, how they're going
to get the expertise and the wherewithal to like move up if we sort of take out the first layer
of jobs.
Maybe you just have to like upskill people.
But how do we, how do we, that seems to be the conundrum, right? Like, law firms. Most of the people who start out in a law firm, they've got their law degree, they go and work at a law firm, and it sounds like your job is just to, like, go through the paperwork and do the research and write up briefs and do this. But you're working for someone, and in that process you're learning, and they're teaching you what they're looking for. But if we cut off that level, then where does the expertise come from? Because we say upskill, but then who is doing the up of the skill?
Yeah, yeah. I mean, I think it's, like, sort of, you know, what I sometimes fundamentally think of, and, you know, we don't have all the answers yet to some of these questions, if I may say that, is, like, sort of, like, what stays as human in the age of AI, right? If, like, AI can do sort of what we think of as, like, very human things. Like, if AI can write better than I do, how can I express myself? Like, what does it mean for humans in a world of AI? Like, what do we bring to the table now that AI can do so many things for us?
Don't go anywhere, because we've got more What Now? after this.
In the job space, actually,
I would love to know, like, you've done a lot of investigating.
And I want to get into some of the stories because I think people will be fascinated by
how humans have been affected by AI already.
Is there, is there like a concrete number on how much hiring is actually done by AI now
and how much is human?
Because a lot of people out there, if you told them, oh, hey, your job application,
your CV, your resume, whatever you type up, it's not even seen by a human in some companies.
Yeah, nothing.
Yeah, sorry.
So we think about like...
How do you think you got here?
You think if I knew you were coming, you'd be here.
If I looked at your resume.
This was AI.
You now have to say it was like shitty AI or something.
This freaks me out at every turn.
Wait, wait.
So someone applies for a job.
So you like upload your resume or you don't even have it uploaded.
Like you already have it on LinkedIn and you just hit the one click.
Yeah.
So the company I'd like to work for.
Yeah.
So like all of these big platforms, they all use some form of AI.
That I can tell.
You don't have like a central register where companies have to register and say like we use this AI tool or not.
We just know this from surveys and sometimes me calling companies.
So I know that they use AI.
So you have to think about like the beginning of the hiring process.
You often have thousands of people applying for a job, right?
We call this like sort of a big funnel.
And some companies, you know, this is a couple years old.
I talked to Google.
They get over three million applications.
IBM gets five million, over five million applications a year.
So it's a lot of resumes that come into this funnel.
So what we now see is, like, a lot of companies, and usually large companies, a lot of Fortune 500s, use AI to reject people, to sort of cull the herd of all these applicants.
And like so we see in the early stages, rejection, rejection, and like a few people going on to the next stage, and then, you know, doing like one-way video interviews, and now we have video avatars interviewing people.
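The funnel described here, thousands of applicants scored by software with only a thin slice ever reaching a human, reduces to a rank-and-cut. A toy sketch with invented names and scores (no real vendor works exactly this way, this only shows the shape of the process):

```python
# Toy hiring funnel: rank applicants by a model score, advance the top few,
# auto-reject the rest. Names and scores are fabricated for illustration.

def funnel(scores: dict[str, float], keep: int) -> tuple[list[str], list[str]]:
    """Return (advanced, auto_rejected) applicant lists, best score first."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:keep], ranked[keep:]

applicants = {"Ana": 0.91, "Ben": 0.42, "Chen": 0.77, "Dee": 0.65, "Eli": 0.30}
advanced, auto_rejected = funnel(applicants, keep=2)
print(advanced)  # ['Ana', 'Chen'] (only these are ever seen by a person)
```

Everything below the cutoff is rejected without human review, however qualified: anyone the score underrates simply never surfaces.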
Just break down.
What is a one way video interview?
Because I think a lot of people, I didn't know what that was until I read your work.
I hear you. I've done so many.
Me, 30 seconds ago: what are you talking about?
No, but I didn't know.
Me. Yeah.
So, like, a one-way video or audio interview, like, you know, there's now a traditional way to do this, which is like six or seven years old, where you don't have anybody else on the line. You know, you kind of log in, you get a link: do this video interview in the next 48 hours if you want the job. So you click the link. And then instead of a human on the other side, like on a Zoom call, you just, like, get maybe a video of somebody saying, hey,
Hey, welcome to company X.
We're so delighted you were here.
We have a couple of tests for you.
And then you get a question like, what are your strengths and weaknesses?
Why do you want this job?
And then you tape yourself, basically, you get like a couple of minutes to prepare.
And then you tape yourself like saying like, my strength and my weakness is this.
And then I think all of the applicants I've spoken to would think that, like, a human watches all these videos, bless their hearts if they do.
And some companies actually do have humans watch all of these.
but some companies also use AI to rank people and do that.
So we see that more and more,
and we see this often like entry-level jobs.
We see this in like retail companies, fast food.
Like, it's called high turnover, no, high volume hiring.
I don't remember.
So it's generally jobs where people are coming in quickly and leaving quickly.
Exactly.
It's not a career job.
So people are going.
Sometimes it's a career job.
Oh, but it's just like high turnover.
But it's a high turnover, or you have, like, lots of candidates that you have to go through.
So, for example, like Goldman Sachs said a few years ago for their summer internship,
they had, like, over 100,000 applications.
So they have to, like, go through these applications and, like, narrow down the pool.
So you use, like, resume screening AI, you use, like, video interviews.
You can use games.
We see, like, personality games.
These games are supposed to find your personality while you're clicking on balloons,
pumping up balloons.
All kinds of ways to assess you
without maybe putting in a whole lot of work
because humans are expensive to do this work
and also, sorry to say this,
but a lot of humans, they do suck at hiring
because we have bias.
We have human bias and I suck at hiring.
But this is the conundrum though.
So this is the thing that's like weird now
just for this part of it is,
my reflex, when I hear something like that
is to go, oh no, this is not good.
How can you have AIs, screening people's interviews?
But then on the other hand, I go,
if you have 100,000 people applying to a job,
let's be honest,
I don't think there's any human who's going to get through those 100,000 applications.
I don't think there's any humans.
And I wouldn't be shocked if there were like a bunch of humans
who were skipping through this before
because they were just like, it's like auditions in a way.
At some point the person's tired.
You want to get them when they're fresh.
You want to get them when they're in the mood.
Yeah, not when they're hungry.
I wonder, is there a world where, like, does the AI make it better then?
You know, I wish I could tell you that.
So we don't know.
We don't know.
I've asked many, many companies to let me come in as a researcher and, like, sort of look
at, like, here's your traditional way of hiring.
Here's your AI hiring.
And what do they say?
And have this, like, run at both times and then sort of double check, like, you know,
the people that the AI said that would be high performers.
Did they actually turn out to be high performers?
And I have not seen a company do this.
this or want to share this with me or with anyone.
I think it's because, I don't know, there's like a lot of turnover in HR, like these
processes don't work that well.
And I think what we already know, so what we know from a survey of C-suite leaders, like sort of leadership in companies, over 2,000 of them in Germany, the UK, and the US. When they asked them, if your company uses AI tools, do they reject qualified candidates? Almost 90% of the leadership said yes.
So they know that their tools reject qualified candidates. They still use them because, I guess, the efficiency from using AI versus humans is just much, much greater.
But it's not that we know that one process is better than the other.
I mean, we do know that humans are very biased in hiring
and even the best anti-biased training is not going to get out of it.
And, you know, we all know the shortcuts, right?
if you see somebody on a resume that they went to Harvard,
you're like, oh, they must be smart.
No.
They're not.
Well, we know from social science.
This episode, Eugene Khoza learns about the world.
She's like, wait, what are you telling me?
But you go, you know, so this is where I feel like we stumble on the first conundrum.
Generally, generally,
machines like predictability.
Yes.
Right?
Algorithms like predictability.
That's what an algorithm is fundamentally sort of trying to do.
It finds like patterns and, you know, yeah.
And a pattern is a predictability, right?
The conundrum or the paradox of being human is that the biggest breakthroughs that have come from humanity have often come from the pattern breakers.
The person who didn't think correctly, the person who didn't fit the algorithm, the person...
Yeah, the outliers.
Yeah.
So I wonder if companies in moving all of their resources towards efficiency and patterns,
and pattern recognition
might go the opposite direction of innovation
because it's almost like the misfits and the mistakes
are sometimes the ones who give you the biggest leaps.
Do you know what I'm saying?
Yeah, yeah, yeah, totally.
I feel like the solution has caused a problem.
Well, we were speaking about how many people
had applied to Goldman Sachs,
and I think if it wasn't for technology,
would you still get that many applications?
That's interesting.
Would 100,000 people from all over the world
show up at the address
to put in their resume.
So I think technology also allowed easy access.
I also think there's people who know they don't qualify,
but would do it anyway.
So why would you put a human through all of this?
But also I think it's a box ticking exercise for some companies as well.
I think some companies don't want to hire anybody,
but they'll just put out a thing that says we want to hire somebody.
Then they'll end up doing the internal process anyway
because if you're going to trust people with people's monies and files and information,
you'd want someone that you know.
So I think companies know exactly what's going on,
but they're just sending out hope.
And I think once you advertise a job,
it's a great way to advertise your company as well.
Yeah, yeah.
I mean, sort of like people online, you know,
they often joke because obviously some people,
obviously are very aware that companies use AI.
And now a lot of, you know, I think it felt very, like,
passive and sad for a lot of applicants
until sort of LLMs and ChatGPT and other AI came around.
Where now it's like much easier for me as an applicant
to generate a resume.
There's actually now AI warfare.
Yes, it is AI.
Because it's like, I'm going to use the AI to apply for the job.
They're going to use the AI to grade me.
I'm going to use the AI to pass the grades.
Exactly.
I'm going to try to use AI to like outsmart the AI.
There's actually AI programs that now apply for you.
So you don't even have to do anything.
So there's all kinds of stuff.
But like the question is like, well, what are we then doing here?
Yeah, like what are we?
Yeah.
What are we doing?
That is a great question.
That becomes the question.
What are we doing?
Because if the AI is hiring, and people are using the AI to get the job that the AI has hired people for, then that's what I mean, is like we have to ask the fundamental question: wait, what was the point of this process in the first place?
Because multiple studies have shown humans are terrible at predicting the future, especially when it comes to hiring.
A lot of the time when you're hired, you're hired because the person sitting across from you saw something in you that they considered correct for the company.
But a lot of the time, it's just wrong.
Yeah.
You know what I mean?
It's just, it's wrong.
And then people don't do well.
And they were like, well, that didn't work.
But the prediction is wrong.
You know what I'm saying?
Yeah.
And so now, I almost feel like we forgot what the whole point of an interview was.
Like, I'm not a historian, but if I was to bet, I would think an interview was just to be like,
let me see what your vibe is.
It was a vibe check.
Yeah.
But it turns out, like, vibe checks not so great, actually.
Like, because you...
And predicting who's a good employee.
Yeah.
But also, like, a vibe check is, like, finding people who, like, often have the same background as you.
They speak like you.
Exactly.
Exactly.
So you find the same people again.
Yes.
Which, you know, we kind of know that like diversity is good for companies.
Also, like, I mean, I think that's why we have, you know, fewer women and people of color in leadership positions, because we have underestimated them as humans in hiring and promotion decisions for decades. So we have, like, sort of a lack of diversity already because of human bias and sort of the vibe.
You know, you know, when you come to a job interview, you want nothing more but, like, somebody, you know, like the HR manager or the hiring manager, to like you.
And then you start talking about, like, well, what school did you go to?
If you walk in wearing the same shirt.
Oh boy.
You like this, you know, sports team, yadi, yadi, yadi.
And that chit chat feels like very good for humans to make a human connection.
Yes.
But it's actually really bad because that brings the bias in. Because now as a hiring manager, I'm like, oh, man, you went to the same school as me.
It's so cool.
I see you in a completely different light than other people.
And I'm supposed to look at, like, what are the capabilities and, like, your skills that you need for the job?
Not if we went to the same school, but we as humans do that.
And that's where, like, a lot of the bias comes in.
The unfortunate thing is you might think, well, AI is like a pattern machine that just finds patterns, right?
And we'll just look at your, like, capabilities, your skills and find the most skilled person.
But what we've seen in some of the AI tools, when I talk to lawyers and others who get access to these tools: like, an AI provider, you know, they built the tool, an AI vendor, and a company may use their tools.
Sometimes they bring in lawyers and do their due diligence, like how does this tool work?
And what they found out, when the lawyers looked at it, is that some of these tools use kind of problematic keywords.
So, for example, one tool.
There was the Amazon story that you wrote about.
Yeah, the Amazon story is one of them.
The Amazon one is pretty insidious.
So this was like if you had the word woman or women on your resume, you got downgraded.
Because, you know, the tool had learned over time.
You give it resumes of people who currently work here or who maybe made it to the last round of hiring,
sort of labeling them as these are the successful people.
Well, if you work in a tech company and you probably have a gender disparity already built in from maybe previous bias,
you kind of replicate that, right?
If the people who are in the role,
if you use their resumes,
the machine does what it does best,
it looks for patterns.
And it finds out, wow,
women are less successful here.
So we should downgrade them in the hiring process.
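The mechanism described here, a pattern-finder trained on past hires absorbing the gender skew of those hires, can be shown with a toy scorer. The mini dataset and the log-odds weighting below are fabricated purely for illustration; this is not Amazon's actual model, just a minimal sketch of the failure mode it reportedly exhibited:

```python
# Toy resume scorer trained on fabricated "past hires". The past hires skew
# male, so the proxy token "women's" ends up with a negative weight, even
# though gender itself is never an input to the model.
from collections import Counter
from math import log

past = [
    ("software engineer chess club",             "hired"),
    ("software engineer basketball captain",     "hired"),
    ("backend developer chess club",             "hired"),
    ("software engineer women's soccer captain", "rejected"),
    ("backend developer women's chess club",     "rejected"),
]

hired, rejected = Counter(), Counter()
for resume, label in past:
    (hired if label == "hired" else rejected).update(resume.split())

def weight(token: str) -> float:
    """Smoothed log-odds that a token appears on a 'hired' resume."""
    return log((hired[token] + 1) / (rejected[token] + 1))

def score(resume: str) -> float:
    return sum(weight(t) for t in resume.split())

# Identical resumes, except one mentions a women's team:
print(score("software engineer chess club"))          # higher
print(score("software engineer women's chess club"))  # lower: penalized token
```

The model only ever saw word frequencies, but because the historical labels encode past bias, "women's" becomes a rejection signal, which is exactly the downgrade described in the Amazon reporting.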
There were some applications in the story
where Amazon was hiring people.
And their system basically went on its own,
doing its job as it had been told.
And it went,
oh, I've noticed women's soccer team,
women's baseball, women's anything,
does not match with the people who are currently at the top of Amazon.
They don't have that word on their resumes.
Exactly.
So this person is less likely to be like that person.
So we're going to downgrade that.
But this had nothing to do with your actual qualifications.
Wait, did AI do that or did someone who put the input to the AI do that?
No, there's no input.
This was an AI issue.
Yeah, you have to think about like, you know, sort of present day AI.
What we do is, like, we give the AI just the digital
data we have and have it, we call it unsupervised learning, have it, like, figure out
what do these people have in common and who should we hire.
The best fit for here.
So, yeah, so it looks at, like, patterns in the pile of resumes that you give it.
And I guess it scans all of the words.
And then it does what it does best.
It does a pattern analysis and finds out, you know, one other example was like, if you
had the word Thomas on your resume, you also got more points.
If you had the word what?
Thomas. Thomas? Like the name Thomas. Or like in another case, it was like words like Syria and Canada.
What, those got you up or down? That got you up, actually.
If you had the combination, you were hired.
Yes.
Wait, wait, but now, but now. If your name is Thomas on top of them.
From Syria via Canada.
Yeah. So here's my question, though. Does that mean that people could, are there tricks that people could use now?
So if I was writing a resume today, could I just write somewhere randomly?
Syria, Canada.
Passions reading about Syria.
Canada.
I like it.
Maple syrup.
Thomas.
Thomas.
Thomas.
Thomas, Thomas.
Thomas.
Thomas.
Thomas.
Thomas.
Thomas.
So I think the problem is that most tools are like individually calibrated
to each company.
So I could only get hired at Amazon by doing this.
Well, Amazon had that women's problem.
But they say they changed that.
They also say that their machine learning algorithm was never used
solely to make hiring decisions.
But no one would say that it was.
Like, I mean, which company would?
I don't think I've seen a single story where a company has come out and said,
yeah, man, we were just using a computer to choose who was coming here.
All of them go like, no, no, this was not the only thing.
This was merely a pilot program that determined.
You know, the more you guys talk, the more I realize,
are we under, you're a journalist, you know this,
are we underplaying the role that biases have played in our lives?
people choosing whatever it is that represents a certain
group of people, or a company even,
based on what they think the taste of the
population or demographic is.
Do you understand what I'm saying?
Yeah, yeah, yeah.
So you mean in general, or in the hiring process?
Not in the hiring process.
Because if you're going to work for a company
and the person there goes,
I think you'd be great here because of what, what, what, what?
Now we're going, because I think bias is always,
and I could be wrong, always comes in
when we speak of race, gender, or religion.
Once you've ticked those three boxes, we're like, yeah,
but how many places have we gone to where there's that mix
because of someone's biases who decided maybe people who are six foot with muscles
should be in construction?
And because they look like this, they sound like this, they talk like this.
Actually, they'll be great for this job.
So how many of us are beneficiaries of biases?
I think a lot of us are beneficiaries, and a lot of us also have
been sort of the victims of bias, and probably unbeknownst to us. Because, you know, you go in for a job
interview, or you send in your resume, and most likely it is to get rejected, right? Because there are only so
many jobs that are being given out. So the question is, like, why were you rejected? And I think
most of us humans think, oh, well, I was rejected because I wasn't the most qualified candidate.
Or it might have been that you've been rejected because your name is Thomas. Or, in one actual
instance, there was the word African-American
that was used to weigh resumes.
In another instance, there was, if you had the word baseball on your resume, you got more
points.
If you had the word softball on your resume, you got fewer points.
So that's probably gender discrimination, right?
Because more women...
I would give you zero points for both.
In my company, I would be fair.
You say baseball?
You say softball.
I would deduct points.
Trevor, do you see how it circled back?
How does...
And this was not a baseball position.
And the question is like, you know, like, what does it have to do with baseball?
But no, you know, you know what...
In a way, I don't...
I know this is going to sound like a little crazy,
but like I can sort of understand these ones.
And when I'd read the examples in your work,
I would go, this sort of makes sense.
I can see why they've made a mistake here and they can rectify it.
But there are some examples that you've given that blow my mind.
For instance, there's one story that you go into of a guy, I think, by the name of Mike,
and he's like working for Bloomberg or he's like trying to get a job at Bloomberg or something.
And please help me understand this.
Because from what I understood, I'll say it.
then you let me know if I'm right or if I,
he had to play a game like Candy Crush type stuff of popping balloons.
And then he got fired because of how he popped the balloons.
He didn't get fired.
But he did apply to a job.
He was based in Barcelona and applied to a job in London.
And he got a link immediately after applying saying,
like, hey, go to this link.
And, you know, I sort of feel like we as job applicants,
we are sort of forced consumers of this tech, right?
Because if you want the job and you get an email with the link saying, like,
hey, you have 48 hours, click on this link, play this game.
What are you going to do?
You're going to do it.
Even though you were like, and he was like, while he was doing it,
he was like, this is weird.
It sounds like the beginning of a horror movie.
Do you want to play a game?
Why do I have to do this?
Like, it sounds great.
And I think a lot of applicants technically like it better than answering 100 questions
about like are you the life of the party?
Like, I'd rather pop balloons.
But when you realize, wait, is this the only criterion I'm going to be judged on, how well I,
like, pop balloons? Or, like, in one of the games, I had to hit the space bar as fast as possible.
And while I was doing that, you get like 15 seconds or so to do that.
And I was like, what does that have to do with the job?
Like in what jobs do you have to hit the space bar as fast as possible?
Maybe it's like a company where like there's like big gaps between people's names.
Maybe there's like maybe you're working at a job.
company where it's like suspenseful pause
incorporated. Maybe it's like,
I mean, I want to know what this job is now
where somebody out there is just like,
maybe it's a company
that had to cut costs because all
the enters on the
keyboards were broken and now they have to hire
people who can use space to get to the next line
because you can't just press return.
You can't just press, come on, come on.
And then that boss was like, you know, we need
people who can press the space bar.
Get me the fastest space bar
pressers in the world.
We found them. We found them.
But, you know, I mean, what's interesting, like, that actually, that suite of games was used by, like, multinational companies around the world.
We're talking, like, legitimate, not some random company.
You're saying this is used by, like, big name companies.
How fast can you press a space bar?
And that's one of the many games that you have to have to play.
And, you know, they say they're not actually, like, looking at your capabilities of hitting the space bar.
It's, like, finding out, like, you know, how risk-averse you are, like, what your personality is in all of
this.
Like, are you somebody who likes challenges or not?
I guess space bar sort of...
Who takes any order that you're given.
I'm sure even the...
That kind of stuff.
The time between you deciding are you going to press the space button or not, actually
maybe counts.
Oh, that's interesting.
Did you really think about this instruction?
I don't know if that counts, but I did talk to an industrial-organizational
psychologist who said, yeah, we looked at all of those things.
And actually, the people that take longer until they start playing, they're actually
less successful, but he said, we are not using that criteria.
Called it, my man.
That's my.
Called it.
You called it.
You did call it.
Yeah.
But so we don't know exactly, but, you know, all of these, like, every space bar hit
and everything that I do obviously gets recorded somehow and can be used.
But the question is, like, you know, on a good day, our personality is such a low predictive
measure for how good we are going to be in a given job.
Because it also turns out, like, I can overcome things in my personality, right? Like, I don't know
if any one of you have, like, tried this. You know, I used to be really shy. I didn't like to talk to
strangers. I know, it's part of my job, and I like calling people on the phone and chatting with them. But, like,
going to, like, a party, like a reception with actual people I don't know, and, like, going up to them,
it's like, I used to hate it. And then I was like, it's part of my job, and I made it a game to
challenge myself. So I was like, I'm gonna make a game for myself. You just walked into
parties with a keyboard and you're like, how fast can you hit this space bar? You win.
Nice to meet you. Nice to meet you. I'm Hilka. We can be friends. This is my research.
That would be, I should have done that. That would have been much more interesting.
But what was the game? What did you do? No, no, the game was that I had to approach strangers and, like...
You did this for yourself.
Yes, yes.
What was your reward?
My reward was just like, well, getting to know people and like learning about them.
I like this.
So this was how you overcame it for yourself.
You went, I'm afraid of speaking to people.
So I'm going to make it a game where I just walk up to a stranger.
What happened?
That's what I tell my journalism students.
What happened when it didn't go well?
Well, I'm still here.
So I was afraid I was going to get decapitated, right?
People are nice and they're like, but you know, I'm still here.
And, you know, sometimes people were just like.
and like just left me standing there
and I was like...
You realize it's not as bad as you thought.
Yes, but you see this is AI again
having, let's say if this was a program
you would score higher because you're a woman.
It's easier for women to do that than a man to do that.
Oh, that's interesting.
If I walk into a random room
then there's a bunch of women then I'm like,
hey guys,
I'm playing a game where I'm trying to be social.
Stranger danger, psycho.
But for women,
for a woman, it's much easier.
So the bias is kicking in again.
if I go to a Midwest town
as a black man from Africa
and I walk in there
there's truckers
and I go,
howdy folks
no one's gonna say hi to me
that was a good howdy though
you like that
you nailed that
I'm in
if my eyes were closed
when you walked in
close your eyes now
howdy folks
hey who's that over there
that was not bad
I'm in
darn
once I look up
things might change
So you see how biases are informing what the outcome ends up being?
But it wasn't a bias challenge.
It was just like a personality overcome challenge, right?
Because we all have like certain things that we like to do and we don't like to do.
You were biased.
They were.
On the other, on the receiving side of it, they were like, here's a woman.
She's smart.
She's nice.
She's saying, hi.
Less threatening.
That's true.
Exactly.
So the bias is kicked in.
So the same applies when an HR manager is sitting across from
someone who they look at and go, I wouldn't want to be stuck with you in an elevator on the 14th floor.
But then that's at night.
Yeah, but then that raises the question then.
Is there ever going to be a world without bias?
And is that what we should be looking for?
I mean, look, we can all wish, but we know that that's, that's never going to happen.
Like, we humans are bias machines.
Yeah, but now that the machines, but now that the machines are doing the job, could it be possible?
And I know, I'm not saying it will, but I'm saying could it be possible that the AI, because here's what I think about in what you're saying.
We're living in a world where we know that biases exist.
We know, right.
So whether it's in courts, whether it's in law enforcement, whether it's in jobs, whether it's in schools.
Doesn't that?
We know that bias.
Social settings.
Bias exists.
Right.
Now, AI has gotten involved.
and we see the AI mirroring many of our biases.
Yeah.
But the difference is with AI, we can actually see it.
We couldn't see it before and we couldn't like prove it.
We had to conduct like weird studies.
Before you couldn't say this company didn't hire anyone because they didn't say baseball
or because they had women or because they said black.
But now you can, you can actually look at the data and go, oh damn.
And I sometimes wonder if it'll be easier.
And again, this could be the optimistic side of me.
But I sometimes wonder if it could be easier for us to address bias in society
because we actually have concrete data now that shows it.
And we get to blame it.
We don't have to blame each other.
We'd be like, oh, my God.
The racist AI did that to you.
I'm sorry, my friend.
AI is a Trojan horse.
You're right.
Do you see a world where that's possible?
Yeah, yeah.
I mean, I wish companies would actually look at these tools more closely.
I think the general notion, though.
That's interesting.
Is they buy it from a vendor, and the vendor sort of, like, you know, services the algorithm over time and makes sure it still runs and there's less bias.
Like, they check if there's, like, gender and, like, very basic racial bias in there.
But they never look at, like, you know, does it let people with disabilities through, or something like that, right?
And also, we don't see a whole lot of companies actually checking how the decisions are being made.
And I think that's sort of where the problem lies.
Like, if somebody would actually look at the thousands of keywords
resume parsers use to predict if you're going to be good at the job,
they would find those keywords that we learned about from lawyers and other places.
And, you know, those are keywords we shouldn't be using.
We should be looking at like your skills and your capabilities
and not if you are on the baseball team or not.
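The basic bias check mentioned above, looking at a tool's selection rates by gender or race, is often done with the EEOC's "four-fifths rule": if one group's selection rate falls below 80% of the most-selected group's rate, that flags potential adverse impact. A minimal sketch with invented pass rates:

```python
def adverse_impact(selection_rates):
    """Flag groups whose selection rate falls below 4/5 of the best
    group's rate (the EEOC 'four-fifths' rule of thumb)."""
    best = max(selection_rates.values())
    return {group: rate / best < 0.8 for group, rate in selection_rates.items()}

# Hypothetical screening-tool pass rates by group:
rates = {"men": 0.30, "women": 0.18}
print(adverse_impact(rates))  # women: 0.18 / 0.30 = 0.6 -> flagged
```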
Yeah, yeah, yeah.
And I came to this as a human.
I remember like for the first time talking to a lawyer about this.
And I was like, well, maybe the AI had found something that humans couldn't.
That like, in this case it was playing lacrosse in high school that was like,
a predictor of success.
And I was like, maybe it found out for this like whatever insurance job or sales job.
It was really good to play, you know, to play lacrosse in high school.
It found this like hidden gem that we humans couldn't.
And the lawyer started laughing and he was like, God, you think like a human?
I was like, really?
He's like, it's a pattern machine.
It does a statistical analysis for whatever reason, like playing lacrosse in high school,
a bunch of people who were in the job had that criterion.
It doesn't mean, yeah, it doesn't mean that.
that like lacrosse has anything to do with your success.
And in fact, he's like, well, if it's like playing team sports,
what's with all the other team sports?
Like, why weren't they included?
Why do you get more points for baseball and fewer points for softball?
But isn't it essentially, I think, as a non-American, the same game, just a bigger field?
You see, if you want to be on my team, minus points for both.
Like how you saw pickleball and cricket ball.
We call it beach ball.
Don't bring pickleball into this, please.
Let's not bring.
Trevor, doesn't want to talk about pickleball.
Don't press anything.
We've got more.
What now after this?
You know what I realized speaking to you guys about this?
Because I wanted to know as little as possible about the topic
so I can get enlightened in real time.
How's that going?
Very well.
Because I've worked in retail before in South Africa.
And I've realized that HR has always been the enforcer and the goon of the corporation.
Because when you come in, they're the first people to ask you,
what do you like?
But basically they're trying to see, do you want to fit in?
here and be here, and where you come from, first. Then when you get let go, you do what they call
an exit interview. And that will help them not hire a person like me ever again. So I use
public transport. I went to a township school. So they knew that all of those factors and my age
as well and how long I stuck around in that job. So they know the propensity of me sticking around
longer or doing something wrong or right, according to them, is based on how long I stayed and where
I come from. And what changes I've made in my life since I started working there. So this,
could predict if someone earns this much for this long at this age from this background,
the money will start becoming too little for them to be here.
So AI now is doing that at a rapid rate.
Instead of saying we don't want women, it will cut out words like soccer and blah and blah and blah and blah.
And then the people that say those words maybe, they get hired because likelihood is they are men.
How many kids do you have?
How far from the job you live?
And what are you willing to do for this job?
I was going to say, like, you know, like, when,
when you think about it, like how these kinds of statistics and prediction works,
it precedes AI by a long time, right?
Like, we know statistically that if you have a longer commute to your job site,
you are much more likely to quit statistically.
But is that fair?
And, you know, we've seen companies trying to use this, like zip codes and stuff,
to then say, like, okay, well, we only hire the people that are, you know,
live in the zip code riding around our store location because they are less likely to quit.
But, like, that's a criterion that has nothing to do with the job.
It doesn't say anything about your capabilities and if you're going to be good at the job.
It only says something about your situation.
Yeah.
And, you know, and also like, well, first of all, like there are people who do the two-hour commute each way and they do a fabulous job.
So you're cutting out all those people.
And it's not their fault.
And then on the other hand, you also have to look like we live in very segregated communities in the United States.
There's historical redlining.
So if you, like, start taking out zip codes,
you might actually take out huge swaths of the African-American population or Asian-American population.
I think that's true around the world, to be honest.
It doesn't matter where you're from.
And we sort of see this kind of statistical bias get replicated again and again.
But now we have this like layer of objectivity.
And we don't interrogate the tools again to actually.
Plausible deniability to enforce more of the biases.
How did you know that?
I think it's plausible deniability.
of the companies that use it and buy from the vendor because then they can't be taken,
you know, it would be very hard to have a court case where you say, like, well, you knew that
your tool was biased against women, and there's, like,
two million women that applied to this company, and you used the biased algorithm on them.
That would suddenly give you, like, potentially two million claims.
That's why we see like sort of what I think is sort of a cloak of silence around this because
companies obviously don't want to come out.
I've had so many people who work in HR tell me, like after the book came out, you know,
oh, yeah, we use that tool that you talk about. And we, you know, stopped using it. And I'm like,
oh, really? I was like, well, that's good. I'm glad you did. They're like, yeah, we sort of realized
we had the same questions. We found the same things that you found. And we just didn't think it was
fair. And I was like, okay, can you talk about this? They're like, oh, absolutely not. But we need to
learn. Like, we'll never get better, or put pressure on the vendors to build better tools,
if we don't know how the tools work
and if there are any problems in the tools.
I just looked at a fraction of these tools.
Like, I tested some of them myself.
I worked with, like, scientists to test them.
I looked at, like, you know,
I spoke with, like, whistleblowers
and, like, lawyers who, like, work in the space.
But I have just a sliver of the whole sort of world out there.
Like, we need to do a whole lot more,
but I don't think it's in the company's interest.
They want something that, you know,
like sort of saves them money in HR.
It's always a cost center.
HR never generates money.
or talent acquisition however you want to call it.
And so in a way, they want to save more money,
have less labor involved.
And they don't want to hire people to know stuff,
like picking apart the algorithms,
because then, you know, it might turn out not to work.
And what are they going to do then?
They just spend so much money in it.
So when you look at what they're doing,
you know, it seems like,
and maybe I'm going to a dystopian conclusion,
but I've read through some of the companies
that you've investigated and some of the tools
that they've used, it feels like it's becoming more and more pervasive. So first companies just
looked at what you submitted to them, your resume. Then companies started scrubbing what the world
knew about you. And then now because of the way data is shared, I'm even seeing stories where they're
saying some companies may be able to go, you know, as far as your social media. I mean, one of the
craziest examples I saw, which I don't know how true it is, is like your Uber rating is a possibility
in a future, which sounds like something
China was doing or trialing, by the way.
Yeah, yeah, with the social.
Yes, remember that.
Yeah, yeah, basically in China.
If you have a high social score, you get to travel
and you get like certain benefits of society.
But if you, like, jaywalk, you know, don't visit grandma...
That's me.
No, really.
And so, but now when I think of that, I'm like,
are we heading towards a world where a company can hire you
or fire you looking at your Spotify playlist going,
Oh, this? Oh, no. Oh, yeah, yeah. I mean, look, some psychologists say that like the way we behave is very predictive.
And they can certainly find certain ways. Like there was a big finding a few years ago. And I think it was, like, that a lot of computer scientists are really into manga comics.
So the question is like, well, if you look at the resumes, should you hire the people that like manga,
because you know they're going to be good computer scientists.
But what is with the people who are great computer scientists who just are not into manga?
Like, that's not fair to those people, right?
So, like, that's sort of the problem with these shortcuts.
But I sort of do feel like there is a dystopian vision that, like, you know, I sort of felt
like at one point, I was like, wow, maybe at one point we're just not even going to do a job
interview anymore.
A company will just tell you if you're hired or fired or if they don't want you based
on all of the social exhaust, the data exhaust, we sort of leave around.
and companies can predict who we are.
It turns out we did test the sort of personality testing
that is being used on social media.
It doesn't work.
But it's still being used.
It doesn't actually stop people from using shitty technology.
That's sort of the bad part here, right?
But it doesn't actually work to predict what the people are doing.
It does make me think of a dystopian world, though.
Like just this idea that you will be hired before you've ever.
applied for a job.
I just think of like us in the year 3,000 or something,
and a van just pulls up, the door opens,
and they're just like, welcome to the job, Eugene.
We know you better.
We know you better than you know yourself, soldier.
And you're like, what are you talking about?
Yeah.
But you might not even be wrong.
In my conspiracy mind, I'm thinking that AI tools are just a big giant facade
for data harvesting.
Companies know if what they are offering to the
public is still viable.
Learning institutions know who are the most likely candidates for them to start giving or
keep giving the courses that they're giving because we forget that high learning institutions
are just businesses as well.
Oh yeah, totally.
And some of them use this kind of technology.
To find out.
One-way video interviews.
And yeah, I mean, I think what fundamentally comes down to, it's kind of funny.
What I have learned by bringing AI into the talent acquisition hiring space, I learned
like how bad our old processes are, like job interviews, actually really bad.
Because it sort of filters out the people who are good at talking about doing the job.
As opposed to doing the job.
So we have this, like, confidence versus competence problem.
Like people who like come off as like confident, we often think like, well, that person speaks so confidently about.
They must be really good.
It turns out, like, those are more often than not men.
And that doesn't mean actually they're competent.
So we sometimes conflate the two.
What, us men? No.
Never.
What?
So, you know.
Little old us, Hilka, no.
As always, not all men, but a lot.
Us little old men,
acting like we know more than we do, us?
Come on, Hilka, ooh.
Mansplaining, what?
Wait, I think.
I think this is highlighting, yet again, the same point, of saying that biases have gotten
us this far.
I've often heard people who go, if I'm in a criminal trial and I'm thinking of what kind
of lawyer to get, I want someone who's talkative, who's out there, who's loud.
But the person who handles my finances must be quiet, you know, reserved and frugal,
and they'll know how to handle my finances.
You know what I'm saying?
Yeah, yeah, yeah.
I've never heard about this talkative lawyer, but I'm sort of like a-
Yeah, we have someone who goes, razzle-dazzle.
We've seen the lawyers that represent rappers.
Charisma.
Yeah.
Yeah, charisma.
Yeah, you want, and it's interesting to exactly what you're saying.
If I hear you correctly, you're saying, in a way, it seems like we are expanding and scaling
on a foundation that was already broken.
Yes, absolutely.
The way we hired was already broken.
Like, job interviews are broken, and, you know, resumes
have very little predictability.
Because, you know, like, you put certain, like things, you need to have this skill and this skill
in the job.
And of everyone who applies for the job,
99% of the people will have that on their resume.
And you can't find things like teamwork.
Are you a good collaborator?
Yeah, you don't know.
How are you going to know that from a resume?
How are you going to know that from a job interview?
You can ask questions like, well, tell me how you overcame, you know,
a really challenging situation at work.
But you can train for that.
Like, the best way, you know, one of the best ways to predict if you're going to be successful,
this will come to no surprise to anyone, is to put you in the job.
And then you can find out if you're going to be good at the job.
Hey, look at that.
It totally doesn't work for most companies to hire 100 people and then let 99 go at the end of the month.
But sort of my hope sometimes is like, wait a second, like we have virtual reality.
Like we have other ways like could we put people in the jobs and actually have them do the jobs, the most important parts of the jobs?
And then just see how they actually are at the job.
And I think that would also give candidates a way to sort of understand better what is this job actually.
Have you suggested this to companies?
Because this is, I like this idea.
You?
I really do.
You?
I really do.
I do.
I think it turns, you know, I do think it is a little bit more complicated than just
what I'm saying.
Because, you know, like a lot of jobs have different, yeah, they have different capabilities
and different things that you have to test for.
Right.
And some of it is hard to test.
But we need to be better.
Or some, like, total cynics in this world have sort of suggested that, you know what?
If you want to hire, use a random number generator.
Because that is at least
fair. You have the same fair chance
as you, you and you
to get
picked.
It's also a way to go bankrupt as a company.
I mean, that's like a, whew,
I'm all for like,
but that's also like chaos.
There's random and there's chaos.
Do you know what I mean?
If you're going to say to people,
you have a random number, just bring the person in.
Yeah, I don't.
If they have the basic capabilities.
Oh, okay.
So you're going, okay, so you're going basic capabilities
and then, like, you've got the qualifications
and then it's random.
I'm in for that.
I'm down.
Yeah.
Try that.
I'm down.
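The cynics' lottery proposal, as amended in the exchange above (basic qualifications first, then a random draw), is simple enough to sketch. Candidate data here is invented for illustration:

```python
import random

candidates = [
    {"name": "A", "qualified": True},
    {"name": "B", "qualified": False},
    {"name": "C", "qualified": True},
    {"name": "D", "qualified": True},
]

def lottery_hire(pool, openings, seed=None):
    """Filter to candidates who meet the basic bar, then draw at random:
    every qualified candidate gets exactly the same chance."""
    rng = random.Random(seed)
    eligible = [c for c in pool if c["qualified"]]
    return rng.sample(eligible, min(openings, len(eligible)))

print([c["name"] for c in lottery_hire(candidates, 2, seed=1)])
```

The fairness claim is just that: conditional on clearing the bar, no keyword, zip code, or name can tilt the odds.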
Wait, so, but you know what I want to move on to is like the, we're talking a lot about hiring.
Yes.
Your work really delves into keeping the job, which I think a lot of people aren't aware of and might
even be more terrified to find out about.
Oh, yeah.
What we see at the surveillance at work.
Yeah, like, for instance, and I know there was an explosion of this during COVID, once people
working remote and then companies are like, we need software to know whether people are actually
in their underpants or not,
and we need to figure out
what people are doing at home.
But now, companies
are starting to deploy AIs
that not only see how, like, active you are,
but they try to predict
whether or not the company should fire you,
not based on what you're doing now,
but what the company thinks you might
want to maybe do or not.
Yeah, I mean, I think it's often, like,
you know, it's called like a digital neighbor or something,
like sort of like the idea is,
like, you were a vice president of sales of North America,
there might be a vice president of sales in Europe.
And one of them is like might be more successful or not.
That's actually kind of vague and hard.
But for the sake of this example, we'll assume,
okay, maybe the European person is better at their job.
And so then in AI will like sort of take in all of the digital traces that you leave.
How many emails you send, how many Zoom meetings you attend?
Are you a bully in Zoom meetings?
Do you speak up?
Like, you can kind of assess a lot of different things,
and then tell the person in the U.S., like, hey, the person that does your job in Europe and, like, sells more or whatever, like, is more successful.
They do this.
Why aren't you doing that?
It's sort of like a clone of like looking at all of their everything that gets recorded.
And, you know, it's sort of like, I don't know.
We have different ways to be successful.
Like maybe you write 500 emails.
The next person is successful by doing like 100 in-person meetings a week.
That's probably not possible.
But, you know, maybe they do 50 a week.
Who knows? But we sort of, and you know, what does it mean to be successful? Like we had this, like, whole thing.
You probably don't remember this, and I might be dating myself, but there used to be, like, algorithms in New York City to assess teachers, like 20 years ago or so. Like, every parent was like, I want to know about my kids' teachers. Well, it turns out, like, these algorithms were terrible. And a lot of teachers were, like, put in rubber rooms because their students didn't gain enough knowledge in a year. But it could be that the students were already ahead at the time.
The teachers were put in what?
They're called rubber rooms when like when like teachers were not in the classroom anymore
but they were still on the payroll of the Department of Education.
They called them rubber rooms at the time.
Yeah.
Because in my head I was like picturing a room.
You had to go somewhere to work.
I was like a rubber.
No, it sounds like a cell.
I think it wasn't a cell.
Okay.
No, because I just went through that.
You're like they put the teachers in rubber rooms and then I was like, wait, they did what to them?
Oh, they just called it a rubber room.
Huh?
I don't actually know.
history of that. Good question.
Yeah, I want to know.
You want to know about the rubber rooms.
Yeah, no, no.
If someone's taking me to a rubber room, I want to know what a rubber room is.
I would actually.
You would love to go to.
Oh, wow.
Oh, my God.
I don't, I don't know if I want to go there.
Wow.
You get to get paid for free.
You don't have to do nothing.
It too.
Wouldn't you want to be in a rubber room?
Play squash.
Play.
In a rubber room.
In a rubber room.
I still can't.
I still can't believe how digital peeping Tom and the digital tattletales.
It's just everywhere now.
Yeah, it is everywhere.
I mean, you know, it starts super benign
with like your green light on your email.
Like, are you active or not?
That's sort of like a way.
Yeah.
And then we see when people realize, oh, everything gets recorded.
We see sort of what we call productivity theater.
You know, that people like...
Slow it down.
Do you say productivity theater?
Yeah.
It's like sort of gaming the algorithms.
So you sort of...
Acting like we're busy.
Exactly.
So like in the morning, like you check in on Slack and be like, hey, everyone, good morning.
Like 7:45, crazy.
And then you turn around and take your dog for a walk and then you don't show up at your desk at 10.
But smoke screens.
You were, like, you were productive at 7:45.
Well, you know, an algorithm will now be able to understand that you haven't sent any emails in an hour.
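The kind of inactivity flag being described could be as crude as the following sketch. The one-hour threshold and the idea of keying off the last sent email are assumptions for illustration, not a real product's logic:

```python
from datetime import datetime, timedelta

# Crude sketch of an "idle worker" flag: no sent email within a
# threshold window counts as inactive. Threshold is invented.
IDLE_THRESHOLD = timedelta(hours=1)

def is_idle(last_email_sent: datetime, now: datetime) -> bool:
    """Flag a worker as idle if their last sent email is too old."""
    return now - last_email_sent > IDLE_THRESHOLD

now = datetime(2026, 2, 26, 10, 0)
# Checked in on Slack at 7:45, walked the dog, last email at 8:05:
flagged = is_idle(datetime(2026, 2, 26, 8, 5), now)       # idle
active = not is_idle(datetime(2026, 2, 26, 9, 30), now)   # recently active
```

Note what the flag actually measures: email cadence, not work. That gap is exactly what productivity theater games.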
Can I tell you what you've just done though?
You have in a single sentence unraveled one of the greatest mysteries I have struggled with.
working in an office.
I remember the first time
and only time I worked in an office.
I was always shocked by how some people
were just constantly sending
emails and messages.
And I always felt like
they were unnecessary
and they were always at random time,
sometimes on a weekend.
I was like, but now when you put it that way,
I go, they weren't working.
They were trying to maintain
the appearance of working.
Productivity theater.
So you just like, yeah, you send a message at 6 a.m.
And people are like, man, you up at 6 a.m.
Yeah, wow.
Emails at 3 a.m.
What the?
Well, you just don't stop working.
Yeah.
And, you know, I do think that like...
Meanwhile, you just left the club.
Send.
Schedule. Schedule send.
Schedule send.
Oh, schedule.
Look at this.
Yeah.
But, you know, think about like the office was like sort of always a place to look for productivity, right?
because you had a manager looking at everyone who's working.
And if you left early, that was not so good.
Even though, you know, we know that some people just, like, sat at their computer,
surfed the internet, didn't do any work, but they were physically at their seats.
We didn't have the technology to actually like sort of see every one of their clicks
and what they're doing.
And now we do.
And sort of we can sort of look at everything you do.
But like the question is like, is this kind of analysis really meaningful to understand
how many emails you sent?
Does that actually have anything to do if you are productive or successful?
What does successful in this job mean?
Those computer systems you're speaking about,
I remember reading about how warehouses are also using it.
Like this is something that I hope people understand
will be pervasive across all jobs.
Because if you work in an office where you're using a computer,
they can track your clicks, they can track your typing,
see what you're doing and how you're doing it.
But in warehouses, I've seen that now they're
employing AI camera systems that see how many employees take bathroom breaks or don't take bathroom breaks,
I swear, how long you spend in the bathroom, how quickly you actually move one package over to the next.
And they look at that.
Different algorithms.
How many of like items do you put in a box per minute per hour?
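The warehouse metrics just listed, packages moved, items boxed per minute, time in the bathroom, reduce to simple rate arithmetic, which is part of why they're so easy to automate. The numbers and field names below are invented for illustration:

```python
# Invented example: a warehouse shift log reduced to rates.

def items_per_minute(items_packed: int, minutes_worked: float) -> float:
    """Packing rate: items boxed per minute of recorded work time."""
    return items_packed / minutes_worked

shift = {
    "items_packed": 1440,
    "minutes_worked": 480,     # 8-hour shift
    "bathroom_minutes": 22,
}

rate = items_per_minute(shift["items_packed"], shift["minutes_worked"])
# A system like the one described would compare `rate` and
# `bathroom_minutes` against peers and flag the outliers --
# without knowing anything about *why* someone is slower.
```

Which is the point made next in the conversation: the system never states the real reason, only that your numbers fall short of your peers'.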
Imagine your bladder.
Your bladder is the reason that, because you've got a smaller bladder than another person, you're getting fired.
Technically that would be illegal.
Yeah, but they wouldn't say it's because of that.
Because they would just go like you take excessive bathroom breaks.
Yeah.
Or you're falling below your productivity numbers.
Exactly.
Because the other people around you, they're hitting these numbers.
Why aren't you hitting those numbers?
As a conspiracy theorist, I'll always say, who, who the hell is benefiting from this?
Who is?
Because I look at COVID and you explain to me how tough COVID was in the city.
Yeah.
But if you look around the world, how many running shoes have suddenly come into fashion?
How many running clubs?
How many running apps are being used?
how many outdoor activities, hiking, you name it, that people have now invested themselves in, and are investing a ton of money in, because they missed being outside so much when it was taken away from them.
Could it be that people that fund startups are now having the time of their life because they realize there's these educated people who are trying to get into the job market with these kind of expertise and these kind of interests, but maybe they're not going to get in there.
So how about we give them a hand and make money out of them?
Sure. I mean, I think, like, the way we see it, this kind of technology usually benefits the companies, because that's where the money is, right? Like, is an individual going to buy, like, a success AI? We don't really see it. It's not really a market, right? It's the same way for job applicants. There is some AI where you can sort of test your resume against the job description. But that's vastly outnumbered by AI for
the vendors, the people that make the employment decisions, those folks, because that's where
the money is. Like I sometimes dream of, like, you know, we were talking about bias. And I was like,
you know, wouldn't it be cool if you have like a bias detector in job interviews that pings the
hiring manager? Like, stop talking about your schooling. Like, you know, this is like where bias creeps in
or at least analyze afterwards. So you get like real time feedback. Like, hey, you shouldn't really
ask those questions. Like stick with the structured interviews in a job interview, for example. And
we don't see that, because I don't think there's really a market there to do that yet. You know, I sometimes
feel like, you know, wouldn't it be cool? Like, I have a young kid, so, like, if you're a parent and you have
a little AI who's like, hey, you really shouldn't get so upset with your kid, you should really say, I like
how you did this and this. But, like, I think a lot of parents wouldn't want to do that, because as soon as
you have the data, somebody else, like child protective services or whoever, can come in
and look at that and be like, the way you talk to your kid, no good.
Like no one wants that, right?
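The interview bias detector imagined above doesn't exist as a product, but its core check, whether an interviewer's question drifts off the structured script into territory where bias creeps in, is easy to sketch. The topic list and the naive substring matching are entirely made up for illustration:

```python
# Entirely hypothetical sketch of the "bias pinger" described above:
# flag interviewer questions that drift off the structured script.
# Topic list and naive substring matching are invented.
OFF_SCRIPT_TOPICS = ["school", "family", "kids", "hometown"]

def flag_question(question: str) -> list[str]:
    """Return the off-script topics a question touches, if any."""
    q = question.lower()
    return [t for t in OFF_SCRIPT_TOPICS if t in q]

flag_question("Which school did you go to?")         # flags "school"
flag_question("Walk me through a project you led.")  # on script, no flags
```

A real version would need far more than keyword matching, which is partly why, as Hilke says, there's no market for it yet.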
You're not fit to be a parent.
We'd love to hire you as a manager at our company.
How's your bladder?
You have the personality to enforce the algorithm.
Actually, let's talk about that then.
As somebody who's investigated and gone down all of these rabbit holes,
as somebody who's seen how AI is affecting who gets hired and how you get hired,
who gets to stay in the job and how they get fired.
Yeah.
As somebody who's done all of this work, I'd love to know what you think some concrete solutions could actually be, like where we see progress, where we see solutions.
Is there something, let's break it down.
Is there something lawmakers can do?
Is there something that companies can do?
And then is there something that just workers can do?
So I do think there's room for improvement in all levels.
So I do think that there could be better laws
here. For example, what we see, you know, the funny thing is, like, I am originally from Germany,
but I remember talking to the former head of talent acquisition at Vodafone, which is a huge
telecommunications company in Europe and other parts of the world, not so big in the U.S.
And he was laughing. He's like, you know what? Like, we use AI and hiring now. And when you want to
upload your resume, there's like Germany and the rest of the world. Because Germany has this one
funny thing that, like, once you're working in a company that
has, I think, more than five employees, they can have a workers' council. It's not a union.
Sounds like it, but it's different. And the workers' council, there's actually a law, and they get to
co-decide on technology in the workplace. So some of the surveillance technology, we don't see
happening in Germany because this workers' council has to be notified. And I think a lot of companies
shy away from using some of these very intrusive AI tools. But in the United States, for example,
like anything that happens on a work computer
belongs to the company.
So don't do it there, like private Slack messages,
private surfing, all of that can be recorded
by the company and it belongs to them.
So you want to be very careful of that.
So I think there needs to be many more privacy protections
and I think companies should be mandated
to tell the employees what kind of software they use on them.
So for example, some of it is like very basic,
but like if you suddenly print a lot,
that might be an indication that you're a flight risk.
So maybe the company lawyer
should be looking into what you're moving off your computer.
Like those kinds of, like, sort of digital tattletales.
You know, I think companies should tell us.
And maybe there should be a way for, like, employees to take part in co-decision making.
Some of the time, you know, if you're working in a nuclear power plant, maybe you do want
AI to scan for like exposure to radiation.
I would want that.
So, you know, there might be cases where this is like actually really helpful.
And maybe everyone agrees that like, you know what?
Printing is a problem.
You shouldn't be printing so much, and you shouldn't, like, move files, and that could be an indication
that you're leaking, yadda yadda. Like, maybe, maybe we can make a decision together. But we don't, we don't
see that. So it's, like, all top down, and people, you know, these kinds of tools and decisions
are being used on them, and they don't even know it. And I think that's really unfair, and there's
no way to push against that. I think also, like, companies need to be much more skeptical when they
buy these AI tools, not believe the hype that this is going to solve all their problems,
that they're going to hire the best people. Like, actually show me, show me the evidence.
Like, show me how it works. I'd be happy to look at it. And, you know, I'd be open to it.
Like, maybe an AI is better. Wouldn't that be great? But we need to know. We don't actually
know that kind of stuff. So we need to interrogate these algorithms, understand the processes
underneath them, and really critically assess them. I think that's where maybe humans are coming
in in this world. So we need to be much more skeptical there. And then,
As, like, the applicant for jobs, that's the hardest part, because there isn't necessarily something you can do except, like, call your congressperson, and sort of be aware of what is out there, and, like, try some of the tools.
Like, you know, there's definitely better ways to, like, have a machine-readable resume, and there's things you can do.
But you know, when, like, 5,000 people apply for one job and they close the job portal after 24 hours,
Yeah.
there's nothing we can help you with there.
Like it's sort of like a bigger societal change.
To be much more skeptical about these tools and put pressure on lawmakers,
decision makers to do a better job here.
And to just be more transparent.
Like one of the stories like of Martin came through because he lived in the European Union
and knew about the laws and he asked for the data.
Like, there is a general data protection law.
And you can ask for the
data that companies have on you. And that's how he found out that the company used AI, which was
against the law, la, la, la. So he actually started a case and got a settlement. So that was like a gold
mine for me. I call him patient zero. Because he's sort of the first person who like encountered these
kind of AI tools in the hiring phase and then actually got the data on himself, right? That's like gold to
me. So we could sort of unravel and talk about the case because we had the data. And we don't have
anything like that, at least on a federal level in the United States. So there's like way more work
to be done to make this better. And I do think, in general, like, we talk a lot
about like sentencing guidelines with AI to send people to prison, should you get a mortgage.
And I think those are all very consequential decisions. And we absolutely need to take a closer look
at those, a look at them critically. But I also think hiring is really important too. Like it matters
if I can pay the bills. Like it matters if I can put food on the table. Like also like
happiness is tied to our jobs for many people. Like we spend enormous amounts of hours at our
jobs. So, like, it better be something we at least kind of like. So it matters if I get the job or not.
So we really should be scrutinizing these kinds of systems when they make decisions about humans.
If it makes decisions about my spam and it doesn't work, I'll find another spam filter. Like
fine, great use for AI. But for hiring and these
critical human decisions, where human lives are at stake, you've got to be much more
skeptical, scrutinize these tools. And then we probably have a chance of building a better world.
Well, I will say there's one part of the equation I'm very grateful for, and it's that we have
an intrepid investigative journalist who's doing the work.
Thank you. Sometimes you wonder, like, does my work have an impact? But I do think sometimes,
you know, when I show people, like my videos from eight years ago about like the emotion recognition
of facial expressions.
And they're like, wow, that could be so easily biased.
And I was like, wow, I guess our work sort of, like, has made a difference, because eight years
ago we were all looking at it like, whoa, who knew?
This is so cool.
And now everyone is like, oh, wait a second.
Like if they're only like, you know, more men than women in the data, la la la.
And I was like, wow, there is like sort of a much more education around AI and bias and all
of those things.
And I think it has made an impact slowly but surely.
Slowly but surely.
But I'll tell you, now I know for my next job, I've got something to think about when we get out there in the streets.
And from my side...
Please work with me on hiring.
Oh, no, yeah.
Thank you very much.
You know, from me, from Syria, from Canada and Thomas, we just want to say...
Let's flip a coin on hiring.
Let's do it.
And see how it works.
Okay.
Thank you very much.
Thank you.
What Now with Trevor Noah is produced by Day Zero Productions in partnership with Sirius XM.
The show is executive produced by Trevor Noah, Sanaziamen, and Jess Hackle.
Rebecca Chain is our producer.
Our development researcher is Marcia Robiu.
Music, mixing, and mastering by Hannes Brown.
Random Other Stuff by Ryan Hardoof.
Thank you so much for listening.
Join me next week for another episode of What Now?
