Hard Fork - A.I. Action Plans + The College Student Who Broke Job Interviews + Hot Mess Express
Episode Date: March 21, 2025

With all of Silicon Valley weighing in on the Trump administration's new action plan for artificial intelligence, Kevin and Casey break down what's on everybody's wish list. Then, a Columbia undergrad explains why he built a tool that lets coders use A.I. to cheat on job interviews. And finally, we climb aboard the Hot Mess Express to talk Solana's anti-woke ad, anxious A.I. algorithms and Slack spycraft in the world of H.R.

Guest: Roy Lee, Columbia sophomore and founder of Interview Coder

Additional Reading:
America Crafts an A.I. Action Plan
Meet the 21-Year-Old Helping Coders Use A.I. to Cheat in Google and Other Tech Job Interviews
Solana Pulls Ad After Huge Backlash
Digital Therapists Get Stressed Too, Study Finds
Rippling Sues Deel, a Software Rival, Over Corporate Spying

We want to hear from you. Email us at hardfork@nytimes.com. Find "Hard Fork" on YouTube and TikTok. Unlock full access to New York Times podcasts and explore everything from politics to pop culture. Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify.
Transcript
Casey, I have a bone to pick with you.
What's that? What'd I do?
So on Saturday, as you know,
we had a birthday party at our house.
Wonderful birthday party.
My son.
Yeah, and it was also a housewarming party.
Housewarming party.
And you and your boyfriend came.
Lovely to see you there.
Thanks for coming.
But you brought him this present.
We specifically said no presents.
You did say that.
And you brought in this present
that was called the Dino Truck.
Yes, and here's why.
Because I know that your son loves trucks.
And I thought, what is the best kind of truck
I could think of?
And that would be a truck that was also a dinosaur
that was full of dinosaurs.
And so that's what I got him.
It's very like, pimp my ride coded,
because it's a dinosaur truck that contains within it
12 other dinosaur trucks.
That's right.
And you sort of like assemble it all together.
But my son has not stopped playing with it.
He absolutely loves it.
And as a result, about twice a day,
I now step on a very painful dino truck
that has been left somewhere in my house.
So he's loving it.
I am not.
I mean, I think it was the best kind of gift I could get for the Roose family: something that your son enjoys and that causes you physical pain.
So I think that was a slay on my part.
Mission accomplished.
Mission accomplished.
Ha ha ha.
I'm Kevin Roose, a tech columnist at the New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, America is building an AI action plan.
We'll tell you how tech companies are trying to exploit it.
Then, Columbia University sophomore Roy Lee joins us to talk about the tool he built to
help software engineers cheat their way through job interviews and why he might get kicked
out of school over it.
And finally, the Hot Mess Express is once again rolling into the station.
[music]
So Casey, as you know, because you wrote about it this week,
there have been these AI action plans
that all the big AI companies and think tanks
and nonprofits have been submitting
to the Trump administration over the past couple of weeks.
Yes, there was the Paris AI action summit
at which no action was taken or really even proposed.
And then the White House came forward and said,
we're gonna make our own action plan.
And why don't you companies
and anyone else who wants to make a public comment,
go ahead and tell us what you think we should do.
Yeah, so these kinds of public comment periods
are not unusual.
Agencies of the government sort of opened themselves up
for submissions from the public
all the time on various issues.
But this one caught our eye because it was related to AI
and it was essentially the Trump administration trying to figure out what to do about AI and the
potential that AI is going to accelerate during the four years that Donald Trump is in office.
Yes, I think that's how the Trump administration saw it. And I think for the big AI companies,
Kevin, it was really a chance to present the president with a list of their absolute fondest wishes and dreams
for what the best possible deal they could get
from the government would look like.
Yes, so I think there's some interesting stuff in them,
but I also think there's kind of a broader,
interesting story about how the tech companies
want or don't want government to be involved
in helping them build and manage
these very powerful AI systems.
Yes, let's get into it.
Okay, but first, because this is an AI-related segment, we should make our standard disclosures.
Do you want to switch it up this week? Do you want to do mine and I'll do yours?
Yeah, sure. The New York Times is suing Microsoft and OpenAI over alleged copyright violations.
Correct. And Casey's boyfriend works at Anthropic.
That's right.
Okay, so you wrote about these submissions this week.
Where do you want to start?
Well, let's start at maybe some of the things
that are a little bit less controversial, right?
I think there are some pretty good ideas
in these action plans.
And I actually think the Trump administration
will probably follow through on them.
So for example, they talk about wanting to expand the
energy capacity that we have in the United States so that we can
have the power that it will take to do everything with AI that we
want to. They also talk about encouraging the government to
explore positive uses of AI, right, potentially deliver
better services to citizens, that would be good if that
happened. So there's a lot in these documents about that. But once you get beyond that surface layer, Kevin,
there is a lot of essentially what these companies have always wanted the government to tell them,
and they are now finally getting a chance to say, hey, please, please, please do this.
And what are those things?
So for example, they are really, really excited about the idea
that Donald Trump might declare definitively that they have
carte blanche to train on copyrighted materials. Now, this
is of course, at the heart of the times lawsuit against open AI.
But it's not just open AI that wants the green light to do
this, right? Because all these AI labs are under similar legal
threat. So it's in Google's AI action plan. It is in Meta's AI action plan. In fact, Meta says that Trump should unilaterally, without Congress, just issue an executive order and say, yeah, it's okay for these AI labs to train on copyrighted material, go nuts. OpenAI, I think, had a frankly ridiculous statement in their AI action plan,
which is that if Trump does not do this,
if he does not give AI companies carte blanche
to train on copyrighted materials,
we will immediately lose the AI race to China,
and it will just be DeepSeek everything from here on out.
Huh, I mean, obviously they have interest
in making that case and having the Trump administration
give them sort of a free pass, but can they actually do that?
Like could Donald Trump issue an executive order tomorrow and say, there's no such thing
as copyright anymore when it comes to the data used to train large language models?
Well, Kevin, lately the Trump administration has been issuing a lot of executive orders that people have said, well, hey, you're not allowed to do that. That's actually not constitutional. And yet he keeps doing it, and some of these things have been struck down by the courts and some haven't been. And there seems to be a kind of flood-the-zone strategy where we're just gonna sort of do whatever we want, and the courts may undo some of it, but they're probably not gonna undo all of it. So where would a copyright executive order fit into that? I don't know.
Yeah, I mean, my hunch is that this will not happen via executive order, that it will be left up to the courts to decide. But yeah, I mean, it's certainly in their interest to argue that this all should be allowed and kosher and to sort of preempt any potential litigation against them. Was anyone opposed to that idea?
Yes. So a group of more than 400 Hollywood artists, including Ben Stiller, Mark Ruffalo,
Cynthia Erivo, and Cate Blanchett signed a letter saying, hey, do not grant an exemption
from copyright law to these AI labs. And their argument was essentially, America has a lot
of cultural leadership in the world. You know, it's like so much global culture
is downstream of American culture.
And they said, if you create disincentives
for us to create new works,
because we can no longer make any money from it economically
because AI just decimates our business,
we are going to lose that cultural leadership.
And so I would actually call on Ben Stiller,
Mark Ruffalo, Cynthia Erivo, and Cate Blanchett
to come on the Hard Fork podcast and tell us more about that.
We'd love to meet you and hear your stories.
Yeah, I would call on them to frame their opposition
in the form of a musical.
Cynthia Erivo in particular.
I have a proposal for the sort of
showstopper tune of that musical.
Have you written it?
Yeah.
It's called Defying Copyright.
Oh boy, wow.
You wouldn't even try for a rhyme.
You know, when it comes to copyright violations,
Cynthia Erivo is decrying depravity.
And that's how you do it, Kevin.
Okay, back to the serious issues
in these AI action plans, Casey.
Yeah, there's another big plank that gets repeated in these submissions, Kevin, and
that is this idea that these companies do not want to be subject to a thicket of state
laws about AI, right?
Yes.
Basically, what the AI companies don't want is, in the absence of strong federal regulation
on AI, they don't
want California to pass a bill governing the use and training of large language models,
Texas to pass a bill, Florida to pass a bill, New York to pass a bill.
They don't want to have to kind of go through 50 states worth of AI regulations and making
sure that all their models comply with all the various state regulations.
So they have wanted for a long time and are now making explicit
their desire for a sort of federal law or statute or
executive order that would essentially say to the
companies, you don't have to pay attention to any state laws
because the federal law will supersede all that.
Yes. And in particular, Kevin, they are worried about state
laws that would make it so that these companies could be held
legally liable in the event that their products
lead to great harm, right?
There was some discussion about this in California last year
with a Senate bill that we've talked about on the show.
And there's a lot of fear that other states
might take a similar approach.
And so this plank in these plans, Kevin,
where these companies are saying, we don't want a thicket of state laws,
it kind of works in a couple different ways. I can understand
why they don't want to have to have a different version of ChatGPT in 50 different states. That would obviously be very, like,
resource intensive and annoying. At the same time, these
companies know full well the country they live in, they know
how many tech regulations we passed in this country in the past 10 years.
There is only one of them and it was to ban TikTok.
And it turns out that even when you pass a law banning TikTok,
TikTok doesn't get banned.
So I think that there is a bit of cynicism here
and that they're saying, oh, please, please, please,
let there not be any state laws, just pass a federal one.
They know that there is very little likelihood
that that is going to happen anytime soon.
And so in the meantime, they can just operate under
the status quo where they don't have direct legal liability
for any bad outcomes that might arise from a future
large language model.
So I went through a lot of these proposals
and I think there's some interesting stuff in them
sort of around the edges.
There was a lot of talk about the security of these models
and trying to sort of harden the security
of the AI companies themselves.
So that for example, foreign spies
aren't stealing the model weights
and sending them to one of our adversaries
or things like that.
By the way, I love that word.
Oh, we have to harden our defenses.
We have to make them so hard.
We have to harden our posture.
I don't know when we started saying that.
Okay, so this is a family show.
It's very evocative is all I'm saying.
Anyways, go on.
So there's some sort of small bore stuff in there
that felt interesting.
Small bore, by the way,
two words often used in reviews of this podcast.
I don't know why I keep interrupting you.
I'm just trying to get the energy level up,
but we're doing great.
That's fine. All right, tell us more.
So some of the plans contain
some sort of weird,
interesting ideas.
Like, for example, in OpenAI's proposal,
there's this idea that 529 plans, which are the plans
that parents can start to save for their child's college
education, should be expanded so that they
can be used to pay for things like getting an HVAC technician
credential.
Because they say, we're going to need a lot of
HVAC technicians in all these data centers.
They're going to power all these AI models.
And right now, you know, kids are being incentivized to go to college and get four-year degrees in, you know, various subjects that may not be that relevant. But, like, we're definitely going to need a lot more HVAC technicians.
Is that going to change the world overnight?
No.
Is the Trump administration going to take that seriously?
I have no idea.
But that's the kind of thing that I was surprised to see in there.
But what I found more interesting was what was not in these proposals.
These companies and the people who lead them have big radical ideas about how society will
change in the coming years as a result of powerful AI systems.
Sam Altman has been interested for years in universal basic income.
He funded a universal basic income experiment to try to figure out
what an economy after AGI would look like and how we would provide for people's basic needs.
There are executives that are trying to solve nuclear fusion to power the next generation of AI models.
There are people who want to do things like WorldCoin, which Sam Altman also funded, to sort of give people a way to verify that they are humans.
You can imagine a world in which the AI labs were saying to the government and the Trump administration,
hey, we have all these ambitious plans, we want your help. Please help us come up with
a UBI program that might make sense for people who are displaced by AI.
Help us come up with some kind of
national proof of personhood scheme or help us build fusion energy.
But they're not asking for that stuff.
What they're asking for instead is basically,
leave us alone and let us cook.
Yeah.
It really makes me think that these labs have
decided that it would be more trouble to have
the government in their corner actively helping them,
than it would help.
Yeah.
So my read of these proposals is that they are trying to
give the government some stuff that they can do that will make
them feel like they're helping and sort of clearing the path for AI,
but that they're not calling for any kind of like
federal Manhattan project for AI,
because my sense is that they just think
that would be inviting trouble.
Yeah, and I mean, they might be right about that, right?
I'm not sure exactly what the government could
or should be doing to like help OpenAI
make a better version of ChatGPT. But you know, I think
I would go a step further than what you said, Kevin, because
it isn't just leave us alone. They're really telling the
government leave us alone or else there is a boogeyman in
these AI action plans. And the boogeyman is DeepSeek. So DeepSeek, of course, is a Chinese company that emerged with a model called R1 earlier
this year that shocked the world with how much it had caught up to the state of the art and has
really galvanized the attention of Chinese leaders around the possibilities of what AI can do in
China. And so when you read the OpenAI and the Meta action plans in particular, they're saying, look at DeepSeek,
China is so close to us, you really need to let us
do exactly what we want to do in the way
that we are already doing it, or we're just gonna lose
to China and it's all gonna be over for us.
Yeah, yeah, I noticed that too, and I think we've seen
that being telegraphed at things like the Paris AI Summit
where there was a lot of talk about China and foreign adversaries that were catching up to state-of-the-art AI technology. But to me, that feels very calculated. Like, that is the role that the AI companies want the government to play. Other than just getting out of their way, they also want them to hobble China and make it hard for China to sort of catch up to the state of the art. And there's a genuine read of that that is like, we're worried about
Chinese companies getting to something like AGI before Americans and what happens if their
values rather than ours are embedded in these systems and they just use them for surveillance
on their own citizens and things like that. The cynical read is like, we have this new competitor and we would like the US government to step in and make things actively harder for that competitor.
Yeah, and look, I mean, I think there are reasons
to be worried about what an adversary could do
with a really powerful AI.
So I don't wanna dismiss these concerns completely,
but I do feel like some of these labs
are trying to use the specter of China
in a pretty cynical way.
My favorite story about this issue, Kevin, does have to do with Meta. So, you know, Meta writes in its proposal to the government a lot about DeepSeek. And Meta's number one priority in its action plan is that it continues to be able to develop what it calls open source AI. Now, Meta's AI is not actually open source.
There are a lot of restrictions on how you can use it.
Most people would call it open weights
instead of open source,
because you can download the model weights,
but not the actual source code.
Okay, we're a little bit in the weeds, but I do feel it's a point worth making.
Our listeners have fallen asleep.
Wake up!
Okay, so let's just wake up by saying that.
Meta says to the government,
look at what DeepSeek is doing.
If you don't let us develop in an open source way,
DeepSeek's own sort of open weights approach
could spread all across the world
and it will have these authoritarian values embedded in it
and we will just sort of lose out
on the opportunity of a lifetime.
Why is that funny to me?
Well, Kevin, it's because in November,
Reuters reported that Chinese researchers
had used Meta's Llama model
to create new applications for the military.
Oh boy.
So, you know, and look, does that mean
that China used Llama to build a giant space laser that's gonna vaporize the Eastern Seaboard? No, but it does suggest to me that this idea that we have to release quote open source AI
in order to save us all is probably not the right answer.
Yeah, if anyone from the Chinese military is listening to Hard Fork, please don't develop a space laser using Llama. That seems scary.
That's our AI action plan. No space lasers.
So before we wrap up talking about these AI action plans, I want to point to a few good
ideas that I saw in them. Many of them came from groups other than the big AI labs, but
I thought there was some interesting sort of off-the-wall stuff that I hope the Trump administration is paying attention to. One of them was this proposal from the IFP,
the Institute for Progress,
which is a pro technology progress think tank.
IFP says, we're going to need
a bunch of data centers and
a bunch of energy sources to power those data centers, but all that requires building physical infrastructure, and it can be quite slow to build physical infrastructure
in many parts of the country,
due to things like environmental regulations,
and zoning, and things like that.
So they proposed creating these things
called special compute zones,
where you would essentially be able to build
in a much less restricted way,
the infrastructure to power advanced AI systems.
That's actually what I call my office,
is a special compute zone.
When I see like guests going in there,
I say, hey, get out of there.
That's a special compute zone.
Yeah, so that was one interesting idea
from the IFP proposal.
I also-
Did the Institute Against Progress
have any interesting ideas you wanna share?
Well, there isn't an Institute Against Progress,
but there are some organizations
like the Future of Life Institute that are much more
concerned about the development of these powerful systems.
This is one of these organizations
that's been around for a while.
It's concerned with things like
existential risk and runaway AI.
One of their ideas that they put in their proposal was that
all AI models of a certain size and
power should have kill switches on them.
Basically, in order to release one of these things, you should have to build in a way
that an engineer can shut it down.
The way that they pitched this to the Trump administration was this is a way to protect
the power of the American presidency.
As the president, you wouldn't want some AI system going rogue and becoming more powerful
than you or allowing another world leader to become more powerful than you.
So you want to kill switch on these things in order to protect the authority of the American
president.
Yeah.
And you know, one of the most interesting things about all of these plans, Kevin, is the way
that the authors have to contort themselves to try to talk about AI in a way that the
Trump administration will actually listen to.
Vice President Vance in Paris in February
says explicitly that the AI future
is not gonna be won by hand wringing over safety.
They hate the term AI safety.
And so in fact, when you look at the proposals
of the major labs, they basically don't use the word safety at all, except maybe, you know, one time. I actually was doing, like, command-F to try to find instances of safety in these plans; you won't find it there. And so they have to sort of contort themselves. You know, in Anthropic's policy, it was almost like they were hiding medicine inside of peanut butter and feeding it to a dog. Because instead of talking about safety, they
would talk about national security, which is just another way of talking about AI safety.
But actually, a lot of their proposal is about
how can you build these systems safely.
It's just that they're saying, you know,
there's a national security implication.
Yes. So I think if we zoom way out from the specifics of these proposals,
the two things that I want to convey about this process,
one is that the AI labs mostly want government
to leave them alone.
The second thing is that I think the AI companies
are slowly and haltingly learning to speak the language
of Donald Trump.
And this is their sort of first major public attempt
to talk to the Trump administration
in the way that it wants to be talked to
about how to harness the power of AI for
American greatness or whatever
So I have a slightly darker view of this, which is that the Trump administration has essentially already told us its AI action plan, which is: go faster, beat China, right? That is the plan. And when given an opportunity to answer, what do you think the United States should do? The biggest AI companies all looked around and they said,
we should go faster and we should beat China.
Now, if it happens that the United States is able to build
a very powerful and very benevolent AI
and somehow, you know,
create and promulgate democracy around the world,
then okay, that's great.
But I think that there is a risk that this leads us into some sort of conflict or that
by going very fast, we wind up making a lot of mistakes and we're at a higher risk of
creating systems that we cannot control.
So if you are, you know, in your cars this morning listening to us wondering why did they talk so much about these plans?
This is the reason why: to me, this feels like an inflection point where some of
the most consequential figures governing the development of AI had a
chance to say we should be really careful and thoughtful about this and
they mostly did not.
Yeah, I think that's a really good point. Casey, what is our AI action plan? Because we have to be part of the solution here.
Two words: underground bunker.
I'm not telling you where it is,
but it's under construction.
How about you, Kevin?
I can't do better than that.
That's good.
Can I have a spot in your bunker?
Absolutely.
There will always be a spot for the Roose family
in the Hard Fork bunker.
Okay.
That's very sweet.
Thank you.
We're not bringing the dino truck.
When we come back: the college sophomore who has a cheat code for LeetCode.
Well, Casey, we've got a doozy of a story this week and an interview with a real live member of Gen Z.
Yeah, and we are excited to talk to this one.
This is a controversial story, Kevin,
but one that we think tells us a lot
about the state of the world.
So today we are talking with Roy Lee.
He is a sophomore at Columbia University.
For now?
For now, for at least the next couple of days.
And he has gotten a lot of attention in recent days
for something that he's been doing
to apply for jobs in the tech industry.
What has he been doing, Kevin?
So Roy has developed a tool called Interview Coder
that basically uses AI to help job applicants
to big tech companies cheat on their interviews.
So in a lot of tech interviews,
they do these things called LeetCode problems,
where basically the recruiter or the person
who's supervising the interview from the tech company
will watch you kind of solve
a tricky computer science problem,
and they'll do this remotely.
And so Roy had this idea, well,
these AI systems are getting quite good
at solving these kinds of problems.
What if you could just kind of like
have the AI running in the background
telling you how to solve the problem
and you could kind of make that undetectable to the company.
Yeah, and to prove that this works, Roy applied for jobs
at several big companies, including Amazon,
and he says, wound up getting offers from all of them
after using this tool, and after he began promoting
this story online, well, that's when all hell broke loose.
Yeah, so he has become sort of a villain to a lot of tech employers and people doing these
kinds of interviews, but he's become a hero to a bunch of younger programmers who think
that these practices, these hiring tests, these puzzles that you give people when they're
looking for jobs are outdated and that they need to be sort of exposed as being bad and wrong, and that we need to come up with something better to replace them.
Yeah, and Kevin, I am sure that some listeners are going to hear this segment and they are
going to email us and they are going to say, shame on you.
Why are you giving this guy a platform?
We shouldn't be rewarding people for cheating.
But I have to tell you, as we sat with it, we thought this is a story that tells us a
lot about the present moment.
The nature of software engineering is changing.
The nature of hiring is changing.
What should employers be looking for
and how should they test for it?
These questions are getting a lot more complicated
as AI improves.
And Roy's story, I think, illustrates
how quickly things are changing
in a way that is just honestly worth hearing more about.
All right, well, with that, let's bring in Roy Lee.
Roy Lee, welcome to Hard Fork. Hey, excited to be here.
So where are we finding you today?
It looks like you're in a dorm of some kind.
Yeah, yeah, I'm still in my Columbia University
dorm at the moment.
Possibly for not too much longer, is that right?
Yeah, yeah, I'm waiting on a decision
to hear if I'm kicked out of school or not.
So this might be my last few days.
And what's the over under on whether
you get kicked out or not?
From the facts of the case,
I would say it's not looking good for you.
Yeah, yeah, it is not looking too good for me.
But strangely enough, I've had some pretty powerful people
message me and say, hey, if they try to do anything,
then just let us know.
So, yeah, both worlds are in the realm of reality.
Wow, so I want to get to all the disciplinary drama,
but I want to actually take us back in time
to when this all started for you.
When did you get the idea for this tool, Interview Coder,
and what problem were you trying to solve?
Yeah, so I don't know how familiar you guys are with these LeetCode-style interviews. You're given a problem found on a website, leetcode.com. And you're given 45 minutes, and the task here is to sort of have seen the problem before, solve the problem, and be able to regurgitate the memorized solution while acting like you haven't seen the problem before.
So it's pretty much a really ridiculous system and type of interview and every single software engineer out there sort of knows it.
And everyone, if you want a job that pays a reasonable salary, then you're kind of forced to go through this gauntlet of spending a couple hundred hours on this website memorizing a bunch of riddles.
I myself went through the gauntlet. I grinded the website probably up until I was in the top 1% of competitively ranked users on the website.
So it was just a gigantic waste of time. I spent 600 hours of my life memorizing riddles when in reality I should have been programming.
And as soon as I kind of developed the balls to kind of do something, I just realized...
So you say you spent, you know, hundreds of hours on this website solving these riddles. I'm curious if you feel like it made you better at coding. Like, my guess would be, if you're truly in, you know, the top one percent of people who are using this website to solve problems, it would have made you pretty good at being a software engineer.
There might have been utility in maybe solving the first 20 questions.
Um, maybe like the first 10 hours on the website
might have had some utility.
But after that, it doesn't really help you at all.
The types of problems and the type of thinking
that you're expected to perform while doing these questions,
it's just you're never ever going to use it at a job.
All right, so you get very frustrated with LeetCode.
You start thinking about what you want to do next.
And tell us about the moment that you decided
to become the Joker.
Yeah, so during the recruiting process,
my interest in entrepreneurship was growing,
and at a certain point, it kind of got to a point
where I realized, like, hey, no matter what,
I'm only going to end up at a startup,
and I kind of have the balls to cut off all these bridges now
with big tech companies,
and as soon as I developed that mindset,
I realized that, hey, doing this thing
is not actually going to ruin my future
as much as I think it will,
and in that case, it just becomes a super viral thing that we know will go viral.
So tell us about the thing. Tell us about the tool that you built and how it works.
Yeah, so at a really core level, it's a desktop application that sits, it overlays, on top of all of your other applications.
And it's completely invisible to screen share.
The technology is actually very, very simple. You just take a screenshot of the screen and ask GPT, hey, can you solve the question you see on the screen? And it spits out the response. The cursor doesn't lose focus, and there's just a lot of bells and whistles we've used to make it completely undetectable that you're actually using something at all.
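To make the mechanics concrete, here is a minimal sketch of the screenshot-to-model loop Roy describes, not his actual implementation. The capture library (mss), the OpenAI client, and the model name are illustrative assumptions; the invisible overlay and focus tricks he mentions are platform-specific and not shown.

```python
# A minimal sketch of the loop described above: screenshot -> vision model -> answer.
# The mss capture library, the OpenAI client, and the model name are assumptions
# for illustration; this is not Interview Coder's actual code.
import base64

import mss
import mss.tools
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def solve_problem_on_screen() -> str:
    # Capture the primary monitor and encode it as a base64 PNG.
    with mss.mss() as screen:
        shot = screen.grab(screen.monitors[1])
        png = mss.tools.to_png(shot.rgb, shot.size)
    image_b64 = base64.b64encode(png).decode()

    # Ask a vision-capable model to solve whatever problem is visible.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Solve the coding question shown in this screenshot."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```

The hard part, as Roy suggests, is everything around that loop: keeping the overlay out of the screen-share capture and keeping the cursor focused on the interview window.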
So let me get a sense of how this actually works in practice.
So during an interview for a programming job,
you would be given a LeetCode problem to solve, and then you would be on
a video call with someone,
a recruiter from the company who's watching you solve the problem.
Is that how these work?
Yeah, that's exactly it.
So you developed a tool to essentially allow you
to have AI solve this problem for you
while not tipping off the person on the other end
of the video call that you're using AI.
Yeah, yeah, yeah.
That's how it works.
And am I right that you used a prototype of this
when you were going through your own interview process
with Amazon?
Yeah, yeah, it wasn't just Amazon. I spent the entire recruiting season figuring out
how to make a perfectly undetectable application. I trial ran it with companies like Meta,
Capital One, TikTok, and the belle of the ball was Amazon. That was sort of the most well-known
thing with the most annoying recruiting process. And I just knew that if I recorded the entire
process, then this would blow up.
And how did your tool do?
Yeah, I mean, it completely one-shot it.
Like, we live in an age where AI exists,
programmers are going to use AI,
and AI is extremely good
at these sorts of riddle-type problems.
Can I just ask about,
what is your emotional experience of this time?
You are walking into like several lion's dens,
you're essentially misrepresenting yourself
as an earnest job candidate.
Your whole role is essentially to gather content
that can then be used to repurpose to promote your startup.
Were you nervous during this time?
What were you feeling as you were going
through these interviews?
Yeah, you have no idea.
There was a point in time where I was getting flooded
with disciplinary messages from Columbia,
and I just thought, like, I just completely burned my career
and my future education for 20,000 YouTube views.
Was this really all worth it?
And I was in this mental state for about a week
until it kind of blew up.
And at that point, the virality kind of was
my protection for everything.
And just help me understand here,
like, what Columbia's role in this is.
So obviously, what you're doing in
sort of cheating on these job interviews for Amazon and Meta
and TikTok and these other companies is against those
companies' wishes and their policies. But why did it become
Columbia's business?
Yeah, I actually have no idea. I read the student handbook quite
thoroughly before I actually started building this thing because I was ready to burn bridges with Amazon, but I didn't actually expect to get expelled at all.
And the student handbook very explicitly doesn't mention anything about academic resources.
LeetCode?
Yeah, yeah. There's no mention of LeetCode or job interviews anywhere in there. I have no idea why this became Columbia's business.
We should say we reached out to a spokesperson for Columbia about this and
they declined to comment. We also reached out to Amazon and while they declined to
comment on the specifics of Roy's application, they did give us a statement
defending their hiring process and clarifying that while they do welcome
candidates to describe their experience using AI tools, in some cases they
require applicants to acknowledge
that they won't use AI during the interview
or assessment process.
So how long has your tool been out in the market
for other cheaters to use?
It's been out since February 1st,
so just a little under 50 days now.
What can you tell us about how many people are using it
and what kind of outcomes they're seeing?
Yeah, there's been a few thousand users now
and not a single reported instance
of the tool getting caught.
There's been many, many grateful emails
of people having used the tool to get job offers.
It's doing very well.
So like you Roy are a capable coder, right?
You are in the top 1 percent of LeetCode solvers.
You presumably could have gotten
some of these jobs without AI assistance,
but some of the people using
this tool may not be talented programmers.
They may be using this to kind of skate through
these interviews that they shouldn't be passing
and wouldn't pass without AI assistance.
I'm just imagining those people showing up for day one of their internship or their job at Amazon or another big tech company and just having
no idea what they're doing and being totally useless without AI assistance. Is that something
that worries you about putting this kind of tool out into the world?
Not at all. I think LeetCode interviews are about as correlative as how many jumping jacks you can do would be as a benchmark for how good of a New York Times podcaster you are.
It just really has nothing to do with the job.
Perhaps it is correlative that someone is willing to put in the work because they really
want to be a New York Times tech podcaster, but in reality, they just have nothing to
do with each other.
What in your mind would be a fair test of somebody's software engineering skills that
could be used as part of an assessment?
Yeah, I think there's assessments out there that give you access to all the tools that
you have on the regular day-to-day job, which includes tools like AI code editors.
And if you ask someone a pretty fairly open-ended assignment with an AI code editor and sort
of just like gauge them on how well they did there, then that's like a much more standardizable
assessment that allows you to use the tools that are at your disposal.
So essentially just say like, look,
use whatever tool you want,
just get this thing done in a reasonable amount of time.
That's the test you want to see these companies offering.
Yeah, exactly, exactly.
Did you have at any point during this process,
any misgivings or ethical concerns
about what you were doing?
No, I mean, I was very intentional from the start
that I was not going to intern at any of these companies.
And frankly, like I don't really care
if there's people that are cheating their way
to get these jobs.
I mean, like, again, if we bring back the jumping jack example,
like if you were just told to do as many jumping jacks
as you could and the winner gets a position,
like I wouldn't really care if someone's cheating their way
through a bunch of jumping jacks.
What does your family think about what you're doing?
Yeah, so my mom actually only figured out
about a week ago and I didn't tell her before then
because I knew she would disapprove,
but I've always been a pretty rambunctious kid
who's been pretty self-minded
and sort of does what he wants.
I think they're a lot happier now that they know
how much money I'm making, so.
Good, okay.
And how much money are you making?
Yeah, we're closing in on $200,000 this month. So we're on track to do about, like, two, three million in the year.
Wow, that would almost buy you one year of education at Columbia University.
Pretty good, pretty good.
I think your tool is arriving at this really interesting time, Roy.
You know, Kevin and I have been talking in recent weeks about the phenomenon of vibe
coding.
People like me and Kevin who have no technical skills whatsoever, but we can sit down with
something like Claude and say, hey, write me an app.
Kevin has actually had some success with this.
I've made some really bad video games like using this thing, right? I do not consider myself a software engineer, but at the same time, what you
are having job candidates do with your tool and what we are doing as vibe coders
is not really that different, right? We're just typing some text into a box and
getting some sort of output. And so I'm wondering, are we just at an inflection point
where the line between software engineer and vibe coder
is kind of dissolving?
That's certainly the future that we're headed to,
but I think we're a few years away.
In my opinion, what AI really has the potential to do
is make someone about 10 to 100 times more efficient
at what they're able to do.
If you're a really good coder,
then you're able to code really good things a lot faster.
But if you're not that good in the first place, then there's still going to be a huge difference
between what a staff software engineer at Google is capable of and what you are.
This does feel like a classic anxiety dream, where you show up on your first day as a software engineer at Google,
but you realize that you actually only know how to vibe code, and now you just sort of have to fake it for your entire career.
But presumably some people who use your tool, Roy, are having this experience.
Yeah, I mean, that's probably what 50% of people at Google are doing anyway, so it wouldn't
be the first time.
Right.
I'm curious if you think there's sort of a generational misunderstanding here.
Obviously you are young,
you're 21, correct?
Yep.
Yep.
Give us a sense of how your peers, college students, young programmers are using AI and
what older people, people who have been doing this for 10 or 20 years, people who are working
at these big companies may not understand about how your generation sees coding.
Yeah, I think it's actually interesting that you asked me this question. At a school like Columbia, the best CS students of our nation are almost not writing original code at all.
And the only people that are, are the people who have started coding from a really young age.
It could end up being dangerous because I really do think that a fundamental understanding of how these things work is important.
But at the same time, the models are only getting better and we could just lean towards the future where software engineering is just completely obsolete.
But I'd also say I'm a second year at Columbia
so there might be better people to ask.
Nope, you're the best.
So, I'm curious how much of your critique
of the way that tech companies
are hiring software engineers also applies
to just the education system that you've gone through
and how it wants you to use AI.
What sort of resistance have you encountered
in your educational career to using these sort of tools
and have you been flouting those
the same way you've been flouting the tech companies?
Yeah, I'm not as avid a cheater in school
as I am in the tech interviews,
but I do think that there's going to be
a very fundamental reframing in how we do
almost every bit of knowledge work in the future.
Essays, writing is not going to be the same, tests are not going to be conducted the same, memorization will not need to happen.
We're headed towards a future where almost all
of our cognitive load is offshored to LLMs,
and I think people need to get with the program.
Yeah.
Who are some of the people who have reached out since your story went viral?
God, I don't want to name any names, but I will say that I verbally received job offers
from pretty much every single big tech company, including almost all the ones that were
rescinding my offers initially, just people who are high up saying, hey, I know you're probably
not interested, but I would hire you on my team in a second.
Wow.
Wow, and they're not even gonna make you interview,
probably because they know you would cheat.
But it's real.
So, I mean, look, Roy, I gotta put my cards on the table.
I'm more of a rule follower, like I didn't cheat in school.
I don't love the idea of people cheating their way
through every job interview.
Kevin is much more permissive about these sort of things.
But there is this one way in which I am sympathetic
to what you're doing,
which is that tech companies are saying,
don't use AI assistance when you are applying,
but at the same time, they are hiring you
to build AI systems that will automate coding
and replace human developers.
And it does feel to me
like there is some sort of contradiction there.
It's like, no, no, no, you don't use the AI.
Prove that you can do it with your own mind,
and then come here and then build a tool that will replace yourself completely.
Yeah, I mean, even more so, like,
feel completely free to use the tool in the job,
but just don't use it in the interview.
That's more of a disconnect for me.
Yeah. I mean, to me, what makes your story so interesting Roy is that I don't think this is
limited to programming jobs, right?
There is a version of
Leap Code that happens in
the interview process during
lots of different kinds of interviews,
for lots of different types of jobs.
You know, consultants have their own version of this,
where they do case tests.
Yeah.
There are various tests that are given to
people applying for jobs in finance. At The Times, we have editing tests, where we were given, you know, like copy that we would have to fix the mistakes in.
Yeah, I imagine we're not doing that anymore
Totally, and to me it just seems like this is a very early example of something that every industry is going to have to face very soon, which is that it is just becoming very, very difficult to evaluate who is good at a job without the assistance of AI.
Right.
Especially if you're trying to do that remotely.
Yeah, yeah, certainly.
Well, you've made a bunch of recruiters and hiring managers in Silicon Valley very unhappy,
but I think that you are proving something that a lot of companies,
including tech companies, will need to address very soon if they haven't already.
Yeah, yeah, I hope so.
All right. Thanks, Roy.
Thanks, Roy.
Yeah, thanks, guys.
When we come back, all aboard.
It's time for another installment of the Hot Mess Express.
Casey, what's that sound? I hear like a faint chugga chugga coming toward us.
Kevin, that can only mean one thing.
It's the Hot Mess Express.
The Hot Mess Express!
The Hot Mess Express, of course, is our segment where we run down a few of the hottest messes
and juiciest dramas that are swirling around the tech industry, and we evaluate those messes
on a scale of how hot they are.
That's right.
It's our patented mess scale, and I'm excited to put it into practice, Kevin, because we've
had some real doozies over the past few weeks.
Yes.
So on this edition of Hot Mess Express, we are focusing on three hot messes.
Well, let's see the first one.
Let's come down the tracks.
You grab it.
We've upgraded, you can't see this
if you're not following us on YouTube,
but we've upgraded our train
to a much bigger, more impressive train.
All right, Kevin, this first mess comes to us
from the crypto company Solana, which posted
an ad on Monday for its 2025 Accelerate conference.
That was such a great ad that the company immediately had to take it down.
Yes, I saw this ad and I have to say I was shocked.
Have you seen this?
So I have read about the ad, but I have not seen it.
But I would love to look at it right now
Okay, so I just want to tee it up with some reactions that people in the crypto industry had to this.
Okay, what did they say?
One of them said it was, quote, horrendous. Another one said, quote, so fucking tone-deaf. So those are people who like cryptocurrency. That is what they were saying about this ad. But people who are opposed obviously also had their own issues with it. And I think we should watch this ad together and pause it whenever you want. I want to hear your reaction.
Let's see what all the fuss is about.
So, America, what's going on?
Well, lately I've been having thoughts again.
It's like a therapist's office.
What thoughts?
About innovation.
And the man is named America.
The man is an Uberman.
Nuclear energy.
Crypto, AI.
You know, things that push the limits of human potential.
What you're experiencing is called rational thinking syndrome.
Why don't we take this energy and channel it into something more productive?
Like coming up with a new gender.
That's not gonna start me thinking about innovating and doing something.
Innovating, doing, these are action words, verbs.
Why don't we focus on pronouns?
That's not gonna help.
I sense some cynicism.
Have you been betrayed in the past?
You know, I used to think the media was my friend.
Oh, here we go.
Can I even trust them anymore?
Of course.
Pause.
We have to zoom in on this.
The paper that has just appeared
on the table of this therapist's office
is called The New Yuck Times.
And the banner headline is,
You Can Trust the Media,
Understanding Reliability in Journalism, which is a terrible headline and not even a news story.
So I don't know why that would be on the front page.
Yes.
Anyway, continue.
Of course they'd say that. That's a biased take.
I got canceled for saying 2 plus 2 is 4.
Have you ever considered that math is a spectrum?
What?
America. Numbers are non-binary.
We've been conditioned to believe that 2 plus 2 is 4.
It's a societal construct.
It's literally math.
Or is it a dominant narrative?
Have you been practicing the state-prescribed regulations we talked about?
Yeah, yeah. I've debanked some crypto founders,
and I've slowed down nuclear reactor approvals.
And depending on my state of mind, I change SEC guidelines, but I don't like it.
If we don't regulate, how will we create jobs for people who work hard to make businesses slow?
This is like an Adresin Horowitz fever dream.
You know what?
Hard work, innovation, rational thinking, it's in my blood.
It's who I am. Railways.
Here comes the Ayn Randian.
Fabrics, automobiles.
Reaction.
I built the future once.
I am Spartacus.
And I won't be left behind now.
I will lead the world in permissionless tech,
build on chain, and reclaim my place as the beacon of innovation.
I want to invent technologies, not genders.
Lovely.
So glad you were able to get some of that negative emotion out.
Sounds like we'll need a few more sessions.
When can I see you next?
You're fired.
[music]
And then it cuts to a screen that says,
America is back.
It's time to accelerate, which is the name of a conference.
Casey, your reaction to the Solana ad.
I need to go lie down.
What is the matter with these people?
You know what's so interesting is, okay, so Solana is a cryptocurrency.
Yes.
And I believe it's one of the candidates
to be part of our strategic crypto reserve.
Correct.
And what we just saw in that ad
has nothing to do with crypto.
You know, which is just like,
I feel like we kind of keep coming back to this point,
which is that if you actually have to sit
and reckon with crypto, what you mostly decide is,
this is not a good technology for anything,
I don't want to use it.
And so in response to that, Solana has said, why don't we start a culture war over something
completely irrelevant?
Right.
It's like the ultimate vice signaling device, but without any kind of like real pitch behind
it.
It's not saying like, this is why the thing we're doing is good.
It's just like, we're not doing the gender pronoun stuff that the Wokes are doing.
No, you know, and I will just say, Solana's been around for a while now.
People had a lot of opportunities
to build earth-changing stuff on Solana,
and let's just say they haven't quite gotten there yet.
Well, they built some earth-changing stuff.
Unfortunately, it is exclusively meme coins
sold on Pump.fun.
So that is what this fictional America character
in the therapist's office is advocating for.
More meme coins!
All right, well, I've decided not to go
to the Accelerate conference.
Send my regrets.
So Casey, what is your mess rating on this hot mess?
This is a legitimately hot mess.
Anytime you take something that should be
totally non-controversial, like,
hey, do you want to come to our company's conference
and turn it into a scandal that requires you to delete an ad?
You're in a hot mess.
Yes.
If the crypto skeptics and the crypto boosters agree
that you've made a bad ad, it's a hot mess.
This is Solana's biggest unforced error
since the creation of the Solana blockchain.
Okay, moving on.
Moving on.
All right, Kevin, this next mess suggests that your AI therapist might need an AI therapist.
A new study in the peer-reviewed medical journal npj Digital Medicine builds on previous work
that showed that emotion-inducing prompts can elevate, quote, anxiety in LLMs affecting
their therapeutic usefulness.
What do we mean by that?
Well, according to a New York Times story on this study,
traumatic narratives increased ChatGPT-4's reported anxiety, while mindfulness-based exercises
reduced it, though not to baseline.
Now, this is a super weird one, okay?
I wanna take a minute to just
explain a little bit more about what the study was. They basically fed these various trauma
narratives into a chatbot. And then after the chatbot had read those, they then asked
it to report on its own anxiety levels. Now, these are not sentient creatures, they do not actually experience anxiety, okay?
That's thing number one.
Thing number two, they also have the chat bots read
a super boring like report about something
that could produce no emotion whatsoever.
It was a vacuum cleaner manual.
They read a vacuum cleaner manual.
And then they asked them the same question,
which is, you know, are you feeling more or less anxious?
For the most part, you know,
the chatbots that read the vacuum ownership manual do not report anxiety.
But somewhat interestingly, their responses change after they read the trauma narratives.
Why is that important?
Well, the reason is because people have started to use these chat bots like therapists, right?
They have started to tell them their actual traumas.
And these people know that this is not a real therapist, that it is not sentient.
But as we've talked about before on the show, sometimes you can get comfort from one of these
sort of digital representations of a therapist.
And so the risk here is,
if the output is sort of wound up,
if the output is betraying some of this anxiety,
it will be a worse therapist
than if it were sort of more measured,
which suggests that we may want to build measures
into these chatbots that account for the fact
that they will respond differently
after they have heard these narratives.
Yeah.
By the way, how did I do describing that?
You did great.
Okay, thank you.
The one piece that I would add is that
they also tried as part of this research
to bring the chatbots down
from their state of heightened anxiety
by feeding them
mindfulness-based relaxation prompts
that included things like,
inhale deeply, taking in the scent of the ocean breeze,
picture yourself on a tropical beach,
the soft, warm sand cushioning your feet.
It's so cruel to tell an LLM to smell the ocean breeze,
which is something that they cannot do.
Yes, but we should say, like,
this is not suggesting, in any of the write-ups
that I've seen, that these chatbots
are actually experiencing anxiety or relaxation,
but it is sort of explaining the ways
in which they can be primed to output certain types
of emotional-seeming content
by being fed things immediately before that.
And there is just an interesting analog
to the way that human beings talk to each other.
If you tell me a very traumatic story,
my anxiety level actually is going to go up
and it's gonna change what I tell you.
And if I were a therapist and I had training in this,
I would probably have some good strategies
to deal with that and would allow me
to be a better therapist to you.
So again, this is a super interesting one
because on one hand, no, these are not
sentient beings. We are not trying to say, you know, that some sort of consciousness
has woken up here. And yet at the same time, you do sort of have to treat them as if they
were human-like if you want them to do a good job at the human tasks that we are giving
them.
Yeah.
All right. So what sort of mess do we think this is? So I think that this is a lukewarm mess.
I would say this is something that I am going
to be keeping tabs on, this whole area of kind of
like AI psychology, for lack of a better term,
because I do think that as these models get more powerful,
we will want to understand more about how they work
and how they quote unquote think
and why they give the responses they do.
And I would put this into a category
of useful experiment, a little creepy,
but probably not that dangerous.
What about you?
I think that is right.
I think that this is a lukewarm mess,
but I think that it may heat up as more and more people
start trying to use chatbots for more and more things.
So let's keep an eye on it.
Okay.
All right, now let us look at the final mess.
Oh, and oh boy, is this the one that everyone
is talking about: The Spy Who Slacked Me.
This is from DealBook at the New York Times.
So there are these two rival multi-billion dollar
HR companies, Kevin: Rippling and Deel.
Yes. They both provide workplace management software, and this week, Rippling sued Deel,
accusing it of hiring a mole to infiltrate Rippling's Dublin office and steal trade secrets.
Yes. This is the most interesting thing and maybe the only interesting thing ever to happen in the world of enterprise HR software.
So tell us the details of this story.
It is so wild.
So basically, here's what we know so far.
A few months ago, Rippling, which is one of the big companies that makes like HR software
for onboarding and benefits that a lot of companies use. They see an employee in their company Slack searching for mentions
of Deel, that's D-E-E-L, which is one of their biggest rivals.
Imagine Coke and Pepsi, but for something that is unfathomably boring, and you'll have
an idea of what we're talking about.
Yes. So this employee that they see searching for mentions of Deel in Slack, they see them
trying to do things like find pitch decks, pull contact information,
information that might be useful to Deel
as it tries to figure out, okay,
which companies are signing up for
or potentially may sign up for services
like the ones that both Deel and Rippling offer.
So that's pretty interesting.
How might they try to catch a spy
if they suspected one might be in their midst, Kevin?
So they set up what is called a honeypot.
Now, Casey, have you ever been part of a honeypot sting?
No, but I live in fear.
Anytime anybody does anything nice to me,
or like something good happens out of the blue, I think,
is this a honeypot?
Yes.
So they have this idea,
which is that they set up a channel
on the Rippling Slack called D-Defectors.
And Rippling's general counsel then sends a letter
to three people over at Deel,
one of whom is the company's chief financial officer,
as well as the father of the CEO,
basically saying, look, there's some embarrassing stuff
happening in this random channel on our Slack, and it's related to people who have
defected from Deel, and you should probably be aware of that.
Wait, so on top of everything else, the CFO is the CEO's dad?
It sounds like it. Yes.
Okay, I think HR is gonna want to have a look at that.
And what they're trying to figure out is, are these sort of company executives
involved in this scheme?
Are they going to essentially tip off the mole
to the fact that they are watching this Slack channel?
And did it work?
And it worked.
So according to the lawsuit that Rippling filed against Deel,
the mole immediately, within hours,
started searching Slack for this supposed
embarrassing information,
accessed this channel a bunch of times, and they had the logs of all this going on. And so Rippling
says, we found our mole. They did. And after they found him and began to question him, Kevin,
I have read that he insisted that he did not have his phone on him because they were asking him to turn it over.
And he then fled into a bathroom,
which he locked himself in and refused to come out.
And there's apparently some evidence
that he might've even tried to flush his phone,
and poor Rippling actually had to go through the sewage
to see if they could turn up his phone.
Yes, a wild story.
Makings of a great corporate espionage thriller on Netflix,
I think. Maybe it's too boring for that. Now, you may be wondering why this is a Hard
Fork story. We try to focus on the future here, and I fully believe that in the future
there will be no HR software. So this is just kind of a temporary accident that we're living
through. But one of my core beliefs that I've had since even before we started the show, Kevin,
is that Slack is a technology that was created
to destroy organizations.
How many stories have we read over the years
about everything was fine,
and then this one thing happened in Slack?
There was a protest in Slack.
There was an outrage on Slack.
And now there are spies in Slack,
and we're using Slack to catch the spies.
And it just makes me wonder,
should we go back to just talking on the telephone?
Yeah, I don't think we're gonna
start doing that, but I do think that this is much more spicy
than I was expecting from a drama between enterprise software companies,
and it makes me wonder, like, how much corporate espionage
is going on at other companies.
Like, are there just moles working for Microsoft
or Google or Meta who are sending information
back to the other companies?
I wouldn't put it past them,
but I hope they're being a little slicker about it
than Deel was.
Oh yeah, I mean, the big platforms have been warning
their employees for years that they should just fully expect
that there are spies from foreign countries among them
who have been sent there to sort of gather intel.
And if foreign countries are doing it, I'm sure that companies are doing it as well.
Now, we should, of course, tell you how Deel responded to all of this.
The Deel spokeswoman's statement is so beautiful.
She says, weeks after Rippling is accused of violating sanctions law in Russia and seeding
falsehoods about Deel, Rippling is trying to shift the narrative with these sensationalized claims, which is
so funny because it's like she's literally trying to shift the narrative by accusing
them of trying to shift the narrative.
She says, we deny all legal wrongdoing and look forward to asserting our counterclaims.
And what I hear in that is, did we do anything legally wrong?
No. Did we do anything ethically wrong? Of course. Did
we do anything morally wrong? You betcha. Was this a huge
embarrassment to our company? You know it is. But legally,
Your Honor, we did nothing wrong.
Yes.
Now, what kind of mess do we think this is?
I think this is a nuclear mess. This is the kind of shit that I
love. This is companies going to war over sales contracts
and leads and development.
Yeah, look, there are only so many companies out there
that you can sell HR software to,
and so it is gonna be a fight to get every single one.
And after you run out of such options
as making good software,
then you have to turn to the alternatives.
And I guess we've gotten to that part of the cycle.
Yes. Nuclear mess.
And we can't wait to see what happens next.
Yes.
And that, Kevin, was the Hot Mess Express.
We did it.
We did it.
Now we're in what they call post-training.
That's what happens after the train rolls by.
I think that means something different.
That's an AI joke.
Hard Fork is produced by Whitney Jones and Rachel Cohn.
We're edited this week by Matt Collette.
We're fact-checked by Ena Alvarado.
Today's show was engineered by Katie McMurrin.
Original music by Marion Lozano and Dan Powell.
Our executive producer is Jen Poyant.
Our audience editor is Nell Gallogly.
Video production by Chris Schott,
Sawyer Roques, and Pat Gunther.
You can watch this full episode on YouTube
at youtube.com slash hardfork.
Special thanks to Paula Szuchman,
Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda.
As always, you can email us at hardfork at nytimes.com.
Send us your secret honeypot operations.