Front Burner - Will AI agents take over the workplace?
Episode Date: February 17, 2026Last week, a 5000 word post on X with the headline “Something big is happening” went viral. It was written by Matt Shumer, the CEO of HyperWrite, an AI writing tool and in it he says he’s recent...ly watched AI go from a helpful tool to something that “does my job better than I do”. And he’s not the only one. The CEO of Anthropic, one of the biggest AI companies today, wrote an essay saying it could replace half of all entry-level white collar jobs in the next one to five years. What’s behind the sudden vibe shift? A good part of it has to do with the abilities of AI agents, which are basically AI models you give a task to perform for you, with the promise of little supervision.Are we on the precipice of something big? Or is it another way to build hype amid fears of a bubble? Will Douglas Heaven, senior AI editor for the MIT Technology Review, joins us to separate reality from hype. For transcripts of Front Burner, please visit: https://www.cbc.ca/radio/frontburner/transcripts
Transcript
Discussion (0)
This ascent isn't for everyone.
You need grit to climb this high this often.
You've got to be an underdog that always overdelivers.
You've got to be 6,500 hospital staff, 1,000 doctors, all doing so much with so little.
You've got to be Scarborough.
Defined by our uphill battle and always striving towards new heights.
And you can help us keep climbing.
Donate at lovescarbro.cairro.com.
This is a CBC podcast.
Hey, everyone, Jamie here.
Just before we get to today's show, I have a favor to ask.
We want to put together an episode featuring your stories about how you are using artificial intelligence.
So if you've got stories to share, please send them in to front burner at cbc.ca.
And I guess we're looking for stuff that goes a bit beyond I asked chat GPT for restaurant recommendations for a trip to Spain.
More like, have you been asked to train a...
that you think could replace your actual job.
Or maybe you run a business and have AI do your accounting.
Maybe you work in law and you've used AI to summarize large volumes of contract data.
How did that go?
Email us at frontburner.ca.
Again, with your stories about how you're using AI and we'll be in touch before we use them in an upcoming episode.
Okay, here's the show.
Last week, a 5,000 word post on X with the headline, Something Big is happening, went
viral. It was written by a guy named Matt Schumer, the CEO of Hyperite, an AI writing tool.
And in it, he says that he's recently watched AI go from a helpful tool to something that, quote,
does my job better than I do. And he's not the only one from the tech world with similar warnings.
The CEO of Anthropic, one of the biggest AI companies today thinks it could replace half of all
entry-level white-colored jobs in the next one to five years. His company's lead safety researcher just quit,
penning an open letter about how hard it was to get anthropic to let, quote,
values govern actions.
A good part of all of this has to do with the abilities of AI agents,
which are basically AI models you give a task to perform for you
with the promise of little supervision.
So are we on the precipice of something big?
Can we afford to ignore the rise of AI agents?
Or is it another way to build hype amid fears of a bubble?
Will Douglas Heaven is here today.
He's a senior AI editor with the MIT Technology Review.
Will, hey, it's great to have you.
Yeah, it's good to be here.
Thanks for inviting me.
So it seems that much of the excitement and concern
that we've been hearing out of the AI industry
over the last little while has been centered on the rise of AI agents,
like Anthropics Claude.
So maybe let's start there.
What are AI agents?
Yeah, it's good actually to think of them as AI agents.
So like, the idea of agents is not new at all.
Agents are just pieces of software, machines that go and do tasks on our behalf.
So, I mean, your thermostat is an agent.
Your Roomba, if you have a robot vacuum cleaner as a kind of agent.
And I know, some people may remember, I think in 2010, there's what's known as like there was a massive crash in the U.S. stock market.
And there was a flash crash.
And that was lots of little autonomous algorithms, you know, trading on people's behalf.
The big buzz around AI agents is now they're simply smarter.
Like sort of under the hood of an agent is an LLM like Claude or ChatGPT or Gemini.
So in theory, they can do a far wider range of tasks and they can go off and sort of do things more intelligently than agents ever used to be able to.
and you can hook them up to, you know, loads of other software,
sort of your email, your social media accounts,
if you're completely reckless, your banking software, all that kind of stuff.
The buzz around AI agents is the next wave, if you like,
of the buzz around chatbots, which, you know, kicked off three years ago with chat GPT.
You know, here was a kind of software that we could talk to and it talked back,
but, you know, talking only gets you so far.
Now, if you have an agent, then it can go off and,
you do stuff, not just talk. So that's where all the excitement's at.
Well, just tell me a little bit more about kind of the crazy things that we've been seeing
happening lately. Like, what can the latest iteration of this technology actually do?
The sort of the demos we've seen from the companies developing these things were initially
sort of like a browser agent that could go off and on your behalf, you know, book a vacation
or concert tickets or restaurant reservations and that kind of stuff, which, you know, it's neat,
but it's sort of, it's limited in the sense that it's not clear that that's really more
useful or quicker than doing those things yourself. I think where the big change will happen
is going to be less visible. It's going to be the multi-agent systems. We have lots of agents
interacting within businesses, sort of in the enterprise. So if you think of how manufacturing was
completely revolutionized over the 20th century with sort of automated workflows and conveyors
of robots and machines putting stuff together. You can sort of use that analysis if you think of
what agents might do behind the scenes in a hidden way, you know, agents handing off office tasks
to each other. You can imagine you're a company making stuff and selling that inventory. You could
have an agent taking in orders, talking to other agents deciding what to prioritize, what to build,
who to ship it off to, and all that kind of back office stuff could be automated in theory.
Just on your point that, you know, you would have to sign over a lot of control to these agents if you
wanted them to do certain things, right? Are there any safeguards that would exist right now that would
prevent me, for example, from giving all of my personal information and financial information
and access to my finances over to one of these agents and like telling it, I don't know,
to just invest in the stock market for me.
Nothing to stop you doing that.
I mean, previously when so agent Soho was something that you would only get from, you know,
for one of the big companies like Anthropical or OpenE, I don't know for sure,
but I imagine they would have some kind of safeguard or some kind of warning not to do that.
But now we're seeing agents, you know, free to download open source.
you can sort of tinker with them and adapt them however you want.
And those, yeah, you can do whatever you want and people have been.
So if you wanted to do that, you could.
It would be extremely ill-advised and I don't think anyone should do that yet.
The amazing thing about Maltbook is that it was itself was vibe coded with the help of an agent
and put together really quickly, but it means it has all kinds of security flaws.
in it. It was sort of revealing sensitive information from people, you know, your agent would
give up sensitive information to other people's agents. Because it was such a frenzy of activity,
it was hard to really get the bottom of exactly what was happening. But LLMs and agents that run
on LLMs can't tell the difference between an actual, you know, well-meant instruction, piece of
text that they're meant to read and, you know, a malicious set of instructions that, you know,
tell the agent to go and download its owners' bank details and upload them to this website or
whatever. There's nothing built into the software itself that would stop it from doing the bad thing.
And just, um, Mootbook, just for people listening, right? It's like this kind of social media site,
right, where AI agents can talk to each other? Yeah, it was builders and your social network for
agents and with a sort of cute little tagline that, you know, humans are welcome to, to observe.
It was a lot of fun.
It was just, it was sort of AI theater in a sense.
And everybody out there is so keyed in to what's going on in AI right now and so eager to
know what the sort of the next wave is going to be that it's not really surprising that
something like Maltbook, which, as you say, you know, is a social, social network for agents
took off.
And, you know, some people were very quick to say, you know, this is the future of the internet.
this is the future of AI agents.
Yeah, Elon Musk did, right?
Yeah, Elon Musk says a lot of things.
You know, I know the Mold book stuff got just a ton of attention
and generated a lot of headlines,
but there does seem to be this real vibe shift lately
where people in the industry are warning of real, tangible, practical consequences here.
And also a tangible difference in the new,
kind of models that are coming out, right? And so I'm thinking about that post written by
Matt Schumer, this guy who's in the industry, and he compares this moment that we're in to the
month before COVID hit and upended the world. He talks about how very recently, just at the
beginning of February, as I mentioned, new models were released from Open A&A and Anthropic,
and something clicked, according to him. And now he is not needed for the technical work of his
job. So for example, he says he tells AI that he wants to build this app, tells it what he wants
it to look like, and then he walks away while the AI writes tens of thousands of lines of code,
and he writes that these models are starting to display something that resembles judgment and
taste, that they are unrecognizable from models even six months ago, and that it's coming for
law, medicine, writing, design, et cetera. And I've seen responses to this very viral post that he wrote
oscillate from everyone needs to pay attention to this to this is an overblown ad for paid
AI subscriptions, right? That it's like total hype. And so where did you land after reading that?
I'd say very much in the latter camp that it's overblown hype. I mean, I'd say a couple of
things about it. Like, whenever you read something like that, I mean, ask yourselves, you know,
who is this person? Why might they be writing this? Yes. As you say, you know, this, this
This guy has an AI company and he has an interest in hyping up the technology that his company sells.
There was also something quite strange about comparing it to COVID.
This was not only an enormous worldwide catastrophe that killed millions of people,
but there was also something inevitable to it.
It was very hard to stop the spread of COVID once it started.
And I'm sure it's deliberate.
I mean, you hear this a lot from AI people.
but the sort of rhetoric is that AI is coming and we can't do anything about it.
You just have to prepare, which is kind of weird when you think about it,
because this is technology that people are making.
And, you know, the people making it can decide what kind of technology it is
and how fast it comes.
I don't know whether what he says about his job sort of changing like that so quickly
because the AI now writes all his code.
I do know that AI encoding is one of the areas where I think AI has had the most,
impact so far. I mean, AI is very, very good at writing code. But even on that specific point,
opinions are really polarized about exactly what that means. Whenever you see claims about this,
when you start to push on them and ask exactly what is the AI doing, it quickly gets
sort of hand-wavy and unclear. In some cases, for some software projects, I think the AI
can do quite a good job at generating most of it.
But that does not mean that AI can now just generate all software securely or not.
Again, just let's go back to Maltbook, something that was vibe coded and it had all kinds of hideous
security flaws in it and it wasn't a very good piece of software.
I'd also say that this is not, it's not the first time at all that we've heard, you know,
grand claims made by people in the AI business.
You know, last year there was a really viral essay called AI 2027.
sort of try to present a picture of what AI was going to be like in 27, which is just a year away
and the world was going to be changed. I think this is the messaging that comes out of Silicon
Valley in particular all the time. And I really do think you need to take it with a pinch
of salt. I think something big is happening. It's really unclear exactly what that is and how
wide its impact is going to be and how fast it's going to happen. So it's definitely
an exciting time and all of this is worth watching but anytime someone tells you that you know something
big is going to happen and you know that person has sort of privileged insight and is coming to warn us
I do roll my eyes a bit.
Scent isn't for everyone you need grit to climb this high this often you've got to be an underdog
that always over delivers you've got to be six thousand five hundred hundred
hospital staff, 1,000 doctors
all doing so much with so little.
You've got to be Scarborough.
Defined by our uphill battle
and always striving towards
new heights.
And you can help us keep climbing.
Donate at lovescarbro.cairot.cairot.
At Desjardin, we speak business.
We speak startup funding and comprehensive game plans.
We've mastered made-to-measure growth
and expansion advice,
and we can talk your ear-off
about transferring your business.
when the time comes. Because at Desjardin business, we speak the same language you do. Business.
So join the more than 400,000 Canadian entrepreneurs who already count on us and contact Desjardin
today. We'd love to talk, business.
What do you make, though, of the argument that it's these latest models people seem to be
fixating on from Anthropic and Open AI that are kind of unrecognizable to the models that
that we saw six months ago.
Let me give you an example.
I watched a friend of mine the other day
create this program that scraped available music online,
matched the beats per minute,
and created this live, ongoing music,
along with a live sort of ongoing graphic.
In other words, the work of a DJ.
And his point to me was,
and he was using Anthropics' latest model,
was that he couldn't do this six months ago
with this technology.
How are you thinking about it?
that argument. I think the technology is getting better all the time, and you do sort of have
these leaps in in capability. Again, you know, just bringing healthy skepticism into this, which I think
you always need. I think it's incredible that these models can do that at all, you know, the example
you just gave. Yeah, they really could not do that six months or a year ago. You can now via code
a website and all kinds of simple apps, which is really, really cool.
depending on where you're standing, it's either really, really cool or terrifying if your business is to, you know, make websites for people or to code these apps.
Things are changing at that level.
But it is still quite a big gulf between coding up something like that, which is relatively simple.
And there will also be hundreds of thousands of examples of that type of code that the models will have been trained on.
So they were probably relatively good at doing stuff that involves internet scraping and that runs as an app or a website.
There's a big gulf between that and generally novel code that solves a problem that has not been solved before that requires a very complex piece of software engineering.
We may get that.
I mean, because coding has really, really improved recently, but we're not there yet.
So just the idea that because it can do this thing, it's also then going to be able to do the other far harder thing, really really.
quickly, I think it's still an open question, which is, again, way, I think sort of knee-jerk, somewhat
scaremongering posts about how, you know, everything is going to change overnight.
We Should All Panic is not sort of a helpful part of the discourse, but there is always a lot of that.
You know, I mentioned in the intro news of Anthropics Safety League, Mrenunk Sharma, resigning last week with this open letter.
which I'm sure you've read, it talks about how, well, he talks about how the world is in peril,
not just from AI, he says, but a whole series of interconnected crises.
And he talks about how he had repeatedly seen at the company how hard it is to truly let
our values govern our actions.
That's a quote.
I want to ask you to try to interpret exactly what he's speaking to in his own experiences at
the company, but broadly speaking, do you think these companies are thinking hard
about their responsibilities to create safeguards
around this technology right now?
I mean, I wouldn't want to say they're not.
You will always find people
who think the company should be doing more
and are not taking the risks seriously enough.
And that's simply because people within AI
have very, very different views
about what safety should look like.
Depending on your sense of how quickly the technology is developing,
which is still a gut feeling
rather than something that we can properly
predict. And then once it's developed to some more powerful future state, whether it will be
something we can or cannot control and whether or not it'll be risky, depending on your instincts
about all of that, you're going to have very, very different opinions about whether the company
you work for is doing everything it should. And then, of course, the companies themselves are,
like any company is balancing all the resources required to work on safety with developing
products, making profits and all that kind of stuff, which, you know, for some researchers, I think
that sits uneasily for them. If you're a researcher or an engineer and you think you're ushering
in this incredibly well-changing, powerful technology, and you personally believe that there are
risks inherent in that endeavor, then you may be, you know, the type of person that thinks your
company should do absolutely everything it possibly could to make sure it's safe. You know,
Whereas other people at the company might think we're building technology, which sure has some risks, but we're on top of those.
And it's not something that they would worry about as much as the person sort of whistleblowing or leaving or writing an open matter.
We've moved through quite a lot of stuff here today, a lot of these latest headlines that we're seeing.
And just as our listeners navigate this real influx of AI news and the discourse kind of between AI boosters and naysayers,
And I'm sure they are kind of filled with the same anxieties that, frankly, I am too.
Like, what is this going to do to my job, to my livelihood?
What kind of future will this be like for my children?
And so, like, you know, is there a final thought that you might want to leave people with here as they try to consume this information and try to make sense of it?
Yeah, I think we should all try and be as aware as possible of what's happening, like both in terms of how.
how quickly the technology is being developed and what uses it's then being put to by different
industries, different sectors, specifically to the question of job security.
I think I would try and sniff out the scaremongering and dismiss that because for every viral
blog post like Matt Schumers that we were talking about, there are far more sober studies
and analyses that show that in practice, this technology does not live up to the hype,
that it cannot do human tasks in the workplace as well as Sam Altman might have been telling
this it can.
That's not to say that it's not going to be able to.
It just means that it's not doing it.
I think history has shown us that changes in society and economies happen more slowly
than the technical innovation itself.
yes, we may have amazing new technology that's now here and in theory can do these things,
but actually putting that in practice in human settings where, you know,
human workflows and processes are, you know, chaotic and messy.
And I mean, time and time again, we've seen that when you take that cutting edge technology
and actually put it in those situations, it rarely works as salt.
And I think it's going to take some years to figure out exactly what it can do.
and maybe more importantly what it can't.
In the meantime, I think, yes, there probably are risks to jobs,
but that may not be because the technology really can do your job.
It may just be because managers and encounters in companies believe it can.
So they'll fire a lot of people and take the software that can do it cheaply.
Probably will be quite a bit of upheaval,
but it won't be exactly as we're told.
It's not going to be AI agents coming in,
doing all our jobs anytime soon.
Okay. That's a fairly optimistic note to end on.
So shall we end there?
Will, thank you so much for this.
Yeah, no problem. Thank you.
All right. That's all for today.
I'm Jamie Poisson. Thanks so much for listening. Talk to you tomorrow.
For more CBC podcasts, go to cbc.ca.ca slash podcasts.
