The Highwire with Del Bigtree - THE GREAT (AI) REPLACEMENT
Episode Date: August 16, 2025
As Artificial Intelligence rapidly evolves, the world watches with both awe and alarm—but the real crisis may not be sentient machines, but massive human displacement. Jefferey Jaxen breaks down a revealing new study that exposes which careers are most vulnerable to AI takeover, and examines the growing push for government intervention to slow—or steer—this runaway technological revolution. Are we prepared for what’s coming?
Become a supporter of this podcast: https://www.spreaker.com/podcast/the-highwire-with-del-bigtree--3620606/support.
Transcript
This technology is being rolled out faster than any technology we've ever seen in our lifetime.
It's basically dropped in our lap, this monster right after COVID, and now we're kind of playing catch-up,
and we're trying to deal with it.
And some of the shock headlines, we've shown these before, but this is, you know, how bad could it get?
This is how bad it could get.
Experts predict AI will lead to the extinction of humanity.
We've seen a lot of people talk about this, talked about it on this show.
But then there's another one.
Time magazine.
AI is as risky as pandemics and nuclear war, top CEOs say.
Well, we've heard a lot about pandemics from our government over the last four or five years,
but we haven't heard a lot about the dangers of AI.
We're starting to, but the dangers of pandemics have been rammed down our throats.
And what is the danger?
Well, one of the dangers is it's going to take away everyone's job.
And, well, we have kind of a test of that already.
We just went through COVID, the lockdowns, the mandates, a lot of people lost their jobs.
And in 2020, August of 2020, to be exact, there was an article
by Revolver: COVID-19 lockdowns over 10 times more deadly than pandemic itself.
So it says in this previous research, this is five years ago, previous research on job
displacement and mortality has found that displaced workers face a significant increase in mortality
rates from which lost years of life can be estimated. Job losses and permanent job separations
have been shown to correlate directly with increases in heart disease, drug overdoses, lung cancer,
liver disease, among other factors of increased mortality risk. So I didn't hear Elon Musk talk about this.
I didn't hear Sam Altman talk about this.
I just heard, you know, it's inevitable AI is going to take over your job and we'll give you some universal basic income and keep you happy and maybe you can find another skill.
What about life years lost?
What about this?
What about what we just went through with the COVID pandemic?
We have a test right there.
And that test went so well.
And now we're just going to accept this with AI.
Okay.
Well, here's how bad.
What are we looking at here?
Well, here's Goldman Sachs.
This is a bank analysis.
They predict, and they have to be pretty accurate
with their predictions, that about 300 million jobs will be lost or downgraded by artificial intelligence.
Here's the International Monetary Fund, IMF.
AI to hit 40% of jobs and worsen inequality, because it doesn't just sweep across the world
and take people's jobs away equally.
If you're living in a poorer country and your jobs get taken,
you're looking at a very different survival situation than if you're in a country like ours
and maybe you have to shift careers.
So, I mean, there's so much nuance to this,
so much nuance to this. And now there's a study, one of the first studies here, that's showing
kind of how these AI tentacles are reaching now into the career spaces of a lot of people. What does
that look like? Well, the Daily Mail wrote about this, and they said: revealed, the careers at highest
risk of being replaced by AI. And so I'm going to show some of the pictures from this article. They
have what's called an AI applicability score, and that's the percentage of how much this technology
is reaching into these jobs.
You can see here, historians, writers, authors,
you know, the next page, this one hits home,
news analysts, reporters, journalists,
uh-oh, we have editors, data scientists on the next page,
and then it keeps going, economics teachers.
You can go down the list and you can look at all of that.
And I want to talk just for a moment on a little side rail here.
These news analysts and reporters, this is a key part of our information environment right now,
not just for you and I, but for the world, as we're getting this information.
And one of the journalists, one of the better American journalists, Matt Taibbi,
he put out an X post talking about this, and he said,
this is why AI is dangerous.
Ultimately, it has no ability to assess and detect incorrect media reports.
It overcounts the authority of certain media brands and undercounts primary sources.
Well, someone took that and put it into ChatGPT and said,
hey, AI, what do you think about this guy talking about you like this?
And this is what ChatGPT said.
This is what AI said.
Yeah, that post from Matt Taibbi nails a very real and valid criticism.
It says, AI systems, especially large language models like me, tend to overprioritize
institutional sources and underweight raw primary data.
Then it goes on to say how bad it is.
It says training data is heavily skewed towards trusted domains, meaning major media, academic
publishers.
It says systems are tuned for broad, generalized truth, not investigative
nuance, which is what we're doing here. Citations and weightings are often aligned to mainstream consensus,
which can miss or ignore legitimate counter-narratives. I mean, the last five years, the truth was a
counter-narrative. So you would admit AI would have missed the COVID pandemic, the truth happening
there. But it even goes on further to say this. It says, Taibbi's point about undercounting primary
sources is spot on. It says raw documents, FOIA releases, emails, transcripts, leaked court docs.
These often contain the truth before the narrative is spun.
AI's not getting those.
And so it gets filtered out of AI's perspective.
That's AI for you when it comes to information and news and journalism.
It can't do it.
It misses the nuance.
And the story is the nuance, typically, especially now with fact checkers and trusted sources
and narratives being pushed from top down.
That is a major criticism.
So I guess it's a silver lining when people say it's coming for our jobs.
Maybe, but it's not really that accurate.
And so let's go back to the study now.
Let's go to the actual study showing this.
And it's talking about measuring the occupational implications of generative AI.
And they only use one AI.
It was called Microsoft Bing Copilot.
So that deals typically with just facts, raw data, things like that.
So just keeping that in mind, it wasn't talking about the robots that are flipping burgers
or, you know, starting to do incisions on the operating table.
It's just talking about language models and search models.
So it says this.
It is tempting to conclude that occupations that have high overlap with activities
AI performs will be automated and thus experience job or wage loss.
And that occupations with activities AI assists with will be augmented and raise wages.
This would be a mistake as our data did not include the downstream business impacts of new technology,
which are very hard to predict and often counterintuitive.
Take, for example, ATMs, which automated a core task of bank tellers, but led to an increase
in a number of bank teller jobs as banks opened more branches at lower costs, and tellers focused
on more valuable relationship building rather than processing deposits and withdrawals.
So to this point, this is the conversation piece now moving forward.
AI researchers, businesses, they focused on building machines to replicate human intelligence,
basically building machines to automate humans, to take humans out of the equation,
to simply replace workers.
So what we're talking about here is automation
rather than augmentation.
So not to make better,
make humans better, make their experience better,
make their work better, make it more possibilities.
They just want simple automation.
And so the excessive focus on human-like AI devices
and developments amplifies the market power of the few
who control these technologies.
And that is the key issue.
And if you don't believe me, let's look at Hollywood.
This is just this week in LA Times.
They're having a hard time right now.
This is their headline.
As AI changes how movies are made, Hollywood crews ask, what's left for us?
It says past technological shifts.
Remember, augmentation versus automation replacement.
Past technological shifts, the arrival of sound, the rise of digital cameras, the advancement
of CGI changed how movies were made, but not necessarily who made them.
Each wave brought new roles: boom operators and dialogue coaches, color consultants,
and digital compositors. Innovation usually meant more jobs, not fewer, but AI doesn't just change the
tools. It threatens to erase people who want to use the old ones. That is the key here. And what has
AI been doing? It's basically been given carte blanche to train on our data, on what we have been creating.
That's why you see Mark Zuckerberg, even Elon Musk, Jeff Bezos with Amazon,
these people that own these overarching systems, basically data mining us to feed their
AI. Well, finally, some legislators, Josh Hawley one of them, co-author of a bill, are starting to
address this and trying to put some brakes on. New congressional bill bans AI companies from training
on copyrighted works or personal data without consent. So people will say, well, I have to sign a consent
form to even get on the platform. This bill actually addresses that. It says the bill also prohibits
use of data if consent was obtained through coercion or deception or as a condition of using
a product or service through which the covered data exceeds what is reasonably necessary to
provide that product or service.
Wow.
So this bill, not law yet, says you can't just have a waiver and say, yeah, you sign this
and you can use our product, but while you're using it, we're going to take everything you
create to feed our AI to basically put you out of a job.
So legislators are thinking, so this is good,
this is how we move past this.
It's not just a foregone conclusion,
it's not doom and gloom.
There are solutions to this.
The human spirit can find ways around this
and rein this in.
That is my belief.
Well, I'll tell you, Jeffrey,
I think about this a lot.
My son is 16 years old.
He's starting to look at colleges.
He talks about things like, oh, I want to be a lawyer,
and I just really wonder how many law jobs
there are going to be.
You know, my daughter is 11.
She wants to be a dress
designer or something like that, you know, and I think, well, maybe where you get to be
creative, maybe there's a shot, you know, somewhere in there. But these are things that as parents,
I think we're all starting to look at: what am I guiding my child towards? I'm definitely
looking at that list of the first jobs we think are going to be wiped out by AI. And ultimately,
though, I think as we talk about kids, the most important thing is just teach them to be critical
thinkers, to be creative beings that can work their way through any situation.
As long as they're not dependent on AI to think for them, I think they'll find a way through.
But, Jeffrey, we're going to stay on this topic.
Obviously, this AI issue, when you talk about our mission statement dedicated to eradicating man-made disease,
I think the existence of human beings fits into that.
And when you have as many headlines, as many CEOs who created this thing saying this could be the end of our species,
I think it requires that you and I stay on top of this.
So thank you for that great reporting.
It continues to be, I think it'll probably end up being,
the biggest conversation in our lifetime as we are making this transition to a whole world
that is just absolutely a black, terrifying unknown. So thank God you're here to keep us abreast of it.
