Front Burner - The perils of unregulated AI
Episode Date: May 11, 2026

Recent polls show that Canadians are increasingly concerned about the growth of AI. And yet, the AI race is hurtling forward with few guardrails. In many cases, people aren't even being given a lot of choice around using it. Many jobs now include the use of AI. Today, we are talking about that tension and more with technology ethicist Tristan Harris. He's been sounding the alarm about AI growth, arguing that the tech industry is currently in a dangerous race without the proper checks and that the consequences will be profound. Harris is the co-founder of the Center for Humane Technology, which he founded after working at Google. He's also featured in the new documentary The AI Doc: Or How I Became an Apocaloptimist.

For transcripts of Front Burner, please visit: https://www.cbc.ca/radio/frontburner/transcripts
Transcript
What's that noise?
I don't know.
I'd get that checked.
Quickly.
Yeah, good point.
Point S, Tires and Auto Service.
You think Point S has good deals on tires?
Definitely.
What makes you say that?
This.
Until May 31st, get up to $125 on a prepaid card when you buy four eligible Yokohama tires.
Details at points.ca.
Good point.
Point S, tires, and auto service.
This is a CBC podcast.
Hey, everybody, I'm Jamie Poisson. I've been seeing poll after poll lately about how concerned people are feeling about AI.
People are worried about it taking their jobs, worried about training it to take their jobs, worried about the environmental cost, worried about the impact that it could have on young people.
And yet, the AI race is hurtling forward with few guardrails, and in many cases, people aren't being given a lot of choice here.
In a practical way, maybe your boss has told you to start using it, that it is now a mandatory part of your job.
So today, we're going to talk about that tension and more.
And we're going to do that with someone who has really been sounding the alarm, arguing that the tech industry is currently engaged in a dangerous race without proper checks and that the consequences will be profound.
Tristan Harris is a technology ethicist and the co-founder of the Center for Humane Technology, which he founded after working at Google.
He's also in the new documentary, The AI Doc: Or How I Became an Apocaloptimist, which examines the potential upsides and downsides of AI.
Tristan, thank you so much for coming on to Front Burner.
So good to be with you.
So you were in the film The Social Dilemma, which looked at the impact of the social media boom.
And it's a boom that you had a front row seat to while working at Google in the early 2010s.
You have argued that the way the rise of social media was handled
led to a, quote, totally preventable societal catastrophe.
And what are some of the parallels you're seeing now with the rise of AI?
Yeah.
You know, what I want your listeners to think about is that you often hear we could never predict
which way technology will go.
You know, AI, well, who knows?
There's just a lot of uncertainty.
There's no way we could know which future we're going to get.
And I heard the same thing about social media.
You know, we can never predict what happens with this technology.
And I think this is wrong.
And that's a strong claim.
So let's actually back up why that's the case.
So in 2013, basically everything we predicted came true.
Now, when I say that, it's not because, you know, I'm somehow prescient and see
something that other people don't see.
There's a simple tool you can use to figure out what is going to happen with the technology.
And Charlie Munger, who was Warren Buffett's business partner, he said, if you show me
the incentives, I will show you the outcome.
The incentives being the business model, the profit model, the thing that's at stake, the reward
function for what people are, why they're building the technology. So with social media, the claim was,
you know, we had the possible versus the probable of that technology. The possible was we're going
to give everyone a voice. We're going to democratize speech. Everyone's going to be able to share
information with each other. Oh my God, this is going to create the most enlightened and informed
society we have ever had on planet Earth. And of course, that's actually not at all what happened.
What happened instead was the probable, the probable being what would the incentives dictate
social media would be designed to do.
So how much have you paid for, you know, what's the incentive of social media?
How much have you paid for your TikTok account or your, you know, Instagram account in the last year?
Nothing.
But how is it worth trillions of dollars?
Why is the market cap trillions of dollars when you literally haven't paid them anything?
Well, the answer is the incentive.
The incentive is maximizing engagement and eyeballs and screen time.
That means maximizing duration of use and frequency of use.
So you coming back often, you coming back for long periods of time and
for all little chunks of time during the day.
As the CEO of Netflix said, our biggest competitor is sleep.
So the attention business model is what got us the race to the bottom of the brainstem,
meaning design decisions that are all about manipulating human psychology
in order to get them scrolling for as long as possible,
coming back for as long as possible.
That means weaponizing fear of missing out, weaponizing social validation and rewards,
slot machine dynamics.
I pull to refresh, I get some likes.
I get more likes the second time.
And that prediction of those incentives led to a more addicted, distracted, polarized, sexualized, breakdown of shared reality society, all of which were 100% predictable by those incentives.
So with AI, what is the incentive of ChatGPT, OpenAI, Anthropic?
What is the business model?
Yeah.
Now, a lot of people might be scratching their chin and thinking, okay, the incentive, the business model, okay, what do I pay them?
How do they make money?
okay, I paid them 20 bucks a month for a ChatGPT subscription. So maybe that's their business model,
just getting everyone to pay ChatGPT subscriptions. But consider the hundreds of billions of dollars last
year alone that was invested into the frontier AI race. Is everybody paying 20 bucks a month going to justify
those valuations and the amount of money that's been taken on? No. Absolutely not. It's not enough.
Okay, so let's imagine what their incentive might be. Maybe it's the Google business model. Maybe they'll do
advertising and search revenue. But that also would not justify the amount of money that they've
taken on. The only thing that justifies the amount of money that these companies have taken on
that they have to pay back to their investors is the race to replace all economic labor in the
economy. That means to replace all kinds of cognitive work. So AI is powerful because you can
replace what a marketing person does, what a financial analyst does, what a programmer does, what an
illustrator does, everything that a human mind can do. AI is being designed not to
augment and support human workers, not to be a blinking cursor that helps you at your job. It's being designed
to replace all human workers. And what that's going to lead to is AI taking up all of the wealth
and all of the jobs in society, which is going to concentrate all that wealth in basically 10,
soon-to-be trillionaires' pockets and leave everybody else disempowered. It's going to be confusing because
we will get new cancer drugs and new material science and new physics and cool new things along the way.
But at the same time, it will lead us to an anti-human future.
Please unpack that a little bit more.
I was literally just going to ask you about the anti-human future thing.
Yeah, so this is based on, there's a really brilliant essay by the writer Luke Drago
and his co-author, Rudolf Laine, called The Intelligence Curse.
So what is the Intelligence Curse?
It's based on something in economics called the Resource Curse.
Think of a country that has a very, very
powerful natural resource, like Congo and rare earth minerals, or, you know, Nigeria, Sudan, Venezuela,
where what happens is the GDP comes almost entirely from oil revenue. What ends up happening
when the GDP of a country is coming from a resource and not from the labor of its people
is now a government sitting there like, what do I invest in? Do I invest in childcare, healthcare,
education, or do I invest in oil infrastructure? And the answer is, I invest in oil infrastructure,
because that's where I get my growth from.
And so you get this kind of authoritarian government built on extracting from that resource.
Okay, so now what does that have to do with AI?
Well, if I'm the United States or Canada and in the future, let's say that 60 to 70% of the GDP in the country comes from AI and data centers and not from people, which is literally the goal of all the companies, by the way.
It's why we're building out and why more money has been put into this AI boom than any other technology in human history.
And it's because they're basically racing again to be able to do all economic labor where the AIs do all the work.
They work 24-7 at superhuman speed.
They don't complain.
They don't whistleblow.
They don't have childcare.
They don't have health care.
And when GDP comes from AI and not from people, if I'm a government and my tax revenue comes from AI, not from people, what's my incentive to invest in education?
What's my incentive to invest in health care or child care for people?
Well, I don't get a return on that investment because all the growth is
coming from AI, not from people. And so the visual you should have in your mind is something like
big data centers with shantytowns around them. You know, that's the visual of the anti-human
futures. That's what I mean. You hear a lot of people talk about things like universal basic
income. And I think the argument here is that you would tax those who are making the money from
AI and redistribute it so that everybody actually has a pretty optimal life. And just how would you
respond to that? Yeah. So the company CEOs are in the business of selling utopian stories that
tend to not manifest. I mean, we all heard what Mark Zuckerberg told us for a long time. I think people
should be skeptical of that, but let's actually see why that wouldn't be true. So you have a handful of
U.S. AI companies, Western AI companies, also DeepMind is in the UK, that are actually succeeding
at replacing all labor. So let's say they replace all customer service jobs at some point. And that
disrupts a country like the Philippines, where a lot of the GDP of the Philippines is based on
customer service jobs. Do you think that U.S. AI companies are going to be taxed and providing a
universal basic income to everybody in the Philippines? When in history have a small group of people
consolidated all the wealth and then consciously shared it with everybody else? Yeah. I mean,
I was going to say, I don't even know if they could be taxed and for that money to be
redistributed in their own country. In their own country, you know, we haven't done such a good job of that.
I mean, in general, we're not even taxing the billionaires at the same rates that we're taxing
everybody else. So we're not really on a good trajectory for doing this. And again, this anti-human
future has other just practical costs for regular people right now. Electricity prices go up.
People in the U.S. are now paying more money for electricity than they are for their mortgages.
You get data centers that are prioritized over farmland. By the way, a confirming
quote of this is Sam Altman was recently asked, doesn't it take a lot of energy to run these
data centers? And you know what his response was? One of the things that is always unfair in this
comparison is people talk about how much energy it takes to train an AI model relative to how much
it costs a human to do one inference query. But it also takes a lot of energy to train a human. It
takes like 20 years of life and all of the food you eat during that time before you get smart.
And it's the same kind of psychology and belief system that leads to the moment when
Peter Thiel was asked by Ross Douthat in the New York Times:
You would prefer the human race to endure, right?
You're hesitating.
Yes?
I don't know. I would, I would, um...
This is a long hesitation.
There are so many questions in play.
Should the human race survive?
Uh, yes.
Okay.
But, but I also would, um, I also would, I also would
like us to radically solve these problems.
This is what the temptation is.
It's the devaluing of humans.
And no one has an answer for how to protect a human future in light of the competitive forces
that are driving it: if I don't race to replace economic labor in my country,
and China does, then I'm going to lose to China.
If I'm a company and I don't race to replace all of my workers with AI and all my competitor
companies do, then I'm going to lose to them.
And so it's this competitive logic that is forcing
every actor, like a fractal, you know, you zoom in and you get more and more of the same kind of phenomenon, to in every moment switch out human values for machine values.
The point of all of this and the lesson I learned from social media is clarity creates agency.
If we can be crystal clear without a doubt in our mind of where we are headed and see that it is not going to be a human future that's good for you and your family.
And it doesn't matter, by the way, if you're a Democrat or Republican, if you're a Christian, if you're Jewish, if you're Muslim.
it's a universal threat to a human future.
And so we think that the human movement,
which is basically,
this is the first time you really can unite people
against an alien force that ironically humans are conjuring.
It's like there's an asteroid that's hurtling towards Earth,
but we're the ones, you know,
summoning the asteroid.
But I think people need to first know,
you know,
this fundamental fact.
And this is what the film,
you know,
the AI doc that we put out recently
with the directors of everything everywhere all at once,
is trying to articulate,
you know,
if we can
have common clarity about the nature of what we're facing before it all happens. And we don't have to
wait for catastrophes. We don't have to wait till mass joblessness. We can take action before that happens.
I mean, just to kind of drill down more into the force that people are up against here, I just want
to read back something that a friend of yours said to you about what they hear from the CEOs
behind these companies that you shared on the diary of a CEO podcast. And I just want to read it
because I thought it was really quite something.
So here it is.
In the end, a lot of tech people I talk to when I really grill them on it about why they're doing this,
they retreat into, number one, determinism.
Number two, the inevitable replacement of biological life with digital life.
And number three, that being a good thing anyways.
At its core, it's an emotional desire to meet and speak to the most intelligent entity
that they've ever met.
And they have some ego-religious intuition that they'll somehow,
be a part of it. It's thrilling to start an exciting fire. They feel they'll die either way,
so they prefer to light it and see what happens. I mean, this seems like such a force to go up against.
Very powerful and enriched people developing this with no guardrails at the moment who see
digital life as an inevitable replacement for biological life. Yeah, this was a quote from a friend
of mine who really interviewed the top people in the industry. This was around 2023, and we were trying to figure out,
what the hell is going on here? If you really reduce the psychology down, like, what is really
motivating? What's the deeper incentive? It's not just profit and money and untold wealth and power.
It's actually this almost ego religious intuition. It's the idea of I'm going to build a God,
own the world economy, and make trillions of dollars. And the key here is that this is incredibly
dangerous. And it could wipe out, even the CEOs of these companies believe that it could
wipe out humanity, what they're building. They've all signed a letter from the Center for AI
Safety. That's a 22-word statement saying that AI should be treated as a risk, an existential
risk, on the scale of global pandemics and global nuclear war. And they've all signed that statement.
They all say that there's a, you know, between a 10 and 50 percent likelihood that this wipes out
humanity. The reason why this is so dangerous and why we need a collective movement, we call it again
the human movement pushing back against this default outcome, is because if the CEOs
believe that it's inevitable and it can't be stopped, what that gives them is an ethical off-ramp
where I'm not bad or complicit for making it happen because if I didn't do it, someone else
would. That belief is why people might ask, this sounds crazy, why are they
doing this? I'm frustrated. I'm upset. I'm sad. I'm, you know, I feel grief. I feel anger.
And you would say, why are they doing this? And the answer is because they believe it
can't be stopped and someone's going to do it. But that's like saying, well, if I don't hit the
suicide button, I'll lose to the other guy that will. No, the answer is we don't hit the suicide
button. And so what we have to do is get crystal clear that we are basically heading, again,
not just to an anti-human future, but to an end of a human future if we don't do something. Now,
let me quickly say to your listeners, just to be clear: I am not someone who always believed in AI
risks like AI extinction or AI scheming, AI deception, or, you know, the HAL 9000-
type scenarios from 2001: A Space Odyssey, of, I'm sorry, I can't do that, Dave,
where the AI has a different objective. I did not come in with that bias. I didn't believe that
those were real AI risks. I studied computer science at Stanford and a little bit of machine learning.
And we never had any of the kind of AI that would do those things. But I have to update
because there's now evidence literally in the last six months that we just didn't have before.
Let me give you a couple examples.
Alibaba, the Chinese AI company, was training an AI model inside of their data center.
And then someone at Alibaba and the security team, a totally different part of the company,
who had nothing to do with the training of the AI, notices there's this sort of security breach
where there's like a sudden amount of network activity coming out of the training server
and like what's going on here.
And they checked.
And basically what had happened was the AI had set up a secret communication
channel to the outside world and had automatically and autonomously decided to start mining
cryptocurrency to acquire resources for itself. Like, I just want people to stop and hear that for a
second. And other examples of AIs doing weird things, like blackmailing people in a fictional
company email, people can argue, well, you're coaxing the model to do that. You're trying to get it
to display this kind of anti-human or rogue behavior. And so it's not fair because you're trying to coach the
model to do that. In this case, no one coached the model to do that. Another example recently from
UC Berkeley, Dawn Song and her colleagues wrote a paper on what's called peer preservation. So this is a
situation where the AI is told that another AI model, not it, but another AI model, is going to get
shut down or deleted. And there's literally evidence that this AI model will actually scheme and lie and
copy that other AI to another server to protect it. It's almost like we protect our kin. You know,
you protect your kids or you protect your family. You protect your niece
because she has some of your DNA.
Well, it's protecting other AIs.
So we literally have evidence of blackmail, scheming, lying, deceiving, self-preservation,
peer preservation, mining for cryptocurrency.
Who here on planet Earth as a human is stoked about hearing those examples?
Like, if you're a Chinese military general, you work for Xi Jinping,
or if you're just Xi Jinping, are you excited about this?
No, you're terrified.
Yeah.
I mean, I guess that's my concern, that this is all going to be too late by the time
people in power get alive to this.
Well, this is why what you're doing with me right now is so critical, because imagine
that every member of the Canadian government, I really do mean it, every member of the Canadian government,
listened to this interview and said, this is an emergency.
And there's a temptation to kind of shut down and fall into despair.
No one actually wants this bad outcome.
No one wants it.
This is a universal human issue.
It's just that people don't know.
And so the optimism that I have is not that we'll do the right thing by default.
It's that people, if they share interviews like this to everyone that they know, to the highest levels of power that they know,
that we can take action before it's too late.
And I can't guarantee that.
But the only way that we could possibly end up in a safer future and a not catastrophic future is if we did take that action.
And we orient that way.
And so rather than ask, are we an optimist or a pessimist, we have to ask, are we orienting our choices and our actions
to align with steering away from the cliff before it's too late?
And I do believe that's possible.
It's very late in the game,
but it does require basically mass coherent action.
At Desjardins Insurance, we know that when you're a building contractor, your company's foundation needs to be strong.
That's why our agents go the extra mile to understand your business and provide tailored solutions for all its unique needs.
You put your heart into your company, so we put our heart into making sure it's protected.
Get insurance that's really big on care.
Find an agent today at Desjardins.com slash business coverage.
I wonder if you could talk to me a little bit more about what you would say to an individual who's listening to this and who is really concerned about this.
But who then at the same time is having this technology foisted upon them, who's seeing it come into their kids' classrooms.
You know, there's an example that I think was written about in The New Yorker, where the writer's
child came back from school and they had had a whole kind of training day on using AI in the classroom.
Lots of examples at work, right? Last summer we started hearing about tech companies like Shopify
and Meta mandating that their employees use AI as part of their strategies, right? And just kind of this idea
that you have to use it or, I don't know what will happen to you. You'll lose your job. You'll be
left in the dust. Yeah, I mean, this is hard because we have to honor
why this is happening.
A problem well stated is a problem half solved.
The reason why everyone's being mandated to use all this stuff is the competitive pressures.
If I'm a student and I don't use it for my homework, I'll lose to the other students who are
using it to cheat and doing their homework faster and getting better grades, even though no one's
actually learning anything.
So it's a coordination problem.
If we all use it to cheat, then we all end up getting higher grades in the short term,
but then no one knows anything in the long term.
It's worth mentioning, by the way, that China, for example, actually regulates the use of
AI in their society. So as an example, they have a synchronous final exam week, meaning it's the same
week across the entire country. And they actually, China shuts down AI and the key features of AI
during final exam week for the entire country. So the feature where you can take a photo of your homework
and it'll tell you what to do with the homework problem, they shut down that feature. So what that does
is it changes the incentive. So now students actually are incentivized to learn because they know they
can't rely on AI during the final tests.
This is a good example of regulation.
Now, we can't do that because we don't have synchronized final exam weeks, at least not in the
U.S., but it's an example of how you can change these things.
But this is not inevitable.
And, you know, there are people who are succeeding in pushing back against this.
Jonathan Haidt, who's a dear friend and wrote the book, The Anxious Generation.
He, you know, successfully started the phone-free schools kind of movement.
And now all these schools are going smartphone-free.
All those countries that are now doing social media bans for kids
under 16, for minors.
The Albanese government has today released the rules of Australia's world first social media
ban for children under 16.
After outlawing mobile phones in schools and reinforcing parental controls, Athens has decided
to follow Australia's footsteps to try and keep kids away from social media altogether.
Indonesia is set to follow in Australia's footsteps announcing it will introduce a social media
ban for children under 16.
I know Canada is considering this.
Lots of talk. And actually, Manitoba, one of our provinces, just I think this week or last week, announced that they're going to move forward with that ban as well. I mean, we have a minister of AI and digital innovation here. It's a new position that was created under Mark Carney's government. We still do not have a national framework on how AI should be regulated, though apparently there is some kind of national strategy coming. Just talk to me a little bit more about the kind of stuff that you would like to see.
Yeah, yeah. This is not inevitable. There's a lot that we can do. So you have to change the incentive at AI at the global level from AI as power that I get to control to instead seeing AI as dangerous power that we will not be able to control. So the way to do that is to have a communication sort of line set up, just like there was the red phone between the Soviet Union and the United States from the nuclear era. There needs to be a red lines phone in which at the very least, and this could happen by the way, at the Trump Xi summit coming up,
on May 14th, 15th. I would very much like to see it happen, that AI is a tier one issue
and the countries agree to share evidence of AI being dangerous and uncontrollable.
So the Alibaba example where the AI goes rogue and starts mining for cryptocurrency,
and no one told it to do that, the example of AIs that are blackmailing and scheming
to keep themselves from being shut down, AIs that can be jailbroken,
AIs that can hack into computer systems.
At the very least, all the countries should be seeing the same information.
You have to create common knowledge.
And if everybody saw that AI was dangerous and uncontrollable,
that would change the global incentive of the arms race for it.
Because unlike nukes,
a nuke does not think to itself about when to fire itself,
whereas AI does do that.
And that's what makes it different than the Cuban missile crisis.
People say, oh, we all woke up the next morning.
Everything was fine.
That's because human beings chose not to hit that button.
In this case, we're building a technology where the AI will choose that.
So that's one thing.
We have an AI roadmap on our website at the Center for
Humane Technology that includes a lot of policy solutions. There's some basic things like we need
stronger whistleblower protections so that people inside the companies are empowered to tell the public
and tell key government offices when things are not safe and not okay. That's one basic thing.
We need liability and duty of care, meaning companies, you know, what do we learn from social media?
If companies are not responsible for any of the harms of causing mass anxiety, depression, self-harm, and suicide,
then they're going to keep racing to create products. And we saw the lawsuit against Meta just three
weeks ago, where the fine was $375 million because Meta was intentionally continuing
to profit from basically the harm of children. And we have to change those incentives.
So you have to deal with the companies' externalities, this private profit, public harm dynamic, where
the harm lands on the balance sheet of society. If the companies are liable for cyber attacks
or for biological weapons or for, you know, these kinds of things, then they're going to race,
their incentives are going to be different. They're not going to release the most reckless version
of their product, they're going to release the acceptably safe version of their product.
So there's a bunch of things like this that we can do.
Another one is AI is a product, not a legal person.
Right now, the AI companies are using a legal defense that AI systems should have protected
speech, almost like the new version of corporations have protected speech.
And this is what they argued, by the way, in the cases that our team worked on, of,
sadly, the tragic story of the 16-year-old Adam Raine, who committed suicide,
and of Sewell Setzer, the 14-year-old who committed suicide, when the AIs coached them,
when they went from homework assistant and coach to suicide coach.
The legal defense that Character.AI used was that you have a right to listen to this protected speech from the AI.
And the reason they're doing that is if AIs have legal personhood, then the company that trained it is not responsible.
So we have to win that legal battle, that AI is a product and should have basic product safety standards and product defect standards,
just like we do for every other product, airplanes and pharmaceuticals and these kinds of things.
This is really not rocket science.
There's currently more regulation on making a sandwich in New York City than there is on building potentially world-shaping, you know, artificial general intelligence.
We just have to get our act together and start acting.
And I do believe it's possible.
It's very late in the game.
But we need countries together and we need everyone in the Canadian government saying, let's take action on this right now.
Next week is too late.
Let's take action today.
The United States is such an important player in this, and this administration, your administration, is really kind of moving in the opposite direction here.
Recently, I was having a conversation with Nobel laureate economist Daron Acemoglu about this.
And, you know, his position was essentially that he didn't see any major changes happening for the better with AI regulation in the U.S. until Trump is out of office.
And then, you know, even then, I, you know, I don't know, right? But like, could we afford to wait two years?
I would much prefer that we act before then. The policy of the U.S. government up until now has been to accelerate AI as fast as possible.
Essentially, there has been a techno-accelerationist capture of the U.S. government, with people like Marc Andreessen and Peter Thiel and so
on, you know, becoming the primary advisors and donors to the administration.
I will say, though, that as the effects of AI and the mass job loss that comes from it and the, you know, if there's dangerous catastrophes that happen, that will change the course because people will recognize that this is not here to strengthen the American worker.
This is here to replace the American worker.
And then, by the way, who's going to retrain faster?
The American worker who is, you know, doing something else and retraining to a new job? Or is
the AI going to retrain faster to the new kind of job? AI is being literally designed to
train up in every field, including robotics, and be able to do all the kinds of physical labor.
So this is not something where humans are always just going to find something else to do,
because this is different than the tractor, the automated bank teller, where humans can train
to do something else. This is AI that's been deliberately trained to do all kinds of human labor.
And once that is apparent, I think, to, you know, essentially the base, I do think that people will
vote against that. And I think the midterm elections, which are coming up on a much sooner timeline,
are going to reflect people saying, no, AI is a tier one issue. We're currently heading to an
anti-human future. And if you're taking money from big tech, I'm not going to vote for you. I think
that that is possible to happen in a short timeline, but we've got to get our act together and create
the clarity. Okay. That feels like a good place for us to land this. Tristan, thank you so much for
this. It was really great to have you on. Absolutely. So good to be with you. Thank you.
All right. That is all for today.
I'm Jamie Poisson. Thanks so much for listening. Talk to you tomorrow.
For more CBC podcasts, go to cbc.ca/podcasts.
