Front Burner - Escape, immortality, AI: Silicon Valley's blueprint for the future
Episode Date: August 22, 2025
Elon Musk wants a million people living on Mars within 20 years. Jeff Bezos imagines a trillion humans in space, living in a constellation of space stations the size of major cities within a few generations. Sam Altman, CEO of OpenAI, is preparing for a future where rogue AI could destroy civilization, and is stockpiling land, gas masks, and gold in the event it leads to disaster. These plans, which appear ripped from the world of science fiction, instead represent designs for the future held by some of the most powerful people in the world. Why are tech billionaires so consumed with escaping Earth, and what does it mean for the rest of us? Today, guest Adam Becker, an astrophysicist, journalist, and author of More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity, joins Front Burner to explain the dystopian future being planned by the tech elite: one defined by ideas like space colonization, "technological salvation," AI superintelligence, and the pursuit of eternal life.
For transcripts of Front Burner, please visit: https://www.cbc.ca/radio/frontburner/transcripts
Transcript
You know, shopping for a car should be exciting, not exhausting, and that's where Car Gurus comes in.
They have advanced search tools, unbiased deal ratings, and price history, so you know a great deal
when you see one. It's no wonder Car Gurus is the number one rated car shopping app in Canada
on the Apple app and Google Play, according to AppFollow. Buy your next car today with Car Gurus
and make sure your big deal is the best deal at CarGurus.ca. That's C-A-R-G-U-R-U-S dot C-A. CarGurus.ca. This is a CBC podcast. Hi, I'm Jonathan Mopitzy, in for Jayme Poisson. Elon Musk is selling the idea that one million people will be living on
Mars in just 20 years. OpenAI CEO Sam Altman has talked about the possibility of rogue AI becoming so powerful it cannot be controlled, what some call the singularity.
Amazon CEO Jeff Bezos said we should have as many as one trillion people in space within
a few generations, living in a constellation of space stations the size of major cities.
Here's another tech billionaire, Peter Thiel, talking about the potential of merging humans with machines, or transhumanism.
You would prefer the human race to endure, right?
You're hesitating.
Yes. I don't know. I, I would, I would, um. This is a long hesitation. There's so many questions implicit in this. Should the human race survive? Uh, yes. Okay. But I, I also would like us to, to radically solve these problems.
All of these ideas are what our guest today, Adam Becker, refers to as a philosophy of technological salvation and a vision of escape.
Becker is a journalist, astrophysicist, and author of the book More Everything Forever.
He recently spoke to Jamie about the visions for the future shared by many of the world's most
powerful people, visions which could very well define our future. Here's their conversation.
Adam, hey, it's great to have you. Thanks, it's great to be here.
Before we get into some specific examples, just walk me through what spurred your interest in this question of what tech oligarchs and billionaires are planning for their and for our future?
Sure. So the short glib answer that I could give is that I've lived out here in the San Francisco Bay area now for over 10 years.
And I just went to one too many parties where people were going on and on about the glorious future of AI and the glorious future in space.
And I thought, you know, somebody's really got to correct the record here.
None of that stuff's going to happen.
Like throw some cold water on this.
Yeah, exactly.
But yeah, I mean, even before this most recent election here in the U.S., because, you know, the book was done before that election happened.
It was already very clear that tech oligarchs had too much power and that they had ideas about the future that just did not work, you know, that they were more like science fiction than real science, but they were presenting them to the public as if they were, you know, not just plausible but inevitable ideas about space colonization and super intelligent AI and so on and so forth. And I thought that's really dangerous. Like people will believe these guys.
because we have a tendency to think that billionaires are experts.
And really, they're not experts in science or technology, even though they're, you know, CEOs of tech companies.
They're just experts in how to make billions of dollars and sometimes not even experts in that.
A term I see come up again and again is techno-utopianism.
And just, yeah, what does that mean?
It's part of this larger idea that tech can solve every problem, that if we just make technological progress go faster, we can invent a technological solution for any problem at all. And therefore,
you know, we don't need to worry about big social or political problems. We can just focus on making
tech go faster and it will lead to, you know, a utopia for everybody. And of course, that's not
true. I want to do some specifics here. Yeah, yeah. Let's start with space colonization. Sure.
So Elon Musk, for example, wants a million people on Mars living in a self-sustaining colony within the next 20 years.
Can you walk me through what his plans are for humanity on the red planet?
Yeah. So he has said over and over again that, you know, we need a backup for humanity and the way to make this happen,
the way to make sure that humanity will survive, even if there's some catastrophe on Earth like a giant asteroid strike, is to make sure.
that there are a million people on Mars by 2050,
and that they will have everything they need to be self-sufficient
so that they can survive even if the rockets from Earth stop coming.
And then we'll have the first humans lay the groundwork for a permanent presence on the surface.
This is just a sort of rough idea of what things will be like for the first city on Mars.
My guess is we'll probably put the launch pads a little further away, or the landing pads, just in case.
And he sees this as this grand vision for saving humanity and our first step out into the cosmos
and becoming this space-faring species, which he sees as our inevitable destiny.
The only problem is that none of that's true.
Yes.
And that plan won't work.
Well, just tell me more about why it won't work.
You know, you've said that life on Mars would be more uninhabitable than Earth after a nuclear holocaust.
And you've said that the gravity is too low, the radiation levels are too high.
Yeah.
There's basically no air and the dirt is made of poison.
Real sell.
Yeah.
Yeah, yeah, yeah.
I mean, basically the problem is that Mars is absolutely miserable.
I mean, it's so bad that, you know, not only would Earth be better after a nuclear war,
it would be better after an asteroid strike like the one that killed off the dinosaurs 66 million years ago.
You know, that was the single worst day in the history of complex life on Earth. There was, you know, this pulse of heat that cooked
animals where they stood and there was, you know, widespread wildfires. And then there was
dust and sulfur dioxide that blotted out the sun for like a decade and cut the bottom
out of the food chain. And something like 75% of all animal species went extinct, and 99-point-whatever percent of all individual organisms died. And that was still, like, the day that
happened or any time in the following 10 years, it was still nicer here on Earth and easier to
survive than it is on Mars or has been at basically any point in Mars' history. And we know that
because mammals survived, unaided without like spacesuits here on Earth. There were mammals before
that and there are mammals afterwards. And we know that because we're here. We survive. We
descended from those mammals. There is no mammal then or now that could survive unaided on the
surface of Mars for more than a couple of minutes tops, if that. You know, the air pressure is less
than 1% that of Earth's at sea level, and there's no oxygen. So it's much, much worse than,
say, the top of Mount Everest. If you were on the surface of Mars without a spacesuit, you would
die by asphyxiation while the saliva on your tongue boiled off because the air pressure is so low.
Jesus. Yeah. So why is he so seized with this idea? He seems to think that it's this necessary, civilization-saving thing.
Look, Elon Musk talks a good game. He says, you know, he's doing this because he wants to save
humanity. This is like an incredible thing to have like this amazing city on Mars and a new world.
And it's also an opportunity to, I think, for the Martians to rethink how they want civilization to be.
There's a lot of freedom and opportunity in Mars to do a recompile on civilization.
But I think that his actions, especially in the last few years and really the last few months,
make it very clear that he doesn't seem to care very much about humans.
Just the cuts that he made in the U.S. government with DOGE: the best estimates are that, through those cuts to USAID alone, he could be responsible for millions of deaths. And he does not seem to care. It's very convenient for him to be able to say, well, look, you can't get in my way, because I'm going to save humanity by taking us to Mars and space. You know, there's a chance that maybe this is all just a grift and he doesn't believe it, and it's just a thing that he's saying because it sounds good.
But I really think that, like, insofar as he believes in anything,
he actually believes in this because it sounds good and he's completely insulated
from the consequences of his actions.
And he doesn't need to listen to anybody like me telling him that he's wrong because
he has so much money that doesn't matter that his dreams are impossible.
Like, he'll try to pursue them anyway.
Lots of people will die.
He'll fail.
and he won't be held accountable.
Can you talk to me about this user agreement at Starlink?
It's so wild. It reads, quote,
the parties recognize Mars as a free planet
and that no earth-based government
has authority or sovereignty over Martian activities.
What is that about how does it fit into the conversation
that we're having?
Look, that's part of the dream, right?
A lot of what's going on with Musk
and the rest of these tech billionaires, is they want to transcend all possible limits by pursuing perpetual growth and solving every single problem with technology, which in my book I call the ideology of
technological salvation.
So Starlink Internet is what's being used to pay for humanity getting to Mars.
So I'd just like to thank everyone out there who has bought Starlink, because you're helping secure the future of civilization and helping make life multi-planetary and helping make humanity a space-faring civilization. Thank you.
And one of the things that they want to transcend more desperately than anything else is legal
limits. They want to escape from any form of accountability, responsibility, safety regulations.
Another tech billionaire, Mark Andreessen, is extremely explicit about that, actually,
as is, you know, like a bunch of the libertarian escape plans that have been funded by Peter Thiel,
for example. But for Musk, that's one of the benefits of Mars. He sees Mars as a place where he can be in
control and nobody can tell him no from the government. The same way that he complained about, I think
it was Elizabeth Warren, being like a bossy mom telling him no. Musk and Senator Elizabeth Warren going
at it on Twitter in a public tussle, the senator first tweeting this, quote, let's change the rigged
tax code so the Person of the Year will actually pay taxes and stop freeloading off everybody else. Then he fires back. He says to her, stop projecting, the next day adding the ultimate insult, quote, please don't call the manager on me, Senator Karen.
Which, you know, there's a whole lot of misogyny there to unpack, but like he also just
really doesn't like the idea that anybody in the government could tell him no, which I think
is also part of why he backed Donald Trump. But yeah, so he sees Mars as an escape. And of course,
The irony is that there are treaties that already regulate, like, what's going on in space
and who has control over various bases that could eventually be set up in space in the future.
And you can't just abrogate the international outer space treaty of, I think, 1967 by, you know, a clause in an end-user license agreement.
But then again, when has that kind of legal detail mattered to Elon Musk?
Since we're on the topic of conquering outer space,
I think it would be good to talk about where Jeff Bezos fits into this.
He wants humans living up in space.
But are his plans different from Musk's?
Where does he fit into this?
And how does he think about space colonization as helping to, like, solve the problems here on Earth?
You know, credit where credit is due, actually, before I get into it, he acknowledges,
he knows that Mars is terrible and he's even made fun of Elon Musk for his obsession with
Mars.
You know, Bezos has said that Mars is terrible, and he's right.
Mars is terrible.
But Bezos's ideas for space are not significantly better.
Bezos wants to build big cylindrical rotating space stations, rotating so there will be, like, you know, a sort of artificial gravity on the inside. And he wants these things to be like truly
enormous, big enough that each one can hold maybe a million people. And he wants to put about a
million of those into space or something along those lines because he has said he wants a future
where there are a trillion people living and working in space. We have two choices. We either go
into space, or we switch over to a civilization of stasis. And personally, I do not like the idea
of stasis. Our grandchildren and their grandchildren will live in a much better world if they can
continue to advance and develop and use more energy and all of the things that we've enjoyed
for hundreds of years as a civilization. And that is not feasible, to put it mildly. Rather than, like Musk, saying he wants to have a million people on Mars by 2050, Bezos, again, credit where credit is due, has said that, you know, this is something he sees for us a couple hundred years down the line, which is, you know, somewhat more plausible. But it's still not happening, and there are many reasons why that is a very bad idea, even putting aside the physical limitations on how much material that would take, how much energy that would take, you know, the kind of technology you would need to build such things. Just sociologically, that's a really bad idea.
You put a million people into, like, essentially a can in space, and one act of lunacy could lead
to a million people dying in, you know, less time than it takes to eat an apple just by exposing
the interior to the vacuum of space. And that would be it. But I think the nicest thing I can say
about it is, it does seem very clear that Bezos, at least, is really earnest and genuine about this.
He's been going on and on about how this is a great plan since he was in high school. So it's not like this is something that he just came up with once he
became a billionaire. This is, this is like a deeply held belief. And, you know, and I think once again,
it's kind of convenient. He says with a trillion people living in space, first of all, we won't
have to worry about limited resources here on Earth. He's very explicit about that. And second,
you know, with a trillion people, there will be, he said, you know, a thousand Mozarts and a thousand Einsteins. And, you know, what about the, what about the people with that level of talent who
don't have the chance to develop or express it who are alive today, but can't do anything
with their skills or abilities because, you know, they're just struggling to survive due to
phenomenal levels of wealth inequality in the world today?
Well, look, like, there's this 1970 poem by Gil Scott-Heron.
Yeah, yeah, yeah, Whitey on the Moon.
Yeah, yeah. Essentially, it's a critique of the Apollo 11 moon landing, juxtaposing the resources spent on space travel at a time when Black America and much of the country was struggling with abject poverty.
Our producer, Matt, who has been working on this interview
with you, he's been talking a lot about that poem
in the context of your book.
Because, like, this is one of the major critiques here.
These space ambitions, they disincentivize investment in the issues facing mankind today: hunger, homelessness. And just, you know, talk to me a little bit more about the philosophy of long-termism
and how these like investments in AI and future tech stand to take away from the issues
concerning life here on Earth right now. And I guess also give these billionaires an excuse,
you know, not to invest in that stuff here, right? Yeah, yeah. I mean, it's a great poem and I think
gets at a lot of the problems here.
I'm a big fan of space, right? I have a PhD in cosmology. I think that studying space is really
incredible and important. And also one of the oldest practices that humans engage in, right? We have
been studying the night sky for much, much longer than recorded history. We know that the ancients
did that before the invention of writing; you know, some of the earliest pieces of art are art, you know, of the
night sky. I think it's important to look up at the sky and be curious. But that doesn't mean
that our future lies in the stars. I think that when you make that jump, not only are you
ignoring all of those challenges that I outlined, which, you know, are not just challenges with
Mars. And now I'm going to, like, give credit where credit is due to Musk: he did correctly
identify that Mars is the next most habitable planetary body or something like a planet in the
solar system after Earth. You know, like including all of the moons and whatnot and everything,
like it's still, Mars is the next most habitable and it's that bad. But there is this idea,
and it goes along with this whole ideology of technological salvation, that we have to go out
to space, we have to find a way to, you know, spread out into the universe because that's the way
to maximize the number of people that could ever exist. And then if we can make their lives
happier, then that's one of the best things that we could do with our time. And we should focus
on that, almost to the exclusion of making people's lives better here and now. And this is this
philosophy of long-termism that, you know, is a branch of a different philosophy called
effective altruism, and this was influential for people like Sam Bankman-Fried, right, the disgraced
crypto mogul. And there's a whole like research center at Oxford University about this stuff and a
whole bunch of extraordinarily well-funded student groups at many universities around the U.S.
and the UK and around the world. And they're extraordinarily well-funded because this is a
philosophy that's extremely amenable to billionaires, right? Because it says, oh, forget about the
problems of here and now, like, oh, global warming or wealth inequality,
or, you know, the threat of nuclear war, or crumbling trust in public institutions and
the erosion of democracy, those problems, sure, yeah, they're important, but, you know,
what's really important is ensuring the glorious future in space and avoiding an AI
apocalypse, and these things are just so much more important. And this is not to say that
effective altruists don't, you know, also make contributions to dealing with problems here
and now, but they are less interested in that, and they are less interested in addressing,
like, systemic causes of problems here and now, and more interested in, like, donating
money to help people in impoverished places in the developing world, rather than, like, asking
questions, like, hey, why is it that some countries have so much more money than others?
And what can we do about that? And, like, why is it some people, even in developed countries,
have so much more money than others, and is that just? And what do we do about that?
Because you can't solve those problems just by donating money to, you know, a nonprofit.
We are gathered here today to celebrate life's big milestones.
Do you promise to stand together through home purchases,
auto upgrades, and surprise dents and dings?
We do.
To embrace life's big moments for any adorable co-drivers down the road.
We do.
Then with the caring support of Desjardins Insurance,
I pronounce you covered for home, auto, and flexible life insurance.
For life's big milestones, get insurance that's really big on care
at Desjardins.com slash care.
Let's dig into AI.
So you have written that space is a location
of the tech billionaires' futuristic dreams,
but AI is the magic that fuels them,
specifically this idea of artificial superintelligence,
which is essentially, I think it's fair to say, a hypothetical form of AI that would possess a level of intelligence far exceeding that of humans across all areas of knowledge and problem solving. The possible danger of AI superintelligence has led the likes of Sam Altman to prep for disaster.
Yeah.
You know, he's talked about stockpiling guns, gold, potassium, and a big patch of land in Big Sur that he could fly to. And so, like, yeah, just what role does an inherent sense of danger and cataclysm play in the vision of the future shared by so many of the guys you write about in your book?
I mean, it makes things simple.
Right? Like a complicated, messy world requires hard work, sacrifice, difficult decisions, and can be uncomfortable and scary. But if the world is very simple, if there's a utopia that you're aiming at that would solve all problems and there's an apocalypse that threatens to derail it that would kill everybody, then clearly the most important thing is to find a way to aim at the utopia safely while avoiding the apocalypse.
And this threat of super intelligent AI destroying the world and killing everybody, and the promise of a super intelligent AI, you know, achieving godlike powers but in service of humanity to solve every single problem, that fits the bill very nicely.
And this is not to say that like these guys don't really believe it.
I don't know that Sam Altman really believes it; Sam Altman's a slippery guy.
But there are a lot of people, both tech billionaires and, like, the intellectual movements, pseudo-intellectual movements that they fund, like effective altruists, long-termists, effective accelerationists. These are all, I know it's a lot of jargon that I'm throwing at you, but these are all subcultures out here in the Bay Area that think about these ideas and have related sets of ideas about how the future works, mostly around, like, this AI utopia or an AI apocalypse that threatens to derail it. And so you get people making claims like, oh, you know, we have to be willing
to run the risk of nuclear war to avoid an AI superintelligence coming into existence before we can
figure out how to, you know, align it safely with the interests of humanity. Of course, the one
article of faith that's shared here is not just that technology can solve all problems, but
that a super intelligent AI is not just possible, but inevitable, coming soon, and could do
literally anything. And there's no reason to believe any of that and a lot of reason to believe
that it's not true. Tell me more about that. I'm like so curious to hear how someone like you
is thinking about AI and like, could it be good? Like, is it going to happen the way that anybody
says it might happen? I don't, I have no idea how to think about it. I know the feeling.
Look, this idea of an AI god, a benevolent one or a destructive one, it rests on so many extraordinarily shaky assumptions.
You know, the basic idea is it goes back to, you know, this concept, the singularity or an intelligence explosion, the idea that you get an AI that's good enough to build a better one that can then build an even better one and so on and so forth until, you know, it leads to this runaway explosion of intelligence and that leads in short order to
a godlike artificial intelligence. That scenario is like the fundamental scenario that all of
these guys, and they are almost all guys, are talking about. There's a lot of problems with that.
Intelligence is not just like a number that you can crank up and down like that. There is,
in fact, no evidence for a single trait called intelligence.
You know, this intelligence explosion scenario just assumes so many things about the nature of intelligence, the nature of AI, none of which are true.
There's no reason to believe that this is going to happen.
And the scenario is primarily, you know, it's one that ultimately originated in science fiction.
And if you go and talk with people who actually study intelligence and people who've been working on artificial intelligence for a long time, yeah, there's a couple of them who really believe in this stuff, but most of them don't.
And certainly like the cognitive scientists and psychologists who work on intelligence, they do not believe that there is like some single thing intelligence that you can crank up and down like you would need to be able to for this scenario to go through.
It just doesn't work that way.
And, you know, and this also forgets about things like diminishing returns, the fact that, you know, if you have exponential growth in something like intelligence, whatever that is or anything else, and you throw more resources.
at it, the increase in intelligence or whatever it is that you're going for, eventually
it's going to peter out.
And we've already seen this with computer technology, right?
Moore's Law said that we were going to be able to cram more and more transistors into the same space.
It would double with alarming regularity.
And that held for decades.
And now it's fallen apart because we've hit the limit.
And, you know, we've gotten transistors down to the size of just a few atoms.
And also, of course, there are all sorts of problems that you can't just solve with, quote, more intelligence, right?
Like, because there's all sorts of problems in the world today that, you know, throwing more intelligence and more technology at those problems is not going to make them easier.
You know, how is more intelligence and more technology going to bring about a lasting, durable peace in the Middle East, for example, right?
That's not a problem of intelligence.
It's a problem of politics and persuasion.
And these are difficult problems; there's no single thing like intelligence that just solves them.
Do you worry about rogue AI?
Huh, cool.
Yeah.
That's helpful.
That makes you feel better?
Yeah.
I mean, look,
What I worry about is AI systems being used to further concentrate wealth and power in the hands of a few and how that will make it harder for humanity to address the problems that those few wealthy people are creating or making worse, right?
Like, a lot of the reason why we as a society, like a global society, have had trouble addressing climate change is that there are powerful wealthy people who do not want us to address climate change, because they are profiting off of the existing systems, and
they, you know, are concerned about the short-term loss of profits that they would feel,
ignoring that eventually there would be massive economic consequences if global warming
is just left unchecked. So that's what I worry about with AI, that it's just going to take problems like that and make them worse. And we already see signs of, you know, people using AI as an
excuse to just continue business as usual in the face of massive problems like climate
change. You know, you've got Eric Schmidt, former CEO of Google and billionaire tech venture
capitalist. He said just a few months ago, oh, we're not going to meet our climate goals anyway,
so, you know, we should use even more energy and power and resources to pour into AI to get
to superintelligent AGI faster so it can solve climate change for us. Yeah. And that's just not how the
world works. That's not how anything works. It's such shoddy thinking to say something like
that. And so reckless, what are you going to do? You're going to pour massive amounts of resources
and burn even more carbon to bring a godlike machine that nobody can even define and nobody
knows how to build online. And then even if you did do that and you built it and you turned it on,
what would it say? It would say, oh, well, you want to deal with climate change? You probably shouldn't have built me.
Big mistake.
Yeah.
Talking a bit more about the politics of all of this, you have referred to the tech billionaires
embracing Donald Trump as less a shift and more of a homecoming.
And what do you mean by that?
Well, there's always been this sort of disdain for government within the tech industry in Silicon Valley, that, you know, government is the problem, that they need to leave us alone, that nobody except us few tech elite can actually understand what this technology is or where it's taking us as a society.
And so democratic accountability, like, you know, the ability of the people through their elected government to rein in tech, that that is something that can't be allowed; and only we can guide the future of tech, and the future
of tech is really the only thing that matters, so only we can be trusted with the future of
humanity. And when you start talking like that, it's just a hop, skip, and a jump to embracing
fascism, right? And so it's not super surprising to see somebody like, say, Mark Andreessen,
lovingly paraphrasing the author of the fascist manifesto.
To paraphrase a manifesto of a different time and place, beauty exists
only in struggle. There is no masterpiece that has not an aggressive character. Technology must be
a violent assault on the forces of the unknown to force them to bow before man. We believe that we are,
have been, and always will be the masters of technology, not mastered by technology.
And it's not surprising to see lots of people in the tech industry fawning over an outright monarchist like Curtis Yarvin, who himself is a creature of Silicon Valley,
and have, you know, the vice president,
also a creature of Silicon Valley, J.D. Vance,
speaking favorably about Yarvin's ideas.
I think that what Trump should do,
like if I was giving him one piece of advice,
fire every single mid-level bureaucrat,
every civil servant in the administrative state,
replace them with our people.
And when the courts...
These sorts of ideas about, you know, government as the problem, and trust us, we will guide the future without input from, you know, the normies out there and the rest of society. That's just something that's, like, baked very
deep into the founding ethos of Silicon Valley. And you can see this in a lot of earlier work
on what exactly is the ideology at play. Okay. Adam Becker, thank you very much for this. It was really
such a pleasure
it's not every day
that I get to spend
so much time
with an astrophysicist
Well, thank you for having me
this is a lot of fun
All right, that's all for today.
Frontburner was produced this week
by Joitha Schengupta,
Matthew Amha,
McCannsey Cameron,
and Matt Mews.
Our sound design was by Sam McNulty.
Our YouTube producer is John Lee.
Music is by Joseph Shabason.
Our executive producer this week was Ali Janes.
The show was hosted this week by me, Jonathan Mopitzy, and by Ali Janes.
Thanks for listening to Frontburner.
We'll talk to you on Monday.