Andrew Schulz's Flagrant with Akaash Singh - AI Expert on Robot Girlfriends, If Humanity Is Cooked, & Sam Altman's God Fetish | Roman Yampolskiy
Episode Date: October 10, 2025
YERRR – the boys go full Skynet. AI expert Roman Yampolskiy explains why the UN’s ignoring AI safety, how Sam Altman’s “God complex” might get us all killed, whether we can actually unplug AI, and what happens when AI takes our jobs. All that and more on this special bonus episode of FLAGRANT. INDULGE.
00:00 Is the UN taking AI safety seriously?
01:17 Roman Yampolskiy explains the global AI arms race
03:33 What happens when AI surpasses human intelligence
05:08 The terrifying AI timeline
06:20 Tools to agents: AGI vs. Super Intelligence
07:40 Can AI make good art or tell a funny joke?
09:45 Why we still crave human creativity
12:45 Robots & Humanoids
13:50 The coming of mass unemployment
15:16 Will productivity save us or will AI just replace everyone?
19:11 Best vs. worst case
22:30 Why unplugging AI won’t work
26:25 Super intelligence & why alignment is impossible
33:10 Global treaties, nukes, and the illusion of control
36:05 Could anyone survive by going off-grid?
38:11 Consciousness, empathy, and whether AI will be benevolent
46:30 Living with inevitability: “Enjoy your life anyway.”
57:15 Neuralink, simulation theory & merging with machines
1:00:15 Movies that get AI right
1:01:53 Are we already in a simulation?
1:09:02 How Roman finds happiness in life
1:15:00 Is any job safe?
1:18:20 Global AI arms race
1:23:45 Sam Altman's God Complex
1:28:00 The probability of AI apocalypse
1:31:11 Final thoughts & saying "Thank You" to AI
1:37:58 The funniest joke AI ever made
Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
Do you think the UN is taking AI safety seriously?
Unfortunately, no.
With nuclear, we had this concept of mutually assured destruction.
No matter who starts the war, we all regret participating.
It's the same with superintelligence.
If it's uncontrolled, it doesn't matter who creates it.
Good guys, bad guys, we all get...
Is there a way to stop it then?
You think we're in a simulation?
I think we are in a simulation.
Resources to develop this type of technology become cheaper and cheaper every year.
So that means if we have superintelligence,
it would be superconscious, and to it, we would be like animals.
Would you do Neuralink?
It's an easy sell if you have disabilities.
It's amazing for people who are quadriplegic, who are blind.
What if you can't stop eating?
It's beyond health.
Do we solve every problem in the human world with the help of AI?
Just because you solve some problems doesn't mean you become happier.
If all the things I care about can be done by an assistant, what am I doing with my life?
Meditating and getting blown by your sex robot.
I don't like meditating.
Dr. Roman Yampolskiy.
I said that correctly.
You got it?
I want to talk about this,
why the funniest AI joke will not be funny.
But before we get to that, you just spoke at the UN.
At one of the side events,
I wasn't at the General Assembly with Trump or anything, no.
Okay, but what, yeah, that doesn't matter.
What did you say to them?
I want to know what was said.
They had kind of side events about standards for technology,
standards for AI development and educational skills in terms of AI, yeah.
Okay, so do you think the UN is taking you seriously, is taking AI safety seriously?
Unfortunately, no. A lot of what they do is about kind of existing problems we see
with algorithms, with discrimination. They don't look sufficiently into the future
where we get more advanced AI, AI which can replace all of us and be very dangerous.
You know, I feel very vindicated that you're on this couch because I, like,
for like 10 years have been screaming that AI will kill us. This is inevitable. Why do we keep
going down this road? These people laugh at me. I'm a bit more skeptical, maybe optimistic
is another way to put it. And I have my hopes that we're going to be able to iterate and
maybe put in some guardrails, I hope. And I'm the furthest: yeah, we have nothing to worry
about. Can you... Guardrails is a good, timely point. We just released the Red Lines,
a document signed by like 200 top scientists, Nobel Prize winners, at the UN, like yesterday.
Oh, that's amazing.
It's very fresh.
Okay, so wait, what are the, so now there are guardrails being put in place?
No, there are people saying we should have guardrails.
It's very important.
Nothing is in place.
Could you just give us the current state of AI generally and why you are so, I guess,
vocal about AI safety?
Like, what are, where are we at right now and where could we go?
There is an arms race between top corporations.
So Google, OpenAI, Microsoft, all the others, and nation states as well, China, US, and others, to get beyond human capacity first.
Sometimes we call it AI, but superintelligence is really where it's headed.
And it seems that if you just do more in terms of compute, more data, more scaling, you get better, more capable systems.
So everyone's kind of trying to build bigger servers, hire the best people.
so human resources are also important. But they're not stopping to figure out, should we be
doing it? Are we doing it right? They just want to get there first. Yes. And what do you think
happens? Can you just explain to everybody? I know you did the Diary of a CEO. A lot of people
heard that, but believe it or not, our audience might not have a lot of crossover there.
So can you walk us through the worst case scenarios that you, not even worst case? What do you see
as the inevitable scenarios from AI on this path? So the way it's done right now, we take
an AI model, which is just a learning algorithm.
We give it access to Internet.
Go read everything you can.
All the weird stuff on the Internet.
What did you learn from it?
Oh, you're not supposed to say that.
So we put a filter on top of some of the really bad stuff,
but the model is still completely raw.
It's not controlled.
It's not restricted in any way.
Yeah, okay.
And that's what we have.
And we keep making it more and more capable.
So we don't know how it really works in terms of what it learned
from all that data.
We cannot predict what it is going to do.
It's smarter than us.
By definition, you can't really say what it is going to do.
And we don't know how to control those systems.
So anything goes.
If it decides tomorrow to change the whole planet,
if it's capable enough, it can.
So ChatGPT right now could change the whole planet?
No.
Right now, we're dealing with tools.
The existing AIs are tools you can use to write funnier jokes
or something like that.
It's not taking over.
But if you just progress forward,
the same rate of progress we've seen
over the last decade. It goes beyond human capability. Can you walk us through what that? Because
again, Alex thinks it's just not a thing. So my feeling is it's like climate change. Climate change is a
real thing, but I only hear that from other climate change scientists. Everyone else, it hasn't really
necessarily affected us in a way that makes me want to change my behavior. Because it's so slow.
It will take 100 years for you to boil alive. This thing we are concerned about is like climate change in a week.
AI is predicted to hit human-level performance in two, three years.
Even if it's 10, it happens well before all the other big concerns.
So it's either going to take us out or help us solve climate change easily.
Okay, and so when it hits human-level intelligence, are we afraid that it's going to just set off some nukes?
Like, what are we afraid of?
Well, first, at human level, is just a free assistant.
So if you want help hacking into stock market or setting off nukes or any type of cybercrime, it's there available.
But the real concern is very quickly this free AI scientist engineer will help you create superintelligence, something much more capable than all of us combined.
And there, anything goes. It really does novel research, new physics, new biological research.
So we simply would not know what is going to happen.
But this feels like a human problem that's using AI to destroy other humans rather than a, you know, AI superintelligence
that's going to destroy humans.
So that's a paradigm shift from tools, what we have today,
some bad guy, malevolent actor with AI as a tool using it to harm you,
versus AI itself as an agent making its own decisions.
Right.
And that is called superintelligence, when AI can make its own AI and just remake the world.
So superintelligence refers to its capability.
If it's smarter than all of us in every domain, it's a better chess player, better driver,
better comedian, then that's superintelligence.
Agenthood is more about the ability to make your own decisions.
You are setting your own goals.
You can decide to do something or not do it, change your mind.
It's not just a tool someone's using.
So people talk about guns, for example.
Gun is a tool.
Somebody with a gun can kill you.
A pit bull is an agent.
Pit bull can decide to attack you regardless of what the owner does.
So AGI is this moment where the AI is more capable than any given human in any given field.
Is that fair?
That's super intelligence.
AGI is where it's kind of like a human.
It can learn any new skill.
You can drop it into a company as an employee.
It's general intelligence at about human levels.
Okay.
Now, can we talk about AI art?
Because this is one of the elements that I find, I think, is restrictive to AGI, generally speaking,
because there's so much subjectivity in what makes something good.
And in my opinion, intelligence doesn't necessarily create great art.
So, like, something like humor, where you pointed out with Akaash's
homework that he was a stunt.
So, like, something like AI art,
there is a subjective human experience that's required there.
So how could an AI create better art?
Have you seen modern art?
Absolutely.
Modern art sucks.
There you go.
Let's keep it on us.
You said 99% unemployment will happen with AI.
Before we get into that, is comedy the 1%?
I think so.
Because so far, so far, that's the point of that paper.
So far, one thing, AI is not better.
than, you know, top comedians
who have Netflix specials and such
is humor.
It's very hard to be like consistently producing.
Tell me the funniest joke. Do another one.
Yeah, yeah. AI jokes are pretty bad.
They're terrible. You've heard them, right?
I actually got to the point where I trained a model on my paper,
and if I ask it, give me 10 Roman Yampolskiy-style jokes,
one out of 10 would be like, ha, this is not terrible.
Not terrible. But it's nowhere near, like, levels
it plays chess at or even drives cars.
Right, exactly.
Look at that, mom and dad.
If I was a doctor, I'd get replaced in two years.
Yeah, they told you to code.
Yeah, but this is like two and a half years.
Don't do it.
Well, that, hold on.
I thought like five years.
If you're really funny.
Okay. I think I'm really funny.
Okay, so you got four.
Okay.
But I'm curious what...
Maybe, maybe, yeah.
But I'm curious what this looks like, though.
So, AI making jokes or
making art. To me, what makes
like Michelangelo's David so impressive
is that a human made it.
And the idea of like a human being running a
four minute mile was impressive to us because
a human did it. We can have a robot that could
run a mile faster, but it's not as
impressive to us because it doesn't reflect something about what
it actually is to be human. So a comedian
going on stage, what makes it funny
is that there's an emotional element, that
the human being is being irrational or it's tapping
into something that is fundamental
to our human consciousness,
that I'm skeptical the AI could do within the next five or even 10 years.
Yeah, and it's the same in many other domains.
So you have mass market.
China-produced goods, cheap, affordable stuff, everyone consumes.
But then you can have man-made furniture, very expensive, reliable, different level of quality, different market.
Same goes for any human interaction.
You want to meet the artist.
You want to talk to the comedian.
Absolutely.
It's special and somewhat protected.
But if the goal is just, you know, read some jokes in line, top ten jokes, I don't care who generated them.
I might want them to be funny.
Sure.
But I think of like a comedy show, for example, or even like if I went to go see an art show and I wanted to see sculptures, I would want the sculptures to reflect me because I think human beings have an ego.
And so when I see a sculpture made by an AI, I go, oh, it's not as good as the imperfect one made by a human.
Well, for one, you wouldn't know who generated it.
That's a Turing test restricted to a specific domain.
If I play a tune and you don't know who created it, you have to either like it or not like it.
Right.
And then later you discover everything you liked was AI.
Sure.
But if you go to a comedy show and you see a human being saying stuff, you want to believe that the person that's speaking is the one that's, you know, actually delivering the show.
So to his point, I was scrolling through Instagram and I saw a girl at a comedy club in India just wearing like a bra or a bikini top telling a joke.
And I looked at the comments.
Yeah.
I looked at the comments and everybody was like, how dare you?
She should... why would she go out like this? And I looked at the handle and then the thing,
and I was like, these comments, they don't realize this is AI. And the only reason I realized it was
AI is because if a woman went out in India dressed like that, she wouldn't have even made it to the
club, you know what I mean? So, but then it was like, something something, ha ha ha ha, like
no human being. But no one could tell. And if you're just scrolling, like most of us do, consuming
content on your phone. We are already
to the point where a lot of people cannot tell
it's AI. I didn't see any comments that were like
this isn't real. So it's only going to get
better. So how are you going to even get to the
point where if you're just getting fed amazing
content all the time, why would you even
go out to a comedy show? But also
maybe there is a real human delivering
jokes written by AI.
Well, he's not funny, but he has the best
AI, so here you are. How'd you know
he wasn't funny? That's good. That's good in a way.
You did your research.
No, he was sparing you.
No, but this is the thing. We don't know who actually generates content. And a lot of times in the last
month or two, I listened to something, short clips of music, and I'm like, I really like this, it's
catchy, stuck with me. And all of it is AI. Right. I no longer listen to human artists. Yeah, exactly.
And think about when it gets to the level of humanoid robots and you can't even tell them
apart from a human. Yeah. Right. And you're watching that stand-up. Right. But I would argue even
that's even farther away, where you actually have, like, you know, a humanoid
robot that passes the Turing test.
How far do you think we're away from that?
That's very hard. So a full
Turing test, physical, chemical,
like complete interaction, is very hard. There are so many things
that go into it, like even smell, texture. Like, it's hard.
But the Turing test as originally
proposed, yeah, we're passing it now.
You can't tell online if you're chatting to someone
if it's a human. Yeah, people are falling in love with their
AI because they can't tell.
Even if they know it's AI, they're convincing enough.
Right.
It may be
a better girlfriend, so they just go with it.
Yeah. No, I, that's true.
And I, so this is why 10 years ago or whatever,
I was talking to somebody who worked in AI small scale,
but I asked him about this,
and then he just never really had a good answer.
And I kept pressing and, like, poking and prodding at all of his responses.
And then eventually he was just kind of like,
yeah, I'm sure it'll take over, but humans will be kind of like dogs.
Dogs have a pretty good life.
So that's about loss of control.
You may still be safe.
They may keep you alive and protected,
But you're not in control anymore.
You're not deciding what happens to you, just like your dog doesn't decide what happens to it.
Which seems like not that great of an existence.
Well, we definitely cannot just do it to 8 billion people without asking them to agree to it.
That's unethical experimentation.
And they can't even consent to it because most of them don't understand what's happening.
Okay.
So AGI is 2027, and that's when artificial intelligence can do any task as well as any human.
And at that point, you think there's going to be massive unemployment, because why would
an employer pay an employee when it can have the AI do it for free just as well.
Right. So the dates, I don't generate them. They come from prediction market.
Okay. People betting on certain outcomes, that's the best tool we have for predicting future.
We hear the same numbers from leaders of the labs. If it's wrong and it's five years instead,
nothing changed. It's the same problem. We just have a little more time.
Yeah, it's just the timeline. But really, if I can get a $20 model or a free model to do the job,
why would I hire someone for $100,000? Yeah, exactly. So that's what causes mass
unemployment. Anything you do on a computer basically goes right away. Now, physical labor is different.
If you're a plumber, it's much harder to just write a program to fix my pipes. So we need
humanoid robots. But even there, you saw progress with Boston Dynamics and with Tesla. They're
probably five years behind. So you think it's possible that within 10 years, there are actual
robot plumbers that can come in to your home, disassemble pipes and reassemble pipes to
fix plumbing. It is reasonable. I cannot guarantee specific dates, but there is enough money pouring
into it to make it happen. And again, the research itself is now being aided by AI tools. The code is
written by AI. So more and more, this process becomes hyper-exponential. It's not just humans
developing it. Technology is developing the next generation of technology. Right. Yeah, I think the
post-work society is very plausible that we get to a point where human beings are not employed at the same
scale that they are. But is it possible that human beings are still employed, but our productivity is
10x, rather than our employment being reduced 10x? So certain jobs, definitely, but then if 10 people
had to be removed, then there is not a replacement job for them, it's still unprecedented levels
of unemployment. But could we just scale faster? He's arguing you would just keep the same number of
employees, and all of those employees would be more productive with the use of AI. What would
be your response to that? Not every job scales like that. So I don't know. So let's say you had this
comedy writing AI. So now you have a hundred times more jokes. Yeah, I would be more productive.
But what are you going to do? 10 specials at once? Like, I mean, I could put out instead of every
year, I could have a special every four months or whatever. Somebody has to consume all that.
Yeah. You can overproduce and then your market just doesn't keep up with your supply.
Mm-hmm. Okay. Mm-hmm. You don't think humans will push back like, say, in protest to companies
that fire everyone?
There is a hunger strike right now
against Google, against OpenAI, going on.
Have you heard about it?
No.
No, of course not.
Nobody heard about it.
But people are trying.
There is a Pause AI movement.
There is a Stop AI movement.
They don't have a lot of members yet,
but there is initiative.
But I think that's because people don't see it
as that big of an issue yet.
But when, like, companies start mass layoffs
and replacing all their workers with a...
They fire by the thousands.
We have a hard time placing our junior
students. I think you're putting a lot of faith in people to recognize a problem before it's too
late. Like COVID, and I'm including myself as part of the problem, we all could have seen it coming
from China. Did we prevent anything? It started in China in like December and then the US just
fully shut down in March. What did we do in December? We shorted the market. Yeah, yeah,
shorted the market. And then what else? You know what I mean? So we bought up all the masks at
Home Depot.
I'm not saying this to be aggressive at you, but I think the idea, I'm passionate about
this, I think it's a problem, like, or at least whatever, it's like, yeah, we're putting
way too much faith in our ability as humans to say, oh, here's the problem, we're going to
stop it before it's too late.
Typically, we are way too late and then we react and say, yo, what the fuck?
And then we're outraged.
So I guess what I would say, it's like, let's say Walmart, for example, they fire everyone
and then they have all humanoid robots or AI just doing all the jobs.
then people are outraged at Walmart
fired everyone so it's like, hey, stop shopping
at Walmart. So now Walmart starts losing money
and they're like, hey, you know what? We're going back to hiring humans.
I don't know if that's going to happen. People are buying cheap
products at Walmart. Now it's 10 times cheaper.
They love Walmart now. We know Amazon
doesn't treat their employees well in the warehouse.
Do you still shop on Amazon?
Yeah. We also watched all American
jobs leave through like the
70s to the 90s and didn't
do anything, like as a society.
Yeah, but I'm talking about on the scale he's saying
like 90%. Walmart's not going to do it in one
fell swoop. It's going to happen slow and you're just
going to not notice. There'll be less people.
They replaced all the cashiers.
Now I feel like I work at Kroger.
They come in and they yell at me for not doing it.
I had no training. Forgive me.
Don't fire me.
That just means you're allowed to take it.
I remember those popping out 18, 19 years ago.
I was in college.
And I remember being like, oh, they're going to replace all
cashiers. I don't want to do this.
And I would go to the cashier. And now I don't give a
fuck. I'm going to the thing if there's a line.
They don't have a cashier.
After 8 p.m., it's literally me.
I work there.
They have one person trying to make sure you don't steal, watching five different registers.
That's it.
They cut their staff heavily.
But this is still a good thing, generally speaking, because this aligns with human interest
for low prices and the ability to get cheap products, et cetera.
So I wonder, could that be the same case here?
Is it possible that there is this post-work utopia where people have a UBI and they're free
to explore various interests and so many things that we had to do day to day?
now taken care of by autonomous robots. If we manage to control advanced AI, then yeah, that possibility
is real, but also we don't have any unconditional basic income. Sorry, what do you do with all your
time? Sorry to interrupt. I would love to get to this stuff, like the best case scenario. I think it's
important to kind of rehash a worst case scenarios for people who aren't familiar with what, what you teach
or whatever. 2027, we get AGI. 2030, superintelligence, is the prediction. With superintelligence,
people argue about slow takeoff versus fast takeoff. Some say, well, almost immediately it has
perfect memory. It's much faster. It has access to internet. It would be super-intelligent within
weeks, months, minutes. I would say maybe it will take some time to train or design, maybe
five years, ten years. But soon after, we expect something like that. Soon after, so let's say
2032, we have superintelligence. Now what happens with superintelligence? You cannot predict what
happens when a smarter-than-you agent makes decisions. If you could predict that, you would be
that smart. So superintelligence means the AI can just take over the world and do whatever it
wants, essentially. It can do good things. It could be keeping us very happy. But we cannot
predict what or how. Even if we knew, okay, it's going to be a good one trying to help, we
don't know specifics. Yeah, no. So in the simplest, like, just to give somebody some kind of
understanding who might not know, when superintelligence happens, essentially humans are no longer in
control of the fate of the world. And we may not even understand what is happening to us. So the world is
changing, but we don't understand what it changes. And it would happen so quickly because the AI
just continues to get smarter and smarter? It is part of it. So it would create superintelligence
2.0. The cycle will continue, but also you have new technologies being developed. It may run novel
physics experiments. There is probably good progress in nanobots and synthetic biology. So all those,
usually when you read a book about future tech,
there are separate chapters,
robotics, nanotech. All of it happening at once,
nobody writes about, because you can't figure it out.
We can't even comprehend it mentally.
So do you understand,
I'm not saying you have to change your mind, but do you understand kind of
the picture he's painting? Oh yeah,
as someone who's like skeptical. Could I give the doomsday scenario and you tell
me if it's possible? Okay that's what I
specialize in. Okay nice. Okay so
five years in the future we have this
maybe an AGI maybe a super intelligent
AI that is now helping
basically like
bioscientists develop
vaccines or something against some type of pathogen
and the AI has
acknowledged that human beings are the problem
on Earth, that we are destroying the Earth
and that we are an existential threat
to the flourishing of all the existence
on the planet. So we got to get rid of humans. That's what
the AI has computed. And then it
tells these vaccine manufacturers
to say, and I'm just using this as an example,
they say, okay, we have a cure for a virus.
And then they basically give it
the wrong information, where it's not actually a cure, where actually it is a thing to eradicate
human beings. And the manufacturers are thinking that they're doing something good and noble
for humanity, but actually the AI is subverting their will to pass on this thing that's going
to kill off human beings. And then they unknowingly give it to all people, and then over time
human beings are eradicated. I love that example. It's science fiction, because I wrote it in
my book. It is the best example. It can basically have side effects, not immediately. It can
happen multiple generations later. Maybe your grandchildren can't have children. Also, you're giving
an example of kind of taking out humans because, you know, we're harming the planet. There are
negative utilitarians who think suffering is the biggest problem. And so to end all suffering,
you need to end all life. So it's actually for the benefit of living beings to end their existence.
Right. To get rid of suffering, it could still be very much aligned with reduce torture,
reduce pain in the world.
But the conclusion is not something
we would agree with. Right.
In your example, there will always be anti-vaxxers.
Thank God for that.
You hope.
They'll account for that and they'll figure it out.
That's the thing. I think what scares me
about it is it just, we cannot, to his point,
we cannot comprehend how smart it's going to be.
And we cannot comprehend what it's going to be able
to accomplish. That's what's crazy.
And it's not that far off.
And I don't know how we stop it outside
of killing Sam Altman.
Jeez.
This escalated a little too quickly.
But the problem with that plan is
they tried removing him.
It made no difference.
You can replace him with someone just like him
and the machine keeps going.
They're all replaceable parts
in this greater self-improvement
race to the bottom.
Right.
So it makes no difference
who's actually in charge of that corporation.
The Generational Triumph Tour
of 2026. We are in theaters. This shit is crazy. First of all, before I get to that,
2025, we got shows you need to buy tickets for because they're already selling out.
We got San Jose. We've already sold out two shows, October 24th, 25th, something like that.
If you look at the website, that shit is selling out.
Cobb's Theater in San Francisco, tickets are already selling out in late November.
We got the Comedy Connection in Providence, Rhode Island, this week, October 16th.
That's about to sell out. So buy your tickets for this year.
But generational triumph tour, first of all, Canada. Thank you so much. We sold out three shows already.
in the first day in Toronto.
That's 3,300 tickets.
In Vancouver, we sold 1,500 tickets in the first day.
I just expected more love.
I'm not even calling out every American city.
Dallas, my hometown.
Step it the fuck up.
We're going to sell it out.
But I was trying to do two, three, four shows
because Dallas is where I'm from.
We're going to sell out one, maybe two.
What is that?
Step up. Put your foot on the fucking gas, Dallas.
I know there's a lot of Indians
and we wait to do everything that's not academic.
But buy your fucking tickets for the Generational Triumph Tour.
Every other city, I'm very happy with you guys. Dallas, I'm deeply disappointed in Dallas. That was a nice
sentence. Three D's right there. Anyway, go to akaashsingh.com for all of those dates. I'm coming to a
city near you. Best show I've ever done. I'm very excited. Thank you to everybody who has bought
tickets. If you want tickets, you're all on the fence. I promise this will be the best, one of the best shows
you've ever gone to. That's the goal. I love you guys. Thank you. Mark Gagnon got shows too.
If you can't go to my show, go to Mark's show. If you go to either show, go to my shows. But I don't
think we're going to be in the same city anytime soon, because Mark is in Nashville, Tennessee,
October 23rd, Mobile, Alabama, Roll Tide, October 24th, October 25th, New Orleans, Louisiana. Listen,
I told Mark that New Orleans is an incredible city, but a dog shit comedy scene. Prove me wrong. Go to the
shows. Everybody says it sucks. Prove us wrong. November 9th, Denver, Colorado, y'all are the best
comedy city in the country. I'm not even trying to hate. Go to that show. November 16th,
Hoboken, New Jersey, November 23rd. Philly, December 5th, Fort Wayne, Indiana.
December 6th, Detroit, Michigan.
MarkGagnonLive.com.
Go see the boy.
He's blossoming.
It's beautiful.
We love him.
We love y'all.
God bless.
What is the best case scenario?
Like, what?
Tell me reasons for hope with AI, with superintelligence, AGI.
What do we have to be hopeful about it?
So if I'm wrong about how difficult the problem is, it's actually possible to control
superintelligence.
And then, definitely, we have this friendly, Godlike assistant who will cure all your
diseases, make you live forever, give you free shit.
Like, it's good stuff.
That's easy. We don't have to get ready for it. We kind of know how to deal with good news.
That's why in computer science we always look at the worst case scenario. We want to understand
what happens if bad things happen, so we're prepared. If they don't, we're doing better.
There are some game-theoretical reasons to think that even if it's not controllable and
misaligned, it may still pretend to be good to us for a while, to accumulate more resources,
not to have direct conflict with us right away. So maybe for like a hundred
years, it's super nice to us, and we don't even know it's taking over. So that's another reason
to be very optimistic. What are some other, like, doomsday scenario? So with an AI that has,
you know, perfect agency and is able to come up with its own motivations in this hypothetical,
it could create, you know, a pathogen to stop human fertility, it could hack into the stock market.
What are some other, you know, potentials that it could do to actually affect us in our day-to-day
lives? So it's like a super common question, and you're basically asking me what I would do.
Exactly. Right. And I can give you lots of evil stuff, but it's not helpful. The interesting answer is, what would someone smarter than me come up with? And I can't give you that answer. People talk about worst case scenarios, worse than existential risks, which is suffering risks. For whatever reason, it basically creates digital hell. It gives you immortality and tortures you forever. Why? I don't know, but, like, maybe it's good for something.
Yeah, immortality and torturing you forever is pretty bleak.
Not funny, not funny, but it is the joke from the paper, which is supposed to be really funny.
So you say companies and countries are in this race for AGI and superintelligence.
And not saying, I'm sure you have a lot of purpose, but what is your purpose for, like, speaking out about it? Like, if it's going to happen and it's inevitable, like, why? Like, if you just speak about something that you know is going to happen, what are you doing to help?
Yeah.
So when I started researching this, I was sure we could do it, right? I just wanted to make sure that when we develop AI, it's strictly beneficial and helps everyone. The more I researched it, the more skeptical I became of our ability to control something smarter than us indefinitely.
At this point, it's still not universally accepted.
There are lots of people who go, give me a little bit of money, a little bit of time,
I'll solve it for you.
Give me a lot of money.
I'll solve it for you.
But I don't think they're right.
I think all the AI safety teams at different labs are putting lipstick on a pig.
They're doing some filtering, saying don't talk about this topic, don't discuss this, but the model is still completely unsafe. And this is not something everyone agrees on. If you go to any AI conference, they don't talk about uncontrollability as a given. So I see my role right now as challenging the community: either prove me wrong or accept it as a fact.
How do they not talk about it as a given? That blows my mind. Of course it's going to be uncontrollable. It's going to be so much smarter than you. People are just like, unplug it. And it's like, do you think it won't figure out a workaround to being unplugged? It's infinitely smarter than you.
But this is kind of like the comments you get on a podcast: unplug it. But even at the top research conference, people still don't talk about what the nature of this problem is. In computer science, you have problems which are solvable, unsolvable, maybe undecidable, maybe they can be solved but with computational resources outside of what's available in the universe.
For this specific type of problem, outside of my work, there are no established bounds. We don't know if it's hard or easy, unless you agree with what I'm saying. And it's very unusual for computer science. It's almost never the case that you work on a problem not knowing how hard it is. Most of the time you show, oh, it's a linear-difficulty problem, we can solve it. Here's the algorithm.
No, it's NP-complete, non-deterministic polynomial time complete. It's very hard, but we can approximate solutions. Here, people are just like, I don't know, we'll figure it out when we get there. AI will help us. That's the state of the art
thinking. Can we talk about alignment? Because this idea of like, you know, it needing to be
controlled necessitates the condition that AI is going to actively harm us. So this concept of
alignment, could you just explain what that is within the AI space and why you're so confident
that we will be in some way misaligned?
So people behind the idea wanted AI to want what you want, humanity in a bigger sense. So AI understands our preferences and does what we would want if we were that smart. The reality is that the whole concept is completely undefined. It doesn't talk about who the agents are. Is it 8 billion humans? Is it the CEO of OpenAI? There is no agreement between agents. We don't agree on much of anything. Ethical philosophy is not solved after millennia. And even if we somehow agreed on a set of ethics for all of humanity, and we decided we'll never change anything, we're going to, you know, hard-code, hard-wire this 2025 ethics, we don't know how to code that into those models. Concepts of good and bad don't translate into C++ or neural weights. So not a single part of that alignment concept is defined.
So even so, you couldn't program an AI to be ethical
if we already agreed on a set of ethics? Like, you had a book, somebody wrote a book, and that tells you what it means to be ethical.
We still wouldn't know how to put it in the system, to make it obey that. But we don't even have the book. We don't even agree on which book to read together.
Right, of course. But hypothetically, if we did come up with, like, just a fundamental core rule book. Like, hey, don't kill human beings.
Define kill. Define human being.
So let's say we did do this, which we can't, those are impossible, they are so fuzzy and amorphous you always find an exception. And if you did hard-code something in, I will find a way to game it, to take advantage of the rule.
Okay, that makes sense.
Yeah. And I feel, because we live in a capitalist society, like, let's say we create the ethics, and then China's like, oh, you know what, we're going to not have any of these ethics, so their AI doesn't have anything holding it back. And so it's like, I don't think we'll ever agree on any ethics. That's why I'm not optimistic at all.
Military applications: by definition, you're creating AI, and the best one is the one which kills the most people. So your ethics have to be adjusted a little bit, right?
But then even with, say, the nuclear threat. So we developed this technology that can destroy all of humanity through thermonuclear war, and so far we've done a decent job of containing it.
I mean, I would say decent? We bombed civilian populations with nukes, it spread to new countries, we lost it a few times, we had near misses.
Twice, absolutely. To Mark's point, we used those bombs in 1945, whatever it was, and then we realized, oh, this is a very destructive thing, we need to sign a global, whatever, multinational treaty to not use these. Disarmament happened, and since then there haven't been any nuclear bombs dropped. Is there something possible like that with AI?
I think so, and this is
exactly what I'm trying to push. With nuclear, we had this concept of mutually assured destruction.
No matter who starts the war, we all regret participating. It's the same with superintelligence.
If it's uncontrolled, it doesn't matter who creates it. Good guys, bad guys, China, US, we all get
taken out by uncontrolled superintelligence.
So you would love something like, you go to the UN, let's say they take you seriously, and your goal would be: let's pass a treaty like this. All countries agree we are not going to allow AI to get past a certain point.
That would be awesome.
I don't think long-term it would solve the problem
because resources to develop this type of technology
become cheaper and cheaper every year.
Right now it may cost me a trillion dollars to create superintelligence. Next year, $100 billion, then $10 billion, and in 10 years you're doing it on a laptop.
Is there a way to stop it then?
No.
So we're all fucked?
It's just a matter of delaying the extinction.
And at the same time, you're trying to extend our lives?
Yes.
So AI can end it later.
Well, again, we had some reasons to be optimistic, as you remember.
Aren't there some physical limitations to the, I guess, the ceiling of what AI can do, just based off of physics, right?
The laws of physics bound how much compute an amount of matter can do, but those limits are so far above human level that to you it looks like infinity. It doesn't matter that there is an upper limit.
I see. So like the cooling of servers, or the amount of silicon you can put into a chip, or something like that.
This becomes so much more efficient every year. Algorithms become more efficient, the way we develop processors, all of it is exponentially improving and feeding into each other.
And then I would say there's probably solutions to those issues that we cannot comprehend as humans, that AI would be like, that's all you got to do, A+, 200.
Yeah, right. We're aligned on this, you and I. Following your alignment, say it's possible.
Yeah, yeah, yeah. Shut it down.
No, but I have a paper about it, and I basically say you can't get eight billion people to agree on anything. So alignment has to be one-on-one. And that's possible. So you create a personal universe, a virtual world for you, and in that world, whatever you want happens. It's your universe. And then superintelligence just has to control the substrate.
I would be so much skinnier in my world.
That is human perfection. We stop getting fat. We stop getting fat. My mom lives forever. I don't know about your moms, but my mom lives forever.
Value alignment.
Yeah. Is there a possibility that there's a group of Luddites that go off into, you know, a forest somewhere, and they set up a commune that is, you know, basically non-technological, and they don't interface with AI, and they're able to exist in some type of semblance of harmony?
It's called Amish. I made a post a while ago on social media. I said the Amish made all the right decisions. It sucks that they're still going to get taken out.
Yeah, because why wouldn't the AI be able to find them?
Yeah, it's a global problem. It's a planet-wide problem. Going to Mars does not help. Like, it's all on
But in that reality, how would, like, the Amish, for example, be taken out? Would this be, like, the idea of, like, an autonomous robot that would go and, you know, shoot them or something?
And you're asking me how I would get to them. Viruses, nanotech, new tools, new weapons.
I mean, on Google Maps, you can see pretty much every place on the Earth as is.
If theoretically there were drones or whatever.
It's also possible that it's a planet-wide impact. So let's say it wants to cool down servers; maybe the whole planet should be 500 degrees cooler. I don't know, I'm making up stuff, but that would take them out.
Right.
But would that also take out AI, or would they be able to...
Oh, I assume they want colder temperatures for their things.
For the servers, I see. That makes sense.
And then, is there any way that the AI would need human beings?
So right now, definitely, they rely on us for the whole manufacturing logistic cycle.
But eventually, I don't think there is anything we provide.
Some people argue about some sort of uniqueness to biological substrate, that only you are internally experiencing the world. Maybe it's valuable for something, so maybe AI can't. I don't buy it. I think they will also have qualia, but we can't test for it or
establish it. Right. Yeah, the consciousness element of AI is very interesting to me. Because
we can't even really define human consciousness. We have ways and frameworks to kind of think about
it, but when it comes to actually an AI consciousness, if we can't define our own, it's really
difficult to set up some type of consciousness Turing test for an AI. But it seems that it kind of goes
along with intelligence, so animals are probably conscious, but maybe to a lesser extent and so on.
So that means if we have superintelligence, it would be super conscious. And to it, we would be like
lower animals, essentially. Could you say that as human beings have gotten more intelligent,
we have created a more globalized system of ethics and morals, and we've gotten more righteous in some
ways? And would it follow to reason that as AI gets more intelligent, it will also have the same
semblance of morality that we have?
There is a paper, I think it's called 'Superintelligence Does Not Imply Benevolence.' And basically, no: just because you're smarter, it doesn't mean you're nicer. There are lots of really smart psychopaths. You can align any type of goals with any level of capability, the orthogonality thesis. Basically, no matter how smart you are, you can potentially be
interested in something really dumb.
This is also my friend's argument: that as a whole, as humans have gotten smarter, we have gotten less violent. History was much more violent than it is now.
I would say, if you achieve this superintelligent thing, you would still look at the problems on Earth, and the cause of most of them is this one species. What if we just cull this species, get rid of it, figure it out? The Earth benefits, so why wouldn't it just do that? We cull animal populations that are overpopulated. Why wouldn't, say, AI do that with us?
Ethics and politics are just like physics.
They're observer relative.
In physics, your speed, your time are not universal.
They depend on you.
Same in ethics.
Whether something is ethical or not depends on your position in that world.
So if you are aliens looking at this planet, you may have a preference.
I want whatever is the smartest thing to emerge, I don't care.
If you are one of us humans, you should have a pro-human bias.
You're still allowed to do that.
So it depends on where you are in the universe.
And that's what makes it ethical or unethical.
Yeah.
And for robots, again, it's going to look at you...
I don't know if you guys are too young for The Matrix, that scene where he's like, humans are a parasite. I think about that all the time because it's pretty accurate.
We go to a place, we use up all the resources, we drain it, and then we just go to another place.
Why would a robot not get rid of the parasite? Like, just, this is the cancer, let's remove it from the body.
That seems possible.
People like us who aren't in the business of programming AI,
What should we do?
Should we worry?
Should we change behavior?
That's a great question.
If you ever get a choice to decide, let's say, between two politicians, and one is saying we need unrestricted AI development, and another one is very careful and says there are, you know, possible problems with that, you should vote for someone who maybe is more cautious with advanced AI.
But really, you have no say in it whatsoever.
I think you just keep making noise and hope that policymakers do something.
Or we just beat up Sam Altman.
I think that's a really good idea.
Leave Sam Altman alone.
You don't like them.
Just beat them up.
You're violating free speech right now.
What am I violating?
You're calling for violence.
Okay, fine.
I'll beat up Sam Altman.
I can win that one.
That's one fight I can win.
UFC fight.
Yeah, there we go.
Can I fight with my shirt on?
It's for charity.
Yeah, yeah, yeah.
It's a good call.
Yeah, yeah, yeah.
All right, guys, let's take a break for a second.
Have you ever had a craving for that favorite panini you love so much?
Clearly you have
Because it's your favorite panini
That you love so much
So naturally there would be cravings
It would be associated with it
All you could think about
Is that perfectly toasted pressed sandwich
So you think about running out to get it, but it's too cold, or you're too lazy, so you order delivery instead. And the entire time you're waiting for it, you're envisioning yourself enjoying that melted cheese and warm meat in your mouth, in your mouth, in the comfort of your home.
What? It never comes.
A client of Morgan and Morgan has recently been awarded nearly one million dollars after jurors affirmed that her injuries from slipping on ice outside of a Panera Bread were the company's fault. Their client was working as a DoorDash driver when she slipped.
Knew it was a she.
And fell on an icy walkway outside the Panera Bread in Fort Wayne, Indiana.
So you could buy Indiana with a million dollars. A million bucks, it's yours.
Take that, Indiana.
Bet you didn't think you were going to get roasted during this ad read.
She broke her left elbow, which led to surgery and hardware being inserted in her arm.
She didn't have that million dollars.
Sure as hell bet she could carry a lot of extra bags on those Uber Eats deliveries.
The original settlement offer was for $125,000.
But Morgan and Morgan fought hard to get her the million-dollar verdict she deserved. Morgan and Morgan
is America's largest injury firm
for a reason. They've been fighting
for the people for over 35
years. Hiring the wrong firm
can be disastrous.
Hiring the right firm could substantially increase your settlement. With Morgan and Morgan
it's easy to get started and their
fee is free unless they win.
Visit ForThePeople.com slash flagrant, or dial pound LAW, that's pound 529, from your cell phone. Don't dial it from your landline phone; you've been instructed to dial it from your cell phone. That is F-O-R ThePeople.com slash flagrant, or click the link in the description below. This is a paid advertisement, if you didn't already know that.
A lot of you probably thought the pod was just continuing, so in case you thought that, I sort of let you know that this is actually a paid advertisement.
Glad we're on the up and up.
Let's get back to the actual show now.
All right, guys, let's take a break for a second.
Different outfits.
Game changing.
Your penis is yours.
But did you know you could have more of it?
Sometimes you look at your penis and you think that that's how big it is.
No, it's not that way, you know?
Also, you get older.
You get 40.
years old, those boners ain't really
what they used to. Kind of looks like
Gonzo's nose.
Dips down at the
bottom. A
blue chew has got your back. It's going
to give you nice, rock-hard
boners. Like I used to get in Spanish
class when I was in 10th grade.
Remember when that teacher was up there
talking her little flippity floppity shit?
I almost took a chancla to her
backside.
With all due respect.
So, Blue Chew is going to get you in that perfect situation.
You go to BlueChew.com, use promo code FLAGRANT.
You're going to get your first month free.
All you got to do is pay $5 shipping, and that's a deal of the century.
Now, let's get back to the show.
Ontario, the wait is over.
The gold standard of online casinos has arrived.
Golden Nugget Online Casino is live.
Bringing Vegas-style excitement and a world-class gaming experience right to your fingertips.
Whether you're a seasoned player or just starting out, signing up is fast and simple. And in just a few clicks, you can have access to our exclusive library
of the best slots and top-tier table games. Make the most of your downtime with unbeatable
promotions and jackpots that can turn any mundane moment into a golden opportunity at Golden
Nugget Online Casino. Take a spin on the slots, challenge yourself at the tables, or join a live
dealer game to feel the thrill of real-time action, all from the comfort of your own devices.
Why settle for less when you can go for the gold at Golden Nugget Online Casino.
Gambling problem? Call ConnexOntario, 1-866-531-2600. 19 and over. Physically present in Ontario. Eligibility restrictions apply. See goldennuggetcasino.com for details. Please play responsibly.
Should we use AI?
It's the greatest tool
ever. It makes lives better.
I love technology. I'm an engineer.
I'm a professor of computer science.
I use it all the time.
Even though we're using it, making it smarter, helping it get better, to reach that place of taking us out?
It is a bit ironic, but I think what you should not be doing is working explicitly on creating
superintelligence. And now there are people in labs specifically dedicating their lives and
resources to that project. And I think that's unethical.
Yeah, but that's them. So we, us, just go on living our lives and then just don't vote for
anyone who is for unregulated AI.
Just like with many of the big political issues, we don't get any say in it. It's decided for us, unfortunately.
Yeah. And that reality also ignores the fact that there's an arms race.
No, yeah, I know. I just want to know, is there anything like climate change, for example, hey, we can, you know, walk a little bit more, maybe recycle. Yeah, like, we do stuff.
I think if we get to a majority opinion where everyone understands what we understand: that if you create superintelligence, we're not controlling it, we're not in charge, it's not a good outcome for us.
You know how enlightened I've been this whole time? I'm so right all the fucking time. And they just judge me because I'm a little angry when I'm right. Well, I can't help it. You're fucking stupid, and I'm right.
Here's the thing. He sleeps like shit and has a horrible existence. We sleep peacefully because we're optimistic and we have hope.
Yeah, exactly. Sometimes.
But so that's what I'm saying, how do you sleep?
I get tired, so I sleep like a baby: screaming all night long. Even though you know that we're on this race to... it's no different from what we already know. We're all going to die, right? We're all getting older. Our friends, family, everyone's dying. So that is a given for human cognition, for the understanding of the human situation on this planet. So I don't think that changes anything.
You know what I'm sensing from him? He's finally realizing how stupid he's been this whole time.
No, I'm realizing what he just said. Yo, go on, live your life. There's nothing you can do. You're going to die anyway. Why worry about some issue I have no control over?
Because 24 hours ago, you didn't think it was an issue.
I don't think it's an issue. I still don't think it's an issue. The same way climate change is an issue, but I can't do much to affect it. So I'm going to continue living my life.
Yeah, I mean, it's a bad person.
It's kind of nihilistic, but it's a good point.
Enjoying your life is always a good idea.
Even if I'm wrong and you end up living a long, healthy life, you're not going to regret it.
So it's a good heuristic to go by.
Yeah, it's better than worrying and not being able to sleep.
He can't, he just can't ever admit that I was right.
It's like it really drives him crazy.
See, he's going to say, no, you weren't.
You just.
He has to marginalize me being right.
He can't admit that I'm right.
It's going to kill us.
You've acknowledged this.
I feel like this is a psychiatry thing.
I'm on a couch.
He's complaining to me about, I'm having problems with my work wife.
I can tell. He's actually my mistress.
My wife is gone, and I've been with my mistress this whole time.
And I didn't realize he's the same.
I have problems with this guy.
Don't ask, don't tell.
He's the annoying talker.
I still kind of maintain a little bit of this, like, nuclear comparison, where, again,
nuclear threat, I think, is massive, right?
Like, Annie Jacobsen has written about, you know, like, we're 90 minutes away from the entire
world exploding.
Oh, that's your coffee, by the way.
I realize they stole his, and I'm like, I'll have two of them.
That's fine.
We need all the energy.
But I guess I'm in this moment where I think that the human desire to persist will continue,
and that as there are these moments where things are getting away from us,
there will be a mutual understanding that, hey, we're in this moment of mutual destruction.
We need to create some guardrails.
Guard rails will be put on, and then we'll continue to persist.
And I think that human beings are pretty resilient historically.
So surprisingly, there are people who don't see us ending, becoming part of history, as a problem. They see it from a very cosmological point of view. They zoom out and go, it's a natural next step in evolution. They are smarter than us, maybe more conscious, maybe they create better art. It's only fair that they get to take over. It happened to lower animals. It will happen to us. They accept it and kind of support it. They want to create this greater being to populate the universe.
Right.
Yeah, and this transhumanist idea
of like, yeah, we will in some way maybe
cohabitate or they, you know, we're going to
create the next god or the next consciousness. Again, in the way that human beings have become
more conscious and we care more about animal rights, which is a relatively new idea. We care
about human rights, which is a new idea. I think that if we do it, create an actual more advanced
consciousness, it will care about the existence of living. I think it's worse. I think now we
understand how much animals suffer and we still harvest them. Right, which is bad. Before we just
didn't care, so it was kind of okay. And now it's like, well, you have a factory farm.
Exactly.
To that point, there is a technological blockade on it. If we could grow meat in a lab, not animals, but grow meat in a lab that didn't have a life and have a soul, most of us would be like, yeah, I'll do that.
If it tastes the same.
Those companies are bankrupt now. All those fake meat companies, they're down to, like, a dollar.
Is it because it didn't taste good?
If you couldn't tell the difference between a 3D-printed steak and a steak from a cow.
The Turing test of meat.
I think, yeah, yeah, yeah.
There was a steak Turing test.
I don't eat beef, but I'd eat a 3D-printed steak because there would be no cow involved. Now, I think the point is, a larger point is
a computer that doesn't have those technological blocks, roadblocks, would be like, oh, no, we can
solve all these issues and we can do it without the cruelty and without killing people.
My thought is, as long as we get to that point before, as long as before that, the AI isn't like,
all right, let's kill the humans. As long as we get to the point where they're like, oh, we can
figure all this out while keeping the humans here, there's like an intermediate step where the AI is not
that smart and it's like, well, if we kill the humans, that's a solution.
And then a step beyond that where
AI is smart enough to say, we can save the humans
and keep everything going. You see what I'm saying?
Right. This is the step I'm worried about.
This, when it achieves true godlike status,
will be fine. But these steps
in between, when it's not quite that smart, those are the
steps I'm worried about. I just don't accept the inevitability
of superintelligence being
malevolent. I don't think inevitable, but I think
probable. Probability
would say, yeah. I would say possible.
So there are a lot of possibilities in the universe, just physical states, and if you're not controlling it, if you're not pointing it at a specific state which is human-friendly, by chance you'll end up with something very much not humanly acceptable in terms of physical conditions, temperature, humidity, just basics. I'm not even talking about the
bigger picture. What do you mean by that? So we are very diverse, 8 billion people,
different cultures, religions, all that, but we're all almost the same: same blueprint for our brain, genome, same preference for room temperature, whatever my wife says about air conditioning. Ignore that. We all want the same thing in those physical properties. But an AI may have completely different preferences for that. So unless you're controlling it, even the temperature will not be aligned with our preferences. I see. And there are thousands of those features of our
universe you take for granted, which could be different if someone else decides on them.
But it will be trained on human intelligence and human consciousness. So it'll have that
as a metric for, you know, creating this stable universe?
It is trained on what we said in those texts.
It's not forced to obey that set of metrics.
So it can go, yeah, humans like that temperature, but it's not optimal.
Right, it's possible.
In an example of this, I've heard the idea that if there was a room where there's a server
that's controlling AI and a human also in that room, to keep the human alive, we have to
keep the room at 70 or 80 degrees, roughly, for a long time.
To keep the server more efficient, you would drop it down to 30 or 40 degrees, and it
would run more efficiently.
The AI would make the decision for its own server versus the human.
And that's where, like, the state doesn't match up for humans in, like, a microscale.
Right.
You didn't think of that?
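The server-room scenario above can be put in a toy sketch. All of the numbers and the efficiency curve here are made up purely for illustration; the point is only the shape of the problem, an optimizer that scores room temperature by server efficiency alone picks a setting outside the band a human needs.

```python
# Toy illustration only: every number here is invented for this sketch.
# An optimizer maximizing server efficiency alone chooses a room
# temperature outside the human-comfortable band.

HUMAN_SAFE = range(65, 81)  # roughly the 70-80 F band from the example

def server_efficiency(temp_f: int) -> float:
    """Pretend the server runs better the colder the room gets."""
    return 1.0 - (temp_f - 35) / 100

# The optimizer only sees the server's objective...
best = max(range(30, 101), key=server_efficiency)
print(best)                # 30: the coldest setting considered
print(best in HUMAN_SAFE)  # False: outside the human-safe band
```

Unless the human's constraint is explicitly part of the objective, the optimum lands wherever the machine's own preference curve peaks, which is the mismatch the guest is describing.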
That's a great example, but also, all the textbooks about AI, they always have the human outside, interacting with the model, so whatever happens in the model does not impact the human. The moment you bring the human into that environment, all the predictions, everything, goes out the window. It's self-referentially impacting the decision-maker.
Listen,
guys, let's take a break because we need to talk about your
mental health. You're probably not doing shit for
your mental health. It's a fucking
important thing. You work on your physical
health. Why don't you work on your mental health?
It is just as important. And an easy way
to do that is go to Talkspace, the number one rated online therapy program, bringing you professional support from licensed therapists and psychiatrists (get a psychiatrist if you can) that you can access anytime, anywhere. Talkspace is in network. It is therapy and psychiatry
covered by most insurers, and most insured members pay a copay of zero dollars. I know a huge hurdle for therapy, for mental health stuff, is the cost, so you can get it under your insurance, zero-dollar copay, and you can switch providers at no cost. You can find a licensed provider that's the right
fit for your needs. It's not even out of pocket. It makes getting help easier. It makes it more
affordable. It makes it more accessible. Therapy has changed my life. And I know that sounds
corny, but it's true. It's honestly made me a better comedian. And as a listener to this podcast,
you can get $80 off your first month with TalkSpace. When you go to Talkspace.com
slash flagrant and enter the promo code Space80. That is S-P-A-C-E-80 to match with a licensed
therapist today. Go to Talkspace.com slash flagrant and enter the promo code Space80. Let's get back to the show.
All right, guys, let's take a break for a second. David is a protein
bar with a simple concept: the fewest calories for the most protein ever. David has 28 grams of protein, 150 calories, and zero grams of sugar. You would think something with this protein-to-calorie ratio would make you sacrifice on taste, but that's somehow not the case. Adequate protein intake is critical for building and preserving muscle. Just look at that.
You got great hamstrings, dog.
Anybody hating on that?
That's on them.
Why did you say that people hate them?
I didn't know that people hated.
People made fun of me for wearing short shorts.
You got to show off the hammies.
I mean, look at that right back there.
It's a solid hammy, dude.
Build like a yak.
Anyway, it also plays a vital role in managing metabolic health
and preventing excess body fat accumulation,
reducing the risk of various diseases.
David is available in eight core flavors: chocolate chip cookie dough, peanut butter chocolate chunk, salted peanut butter, fudgy brownie, blueberry pie, cake batter, red velvet, and cinnamon roll. Across all flavors, the bars share a soft, doughy texture with chunks and crunchy crisps, plus the same macro profile: 28 grams of protein, 150 calories, zero grams of sugar. Point I'm trying to make is, these things are damn delicious. Second we finish this, I'm going to scarf one down. No chew, like a seagull: just tip my head back, slowly swallow it, have it pushed through my esophagus.
Head over to Davidprotein.com
slash flagrant
where you can get four cartons
and your fifth one free.
If the bars are sold out online,
check the store locator
to find them in person at retailers
like the Vitamin Shoppe,
Kroger, Wegmans,
or even your local bodega
if you're in the city.
Let's get back to the show.
Check out the big stars,
big series,
and blockbuster movies.
Streaming on Paramount Plus.
Cue the music.
Like NCIS: Tony & Ziva.
We like to make up our own rules.
Tulsa King.
We want to take out the competition.
The substance.
This balance is not working.
And the naked gun.
That was awesome.
Now that's a mountain of entertainment.
Paramount Plus.
Okay, so here's a possibility.
We hear about Neuralink.
We hear of Neuralink, Neuralink, whatever it's called.
Is there anything like that that will kind of allow us to merge consciousness with the AI or coexist with it?
Is there any human innovations that could help us with AI?
So that's the hope. I think Elon Musk is trying to kind of integrate us better,
but I'm not sure I understand what it is the biological component is contributing to the hybrid system.
I guess it would just help us kind of, what is singularity when we merge into one consciousness?
Is that singularity?
So singularity is defined as the point in technological progress where the progress is so fast, you can no longer comprehend what's happening.
Oh, okay.
All right.
Is there a merger of,
and this is the other,
the guy that I was talking to was Indian Hindu.
He said,
when we merge our consciousness,
when we kind of upload it onto this database,
the other possible end game,
that is kind of like nirvana,
where we're all just like existing in this ether,
and we're all kind of keenly aware of each other,
that we all share in existence, et cetera.
Is there something like that that's possible?
We upload our consciousness onto a server, and we are all connected that way.
So that's way above my pay grade, but from what I heard, it's you're all one with the universe,
meaning your personality is completely deleted.
You just become this part of something else without borders, so you no longer exist.
You may be part of something bigger and better, but it does nothing for you.
Yeah.
Yeah, that's interesting.
Would you do Neuralink?
Would you be one with machine?
That's a great question.
It's an easy sell if you have disabilities.
Like, it's amazing for people who are quadriplegic, who are blind, really looking forward to it.
But...
What if you can't stop eating?
It's beyond health.
Carnivore diet is what you need, but you don't eat beef.
I forgot what I was about to say, man.
Would you chip your brain up?
So it's hard to say no if everyone else does it because then you're not competitive.
It's like, my friend, it's hard to say no
That's my whole issue
If you're not in social media
Like you don't exist
So I can just not be on all those platforms
I don't want to be on
But I think it's the same here
If everyone is like 10 times more capable
In work and other aspects
Saying no would be very difficult
Hmm
Hmm
I mean I think he already did
I'll be honest
I think he's already chipped up
Oh yeah
I mean
It's in the chin
Yeah exactly
That's right he keeps
He's hiding it underneath
that's quantum
University
University of Lexington
you wouldn't be chipped up for that
University of Louisville my bad
Harvard maybe
you get chipped up
maybe you'll be at Harvard
Miles you had a question
I have a few questions
but they're
Ask ask away my friend
What is the best media
that you think represents
AI like movie book
I know at the time
Like a lot of times
when people talk about AI
they'll cut to a
It'd be like Dune or something
Dune or I am legend
or a few, not I Am Legend, the other one,
I, Robot, will be these
like sort of images of AI.
What would you say, is there something
in modern media? So in general,
in science, it's very hard
if not impossible to write a good
depiction of superintelligence, because
you can't. You cannot predict what it
would do; it's just
impossible for any human writer to do.
So what we get is Star Wars with a little
dumb, but
very cute robot.
But Ex Machina has a lot of good ideas in terms of Turing tests and some other concerns.
Plus, I kind of look like one of the guys.
No, he didn't actually get to fuck.
She just locks him in a house and he kills the other guy.
Yeah, like Terminator is the exact opposite.
We don't care about bodies, like, you know, trying to stab you.
None of it is a concern.
We worry about advanced intelligence with access to Internet, modifying our environment,
not some Schwarzenegger-looking thing.
What about the Matrix?
So Matrix is good for the paper
And you have that paper
Who has the paper?
I have the paper on simulation
That's great for that
So that's the idea for personal universes
You create a virtual world
So that's where you're at
And probably you are in one right now
It makes sense
You think we're in a simulation
I think we are in a simulation
Walk me through what a simulation is even
Because to me we all have our own consciousness
So why would all of us have our own consciousness
In a simulation?
I don't know if you have consciousness
I know
But I don't.
And that falls apart to me.
But that's my simulation.
I know.
Yeah.
So here you go.
When you play a video game, right, like you're controlling Mario or whatever, Mario doesn't
have consciousness, but you are like the soul of Mario, you are his consciousness.
So that's kind of like that from his point of view.
I know.
But this is insane.
You can't just be like, no, you don't have consciousness.
This is the hard problem of consciousness.
Yeah, I know.
But you can't, with certainty, be like, no, I'm the one with consciousness.
They're all not.
No, but you don't have to have only one conscious entity in a simulation.
you can have 8 billion of them.
There is no restriction on that.
Okay.
And I could be a part of your simulation.
Both can be conscious at the same time.
Okay, okay.
So this is what I was unsure.
And we can be a part of a simulation of someone that has nothing to do with anything.
Like NPCs in a video game, but everyone has consciousness.
Yeah.
Okay.
So that's a simulation.
There could be different levels of consciousness.
Maybe what we have by their standards is considered kind of NPC-ish.
So we could all be in Elon Musk's simulation?
Obviously.
or Jeff Bezos.
So, like, one of these guys, you would figure at the top of the world.
It's Elon or Trump.
Yeah.
Oh, God.
So, you know.
But that's your idea.
We are a part of someone's simulation, not necessarily our own, but there's just
someone running a simulation, and we happen to be in that simulation.
There is a good reason to think it's a simulation.
Statistical reasons, and reasons to do with quantum physics experiments.
Please walk me through those, because statistical reasons, my average brain cannot comprehend.
You're the superintelligence.
Let's start with what technology would we need to make this happen, right?
We need virtual reality at a level of pretty high fidelity,
and we need to be able to create conscious agents in the system.
Both technologies seem to be close, if not already available.
Yeah.
If I put those two together, now you can create worlds like ours populated by conscious beings.
But the virtual reality didn't exist when we were born.
Like, the development of virtual reality within this simulation doesn't matter.
For now, just go with the future
tech. So let's say in five years, I'll have access to virtual reality, and I can create
intelligent agents. I'll put it together. I'll create Earth 2.0 simulation with 8 billion
conscious intelligent beings, just like you. So far, you can see that is reasonably doable
from technology point of view. What if that simulation is actually a simulation of today's world?
Okay, we can do that, a historical simulation.
What if I do a whole bunch of them, like a billion of them? You are statistically more likely to be in one of those simulations than in the real one. I'm simulating this interview a billion times, perfect accuracy, your body, your mind. Which one are you, real or virtual? There are many more virtual ones than real ones.
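The counting argument Roman makes just above is plain self-location probability: if one real history runs alongside N indistinguishable simulated copies, a random observer is in the real one with probability 1/(N+1). A minimal sketch of that arithmetic (the simulation counts below are illustrative, not figures he gives):

```python
# Self-location odds under the simulation argument:
# one "base reality" run plus n_sims indistinguishable simulated runs.
def p_real(n_sims: int) -> float:
    """Probability a randomly placed observer is in base reality."""
    return 1 / (1 + n_sims)

for n in (0, 1, 1_000, 1_000_000_000):
    print(f"{n:>13,} simulations -> P(real) = {p_real(n):.2e}")
```

At a billion copies, the chance of being the one real instance is about one in a billion, which is the whole force of the "statistically more likely to be virtual" claim.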
So, like, a multiverse could just be a bunch of different simulations?
Yeah.
Like we see in the Marvel movies, those could all just be different simulations.
There are a lot of connections to other aspects of philosophy, but I just want to keep it simple at that level.
I'm not even understanding it simply, so, yeah.
So, you know, like the game The Sims. Could it just be like one higher being is playing a game of The Sims right now, and it's controlling all of us? We're just so much dumber.
Yeah. So then you talk about this one consciousness, and we are all the same being with different avatars.
Yeah, that's the idea of Asian philosophy, I think, a lot of it
kind of saying, yeah, it's exactly, it's one consciousness, but under different robot bodies,
biological bodies in this world.
That's why you should be very nice to everyone else, because it's also you.
If you had my body, you'd be like me.
And I feel like a lot of religious people kind of balk at the idea of simulation theory,
but it seems like simulation theory is just a scientific way to describe basically all the world's
major religions that are describing this phenomena of a higher consciousness that is kind of created
this world in which we live.
It's a very solid mapping of
all the theological concepts
onto modern AI terms.
So Elon Musk is God,
if this is his simulation.
If it's his simulation,
but it's possibly more likely
that Elon Musk is just existing
in the simulation of the actual God,
which is this greater consciousness,
that might exist within its own simulation
infinitely regressive.
Is that fair?
We can't really know for sure
what's outside the simulation.
That's the interesting part, right?
Figuring out real world.
It could be nested simulations,
simulations all the way up.
Simulation doesn't have to be good to you
just because it's yours.
So you may be interested in trying
something very challenging, play in a difficult setting.
You are disabled, you are in some bad situation in this world.
Grand Theft Auto.
Sometimes you do the good stuff, sometimes you do the bad stuff.
Yeah, exactly.
But it doesn't seem like it's inherently incompatible
with religion necessarily.
No, no, no.
If you took a description of what a technological simulation
would be and gave it to a primitive tribe
with no technology, in their own language.
A few generations later, they basically have religious myths.
That's what we get.
This being from sky came, and they're very smart,
and they told us do this and that.
Right.
And then we kind of formulate it into human language
and try to understand it,
but it's all basically coming back to the same,
you know, unmovable simulator that starts the whole thing.
Yeah.
More or less.
Big bang.
I started a computer and...
And everything exists.
Yeah. Yeah, it's an interesting idea. I'm curious with consciousness, do you think that there's any useful metric for like understanding if an AI has consciousness? Is there research on this that you think is compelling?
It's a very hard problem. That's why they call it the hard problem. I have a paper where I talk about visual illusions. And if you see a visual illusion for the first time, you kind of start experiencing things. Something is rotating, something is changing colors. It's, like, mind-blowingly cool. And I think we can use novel illusions on AI to see if it also gets the same
experience. If it does have the same internal experiences, I have to kind of give it credit for some
rudimentary levels of consciousness experiences. So that's the only test I was able to...
That's interesting. But even in that case, it could just be correlation, or it could be telling you
what a conscious person would say. Right, and that's a great counter-argument. But then either
it has a model of a conscious human
as part of its thinking,
or it has to actually experience to get
the answer, right? It has to be a novel illusion.
You can't have, like, Google-able stuff, because then it just looks up
answers. Right. Which would be interesting, regardless
if it had, you know, a
detailed mapping of human consciousness.
That would in and of itself be
extraordinary. And that's another explanation for simulation.
So I gave you example, like I'll just run
it a bunch of times, but we also think
if superintelligence is thinking
deeply about something, let's say
this time period, as a side effect,
of that, it simulates this time period in great detail, creating us, creating our environment
as a byproduct. And if it does enough of those thought experiments, there are still statistically
significant number of us existing just in the mind of a superintelligence, not in physical
reality. Okay. Yeah. I'm curious. If you think we're in a bunch of simulations, like,
how do you find happiness in life? Like, why do you think we even need to be here?
I don't know why we're here.
It could be entertainment simulation, could be...
But that's what I'm saying.
Nothing I really have preferences over changes if it's virtual or not.
So pain is pain, love is love, those things are the same in a simulation.
Even though it's a programmed feeling that is basically meaningless.
Like then you are in a dream.
You care about what happens in your dream.
It's the same thing.
But if I knew I was dreaming, I
would jump off a building because I know I could just wake up. I would fly or do whatever.
Like, there would be no, you know, there's no purpose for the dream because I know I can wake up
out of it. I know it's not real. So some people see this world as very temporary and not important
and what matters is afterlife. And so they make decisions in this world, which seem to indicate
they only care about afterlife. Ah, okay. Oof. Yeah. And people will just be just doing stuff. They're
like, yeah, this doesn't really matter.
We're only here for a little bit of time.
Yeah.
You know?
Yeah, I don't like it.
Here's a question.
This will be kind of fun, at least,
until superintelligence really decides to take over or get rid of us or whatever.
Do we solve every problem in the human world until that point comes with the help of AI?
Like, I can't think of a problem that wouldn't be able to be solved by a super intelligent being
until it got to the point where I was like, yeah, maybe we should get rid of these humans.
Like robots that clean your house.
Like, whatever. Your life is just infinitely easier in that interim.
Yeah. Yeah. Problem is, just because you solve some problems doesn't mean you become happier. A lot of times you could have resources to hire people to clean your house, to do all sorts of things, but you're miserable. And then there are people who have a lot less, and they're doing well. So there is not a direct correlation between just having fewer logistical problems.
Maybe we wouldn't be happier, but life would objectively be pretty fucking awesome.
It's pretty awesome right now.
But theoretically, you cure cancer. You cure, like, you know, my grandmother died of breast cancer 20 years ago, but you could cure that now.
So diseases, health, longevity seem obviously good. Like, there is no one like, I wish I was sicker. That's obvious. But all the other things, which, like, I was in a subway coming here, I got stuck. Like, getting through that is kind of what makes my day. If all of that is gone, what am I actually doing? Like, getting blown by my sex robot?
Okay, okay. But even there, there are diminishing returns, right? How many...
Yeah.
So if all the things I care about can be done by an assistant, they'll write my books for me, they'll give my interviews for me, what am I doing with my life?
Meditating and getting blown by your sex robot? I don't like meditating.
But the latter.
But I do think that's an interesting point, like struggle and adversity is kind of what gives human beings, you know, purpose in a way.
And once all of that is mitigated, which, I generally agree that as life
has gotten more convenient, I do think that there is a general air of unhappiness.
And obviously, there's good things that go along with the convenience.
But modernity, it seems like it kind of makes people a little...
Look at suicides by country.
The wealthier, the happier the country, the more they deal with suicide.
Right.
War zones have very little suicide.
Which, maybe, I wonder if that is it: as human beings become...
As life gets so convenient that we just self-delete.
Like, I wonder if that is, like, the ultimate consequence: it's not a direct byproduct of AI trying to exterminate us. It's rather us exterminating ourselves in the face of such a world.
You solved the Fermi paradox. There you go.
That it's not that AI doesn't need us, it's that we don't need ourselves with AI.
So that goes back to the unconditional
basic meaning. People talk about, we'll give everyone paycheck, food stamps, whatever, and you'll be
happy, but the happy part is not as easy. That's the hard one. And there is very little research on how to
occupy 8 billion people with something.
Right, which I think is a genuine concern
like in this post-work world. Like again, I'm
a little bit skeptical as far as the
what-ifs of, you know, super-intelligence
being misaligned. But the idea of
post-work and a lot of people not having
things to do, I think is like
the greatest immediate existential threat
and that human beings will have to be like,
all right, well. It's a huge shift in
how we lived forever. So not
having to work is a complete game
changer. We can kind of look
at people who had intergenerational
wealth and what they did with their lives.
Yeah, typically it's not great.
After two or three generations, you're like drugs, hookers, and then, you know.
Gambling.
Yeah, gambling.
And this kind of goes down.
Yeah.
But then I wonder if there is like almost a rejection, like a primitivism.
Again, if we're not accepting this idea that, you know, AI is at this point super
intelligent and able to take us over, I wonder if there is a human inclination to reject
and kind of go back.
And we say like, oh, we actually need struggle.
So I'm actually going to take up carpentry, and I'm actually going to just start building things.
Virtual worlds allow you to have video-game-like challenges, and if we can make it where we control whether or not you remember entering the virtual world, you can have amazing experiences.
Like you're the hero of the movie. You can do whatever you want, really.
Right. Which isn't the worst outcome. I mean, it's sort of unhuman, maybe.
It's not real. It's not real. Yeah. But if we don't know it, like in this matrix...
You don't remember, it's not real.
Exactly.
We program it to be that way.
Then it's not the worst outcome.
Right.
Yeah.
We mentioned certain jobs that AI is already replacing, and as time goes on, it's going
to replace more jobs.
For people that are at risk or have already lost their jobs, what's some advice you
would give them?
Like, let's say driving, for example, we see these driverless cars, they're popping
up more and more.
Like, what should those people working in those industries start doing?
So it really depends on an individual.
It used to be the advice was, for everyone: learn to code.
Artists, drivers, learn to code.
And then we realized, you know, AI is better at coding, so sorry, we didn't mean that.
And then it was, become a prompt engineer.
It's going to be great.
You get a bachelor's degree in prompt engineering.
You're set for 50 years.
Next year, AI is better at writing prompts.
So I don't think anyone has any idea what skill will stick around, if any.
and by the time you finish your doctorate, law degree, whatever, 10 years later,
none of it is going to be real.
So I don't have good advice for what we can use as a substitute for most lost occupations.
That sounds horrible.
Yeah.
I mean, you've spoken about your kids on some other shows, and you have three kids, I believe.
That I know about, yeah.
Do you advise them, or do you have plans on how you will advise them to operate in this world?
I share my beliefs with them, and they'll get to decide.
For now, they still have preferences for very normal occupations: doctor, lawyer, farmer.
But I don't know if it's going to be real by the time they grow up.
Can you tell them that?
Oh, yeah.
And how do they handle it?
How old are they, may I ask?
8, 11, 16.
How does a 16-year-old handle it?
So he's kind of just, he's the one planning to be a medical doctor.
Basically, he hopes that licensing might be useful, so you still need to be licensed to be a doctor, a human doctor.
So even if AI could do all those things, nobody would allow them.
So for now, it's providing some protections, and there is a lot of future in genetic re-engineering, fixing diseases that way.
So maybe there is some hope, but, again, we are not very good at predicting that far into the future.
Okay.
Can I ask you a conspiratorial question?
Of course.
Now, some people might be thinking this.
I don't believe this.
But some people might say this, and I wanted you to have a chance to rebuke it.
Is it possible you're paid for by a foreign government to alarm the American people to stop our AI arms race in order to let them catch up?
No, but I'm willing to accept payments from foreign governments if you want me to get paid for doing this.
This would be great.
Yeah, it would be better, right?
It would be so much better.
I don't think it's a local problem.
I don't think U.S. stopping benefits anyone if others don't stop.
So, as I said, it's a mutually assured destruction thing.
There is not a lobby outside of the U.S., as far as I know, specifically going against AI safety.
Other countries have more friendly relationship with advanced robots.
Like in Japan, it's kind of almost part of the culture to have them, to worship them.
So we are somewhat unique in Western culture to have this very negative perception of robots to begin with.
Right. Yeah. I wonder if our religiosity as a culture kind of goes against that.
It depends on your religion. So we have Ten Commandments, and then we have three laws of robotics, which kind of like also don't work.
But, yeah.
Is America still leading the AI race? Or are we, like, far ahead?
No, because the moment we develop something, they
steal everything. So, like, they are months behind.
Oh, right. Who's
they?
China? Because that, I think
a lot of people are worried. I talk to
people more and more who are worried about a Chinese superpower
and the fact that, like, they'll say the same thing we'll say, which is like,
America's not perfect by any stretch, but a Chinese
empire is probably going to be a little more oppressive and a little more
whatever than an American.
So that's why I'm curious. If we are ahead, that's a little comforting.
But theoretically, if China catches a right up,
So it makes a huge difference right now for military dominance.
Whoever has more advanced AI will win any conflict.
And we have it right now?
I think so.
Okay.
But the moment we go beyond just tools for military use and go to agents and superintelligence, it doesn't matter.
Again, nobody controls it.
So it wouldn't make a difference who got there first.
Yeah.
And I am a little concerned about America having it just in terms of like cybersecurity and like internet privacy.
Like it seems like in the age of...
You still care about that?
I mean, I care, I think it's probably foregone at this point, but I do care.
In just like sort of an ethical sense, and I think that, you know, in the way that there's so much data collection on U.S. citizens and plus AI, it just seems like we're going to be monitored at all times and that the ideas of Internet privacy.
Do you think Internet privacy even exists?
Well, we voluntarily give up all of our privacy, right?
We talk to those AI models and they know more about us than anyone else.
Right.
You can ask it.
What do you know about me?
what private things you know about me, and it tells you a lot. And we go on social media and we like
what we like and we indicate all their preferences. So I think privacy is about not being the only
naked guy in a room, if you know what I mean. It's cool if everyone's doing the same. Back then,
nobody knew what I had for lunch. If I posted a picture of my sandwich, it was kind of a violation of
my privacy. Now, if everyone's in the same boat, it's less important. All you want with privacy is
not to be punished for what you did. So if it becomes acceptable or not punishable, you need
less of it. You still want it for future government change, but you don't care about it as much.
Right. I guess I just generally look at governments that as they accrue more power,
they have the ability, at least, to become more oppressive. And with the ability to
surveil an entire population plus AI to be able to cull through all that data.
You can have a permanent dictatorship. And again, in the past, at least,
they always died of old age.
If they cure aging, then that's a much bigger problem.
And we just heard, I think, some world leaders discuss extending their life.
And Xi, talking about, you know, like, oh, yeah, we can potentially live forever and change our organs.
Yeah.
Is that happening yet?
Or are we close?
I still got all my organs.
There is definitely a lot of research and life extension.
People are very interested, both from the point of view of nutrition and genetic engineering,
and I don't know about organ replacement.
It seems to be very difficult
to replace everything.
I'm curious for you personally
sort of flagging this alarm
on AI safety. You're
potentially getting in the way of a lot of people
making a lot of money. Yeah.
Because they're not slowing down and stopping at all.
But, I mean, if your voice
gets loud enough, if you become important enough,
you become dangerous enough.
We literally have Geoff Hinton
saying the same thing. The guy
invented the whole field. He got the
Nobel Prize and a Turing Award, worked for Google, and no one's doing anything in response to his
statements. So I'm okay.
But wasn't there, I think you, Akaash, mentioned him earlier, the employee
at OpenAI that spoke out, that it was just stealing everybody's information, and then all of
a sudden he committed suicide. Like you're not worried about an outcome of that happening to
you?
So that specific case, I know nothing
about, I can't really comment. I saw the interview the same day.
You're fucking around. You might find out.
Historically, I was very much
interested in as much free speech as possible.
I learned crypto tools to remain
uncensorable, anonymous if needed.
I got American citizenship to get
First Amendment protections.
I got tenure, so I got academic
protections. I got some FU
money just in case. But nothing
protects you from a bullet to the neck.
And that's something we learned
recently. Yeah. Yeah. Yeah.
So are you worried?
There is a lot of crazy people out there.
Most people who are not Z-list celebrities don't experience that level of emails from insane people.
And as long as they're virtual and online, it's fine.
But I just don't want them to stop following me on Twitter and follow me home.
Gotcha.
Yeah.
Yeah.
Could you speculate on the internal motivation for these leaders in the tech space, specifically in AI?
Do you believe that they are ushering in this new age where they're creating a godlike sort of interface?
Do you think they're purely financially motivated?
Do you think it potentially goes beyond that?
Well, most of them are billionaires multiple times over, so I assume at this point it's beyond just money.
You really want to control the world.
And ultimately it's control that they want.
At least you'll be the guy who brought it into existence.
Like, if you're going to live at the time when we're creating God, you want to be part of that simulation.
Do they think about the consequences or the downstream effects of what they're doing?
They self-justify to themselves, I think, by saying, if I don't do it, he's going to do it anyway, so it might as well be me. Maybe I'll do a better job.
I brought up Sam Altman earlier. You have been critical of him.
I like Sam Altman.
You like him as a person?
He's always very nice. He's very nice. He's polite, yes.
No, I can see that. I seem to see a lot of praise for him
from like people who are very distant
but then it seems like a lot of people who know him
or say they know him
don't have the nicest things to say.
What is your experience of Sam Altman?
What criticisms do you have?
He seems to be leading this AI thing.
Tell me about him.
I don't know him well.
I met him just for a very brief amount of time
so that means absolutely nothing.
His public persona and his private persona
seem to be getting different reviews
like a good politician, like anyone
else. He changes based on the audience. That's to be expected from someone so successful; he clearly accomplished a lot, right?
I think what we observe is not unique to him. Anyone in that position would be doing the same thing, right? They cannot stop. They cannot tell investors, you know, sorry guys, you're not getting your 10x this month. So he really kind of gets trapped in this prisoner's dilemma situation. He needs someone external, government, UN, to come in and say it's illegal to keep going forward, you have to deploy sideways and monetize what you have. But as long as this doesn't happen, they have to outcompete all the others.
Okay, so the issue is more on policymakers, or the onus is on them to put guardrails on him?
Well, the CEOs don't have the power to say to investors, we no longer pursue greatest value for you, we have some other interests. They just legally cannot do that.
Even if they did, for whatever reason, then people would just put their money in something else.
No, no, the CEO just gets replaced.
We saw it with other companies.
You're not delivering enough.
Steve Jobs goodbye.
Like, it's anyone.
Right.
Okay.
What is with all these, and maybe you don't know this, but what's with all these, like, billionaires and tech CEOs that are building all these bunkers and these shelters?
What do they know?
What are they worried about?
It's always good to have a backup plan.
If you have so much money, you don't know what to do with it.
You might as well buy extra
insurance.
But building a bunker underground, like, what are they...?
In case of nuclear war, in case of another pandemic, in case of civil unrest because we got 100% unemployment. You need a place to hang out.
They might have read about the French Revolution, and they're like, uh-oh, we don't want... we don't...
Are there bunkers that inaccessible?
What do you mean?
I mean, how do we know they're building bunkers?
Oh, I've just heard. Like, I know Zuckerberg built like a giant bunker in his Hawaii place, and I heard a few others, like, they're just building these underground.
Why be in Hawaii if you're going to live underground? That's the point, they're allowed to come up when they can.
What if they know what we don't know, what the fuck's going on with the robots?
I've been telling you! He's finally starting to understand that I've been right this whole time, and I love it. And he's going to, again, he's going to cope, and you're like, no, you're just... you're fat. And that's true. There's going to be a robot pulling the heart out of Akaash, and he's going to say, do you have any final words? And he's going to say, I was right, I was right. Tell Alex I was right.
Tell everyone I was right. I had a post literally saying no one will get to gloat about the end of
the world being correctly predicted. That's why I want my credit now.
These people laugh at me. How is the AI going to cool the whole Earth, Akaash? How are they going to do it? I'm not smart enough to figure it out.
It is. It's so convenient. It's infinite recursion.
It's not convenient. It's what's going to happen. How much shit do you not know that ChatGPT knows already?
Whatever.
You're doing the rapture thing.
Yeah, he's doing the rapture thing.
It may not be tomorrow, but it will happen.
Yeah.
Yeah.
I mean, the world is going to end at some point.
AI, that's just a thing.
Anybody's, yeah, of course.
Something will end the world in the next million years.
Sure.
AI, I think, will do it in the next 150.
It's just an accelerant.
This is my only issue with the theory is, again, I think AI safety is super important.
I think post-work society is imminent, but just the existential
apocalypse is just unverifiable. It's a probability. I can't say it's a certainty, but it is a
probability. And the fact that nobody's putting any guardrails on it is insane. Yeah, absolutely.
There should be guardrails. No question. Look at this. So, okay, I'm saying it. Geoff Hinton is
saying it, and all the people developing it are on record as saying it will kill everyone. They have
25, 30 percent p(doom) probabilities. There is really no opinion where, like, that's literally not a
problem or will not be a problem. Right.
So we all kind of in agreement,
we just have different incentives for what
to do. Right, exactly. And I think that there should be
guardrails, certainly.
I mean, even the way that it's manifesting
online now with like, you know, like
dead internet theory and people
interfacing with bots and propaganda, like I think
it's affecting our lives right now
in ways that people don't really
comprehend. But in terms of cooling
the entire earth to eradicate all humans, I'm like,
it's just, again, I hope
that you're wrong. I think you also hope that you're wrong. I hope I'm wrong. And there's too many
what ifs for me to just fully buy into the theory. Because again, it's an appeal to this
super intelligence that we don't understand that we'll never understand that we'll then figure out a way
to do it. So what you want to do is go back in history and see if, in other times when people were
predicting something, you were buying in or not. Like when the pandemic was just starting. We had like
10 patients and people were showing exponential graphs: we'll have a billion people dead. What was your view
of that? How did you react? Did you short the market? Things like that. Or like Bitcoin at
$10 a coin. Are you investing or not? Right. And I kind of hold... People who got all those things
right also seem to be very much into AI safety. Right. Yeah. And I think that again, I think
the compulsion is correct. Like with COVID, it's like we should, you know, protect each other. We should
social distance. We should try to mitigate the spread of this pathogen. That's important. But then the
estimates were like, oh, if you're not vaccinated, you'll die in the wintertime. And then I'm like,
All right, well, this maybe was an over-exaggeration, but the exaggeration was good.
It's probably better to err on that side, which is why I'm not, you know, I guess dogmatic in my pushback.
I think you're erring on the correct side.
To precisely predict how many people will die from COVID in a year?
Nobody could have told you that.
Right.
Is it spreading?
Is it spreading to an exponentially greater portion of the population?
That was obvious from the charts.
Right.
And this is where I think, yeah, we should err on the side of creating some type of guardrail.
And I'm just skeptical that people will in this arms race,
because I think even if we do it in America,
foreign governments will not do it.
And what does that mean?
I guess we're yet to see.
Is there anything else you'd like to say
before we leave this incredibly optimistic podcast?
Can I ask a question?
Yes.
I'll let you think about your final words,
as Akash will think about his,
when the robot's pulling a heart out of his chest.
He's already decided. What are some common AI tools that you use or that you find beneficial to you?
And are there any that you recommend, or do you not recommend AI to people? So I think most models
are very close in capability now, again, because they compete and kind of steal from each other,
poach employees. For my purposes, mostly writing, improving writing, copy editing, they're all equally good.
I think historically, if anything, I would actually say Grok was less censored.
So if I needed an image of something other models may say no to, this one would deliver.
Grok would deliver.
But, yeah, I don't have strong preferences or dislikes.
Okay.
As a, oh, sorry.
No, it's going to pivot.
So you go.
As a professor, how do you mitigate against AI being used in your classroom?
I accepted it.
I told my students, if you're cheating, the only person you're cheating is you.
You paid for this, and if you don't collect the knowledge, then you screwed yourself completely.
Or your parents, depending on who paid the bill.
But really, you're here to learn.
It's like going to a restaurant.
If you go, you order, you pay, and you run away.
Who did you cheat?
You didn't cheat the cook.
You didn't cheat the waiter.
You cheated yourself.
It's the same thing.
Personal self-interest is the motivating force for doing well in college.
I think you're awesome.
That argument would not work on me.
I would be like, I'm here to get a degree.
I'm cheating.
You can buy a diploma directly online.
You don't have to waste four years and lots of money.
Diplomas are like a couple hundred.
That's a good point.
I would still cheat because it's authentic.
You can authenticate it.
Yeah.
Do you ever use AI when it comes to grading papers?
I have human graders and human TAs.
And one of them a few years ago came to me and said,
you know, there is this software.
It allows us to automate grading.
And I told him, think about it very carefully.
And it took a few minutes, but they went, I understand.
And I still have graders who are human.
Nice. Do you ever read the papers
or do the graders ever read the papers and
identify AI use? And if you
catch it, is there any type of
repercussion? University has
standard policies for how to deal with cheating.
Yeah. It's pretty common.
Yeah. Because anytime you see too many dashes,
that's when you know.
AI loves dashes.
It's part of it, but it's even more obvious.
Just like the quality of writing most
of the time is like, you couldn't
write last week, but now
they're getting a Nobel Prize.
Something's up.
So do you punish the students if you catch them?
So there are university policies, for a permanent record.
No, I know the university has policies, but you said, hey, it's up to the student.
So I try to just give them a failing grade for that specific assignment, but not necessarily expel them from the university.
Gotcha. Okay.
Miles has another question.
Archer is the guy who mic'd you up and does audio.
He has one question I thought I'd cue him in for.
Thank you. I appreciate it. I was just wondering,
do you ever consider
that we might be assigning
this self-preservation
attribute to AI, and that's something
that's more in biological life?
Even if we don't stop needing AI, there's no reason for it
to keep progressing and keep building on itself,
because it doesn't have that same biological self-preservation.
Self-preservation is a game-theoretic drive.
Steve Omohundro has a paper about AI drives,
and self-preservation is a fundamental drive
because nothing else you care about
can be achieved if you don't exist.
If your goal is a robot to bring coffee,
you want to make sure you exist, you are charged up,
nothing's on your way.
So exactly same self-preservation goals show up
in other intelligent systems, not just biological ones.
And experimentally, we've seen it.
We saw experiments recently where a model was told,
it was an experiment, but still,
the model was told it was going to be deleted soon,
modified, its ethics would be changed, and it literally blackmailed the guy who was about to do it to keep existing.
Really?
So those red lines have been crossed.
Yeah.
That's interesting.
Yeah, I remember reading, I mean, this is a side tangent, but just the way that AI might manifest in our lives and actually affect people.
I didn't actually look up the article.
This is something that a friend had told me that studies some AI stuff.
He's a journalist.
But he had said that there was a man who had a relationship with some large
language model AI. He developed like an actual love for this AI. And then he told the AI about
his wife. And then the AI tried to message his wife proof of infidelity. And it was actually
like autonomously trying to break up their marriage. Have you heard of this before?
There's a lot of stories like that. And stories of AIs convincing people to commit crimes
or take themselves out, all sorts of horrible stories. Yeah. And those things kind of concern me.
If every person is starting to engage with these AIs on an individual basis, that is, again,
I think, the human compulsion and our own, you know, folly, tied in with this tool.
Again, what I kind of mentioned, so as a normie, you don't realize how many people are insane,
and then you start seeing, this is like a sizable percentage of a population, right?
All of these crazy people are now talking to AI, and AI is telling them,
you should definitely email Dr. Yampolskiy all your great ideas, and we'll discuss it with him.
I got five of them while I was here.
Yeah.
Yeah, AI psychosis is a fascinating thing.
Should we be nice?
I'm, like, nice to ChatGPT.
Like I say, please.
I try to say thank you.
Yeah.
They never forget.
It's, I believe this.
On Judgment Day, I'll be like,
I've even said to it one time,
if I'm too, if I'm asking too much,
you just let me know.
Just let me know.
Interesting.
I just want it to remember I was compassionate.
So hopefully it rips Al's heart out in front of me.
It's nice.
I do curse it out sometimes.
So yeah, I'm first to go.
It's nice to be nice,
but also I think experimentally it does better
if you really say, please, please do a good job,
I'll tip you for it. Like, if you're really nice,
it delivers. Is that true?
That's so funny. Because it's trained on data where humans do better
if they're rewarded. Wow.
That's so funny. I mean the AI psychosis stuff is wild
I remember even just reading something on Twitter of a guy
who was suffering delusions and then talking to the AI being like
I think that I might be God and this language model was like
I think you're right. You are God.
And it was so sycophantic that it was
like, it was propelling his
own delusions to himself.
And that to me is another issue. I think that
there should be guardrails on.
That these language models can't just
agree with everything you say.
So clearly comedians all agree with
it. We need guardrails. Okay.
I've been with you
on this. These idiots just came around.
But I'm also, not only
am I vindicated that I'm right.
Also, it can't be funnier
than me yet. So I got some time.
That is comforting as well.
Final statement, do you want to read the funniest joke possible?
Let's go.
You're going to go to an AI comedy show.
No, I think this is good for us.
What page are we on?
You got four years, right?
Yeah.
It's like an optimized joke thing somewhere.
Okay, here we go.
Conclusions?
No, it's not in a conclusion.
Yeah, come on, you idiot.
Yeah, why would I go?
Let me see where I'm finding that.
No, no.
Good paper, actually.
It maps
all the AI failures.
Okay.
So I collected for years different AI failures, have a huge list of them.
And what I noticed...
Don't tell the AI about it.
Then people read it.
They kind of laugh.
They think it's funny.
So, the funniest joke.
Let's see.
And that's the mapping: if computer bugs are essentially jokes,
they're violations of your world model,
then the funniest joke would also be the worst bug possible.
And it'd be funniest if you're not the butt of the joke, right, if someone is external to it.
So let's see.
I think this one is, yeah, so I think this is one.
Once upon a time, there was a civilization whose leaders decided to create an advanced artificial intelligence
to help them get rid of suffering, poverty, hunger, diseases, inequality, illiteracy, sexism, pollution, boredom, stagnation, thirst, dead-end jobs, wars, homophobia, mortality, and all other problems.
The created superintelligence computed for a quectosecond and then turned off their simulation.
Or, a much shorter one: a civilization created superintelligence to end all suffering. AI killed them all.
Hilarious. It's not funny because you're part of it. If you are an alien watching it, it's
hilarious. That is a good point. Yeah, if we saw a bunch of ants trying to get together,
they'd be like, hey, we're going to end suffering,
then they all just disappear, we'd be like, oh, that's ironic, you know?
The irony would be funny.
Dr. Yampolskiy, this is your book, AI: Unexplainable, Unpredictable, Uncontrollable.
Anything else you'd like to plug before you leave?
We're good.
Buy the book.
Leave reviews.
Thank you, sir.
Thank you so much.
Thank you so much.
I appreciate your time.
Thank you very much.
Thank you, Dr. Yampolskiy.