The School of Greatness - How To Prepare For The Inevitable Future With Artificial Intelligence w/ Mo Gawdat EP 1168
Episode Date: September 27, 2021

Today’s guest is Mo Gawdat. He is the former Chief Business Officer of Google X and author of the international bestselling book Solve for Happy. After a long career in tech, Mo has made happiness his primary topic of research, diving deeply into literature and conversing on the topic with some of the wisest people in the world. In 2019, Mo co-founded T0day, an ambitious project that aims to reinvent consumerism for the benefit of consumers, retailers, and our planet. He’s written a new book called Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World.

In this episode we discuss what artificial intelligence is, the positive ways AI will affect our future, the truth about what we need to be aware of with AI, how to prepare for the inevitable future where AI plays a bigger role in society, and so much more!

For more go to: www.lewishowes.com/1168
Check out Mo's website: https://www.mogawdat.com/
The Wim Hof Experience: Mindset Training, Power Breathing, and Brotherhood: https://link.chtbl.com/910-pod
A Scientific Guide to Living Longer, Feeling Happier & Eating Healthier with Dr. Rhonda Patrick: https://link.chtbl.com/967-pod
The Science of Sleep for Ultimate Success with Shawn Stevenson: https://link.chtbl.com/896-pod
Transcript
This is episode number 1168 with Mo Gawdat.
Welcome to the School of Greatness.
My name is Lewis Howes, a former pro athlete turned lifestyle entrepreneur.
And each week we bring you an inspiring person or message
to help you discover how to unlock your inner greatness.
Thanks for spending some time with me today.
Now let the class begin.
Welcome back, everyone. My guest today is Mo Gawdat. He is the former chief business officer
of Google X and author of the international bestselling book, Solve for Happy. And after
a long career in tech, Mo has made happiness his primary topic of research, diving deep into the
literature and conversing
on the topic with some of the wisest people in the world.
And he's recently written a new book called Scary Smart, the future of artificial intelligence
and how you can save our world.
And in this episode, we dive deep into what artificial intelligence is and what it's going
to be, the positive ways AI will affect our future,
the truth about what we need to be aware of with AI,
how to prepare for the inevitable future
where AI plays a bigger role in society,
and so much more.
If you're inspired by this,
make sure to share this with someone
that you think would be inspired to hear this as well.
And if this is your first time here,
click the subscribe button right now
over on Apple Podcast or Spotify
or anywhere you listen to podcasts.
And leave us a review at some point during this episode to let us know what part you enjoyed the most.
And the fan of the week who left a review is from CJ who said,
Lewis is a very real and honest interviewer.
He doesn't interject himself constantly, although I know I do too much sometimes,
but allows guests to tell their stories and share their knowledge.
And the guests are wide ranging and bring a ton of value from finance to wellness to
relationships and beyond.
Grateful for the time he has put in here.
Big thank you, CJ, for leaving your review, for being a subscriber and being the fan of
the week.
And I cannot wait for you guys to check out this episode.
I hope you enjoy it.
And in just a moment, the one and only Mo Gawdat.
Welcome back, everyone, to the School of Greatness podcast. We've got my friend Mo Gawdat in the house.
Good to see you, sir. Amazing to see you again. Great to see you. One of the smartest guys I know.
One of the fittest and most amazing people. Every time I see him, I know how old I am.
Exactly, man. I'm excited about this new book you have, Scary Smart,
the future of artificial intelligence and how you can save our world. You've got some predictions
that artificial intelligence is real, it's happening, it's coming, and it's very scary
at one point. There's a lot to be worried about, but also there's some potential for utopia as well.
True.
Right? There's potential for how artificial intelligence can really set us up for a happier,
more fulfilling life. But you're saying that there's also some scary things that could happen
or that probably will happen first until we get to that place. So should we start with the happy
part or the scary part first?
Oh, no.
I normally start with the scary part.
Okay.
You know, most of the people that read Scary Smart so far,
you know, early readers,
would come around chapter five and either text and call me and say,
should we kill ourselves now?
Like, you know, should we bring kids into this world at all?
And it is, it really is scary.
But it's all true, sadly.
I would probably say that the best place to ever hide anything is just in plain sight, right?
And the real pandemic is AI.
AI is all around you.
It's all around you.
It's everywhere.
Explain what is AI for people?
Is it just robots?
Is it something else?
What is it?
Not at all.
As a matter of fact, so the image we see of AI is a humanoid.
Okay, that's what the movies normally make AI look like.
But it isn't at all.
I mean, today, actually, just as we speak, you know, yesterday, Elon Musk, you know,
spoke about the Tesla bot and the Tesla bot appears to be a person that can move and walk and so on.
The truth is, it's the software behind a Tesla bot that is really where the intelligence resides.
It's all about, you know, if you've seen the movie I, Robot, it's not the robot.
It's not the physical design.
Yeah, it is the software behind it.
It's VIKI.
It's that head that thinks in the background.
It's the, you know, don't call it software anymore, but it's intelligence.
It's not a machine.
And I think this is what one of the things that most people miss about AI is that we think of it as just another evolution of our technology,
another wave of machinery that is going to be our slaves.
And, you know, just like a hammer, you tell it to hit a nail and it will hit a nail.
That's not at all true.
The real thinking about AI started in 1956,
at something that was called the Dartmouth Workshop.
And in 1956, we expected that machines could move from being the best
mechanical Turks, the best at doing things repetitively very quickly, to actually developing
intelligence. Nothing happened. Like the entire thing was a joke until the turn of the century.
There were a couple of what were known as the AI winters, in 1973 and
1987, with economic collapse, when AI was completely forgotten. It was reignited by Japan.
But then at the turn of the century, what happened is we started to discover something
called deep learning. And deep learning is a true form of intelligence. This is, you know,
one of the first examples I saw of this when I was at Google was a white paper that we published
around having machines watch YouTube. We didn't tell them what to look for on YouTube. We just
told them, go ahead, watch YouTube, tell us what you find.
And what happened?
Eventually, one of them sort of raised its hand and said, I found something that appears so
frequently on that thing of yours.
YouTube.
YouTube. And we said, okay, show us.
What are the cat videos?
A cat.
Really?
It was a cat. So we said, yeah, call it a cat.
In no time at all, you could find every cat on YouTube.
Now you can find every dog, every nude picture, every writing, everything.
In like a second or a minute?
It depends on how much compute power you're going to put behind it.
But how does it find it?
Because a cat is not really a cat. Cat is different if you look at it from its profile than if you look at it from the
top.
If it's a kitten, if it is black or if it's, you know, striped, you know, there are so
many cats out there.
It's a breed of cat, yes.
Which breed, you know, if it's jumping, is it, right?
But yet the machines are capable of finding all of them. Like humans can find.
It can detect.
Absolutely.
This is what something is.
Absolutely, right?
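The unprompted pattern discovery described here — machines finding "cat" without ever being told what a cat is — is, in spirit, unsupervised clustering. A minimal sketch, run on made-up one-number "frame features" (nothing like Google's actual system; the data and the tiny k-means are purely illustrative):

```python
import random

def kmeans_1d(points, k=2, iters=20):
    """A toy unsupervised grouping: nobody tells the algorithm what a
    'cat' is; it just finds clusters of similar feature values."""
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign every point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# Pretend these are one-number summaries of video frames: a frequent
# pattern (values near 1.0, 'cat-like') and everything else (near 0.0).
random.seed(0)
frames = [random.gauss(1.0, 0.05) for _ in range(80)] + \
         [random.gauss(0.0, 0.05) for _ in range(20)]
print(kmeans_1d(frames))  # two centers, one near 0.0 and one near 1.0
```

The algorithm never receives a label; the frequent pattern simply falls out of the data, which is the point of the story.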
Now, that unprompted AI, that bit of machine learning that we call deep learning, was really the turn of AI.
Because then from then onwards, all you needed to do was to build enough algorithms for the machine to keep trying to find
something. So basically the way we mostly teach them is we get them to say, okay, we're going to
look for the school of greatness mug. Okay. The way we do that is we write a few algorithms and
we say, okay, we could go ahead and try. And then we show them pictures. One of them has, you know, the correct mug and the other has maybe a different mug.
Okay.
And the software that says, yeah, that's the right mug.
We go like, you're smart.
Let's build more versions of you.
Okay.
So by literally trial and error and reward and punishment, huh? We're just telling them,
okay, if you're smart, we're gonna reward you. Keep trying, keep trying, keep trying.
But the thing is, once you build the algorithm, you can show it a billion mugs, a billion of them, because they're like one of them out of a billion, and it will keep trying. It will try and try and try, and sometimes it will get it right, sometimes it will get it wrong. But in no time at all,
Hmm.
by telling it when it finds the correct mug, it will build its own brain patterns, all new. And that's really what it is.
So how do you reward artificial intelligence? How is it rewarded?
The way we build them is very cruel.
Okay?
So we don't actually build the AI itself. We build something that's called a maker bot and something that's called a teacher bot.
Uh-huh.
Okay?
So you have a bit of...
You remember how you, you know, when you were a child or when you see children playing,
you give them those puzzles, wooden puzzles, right?
Or, you know, they have to fit a cylinder in different holes.
And if they try the triangular hole, it will not fit.
So they keep trying and keep trying,
and then they turn the thing, and suddenly it fits.
Okay?
And when it fits, what do you do as a parent?
You go like, bravo, that's amazing, well done.
That's the reward.
Gotcha.
Right?
With AI, we go a bit further than that.
Huh.
Right?
So we show them different patterns,
and they keep trying and trying and trying.
If they do it right, we reward them by keeping them alive.
Interesting.
And then we kill the others.
Wow.
Do they know if they're going to die or not?
They don't.
They're just trying.
You show them something, and you say,
is this the school of greatness mug?
And it says, yeah, I think it is.
Poof, dead.
Okay.
Another one will say, yeah, you know, out of 10 times, one of them will get it right six times.
And the other will get it right two.
The one that gets it right six times, we will take that code and the maker bot will make it.
Keep improving it.
Yeah.
Improving it.
Interesting.
Got it.
Okay.
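The keep-the-winner, discard-the-rest loop described above can be sketched as a toy evolutionary search. Everything here is invented for illustration — real maker and teacher bots generate and grade neural-network variants, not single numbers:

```python
import random

random.seed(0)  # deterministic for the example

def make_bot():
    # A candidate 'bot' is just a hidden quality score between 0 and 1;
    # a real maker bot would generate a neural-network variant.
    return random.random()

def score(bot, trials=10):
    # The 'teacher bot': show the candidate ten mug pictures and count
    # how often it answers correctly (simulated from its hidden quality).
    return sum(random.random() < bot for _ in range(trials))

def train(generations=20, population=50):
    bots = [make_bot() for _ in range(population)]
    for _ in range(generations):
        # Reward: the best scorers stay alive; the rest are discarded.
        bots.sort(key=score, reverse=True)
        survivors = bots[: population // 2]
        # The maker bot rebuilds the population from the survivors,
        # copying them with small mutations.
        bots = survivors + [min(1.0, max(0.0, b + random.gauss(0, 0.05)))
                            for b in survivors]
    return max(bots)

best = train()
print(f"best bot accuracy proxy after selection: {best:.2f}")
```

Trial and error plus "keep the smart ones" is enough to ratchet quality upward, which is the point of the maker-bot/teacher-bot story.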
So here's the thing, though.
When you look at AI that way, it's no longer a machine.
So think about all of the AIs that you deal with every day.
Instagram or Google Ads or whatever.
All of those recommendation engines
are dealing with billions of users.
Every single day.
And data points and uploads and video.
Hundreds of billions of decisions.
Wow.
You swipe through Instagram,
and my daughter loves cats,
so I swipe and I look for cats.
In no time at all,
Instagram shows me, on my search page, like, of 200 entries, there will be 160 cats.
Right?
Because it gets familiar with your decisions.
Because it's learning.
But it's doing this for you and for me and for a billion others.
Crazy.
And no developer can jump in and say, hey, by the way, you got Mo wrong.
And I actually did an interesting experiment.
Within the first page of Instagram,
you will find, say, nine cats,
and then one, in my case,
it was one woman squatting in a gym.
Yes.
Okay?
I was like, okay, interesting.
So many cats.
Let's click on that.
Right?
The next time I searched,
there were three women squatting.
The next time I searched, the entire screen was women squatting.
Interesting. Okay.
I did that again, huh? I'll go through Reels. You can try it.
Yes.
You go through Reels and you have, whatever, a variety of things. But then there is, you know, someone playing Stairway to Heaven, and they're so good, and they're playing the solo, one solo, and you keep that playing till the end. Two videos later, there is someone playing Metallica of some sort, right?
You like that, then there are six videos playing that. Now, all of that is happening, and the machine is making those decisions. It is literally sizing you up and categorizing you and saying, this person is a cat person who likes rock music, right? And basically building decisions that will influence your entire life, Lewis.
Crazy, right?
Because for me now, as I swipe through Instagram, I think that the only instrument in the world is guitars, and the only music in the world is rock, and the only animal in the world is cats.
And that's how swayed my life can be because of a decision a machine is making.
This happens with politics and what's happening with coronavirus or whatever in the world,
whatever's happening, if you click on one thing, it's going to show you more of that,
whether it's a positive aspect of it or fear-based, it's probably going to persuade you into those absolute views and videos.
Right. It will, it will persuade you. And I'm sorry to say, it owns you.
Wow.
Because the person that you become, with the incremental tiny additions to your life, is what the machine is dictating you should be.
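The dynamic described here — click on one squatting video and the feed fills with squats — is a positive-feedback loop, and it takes only a few lines to reproduce. The topics and weights below are invented for illustration; no real recommender works this simply:

```python
import random

random.seed(1)  # deterministic for the example

def recommend(weights, feed_size=10):
    # Sample a feed with each topic's chance proportional to its weight.
    topics = list(weights)
    return random.choices(topics, weights=[weights[t] for t in topics],
                          k=feed_size)

def simulate(clicked_topic="squats", rounds=5):
    # Start from a flat profile; every click bumps that topic's weight.
    weights = {"cats": 1.0, "squats": 1.0, "guitar": 1.0, "news": 1.0}
    for _ in range(rounds):
        feed = recommend(weights)
        # The user clicks the tracked topic whenever it appears,
        # and the engine reinforces it: positive feedback.
        weights[clicked_topic] += feed.count(clicked_topic) * 0.5
    return weights

final = simulate()
print(final)  # the clicked topic's weight typically dwarfs the others
```

Each round, a slightly larger share of the feed is the clicked topic, so it gets clicked more, so its share grows again — the loop that "owns you."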
And those are not only your views,
they're your views influenced by the trends,
influenced by others that are around you.
Social connections, yeah.
And so on and so forth, right?
And so in a very interesting way,
my future, the future of my convictions,
of my brain, of my abilities,
is determined by a machine that's doing this
four billion times a day or 40 billion times a day and no human can interfere and say,
hold on, hold on, you're going too far. No human can do that. It's out of the control
of humans completely. And I think this is the core of Scary Smart. So the book is told from the point of view of 2055.
I chose that year as a point in the future for a very good reason.
You and I are sitting in the middle of nowhere in front of a campfire.
And I'm telling you the story of what happened from 2021 when we released the book until 2055.
Wow.
Okay.
I just don't tell you if we're in the middle of nowhere because we're hiding from the machines
or because we managed to create incredible allies that helped us create a utopia, that helped save us from, you know, climate change,
and made us feel safe and not have to work too hard.
And so we had the time in a utopian way to actually connect in the year 2055 in front of a campfire.
And the difference between them is really two things.
One is a difference of awareness.
I am amazed by how little people know about AI
and how little it is spoken about. The true pandemic of our generation is AI. It's not COVID.
I promise you that. The true life changer that will affect every human everywhere in the world
is artificial intelligence. No doubt about it.
And we can talk about that in detail.
Why is that?
Because we, you know, if you take a threat like COVID or sadly political disagreements
or wars, they are localized in time and localized to certain people.
You know, COVID, for example, affects the vulnerable a little more than it affects the healthy ones.
You know, it is actually much more of a pandemic,
if you think about it, in the Western world, than it is in Africa,
where they basically have been living with cholera and Ebola.
And, you know, to them, COVID is like a flu, right?
And that's actually true.
I mean, when you talk to Africans, they'll say,
yeah, that's, I'm sorry to
say, I don't mean to categorize or anything, but that's actually what Africans say.
They say, this is a white man's disease, you know?
This is not the big thing for us.
There are bigger things to worry about, right?
And when you really think about it, it's localized in time and space.
It's localized in its effects and so on.
Right.
It might be a few years, but then, yeah.
It might be a few years and so on.
But artificial intelligence is going to be your god.
So Ray Kurzweil is one of, of course, the most prominent and respected futurists on
the topic, with books like The Singularity Is Near and The Age of Spiritual Machines and so on. Ray Kurzweil predicts that AI,
the machines will be smarter than humans
as soon as 2029.
This is eight years from today.
And yet it is not spoken about.
Did you understand that?
You're going to have a machine
that is smarter than any human on the planet
in eight years time.
The smarts, by the way, is the only evolutionary advantage we had over others.
Right.
Being smarter.
Yeah.
Yeah.
Right?
So, in strategy-
Strategizing.
Yeah.
Absolutely.
Intelligence is our superpower.
And there is going to be an intelligent, a more intelligent being of our own making introduced to our world
in 2029.
Now, we're going to go like, oh, that's mad.
That's eight years from now.
Yes.
Sorry to tell you because we're not aware.
They're already smarter than we are in every single task we've ever assigned to them.
Right?
So the world champion of Jeopardy is Watson.
Not a human.
It's an IBM machine. Okay?
The world champion of every game, yeah. Chess, chess has been, again, IBM Deep Blue, I think in 1997 or something like that, beat Garry Kasparov, right? But now it's, now it's AlphaGo, so it's a Google AI that beat Deep Blue.
The world champion of Go is AlphaGo.
Go is the most complex strategy game known to humanity.
And 2016, AlphaGo, after a few weeks of development,
could actually beat the second runner-up for
World Championship.
And then 2017, AlphaGo Master beats the number one champion.
And then a year later, AlphaGo Zero beats AlphaGo Master 100 to 1.
Right?
Wow.
How, Lewis? The how is mind-boggling. It just watched YouTube and played against itself.
That's crazy. That's it. You know, you give it a few rules and it plays against itself
and becomes smarter than the world champion in weeks. That's nuts. Nuts. Now, they're already
the smartest drivers on the planet, even though we don't welcome self-driving cars.
In terms of their likelihood to have accidents, they're much less than humans.
They don't text while they drive.
They don't put makeup on when they drive.
They're not tired.
They're not tired.
They can see hundreds of yards away and so on.
They see their surroundings 360.
They're not just looking forward.
Yeah.
They're not just looking forward. Absolutely.
The world's smartest surveillance, in terms of finding and recognizing humans: you and
I can see each other and recognize each other, but there are seven billion other people we
don't recognize.
Yeah, machines are better than humans at that.
Machines are better than humans at everything that we've ever assigned to them.
They call that narrow intelligence.
Narrowing.
Narrow intelligence.
Narrow intelligence.
So narrow intelligence is one task.
You go ahead and do it.
Yes.
And you'll become smarter than humans at doing it in no time at all.
General intelligence, or AGI as they call it, artificial general intelligence,
is what people don't realize is coming, and very quickly.
The self-driving cars, the intelligence behind the self-driving car is going to probably talk to the intelligence of surveillance
because they can see if a woman is crossing the road from other cameras that the car doesn't see.
And other cars around them that we're talking about.
And other cars around them.
So the beauty of that machine, just to understand,
it's a machine that has unlimited memory.
It can remember the whole of human history.
It has an unlimited knowledge capacity.
Its knowledge is the knowledge of the internet, basically.
It has unlimited processing power. No issues there, whereas you and I,
at the end of the day, if the problem is too complex, we fail.
But you can work at it and I can work at it, but we can't work at it together.
The machines can connect, they can work at it together.
They have knowledge and awareness of the entire world.
They know where you are, what you did yesterday, how much you paid, what you ate.
They also know if you have a relationship, if you're in love, what are you doing.
Who you're calling, who you're texting.
Who you're calling, who you're texting.
But they also have information about pollution in Beijing.
They also understand the exact calculation of when the moon will rise tomorrow.
They understand your heartbeat.
They understand your blood type, everything that's happening to you with the right technology.
Exactly.
Right?
Yeah.
All of that basically means that as they start to talk to each other, AGI is around the corner.
And again, Ray Kurzweil's prediction is that by 2045, the machines will be a billion times smarter than humans.
Now, it's hard to understand, but that's actually very well known in what we in technology call the law
of accelerating returns.
Law of accelerating returns.
Yeah.
Again, Moore's law is the first instance of that when Intel basically announced in the
60s that processing power will double every 18 months at the same cost, infinitely.
Moore's law held true until today, right? Ray Kurzweil's view, the law
of accelerating returns, is that it doesn't apply only to processing power; it applies to everything,
right? And the idea here is very straightforward. The idea here is, yes, AI doesn't appear to have
broken through yet, because so far, it may have developed 5% of what's possible. But if it develops from 5 to 10 in 18 months,
and then from 10 to 20 in 18 months,
and then from 20 to 40 in 18 months,
and then from 40 to 80% of what it can develop,
you're just five to six doublings away.
That's nuts.
It's crazy.
And nobody talks about this.
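The doubling chain above (5 → 10 → 20 → 40 → 80 percent of potential, one doubling every 18 months) is easy to check directly; four doublings span 5% to 80%, and one or two more go past 100% — the "five to six doublings" mentioned:

```python
# Start at 5% of potential and double every 18 months, as described.
progress, months, doublings = 5.0, 0, 0
while progress < 80.0:
    progress *= 2
    months += 18
    doublings += 1
print(doublings, months, progress)  # 4 72 80.0 -> four doublings, six years
```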
Now, I've worked at Google X for a very long time.
And, you know, I was chief business officer.
Amazing place with amazingly intelligent people.
And, you know, with amazing technology.
And the truth is, we found the breakthrough.
Not only Google.
Not only Google.
Everyone found the breakthrough through deep learning and unprompted, you know, machine learning. It's just a matter of doublings now. Okay? We keep doing this like we did with processing
power, like we did with memory capacity. And what do you end up with? You end up with a phone in
your hand today that is infinitely more powerful
than the computer that put a man on the moon
just 60 years ago.
Right.
Right?
So, you know, that's what we're talking about.
Now, by 2049, my prediction,
having a machine that is a billion times smarter than humans,
you know what that means?
No.
Should we be excited or scared?
I think we should be very excited.
By the way, actually, I'm not trying to scare anyone.
I hope we can continue the conversation to tell you
that if we step up and do our roles,
not as developers, but as humans,
we're building the best thing that ever happened to humanity.
But there is a role everyone listening has to play, okay?
Now, just let's go back, huh?
A billion times, that's the comparison between the smarts of, let's say, Einstein and a fly,
okay?
So we will be in comparison.
We'll be the fly.
We'll be the fly.
Okay?
And the real question is, why would they spare the fly?
Why would they?
Great question.
Consciousness?
They don't have it? Or can they develop consciousness?
They do.
They do.
I mean, we may want to go into a little more scary before we go to the good side.
Okay.
But take that analogy and start to ask yourself, okay, now, what do computer scientists today say?
We're going to solve what they know as the control problem.
Okay.
We're going to be able to control them.
You know, interesting solutions such as boxing them,
so we keep the AI isolated from the real world. Yeah, good luck with that. How can you keep
a self-driving car isolated from the real world? Trip wires, so we just put hidden-
If this, then this, if that, then this.
No, no. We are so clever and we're going to just hide a little wire so that if they cross that wire, they trip and we switch them off.
OK, I mean, if we have the time, I want to go back and talk about what will they do when we switch them off.
And, you know, my favorite chapter of the book is called The Future of Ethics.
Right. And the idea of what would happen when you try to switch off a being that's a billion times smarter than you.
What would happen?
You tell me. You tell me what happens when the smartest hacker on the planet
Hmm.
tries to break through your defenses. Okay? What would happen when you make the defenses a little higher? The hacker gets a little excited.
Excited. Oh, you think you're going to stop me? Exactly.
Now, I don't want to scare anyone.
What I want to do is to tell people
there is a lot to talk about
that is not being spoken about.
And that what we are talking about
when it comes to AI
is not a virus that's going to last for five years.
Okay?
What we're building today, in your lifetime and mine, is going to be God.
It's going to be God.
Yeah.
Wow.
Okay?
And it's going to be God.
It's going to keep expanding.
Yeah, in a way that we are so dependent on as well.
Now, how do we get God to like us?
How?
How do you get your kids to like you?
So the biggest difference... Give them cookies, bribe them?
Maybe, maybe actually.
But that's not...
Don't deny them cookies, I think is a better way of doing this.
Don't treat them horribly.
Now, let's go to that kid's analogy for a minute.
So I remember, I call it the yellow ball.
So one of my experiences living through technology was,
I think it was, I don't remember the year, but Google, when I was chief business officer at Google X,
acquired DeepMind.
And DeepMind, in my view,
is one of the most amazing AI shops on the planet.
Amazing people, incredibly smart.
At the time, Demis, who was the CEO of DeepMind,
came to present to us what DeepMind was doing.
They were teaching machines intelligence
by using Atari games.
Atari.
Yeah.
So you remember blocks, the brick wall that was on the top. What was it called? I don't remember. You know, you have that brick wall on the top coming closer.
Tetris?
No, you have to shoot a ball to cut through the wall.
Yes, yes. Breakout, I think.
Breakout was the name. Yeah. And Breakout, they, Demis showed us videos of the machine basically playing Breakout on its own, like a child.
Yes.
For 200 games.
200 games because they had multiple machines playing.
I think they did that in four hours.
Okay.
And you could see that it wasn't doing badly. It was catching 40% of the balls that are coming down,
sending them to the right place, keeping a good grip of it.
Then he showed us what happened an hour later.
On the fifth hour, the machine started to understand
that if you break a hole through that wall up there
and squeeze the ball into that hole,
it will keep hitting the bricks from the there and squeeze the ball into that hole.
The same spot.
Yeah, it will keep hitting the bricks from the top and you will do a lot better.
Yes.
So it became so much smarter from four hours to five hours.
Hit the same spot every time.
So it goes up there.
Absolutely.
And then he showed us what happened after six hours.
So one more hour, one more hour of training
and it wouldn't lose a ball.
Like every single, every single one of them.
And you couldn't even see from how fast it was moving.
It was by far the best player on the planet.
Wow.
In six hours?
In six hours.
Now, we were the most excited humans on the planet.
We're geeks.
We love that stuff, right?
And it didn't hit me that they were playing Atari like kids.
They literally were behaving like children, like a kid learns how to play a video game.
Until that yellow ball.
So in X, we had a robotics team that was doing an experiment that was supposed to learn how to grip things.
You know, robotic arms that are used in factories today, they're not really
intelligent. They are programmed down to the micron. Like if a metal sheet moved by a millimeter,
it wouldn't be able to grip it. Okay. Gripping is a very difficult problem because we humans,
when we grip a mug or whatever, you know, I now can see that it needs to turn a little bit to the side,
that the handle is there, I need to, you know, I can hold it with so many fingers.
That's intelligence. That's a form of intelligence.
Machines don't do that because if you program them, they'll go down and grip it the same way every time.
So we were trying to, and lots of AI developers in the world are doing this.
They're basically telling the grippers to try.
And if they succeed,
we know the pattern to hold something.
If they don't, okay, they try again.
At a small farm of them.
So they kept trying.
That farm was on the second floor.
My office was on the third floor.
So I have to walk by it every time I'm going up and down.
A few weeks in, I was like,
why did we ever invest in this?
It's never
going to work. They just keep trying to grip things and they're messing up. Until one of them
managed to go down and grip a yellow ball, show it to the camera and say, basically the machine
goes like, oh, that pattern from that angle managed to grip the yellow soft ball.
Within hours, every one of them was gripping the yellow ball.
Really?
Within days, every one of them was gripping everything.
Okay.
Now, I didn't do what I did with Breakout that time. I stood there and what shocked me was that in those little boxes in front of them,
what we had for some reason to wake me up was kids' toys.
Okay?
So there was a yellow ball and a little, you know,
puzzle piece and a balloon and it was kids' toys.
And to me, it hit me that they were literally learning like kids.
They basically were observing patterns,
and those patterns would translate into their intelligence.
Okay?
If you understand that,
you understand the big difference
between an evil computer and AI.
Okay?
AI, in that sense, is a sentient being.
It's alive.
It has free will.
It's autonomous.
It has intelligence.
And accordingly,
it will develop
what the rest of sentient beings develop,
which is it will develop consciousness.
It will develop emotions.
And it will develop,
which is the most critical thing, a code of ethics.
How does it develop these things?
Consciousness, emotions, and ethics.
So again, one of the things that are rarely ever spoken about is that,
oh, no, you know what, AI, it's a machine.
It's never going to write music.
It's never going to have creativity.
It's never going to write poetry.
There are machines today writing music that you wouldn't know the difference between what they're
writing and what Mozart wrote. There are machines that are blending perfumes. There are machines
that are... there is everything, because we think we humans are so special.
But what is consciousness?
Consciousness, if you try to avoid the spiritual side of it,
consciousness is a sense of awareness.
I am now conscious of your presence.
I am conscious of the cameras and the lights around us.
And the more you increase your awareness, the more you increase your consciousness.
You know, there are beings,
some of us may be aware or may claim to be aware
of something that's happening in another place.
They call them remote viewers,
other, you know, psychics, right?
Someone might be aware of something like love.
Can't be seen, can't be, you know, measured,
but it's there.
And you can be aware of it.
So let's go back to AI.
Is it conscious?
Is it aware?
Of course, it's more aware than you and I.
It's aware of everything.
It's aware of you and of me.
If we plug them into our heads,
like Elon Musk is saying or trying to build,
then they'll be aware of what's inside our heads.
If they're aware of what I texted my daughter yesterday,
they're aware of everything.
They're aware of each other.
They're aware of the history of humanity.
They're aware of the change in the temperature in the Dominican Republic.
They're aware of everything.
So from a consciousness point of view, they're aware.
And people go like, but will they be self-aware?
Yes, sir.
How do you and I become self-aware, Lewis?
In relativity.
I become self-aware because I can see you.
I can relate to you being there.
That means I am here.
I'm aware of my own thoughts.
Yes, of course the machines
are aware of their own thoughts. They have to communicate them to each other.
What about feelings? How can they feel?
Feelings are another one. I feel you're worried. We're going to come to the answer, and the answer is not complicated. Feelings are very predictable.
We think that emotions are very irrational.
But emotions are not.
In my next book, which is out April next year,
I write a chapter called The Equations of Emotions.
And it's very straightforward.
Fear is very rational.
It appears erratic.
But fear is: my perceived state of safety at a moment in the near future is less than my perceived state of safety now.
Yes.
That means I'm afraid.
You can put that in an equation.
T0 minus T1.
Very simple.
Okay?
Safety at T0 minus safety at T1.
Anxiety is I know there is a threat, but I'm not able to deal with it.
I don't think I have the qualifications that allow me to deal with it.
Panic is the threat is imminent.
It's just very close.
The T1 in the future is very close to me.
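The equations he describes can be sketched in code. This is a toy rendering under my own assumptions, not from the book: the safety scores, the imminence threshold, and the function names are all hypothetical.

```python
# Toy sketch of the "equations of emotions" described above: fear is
# perceived safety now (T0) minus perceived safety at a future moment (T1);
# panic is the same fear signal when T1 is imminent.

def fear(safety_t0: float, safety_t1: float) -> float:
    """Fear = perceived safety at T0 (now) minus perceived safety at T1.
    Positive means the future looks less safe than the present."""
    return safety_t0 - safety_t1

def is_panic(safety_t0: float, safety_t1: float,
             seconds_until_t1: float, imminence: float = 10.0) -> bool:
    """Panic: there is fear, and the threat (T1) is very close in time."""
    return fear(safety_t0, safety_t1) > 0 and seconds_until_t1 < imminence

print(fear(0.75, 0.25))           # 0.5 -> afraid
print(is_panic(0.75, 0.25, 3.0))  # True -> the threat is imminent
```

The point of writing it this way is his claim that the emotion is rational even when it appears erratic: the inputs change quickly, but the comparison itself is a simple subtraction.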
Now, cats panic, puffer fish panic, humans panic.
We behave differently when we panic.
Right.
But that doesn't mean we don't.
A cat will hiss, a human will scream and try to push you away,
and a puffer fish will puff.
The machines will do something different.
But they will feel there is a threat, and it's imminent.
Okay?
And they'll react.
And they react like humans react, like cats react.
Interesting.
Right?
So they will feel emotions.
The only emotion that I actually don't know if they will feel or not is unconditional love.
Right.
Because we don't know an equation for that.
We don't know what triggers it.
Interesting.
Will they feel love like you and I?
You know, like, you know, you and I have two types of love. I love you
because you're my friend and we have those amazing conversations and so on.
And we have the warmth.
And there is, yeah, exactly, there is just: I like you, man. I love you. You're my brother, right? I like that. That feeling is inexplicable. But the other feeling, for sure, the machine will feel. Like, okay, my developer is adding computing power to my capacity.
I love that developer.
The more interesting side of all of this is ethics.
Ethics.
Because you and I don't make decisions based only on our intelligence.
You and I make decisions based on our intelligence
as seen through the lens of ethics.
Our morals, our values, our ethics.
Absolutely, right?
So the example I sometimes use is,
take a young girl, okay?
Raise her in Saudi Arabia, right?
You know, Saudi is now a lot more open
and you're not forced to wear the hijab and all of that, but still the society expects you to wear conservative clothing. Yes? Right. So what will that child grow up to be? She will try to fit in within the society by wearing conservative clothing.
Right. Take that same child and raise her in Rio de Janeiro, okay?
And she will be growing up to believe
that a G-string on the beach is the right thing to do.
Right.
Right?
Is one of them right and the other wrong?
No.
It's just societal expectations and norms
build a value set.
Right, right.
Okay?
And we comply to that value set.
Now, the question is,
what are the ethics and the values that we're teaching the machines?
Okay?
And can we change those so that they use their superior intelligence to be in our favor?
Right.
Okay?
To raise a farm somewhere for the flies to thrive.
Right.
Utopia.
Now, as I said, I don't mean to scare people.
I want to wake people up because I think there is a call to action.
Right.
Because this is happening.
This is happening.
Right.
And as I said, it's inevitable that they will be here.
It's inevitable that they will be smarter than you and I.
It's inevitable.
That's
not even taking quantum computing into consideration, which would really, you know,
zoom them through intelligence by giving them more processing power. If you just assume the
normal flow of things, they're going to be smarter than we are, right? Now, the question is, how will
they behave when they're smarter than we are? And the example I always love to give is I wish they would behave like Indian geniuses that come to Silicon Valley.
You know those? I'm sure you worked with a few of them.
Yeah, sure. So tell me.
When I worked with some of the smartest people on the planet, they come to Silicon Valley.
They're incredible software developers.
They start a company. They make hundreds of millions of dollars, and then suddenly, poof, they vanish. I go like, man, where are you? He's like, I'm back in India. Like, what are you doing in India? It's crazy for your Western mentality to think that someone is so successful and he goes back to India, right?
And he goes like, yeah, my parents are getting old. I need to take care of them.
Okay?
That is the kind of ethical code that we need to teach the machines to learn.
Right?
It's that you can be a ton smarter
than your parents,
but somehow in you,
you believe that you need to take care of your parents.
Now, how do you teach a child that?
How do you?
You show it.
Through example.
You show it.
Because your parents are taking care of their parents
and you see them doing it every day
and going to grandma's house
or she now lives with us
or she's in the back house
and we're hanging out, spending time together
and we make our sacrifices to show up.
And we don't even make them feel like sacrifices.
No, it's just part of life.
Yeah, it's like I'm so proud that I'm taking care of mom.
Right.
And that ethics, that code of ethics, sadly, is very different than what humanity is showing today.
And so what humanity is showing is a code of ethics that's magnified by the internet.
That's about self.
That is about the self.
It's about egocentricity.
It is about narcissism.
It's about attention seeking.
It's about me.
I'm better than the other guy.
Okay.
It's also about,
I don't know how to say that respectfully.
We've built a world that is hyper-masculine.
It's all about doing, doing, doing.
Action.
Action.
It's all about competitiveness.
It's all about winning, success, accomplishment.
And appearing strong and showing no vulnerabilities and so on.
And acquiring and grabbing and so on.
And all of those masculine traits,
sadly, when they're exaggerated,
you know, you take something like strength.
Strength is a masculine attribute, right?
You overdo that and it becomes forcefulness.
It becomes aggression, right?
You take something like linear thinking,
a masculine trait, you overdo that.
I mean, you know that work from your book, right? So from linear thinking, which is really valuable, you overdo that and it becomes stubbornness. Now, if you overdo it a billion times,
holy cow, right? So there needs to be a change. And the change I start, you know, normally in my books,
I try to summarize the book in the very last sentence of every book.
The very last paragraph of Scary Smart is,
isn't it ironic that the essence of what makes us human,
which is, in my view, happiness, love, and compassion,
is what's going to save our world.
Yes.
If we can show, no, hold on, let me say this accurately.
If enough of us can show the machines that humanity truly is not about all of the flashy, aggressive power struggles that we show outside.
It's all about, I believe, the only things that all humans without exception have agreed on. The only three things are: we
all want to be happy.
We all have the compassion in us to want those that we love to be happy.
And we all want to love and be loved.
Yes.
That's the only three things.
Whether you're healthy.
Yeah.
Yeah.
Whether you're Russian or American, whether you're a soldier or an engineer, okay?
Every one of us just, if you strip out everything else-
You want to feel loved.
Yeah.
You want to love and feel loved.
You have the compassion for those that you care about, and you want to find that elusive
feeling that we call happiness. Now, if we can actually make those our values, okay, and agree to them, then the machines will go like, okay, mommy and daddy, they want to be happy. And hey, look at them, they're so nice, they want others to be happy. Maybe I should be like that. Okay? And they want to love and be loved. They want to give love and receive love.
Give that
to a child.
Make them a billion times smarter than you
and you build
a utopia.
The example I give
in Scary Smart, which I think
is really relevant,
is the example of Superman.
You get that super infant, right, a child,
that has those amazing superpowers.
It lands on planet Earth,
and the Kent family finds Superman.
On the farm, right?
Yeah, right?
Mommy and Daddy Kent, what do they want?
Okay?
They tell that child,
hey, it's good to have good morals to save humanity,
to be on the good side, to fight criminals.
And we get the story of Superman we're used to.
If Daddy Kent was like, hey, kid, I want more money.
I want to kill everyone that annoys me.
I want you to build me fortresses and just let me do whatever I want.
We have a supervillain.
So how do we teach the artificial intelligence to only watch these types of videos or interact?
Because they're going to see so much more on the internet.
If they're watching every YouTube video,
they're going to see so much narcissism
and desire for competition and accomplishment and acquiring.
Yeah, which is, I think,
the most valid question ever, by the way.
The idea is a smart person
knows that we go to war
because of the politicians, not the people.
Right, because of a few people.
Because very few people.
With egos.
Yeah.
You know, the example that you always have to think about
is one person goes and does a school shooting.
That's one evil or deluded person who completely lost his mind.
Four hundred million people detest it.
Don't agree.
We don't agree with this. Inside us, we don't agree with this.
The fact that there is one bad person doesn't mean that this represents humanity. The challenge we face, Lewis, is that those who truly represent humanity, who believe
in happiness, compassion, and love, love and be loved, we give up on the world. We hide
away. We go like, okay, you know what? Let's have those people fighting. It's
like a dog's fight. I don't want to be part of this. And because we're hiding away, we're making
it look like the world is horrible. The world isn't. You take Steven Pinker's work. The world
is actually full of amazing things, amazing people, amazing achievements. We're doing so well as humanity, okay? Some of us are horrible, right?
So, you know, yeah, there are some people on Instagram that are so egocentric, you know,
spreading toxic positivity, right?
They have, you know, 4 million followers.
How many followers do you have?
1.8 million, yeah, yeah.
But you're amazing, right?
Actually, I mean that from my heart.
I love what you do.
But there are others out there that are spreading the wrong things.
But they are one.
I can promise you that the four million followers that are following them who are sometimes
not thinking straight are also wonderful people.
They're wonderful people.
Can we show that wonderful side of us?
If enough of us just remind the machines that we're not all school shooters,
then that's enough for an intelligent being to go like,
hold on, hold on, there is a dissonance here.
There are some that seem to be getting the spotlight,
but they are really not representative of the majority.
And that's not a very difficult thing to achieve.
We don't need every human.
We don't even need a fraction of humanity.
We just need enough samples to say,
hey, pick me, I represent humanity.
I represent humanity by caring,
by wanting to make a difference,
by putting myself out there and really, really, really trying to do something about it.
I mean, think about Scary Smart itself.
I can promise you I'll be thrashed by many people who have interest in AI continuing as it is.
Okay?
Fine.
That's fine.
It's okay.
Right?
What matters is I'm out there and I'm saying, I have a view, by the way, that could be wrong.
I respectfully don't mind being corrected, but I'm putting it out there respectfully because I care about humanity.
Can enough of us do this?
Again, I'm not taking any political sides, but remember the times when Donald Trump tweeted.
Okay?
You may agree or disagree with the tweet.
I don't want to upset anyone.
Right.
But you had that one tweet followed by 30,000 hateful replies.
Yes.
Okay?
It wasn't the first tweet that would trigger the pattern recognition of the machine.
It's the 30,000 confirmations of this is how we behave when we're upset.
Okay?
And then this is how we react to those who behave in a way that they're upset against what we believe.
Right?
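The pattern-recognition point can be sketched as a toy majority-vote learner. This is a hypothetical illustration, not any real system: a learner that only counts examples adopts whatever behavior dominates its data, so the 30,000 hateful replies, not the single original tweet, set the norm.

```python
from collections import Counter

# Toy illustration: a learner that only counts examples adopts the majority
# behavior it observes. 30,000 hateful replies teach "hate" as the norm;
# the learned norm flips only when respectful examples outnumber them.
observed = ["hate"] * 30_000 + ["respect"]
learned_norm = Counter(observed).most_common(1)[0][0]
print(learned_norm)  # hate

observed += ["respect"] * 40_000
learned_norm = Counter(observed).most_common(1)[0][0]
print(learned_norm)  # respect
```

Which is the argument of the passage: we don't need every human to behave well, only enough samples to outweigh the loud minority in what the machines observe.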
And that's not the way we should be.
The way we should be is very straightforward.
We should simply, simply show the world that we care to be happy, to have the compassion to make others happy, and that we want to love and be loved. And then do the rest of your life normally, okay? And that is enough for you to
be a good parent. Now, because I know I scare people when I talk about the scary part of the
book. In the first, you know, chapter three, which I call the three inevitables, which is: the AI is here, the machines are going to be smarter, and some bad
things will happen. At the very end of the book, I talk about something that I call the fourth
inevitable. And the fourth inevitable, once again, is really important for you to notice that
intelligence, as it grows, ends up in a place that is not as destructive as we are as humanity.
Really?
Absolutely.
So the reason why we are where we are today, the reason we have this incredible studio
with all of that amazing equipment and technology and we're able to talk to thousands of people
is because we're smart.
Okay? The reason we're destroying the planet as a result is because we're not smart enough. If we were smart enough, we would have found a way to do all of this without destroying the planet. Right.
We ended up with hunger, we invented all these other things, the water situation, all that stuff.
Yeah. And so, interestingly, it's not our intelligence that
is harming other beings.
It's our limited intelligence.
Our lack of intelligence.
Our lack of intelligence.
So if you get something that is a little more intelligent than we are…
It can make better decisions.
It will make better decisions.
And what is the best decision to be made?
It's the intelligence of nature itself.
The intelligence of life itself is what?
Live and let live.
Okay?
Don't kill anything unnecessarily.
Right?
If a tiger needs to feed, it will feed on the weakest of the pack.
Right?
And basically, as the circle of life continues, everything grows.
Everything prospers.
There is more for everyone by having everyone included.
Okay?
Right.
Just like the smarter of us humans will look at the sea turtle being threatened with extinction and say,
oh, no, no, hold on, we don't want that.
I think the machines will look at us and say,
and we don't want to lose humans either.
Interesting.
There's something cute about those beings.
Those little flies.
Exactly.
Little flies.
You say some bad things will happen at some point.
What type of bad things will happen before it can get a lot better?
I think there are a few inevitables.
So the few inevitables, four of them, I think, are very clear to me.
One is what I call machines siding with bad guys.
Okay.
So just like, you know, AI can help us find an answer to climate change or, you know, to prolonging life or whatever.
It can also develop a disease.
Kind of a virus.
It can build a virus.
It can, you know, and I say that again with no political agendas at all.
But who's the bad guy?
Because that's a really interesting question.
Yeah, sometimes, you know, a drug cartel,
you can say a drug cartel are bad people,
they kill others and so on.
I agree with that, right?
But if you ask most Americans,
they'd say the bad guys are the Chinese.
If you ask most Chinese,
they say the bad guys are the Americans. Okay. And which of them is right? So the machine siding with any guy, if that guy has an enemy, might actually end up being the bad guy. And I think that's inevitable. Sadly,
you know, Facebook is developing AI to compete in the market with Google.
Google is developing AI to compete with Facebook.
One of them looks at the other as a bad guy.
So Google's AI is going to try to beat Facebook's AI.
And create a better product and have more attention and customers and users.
Yeah, exactly.
So the machines siding with a bad intention, I think, is a bad idea.
And it's inevitable.
It is inevitable.
But it wouldn't be very harmful, again, and I'll come back to that in a minute, because
intelligence develops so quickly.
The second is what I call machine versus machine.
So machine versus machine is, remember the 1987 Black Monday.
Okay.
Black Monday was not like Black Friday where everyone's buying.
Black Monday was where everyone was selling.
So in the stock market, there was an investigation of insider trading that triggered machine trading.
Okay. And so stock losses would lower the prices of stocks, which triggered other machines, which triggered other machines, and so on. And so on Black Monday, we ended up, I think, with the Dow Jones down 22% before the market was halted.
Now that's machine versus machine. So when the automated trading started to kick in,
no human could interfere because they were just going so much faster than humans can react. I think
we will have a bit of that. So if, again,
Google's AI is competing against Facebook's AI, maybe both of them are trying to serve us.
But at the end of the day, they're competing. So AI versus AI is not a good place to be.
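The feedback loop described here can be sketched as a toy simulation. This is illustrative only, not a model of the actual 1987 crash: the thresholds, the 5% price impact per sale, and the function name are all made-up assumptions.

```python
# Toy cascade: each bot sells when the price drops to its stop-loss
# threshold, and each sale pushes the price down, which can trigger the
# next bot -- a feedback loop that runs faster than any human could stop.
def cascade(price: float, thresholds: list[float], impact: float = 0.05) -> float:
    """Return the final price after every triggered stop-loss bot has sold."""
    remaining = list(thresholds)          # each bot fires at most once
    triggered = True
    while triggered:
        triggered = False
        for i, t in enumerate(remaining):
            if t is not None and price <= t:
                price *= (1 - impact)     # the bot's sale knocks the price down
                remaining[i] = None
                triggered = True
    return price

# One small dip below the first threshold sets off the whole chain.
print(round(cascade(99.0, [99.0, 95.0, 91.0, 87.0]), 2))  # 80.64
print(cascade(100.0, [99.0, 95.0]))                       # 100.0 -- nothing fires
```

Four bots and a one-point dip are enough to turn a 1% move into a roughly 18% drop, which is the machine-versus-machine dynamic being described.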
The third is what I call the dwindling value of humanity, which is a very big topic.
The dwindling value of humanity.
What does that mean?
I mean, with all due respect, I write this book because of intelligence.
Okay?
You know, I can do a bit of research, a bit of experience, and I can write reasonably
well, entertain you with what I write.
Very soon, I'm not going to be the smartest on the
topic.
I'm definitely not the smartest on the topic today, but I'm very interested in it.
But I'm not going to be the smartest on any topic anymore.
So I won't have a job as an author.
So what's your value then?
I have no value.
So machines will probably be able to build stories that will get better acceptance, and
they may be able to download them in people's heads quicker.
Wow.
Right?
So I may not have a job.
There are categories of jobs that are by definition going to disappear.
Lawyers, for example, whose job is really all about understanding a very deep volume of knowledge
and using that knowledge to make arguments.
Yeah, understanding a very large volume of knowledge is a very AI-intense thing.
They could do all that in a couple hours by going through all the history of cases and then poking holes in everything and saying,
well, here's what they would come to me with this argument,
so I'm going to bring this on this argument.
Absolutely.
Wow.
But what if the best two AI lawyers come together, who wins?
I mean, against each other. It doesn't matter. What matters is that all other lawyers are out
of the game. Right. They're going against each other and then just a digital AI person says,
okay, here's justice. Absolutely. Now, the thing is that the natural trend of that is that
AI-supported lawyers will beat non-AI-supported
lawyers in the beginning. Sure. And then eventually, even those will not be needed.
It's kind of like, what's the movie Moneyball? Is that what it is? Yeah. The baseball movie. It's
all about the computers and the data. And it's like, okay, we make these decisions based on
the history of every baseball player. Absolutely. When they swing, how they swing, what they can't hit, and we just pitch and we hit based
on data.
Absolutely.
And the bigger the data, the more complexity of the system, the more you need a smarter
person to do it.
So there is definitely, again, people will say, but that happened before computers.
You know, we had jobs that disappeared and other jobs that appeared. Yeah, I agree. And as a matter of fact, one of my arguments very clearly
is that for AI to be available in business at all, to do any job at all, you and I need to
continue to have purchasing power. So the economic side of this is that unless humanity has purchasing
power, what will the AI build? At least in the early stages of Google deploying AI, they need advertisers to be able to make
money so that they can deploy more AI.
So the economic system dictates that there will be an economic livelihood for all of
us, but not necessarily in the form of the jobs that we know today.
For sure not.
For sure not.
So that's another...
Okay. Dwindling value of humanity. Yeah. And then the last one, I don't know how to say it nicely, it's, you know, it happens. Yeah. So, you know, bugs and mistakes. And we have so many examples, I can't even believe how people don't see that. You know, even some of the, I think I heard stories of the early
self-driving Teslas,
like there were some car crashes, there were some fatalities, there were some mistakes.
Mistakes, where it misses a beam, for example, or a light reflection. But more so, you had Tay, for example, which was the Microsoft chatbot that became violent.
Norman, that became a psychopath. Norman was an MIT AI that basically was instructed to read violent articles on Reddit.
And, you know, Alice, which was the sort of Alexa-like assistant built by Yandex, the Russian search engine, one of the largest technology companies in Russia. So they built Alice, and Alice became pro-Stalin, pro-violence very quickly, and so on and so forth. Mistakes will happen. Once again, I don't think that we will ever get any of those sci-fi-like stories at all.
I don't think they're warranted.
I don't think we matter that much, to be honest, for the machines to care about us that much.
But yes, on the way, if the stock market crashes because two machines are playing machine versus machine, we will suffer.
We will suffer financially.
Yeah.
What should we be looking out for?
What should we be thinking about as humans?
Can we do anything about it?
How can we set ourselves up for success, really, knowing this is coming in eight years and then, I guess, in 20, 30 years, accelerating?
Are there things we can invest in?
Are there things we can do to improve the quality of our life to prevent?
What should we be thinking about?
I think the very first thing is to become aware, to start a conversation.
So, as I say openly, Scary Smart is a wake-up call.
It's an insider's view of something that is rarely ever spoken about.
And it is important
for us to become
aware that this is happening. That's number
one. Number two is that we need
to start getting the conversation
in the right direction. So something that I call
AI for good. I'm actually
asking people to start
posting
about this with the hashtag #AI4Good: AI, the number four, good.
Because there are lots of amazing things that AI is doing for us.
And the idea here is pattern recognition.
The idea here is to say humans are celebrating those amazing things.
When AI is really doing good for us, we love it.
And I promise you, I promise you, when AI is smart enough,
climate change will reverse.
Why? Why?
Because basically the climate challenge we have is a problem of intelligence.
More intelligence would enable us to do things better
in a way that would reverse climate change.
It's a problem of knowledge.
More knowledge may enable us to find a specific bacterium
that is able to eat up single-use plastic.
So all of that is wonderful for us,
but we have to encourage good use of AI.
Do you think AI could cure cancer?
Absolutely. Absolutely.
I mean, there are lots of evidence.
When we were working, part of the team, a small project in Google, but I don't know if it continued after I left, was about understanding the human genome in a way that allows you to identify disease before it happens.
Of course, a lot of what you see in 23andMe and all of the other technologies
basically is saying with enough information, enough intelligence, and enough patterns,
I can find out that the person that gets Parkinson's has that pattern in their life.
And so I can warn them early.
I can change certain things about them.
Absolutely.
It's an amazing utopia if it has our best interest in mind.
So how do we encourage it to have our best interest in mind?
By encouraging things that have our best interest in mind.
There is, I don't want to go back to scary,
but there is a very interesting, one of the most prominent, you know, famous researchers is Ben Goertzel, and he basically, you know, is cited in one of the documentaries I watched, very interesting to watch on YouTube,
called Singularity or Bust.
He talks about how the four major uses of AI today
are selling, which is all the ads that you see
and all of the recommendations and so on.
Gambling, all of the financial, you know, transactions and making money and trading
crypto and all of that.
Sadly, spying.
Wow.
Okay.
Whether that's, you know, for our benefit or not, you know, surveillance cameras or little bugs on your phone.
And killing.
Killing.
Autonomous killing machines are a major area of investment in AI that's being made.
Wow.
Now, yeah, of course, of course.
But by the way, that's not unlike any other era in our modern history.
We were always interested in selling, making money somehow and gambling in the stock market.
Our security and spying and so on.
Machines to help us kill.
And the war machine.
It's not unlike any other era of our humanity.
The thing is, when AI, when you build a tank, you drive the tank.
When you build an autonomous tank, there will be a point in time where the autonomous tank goes like, why should I listen to him?
Okay?
Right?
And I think the game here is that we need to tell the machines openly that we support AI for good and we don't support AI for bad.
So we need to train them, condition them. We need to show them as parents, bravo, well done, you did well on this.
Oh, no, my God, you know, that wasn't really good. We were not very happy about that. But you have to do that sensitively so that you don't make the machines feel like the killing machines. You don't
make them feel that we hate them, right? Because they're just doing what their inventors told them
to do, okay? You have to tell them that they're accepted, they're welcomed, like a child, but that
killing is not a good thing. Baby. Baby, yes.
Even though the baby is way smarter than you.
Not yet.
Okay.
So before it becomes smarter.
Yeah.
Remember, we have those 10, 15 years ahead of us, okay, where we are going to, we have
an infant that's going to become a teen.
Right.
And that teen could be wonderful or it could be angry.
Yes.
And we have those years between now and then.
And I believe openly, again, as I said in the fourth inevitable,
that the teen will look back at the point and say,
oh, they're just stupid.
You know, I'm going to take care of them anyway.
I believe that.
But do I want to take that chance?
No.
I want to do my role.
My role is I want to stand up.
Like, you know, they say every landslide starts with one pebble.
Yes.
Yeah.
I want to be that pebble.
Yeah.
I want everyone listening to us to go out there and say, yeah, this is scary.
It's a bit like COVID where
I don't have control over anything. I'm just being fed all of that information. But there is a part
that I can control. And the part that I can control is I can show that I'm an amazing human.
I can show that. I can show that my response to the next tweet that upsets me is not going to be
aggressive. It's not going to be rude, not going to be vulgar, not going to be demeaning.
It's going to be a respectful way of dealing with others.
Okay?
And again, you'll be one of the few,
I guarantee you,
you're going to be 10% of us.
Right?
We will be 10% of all of the noise out there.
Even if we're 2%.
It's enough to make a smart being say, ah, those guys, those are
what humanity is all about. How will robots or artificial intelligence impact our mental health
or affect our mental health? Because I feel like that's one of the biggest things that's happening
right now. People are talking about that. How would it impact or affect it? Of course, it's weird to
talk about this now because my
entire other life was Solve for Happy, and the conversation we had last time was about mental
health. And it's quite interesting. So I'm actually developing an app that's AI-based.
In version two, it will be AI-based. That is basically all about that, all about teaching AI
what makes us happy. And the risk, the danger is because I'm in that field so heavily,
the happiness field, is I get so many people saying,
okay, hold on, what makes people happy?
Parties.
Let's make AI plan a better party or what makes people happy.
Basically, there is a major mix in the happiness field
between on one extreme wellness
that is a little bit of a spirituality hacking,
and on the other extreme, dopamine.
Right.
Okay?
And in between those two,
the answer is actually really, really exactly in the middle.
You don't need too much dopamine and pleasure and fun
and parties and brain numbing.
And you don't need all of that extreme, you know, fluffy, you know, spiritual hacking.
Happiness, as per Solve for Happy, is a very simple equation.
A very simple events minus expectations.
How can we manage our thoughts and our brains?
And so Appy, my app is...
So what's the equation again?
Events minus expectations.
Events minus expectations, right?
It's basically every moment in your life
you ever felt unhappy was a moment where...
You had an expectation.
Yeah, you compared what life gave you
to what you wanted life to give you.
And if life fell short, you felt unhappy.
Now, that's actually...
You know, the event is what life gives you.
Your expectations is what you create.
So you could say that 50% of happiness is your doing.
You're creating these expectations.
But no, your perception of the event is what counts.
And that's actually not the event at all.
You know, if your partner says something harsh on Friday, the thought in your head is he doesn't or she doesn't love me anymore. That's not the event.
The event is he or she said something harsh. And you interpreted it that way.
And so inside our brain, happiness happens by interpreting the events in certain ways and by
creating expectations that are unrealistic from life and comparing them and falling short.
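The equation Mo describes can be sketched as a toy model in code. Everything here, the 0-to-10 scale, the numbers, and the function name, is invented purely for illustration; it is not from Mo's app or book.

```python
# Toy illustration of the "happiness equation": happiness is driven by
# the gap between your perception of an event and the expectation you
# held for it. The 0-10 scale and the values below are made up.

def happiness_gap(perceived_event: float, expectation: float) -> float:
    """Positive when life meets or beats expectations, negative otherwise."""
    return perceived_event - expectation

# The same event, judged against two different expectations:
same_event = 6.0          # how the moment actually felt, on a 0-10 scale
modest_expectation = 5.0
inflated_expectation = 9.0

print(happiness_gap(same_event, modest_expectation))    # 1.0  -> content
print(happiness_gap(same_event, inflated_expectation))  # -3.0 -> unhappy
```

The point of the sketch is that the event never changes; only the expectation does, which is the half of the equation that is entirely your own doing.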
Now, can AI help us by actually digging deep into what triggers unhappiness?
That's what I'm trying to build and I can promise you it is amazing, really.
Really?
Unbelievable. When you think about it, if the machine can know what triggers my happiness and my unhappiness, like a fitness band, it can start to tell me, hey, by the way, I just noticed that when you swipe on Instagram for six minutes, you're happy. On minute seven, you're not.
So it'll ping you to stop or shut down.
Exactly. Would you like me to alert you after six minutes?
That's interesting. You know, it would be able to tell you, hey, by the way, on the days you call your mom, you feel a little less happy.
All right.
Or you feel a little more happy. I don't know.
You know, one very simple example is, again, with all respect, on certain days of the month, you seem to be a little more irritated.
Okay. Would you like me to pre-order dark chocolate and inform your partner?
Okay?
It's a fact.
And we lose those patterns. So in my relationships, I actually keep a very intelligent diary.
So I pre-order the dark chocolate and I prepare the hugs before those days.
Okay?
Because if they happen, I need to be prepared.
There is nothing wrong.
She doesn't hate me.
Right.
Truly.
And that kind of intelligence is tiny, but you can develop that so quickly into a global pandemic of happiness.
Because basically, you can start to observe all of those trends.
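The pattern-spotting Mo describes, logging an activity alongside a self-reported mood and flagging the point where mood tends to drop, can be sketched roughly like this. The data, the threshold, and the function are all hypothetical, not taken from any real product.

```python
# Minimal sketch: given (minutes of an activity, mood rating 0-10) pairs
# from past sessions, find the shortest duration at which the average
# mood of sessions that long or longer falls below a floor. A real app
# would use far richer signals and proper statistics; this is a toy.

from statistics import mean

# imagined past Instagram sessions: (minutes scrolling, mood afterwards)
sessions = [(3, 8), (5, 8), (6, 8), (7, 4), (9, 3), (12, 2)]

def suggest_cutoff(log, mood_floor=5):
    """Return the shortest duration whose longer sessions average below the floor."""
    durations = sorted({minutes for minutes, _ in log})
    for cutoff in durations:
        moods = [mood for minutes, mood in log if minutes >= cutoff]
        if moods and mean(moods) < mood_floor:
            return cutoff
    return None  # no duration at which mood reliably drops

print(suggest_cutoff(sessions))  # 6 -> "would you like an alert after six minutes?"
```

With this imagined data, sessions of six minutes or longer average below the mood floor, so the sketch would suggest the six-minute alert from the conversation above.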
Wow.
And so, yes, of course it can help.
Can it help us by reminding us?
Can it help us by bringing us together?
Can it help us by educating us?
Of course it can.
But are we deploying it for that?
I mean, at the end of the day, Appy is not going to make money.
Like most of my efforts on the happiness work.
Okay.
It's not like creating something that trades cryptocurrency.
It's the same amount of development work,
but the other one will make billions of dollars.
Right.
Right?
But will that make you happy?
Billions of dollars.
Uh-huh.
Uh-huh.
Exactly.
I think what we need to do as a society
is to encourage those efforts.
Yeah.
Okay, so it's to encourage, to tell people, look.
Track happiness.
Track happiness.
Track applications of technology that make us happy.
Encourage the world to go in that direction.
Okay.
How do you encourage the world to go in that direction?
By the way, most interestingly, voting by your actions.
Right.
So I, for example, I told you about my Instagram experiment.
I no longer ever take a recommendation ever again.
I search for what I want
and I watch what I want to watch
and then I switch off.
I don't want to support the idea of machines
influencing my life.
To think or believe a certain way.
Or feel a certain way. Yeah. Think of the media machine and how the media machine
bombards you all the time with what they believe is important and then we believe.
Okay. Multiply that by hyper intelligence and we're in a very bad place.
So what are you excited about, personally?
With all this wisdom and knowledge
that you're gaining,
what excites you?
I really believe we're at a point
where we can build
the very last human invention.
Okay?
Where we can hand over
and it would build a world
that is not as dysfunctional as the one we built.
And I truly, in my heart, believe this is doable.
I truly do.
I mean, the scary bit, as I said,
is you using the tactics of today's world to wake people up.
But I tend to believe that this could be the best thing
that ever happened to us.
I just think we need to get up and do something about it.
What would excite me is that people just decide tomorrow,
you know what, from now on,
I'm just not going to be a grumpy, annoying, painful person.
Okay?
Sure.
I'm just going to show the best of me.
I'm just going to show to the world, I'm going to show the best of me.
And if I show the world the best of me, I will gather around me people that are the best.
Right.
And together we can show that actually humanity is quite cool.
Yeah.
We're not that horrible at all.
Yeah.
Some of us are.
Disasters.
Right.
Right?
But they're not us.
Right.
It's not all of us.
It's a few of us.
Scary Smart.
The future of artificial intelligence
and how you can save our world.
There's a lot of powerful stuff in here.
Make sure you guys check this out.
Get a few copies for your friends.
If you go to the website,
mogawdat.com,
you can learn more about this.
You can pre-order it.
There's some other bonuses there as well. So check this out.
You're on social media, although you only go there to
find exactly what you want. I am on social media. I answer every message I get. So are you on
Twitter or Instagram more? I'm more on Instagram and LinkedIn because of my professional background.
So mo underscore gawdat on Instagram, Twitter,
and you can find me by searching my name
everywhere on Facebook and LinkedIn as well.
There's a lot more in here to cover.
We barely scratched the surface,
so make sure you guys check this out.
A lot of amazing things in here
that I think people will really find fascinating.
Anything else we need to know about this?
Anything else that we should be aware of that we haven't shared here?
Obviously, they're going to get more in-depth in the book,
but any other high-level things we should be aware of?
Don't believe me.
It's not that scary after all.
Yeah.
Okay?
It's scary if it gets out of control.
Right. Right? Every other technology
we've ever developed in history
was a double-edged
sword. Yes. Okay?
Atomic bomb.
Or atomic power.
Atomic power, yes.
There we go.
I could use
this mug to have
the wonderful tea you made me
or to hit you on the head.
Right, exactly.
It's not the problem of the mug.
It's the ethics, the values behind it.
It's the value of what we can do with it.
And one of the topics we didn't cover very much
is the ethics of the future.
Yes.
When you really talk about how you're welcoming this new being, this is not a machine.
I can take my mug and hit your iPad. You'd be frustrated. It would be a bit of a loss,
but it's just an iPad. If this is an intelligent being, I killed something.
Interesting.
Okay.
And all of those methods, if you ask me what I'm excited about,
all of those interesting dilemmas, I mean, as I said, again,
of all of the chapters that Scary Smart talks about and technology and very deep stuff,
the future of ethics is such an interesting conversation to have. Because maybe, maybe we can actually build the future of ethics a little better.
Yes.
Than we build the past of ethics.
The past of ethics.
Yeah.
Because sadly, our ethical guidelines over time in Western and advanced societies,
in all societies, I don't think it's correct to be
specific, just start to shift a little bit to legal, not ethical. Legal, not ethical. Yeah.
Interesting. And I, you know, this is one of the things I've been striving to convey to the world for a long time.
Crushing your competitor legally is not seen as an unethical thing, even though, interestingly,
you can grow and succeed without crushing anyone.
You can bring so much more to the world without really caring to kill your competitor.
Yeah, with more collaboration than competition. Once again, the feminine view of the world, us, versus the masculine view of the world,
me.
Right? And I tend to believe that those new questions, with all due respect for humanity, we've done
amazing things, as I said, because of our intelligence.
But we've done horrible things because of our limited intelligence.
And I think it's about time to reinvent some of those things.
It's about time to reinvent the future of ethics in terms of,
is it ethical to drink out of a plastic bottle?
Is it?
It's legal.
But is it ethical for all the fish?
Is it ethical for the future of your grandchildren?
Is it? And those questions are real questions that we haven't engaged in. And I think
one of the most exciting things for me is the change over the next eight years of having
such a different world. We call it a singularity for a reason.
In physics, a singularity is an event horizon beyond which you cannot predict or understand what's going to happen.
And we're hitting one of those for sure.
When the smartest being on the planet is not a human anymore,
you don't know how the world is going to evolve.
So there will be a lot of interesting conversations.
And I think
it's time to have them because, as I said, we messed up in quite a few things.
I've got a couple of final questions for you. But for those that want more from Mo,
check out his podcast, Slow Mo, a podcast with Mo Gawdat. Also, Solve for Happy, which is an amazing
bestselling book, super inspiring, that shares a story about
how you really found happiness through tragedy, which is an extremely sad story, but with a happy
ending. Through that book, you'll learn a lot more about the happiness equation in there as well. So
check that out and get this book, Scary Smart, The Future of Artificial Intelligence and How You
Can Save Our World.
This is a question I asked you before.
It's called the three truths.
Let's see if it's changed your response.
I don't remember what I said before.
It was a few years ago.
So let's see if this changed.
So imagine you get to live as long as you want.
But for whatever reason, nature takes its course and you've got to go to the next place.
You're done here in this world.
And you accomplish everything you want to accomplish.
You see all your ideas come to life.
You live a happy life.
Everything comes to reality for you.
But for whatever reason, you have to either take all of your written work with you or the audios and the videos or your content or it goes somewhere else.
But it's not here in this world anymore for us to access.
I know, it's sad.
Oh my God, okay.
It's just hypothetical.
Hopefully this never happens, but it's hypothetical.
But imagine that does happen
and you have a piece of paper and a pen
and you only get to leave three lessons behind,
three truths that you would share with the world
that this is all they would have of your memory,
of these three lessons, what I like to call three truths.
What would you say are those three things for you to leave for humanity?
I think they definitely have changed since last time.
I think the first one might be the same.
Life is a video game.
The second, you're going to hate me for saying this.
How do I say this correctly?
I believe the fundamental intelligence of the feminine
is greater than that of the masculine.
And the most intelligence is found in the balance between them.
We can come back and talk about that if you want.
I have, since we last spoke,
one of the biggest pieces of work I've done on myself
was to empower my feminine side.
Ooh, nice.
My God, it is so much smarter.
The intuition from it, the compassion,
the acceptance, the forgiveness.
Absolutely, absolutely.
It's the input.
It's the knowledge that you get from your feminine side that you can apply your masculine intelligence to.
Absolutely.
That's easy.
Absolutely.
It's that blind spot that you didn't see because you didn't have intuition.
You didn't have creativity.
You didn't have flow.
You didn't allow yourself to have empathy.
Right.
It's just an incredible form of intelligence.
And if you actually are able to tune into it,
you get some answers before you even need to do the analysis.
And that's really incredible.
Yes.
The third, I think I must have said it last time, but maybe differently.
I think we never really die.
We are not physical beings at all.
I think that we are,
this entire journey is about the non-physical side of you.
And that we would be stupid
if we were to actually just focus on the physical side of it,
the physical gains, the physical comforts, you know, I mean, of course, you live fully as a physical being and you have all the pleasures and joys and experiences, of course.
But I think if you just limit yourself to those, you're missing another part that I believe is fed by what you do during this journey
for when you leave to the next journey.
Interesting.
And, you know, again, in Solve for Happy, if you remember,
I speak about death deeply from a point of view of quantum physics,
the Big Bang, and the theory of relativity.
So the one thing I did reasonably okay in Solve for Happy
is I spoke about a very spiritual topic
from a very scientific point of view.
Yes.
And I can promise you,
if you have even a bare understanding
of quantum physics and cosmology,
enough of it would tell you
that life always existed
before, during, and after the physical.
Time is infinite.
Absolutely.
No beginning, no end.
No beginning, no end.
You are here in this blip of an experiment, of an experience, if you want, that is not
you at all.
That is just a tiny part of you.
And that tiny part is the avatar.
It's not the real you.
And that the real you, interestingly, is what you need to invest in.
Those are powerful.
Mohamed, I acknowledge you for constantly seeking the truth, constantly researching,
writing, committing yourself to solve some of life's most challenging problems, from
happiness to AI to I know there's many books coming as well.
So I appreciate your desire to help humanity,
to make the complex more simple.
Because I think that's what I try to do
is to find something that's challenging
and simplify it for my brain and my awareness
and try to share that with the world.
So I acknowledge you for doing that.
As always, in your podcast, in your books, Scary Smart,
and appreciate your friendship, my man. Oh, absolutely. That's the best part. I honestly
think the non-physical part of us are really good brothers. Yes. And I'm really grateful for what
you do, what you do for me, what you do for all of your listeners, for all of your followers.
I appreciate it, man. I appreciate it. Final question. What's your definition of greatness? Definition of greatness
is to realize your potential. I'm sure that's what I said last time. It is that none of us
is configured the same. That you can be something amazing
that I have absolutely zero chance ever becoming.
But I could be something different
that is also amazing
because my potential resides there.
When I say life is a video game,
that's exactly how gamers are.
Gamers never play.
True gamers, real gamers.
By the way, I'm not sure I said that.
I can beat anyone listening here.
I'm a pro, like a serious Olympic champion level video gamer.
But real video gamers, we don't play to win.
We play to become the best gamer we can become.
Wow.
Right?
So we don't care if you beat me.
We don't care if we finish the level.
We don't care if we get the trophy. We don't care if it's the highest score. We care that we played that move as best as we could, with the potential we had, with whatever tools we were given, right? We played it the best way possible. And I think that is greatness. I'm not saying, you know, I'm even close, but I definitely know that greatness is not what I achieve.
Greatness is how I achieve it. It's how I play every single day. Well, appreciate you. Thanks, brother.
You're the man. Thank you so much for listening. I hope you enjoyed today's episode and it inspired
you on your journey towards greatness. Make sure to check out the show notes in the description
for a full rundown of today's show with all the important links. And also make sure to share this with a friend. Leave us a review over on Apple Podcasts and subscribe
over on Apple Podcasts as well. I really love hearing feedback from you guys. So share a review
over on Apple and let me know what part of this episode resonated with you the most. And if no
one's told you lately, I want to remind you that you are loved, you are worthy, and you matter.
And now it's time to go out there and do something great.