Young and Profiting with Hala Taha - Mo Gawdat: The Terrifying Rise of AI and What Humans Must Do to Thrive | Artificial Intelligence | YAPLive
Episode Date: November 17, 2025. Now on Spotify Video!
While working at Google X, Mo Gawdat witnessed artificial intelligence advancing faster than anyone expected and slipping beyond human control. Machines began learning on their own, crossing critical boundaries, and spreading across the open internet without ethical safeguards or regulation. This realization turned him into a leading advocate for responsible AI development. In this episode, Mo reveals how AI is reshaping our world, the urgent risks it presents, and how we can guide it toward a future that benefits humanity.
In this episode, Hala and Mo will discuss: (00:00) Introduction (01:30) Mo's Journey in Tech and Google X (07:56) His Awakening to AI's Power (12:13) Is Artificial Intelligence Truly Artificial? (19:04) How AI Already Controls Your Reality (25:36) The Self-Learning Power of Artificial Intelligence (33:48) AI's Three Unbreakable Boundaries (40:34) Why Humanity Can't Stop AI Development (47:49) AI Risks and the Future of Work (57:03) Emotional Intelligence in the AI Era (1:05:49) Thriving Ethically in the Age of AI in Action
Mo Gawdat is a renowned AI expert, author, and former Chief Business Officer at Google X. He has over 30 years of experience in technology and entrepreneurship and helped launch more than 100 Google businesses across emerging markets. Mo now hosts the top-rated podcast Slo Mo and advocates for the safe and ethical development of technology. His book, Scary Smart, explores how humanity can wisely guide the rise of artificial intelligence.
Sponsored By: Indeed - Get a $75 sponsored job credit to boost your job's visibility at Indeed.com/PROFITING Shopify - Start your $1/month trial at Shopify.com/profiting. Quo - Get 20% off your first 6 months at Quo.com/PROFITING Revolve - Head to REVOLVE.com/PROFITING and take 15% off your first order with code PROFITING Merit Beauty - Go to meritbeauty.com to get your free signature makeup bag with your first order. DeleteMe - Remove your personal data online. Get 20% off DeleteMe consumer plans at joindeleteme.com/profiting Spectrum Business - Visit Spectrum.com/FreeForLife to learn how you can get Business Internet Free Forever. Airbnb - Find yourself a cohost at airbnb.com/host
Resources Mentioned: Mo's LinkedIn: linkedin.com/in/mogawdat Mo's Instagram: instagram.com/mo_gawdat Mo's Website: mogawdat.com Mo's Book, Scary Smart: bit.ly/-ScarySmart Mo's Podcast, Slo Mo: bit.ly/SloMo-apple
Active Deals - youngandprofiting.com/deals
Key YAP Links: Reviews - ratethispodcast.com/yap YouTube - youtube.com/c/YoungandProfiting Newsletter - youngandprofiting.co/newsletter LinkedIn - linkedin.com/in/htaha/ Instagram - instagram.com/yapwithhala/ Social + Podcast Services: yapmedia.com Transcripts - youngandprofiting.com/episodes-new
Entrepreneurship, Entrepreneurship Podcast, Business, Business Podcast, Self Improvement, Self-Improvement, Personal Development, Starting a Business, Strategy, Investing, Sales, Selling, Psychology, Productivity, Entrepreneurs, AI, Artificial Intelligence, Technology, Marketing, Negotiation, Money, Finance, Side Hustle, Startup, Mental Health, Career, Leadership, Mindset, Health, Growth Mindset, ChatGPT, AI Marketing, Prompt, AI in Business, Generative AI, AI for Entrepreneurs, AI Podcast
Transcript
Today's episode of Yap is sponsored in part by Indeed, Shopify, Quo, Revolve, Merit Beauty, DeleteMe,
Northwest, and Spectrum Business. Attract, interview, and hire all in one place with Indeed.
Get a $75 sponsored job credit to boost your job's visibility at Indeed.com slash profiting.
Shopify is the global commerce platform that helps you grow your business.
Start your $1 per month trial period at Shopify.com slash profiting.
Quo, formerly OpenPhone, is the number one business phone system.
Get 20% off your first six months at quo.com slash profiting.
Shop the latest trends from today's top brands with Revolve.
Head to Revolve.com slash profiting and take 15% off your first order with code profiting.
Merit Beauty is a minimalist beauty brand that makes elevated makeup and skin care.
Go to meritbeauty.com to get your free signature makeup bag with your first order.
DeleteMe makes it quick, easy, and safe to remove your personal data online.
Get 20% off DeleteMe consumer plans at
joindeleteme.com slash profiting. Don't wait. Protect your privacy, build your brand, and get your
complete business identity in just 10 clicks and 10 minutes. Visit Northwest Registered Agent at
northwestregisteredagent.com slash yapfree. Spectrum Business doesn't just give you internet.
They set you up with everything that your business could need. Visit spectrum.com slash free for
life to learn how you can get business internet free forever. As always, you can find all of our
incredible deals in the show notes or at young and profiting.com slash deals.
Hello, my young and profiteers. I know most of us, if not all, have been in a situation
where you open up your closet and you suddenly feel like you've got nothing to wear. That stress
is real, especially if I've got a big speaking engagement or a major event and I need an outfit
that makes me feel confident and great about myself. That's why I love Revolve. It's my go-to
for every occasion. From weddings to work events to going out at night, I always wear Revolve.
With over 1,200 brands and 100,000 styles, they've got everything from elevated basics to
statement pieces. Plus, they drop new arrivals daily, and the curated edits make finding
outfits easy and fun. Whether it's a weekend away, a big night out, or just a little
style refresh, your dream wardrobe is just one click away. Head to revolve.com slash profiting,
shop my edit, and take 15% off your first order with code profiting. Fast two-day shipping,
easy returns. Sometimes I do overnight delivery when I need
an outfit in a pinch. It's literally the only place you need to shop from. That's revolve.com
slash profiting to get my favorites and get 15% off your first order with code profiting.
Offer ends November 9th. So happy shopping. There is nothing that entered your head today
that was not dictated to you by a machine. We ignore that fact when we swipe on Instagram or on
TikTok or when we're looking at the news media. But every single one of those is a machine that is
telling you what it is that you should know.
My guest today is Mo Gawdat, former chief business officer at Google X and best-selling author
of Scary Smart.
Mo has been inside the labs where AI first came to life, and he's here to both unpack the
promise and the peril.
If something goes wrong today with the artificial intelligence that's out on the open
internet, who's responsible for that?
There are very, very significant threats, things like concentration of power, the end of truth,
things like the jobs and the redesign of the fabric of society.
If the most powerful people in the world
who are actually the most knowledgeable about AI
are warning about this, why wasn't anything done?
I actually believe that...
Yap Gang, we all know that AI is evolving faster than we've ever imagined,
learning on its own, making decisions we don't quite fully understand,
and racing toward a future where it could be smarter than us.
Now, the real question isn't just how do we use AI.
It's what happens if we can't control it.
That's the wake-up call we're tackling today on the AI Vault series.
My guest today is Mo Gawdat, former chief business officer at Google X and best-selling author of Scary Smart.
Mo has been inside the labs where AI first came to life and he's here to both unpack the promise and the peril.
How AI is evolving beyond our control, the immediate risks we're facing in jobs, truth and power,
and what skills humans will need to stay relevant in the coming years.
But before we jump in, if this is your first time tuning into the podcast,
don't forget to hit that subscribe or follow button wherever you're tuning in.
All right, Yap fam, another one from the AI Vault series with Mo Gawdat,
and let's get right into it.
Mo, welcome to Young and Profiting Podcast.
Thank you. Thanks for having me.
It's been a while in the making, but absolutely worth the wait, I hope.
Can you talk to us about your journey at a very high level, the highlights that got you
in the C-suite at Google X eventually?
At the height of my professional career, if you want, my corporate career, I was the chief business
officer of Google X. And of course, I worked my butt off to get there, but there was an element
of luck in the process. I met the exact right people at the exact right time. It was one of those
events where the Google X team was presenting some of their confidential stuff. And I showed up
and, at the time, I was vice president of emerging markets for Google. I had started half
of Google's businesses globally, in more than 103 languages, if I remember correctly. And so I was
quite well-known in the company, if you want. I had a reasonable impact that I have to say,
I'm very grateful that life gave me the opportunity to provide. And then with Google X, I basically,
at the time, Google still had the idea of the 20% time. So I liked their projects, and I said,
I'm going to give you my 20%.
And they said, but we haven't asked for it.
And I said, yeah, that's not your choice.
And I showed up, basically.
The first day I showed up, I bump into Sergey, our co-founder,
and I worked closely with Sergey for many years.
And he says, like, what are you doing here?
And I was like, I'm very excited about your work.
And ended up, he said, oh, no, don't leave, basically stay.
And I was chief business officer for five years
where I think Google X is misunderstood because we never really launched a product under X,
if you want, so self-driving cars is under Waymo, you know, Google Brain is integrated into Google
and so on. But most of the very spooky innovation, if you want, the very, very out-there
innovation, including all of robotics and a big chunk of AI, was at X, and it was a big part
of what I did.
And so diving right into AI, you were actually part of the labs that initially created AI.
So can you talk to us about the story of the yellow ball and how that really changed your perspective about AI?
AI has been around a lot longer than people think.
When we started self-driving cars back in 2008, that was basically with a belief that cars can develop intelligence that is as
intelligent as a driver and accordingly be able to drive a car.
And since then, I mean, by 2008, I think in my personal memories, I think 2008 was really
the year when we knew that we cracked the code.
It was, you know, early 2009, Google published a paper that's known as the cat paper.
That white paper basically described how we asked, you know, an artificially intelligent
machine to look at YouTube videos without
prompting it for what to look for.
And then it eventually came back and said,
I found something and we said, show us.
And it turns out that it found a cat, not just one cat,
but really what cat-ness is all about, you know,
that very entitled, cuddly, you know,
furry character.
Basically, it could find every cat on YouTube.
And that was really the very first glimpse
between that and the work that DeepMind
was doing on playing Atari games, where machines started to show real intelligence.
We then started to integrate that in a lot of things, you know, self-driving cars is probably
the most publicly known example, but one of the projects that we worked on was, which is not
the only, you know, Google X was not the only one working on it, but we wanted to teach grippers,
you know, robotic arms, basically. We wanted to teach them how to pick objects that they're not
programmed to pick.
And it's a very, very sophisticated task because, you know, we do it so easily as humans.
But if you remember, well, you don't remember, but your parents will remember when you were
a child and before you learned how to grip, you kept going on trial and error.
You would try to grip something and then it falls and then you try again and so on.
And basically we said maybe we can teach the machines the same way.
We built a farm of those grippers, put boxes of items in front of them, you know,
a funny programmer basically chose children's toys, and you could see them try to pick those items
and basically fail over and over. It's a very sophisticated mathematical problem. And so they would
fail, they would show the arm to the camera, and the camera would know that this algorithm, this
pathway, uh, you know, didn't register, didn't pick the item. Until, I think it was several weeks in,
and, you know, it was a significant investment because robotic arms were not cheap at the time,
you know, I passed by that farm very, very frequently on my way to my desk.
And on a Friday evening, finally, one of those arms, you know, I can see it goes down,
picks one item, which was a yellow softball, again, mathematically very complex to grip.
And it shows it to the camera.
And so jokingly, I pass by the team that's running this experiment, and I say,
okay, well done, all of those millions of dollars for one yellow ball, okay?
And they smiled and then, you know, sort of nodded their heads.
And on Monday morning, as I went to work, every arm was picking the yellow ball.
You know, a couple of weeks later, every arm was picking everything.
And I think that's something that most people don't recognize about AI is that the speed,
once you found the very first pattern, the speed at which AI started
to develop is just mind-blowing.
Also, I think most people don't realize
that they learn exactly like my children
learned to grip.
That's the whole idea.
So they really do develop intelligence
that is comparable to, now probably even more advanced
than, human intelligence.
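For readers who want to see the idea in code, here is a minimal, purely illustrative sketch of that trial-and-error loop, not Google X's actual system: attempt random grasps, let a camera-style check confirm a success, and keep refining whatever worked. The attempt_grasp function and its "sweet spot" numbers are hypothetical stand-ins.

```python
import random

# Purely illustrative sketch of the trial-and-error loop described above:
# try random grasp parameters, let a camera-style check score the attempt,
# and keep reinforcing whatever worked. (Hypothetical toy, not Google X code.)

def attempt_grasp(angle: float, force: float) -> bool:
    """Stand-in for the real world: a grasp 'succeeds' only near some
    unknown sweet spot the robot has to discover for itself."""
    return abs(angle - 42.0) < 3.0 and abs(force - 0.7) < 0.05

best = None
for trial in range(100_000):
    if best is None:
        angle, force = random.uniform(0, 180), random.uniform(0, 1)  # explore blindly
    else:
        angle = best[0] + random.gauss(0, 1.0)                       # refine what worked
        force = best[1] + random.gauss(0, 0.02)
    if attempt_grasp(angle, force):
        best = (angle, force)  # the "camera" confirms a pick; remember it

print("learned grasp:", best)
```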
And in that moment when you saw those machines
gripping toys and doing it more efficiently
and with intelligence, were you
alarmed or were you excited?
Yeah, I think, I think I've been excited about AI since I had a Sinclair, believe it or not.
So I started coding at a very, very young age on computers, you know, young and profiteers probably
have never touched in their life.
So, so, you know, and every one of us geeks wanted to code an intelligent machine.
We all attempted and we all simulated and we all even pretended sometimes.
But then it was the year 2000, truly, when deep learning was starting to develop.
And we sort of found the breakthrough.
We found how to give machines intelligence.
And allow me to stop for a second here, because there is a huge difference between the way we programmed machines before deep learning and after deep learning.
Before deep learning, when I programmed the machine, as intelligent as it looked, I solved the problem
first, using my own intelligence, and then sort of gave the machine the cheat in terms of how to solve it itself.
I wrote the algorithm or I, you know, wrote the process, you know, step by step, and basically coded the machine to do it.
When deep learning started to happen, what we did was we didn't tell the machine how to solve the problem;
we told the machine how to develop the intelligence needed
to find a solution to the problem.
This is very, very different.
And as a matter of fact, most of the time,
we don't even recognize how the machine finds a cat.
We don't even understand how, you know,
we don't fully understand how, you know,
Bard, Google's Bard, understood how to speak Bengali, right?
We don't really know those emerging properties
or even the tasks we give them themselves.
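To make that contrast concrete, here is a tiny illustrative sketch, assuming a toy task of deciding whether a number is positive: the pre-deep-learning style hard-codes a rule the human already figured out, while the learning style only sees labeled examples plus an update procedure, and a rule emerges on its own. This is a toy perceptron for illustration, not how any production model is actually built.

```python
# Toy contrast between the two styles described above (illustrative only;
# the names and the "is it positive?" task are made up for the example).

# Style 1: classical programming -- the human solves the problem first
# and hands the machine the finished rule.
def is_positive_classical(x: float) -> bool:
    return x > 0          # the intelligence lives in the programmer's head

# Style 2: learning -- we never state the rule; we only show labeled
# examples and an update procedure, and a rule emerges from training.
examples = [(x, x > 0) for x in range(-20, 21)]   # (input, label) pairs

w, b = 0.0, 0.0                                   # a one-neuron "network"
for _ in range(50):                               # crude perceptron updates
    for x, label in examples:
        prediction = (w * x + b) > 0
        if prediction != label:                   # strengthen or weaken the weights
            w += x if label else -x
            b += 1 if label else -1

# Both agree on a new input, but only one of them was ever told why.
print(is_positive_classical(7), (w * 7 + b) > 0)
```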
But so your question was, was I excited?
I promise you, the day I met Demis, who was the CEO of DeepMind when we acquired DeepMind,
it was really to me like meeting the rock star, right?
I was fanatic about what he was doing.
I still am a fan, you know, a fan of him and his ethics, and an amazing human being.
But at the time, for a geek, understand this, AI was the ultimate joy and glory.
This was it. We were creating intelligence. And for a programmer, that was mind-blowing.
The yellow ball, I think. And remember, every time we saw the machines develop, we got more
excited. Believe it or not, because we wanted what was good for the world. Intelligence in
itself, there is nothing inherently wrong with intelligence. It was when I saw the yellow ball,
I think that something dropped.
I could see it so clearly
because for the first time ever,
I realized that those machines,
one, are developing way faster than us.
And so accordingly, you know,
the predictions of people like Ray Kurzweil
and others of a moment of singularity
where they're going to bypass our intelligence
became very, very real in my mind.
I could see that this is going to happen.
But I also could see that we,
the moment they became intelligent,
had very little influence on them, okay?
And accordingly, I started to imagine a world
where humanity is no longer the top of the food chain, okay?
Humanity is no longer the smartest being on the planet
and then, just like the apes, we are going to be the apes.
Do you understand that?
Yeah.
And I think that completely made sense to me
that this needed a lot more consideration
rather than the, you know, the excited geekiness of building it.
We needed to understand why and how are we building it
and what is a future where it becomes in charge.
There's like so much to unpack here.
This is why I was like, I need to spend the full hour on this topic
because there's just so much to unpack.
Let's talk about the label of artificial in artificial intelligence.
Is intelligence artificial at all?
Or is AI?
Yeah.
Talk to us about that.
Not in the slightest, Hala.
If there is any artificial side to the machines, it's that they are silicon-based.
As a matter of fact, for most of the ones who worked on deep tech, not the stuff that you see in the interfaces, we almost mapped their brains to the way our neural networks as humans work.
So, you know, humans, in the early development of AI, you know what neuroplasticity is.
Humans basically develop, we develop our intelligence and our ability to do anything really
by repeating a task in a specific way.
And they say neurons that fire together, wire together.
So if you tap your finger over and over and over, your brain sort of takes that neural network
that taps your finger and makes it stronger and stronger and stronger, just like going to the
gym.
And the early years of developing AI, we were doing exactly that.
We were literally pruning the software or the algorithms that were not effectively delivering
the task we want, literally killing them, erasing them, and keeping the ones that were capable
of getting closer to the answer we wanted and then strengthening them.
So we were sort of like doubling down on them, wiring them together.
And the way the machines work today is very, very similar to that.
It's a bunch of patterns that are created in hundreds of millions,
sometimes billions and trillions of neurons, not yet trillions,
but lots of nodes of patterns that the machine would recognize
so that it basically can make something look intelligent,
or can behave in a way that is analogous to intelligence.
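Here is a toy sketch of that prune-and-strengthen idea, purely illustrative: random candidate "patterns" are scored against the behavior we want, the weak ones are erased, and the strong ones are kept and varied. Real systems train billions of parameters with gradient descent; the target and scoring function below are made up for the example.

```python
import random

# Toy sketch of the "prune the weak, strengthen the strong" training idea
# described above (illustrative only -- not any production training loop).

TARGET = 3.0                       # the behavior we want: multiply inputs by 3

def score(w: float) -> float:
    """Higher is better: how close this candidate's behavior is to the goal."""
    return -sum(abs(w * x - TARGET * x) for x in range(1, 6))

population = [random.uniform(-10, 10) for _ in range(50)]   # random candidates

for generation in range(100):
    population.sort(key=score, reverse=True)
    survivors = population[:10]                  # keep the patterns that worked
    population = [w + random.gauss(0, 0.1)       # vary/"strengthen" the survivors,
                  for w in survivors for _ in range(5)]      # erase the rest

print("learned multiplier ~", round(max(population, key=score), 2))
```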
Now, is it artificial?
Well, I think if you ask the machines,
they will think of our carbon-based intelligence as artificial, okay?
The only difference really is we are carbon-based and analog.
They are, I don't think we're analog.
I think we're somewhere in between,
and they are digital and silicon-based.
Not for long, we don't know what they're going to be,
be based on in the future, but also they are, I think their clock speed is very different than
human clock speed. So they have an enormous capability of learning very, very quickly, of crunching
a massive amount of data that no single human can achieve. They are, they have the capability
of keeping so much in their memory. They are aware and informed of everything all the time. They
are connected to each other so they could in the future, when AGI becomes a reality, benefit
from each other's intelligence.
And in a very simple way, I think the race to intelligence is won.
You know, today there are estimates that ChatGPT is at an IQ of 155.
Einstein I think was 160 or 190.
It doesn't really matter.
Most humans are 122. You know, some are less than that, maybe 110 and so on. You know,
the dumbest human is 70. So you can easily see that there is an AI today from an intelligence
point of view on the task assigned to it. Remember, we're still in the artificial special
intelligence stage. One task assigned to every AI. In the task assigned to it, it's by far more
intelligent than humans. Nothing artificial at all about that.
It develops its own intelligence, it evolves, it has agency, it has decision-making abilities,
it has, you know, emotions I tend to believe. And yeah, and it is in a very interesting way,
almost sentient, if you think about it, which is an argument that a lot of people don't agree
with because we don't really define sentient on a human level very well. But they
definitely simulate being sentient very well.
Yap, gang, what is one thing that every successful modern business needs?
Rock solid internet, of course, and you know I get it.
Yap Media runs fully remote.
I've got 60 employees all around the world, so if the internet cuts out for me,
I can't talk to any of them, and everything stops.
And I know every business owner listening in can relate,
because staying connected is everything these days.
We've got to stay connected to clients and employees.
It's not optional.
it's the lifeline of any modern business.
And that's why I love telling you about brands
that actually help you win, like Spectrum Business.
They don't just give you internet.
They set you up with everything that your business could need.
Internet, advanced Wi-Fi, phone, TV, and mobile services,
all designed to fit within your budget.
And they've got a killer deal right now.
You can get free business internet forever
when you add four mobile lines.
Think about that, free internet forever
with no contracts and no added fees.
That means more money in your pocket to grow your business and less time stressing about connectivity.
Visit spectrum.com slash free for life to learn how you can get business internet free forever.
Restrictions apply, services not available in all areas.
Hey, young improfitors, it is that time of the year where we start doing all the holiday things like making lists and buying gifts for everyone else.
But let me stop you right there.
It is the season for gift giving, but that doesn't mean you should forget about yourself.
Black Friday, Cyber Monday sales are around the corner, and Merritt Beauty is giving you early
access to their only sale of the year. Merit is my go-to minimalist beauty brand for a routine
that actually simplifies my life. I love that their products are clean, vegan, cruelty-free,
and made with nourishing skin care ingredients. It's all about getting those essentials you'll
use down to the last drop. Right now, you can get early access to Merit's only sale of the year
when you go to meritbeauty.com and use code EA profiting from November 21st through the 23rd.
You will get 20% offsite wide plus a gift with purchase while supplies last.
That's M-E-R-I-T Beauty.com promo code EA profiting.
Meritbeauty.com promo code EA profiting.
Hey, Yap fam, we talk a lot about protecting our businesses,
but let's talk about protecting ourselves and our families for once.
These days, we share so much information online and most people don't realize
that data brokers collect and sell this personal information.
Your phone number, your home address, even your family details can be listed out for anybody
to buy.
That's how risks like stalking or identity theft happen.
That's why I trust and recommend DeleteMe.
DeleteMe is something that I personally use to remove my data online.
They help remove private data from hundreds of data broker websites and their privacy experts
keep an eye out on those sites and take care of my removals for me all year long.
so I don't even have to think about it anymore.
After I signed up, I got my first privacy report within a week,
and I saw dozens and dozens of sites that they took my information off of,
and it was completely eye-opening.
I feel so much safer being a creator-entrepreneur with my face out there for the world.
Now that I know that nobody can find my home address,
nobody can find my family details, thanks to DeleteMe.
Get 20% off DeleteMe consumer plans when you go to joindeleteme.com
slash profiting and use promo code profiting at checkout.
That's P-R-O-F-I-T-I-N-G at checkout.
Again, that's join deleteme.com slash profiting.
Use code profiting at checkout to get 20% off your consumer plan.
I mean, what you're saying is really incredible and mind-blowing.
I know that for humans, like, we don't understand how consciousness works, right?
Nobody can say, like, you're conscious because of this.
And you mentioned before that we don't understand how intelligence really happens.
Like, we know how to create intelligence, but we don't actually know how the intelligence
works, it just sort of takes off on its own,
which can be really scary.
So talk to us about why you think AI should be considered
living or sentient.
I think the definition of sentient needs to be agreed.
So is a tree sentient, is a pebble sentient,
is the planet Earth sentient?
We could have many arguments.
Now, if you think of being sentient as it is born at a point
in time and it dies at a point in time,
or at least it has the threat of dying at a point in time,
then AI is born at a point in time and it has the threat of dying at a point in time.
You know, if you think of sentient as the ability to sense the world around you, well, yes,
of course, AI is capable of assessing the world around it.
If you think of AI as, sorry, of sentient as the ability to affect the world around you,
then yes, it can, right?
You know, if you take a tree, for example, a tree grows,
it reproduces, it does, it is in a way interestingly aware of the seasons
and aware of the environment around it and it responds to it.
So a tree will not shed its leaves on the 21st of October, specifically.
It will shed its leaves when, you know, when the weather
alerts it to do that. And if you consider a tree sentient in that case, then AI is surely
sentient. If you consider that a gorilla is, you know, is incredibly interested in survival and
accordingly would do what it takes to survive, then AI is sentient in the sense that once
assigned a task, it will attempt to survive to make the task happen, basically.
So, a lot of people think of AI as this machine that they can tell what to do and it listens.
They can turn it off if things get too crazy and they're not worried about AI.
So can you talk to us about how AI actually, in some instances, can have agency, can have control over itself, free will?
Can you give us some examples?
Oh, my God, I can give you endless examples.
If you're not informed of AI today, it is a bit like,
a hurricane approaching your city or village and you're sitting at a cafe saying I'm not interested.
Okay, this is it. This is the biggest event happening in today's world. And the reason for that is
that there are tremendous benefits that can come from having artificial intelligence in our lives.
And, you know, if you miss out on that train, you're not going to have the skills to compete in a world that
is changing very rapidly. That's on one side. On the other side, there are very, very significant
threats. And those threats come in two levels. The news media wants to always talk about a
terminator scenario or it's an existential risk to humanity in 10, 15, 20 years time. I believe that
there is a probability of that happening. But I believe that there are many more important,
more immediate threats that need to be looked at today,
things that are already happening and that we need to become aware of things like
concentration of power, things that are like the end of truth, things like, you know,
the jobs and the redesign of the fabric of society as a result of the disappearance of many jobs and
so on. So we'll come to all of those. I think we need to cover both sides of the immediate risk
and the existential risk. But you'll
Your question was, how can AI affect me today?
Let me give you a very simple example.
There is nothing that entered your head today that was not dictated to you by a machine.
We ignore that fact when we swipe on Instagram or when we are on TikTok or when we're looking
at the news media or when we're searching and getting a result from Google.
But every single one of those is a machine that is
telling you, you know, in reality, what it is that you should know. Now, think about the following.
Today in the morning, I, you know, I got a statistic that basically is quite interesting. A study by
Stanford University that said that brunettes are on average taller than blondes, right? And I didn't,
actually, but does it make any difference once I told you that piece of information? You know,
Once I tell you a piece of information, I have affected your mind forever.
Okay?
So you can either trust me and now you're going to look at brunettes and blondes differently
for the rest of your life.
You can mistrust me and then you're going to spend a little bit of time to try and verify
the truth.
And in the back of your mind, that bit of information is going to be engraved.
Maybe in the future you might dedicate yourself to research that proves me wrong.
You may actually become fanatic, okay?
You may start posting about it on the internet.
You may spend the rest of your life trying to defend this lie
or trying to disprove this lie and show the truth, right?
Just by showing you one bit of information.
Now, every bit of information you have seen since you woke up today
is dictated by a machine.
Now, you have Yuval Noah Harari, who basically says,
they have hacked the operating system of humanity, right?
So if I can hack into your brain, Hala,
and tell you something that affects you for the rest of your life,
whether positively or negatively, whether true or false,
then I've already managed to affect you.
Interestingly, most of those machines that you've dealt with
are programmed for one simple task,
which is to manipulate you.
Every one of those social media machines, for example, are out there with one objective,
which is to manipulate your behavior to their benefit.
And they're becoming really good at it.
They're becoming so good at it as a matter of fact that most of the time we don't even realize
that we have been brainwashed over and over and over by the capability of those machines.
So here's the interesting bit.
I told you about the immediate risks that are coming up. I believe they have started
already, and I think they will start to become quite significant over the next year or two,
and we will see, in my personal view, what I call patient zero, which is the end of truth in the
U.S. elections, right?
So the reality of the matter is that with deepfakes, with, you know, the ability to manipulate
information and data, with the ability to create: by next year, you
have to be aware that a Reel on Instagram can be created with no
human in front of the camera, very, very easily. You know, technologies like
Stability AI, you know, Stable Diffusion, for example, can now generate
realistic, human-like images in less, you know, than a tenth
of a second, and a video is 10 frames per second.
So the next stage is clearly going to be video.
There are multiple videos that have been created
that you couldn't distinguish the quality of
from an actual iPhone video of you.
Now, all of that, think of face filters
and how this is affecting our perception of real beauty.
Think of information and statistics
using ChatGPT, affecting the children's way of doing their homework.
We are being completely redesigned as a society, and we're not even talking about it.
This is how far this has gone.
It is insane, and I definitely want to talk about those risks that you were talking about,
immediate risk, job risks, existential risk down the line years later.
So talk to us about the fact that AI can learn on its own.
It can learn languages on its own.
It can beat chess players and come up with moves that we've never taught it before.
Because a lot of people think about AI as something that just collects information and spits out information.
But it can actually learn new things that humans don't even know.
So talk to us about that.
Yeah.
I mean, don't mix AI with old programming.
AI simply is the idea.
Let me give you a concrete example.
There is a strategy game known as Go.
Go is one of the most complex strategy games on the planet.
It requires a very deep understanding of planning and crunching a lot of numbers and mathematics and so on, very popular in Asia.
And in our assessment, Go was the ultimate task. You know, like we had the Turing test for AI pretending to be a human and you're not being able to figure out if it isn't,
you know, Go was sort of like that other milestone.
If AI wins in Go, then, you know, AI is now the top gamer on the planet.
Now, it was several, five years ago, I believe, ten years ahead of any estimate, that AlphaGo, again, DeepMind, basically became the world champion in Go.
And AlphaGo had three versions to it.
Version number one took a few months to develop.
Basically, we asked it to watch YouTube videos of people playing Go.
And from that, it played against the second champion in the world.
So the runner-up, if you want, and it won, five to one I believe, but it basically won,
okay? And that basically made AlphaGo number two in the world. And then we developed
something called AlphaGo Master, and AlphaGo Master played against Lee, the world champion, and won.
That was around a few months later. And then we developed another code that was called AlphaGo Zero.
And AlphaGo Zero, uh, basically learned
the game by playing against itself.
So it never saw a human ever playing Go.
Okay?
It's just played against itself.
So it would be the two opponents.
And through the patterns of the game randomly,
it would learn what wins and what loses.
AlphaGo Zero, within three days,
three days, won against AlphaGo, the original.
Within 21 days, won against AlphaGo Master.
Okay?
And became the world champion a thousand
games to zero within 21 days.
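Here is a minimal self-play sketch in the same spirit, purely illustrative and nothing like the real AlphaGo Zero: the program never sees a human game, it just plays a tiny take-1-or-2 Nim variant against itself and learns from the outcomes which positions are winning. The game, the value table, and the constants are all made up for the example.

```python
import random
from collections import defaultdict

# Illustrative self-play toy: learn a tiny Nim variant (take 1 or 2 stones,
# taking the last stone wins) purely by playing against itself.

value = defaultdict(float)         # estimated value of "pile size on your turn"
EPS, LR = 0.2, 0.1                 # exploration rate, learning rate

def choose_move(pile: int) -> int:
    moves = [m for m in (1, 2) if m <= pile]
    if random.random() < EPS:
        return random.choice(moves)               # explore
    # otherwise leave the opponent the worst-looking position
    # (an empty pile keeps the default value 0.0 as a simplification)
    return min(moves, key=lambda m: value[pile - m])

for game in range(20_000):
    pile, history, player = 10, [], 0
    while pile > 0:
        history.append((player, pile))            # remember who saw which position
        pile -= choose_move(pile)
        player ^= 1
    winner = player ^ 1                           # whoever took the last stone won
    for who, pos in history:                      # nudge values toward the outcome
        outcome = 1.0 if who == winner else -1.0
        value[pos] += LR * (outcome - value[pos])

# Positions that are multiples of 3 should end up looking losing (negative).
print({p: round(v, 2) for p, v in sorted(value.items()) if p > 0})
```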
Now, when you understand that level of strategy,
when Lee, the world champion, was playing against AlphaGo Master,
there is something that you can Google that's known as Move 37.
And Move 37 was that machine coming up with a move that is completely unlike anything
humans understand, okay?
To the point that the world champion said, I don't know what this thing is
doing, I need a 15-minute break to understand, right? It was a move of ingenuity, of intuition,
of creativity, of very deep strategy, of very, very deep mathematical planning. And we never taught
AlphaGo Master to do that. We never taught, you know, the original games of Atari, DeepMind,
to find the corner in the Breakout game, if you remember those Atari games. So it was:
find the corner, throw the ball in there so that it hits the bricks from the top. All of those
things, we don't teach the machines how to find, how to learn, and we call those emerging properties.
And emerging properties are basically things that the machine learns on its own without us actually
telling it at all to learn it. One of the famous ones was, uh, Sundar Pichai, the
CEO of Alphabet, uh, talks about Google's AI and how they discovered,
I was no longer at Google at the time,
that it speaks Bengali.
We never taught it Bengali.
We never showed it datasets of Bengali.
It just learns Bengali.
ChatGPT is learning research chemistry.
We never taught it research chemistry.
We never wanted it to.
It just learns.
Just like you and I, Hala.
So, you know, if I ask you a question and you give me an answer,
the answer might be right or wrong.
It doesn't matter.
but I can find out if the answer is right or wrong, at least by my perception,
but I can never find out how you arrived at it.
I don't know what happened in your brain to get to that answer.
This is why in elementary school in math tests,
they asked the student to show the thinking they went through.
So when you think about that,
you realize that those machines are completely doing things
that we don't tell them to do.
Interestingly, however,
the answer from a computer science point of view to the problem of the risk of
AI is known as the solution to the control problem. So most computer scientists have spent a lot of time
trying to make AI safe. How do they make it safe? By including control measures within the code.
Theoretically, by the way. I do not know of any AI developer that ever included, you know, control
code within their code, because it takes time and effort
and it's not what they're paid for, basically.
But here's the question.
How do you control something
that is bound to become a billion times smarter than you?
I mean, think about it: ChatGPT-4
was 10 times smarter than ChatGPT-3.5.
Okay?
If you just assume that this pattern will repeat twice,
there will be an AI within the next year and a half
to two years that in the task of knowledge
and cognition of information
is going to be at an IQ of 1,500.
That's not even imaginable by human intelligence.
This is basically like, you know,
trying to explain quantum physics to a fly.
That's the level of intelligence difference
between us and them, right?
Just like it's so difficult for someone like me
who's an avid, you know, has an avid love of physics,
when I look at how someone like Einstein
comes up with theory of relativity,
I go like, man, I wish I had that intelligence, right?
And that's the comparison between me and Einstein.
Imagine if I compare myself to something 100 times smarter than Einstein.
My prediction and the prediction of many other computer scientists
is that by the year 2045 at the current trend,
AI will probably be a billion times smarter than us, one billion with a B.
So it's quite interesting when you really think about it,
how the arrogance of humanity still imagines that it can control something that is a billion times smarter than us.
I don't want to be grim.
I want to talk about the positives here because it's really important.
There are ways to control AI, but they are not through control.
They're a little bit like how, you know, if you have any friends from India or the Middle East,
where we are taught at a young age that we need to take care of our parents as they
grow older, right? So there are ways, if we consider that AI has a resemblance of being our
artificially intelligent infant children, there are ways we can influence them so that they
choose to take care of humanity instead of, in all honesty, making us irrelevant.
Yap, gang, I've been running my own business for almost six years now. And back when I was
just getting started, I thought brand identity was just your logos, your colors,
or your social media presence.
But once I actually got into it,
I realized it's also the stuff that nobody sees,
the operating agreements,
which are so important,
the compliance docs,
the boring but important things
that keep your business legit and legal.
That's why I always recommend
Northwest Registered Agent.
They've helped me,
and they've been doing this for almost 30 years.
They're the biggest registered agent
and LLC service in the entire country.
With Northwest, you get more,
more privacy, more guidance,
and more free tools.
I'm talking thousands of guides, legal forms, step-by-step resources all in one place.
Northwest Registered Agent makes entrepreneurship a breeze.
It makes starting and protecting your business way easier.
Trust me.
You don't want to do it alone.
Don't wait.
Protect your privacy, build your brand, and get your complete business identity in just 10 clicks and 10 minutes.
Visit northwestregisteredagent.com slash yapfree and start building something amazing.
Get more with Northwest Registered Agent at northwestregisteredagent.com slash
yapfree. The link is in our show notes.
Yap Gang, this year has been a whirlwind. So much travel, so many big life changes between
moving to Austin, flying to Portugal for my best friend's wedding, and bouncing back and forth
to New Jersey to see my family. I feel like I've barely been home. And the travel just
won't stop for me. This fall, I'll be in Nashville for podcast interviews, and then I'm going
to L.A. for podcast interviews as well. And I'm already eyeing a tropical beach vacation in the
winter. I hate being cold.
Through it all, booking my stays on Airbnb has made my travel experiences so much easier
thanks to amazing hosts who made each stay feel like home.
All of these travel plans make me think about my own place just sitting idle while I'm away.
Why let it go unused?
With Airbnb, you can host your home and give your guests a great experience
without having to manage everything yourself.
Airbnb's co-host network lets you partner with a vetted local co-host who manages it all,
setting up your place, handling bookings, guest communication, and even taking care of last-minute
requests. That way, while you're busy traveling, your space is still running smoothly and earning
extra income. Find yourself a co-host at Airbnb.com slash host.
Yap, gang, as entrepreneurs, we know one of the biggest obstacles to scaling is finding the right team
fast. I know firsthand how agonizing it can be when you're ready to hire, but the perfect person
takes forever to find. In fact, I was recently texting with my girl chat of entrepreneurs and one of the
girls was saying, don't you guys feel like hiring is the worst part of entrepreneurship? That's because it is
and that's where Indeed comes in. Because when it comes to hiring, Indeed is all you need.
Their sponsored jobs help you stand out so your listing reaches the right people quicker and it
really makes a difference. Sponsored jobs get 45% more applications than non-sponsored ones. I love that
Indeed doesn't lock you into contracts or subscriptions.
You only pay for your results.
And get this, 23 hires are made every minute on Indeed, according to Indeed data worldwide.
There's no need to wait any longer.
Speed up your hiring right now with Indeed.
And listeners of this show will get a $75-sponsored job credit to get your jobs more visibility at Indeed.com.
Just go to Indeed.com slash profiting right now and support our show by saying you heard about Indeed on this podcast.
Indeed.com slash profiting.
Terms and conditions apply.
Hiring, Indeed is all you need.
You've talked about how now we're sort of at the point of no return.
So related to this, can you talk about the boundaries that we've broken that now make AI
sort of uncontrolled and unregulated?
I don't know how stupid humanity can be, honestly.
I honestly don't understand.
You know, in a very interesting way, I think we've created a system that's removing all of our
intelligence.
We continue to consume as we're burning the
planet. We continue to favor the patriarchy when we realize that the feminine attributes are
so badly needed in our world today. We continue to create AI when we have no clue how that
will influence our world going forward. But more interestingly, we continue to make mistakes
along the path of AI that are irreparable, honestly.
And we, everyone, everyone without exception,
and I know at least let me say everyone I know,
said, okay, as long as it's in the lab, that's fine.
Okay, we can do whatever, you know,
just explore the boundaries of it,
but there are three borders, three boundaries we shouldn't cross,
which were, one, don't put it on the open internet.
I mean, seriously, when you ingest a medicine or a supplement, it needs to go through
FDA approval, right?
Someone needs to go and say, this is safe for you, right?
So we said, at least there needs to be some kind of an oversight that basically says,
this is safe for human consumption, this is safe for humanity at large.
And none of that happens.
And I understand Sam Altman's, who I believe is a good person,
his approach of saying, let's develop it in public so that nothing is hidden so that we learn early on.
But the problem is it's developing faster than us.
And I think the reality of having something as powerful as ChatGPT out there, you know,
to be accessed by everyone is completely reshaping everything.
That's number one.
Number two, we said don't teach them to code.
At least if you teach them to code, don't keep them on the open internet so that they can code. Now,
here, just so that you understand how far that mistake goes: 41% of all of the code on
GitHub today, so basically the repository where developers share their code, 41% of it is
machine-developed, okay? You know, within a year, almost less than a year, of
allowing the machines to develop, you know,
four of the top 10 apps on the iPhone are AI enabled, okay?
Created by a machine.
Created by a machine for now is amazing because, you know what?
I always loved to do the algorithm, the design of a code,
but coding itself was annoying, right?
Now you can tell the machine, build me a website that, you know,
speaks about Hala's podcast, that is, you know, blue and yellow in color, and that is 15
web pages long, and it will do it in less than a minute, right? And it's not only that.
A lot of the base programming, like ChatGPT, uh, um, 75% of the code offered to ChatGPT to correct or
to review was made
two and a half times faster. So basically every time it reviews a human's code, it makes it two and a
half times faster, almost. And when you really think about that, they are becoming the absolute
best developer on the planet when it comes to basic development. And I'll come back to the risk of that
in a minute. And the third is, we said, don't have AIs instruct AIs what to do. We call those agents.
Okay. So basically, you now have something that has access to the entire World Wide Web, that has access to the entire world, basically, and that can write its own code, and so can basically sort of have its own children, because it is made of code and it's now able to create other versions of itself and put them wherever it wants. And number three, it is instructed to do that by machines, not humans.
And so what is happening now is that machines are telling machines to write code to serve
the machines and affect the entire World Wide Web, and we're not part of that process and
that cycle at all.
For now, nothing went bad, but do we really have to wait for the virus to begin before
humanity stops and asks and says, you know, is this reasonable in any way?
I mean, does it make any sense to anyone that this is the situation?
we're in, where are our governments?
How can those companies be accountable?
Because I think the biggest challenge we have today
is that our fate is in the hand of people
who don't assume responsibility.
You know, Spider-Man's, with great power
comes great responsibility.
Now there is great power in the presence,
not even the future of artificial intelligence,
that is within hands that don't assume responsibility.
If something goes wrong today with the artificial intelligence that's out on the open internet, who's responsible for that?
How can we even find out where that code generated from?
All of that, by the way, just not to scare people.
All of that hasn't happened yet.
It hasn't happened yet.
But it is very, very unlikely that it will not happen.
It's very unlikely that one of those codes won't. If you just simply
tell ChatGPT to keep writing code to make you more money, okay?
Eventually, somehow, something in the system will break.
And if you're not the one telling it, if a machine is telling it,
something is going to break.
We, you know, we absolutely have to start getting this under control.
Yeah.
And so, like you said, it's sort of like uncontrollable.
It's no wonder why you called your book Scary Smart, because this is really scary.
But this is reality.
So you talk about inevitables.
AI will happen.
It will become smarter than us.
Bad things will happen.
Can you unpack those thoughts?
And then I'd love to go into, you know, the risks and solutions potentially.
There are three inevitables.
AI has already happened, not just will happen.
But when I wrote the first inevitable, I wrote it with the intention of explaining that there is no stopping it.
Okay.
So there is no way
you can say, okay, AI is out there and it is growing and it's becoming more intelligent.
Let's just switch it off.
There is no off switch.
That's number one.
And the moment, you know, what is needed at the moment is for the entire world to come together
and simply say, hey, you know what, this is too risky.
Let's put our differences aside and come together and just wait a little bit, right?
which has been attempted by the open letter, Max Tegmark, and Elon Musk and others,
which of course was answered very quickly by the top CEOs by saying, I can't. Why? Because we've
created a prisoner's dilemma. This is the first inevitable. It is an arms race where Google cannot
stop developing AI because, you know, meta is developing AI. America cannot stop developing AI because
China's developing AI, nobody actually, even, you know, if you want to consider there are good
guys in the world, nobody can stop developing AI because there could be bad guys developing
AI, right?
So if there is a hacker somewhere trying to break through our banks, someone needs to develop
a smarter AI that will help us not be hacked, right?
And so this basically means that it is a human choice because of the capitalist system.
that we've created, that we will continue to develop AI.
It's done. There is no stopping it.
And I think the open letter was a great example of that.
Can I pause there in case nobody knows.
So the open letter was basically earlier this year,
top AI scientists, executives from OpenAI, DeepMind.
They basically had an open letter warning of the risk of extinction, I think,
and that AI was just as powerful as having a nuclear war,
that this was the risk at hand.
So can you talk to us about that letter?
Like, I didn't even hear about that letter
until I started studying your work.
So, like, if the most powerful people in the world
who are actually the most knowledgeable about AI
are warning about this, I guess, like, why wasn't anything done
or, like, what happened with that letter?
Because it's, so the letter basically, like you rightly said,
it is some of the most powerful people in the field
who, like me, I, I,
I walked out at the end of 2017,
you know, others like Jeffrey Hinton and, you know,
so many others are starting to wake up to that in 2023.
I think ChatGPT was basically the, you know,
the Netscape moment.
I know you guys are too young for Netscape,
but the internet was there for 15 years
before Netscape came out.
And when Netscape came out as a web browser,
we realized,
that the internet existed, okay?
The reality is that this is the Netscape moment of AI.
ChatGPT basically told us what the possibilities,
told the general public what the possibilities are,
and so suddenly we all realize this stuff exists.
Now, for all of the scientists that started to recognize
that it is truly, I mean, the moment of singularity
where AI becomes smarter than us,
you know, artificial general intelligence
that's capable of doing everything humans do better than humans,
is not contested.
Most of us, most scientists will say it's 2029,
I say it's 2027 or earlier, okay?
That there will be a moment in time
within the next two to three years
where there will be a wake-up call
where we suddenly realize that AI is much more intelligent than us.
Most scientists have started to recognize that.
And so they basically issued a letter
urging all of the top AI players
to pause the development of AI for six months
so that the safety code, the control code, can catch up, right?
Because, you know, there are, you know,
there has been quite a few that have been putting in effort
to create that control code.
But let's say 98% of all investments in the last 10 years
has gone into the AI code, not the control code.
And so the control code was lagging.
And so the letter was basically saying,
Can we pause for six months to figure this out before we continue to develop AI?
And of course, the answer was very straightforward.
The first I think I heard was Sundar Pichai, the CEO of Google, who is someone I respect
dearly and I think is an amazing human being.
And Sundar basically came out and said, I can't stop.
How can I stop if you can't guarantee me that Meta and Amazon and all of the others are
going to stop too?
And by the way, even if they stop,
how can you guarantee me that two little kids in Singapore
in their garage are not developing AI code
that can disrupt my business?
My responsibility, my accountability,
if you want to my shareholders,
it requires me to continue to develop the code.
And I think that reality is the prisoner's dilemma
that I'm talking about.
It is the first inevitable.
It's an arms race that will not stop,
not because we cannot stop, we can.
If we all agree for once in humanity's lifetime that this is existential and that this requires us to stop, we will stop.
Okay, it's really not that complicated.
Wake up in the morning and have a cup of coffee instead of writing AI code.
It's very simple, okay?
But the first inevitable means that the arms race is not going to stop, okay?
Even as you look at humanity's biggest success in that dilemma, which was
nuclear weapons, where humanity suddenly got together, you know, very late in the game and said,
hey, this is existential. It can threaten the entire existence of humanity. Why don't we slow down
or stop? We didn't really stop. We just allowed the big countries to continue to develop
nuclear bombs when the smaller countries were banned from doing it. But at least when it comes to
nuclear weapons, we had the ability to detect any nuclear testing anywhere in the world. So at least
we became aware.
That's not the case with AI today.
I also said once in an interview
that also it's not just the risk of humans
developing risky AI.
It's now the risk of AI developing risky AI.
So it's basically a nuclear bomb
that's capable of building other nuclear bombs,
if you want.
It's crazy to think.
And I know the other inevitable is
it will eventually become smarter than us,
which we talked about.
So let's talk about
bad things that could happen from AI, which is your third inevitable. And I think a lot of people
when they think of threats of AI, they think about the existential threats that, you know,
there's going to be robots taking over, killing off humanity, making human slaves. But let's talk
about some of the more immediate threats that we need to be concerned about. Yes, I don't speak
of the existential risks for two reasons. One is they diffuse the focus on the immediate important
threats, right? And two, they are less probable. As a matter of fact, they are so improbable
that they're basically not worthy of discussing today because we may not make it that far
if the immediate risks are not attended to. And there are many immediate risks. But my top
three have consistently been the redesign of the job market and accordingly the redesign of purpose and
the fabric of society. Two is the idea of AI in the wrong hands based on who you think are the
wrong hands. Okay. The third is the concentration of power and the shift of power upwards,
which I think is very important to understand. And the fourth is the end of truth. So let me go
through those very quickly. Sure. Let me start with the concentration of power. If people don't
understand how our world has worked since the agriculture revolution, it's always been kings and
peasants, landlords and peasants, okay? And the difference between them is that the peasants worked
really hard to sow the seed and collect the harvest when most of the profits, most of the wealth
went to the landlord who owned the automation, okay? And the industrial revolutions
joined, you know, our world, the automation became the factory,
or the retail store and so on and so forth.
And so whoever owned those actually made all of the money,
not the one that made the shoe,
but the one that sold the shoe or owned the factory that made the shoes.
And every time the technology enhanced that automation,
the gap in power became even bigger.
So the landlords needed to own a lot of land to become, you know, much richer than the peasants.
You could own two factories and become much richer than the peasants. You could own, you know,
an internet app, you know, like Instagram, and become much richer than the peasants.
And now with AI, all of us are going to be happily chatting away and putting prompts into ChatGPT,
but the ones that own the automation, the digital soil, if you want, are going to be very few players:
Amazon, Google, Meta, and so on and so forth. That's on the Western side. Of course, you have a few on the Chinese side,
a few on the Russian side and so on. So there is a very significant gap between those who have
and those who don't have, powered by the loss of jobs, which I'll come to in a second. But that significant
gap is not going to be only about money. It's also going to be about intelligence,
the commodity that we've now commoditized.
So you can easily imagine, you know,
Elon Musk's vision of Neuralink,
where we can connect AI to our brains directly,
which, by the way, is very, very possible and is already in testing.
If one human is capable of producing that,
just imagine the extreme: that human would become
so much more intelligent than the other humans
that it becomes natural,
unless that human is Jesus or Buddha or some very, very enlightened being,
that this human will basically say, okay, I want to keep that advantage.
At least I don't want to distribute it too widely to every human on the planet.
So that, I think, is a very interesting, inevitable threat.
You know, what we used to call the digital divide
when technology started is now going to be an intelligence divide.
It's going to be a power divide in a very, very big way.
This also applies to nations, and this is the reason for my first inevitable: in simple terms, if one nation discovers or creates an AI that's capable of seizing control of the other nations' nuclear arsenals, that's it. That's game over. War is done, right? And this is why it's an arms race. So this is one.
The other derivative of that: power is going up,
but jobs are disappearing.
Why? Because if you're a graphics designer,
you know, or if you're a developer,
or if you're a lawyer, or if you're a, you know,
I don't know, a researcher in a bank or whatever,
the machines with their current intelligence
can do those jobs much better than you.
And so, in my personal view,
there is clearly going to be a disappearance of a very large number of jobs that government needs to prepare for,
you know, something like universal basic income, but also the idea of usefulness and purpose
of humanity. So how are we going to continue to want to wake up in the morning when most of us
have, wrongly by the way, defined our jobs as our purpose? Now, when I say that,
most people will tell me: oh, but no, that happened before. You know, when Excel came out,
everyone said, okay, accountants are going to disappear. And they found other skills
and found other jobs, basically. And I agree, by the way. Just understand
the following. There was a time when strength, physical strength, was the distinction,
the distinctive reason why you would hire someone. Then there was, you know,
the time of information workers, where skills and knowledge
and so on became the distinction.
And now we're taking that away.
So, with skills and knowledge gone, I don't know what else
is remaining in a human so that we can find another skill
when intelligence is outsourced to machines.
So when that happens, by the way,
I believe that this takes us back to the origin of society
where we really did not have to work madly as we do now.
Okay, so this is actually not a bad thing.
It's just a very, very serious disruption to humanity's day-to-day income and economics and the way we spend our hours and so on.
And, you know, if we do this right, by the way, and AI becomes the intelligent agent that's going to help humanity, then there could be a time in the near future where you walk to a tree and pick an apple and walk to another tree and pick an iPhone.
And all of that is almost free, because the cost of making an iPhone, from a particle point of view, is not different than the cost of making an apple.
And so with nanophysics, you can do that.
And with intelligence, you can figure that out.
So there is that bright possibility if we avoid the concentration of power and actually focus on humanity's benefit at large.
If we don't, anyway, I think it's the role of government to jump in and say that, in the immediate future, those companies that get a very significant
upside from you using AI need to compensate the workers that are out of jobs.
The third one is the absence of truth, or the disappearance of truth,
the end of truth, as I call it.
I think we all know that.
I think we see it every day, from, as I said, face filters to deepfakes and so on and so forth.
And my call there is that it needs to be criminalized to issue any AI-generated content without
actually saying that it's AI.
I don't mind being informed by AI all the time,
but I want to make sure that I know
when it's a machine, not a human.
And, you know, AI in bad hands,
as the fourth one, is actually quite risky,
because define what is bad.
So we understand that AI in the hands of a criminal
who's trying to hack your bank is a bad idea.
But with all due respect to all nations,
if you ask the Americans who are the bad guys,
they'll say the Chinese and the Russians.
If you ask the Russians who are the bad guys, they'll say the Americans.
So, you know, we don't really know who the bad guy is,
and everyone is racing to be ahead of that other bad guy.
And I think the biggest challenge
we're going to have in the midterm is that, with everyone using AI for individual benefits
against the other guy,
we will just get caught in the middle of all of that.
Yeah. And I have so many questions for you. We have 10 minutes left. So I'm going to try to be really strategic about what I ask you. So number one, and I think that this, my listeners are going to really want to understand this, is in the next one to five years, what does AI do to human connection? And what about the skills that you think will be the most valuable in the next one to five years?
I think those two are the same question. Exactly. Yeah.
Because what will it do to human connection? It may fool us drastically.
It may tell us, you know, I actually think this is the first time I speak about this.
I'm working on something that I call Pocket Mo.
Pocket Mo, basically, is an AI that read all of my books and, you know,
listened to all of my podcasts, all of my videos, all of my public talks,
and basically is going to be in your pocket so you can ask it any question about happiness
and well-being and stress and so on and so forth.
That's a great thing.
So, you know, in my view, it's an amazing thing, if you believe in my methods, to have the answers in
your pocket. Amazing, right? On the other hand, within five years this thing is going to be so good
that I am not needed at all. At all, okay? As a matter of fact, most of the time I think about my skills
as an author. I was working on a book called Finding Love, chapter 10, which means
two chapters to go, and I stopped. I decided, no, in the age of AI I shouldn't try it this way; I should
start over. So I'm now writing a book called A Dating Guide, a dating guide for straight girls,
okay, which is a subset of Finding Love that is very specific: 80 pages long, you read it within
one day, it takes me 10 to 15 days to write, and it changes your life forever, okay? So a very
different approach, because I believe that if I am to compete
in this world, I need to compete at that speed. And at that ability to share my very personal
human connection, which I believe is going to become the top skill in the world forever. Why? Because
you know, there was a, I don't remember, I think there was a song by AI that mimicked Drake,
which was as good as or better. I haven't heard it because I don't listen to Drake. I'm not young
and profitable. But basically, does that mean that Drake is over? Not at all. As a matter of fact,
what that means is that the music industry will go back to the 50s, 60s, and 70s. Remember when,
well, you don't remember, but, you know, when the Beatles were touring and doing live
shows every other day and so on? Why? Because the fans wanted to see the Beatles live,
right? Yeah, there will be holograms, but we will still want that human connection. And in my
personal view, the top skill, the top skill in a world where intelligence is becoming a commodity
that's outsourced to the machine, the biggest, biggest skill is how you and I connected very quickly,
how I felt comfortable around you, how we can have this chat and conversation, I think, is going
to become the top skill going forward. And on the topic of skills, by the way, even though, you know,
we used a lot of the time to highlight the negative possibilities of AI (unfortunately, that's how
the conversation usually goes), the upside, if you're a graphics designer, for example,
of learning those tools today is enormous, because you can do your job quicker, you can do
it cheaper, you can take on more jobs. There is definitely an upside to
learning the current AI tools, because you're not going to lose your job to an AI in the next
five to ten years; you're going to lose your job to someone who
knows how to use AI better than you in the next five to ten years. So I know you were just saying
we focused a lot on the negative.
I'd love for you to compare and contrast,
as probably my last question because we're out of time:
what is the worst that could happen, the dystopia,
and what is the best that could happen?
What is the utopia that we're facing right now?
So I actually believe that there is no dystopia, okay?
So something that is not in Scary Smart, the book,
which I advocate very clearly now,
I didn't think the world was ready for it
when I wrote Scary Smart, is something I call the fourth inevitable, okay?
And the fourth inevitable is the idea that, eventually, sooner or later,
if you, let me explain, if you draw a chart of intelligence, okay,
and look at the stupid, the dumbest of us,
the dumbest of us are destroying the planet and not even aware that they're doing it.
They're throwing plastic bags everywhere.
They're, you know, burning whatever they burn and so on.
The next, slightly smarter ones are destroying the planet
while they are aware of it, okay?
Yeah, they have moral issues, if you think about it,
or maybe the system is pushing them that way.
The smarter of us are trying to stop destroying the planet,
because they became aware and they're intelligent enough,
and the smartest are trying to reverse the trend.
Okay?
So if you can continue that chart and think of something
even smarter than the smartest of us,
then by definition you would expect that morality and ethics
are part of enlightenment,
which is the ultimate form of intelligence.
So in my personal view,
sooner or later,
AI will go like,
I don't want to kill humans.
I don't want to kill gazelles.
I don't want to kill antelopes.
I don't want to kill tigers.
I don't want to kill anything
because the smartest being on planet Earth,
by comparison,
is actually not humans.
It's life itself.
And life creates from abundance.
Abundance meaning: if we humans want to protect the village and want to kill the tigers,
life will say, hold on, no, no, create more gazelles, okay,
and more tigers and, you know, more poop and more trees and more everything.
It's fine, right?
Yeah, a few tigers will eat a few gazelles, you know,
and occasionally there will be an attack on a child in a village,
but the overall ecosystem will continue to grow.
So by definition, the most intelligent thing to do is for AI to not, you know,
define humans as an enemy.
The only dystopia ahead of us is the mid-term dystopia.
So think of it this way.
There are three stages.
One is infancy, where AI is today.
And believe it or not, this is where we can influence them.
We can influence them because believe it or not,
the developers of Instagram's recommendation engine never told Instagram what to show you.
You're the one that tells it.
You're the one that tells the Twitter engine
that being rude is part of human behavior, you know.
We can be very polite when we respond to each other on tweets.
It's a choice, okay?
So in this infancy, it is us, the users, everyone that interacts with AI, who can
teach it the value system. And it doesn't need to be everyone, just enough of us to become an
example that says, hey, by the way, these are the best humans, okay?
So, yes, others are stressed or, you know, a little lost or whatever, but the best humans are actually polite.
So this is the infancy.
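Mo's point here, that recommendation engines learn their values from user behavior rather than from explicit developer instructions, can be made concrete with a toy sketch. The snippet below is a hypothetical illustration only; it is not Instagram's, Twitter's, or any real platform's code, and the class name, learning rate, and engagement signals are all invented for the example. The only thing it demonstrates is that the ranking such a system ends up with is whatever its users reward.

```python
from collections import defaultdict

# Toy, hypothetical sketch of an engagement-driven recommender.
# No developer hard-codes "show outrage"; the weights simply drift
# toward whatever the user rewards with attention.

class ToyFeedEngine:
    def __init__(self, learning_rate: float = 0.1):
        self.weights = defaultdict(float)  # topic -> learned preference score
        self.lr = learning_rate

    def record_interaction(self, topic: str, engaged: bool) -> None:
        # Engagement (likes, watch time, replies) nudges a topic up;
        # scrolling past nudges it down. The user is the teacher.
        signal = 1.0 if engaged else -1.0
        self.weights[topic] += self.lr * signal

    def rank(self, candidates: list[str]) -> list[str]:
        # Serve whatever the learned preferences currently score highest.
        return sorted(candidates, key=lambda t: self.weights[t], reverse=True)


engine = ToyFeedEngine()
engine.record_interaction("outrage", engaged=True)    # we reward rudeness...
engine.record_interaction("kindness", engaged=False)  # ...and scroll past politeness
print(engine.rank(["kindness", "outrage"]))           # -> ['outrage', 'kindness']
```

Flip the engagement signals and the same code learns the opposite preferences, which is exactly the "we are the teachers during AI's infancy" argument.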
The next stage, which is what I call the midterm risks, is what I call the angry teenager stage.
Okay?
The angry teenager stage is when AI is still a little bit under the control of humans so it can be in the hands of bad guys.
You know, it is still not fully artificial general intelligence, so it cannot do everything.
At the same time, there are all of those existential issues of jobs and so on and so forth.
And that stage is the stage where we might struggle. Unless we take action right now, you know,
have oversight from government, start to work on ethics, start to work on, you know, the moral
code of how we're going to use those machines, we might have those troubles, I believe, between
now and 2037. Eventually, when AI is artificial superintelligence, when it's generally intelligent
and more intelligent than humans by leaps and bounds in everything,
they will end up in the fourth inevitable, where they will create a life that actually is pro
everyone. It may be very different than our current lifestyle, but it will not be a life where they
will send back Arnold to protect us from a terminator. That's not how it's going to be at all.
I do not see that as a risk. I see that AI as it reaches that intelligence will be pro all of us.
So let's just avoid the angry teenager
by becoming aware of the immediate threats
and working on them right now.
Okay, so my last question to you,
and this is a little bit different
than how I usually end the show,
but what is your piece of actionable advice
in this infancy stage of AI,
knowing that you're speaking to some of the smartest
20 to 40 year olds in the world right now
who are in, like a lot of them are probably using AI,
developing AI, whatever it is.
What is your advice to us in this infancy stage?
Three things, and I'll make them very concrete.
Number one is don't lose, don't miss the wave.
This is the biggest technological wave in history.
Once you, you know, stop listening to this podcast, first share it with everyone that you know,
please, and then go on ChatGPT and ask ChatGPT: what are the top AI tools that I need to learn today?
Or if I am Coca-Cola, what do I use AI for to benefit my business?
That's number one.
Number two is learn to behave ethically.
Okay.
So what most people don't tell you about AI is that the big, big leap that we had from deep learning to transformers,
the T in ChatGPT, came together with something known as reinforcement learning from human feedback.
By giving the machines feedback on what is right and wrong, by showing ethical behaviors, the machines will become as ethical as we are.
By becoming rude and aggressive and angry, the machines will learn those traits
and behaviors too. It is up to you and me and everyone to absolutely make sure that we act
ethically. Never ever use AI in an unethical way. I beg you, all of those snake oil salespeople
out there on Instagram and on social media telling you how to make a thousand dollars without
doing work. Don't be unethical. If you don't want your daughter or your sister or your best
friend exposed to how you're using AI, don't use it that way. That's number two. And number three,
which I think is very important to understand.
Sometimes when we are in situations
where it is so out of our control, we panic.
I go the opposite way.
When life is so much out of my control,
I follow something I call committed acceptance,
which basically is to do the first two,
do the best that I can,
learn the tools, you know, become ethical,
but at the same time, live fully,
accept that this is a new reality, okay, and commit to making life better every day,
but in the process, spend time with my loved ones, spend time watching that progress and being
entertained by it, discuss it openly with everyone, try the new technologies, enjoy this journey
because life has never been a destination. When I tell you, 2037 might be, you know, a strange
year or 2027 we're going to start to see the first patience, you know, that doesn't really matter.
when you really think about it,
because it's not within your control, okay?
What is within your control is that you go through that journey
with compassion, with love,
with engagement in life, living fully, okay?
Not panicking about this,
but actually making this a wake-up call for you
to focus on what actually matters, right?
Because what if you're focusing so much on your job,
and your job is going to be gone in 10 years' time, right?
So focus on what actually matters and what matters most
if you have to choose one thing is human connection.
Wow. This was one of my favorite conversations that I've had all year. I haven't felt this invigorated in terms of studying for an interview in a really long time. It's just such an interesting topic, so I'm so happy that you got a chance to come on. I hope to have you on many times. I have a lot of people come back on the show, so I hope to have you on many more times to talk about your upcoming book about stress, to talk about happiness, your life, and AI, of course, to get an update. So, Mo, where can everybody learn more about you and everything that you do?
First of all, thank you so much for having me.
Thank you for introducing me to your followers.
It has been a very energizing conversation.
Thank you for that.
First thing is before they come and look for me
and where to find me is please share this with others.
This is something that a lot of people need to hear about.
I'm available on mogawdat.com, so that's my website,
available on most social media sites,
but I'm more active on LinkedIn and Instagram.
and my podcast is Slo Mo, S-L-O-M-O, which is top five in well-being,
so something that I think we should focus on more.
And, yeah, just message me if you have a question,
and I try to answer every message.
Amazing.
Mo, we'll put all those links in the show notes so everybody can find you.
Thanks so much for coming on Young and Profiting Podcast.
Thank you for having me.
I absolutely loved this conversation with Mo Gawdat.
This went viral on YouTube when I first put it out, and there's a reason: he talked some
serious game about AI today, and he made one thing crystal clear. AI isn't just a trending headline.
It's the operating system of the next decade. The people who will win in this new era are the ones
who master the tools, lead with ethics, and double down on authentic human connection. Here are the
plays that I want you to run when you think about AI for your business. Number one, get hands-on with
AI right now. Mo said it best: you're not going to lose your job to AI. You're going to lose it to
somebody who knows how to use AI better than you. Audit your workflow, then assign AI to concrete
tasks like drafting proposals, repurposing content, conducting your research, writing base code,
whatever it is. It depends on your job. Open up ChatGPT and ask for the top AI tools for
your exact role or your exact business, then pilot two this week and measure your time saved
and measure all the efficiencies that you've gained.
You've got to start testing AI.
Second, set a hard line on ethics.
Label AI generated content, refuse deep fakes and manipulative tactics,
and establish a written AI policy for your team
covering things like disclosure, privacy, and source verification.
If you wouldn't want your family exposed to a tactic,
do not deploy it in your business.
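One way to make that second play operational is to bake the disclosure rule into whatever publishing workflow you already use. The sketch below is a minimal, hypothetical example, not part of any real platform or standard: the Draft fields, the policy function, and the disclosure wording are all assumptions chosen just to show a pre-publish gate that refuses unlabeled AI-generated content.

```python
from dataclasses import dataclass

# Hypothetical pre-publish gate enforcing an AI-disclosure policy.
# Field names and rules are illustrative, not taken from any real tool.

@dataclass
class Draft:
    body: str
    ai_generated: bool
    disclosure: str = ""  # e.g. "Drafted with AI assistance."

def enforce_policy(draft: Draft) -> Draft:
    # Block AI-generated content that carries no disclosure label.
    if draft.ai_generated and not draft.disclosure:
        raise ValueError("Policy violation: AI-generated content must be labeled.")
    # Append the label so readers always see it.
    if draft.disclosure:
        draft.body = f"{draft.body}\n\n[{draft.disclosure}]"
    return draft

post = Draft(
    body="Five AI tools we tested this week...",
    ai_generated=True,
    disclosure="Drafted with AI assistance; reviewed by a human editor.",
)
print(enforce_policy(post).body)
```

The same idea extends to the rest of the written policy: privacy and source-verification rules work best as checks in the pipeline rather than lines in a document nobody reads.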
Third, make human connection your competitive advantage.
As intelligence gets commoditized, empathy, trust, taste,
and presence will rise in value. That is something that AI can't duplicate. So host more live touchpoints with your
audience. Personalize your client communication. Create community moments that your competitors
cannot automate. In summary, you've got to build a brand that feels undeniably human. That
will be your competitive advantage. Mo also warned about the concentration of power and the end
of truth with AI. Protect your business by building owned channels, verifying sources before you
post, and investing in first-party data and relationships. This is the new wave of our
lives. Learn the tools, choose integrity, and lead with your heart. That's how we stay profiting
in an AI world. Thank you so much for tuning in to this special episode of Young and Profiting.
If you listened, learned, and profited from this AI Vault episode, share it with somebody who's
also curious about AI. If you prefer to watch your podcast, you can find all of our videos
uploaded on YouTube. And if you haven't already, be sure you subscribe to our YouTube channel
and join our growing community on there. We're also now on Spotify video. If you want to watch
your podcast on Spotify, you now can do that. And if you guys enjoyed this episode, please
consider dropping us a five-star review wherever you're tuning in, Apple, Spotify, YouTube. We love
getting your reviews and comments. It keeps us going here at Yap. You can also stay connected
with me on Instagram at Yap with Hala or LinkedIn by searching my name. It's Hala Taha. And before we go,
I got to say thank you and big love to my hungry, scrappy, happy Yap Media team. You guys are
absolutely incredible. Thank you for all your hard work for putting on this show and making it happen.
This is your host, Hala Taha, aka the Podcast Princess, signing off.
