We Study Billionaires - The Investor’s Podcast Network - TIP348: Will Artificial Intelligence Take Over The World? w/ Cade Metz
Episode Date: May 9, 2021. In 2020, there were six companies that made up 25% of the S&P 500, and you know which ones they are: Facebook, Amazon, Apple, Netflix, Google, and Microsoft. The common denominator driving the growth for all of these companies is artificial intelligence. Google's CEO, Sundar Pichai, has described the development of AI as "more profound than fire or electricity," but it is still very misunderstood. Everyone has a sci-fi image that appears in their head when they hear about AI, but what actually is it? Where did it come from? How fast is it growing? To answer these questions, Trey Lockerbie sits down with New York Times writer and author Cade Metz to discuss his new book, Genius Makers, which lays out the story of how AI came to be. IN THIS EPISODE, YOU'LL LEARN: The history of AI; how it works and why it is used; the challenges for AI ahead; some ethical dilemmas surrounding it; and most importantly, which companies will benefit from it the most. BOOKS AND RESOURCES Join the exclusive TIP Mastermind Community to engage in meaningful stock investing discussions with Stig, Clay, and the other community members. Cade's new book Genius Makers. New York Times articles by Cade Metz. NEW TO THE SHOW? Check out our We Study Billionaires Starter Packs. Browse through all our episodes (complete with transcripts) here. Try our tool for picking stock winners and managing our portfolios: TIP Finance Tool. Enjoy exclusive perks from our favorite Apps and Services. Stay up-to-date on financial markets and investing strategies through our daily newsletter, We Study Markets. Learn how to better start, manage, and grow your business with the best business podcasts.
SPONSORS Support our free podcast by supporting our sponsors: River Toyota Range Rover Fundrise AT&T The Bitcoin Way USPS American Express Onramp SimpleMining Public Vacasa Shopify HELP US OUT! Help us reach new listeners by leaving us a rating and review on Apple Podcasts! It takes less than 30 seconds, and really helps our show grow, which allows us to bring on even better guests for you all! Thank you – we really appreciate it! Support our show by becoming a premium member! https://theinvestorspodcastnetwork.supportingcast.fm Learn more about your ad choices. Visit megaphone.fm/adchoices
Transcript
You're listening to TIP.
In 2020, there were six companies that made up 25% of the S&P 500.
And you know which ones they are: Facebook, Amazon, Apple, Netflix, Google, and Microsoft.
The common denominator driving the growth for all of these companies is artificial intelligence.
Google CEO Sundar Pichai once described the development of AI as more profound than fire or electricity.
But it is still very much misunderstood.
I'm sure everyone has a sci-fi image that appears in their head when they hear about AI.
But what actually is it?
Where did it come from?
How fast is it growing?
To answer these questions, I sit down with New York Times writer and author Cade Metz
to discuss his new book, Genius Makers, which lays out the story of how AI came to be.
In this episode, we cover the history of AI, how it works, and why it is used, the challenges
ahead, some ethical dilemmas surrounding it, and most importantly, which companies will benefit
from it the most. I learned a ton from Cade's new book and this discussion, so I hope you
enjoy it as much as I did. So without further ado, let's learn about AI with Cade Metz.
You are listening to The Investors Podcast, where we study the financial markets and read the books
that influence self-made billionaires the most. We keep you informed and prepared for the unexpected.
All right, everybody, I am sitting here with Cade Metz, the author of the new book,
Genius Makers, as well as other publications.
Cade, I'm so happy to have you on this show because artificial intelligence is a really
fascinating topic, and I think it's kind of underserved, and it is the driving force behind so many
amazing companies that we talk about all the time.
So I'm happy to dig in on it a little bit more with you.
But my first question, I guess, for you is what drove you to write this book about artificial intelligence?
It first started, actually, when I came back from Seoul, South Korea in 2016.
I had flown to Seoul to see this event and write about it for my employer at the time, Wired magazine.
This lab in London, DeepMind, had built a system to play the ancient game of Go,
which is like the Eastern version of chess, except it's exponentially more complicated.
And most people in the AI field, as well as the world's top Go players, thought that a machine
that could beat the top players at the game was still decades away. But at the Four Seasons Hotel
in downtown Seoul, this machine built by DeepMind ended up winning this match, beating Lee Sedol,
who was the best Go player of the past 10 years. And there was this incredible moment when I was
watching the creators of this system, the researchers at DeepMind, watch this machine play this
match, this machine that they had created. And even they were surprised by how it was performing
and what it was doing. And it was playing the game at a level that they never could. And that was
a fascinating phenomenon. And when I came back, I resolved to write a book about the people
building this type of technology, including Demis Hassabis, who was the leader of the DeepMind lab. But as I dug into the book, I found so many other fascinating characters whose stories
sort of wove in and out of Demis's. And I found there was a great narrative story to tell about
the creation of what we call AI. And using that narrative and those characters, I could build on top of that and really explore
all the big ideas that you alluded to, all the ways that this technology is changing and how
it will change our world.
Well, there are a lot of characters in this book, right?
And one in particular I had never heard of before, and that was Geoff Hinton.
So my goodness, what a career Geoff Hinton has had.
It was just mind-boggling to hear his journey.
Why don't you talk to us a little bit about Geoff and his importance to this evolution?
As I built the book, and this wasn't necessarily what I expected as I set out to write it, what I found was that Geoff Hinton was the central character in this story. It was inevitable that the book begin with him and end with him, in a way.
It's a 50-year journey, really, of this one person.
And he's fascinating in so many different and unexpected ways.
You learn in the first sentence of the book that he literally does not sit down. When he was a teenager, he was lifting a space heater for his mother in England
where he grew up, and he slipped a disc. And by his late 50s, his disc would slip so often
that it could lay him up for weeks and often months at a time. And so he literally got to the
point where he realized that he could no longer sit down. And what this means is he can't drive.
He can't fly in an airplane because the commercial airlines make him sit during takeoff and landing.
He's someone who has faced these enormous personal obstacles as he's trying to bring this one idea
that drives so much of the AI that we experience today and will experience in the future.
As he's trying to realize this idea, he's facing these very personal obstacles.
And it became a great metaphor for this 50-year-long effort to take this one idea and realize it. That became the thrust of the book.
Well, you mentioned this 50-year journey, and I'm not sure people are actually familiar
with how long this has been going on and how long it's taken to get us to where we are today.
You talk a lot in the book about the founding fathers of AI, going all the way back to
1956, when they predicted a machine would be intelligent enough to beat a world chess champion
or prove its own mathematical theorems within a decade.
And, of course, it took much, much longer than that.
So maybe give us a quick synopsis of the timeline going from the Mark I Perceptron to AlphaGo.
That one idea that I talked about is called a neural network.
And the easiest way to understand this idea is that a neural network is a mathematical system
that can learn a skill by analyzing data.
If you've got thousands of cat photos and you feed them into a neural network, it analyzes
those photos and it learns to recognize a cat.
It identifies the patterns that define what a cat looks like. And that's an idea, as you alluded to,
that dates back to the 50s. And in the late 50s and early 60s, a guy named Frank Rosenblatt,
who was a professor at Cornell University and worked at a lab in Buffalo, New York,
built what he called the Mark I perceptron. And it was an early version of a neural network.
And it worked in very simple ways. Basically, if you gave it large printed letters, like a printed letter A or a B or a C, and you gave it many examples of those letters, it could learn to recognize them, which is an impressive task for a machine, particularly in the early 1960s. But it couldn't do much
more than that. And as he hyped the field, including in the pages of my current publication, the New York Times, there was this groundswell of belief that this system would do all sorts of other
things, but it didn't quite pan out. And by the late 60s and early 70s, the idea was practically
dead. And what was so fascinating to me is that at that moment, when this idea is at its lowest
point, that's when Geoff Hinton embraced it. He was a graduate student at the University of Edinburgh in
1971. And that's when he took hold of this idea and never let go, even as the idea sort of ebbed and flowed in the estimation of his colleagues. At times the skeptics were his advisor and the people working right alongside him. Even as their extreme skepticism was standing right in front of him, he
continued to work on this idea. And that's the kernel of any great story, right? Someone who
believes in something, even in the face of that type of skepticism. And what you see is him eventually
realizing about 10 years ago in 2010, that single idea starts to work.
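The learning idea at the heart of Rosenblatt's perceptron, as described above, can be sketched in a few lines of Python. This is a toy illustration with made-up 2-D features standing in for letter images, not the actual Mark I system:

```python
# Toy perceptron: learns to separate two classes of 2-D points.
# A simplified illustration of Rosenblatt's learning rule, not the
# actual Mark I Perceptron (which was custom analog hardware).

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs, label in {-1, +1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else -1
            if pred != y:  # misclassified: nudge weights toward the example
                w[0] += lr * y * x[0]
                w[1] += lr * y * x[1]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else -1

# Two linearly separable clusters (say, "letter A" vs. "letter B" features)
data = [((2.0, 1.5), 1), ((1.5, 2.0), 1), ((-1.0, -1.5), -1), ((-2.0, -0.5), -1)]
w, b = train_perceptron(data)
```

With linearly separable examples like these, the rule provably converges; Minsky and Papert's critique, discussed below, was that many interesting problems are not linearly separable for a single layer.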
So let's touch on why it died in those early days. And it seems to go hand in hand with
Marvin Minsky and Seymour Papert, who published this book called Perceptrons that basically
hindered the belief that neural networks had any merit. And it just became this belief that
it would never work. And this kind of just perpetuated throughout the whole community. And
And I'm just curious where you think we might be today if they had never published that book.
It's such an interesting story, right?
The neural network, as built by Frank Rosenblatt, did have a flaw.
It was good at recognizing those printed letters, but it couldn't recognize handwritten letters, right?
If there was any sort of variation in how the letter was put together, it didn't work.
It certainly couldn't recognize a cat photo.
And it couldn't, as Rosenblatt had promised, recognize the spoken word or do all sorts of other
extravagant things that he had promised.
It had a mathematical flaw, and that's what Minsky pinpoints.
But what ended up happening, you're right, is because of that book, people quit working on the idea.
There was still hope that that flaw could be fixed, and that's what Geoff Hinton ended up doing.
And what you might have had was more people working on it and maybe finding the solution to that problem quicker.
But what's interesting is that Geoff did not quit, right? Among others, there's like a handful of others who continued to work on this idea.
In Geoff's lab at the University of Toronto, where he ended up, his students like to say that the theme was old ideas are new.
And what that meant was that until an idea had been completely disproven, you kept working on it until you found the solution.
And that's what ended up happening with the neural network.
In the mid-80s, Geoff, along with a couple of other researchers, found the solution to that
flaw, gave the neural network that missing mathematical piece.
And as they described it in the 80s, with that piece in place, that's pretty much what we
have today.
And it's driving all sorts of things in our daily lives when it comes to recognizing objects
in photos, a technology, by the way, that can also be applied to self-driving cars.
That's how self-driving cars see the world around them, how they recognize pedestrians
and street signs and the like.
It's what Siri uses on your iPhone.
It's how Siri recognizes the words that you say, the commands that you speak when you're
asking it for something.
The list goes on.
I'm curious.
Part of it is that mathematical revolution, but how much of it was also just processing power
and just waiting for the processing power almost to catch up to what is needed today to run an AI algorithm?
That's exactly what was needed.
So by the mid-80s, you had the math in place, in part because of the work of Geoff Hinton.
And a neural network could do some interesting things in those days.
But it couldn't reach the levels that we have today because of those two things.
You needed the data, you needed enormous amounts of data to train these systems.
You needed the photos and the sounds and the text.
And then you needed the computer processing power to crunch all that data, to analyze all that data.
By 2010, we had both.
The Internet gave us the data.
That's what gave us all the photos and the sounds needed to train this stuff.
And then Moore's Law, as they call it, had progressed to the point.
And we can talk about this at length later, perhaps.
But we had the chips that we needed to process all that data and pinpoint those patterns that can recognize spoken words or identify faces and other objects.
So you highlighted self-driving cars just now. And I wanted to talk about that because in your book, one thing I learned is that self-driving cars have actually been around since 1989, when Dean Pomerleau built ALVINN, which used these neural networks. How have self-driving cars evolved since the late 80s?
Well, that's such a great example of where the neural network started to work and show that
sort of promise. Basically, what Dean Pomerleau and his fellow graduate students at Carnegie
Mellon University did is they built a truck with a giant camera on top of it. And it moved
very slowly. But what it would do is capture images of the world around it. And once you had all
those images, you could feed that into a neural network as well as the way that human drivers
would respond to what was around the car. As the car is seeing this particular scene, the driver
is behaving in a certain way when it comes to turning the wheel or pressing the gas. All that
gets fed into a neural network, and essentially the neural network learns to drive the car.
Now, there are real limitations there. The car would move very, very slowly when it was driving on its own. And it couldn't do much more than navigate a highway, a relatively straight shot. But they could drive this car across Pennsylvania in this way.
What we again needed was far, far greater amounts of data to train that car and the processing power.
They had neither of those things. But you can see the seed of this idea working.
It's really a lesson in how long it can often take to realize a technology.
Just because something isn't working at the moment doesn't mean it will never work.
There's a very, very long runway for a lot of these big technological ideas.
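The ALVINN recipe Metz describes, recording what the camera sees alongside what the human driver does and fitting a model from one to the other, is what's now called behavioral cloning. A minimal sketch, with a single invented "road offset" feature in place of camera images and a linear model in place of ALVINN's small neural network:

```python
# Behavioral-cloning sketch: learn a driving policy from human demonstrations.
# ALVINN trained a small neural network on camera images; here a single
# linear weight over a 1-D "offset from lane center" feature stands in for both.

def fit_steering(demos, lr=0.05, epochs=200):
    """demos: list of (road_offset, steering_angle) pairs from a human driver."""
    w = 0.0
    for _ in range(epochs):
        for offset, angle in demos:
            error = w * offset - angle  # model's prediction vs. human action
            w -= lr * error * offset    # gradient step on the squared error
    return w

# Invented data: the human steers proportionally against the car's drift.
demos = [(-1.0, 0.5), (-0.5, 0.25), (0.5, -0.25), (1.0, -0.5)]
w = fit_steering(demos)

# The learned model's steering command when the car drifts 0.8 to the right:
steer = w * 0.8
```

The same limitation Metz notes applies to the sketch: the model only handles situations that resemble its demonstrations, which is why the real systems needed vastly more data.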
So I found it interesting that Tesla, for example, in their self-driving cars, are only relying on cameras.
Whereas Baidu and even Google, they're relying on LIDAR, radar, as well as these cameras.
So I'm just curious to hear your thoughts about Tesla's approach and moving fully away from those other technologies.
Is that something inherent in their IP that should be noticed in their valuation?
It's a big difference from the way others are building their cars.
And it illustrates this very thing we're talking about.
They want to build a self-driving car like Dean Pomerleau's car in the 80s.
Dean Pomerleau's car solely used a neural network to learn to drive.
That's fundamentally the way it worked.
It would gather the data.
You do this with human drivers behind the wheel.
You feed that data into a neural network, and it learns to drive.
Elon Musk and Tesla want to do that with modern technology.
Its cars are always driving the roads and gathering that data through their camera.
And as you collect more and more data, you can feed more and more data into this giant
neural network that can learn the behavior that a car needs to really navigate the roads. Now, that's an enormous task. We're not to the point where you can do that.
We don't have enough data. We don't have systems that can learn every scenario that a car is going
to have to learn to deal with all the uncertainty and all the chaos on the road. But that's their goal.
You're right. That's different than the way self-driving cars work today and what others are trying to do.
What they do in part is they use LIDAR to map the world. They give the car a map of where
it's going to go. This is why these cars often have to roll out city by city. You have to map
San Francisco first to help the car navigate. Tesla wants to do away with that. They want to gather
enough data, feed it into this giant neural network to learn everything so that you can take that
learning and deploy it anywhere in the world. That's an enormous task, but that's their goal. And it really
shows you the two philosophies that are at work today. We'll see who's able to get there first.
Let's take a quick break and hear from today's sponsors.
All right.
Back to the show.
I once heard that Google had developed that thing called reCAPTCHA, right? When you're trying to log in somewhere and it's trying to check your identity, or that you have the correct password, it'll basically prove you're not a robot, right? By asking you, hey, in this photograph, identify the stop sign or identify the stoplights, and you click on the little squares and you identify what you see. And that is actually training their AI to see these things out on the road. All these people signing into their accounts are actually helping the AI learn what those images are and how to identify them in the real world.
That's exactly right.
And it demonstrates, again, a neural network.
It shows you how fundamental this idea is.
So in addition to sort of the Tesla example we talked about, here a neural network is used just for perception.
It's a way for a self-driving car, whether it's a Tesla car or it's a Google car or a car
from Toyota.
It's a way of identifying objects on the road.
So a street sign or a pedestrian.
And the way you train that system is you need thousands of examples of a stop sign, right? And you need to feed that into a neural network. But you have to label the data, as you say. You have to identify a stop sign for the neural network. And that's what you're doing when you're signing up for those services. You're just saying, this is a stop sign. And once you do that,
then the system can learn the task on its own, as long as it has a sufficient number of
stop signs that have been labeled.
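The labeling loop just described, where humans tag examples and the system then generalizes to new inputs, is supervised learning in a nutshell. A toy sketch, with made-up two-number feature vectors standing in for photos and a nearest-neighbor lookup standing in for a trained neural network:

```python
# Supervised learning from human-labeled examples (toy illustration).
# Real systems train deep neural networks on millions of labeled photos;
# a nearest-neighbor lookup over tiny invented feature vectors shows the
# same principle: labels go in, generalization to new inputs comes out.

def nearest_label(labeled, query):
    """Return the label of the labeled example closest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled, key=lambda ex: dist(ex[0], query))[1]

# Each click on a CAPTCHA grid effectively produces a (features, label) pair.
labeled = [
    ((0.9, 0.1), "stop sign"),
    ((0.8, 0.2), "stop sign"),
    ((0.1, 0.9), "pedestrian"),
    ((0.2, 0.8), "pedestrian"),
]

label = nearest_label(labeled, (0.85, 0.15))  # a new, never-labeled input
```

With enough labeled stop signs, the new input lands near them in feature space and gets the right label, which is the sufficiency point made above.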
Well, it highlights perhaps one of the challenges ahead for AI and its evolution because clean
data is very important, right?
Not all data is considered equal, I guess.
And so clean data is very important.
And I've read that, you know, 25% of the machine learning task is involved in just cleaning up the data.
And I'm curious how much that matters to you.
For example, Tesla is collecting just raw data from the streets, whereas something like Google's Waymo is mainly doing simulated miles and not necessarily
having cars out there on the road. Does either one produce an advantage in your mind?
It's a mix of that. Google has cars on the road as well. They also do simulation.
Tesla's going to use simulation as well. They're not to the point where they're just relying
on real world data. It's always a mix, and the mix can vary. So it's about finding the
right balance between those two things. You can do a lot in simulation. Cars can learn tasks in the same
way through a neural network through simulated scenarios. Basically, it's like a video game. Like you can
create a city for a car to navigate and you can train the car in that environment. But you're going
to miss those edge cases that you might encounter in the real world. You're not going to have every
situation defined in the simulation. That's why you have to do real world as well. So it's a balance
of the two. Ideally, you would have a simulation that represented everything in the world,
and you could feed that into a giant neural network and train it that way. That's the goal,
but that is still a pipe dream, right? We're not to the point where you can simulate the universe
and then have enough computing power to train your system using that simulation. We're by no
means there, but that is the goal. So Microsoft appears to have had challenges getting its AI department
up and running the same way as some of these other companies. And of course, if you think about these
neural networks, it's easy to understand how they apply to things like ad spend and being able to
target people correctly and predict what they might buy and serve this up in the ad space. So maybe
that's where it wasn't so applicable to Microsoft, but I'm curious, they've had a very long journey
with AI that has only kind of recently, I think, gotten up to par with some of the other companies.
They're sort of the tortoise in this race, I think. What do you think Microsoft's future looks like
as it compares to some of the other players in the space? Microsoft responds to the situation
very differently than some of its rivals. It's amazing to me how these companies develop
particular personalities. And because of those personalities, so to speak, they will respond
to what's happening in such different ways. A neural network, largely because of Geoff Hinton and some of his students, starts to work around 2010.
And the area where it starts to work first is speech recognition.
So that Siri example we talk about where you can speak a word into your phone and it can recognize
it.
That starts to work in a Microsoft lab outside of Seattle with Geoff Hinton and two of his students.
He's traveled from the University of Toronto and has this working at Microsoft.
And that type of speech recognition is pervasive now.
It will become enormously important to our daily lives.
But Microsoft, not only was it slow to kind of embrace that,
but it didn't really have a place to put it.
Let's not forget that.
So what happens over the next couple of years is that Google deploys that on Android phones.
Google had a platform that it could use to exploit this speech recognition technology. They can get that onto a phone that was already in the hands of millions
of people, and they can start to use that technology. Microsoft was behind in the smartphone race.
They had already lost that race in some way. What that meant was they didn't have a place
for that speech recognition technology. But what I will also add is that Microsoft was slow to
realize that that same idea, a neural network, which was working so well in their lab with speech
recognition could also be used in all these other areas. It looked like to many people at the
moment that it was only a speech recognition technology. They didn't realize it was also a way
of recognizing objects and images, faces and images, of driving the types of robotics we talked about,
whether it's self-driving cars or robots in a warehouse or manufacturing robots. And now it's starting
to work with text, chatbots, as they call them. You can train a neural network to carry on a conversation. There are so many other areas where this has started to work. And Microsoft
wasn't alone in failing to realize that would happen. Most of the tech industry, most of the
field, the AI field, didn't realize this would happen. It was such a weird idea at the time.
There were only a handful of people who really believed in it over the decades. There was such skepticism that it was hard to break out of that, even as it started to work in one area and even two areas. The tech industry was slow to respond. But the industry is now catching up,
and Microsoft has caught up to this and other ideas. But, you know, it still doesn't have
some of the infrastructure that it needs to compete in some of these areas. Microsoft, for instance,
is not building a self-driving car, right? They still don't have a smartphone, but they can compete in other ways.
in other way. Am I right reflecting on the book that Andrew Ng, I think, was at Microsoft
wanting to build a self-driving car solely to develop the technology,
not even to release the self-driving car,
and then ultimately ends up at Baidu?
This is actually a guy named Qi Lu.
He was, yes, a fascinating guy.
We can get to this amazing story of how he tries to change the direction of Microsoft.
He was one of the top Microsoft executives
and started to realize that this idea was working.
And you're right.
One of the things he wanted to do was to convince the company to build a self-driving car, not just to put that sort of car out on the market, but because that's a way of learning where the industry is going technologically.
It's a way of seeing the new technologies that are coming to the fore. His analogy was that
Google had learned this through its search engine. There are so many technologies you need to make
that work. And by the way, nowadays, that includes a neural network. But the search engine was a way not only of serving a market for Google, but of learning so many of the other
technologies that would become important in the years to come. So what he advocated was
going all in on a self-driving car, if only for the future of the company in general.
What also fascinates me about him, and this is a story that as I heard it, I couldn't believe
it. What he wants to do is change Microsoft's direction, even in this more fundamental way. He realizes that over the course of three decades, as it was rising into one of the most powerful companies on Earth, Microsoft had become set in its ways. This happens to
companies. They develop these personalities, as I said, and they see the world in a certain
way. And as the world starts to change, it's hard for them to change course. And what he did was,
together with a couple of friends of his, fellow technologists and engineers, he builds what he
calls a backwards bicycle. It's a bicycle where when you turn the handlebars left, it goes right.
And when you turn them right, it goes left. He resolved to learn to ride this bike, which is
incredibly hard, by the way. It takes weeks or months to learn to do this and essentially
forget everything you've learned when it comes to riding a bicycle. But he resolved to do this
because he felt it would show the company and his fellow executives that you could change
your way of thinking and that a corporation could change its way of thinking. And so he resolves
to do this and eventually gets all his fellow executives on this bicycle, and this is going to be
his way of moving Microsoft into the future.
And I don't want to give the punchline because it's too good.
But that's a key moment in the book where you see the way these giant companies operate
and how difficult it can be for them to change.
Trying to teach an old dog new tricks, basically.
You got it.
You touched on Google.
I want to talk about them because from your book,
you just get a sense for how much further along they are, in so many ways,
than some of these other companies,
and how much of an advantage
just the data pool they have is.
Walk us through the deep learning
that allowed Google to go from just punching keywords
into a search bar
to now being able to ask questions
in the search engine,
and what that meant for the company.
This is the big area of progress right now.
We talked about a neural network
working with speech recognition,
then with image recognition.
Now it's what's called natural language understanding,
the ability for a machine to understand the way we humans piece language together.
And this works in the same basic way.
You now have what they call universal language models,
and that's essentially a giant neural network where you just feed text into it.
This includes thousands of digital books, Wikipedia articles,
all sorts of other content from the internet,
including conversations that you and I might have or chat services.
You feed all that into a neural network, and it learns to recognize the vagaries of English,
right, how you and I piece those words together.
The remarkable thing is that you can then take that model, which trains, by the way, over months.
It learns language after months of analyzing all that data.
But you can take that and you can apply it to all sorts of tasks.
That includes question and answer.
You can apply it to a system like a search engine where you and I are asking a question
and it's giving a response.
You can apply it to chatbots.
It helps these systems literally carry on a turn-by-turn conversation.
And that is something that has always fascinated the AI field.
For 50-plus years, researchers have been trying to build a system that can carry on a conversation
in the way you and I do.
There's real progress there.
And it's also a way for these systems to generate their own books, generate their own articles,
tweets, blog posts.
It's another area where we're seeing huge progress, which is very promising in a lot of ways and also very scary in other ways.
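The idea of feeding raw text into a model so it learns how words are pieced together can be made concrete with a toy sketch. This is a minimal bigram counter in Python, purely illustrative (the corpus and function names are invented here); real universal language models are giant neural networks, not count tables, but the principle of learning word co-occurrence from unlabeled text is the same.

```python
from collections import defaultdict, Counter

def train_bigram_model(corpus):
    """Count, for each word, which words tend to follow it in the training text."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Return the word most often seen after `word` during training, or None."""
    followers = model[word.lower()]
    return followers.most_common(1)[0][0] if followers else None

# Invented three-sentence "corpus" standing in for books, Wikipedia, chat logs
corpus = [
    "the model reads the text",
    "the model learns the patterns",
    "the model shapes the answers",
]
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "model" — the most frequent follower of "the"
```

The same trained table can then be reused for different downstream tasks (completion, simple question answering), which is the pre-train-then-apply pattern described above, shrunk to a few lines.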
Let's take a quick break and hear from today's sponsors.
No, it's not your imagination.
Risk and regulation are ramping up, and customers now expect proof of security just to do business.
That's why Vanta is a game changer. Vanta automates your compliance process and brings compliance,
risk, and customer trust together on one AI-powered platform. So whether you're prepping for
a SOC 2 or running an enterprise GRC program, Vanta keeps you secure and keeps your deals moving.
Instead of chasing spreadsheets and screenshots, VANTA gives you continuous automation across more than
35 security and privacy frameworks. Companies like Ramp and Ryder spend 82% less time on audits with Vanta.
That's not just faster compliance, it's more time for growth.
If I were running a startup or scaling a team today, this is exactly the type of platform
I'd want in place.
Get started at Vanta.com slash billionaires.
That's Vanta.com slash billionaires.
Ever wanted to explore the world of online trading, but haven't dared try?
The futures market is more active now than ever before, and Plus 500 Futures
is the perfect place to start.
Plus 500 gives you access to a wide range of instruments, the S&P 500, NASDAQ, Bitcoin, gas, and much more.
Explore equity indices, energy, metals, FX, crypto, and beyond.
With a simple and intuitive platform, you can trade from anywhere, right from your phone.
Deposit with a minimum of $100 and experience the fast, accessible futures trading you've been waiting for.
See a trading opportunity?
You'll be able to trade it in just two clicks once your account is open.
Not sure if you're ready? Not a problem.
Plus 500 gives you an unlimited, risk-free demo account with charts and analytic tools
for you to practice on.
With over 20 years of experience, Plus 500 is your gateway to the markets.
Visit plus500.com to learn more.
Trading in futures involves risk of loss and is not suitable for everyone.
Not all applicants will qualify.
Plus 500, it's trading with a plus.
Billion dollar investors don't typically park their cash in high-yield savings accounts.
Instead, they often use one of the premier passive income strategies for institutional investors,
private credit.
Now, the same passive income strategy is available to investors of all sizes, thanks to the
Fundrise Income Fund, which has more than $600 million invested and a 7.97% distribution rate.
With traditional savings yields falling, it's no wonder private credit has grown to be a trillion-dollar
asset class in the last few years. Visit fundrise.com slash WSB to invest in the Fundrise income fund
in just minutes. The fund's total return in 2025 was 8%, and the average annual total return
since inception is 7.8%. Past performance does not guarantee future results. Current distribution
rate as of 12/31/2025.
Carefully consider the investment material before investing, including objectives, risks,
charges, and expenses.
This and other information can be found in the Income Fund's prospectus at fundrise.com
slash income.
This is a paid advertisement.
All right.
Back to the show.
What is the point of that ultimately, right?
Like logically you would think or just assume that, okay, yeah, they're developing all
this to replace human beings, to make the company more efficient,
to not have to rely on human error or human judgment.
But that's not really the case, right?
I mean, Facebook has 15,000 employees just working on sort of monitoring the data coming in and whatnot.
So it's not like it's really eliminating jobs.
So what is the ultimate end goal for this, in your opinion?
You're right.
And that's something we can talk about at length is the jobs question.
There's so much progress in all these areas.
And let's rope robotics in as well, right?
The robots, the self-driving cars, robots in the manufacturing facilities, and in the warehouse
are getting better and better and better.
They're not necessarily eliminating jobs.
We don't see those self-driving cars on the road now.
There are limitations to them.
And we're still trying to figure that out.
The progress has accelerated, but it still hasn't accelerated to the point where it's just sort
of eliminating jobs and replacing humans.
The technology is not there yet.
You're right. One of the places it's not there yet, we can really see the limitations is Facebook.
Facebook likes to talk about AI as a way of dealing with all the harmful and toxic content
on its service, whether it's hate speech or fake news. Identifying hate speech and fake news
is a very difficult thing, even for a human being. It's a judgment call. Some hate speech is
obvious. Other hate speech is not.
If you and I have difficulty pinpointing what should and should not be on Facebook, a machine
is certainly going to have the same difficulty.
So it's an example of where neural networks can help.
So if you want to say, prevent people from selling illegal drugs on Facebook, right,
you can feed a neural network thousands of examples of marijuana and teach it to recognize
a marijuana ad and eliminate that from the service. And there's progress there. But that's
different from a lot of the other things that Mark Zuckerberg, as you see in the book,
has told Congress that these systems will do: identify hate speech or remove fake news.
Those are enormously difficult problems, just like putting a car on the road that can deal
with all the chaos and uncertainty that we human drivers deal with. That's very, very hard.
Even as we're seeing progress, even as those chat bots get better, that doesn't mean they can
carry on a conversation as easily as you and I are doing now, as nimbly as you and I are doing.
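The pattern described above, feeding a system thousands of labeled examples and letting it learn to flag new ones, can be sketched with a deliberately tiny stand-in: a nearest-centroid classifier over made-up two-number "features." Real moderation systems use deep neural networks over images and text; the labels and numbers below are invented purely for illustration.

```python
def centroid(vectors):
    """Average the feature vectors belonging to one labeled class."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    """Assign x to the label whose class centroid is closest (squared distance)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# Hypothetical 2-D "features" standing in for what a real network would extract
labeled_examples = {
    "allowed": [[0.1, 0.2], [0.2, 0.1], [0.0, 0.3]],
    "flagged": [[0.9, 0.8], [0.8, 0.9], [1.0, 0.7]],
}
centroids = {label: centroid(vs) for label, vs in labeled_examples.items()}
print(classify([0.85, 0.85], centroids))  # "flagged": near the flagged examples
```

Notice what this toy shares with the real systems: it is only as good as its labeled examples, which is exactly why judgment calls like hate speech, where humans themselves disagree on the labels, remain so hard.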
Yeah, there's definitely an ethical concern around the development of this technology.
And you mentioned in the book that DeepMind, when they sold to Google, demanded that an ethics
board would be put in place to kind of oversee the progress and ensure that it was not going
to be developed with any kind of malicious intent. And Elon Musk, as you mentioned, is
famous for saying that AI is more dangerous than nukes. So it's very much a concern. And
furthermore, Google is working with the Department of Defense. And some of the employees were
protesting against one of its projects at one point, because who knows how it could be used.
So should an investor have concerns over owning pieces of companies like this, knowing that
their projects, which are quite secretive sometimes, might have ethical dilemmas, whether
it's technology that's being used for drone strikes, or simply extracting your health data,
profiting from that? What are your thoughts on that?
Well, if you're an investor, there may be concerns, but I think it really depends on what kind of
company we're talking about. What happened at Google was that it started working on a project
with the DoD to identify objects in drone footage. And that's something that could eventually
be used with weapons, right? It's a path towards autonomous weapons. But what you had was a consumer
company, a consumer internet company doing this. And it really surprised a lot of their employees.
And that's why you had that protest against what was called Project Maven, this DoD project.
Google ended up pulling out of the project because the protests grew to such a
level. Now, the situation is going to be different at other companies. You had some smaller
protests at Microsoft and Amazon, and both those companies, by the way, worked on that same project,
but it didn't have the same effect. Companies are built in different ways. You know,
Google employees over the years were encouraged to voice their opinions and
often push back against management, and you had that there, and you've seen it in other places.
Even Google, though, is starting to push back against that type of attitude.
And if you step outside those consumer giants and you have companies that are built
specifically, say, for working with the military, the dynamic is completely different.
If an employee goes to work at a startup that is designed to work with the military,
they're not going to have those same issues.
Now, there's still going to be ethical questions.
Autonomous weapons is the big, big issue.
And we have startups, as well as traditional defense contractors who are working to build that.
And there is concern about the path that we're taking there.
But we're not there yet.
And what people are realizing more and more is that if you clamp down on those efforts here in the U.S.,
it's just going to happen abroad with our rivals.
It's a big, complicated issue that we as a
society, a global society, will have to deal with. But you certainly have companies that are well-funded
who are working on this sort of thing. Now, I just wrote a piece about it in the New York Times.
A lot of these companies are outside of Silicon Valley. They're more in Southern California,
for instance, because the attitudes towards this type of thing are different. So if you're an
investor, it really depends on the dynamics within the company. Yeah, I've heard Google's ex-chief
Executive Eric Schmidt talk a lot about how important it is for these platforms to ultimately
be built in America, because you're right, they could go into other hands. And it brings up
a question around advantage. So Jensen Huang, Nvidia's founder, recently stated that Moore's
law, quote unquote, isn't possible anymore. Basically, due to more and more complexity,
teaching these machines is becoming more and more expensive, mainly due to electricity needs.
Do you see this potentially slowing down the progress of AI, certainly for smaller businesses,
because does it ultimately just concentrate technology further into the larger big tech companies
that can actually forward it?
There are two things that are going on there.
One, what Nvidia says is true, but they're talking their own book, as they say, right?
They are showing where they have an advantage.
The best way to think about this is for years and years and years, Intel built the chips
at the heart of our computers. They call them CPUs, the brain of a computer. That's what was in our laptops
and our desktops, and it's in the computer servers in these giant data centers that run Google
and Facebook and Amazon. And, by the way, those end up driving these neural networks,
training all these systems by analyzing all that data. But what Nvidia is saying is that the slowing of Moore's
law has prevented those Intel chips from improving at the rate they did. Moore's law said that every 18 months or so,
you could pack twice as many transistors onto the same size chip. And what that meant
was you were essentially getting more and more computing power out of these Intel chips. But that
has started to slow. So you're not getting as much performance, in terms of
gains year by year, that you had in the past. But this has not hindered the AI development
because what worked when it came to training these neural networks, oddly enough, was gaming
chips, these chips that would work in concert with Intel's chips. They were built to drive
video games and other graphics-heavy software applications. As it turns out, those were
ideally suited to the math that's used to train a neural network. So basically, you offloaded
that work from the Intel chips onto these graphics chips. And that's what we're seeing now.
We're seeing specialized chips built by companies like Nvidia used to train the AI. And there's
huge progress there to the point where many companies are now specifically building chips
to train these neural networks.
This is something that's happening in startups,
both here and in China and in the UK and other places,
but it's also happening inside some of these giant internet companies.
Google has built its own chip to do this.
It's called the TPU.
Amazon has done the same thing.
Microsoft is moving down a similar road.
So what you see is all sorts of companies building new chips
specifically for this type of AI.
And like in any market,
those big companies are going to have an advantage, right?
They have the infrastructure to run these chips.
The way they're served up to the world is through cloud computing services,
and they've got the money to do this as well.
So there is an advantage there in this area as in other areas.
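The reason graphics chips suit this work is that neural-network training is dominated by one operation: multiplying big grids of numbers, where every output cell can be computed independently and therefore in parallel across thousands of GPU or TPU cores. A plain-Python sketch of a single dense-layer forward pass shows the operation those chips accelerate (the specific numbers here are arbitrary, chosen only for illustration):

```python
def matmul(A, B):
    """Triple-loop matrix multiply, the workhorse of neural-net training.
    GPUs win because each result cell is independent and can run in parallel."""
    rows, inner, cols = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def relu(M):
    """Zero out negative values, a common neural-net activation."""
    return [[max(v, 0.0) for v in row] for row in M]

# One "dense layer" forward pass: y = relu(x W + b)
x = [[1.0, 2.0]]                   # a tiny input vector
W = [[1.0, -1.0], [0.5, 2.0]]      # learned weights (arbitrary here)
b = [0.0, -4.0]                    # learned biases (arbitrary here)
y = relu([[v + bj for v, bj in zip(row, b)] for row in matmul(x, W)])
print(y)  # [[2.0, 0.0]]
```

A real training run repeats matrix multiplies like this billions of times over far larger matrices, which is why chips built specifically for parallel arithmetic, Nvidia's GPUs and Google's TPU among them, took over the job from general-purpose CPUs.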
Now, is that why Demis and DeepMind approached games to begin with to develop their AI?
I want to talk a little bit about AlphaGo, like you mentioned earlier.
Is that the theory behind starting with games?
That's a separate thing, but it's a great thing to bring up.
Demis was a games player, as part of it, in the extreme, right?
This is someone who was a chess prodigy.
He was the second ranked under 14 player in the world when he was young.
And he ended up participating in this competition in Europe.
It was essentially a games playing championship of the world,
where games players would come from all over the globe to compete in a variety of games,
whether it was go or chess or poker, the list goes on.
Demis won this competition four out of its first five years, and the one year he didn't win,
he didn't enter.
This is part of who he is.
It illustrates his interest, also shows how competitive he is, how ambitious he is,
and that plays into deep mind as well.
But because of this, he realized that games were a great proving ground for AI.
He wanted to build technology and give people a real idea of where it was moving.
You want benchmarks to show the progress, and games are a great way of doing that.
And you saw this with that Go match in Seoul, South Korea.
That was an inflection point for the industry, because that system,
and at the heart of it was a neural network.
It won a match that captured the attention of Asia, certainly,
Like, you could feel this entire country, when I was in Korea,
concentrated on this match.
You could feel their emotions sway back and forth
as the match swayed back and forth.
It was a way of really getting people to understand what was happening.
Games are easy for us to understand.
And it happened here in the U.S. as well,
even though we're not go players,
in the way that the average person is a go player in Japan or China or Korea.
But it was a moment that people could really understand.
And that's part of what Demis is saying.
Well, I'm not surprised that after being at that event,
you got inspired to write an entire book on artificial intelligence
because that is just one of the most fascinating sporting events to some degree.
More people watched that than the Super Bowl, right?
And it was very different to see AlphaGo versus Lee Sedol than it was to see something like Kasparov versus Deep Blue in 1996, right?
Tell us the story a little bit around Move 37 in Game 2 and Move 78 in Game 4 in that match and what it represents.
You're right. It was very different than Kasparov.
I happen to be at both of these events, so I can speak with firsthand knowledge.
But the difference is, chess is a game where someone like Kasparov plays
several moves into the future, right? He can map out where the game is going, step by step.
And that's how Deep Blue, the IBM machine built to play chess, was built: to look forward into the
future of the game and solve the problem that way. You can't do that with Go. There are too many
possibilities. You can't go through them all. And you see this in the way the top players play.
They play by intuition, by feel. They often move a piece just because it feels like the right
thing to do. If you're going to build a system that can beat the world's top players, you're going
to have to mimic that sort of intuition. Fundamentally, you're going to have to do that. And that's why
that event was so amazing, because the system would mimic that and not just mimic it, but exceed
that sort of human intuition and play in ways that would surprise even the seasoned commentators,
who were qualified, accomplished Go players themselves. They couldn't understand what the machine was doing.
And that's what happened with Move 37 in game two.
It was this transcendent move. After the fact, the DeepMind researchers went into the system
and pinpointed that move and told me that the odds of a human player making that move
were one in 10,000. And the machine made it anyway, because it had trained to a level,
basically playing game after game against itself, which is the way it had been trained.
It trained to a level where it could outperform a human, and it could
decide to make that move, even though a human wouldn't do it. That's a fascinating moment. I often
say that it was one of the most amazing weeks of my life, and I wasn't even a participant. I was
just an observer. What you saw in game four, after Lee Sedol had lost the match, he'd lost the first
three games, which meant he'd lost the best-of-five series, but they kept playing.
And in game four, he himself had an equally transcendent move.
The odds of a human making move 78, as you mentioned, were the same odds, right?
One in 10,000.
He had his own moment.
And what he said afterwards was that the machine was teaching him new ways of playing the game.
And in the moment, you could see multiple examples of this.
He wasn't the only one who talked about that phenomenon.
And then a year later, when I went to China, the little town south of Shanghai to see this
machine play its next match against who was then the top player in the world, a 19-year-old
from China, the system had improved to the point where the human players couldn't compete,
for one. But also, you could see a year after it had first made its debut, you could see
so many of the world's top players changing the way they played the game.
because they had analyzed the games.
That phenomenon is very, very real.
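The training idea behind that system, playing game after game against itself and keeping statistics on which moves lead to wins, can be shown in miniature. This toy, an invented "race to 5" game learned from random self-play, is not AlphaGo's actual method (which pairs deep neural networks with Monte Carlo tree search), but it captures the self-play principle:

```python
import random

TARGET = 5  # toy game: players alternate adding 1 or 2; whoever reaches 5 exactly wins
random.seed(0)

def legal_moves(total):
    return [m for m in (1, 2) if total + m <= TARGET]

def self_play_game():
    """Play one game with random moves; return each player's (position, move) log and the winner."""
    total, player, history = 0, 0, {0: [], 1: []}
    while total < TARGET:
        move = random.choice(legal_moves(total))
        history[player].append((total, move))
        total += move
        if total == TARGET:
            return history, player
        player = 1 - player

# Tally how often each (position, move) appeared in a winning player's games
wins, plays = {}, {}
for _ in range(5000):
    history, winner = self_play_game()
    for p in (0, 1):
        for state_move in history[p]:
            plays[state_move] = plays.get(state_move, 0) + 1
            if p == winner:
                wins[state_move] = wins.get(state_move, 0) + 1

def best_move(total):
    """Pick the move with the highest observed self-play win rate."""
    return max(legal_moves(total),
               key=lambda m: wins.get((total, m), 0) / plays.get((total, m), 1))

print(best_move(3))  # 2: adding 2 from a total of 3 reaches 5 and wins immediately
```

The system discovers the winning move at each position purely from its own games, with no human examples, which is the sense in which a self-trained player can end up making moves no human would.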
That's sort of like the silver lining and all this to some degree, right?
Because it's a little intimidating to think about human beings just being replaced by robots.
But you point out that Lee Sedol went on to win, I think, seven of his next games against grandmasters after he had played AlphaGo.
And it shows that it actually is improving human beings.
So speaking of improving human beings, what are your thoughts on Elon Musk and his ideas around putting chips in the heads of human beings to compete with the AI that is to come?
Yes, he has not only said that that is a path forward.
He has built a company to do this very thing.
It's called Neuralink.
And quite literally, they want to put a chip in people's heads to provide an interface between your brain
and machines. This is a moment in my book as well. He talks about the time lag
between, you know, having a thought and having to key it into your phone, right? He wants to
reduce that to nothing. Now, it's quite an idea. And it brings up all sorts of ethical
questions, certainly. But before we even start to think about those, let's realize that
surgery of that kind, opening up the skull to put something inside,
is a very, very dangerous thing. And at this point, it's not something doctors want to do unless
there's a real reason to do it. If someone has a life-threatening injury, if they have some other
medical condition that needs dealing with, you're going to open up the skull. But you're not going to
do that with a healthy person. There are so many obstacles to doing that sort of thing.
But Musk is intent on doing it.
Well, the reason we're talking about AI to begin with is because in 2020, last year, the
S&P 500 ended the year with a 16.25% return.
And at the end of the year, Facebook, Apple, Amazon, Netflix, Google, Microsoft, just these
six companies made up 25% of the S&P 500.
And it just speaks to how valuable this AI is, how it's driving these valuations in today's market. And I want to just touch on or discuss
how AI might continue to contribute and compound these companies in particular versus other
startups that we might want to look at. Well, I think it's fundamental. And it gets back to what
we were talking about before. You brought this up: what these neural networks needed
after five decades of research was data and processing power, and it's those companies that have
those two things. They have these giant data centers that are filled with the machines that provide
the computing power and that store all that data, whether it's images or sounds or text. So when it comes,
for instance, to those language models that can drive everything from the Google search engine
to chat bots, so many other things, those companies have the advantage. It's just fundamental.
And if you're a startup or you're an academic lab, you just can't compete with that.
Now, what ends up happening is a lot of the technology ends up trickling down, so to speak,
to other parts of the industry and to academia.
What might seem like an exotic technology now goes down in price.
Things get open-sourced, shared.
And so it eventually makes its way to the academic labs and the startups.
But by that time, these giant companies have moved on to something else, right?
There's a gap there.
And that's very real.
And it concerns a lot of people.
It's just the way it is.
And we'll see how it plays out in the future.
It sounds like what you're saying is somewhat related to the internet bubble of sorts,
where at the time, these were all quote unquote, internet companies.
Now they're just companies because the internet is so disseminated around the world.
So are you kind of saying that all companies will ultimately be AI companies to some degree?
That is an argument I've heard.
Yeah, it's funny how AI is a weird term.
Coined in the 50s at a time, and you alluded to this too,
when these scientists were sure that they would build these systems that could behave
like the human brain in a matter of years.
That didn't happen.
And it still hasn't happened.
But we still call it AI.
Each step of the way, you know, we're making these small gains.
And what was AI in the past just becomes technology.
We continue to see that.
We might call it AI now.
In the future, it's just going to be part of our daily lives.
On this long road towards systems that can behave like the brain, we keep making progress,
and then it gets disseminated.
So what you say is true.
These systems that are so unusual and are the domain solely of these very large
companies will end up everywhere. Then they'll move on to another step.
Quantum computing, perhaps. It's your next book. Yeah. Exactly. Exactly.
Interestingly enough, PwC predicted that AI will add $16 trillion to the global economy by
2030. And McKinsey was predicting $13 trillion, which was the size of the global economy just in
2018. So which companies, maybe of the six I mentioned before,
stands out to you as the one that will benefit the most from these advances?
I think all those big companies will benefit.
The one thing we haven't talked about, this isn't just a U.S. phenomenon.
Baidu, which is often called the Google of China, was there from the beginning.
There's this moment at the beginning of the book where Geoff Hinton auctions his services
off to the highest bidder, the services of himself and his two students.
Baidu is there at that auction, right, realizing what is happening.
China is a huge player here, not only because they have their own internet giants,
but because the government is behind those companies in a way the government isn't behind
the American tech companies.
So this is a global thing.
The gap is between the big companies and the smaller ones.
And a lot of those big ones are in China.
I think that's the point.
That's a good point.
And I actually ended up taking a position in Baidu after my conversation with Cathie Wood
and learning a little bit about how they've approached it and interesting financials there
to check out for sure.
It also raises the question around chip suppliers. Are those another kind of company we
should be looking at as this improves, other companies that are providing the tools needed
to evolve this technology further?
Yes, but it's interesting.
A lot of it goes back to these internet giants.
You have Nvidia, a very important player in this field, we talked about them.
Intel's trying to get into the AI chip game.
They've tried multiple times, and they've been slow for various reasons.
It's like the phenomenon we talked about with Microsoft.
It's hard for these big companies to change direction.
Intel is trying.
There are all sorts of startups that are building this new breed of AI chip.
Many are here in the U.S., others are in China.
There's a big player in the UK.
But again, some of the central players, if not these central players, are the big internet companies.
It's Google.
It's Amazon.
Again, they are ahead of the game here.
There are really two AI chips that are used a lot at this point,
and that's Nvidia's chips and the TPU built by Google.
And we'll see others get into that game.
And Amazon is one of them.
But it's interesting how the power is still centered around the big internet companies.
I want to touch on one last thing.
What about companies that are going to benefit from this AI's ability to predict things like cancer or eye disease or other health-related companies that we should be taking a look at as this progress develops?
That's a really important area for many reasons.
It's something where the technology is really needed,
and it's a place where the technology can be really effective.
A neural network, just as it can recognize a stop sign, can recognize signs of illness
and disease in medical scans, whether they're X-rays or CT scans or the like.
I visited India at one point where diabetic blindness is a real problem, and they don't
have enough doctors to screen everyone in the country.
If you have AI systems, which are already starting to be tested, if you have AI systems that can identify those signs of diabetic blindness in eye scans, you can do a lot of good. Google is another player here. They have tested that type of technology at two hospitals in southern India, and I visited one of them. DeepMind, which we talked about, is also an early player in this area. A lot of what they were doing in the medical field has
now been moved back into Google.
So Google is a player here.
But you're also seeing pretty healthy startup ecosystem, not only here, but again in China,
working on this very thing.
And it can be applied to so many different types of disease, cancer detection, as well as
diabetic blindness.
It's a really hard thing to test and get approval for and deploy.
You need to make sure this stuff works.
and you need to make sure that we have the regulatory framework to deal with it.
But that's a big, big area.
Apple also comes to mind to me, right, with their Apple Watch, because I think, you know,
that thing is tracking your blood pressure or your heartbeat or whatever.
And at some point, it could even notify you if it detects something that's off
and even predict something just based on all the other people that are wearing these Apple Watches
and collecting all of that data.
So they might be a player as well.
That kind of prediction, whenever people talk about predicting things with these algorithms, that is a hard, hard thing.
And I think there's a good reason to be skeptical of that, whether it's predicting what the stock market's going to do or predicting something in the healthcare field.
There are specific areas where that works.
But outside of those areas, it's really, really difficult.
And in many cases, these types of algorithms we've been talking about don't work as well.
In those specific areas, in an eye scan, there are certain physical telltale signs that diabetic blindness is on the way.
The way it works today is the human doctor looks for those telltale signs.
Now we have machines that can do that.
Again, it's something that can be identified and labeled by people, and then the systems learn to do it.
Prediction is hard for humans.
Anything that's hard for humans, it's going to be that much harder for machines.
And so that sort of prediction is something that we should be a little bit wary of.
This has just been an incredible conversation, really wide-ranging and fascinating.
And before I let you go, give you an opportunity to hand off to our audience where they can learn more about you, your new book, your other publications, anything else you want to share.
The new book is out now, both in the U.S.
and in the UK.
It's available from Amazon
and independent sellers,
audio version, digital version.
And then I'm on staff
at the New York Times
and I cover this stuff full time.
And so you can follow my work there
or on Twitter at Cade Metz.
Really enjoyed it, Cade.
Hope to have you again soon.
Love to. Thank you.
All right, everybody.
That's all we had for you this week.
Be sure to subscribe to the feeds
so that these podcasts appear in your app
automatically.
definitely leave us a review. We always love hearing from you. And while you're at it, go ahead and
ping me on Twitter at Trey Lockerbie and say hello. If you haven't already done so, be sure to check
out the dream tool we built at TIP Finance. Just Google TIP Finance. It'll pop right up.
So with that, we'll see you again next week.
Thank you for listening to TIP.
Make sure to subscribe to Millennial Investing by The Investor's Podcast Network and learn how
to achieve financial independence.
To access our show notes, transcripts, or courses, go to theinvestorspodcast.com.
This show is for entertainment purposes only.
Before making any decision, consult a professional. This show is copyrighted by The Investor's
Podcast Network.
Written permission must be granted before syndication or rebroadcasting.
