PBD Podcast - "AI Cults Forming" - Max Tegmark on China Running By AI, Non-Human Governments, and Global Control | PBD Podcast | Ep. 485
Episode Date: October 7, 2024. Max Tegmark, a renowned MIT physicist, dives into the future of AI with Patrick Bet-David, discussing the profound impacts of AI and robotics on society, the military, and global regulation. Tegmark warns of the risks and potentials of AI, emphasizing the need for global safety standards.
Transcript
Do you worry that maybe a guy who's got a lot of money builds an army of 200,000 robots
that'll be stronger than the military that we have?
Absolutely.
I went to the conference recently where a guy who had a lot of money was talking about
building 3 billion such robots.
Capable of doing what?
Everything we can do but better.
And this was not Elon Musk.
Warren Buffett said,
you have to kind of look at AI as like the nuclear bomb,
you know, it's like atomic bombs.
That's also the fear, because the fear is like,
let's not accelerate AI in our country and robots,
but somebody else does, and then the military drops.
So then what happens?
People who don't use AI get replaced by people who do.
If a Chinese company builds super intelligence,
after that, China will not be run by the Communist Party,
it'll be run by the super intelligence.
Don't think ten years, think the next two years,
crazy things are gonna happen.
The technology is here to stay,
and it's gonna blow our minds.
Are you for it or against it Patrick?
Maybe, maybe not.
One dirty secret, we have no idea really how it works.
And if we do this right with AI and use artificial intelligence to amplify our intelligence,
to bring out the best in humanity, humanity can flourish.
So in other words you believe the future looks bright.
Thank you so much again, appreciate you. Yes, please grab a seat.
Okay, so I'm going to get right into it guys.
I think we have to talk about a soft, subtle subject with AI.
Do you worry that maybe a guy who's got a lot of money
builds an army of 200,000 robots
that would be stronger than the military that we have?
Do you worry about that at all?
Absolutely.
I was at a conference recently where a guy who had a lot of money
was talking about building three billion such robots.
Building three billion robots?
Yeah.
Capable of doing what?
Everything we can do but better. And this was not Elon Musk.
Who was this guy?
I'm not... this was one of those conferences with Chatham House rules, where you're not allowed to say.
Is it a person that we would know, or not really?
Yeah, you know, yeah. I mean, it's only a few people that can afford three billion robots.
So what happens if, let's just say...
Because, you know, Warren Buffett said you have to kind of look at AI as like the nuclear bomb, you know.
It's like atomic bombs. So he's a little bit more afraid of what AI is gonna be doing.
But how concerned should we be about something like that? Because the risk has many different facets.
I wrote a few things down.
One is war.
How many of you guys fear what could happen with AI with war?
Military, robots, okay, a lot of people.
Next one I wrote humanity, right?
When Elon talks about he's a humanist, right?
He wants to make sure he can protect humanity.
Business disruption, pharma disruption,
criminal justice using AI, like that movie, Minority Report.
By the way, for every one of these things,
there's a movie for it.
War, The Creator, I don't know if you've seen The Creator,
I thought it was a great movie, just came out.
Or I, Robot. Humanity, The Matrix, or even the movie Her.
Have you guys seen that terrible movie Her
with Joaquin Phoenix, which was so weird
that a lot of people liked?
I wasn't one of them.
Can I defend Her just very briefly though?
I cringe almost always when I watch the sci-fi films.
I knew we were gonna get into a fight after two minutes.
But one really nice redeeming feature of Her
was it didn't have robots in it.
And I think that was a really important message.
It's so easy to immediately put an equal sign between scary AI and robots.
And Her shows how just the intelligence itself can really give a lot of power, too.
Are you married, Max? Yes.
You're married, okay.
So if you're sitting locked in a room
and you're really smart,
you can't go out and touch the world
but you're connected to the internet,
you can still make a ton of money,
you can hire people,
you can do all your job interviews over Zoom,
and if some super intelligent AI starts doing that,
it can start having a lot of impact in the world.
Positive or negative?
That's the thing, you know.
When I gave the definition of intelligence
as the ability to accomplish goals,
I didn't say anything about whether those were good goals
or bad goals.
And intelligence, in that sense, is a tool.
It's a tool that gives you superpowers
for accomplishing goals.
And tools are not, tools and tech, right,
they're not morally good or morally evil.
Now, like, if I ask you, what about fire?
Are you for it or against it, Patrick?
I'm for it.
You're probably for it.
I'm against who uses it.
You're probably for using fire for an awesome barbecue, but you are probably against using fire for arson, right?
So we, whenever we invent some new tech, we also try to invent an incentive structure
to make sure people use that tech for good stuff, not for bad stuff.
And it's no different with, with AI.
We have, we have to, if we build these powerful things,
make sure that we have all the incentives in place
to make sure that people use them for the good stuff,
not for the bad stuff.
How do you do that though?
Like this guy that you're talking about
who wants to build three billion robots, okay?
Do you think he's a good guy?
I'm not asking for a name, but do you think he's a good guy?
I think he thinks he's a good guy, but you know, I'm pretty sure Stalin also thought
he was a good guy.
Fair enough.
Okay.
So now, so you know, do you in your mind at all see a future where robots don't exist?
Or is it no?
Listen, we have to accept the fact that within the next
whatever amount of time, we're gonna be coming to places
where robots are gonna be doing customer service,
robots are gonna be in the military,
cops are gonna be robots, I'm gonna go to a restaurant
placing an order with a robot.
Do you see that as a near future,
that that's gonna be happening whether we like it or not?
Yeah, but I loved what you said there earlier this morning
about dreams. And my dream is that we build
really exciting AI, including a fair number of robots,
and not just plop them into society randomly and see what people choose to do with them,
but rather that we use a lot of wisdom to guide their use.
You ask, how can we influence how people use technology, right?
So how do we influence how people use fire, for example?
First of all, we have social incentives.
If you get a reputation as the guy who always burns down
people's houses, you're going to stop getting invited
to parties.
And we also have a legal system we invented for that
very reason.
So if you do that and you get caught, you get maybe a couple years to think it over
on very boring food.
And when we make these technologies, if they are technologies that can be used to cause
a lot of harm, we want to make them such that it's basically impossible
to do that.
You know, you could,
there was a guy named Andreas Lubitz, for example,
who was very depressed,
and he crashed his Germanwings aircraft into the Alps,
killed over 100 people.
You know how he did it?
He just told AI, autopilot, to change the altitude
from 30,000 feet to 300 feet over the Alps.
You know what the AI said?
Okay.
What engineer puts that in there?
When we educate our kids, we don't just teach them
to be powerful and do stuff,
we also teach them right from wrong.
We have to make sure that every autopilot and aircraft
will just refuse crazy orders like that.
If you put good AI in a self-driving car
and the driver tries to accelerate into a pedestrian,
the car should refuse to do that.
So there's a lot of technical solutions like this where you just make it impossible
for random nut cases to do harm.
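Tegmark's point about autopilots refusing crazy orders amounts to plain input validation. Here's a minimal sketch of that idea; the function names, the clearance threshold, and the terrain figures are all hypothetical illustrations, not taken from any real avionics system:

```python
# Hypothetical sketch of a command guard: reject altitude targets that
# would fly the aircraft into terrain. All names and numbers here are
# illustrative assumptions, not real avionics logic.

MIN_CLEARANCE_FT = 1000  # assumed minimum safe clearance above terrain


def altitude_is_safe(target_alt_ft, terrain_elevation_ft):
    """Return True only if the requested altitude keeps safe clearance."""
    return target_alt_ft >= terrain_elevation_ft + MIN_CLEARANCE_FT


def set_altitude(target_alt_ft, terrain_elevation_ft):
    """Accept a safe altitude command, or refuse the 'crazy order'."""
    if not altitude_is_safe(target_alt_ft, terrain_elevation_ft):
        raise ValueError(
            f"Refusing altitude {target_alt_ft} ft: below safe clearance "
            f"over terrain at {terrain_elevation_ft} ft."
        )
    return target_alt_ft


# A 300 ft target over roughly 8,000 ft Alpine terrain is refused,
# while a 30,000 ft cruise target is accepted.
```

The design point is simply that the check lives inside the system, so no single operator command can bypass it.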
And another big success story I think is basically every other technology that could cause harm,
we have safety standards, you know.
People...
That's why we have the Food and Drug Administration.
If someone says, hey, I have this new wonder drug that's gonna cure all the cancers,
FDA is gonna be like, okay, well, where is your clinical trial?
Show us that the benefits outweigh the harms, and then until then, you can't sell it.
We do the same thing for aircraft, for cars.
It makes sense to do that with AI systems
that could cause massive harm.
That way, all the wonderful products we're going to get
are going to be safe, kind of by design.
And we've given incentives to the companies
to really have a race to the top and make safe AI.
Because whoever does that first, they're
the one who gets the market share.
Yeah, but who regulates it?
So to create that, you know, the 30,000-to-300 thing,
maybe the guy just went three, zero, zero,
and maybe he wanted to do 3,000 and forgot one zero,
and it went from 30,000 to 300.
Or maybe, like, you know, it can't change more than 10% in a 15-second increment.
Okay, that's technology.
I get it.
But what I'm asking right now is, so this guy that wants to build 3 billion robots.
Yeah.
With today's regulation.
He really got to you.
Huh?
He really got to you.
No, did he not get to you?
Yes or no?
Everybody, are you kidding me? I bet he got to everybody.
So this guy that wants to build 3 billion robots,
what regulation do we have right now to prevent him from being able to do that?
Nothing, basically.
So, but this is actually changing.
In all other areas, there was also a lot of resistance to regulation.
You know, when engineers started saying,
let's put seat belts in cars, the auto industry was dead against it.
They were like, no, that's going to kill the car market.
So they passed the seatbelt law in the US anyway.
And did it kill the car industry? No.
The amazing thing is that car sales skyrocketed after that.
Because people started
to realize that driving can be really safe and they bought more cars.
So we similarly just need to get past this knee-jerk reaction from some tech folks that
they're different from all other technology and should be forever unregulated.
There's a big food fight in California now, some of you might
have followed, there's this law called SB 1047 which was just passed by the
California Assembly. It's very light touch, it says stuff like, well if your
company causes more than half a billion dollars in damages or some sort of mass
casualty event where a lot of people die, you should be liable. If they had a
law like that for airplanes, people
wouldn't bat an eye.
But you have a lot of people now taking to Twitter saying,
this will destroy American AI industry, whatever.
If we just treat AI like other industries, we have safety
standards, here they are, level playing field, free markets,
once you meet the standards you can make money.
Then we'll be in very good shape.
The European Union already passed an AI law last year, and China passed an AI law.
What's the European Union's AI law?
It's called the EU AI Act.
They're very similar to product safety laws for medicines, for cars, stuff like that.
And as soon as we get something like that in the U.S., first maybe in the States like
California and then federally, I think we'll be in a much better place.
And then someone can't just come along and build three billion robots without first having
to demonstrate that they meet safety standards.
You don't want to sell robots where the owner of the robot can be like, hey, here's a photo
of this guy, go kill him.
This guy that wants to build three billion robots, is he an American?
He's an American.
Who are the top 10 richest people in America?
By the way, I'm not going to ask you who because I would never question you like that. It's not my style.
Top 10 richest... So you got, let's use the process of elimination. Let's kind of go through it. Let's have some fun with this. Listen, if we're going to have some fun, we may as well have some fun here. All right, so here we go. Elon Musk. Could it be Jeff Bezos? Maybe.
Is that why you're putting bright spotlights in my face here?
Could it be Mark Zuckerberg?
But look, I think it's not about individual people. It's really about training the right incentives
for all entrepreneurs so that they realize
that the way they get rich is by doing stuff that's actually
fair.
I don't want to make you feel uncomfortable.
So you got Ellison.
I don't think it's Buffett.
Gates.
Ballmer likes basketball.
Page.
Sergey Brin.
Well, let's talk about those guys.
So here's what the Google guys said, if I'm not mistaken.
Larry Page wants an AI God, is what he said, okay?
And he's got the money to do it.
And he called people who are against that,
he uses a word called speciesist,
which is a derogatory term if I'm not mistaken.
Mm-hmm. Yeah, I was actually right there when he said that famous quote. I'm a
first-hand witness. I think it got out because I wrote about it in my book.
So this is not bullshit. He actually said it. So what do you think about Larry
Page calling us speciesist?
You know, I don't judge people.
We do. This environment is very judgmental.
We just are. I want to compare you.
Everybody's entitled to their own dreams.
Yeah.
My dream is that my children, and your children, will have a wonderful future, a future where they don't need to compete against machines, where AI and other machines make
their lives more enjoyable, better, more meaningful. Larry might say now I'm
speciesist, I should feel sorry for the machines or whatever, but hey, we're
building them. They wouldn't exist if it weren't for us. Why shouldn't we exercise
the influence we have to make it something good? You know, why? I'm a very ambitious
guy and you are too, and I really respect that. So to me it's utterly unambitious if
we're like, well, you know, we think we're
so lame that the only thing we're going to try to do is build our successor species as
fast as possible. Where's the ambition in that? Why would we do something so dumb? It's
almost like Jonestown, some sort of suicide cult. I love humanity. I think humanity has
incredible potential, and I want to fight hard for us to realize it
So I want us to build
technology that gives us a future we and our children really want to live in.
Call me speciesist. That's my dream.
We're the same, you and I are the same, no question about that.
And I think a lot of people who are family folks are the same as well. But here's, in other words...
I mean, let's...
Yeah, so let me actually ask all of you guys in the audience also. Raise your hand if you're excited about us
building more powerful AI and robots that can really empower us and help us flourish in the future.
You're excited.
Raise your hand.
Okay.
Now raise your hand if you're really excited about building AI which will just replace
us.
It's a little hard to see with spotlights, but I don't see any hands at all.
So I think we're all on Team Human here.
And fellow speciesists.
But let me ask you a question.
So for example, so now what if, because it's always the government's going to say you can't build robots.
Okay.
But then.
No, no, no, push back: the FDA doesn't say you can't develop new drugs. They just
say that in order to sell them you have to do the clinical trial and show that the
benefits outweigh the harms. Similarly, the law would say, yeah, sure,
you can build robots; before you can sell them, you just have to make sure they meet the safety
standards we've all agreed on. So, for example, you can't sell a robot if it enables the owner to just go tell it to
do terrorism or murder people for you, right?
That's a safety standard.
We can specify it.
And it's perfectly possible at the technical level, actually, just like we teach our children
what they can do, what's good, what's bad.
Do that with machines also.
But the question I'm asking is the following.
Do you think Iran has a nuclear weapon?
They say they don't.
Do you think they have it?
Maybe.
Maybe, maybe not.
Okay.
How would we know that they don't?
Iran is a massive country, bunch of desert.
How can we know they don't have a nuclear plant?
We don't, I think, know for sure.
But look at it, it comes back to incentives again.
Suppose they do have a nuclear weapon.
Why haven't they nuked us?
Because they have an incentive not to, because then they would get nuked too, right?
And similarly, why do companies have an incentive to build airplanes that don't
crash? Because ultimately it's bad for them, right? So that's the whole point really of
having safety standards. There didn't use to be an FDA, actually, and this company, which
I will not name, sold this drug called thalidomide and
said it's great for mothers to take during pregnancy if you're feeling a bit stressed
and have headaches.
And they didn't mention that there was early research suggesting it causes a lot of kids
to be born without arms.
And they sold it, they sold it.
It was a horrible tragedy, and eventually they got shut down here, and then,
when it was banned in the US, they started selling it in Africa.
So that's what happens if you have the wrong incentives, right? Whereas if you have safety standards
that companies have to meet, then when they try to maximize shareholder value,
they're actually gonna do the good things and not the bad things.
Yeah, but again, let me ask this, maybe a last question,
and we'll transition to a different topic.
So we could have regulations for America that, hey,
these are the standards you need to go through until you do XYZ.
However, in America, we allow lobbyists to buy up politicians.
So what if somebody is building a robot company,
has massive
expenditure now for lobbying a few billion a year, 10 billion a year, and
they're able to get past certain laws to give them the leverage to continue
growing, and then, hey, don't come out with these laws, and maybe even other
countries who don't have to abide by our rules and guidelines, who have their
own money and they spend a bunch of resources, and we don't, and come 10, 20,
30 years from now, all of a sudden the military in another country is stronger than ours and we fell behind.
That's also the fear because the fear is like let's not accelerate AI in our country and
robots but somebody else does and then the military drops.
So then what happens?
You understand the concern?
Totally, totally, yeah.
So that's a very real concern.
AI can be very persuasive now,
and it's getting more persuasive by the day.
This morning I was reading about some new AI cults
that are forming on the internet and so on.
And so it's super important to have safety standards
also for systems that are out there on the internet
talking to people, to make sure we understand what's going on there.
We're lucky in that we in the US have the strongest AI industry in the world.
And that gives us the opportunity to make sure that bad things aren't done with our tech.
And interestingly, we should do it regardless of the rest of the world, for our own sake. And if you worry about China, for example:
Elon told me this really interesting story. He was in a meeting with some top people from the Chinese Communist Party, and
he started talking to them about superintelligence, and he said, you know, if a Chinese company builds superintelligence,
after that, China will not be run by the Communist Party.
It'll be run by the superintelligence.
And Elon said that he got some really long faces, like they
really hadn't thought that through.
And then within a month of that, they passed
their new AI law.
So why would China do that?
They were afraid of the Chinese companies doing crazy shit.
In other words?
China put in place their own regulation on drugs not to help us but to protect Chinese consumers.
They are reining in their tech companies for doing crazy stuff because they want to stay in control.
So here again, you know, incentives, incentives, incentives.
They kind of align: each country has an incentive to make sure that none of
their companies lose control over their tech.
And then once that happens, at a grassroots level in individual countries, there's a very
strong incentive now for different countries to talk to each other.
You know, the European FDA, the American FDA,
and the Chinese one, they talk to each other all the time
to harmonize the standards.
So someone who develops the medicine in the US
can get quick approval elsewhere.
We have these AI safety institutes,
which have just been created now in the last year.
America has one.
England has one.
China just started one.
These nerds are going to be talking to each other again, comparing notes, and that's how we're
going to eventually get some global coordination too.
I want to add some optimism here, you know, because some people are so gloomy these
days, and they say, oh, well, we're screwed because we don't have the same goals as China, we're never gonna get along. Hey, you know, we were not best buddies with Brezhnev
from the Soviet Union either, didn't have very aligned goals,
but we still made all sorts of deals
to prevent the nuclear war, because both sides realized
everyone would lose if we just lost control of that.
It's exactly analogous here.
If all the top scientists across these
countries tell their governments, you know, everybody loses if we lose control,
there's an incentive, whether they love each other or don't, you know, for them to just
coordinate. And this can be done. There's definitely hope there.
Okay, awesome. A lot of folks here run businesses,
they're entrepreneurs, small business owners,
from a million to a billion in top-line
revenue, from two employees to 10,000 employees, some.
What would you say on the opportunity side,
specifically for the business side?
So the next three, five, 10 years,
we're already seeing it all over the place.
OpenAI, you got Grok, you got NVIDIA,
you've seen all these different things happening.
But as a small business owner, how should I have my,
what should I view my relationship with AI?
Great questions, I have a lot to say on this.
First, don't think ten years.
Think of the next two years.
Crazy things are gonna happen.
And if you make a plan for what you're gonna do
in nine years, it's gonna be completely irrelevant
because you wanna be nimble is what you wanna do.
And look at what can you do right now.
That's gonna help you in the next 12 months
and then go from there.
First thing I would talk about is hype.
There is a lot of hype about AI.
It's a strong brand right now
so people will try to sell you
a glossed up Excel spreadsheet and call it AI.
Don't fall for the hype.
But there are two kinds of hype.
The first kind of hype is what we had with cold fusion,
like where the whole technology is just complete dud.
Then there's a second kind of hype.
Like, who remembers the dot com bubble?
Yeah, so that was a lot of hype, right?
But would it have been, and a lot of people lost a lot of
money, but is the right lesson to draw from the dot com bubble
that the internet and the web never amounted to anything?
Would it have been smart for a company then
to be like, no, we're never gonna have a website?
Of course not.
So the hype there was not about the technology; the technology itself
was in fact gonna take over the world.
The hype was about certain companies
that were giant flops, right?
The kind of hype we have with AI now is exactly the second, the dot-com kind of hype.
There are a lot of companies that are very overvalued and are gonna go bust,
but the technology is here to stay, and it's gonna blow our minds.
So what do we do with our own personal business here in such a risky environment?
Well, first of all, look at your existing business.
Instead of dreaming about some pie in the sky,
completely new thing you could do that might be a giant flop,
look at what are you doing right now across your company
that AI might be able to greatly improve the productivity of.
Usually what happens first in your companies
will not be that you just replace a person by AI,
but rather enhance your staff with AI.
So you look at someone who's doing certain tasks,
and you realize you can give
them some tools that they can use to do 40% of their tasks much better, much more productivity
for the same head count. Much lower risk now, right? Because they already know what they're
doing. If the AI writes the first draft of that report or whatever, they will read it before they send it out. So you don't risk being like that lawyer
who filed a court case citing case law from ChatGPT
that was just completely made up,
and the judge didn't like that.
So if you take things you're doing, basically,
and empower your staff to have AI do first drafts of things, etc.,
but the humans are in charge and there's still quality control: very low risk, huge productivity gains.
A second thing I would say is,
it's tough, you know, especially if you're a small business and
lack a lot of in-house expertise, to not get ripped off by companies who are trying to sell you a bill of goods
with an AI sticker on it.
So it's a really good idea,
even for relatively modest-sized companies,
to just get at least some in-house expertise,
even just one person who is really quite knowledgeable.
And that person can then go around
and talk to other key staff across your organization,
learn what they're doing, and advise them on how they can automate certain things,
enhance the productivity, and get things done in a way where you get all the upside and no downside.
What would be the position of that?
And by the way, in about eight minutes, I'm going to come to you guys to ask questions.
We're probably going to get two or three questions.
So if anybody wants to ask Max any questions, go line up by the mic.
We'll come to you guys momentarily.
So when you're looking at hiring somebody that has AI expertise, when you're interviewing
a CTO or you're interviewing somebody that's a CIO, or you're bringing in somebody in BI,
a business analyst, what types of questions are you asking to make sure they have
background in AI? Ask them what they've built before. This is very much not about
being able to talk the talk but to be able to actually walk the walk and build
systems, make things work. They should have a track record of having built
things. I mean really nerdy, sit at the keyboard, install stuff, get real
productivity. But don't put them in charge of making business decisions. They are, you
know, they're automation engineers, they're people who humbly go and interview other people
in the company and ask them, hey, tell me about your workflow. And if those other people feel, yeah, this would be great
if you could give me a tool that writes the first draft
of this thing, that person you've hired
could either do it themselves or contract it out
to someone else to do it, to provide the expertise.
There are many pitfalls.
I mentioned Knight Capital, for example,
with this trading disaster.
Because that's kind of in a nutshell
what you don't want to do.
Put in place some AI system in your company
that you haven't understood well enough.
It might not even screw you over by crashing.
It might screw you over by giving your proprietary data to whatever company
owns the chat bot, right? And maybe you don't want that. It might screw you over by being
very hackable. So suddenly you come into the office and there's ransomware on all your
computers. It's really important to tread carefully.
But if you do tread carefully like this,
there's just spectacular upside.
Knight Capital's the one, you said, $10 million
per minute for 44 minutes.
Yeah.
Did you tell them already or no?
Did you guys hear about the story, what happened with them?
Yeah, I mentioned it.
Got it, $10 million a minute for 44 minutes, 440 million dollars in 44 minutes.
That's the one dirty secret
I have to tell you about the state-of-the-art large language models and a lot of the gen AI stuff: we have no idea,
really, how it works.
But that's not necessarily a showstopper if you have a coworker, you don't understand exactly how their brain works either.
But you can have someone else check their work and so on.
And if there's an important business decision, you have the
final call and you ask them some tough questions first.
But you have to treat any AI systems you have, the
output from them,
as some work you got from some temp worker
that you have no reason to trust at all.
So if you can find a way of just verifying
that what they did is actually valid and correct,
which takes much less time for you
than it would have taken to actually
create that thing in the first place, you win.
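The "verify, don't trust" workflow Tegmark describes can be sketched in a few lines: the untrusted system proposes an answer, and you accept it only if a cheap independent check passes. Everything here is a hypothetical illustration; `untrusted_factor` is just a stand-in for an AI or temp worker whose output you have no reason to trust, and factoring is chosen because the check (one multiplication) is far cheaper than producing the answer:

```python
# Sketch of the "treat AI output like an untrusted temp worker" idea:
# accept a proposed answer only after a cheap independent verification.
# `untrusted_factor` is a hypothetical stand-in for an AI's expensive work.

def untrusted_factor(n):
    """Expensive work we outsource: find a nontrivial factorization of n."""
    for p in range(2, n):
        if n % p == 0:
            return p, n // p
    return 1, n  # n is prime (or < 2); only the trivial factorization


def accept_if_valid(n, factors):
    """Cheap check: do the proposed factors actually multiply back to n?"""
    p, q = factors
    if p * q != n:
        raise ValueError("Rejecting untrusted output: failed verification.")
    return factors


# Verifying 17 * 23 == 391 is one multiplication, much cheaper than the
# search that produced it - so checking the "temp worker" costs almost nothing.
print(accept_if_valid(391, untrusted_factor(391)))  # (17, 23)
```

The asymmetry is the whole point: whenever checking an answer is much cheaper than producing it, you get the productivity gain without having to trust the producer.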
Max, do you have kids?
I do.
Okay, so with kids, I got a 12-year-old, a 10-year-old,
an 8-year-old, and a 3-year-old. Anybody have kids? Raise your hand if you got kids.
A bunch of people here have got kids. So how do you not only have the conversation with
your kids about AI, but also career planning, positioning, where traditionally you're like, hey, son, you're gonna grow up to be a this.
How do you manage career planning with them
at a young age, knowing what direction AI is going?
I just said to you, what do you do with, you know,
your two year, three year, five year,
10 year plan with AI?
You said, here's what you don't do,
no 10 year plans, right?
You said go to what, 12 to 24 months.
So imagine a 12 year old. What's gonna happen in six years?
So how do you manage that with career planning and kids?
Yeah, you know, I have a little guy who's just
gonna be two years old in December here,
and it's tough, it really keeps me awake at night
thinking about this.
I think one obvious message is
that you have to be nimble to live in the future
and prosper.
The idea that you spend 20 years studying stuff
and then some career and then you do that for 40 years,
forget about it, that's so over.
You need to be nimble, and have the mindset that you're gonna constantly be innovating, learning
new things and going where it makes sense to go. The second thing is,
whatever field you're in, or your children are in or going into,
even if it seems like it has nothing to do with AI, it's crucial
that they're up to speed on how AI is influencing and will influence that industry, right?
Because what's going to happen then is not that your kid is going to be replaced by an
AI, but rather people who don't use AI get replaced by people who do.
And you want your kids to be in the second category,
the ones who are the early adopters,
who become more productive,
not the ones who are in denial and just get replaced.
How soon do you introduce them to it?
Oh, there is no too soon.
You know, it's...
There's no too soon.
I mean, okay, little Leo, you know,
we keep him away from screens altogether. And you know, if you have any kids in school these days,
any kids in school these days,
they're always using ChatGPT,
even if you think they're not.
So no need to skip the introduction.
But it's really important to get kids thinking about how they can make the technology work for them, not against them.
And both in the workplace actually and in their private lives also.
It makes me really, really sad when I walk around a beautiful place like this and see all these good-looking teenagers around the table, and they're not looking at
each other. They're staring into little rectangles like the zombie apocalypse came or something
like that, right? So this isn't just business. This is also about our personal lives. How
can we make sure that we control our technology rather than our
technology controlling us? Coming back to this idea, we are team human. Let's figure
out how we can make technology that brings out the best in us so we can let our humanity
flourish rather than trying to turn ourselves into robots and compete in the losing race
against machines
like John Henry against a steam engine.
And since we're out of time,
can I just end on a positive note?
Please.
Since you said that you're a doomer, a gloomer.
I wanna remind us all about something incredibly inspiring
and positive about all this.
Our planet has been around for 4.5 billion years and our species has been around for hundreds of thousands of years.
And for most of this time we were incredibly disempowered, like a leaf blowing around on a stormy September day,
you know, very little agency. Oops, we starved to death because our crops failed. Oh, we died of some disease because we hadn't invented antibiotics, you know. And what's happened is that technology and science have empowered us.
We started using these brains of ours to figure out so much about how the world
works that we can make technology to become the captains of our own ship.
And we've more than doubled our life expectancy, and every single reason why today
is better than the Stone Age is because of technology.
And if we do this right with AI and use artificial intelligence to amplify our intelligence,
to bring out the best in humanity, humanity can flourish, not just for the next election
cycle but for billions of years, not just on this planet,
but if you are really bold, even out in much of our gorgeous
universe out there, we don't have to be limited anymore
by the intelligence that could fit through
mommy's birth canal.
It's an incredibly inspiring and empowering future
that is open for us, if we don't squander it all
by doing something reckless and dumb.
And this is what I want to leave you with here.
We want to make sure to keep AI safe,
keep it under human control,
not because we're a bunch of worry warts,
but because we are optimistic.
We have a dream for a great future.
Let's build it.
So in other words, you believe the future looks bright.
We have the power to make it bright. I'm with you. It's gonna take work. Max, appreciate you for coming on. Make some noise, everybody. Max Tegmark. For the last four years, every time we do
podcasts, I have to ask Rob or somebody, hey, can you pull up the news? Can you
pull up that? Which way do these guys lean? Can you go back to the
timeline? Eventually, after asking so many questions, I said, why
don't we design the website that we want, aggregated?
We don't write the articles.
We feed all of it in using AI.
So nine months ago, eight months ago, I hired 15 machine learning engineers.
They put together our new site called VT News.ai.
What this allows you to do when you go to it.
If you go to that story right
there that says Trump Proposes Overtime Tax Cuts, click on it, it'll tell you how many sources
are reporting on this from the left. If you go to the right, Rob, it says left sources,
click on it. Those are all the left sources. If I want to go to right sources, those are
the two. If I want to go to center, I go there. Now if I want to go all the way to the top
and I want to find out a left-sided story, a story that only one side is reporting on, either the left or the right.
So if you notice the first one, we'll say Zelensky announces the release of 49 Ukrainians
from Russia.
Notice more people on the left are reporting on that than the right.
If I go to the middle one, same thing.
If I go to the right one, same thing.
You can see what stories are lopsided.
And if I pick one of the stories, pick the first story, click on the Trump one, the overtime
tax cuts proposal. To the right, on the AI, I can ask any question I want, but click on the
first question that has it. It says, what is the political context and potential
motivation behind Trump's new tax cut proposal? Click on the question
mark. It explains exactly what the motives are, for you to use,
whether you're doing a podcast, you're in the middle of a podcast, or you just want
to know it for yourself, you're busy like myself.
And last but not least, this is all AI doing it, built by the machine learning engineers.
Go all the way to the top.
I can go to timelines, go to timelines and see how far back a story goes, pick the Israel-Palestinian
conflict.
If I want to go to that and go back and see why are those two days a big spike, I'll have
Rob pull it over to go to those two days with a big spike and I'll see exactly what happened
on that day or the previous day, and many other features
VTNews.ai has. So simply go to VTNews.ai.
There's a freemium model, there's a premium,
and then there's the insider.
If you wanna have unlimited access to the AI,
click on the VTAI insider.
You can now become a member effectively today.