Will Cain Country - Why The Sam Altman Saga Could Be Dangerous For A.I.
Episode Date: November 24, 2023. Who is Sam Altman? How critical are the coming days to the future of A.I.? Will sits down with Ph.D., Founder, and CEO of Virtue and author of Ethical Machines, Reid Blackman, to delve deep into the figure who has a substantial influence on how A.I. changes the very way people live. Tell Will what you thought about this podcast by emailing WillCainPodcast@fox.com Follow Will on Twitter: @WillCain Learn more about your ad choices. Visit podcastchoices.com/adchoices
Transcript
He's been described as perhaps the most powerful person in humanity.
He is the man behind artificial intelligence.
Late last week, Sam Altman was ousted, a coup at OpenAI.
So who is Sam Altman?
What happened at ChatGPT?
It's the Will Cain podcast on Fox News Podcasts.
What's up?
And welcome to the weekend.
Welcome to Friday.
As always, I hope you will download, rate, and review this podcast wherever you get your audio entertainment.
At Apple, Spotify, or at Fox News podcast.
You can watch the Will Cain podcast on Rumble or on YouTube.
And follow me on X at Will Cain.
I know loosely.
I know intuitively.
Of everything happening in the world, politics, yes, even including
war, there might be nothing more consequential than what's going on in technology with artificial
intelligence. I mean, this is the beginning of what could be the end. And I think that's, yes,
a fearful or pessimistic view of the future of humanity. And I would love to adopt an optimistic
and fruitful vision for the future of humanity. But I don't yet have the understanding. And when we
don't have understanding, we do proceed from a position of fear. So I need to understand
artificial intelligence. And as such, I need to understand the man largely behind AI. His name is
Sam Altman. He was the subject of a coup. He was ousted late last week. Some five to seven hundred
employees have said, if he's out, I'm out. On the prospect of adding Sam Altman, Microsoft adds
billions and billions of dollars of market capitalization. So why is this man so
powerful? Who is Sam Altman? Do we need to be worried or should he give us hope if he's the man
behind artificial intelligence?
Today, we'll have that discussion with Reid Blackman.
Reid Blackman, the author of Ethical Machines and the host of the podcast Ethical Machines, joins me now.
Reid, it's always great to see you. I always find it both terrifying and enlightening
to talk to you about artificial intelligence. Huge news from the world of AI over the past week, week and a half.
Let's start with some very basics.
Who is Sam Altman?
Well, he's the co-founder of OpenAI.
Everyone knows OpenAI.
If they don't know the name, they at least know ChatGPT, which came out about a year ago.
And so he's one of the major pushers of the most powerful form of AI we've seen ever.
And at this point, while ChatGPT is something that most
people have heard of, is it fair to say, as AI startups are pretty much all over the tech
landscape, that ChatGPT is your far-and-away frontrunner, making Sam Altman the most important
person in artificial intelligence? He's got to be top five. You know, I don't
know if he's the number one. People are going to say someone like Satya Nadella at Microsoft,
for instance, is up there. But they're allies. So the two of them combined are probably
the power couple, I suppose, one could say, in the AI world.
But yes, Sam Altman has, along with everyone who works at OpenAI, done a tremendous amount to both advance the technology sort of at the research level and then commercially by releasing it as ChatGPT and then making paid versions of that as well.
They've got lots of corporate clients.
They have a massive multi-billion dollar partnership with Microsoft that's in peril at the moment, but it's there.
So he's done a tremendous amount and there's no question.
Every lawmaker, anyone in the Senate who wants to talk about AI regulation,
they're talking, among other people, to Sam Altman.
I want to stay on this for just a moment of trying to contextualize just how important
or big this figure is, how big is Sam Altman.
You know, I remember reading a book.
I believe it was called Sonic Boom by Greg Easterbrook.
And he was talking about Silicon Valley tech investing throughout its history.
And one of the maxims that they learned from, I believe it was KKR, the venture capital
firm, was you don't want to be a pioneer.
You don't want to invest in MySpace because pioneers get scalped.
You want to invest in the settlers that come along behind the pioneers.
So you want to invest in Facebook.
Is ChatGPT, in your estimation, the MySpace, the pioneer of AI? Or is it a land grab,
even a brain drain, to try to get everyone under one roof or one umbrella, and whoever is there
first will be the winner of artificial intelligence?
That's a great question.
It might be too soon to tell, especially because of what's going on in the news lately,
which we'll, of course, talk about.
Plausibly, what Sam has been trying to do is make OpenAI the settler, not the pioneer
who just shows up, but then everyone else runs them down.
There was an event a week or two ago, you know, it was a big developer day,
a day for developers for using OpenAI and introducing OpenAI in a more commercial way.
More specifically, they were basically saying, hey, you know this chatbot
that you have, ChatGPT? You can make your own even though you're not a coder.
So people were comparing it to sort of the app, like launching the app store for Apple.
This was sort of launching the app store for AI.
and so people were very excited about that. That's a move that's not, hey, you know, we're going to try some things out. It's we're going to take over the space as best we can. So that's one thing to say. Their sights are certainly set on becoming settlers, and arguably they're out in front. The other thing to say is that even though OpenAI was in some sense the new kid on the tech block, nonetheless, they had and have a very powerful partnership with Microsoft, who invested something like $10 billion plus. So they also have,
arguably, there's nuance here, they've got resources as well to make the case that they are the settlers. Meanwhile, you know, Google is on their heels, or other kinds of startups like OpenAI, for instance, Anthropic. They're competing. So it's too early to tell. But the thing that's interesting that's going on in the news now is OpenAI may go under. It might be done.
Yes, and I want to get into that, the fall of OpenAI, the coup over Sam Altman.
But one more question in order to fully understand, maybe sort of the battleground, the
landscape, I recently heard this comparison made as well, that artificial intelligence
is a bit like electricity to understand the moment that we're living in.
And it's hard for me because I don't use, I don't even use ChatGPT, much less artificial
intelligence. And I haven't yet felt my life deprived in some way. I don't know that I've seen the
benefits of it yet, Reid. And I'm certain it's coming. I'm a believer, certainly in the way everyone is
talking. That's probably how people felt at the advent of electricity as well. Like, I don't know,
maybe it was more obvious how it was going to change lives with electricity at the outset. But
the comparison was that now electricity runs through your life as a given. You know, you don't
appreciate the role on a day-to-day basis of electricity, and one day, in a not too
distant future, it will not only be obvious how AI has improved your life, but you will actually
take it for granted. You will be using it, you know, day in, day out, hour in, hour out,
minute in, minute out, as part of your life. Do you think that's fair, like, that's what we're at the
advent of, and that this is how AI will be part of our lives, just interwoven into our lives
in a way we no longer can even appreciate?
Yeah, so I think the answer is yes.
One thing I want to note about the electricity analogy is that I don't know if when they
figured it out or discovered electricity, everyone thought, oh, great, this is going to be
great for everyone.
It was the applications of that, like the light bulb.
So once you invented the light bulb, people immediately got the light bulb.
Oh, this thing makes things light.
When it's dark out and I can't see, I don't have to use a candle.
There's no risk of fire.
You know, I can have it at will.
They understand the application of electricity as opposed to,
if you like, the technology that is electricity, if we can call electricity a technology.
I mean, there were technologies to harness electricity. But again, even if you told the average
person back in whenever, whenever, 1900 or whatever it was, I don't know that they would get
what it would mean to their everyday lives. If you said, oh, I built the thing to harness
the power of electricity. What does that mean? I built the light bulb and this is what it does
that they get. It's the same kind of thing with AI, I think. The average person is not going
to grasp what is AI, but applications of it they get. And so here's a couple of examples that
you've already seen in your life. So number one, easy one, your photo software in your phone.
You take a picture, it puts it into your, you know, it recognizes, oh, that's a picture of your
wife or that's a picture of your child and it goes into the appropriate folder, whatever,
so that when you search for, hey, I want to see all the pictures of my wife or all the pictures
of me, it recognizes, as it were, those people in those photos. And so that's AI at work.
That's AI doing face recognition. You also, if you've done international travel recently,
when you go through passport control coming back into the U.S., it's kind of amazing.
You don't even have to, you don't have to go wait in this long line.
You go to a kiosk, it scans your face, and then it prints you out a little thing.
It says you're good to go, and you hand it to the guard, and that's it.
Again, that's facial recognition software that's powered by AI.
A little bit more mundane, maybe.
You take a look at the voicemail transcriptions, right?
You don't have to listen to the voicemail.
You can see the transcript.
That's AI listening to the voicemail and converting it to text.
So this stuff is already there.
And then there's all sorts of ways that you're getting served, say, ads on Facebook or Instagram or TikTok, or the way in which your feed in TikTok is constructed.
That's all by virtue of how an AI is working.
So you're already, people are already interacting with AI.
They just don't know it.
They're working with applications and under the hood is an AI.
But just like, you know, I drive a car and there's a certain amount of horsepower under the hood.
I don't actually know what's under the hood.
I'm not a mechanic.
but I know the application of all that technology.
You know, you don't just have to fly internationally, by the way.
I fly every week.
I'm on a plane all the time.
And I've seen it domestically.
I mean, at Dallas-Fort Worth Airport, you will get your face scanned at some of those TSA pre-checks instead of having to go through all the driver's license.
And at Madison Square Garden, I went to UFC 295, and they have that, you know, not everyone has to go through the metal detector anymore.
You know, they scan faces.
Interesting.
Yeah, yep.
That may be so James Dolan can kick people out of the Garden that he doesn't like,
which I've heard he does at MSG, but you're definitely getting your face scanned at Madison Square Garden.
Yep.
And, you know, on the one hand, that is really, really scary, right?
I mean, we're talking about massive surveillance by corporations, by government, virtually anyone.
I mean, facial recognition software is fairly widely available.
On the other hand, you can think of some pretty amazing applications.
A child is lost.
a child is kidnapped. You run it through, you know, you look for that face in, say, CCTV or something like that, and you can find the child. I mean, so there's really scary stuff and there's really, whatever the opposite of scary is, great opportunities. The question's going to be, among others, how do we make sure that it gets used in a responsible way? How do we create the proper guardrails so that there's no massive surveillance? You know, we still want searches requiring a warrant, that sort of thing. How do we keep that, but also allow ourselves to, say,
for instance, save children or identify the criminal in the crowd?
So let's go back to Sam Altman.
I was listening to a podcast a couple of months ago, the All-In podcast with tech guys,
David Sacks, Chamath Palihapitiya, and they were talking about Sam Altman.
And Reid, the way they talked about Altman made him seem like potentially the figure in the
movie that controls, you know, the singularity.
These are not just casual guys, by the way.
These are tech investors.
And they were talking about how,
even though OpenAI was a nonprofit, Altman was capable of turning that into potential domination of the market and potential profit.
In short, they were talking about Altman essentially emerging as the most powerful man on the planet through his leadership, I guess, of OpenAI and their domination of artificial intelligence.
What do you think of that characterization?
Because, again, I think a lot of people listening may not be keeping up with what happened at OpenAI and the ousting of Sam Altman.
but I want to talk about who this guy is and why this might be important. Do you think that is a fair
characterization of Altman? You know, I've heard comparisons to him as sort of like the next Steve Jobs. That
sounds not implausible. It's hard to say. I mean, look, let's not forget he's got lots of
scientists behind him. He was a co-founder of the organization. He's clearly an impressive CEO.
He's also an impressive spokesperson for his business. I mean, he's everywhere, right? I mean,
this guy is certainly now, but even before that, you couldn't talk about AI without mentioning
Sam Altman and OpenAI. Regulators don't do anything without talking to Sam. The big tech
companies, you know, they're talking to Sam Altman. Like I said, Microsoft has this partnership
with OpenAI. Is this the future of humanity debating whether or not the new leader of the
most powerful technology, and this is not my characterization, I'm just picking up what others are putting down,
at least of our lifetime, I mean, artificial intelligence... Is the future of humanity debating and
hoping whether or not the man who controls artificial intelligence is benevolent?
A little bit. I mean, look, there's these giant tech companies who have trust and safety
teams or ethics teams. Microsoft, for instance, has, they call it responsible AI. So a responsible
AI office. They have a chief responsible AI officer. And one hopes that they actually live up to
the values that they say they want to live up to. They were, in my estimation, a bit fast
with the way they rolled out GPT-4 in their Bing search engine. But, you know, they at least
appear to have a commitment to trust, safety, that sort of thing. Others don't. I mean,
Meta recently, it's unclear. So Microsoft also at one point dissolved their ethics and society board,
sorry, committee or department. Meta recently dissolved theirs. This was in the news last week,
but it got lost in the shuffle with Sam Altman. Meta just dissolved their responsible AI team.
Twitter did that, you know, a year or so ago when Musk took over, disbanded the trust and safety team.
It's not just Sam Altman. It's all these tech companies who are developing these technologies.
Are they going to genuinely invest and back their ethics or trust and safety teams?
And we're seeing an exodus of those teams or rather a dissolution of those teams by the big tech companies, not doubling down on them, unfortunately.
So insofar as Sam is one of those people, to the extent that he doesn't push to double down on trust and safety and instead focuses on commercial activity at the expense of safety and responsibility,
Yeah, we have reason to worry.
You know, this always takes me back to first principles.
It truly does.
Like, on the one hand, what individuals do I entrust to sit on a board of trust and ethics, you know, safety and ethics?
Yeah.
I don't, and I think in the case of Twitter and before Elon Musk, I think the answer would be I trust very few people that would sit on a board of safety and ethics.
And so I don't, I don't want to entrust humanity to these individual tech boards who define what is ethical.
I also don't feel very good about entrusting all of it to one individual, as I would be forced to sit here on a continuous basis and say, do I believe that Elon Musk is truly
dedicated to free speech? Do I truly believe that Sam Altman is good for humanity?
On another hand, I don't want to, I mean, citizen power, individual freedom is ensured through
essentially two mechanisms. One, voting, and that is through government. And so that's diluted,
though. That's watered down through, you know, now two centuries of Washington, D.C. politics
that's limited the voice of the individual. So do I trust the government to regulate this
technology in some ethical way? My answer will inevitably be no. And then the only other way
that we have power is through choice, consumer choice, what we choose to do with our dollar,
our votes. That's what a dollar is. It's a vote. A dollar is a vote of trust and endorsement
of something. That's just, at its most basic level, when you trade someone
your dollar, which you got for your hard-earned time or your hard-spent time, you are
endorsing, in exchange for convenience or bettering your life in some way, whatever it is that person is
doing. That's our most basic level of control. And I just wonder as this fight goes on and these
guys compete with one another, I guess it's probably the first principle we have to endorse:
some level of competition between all these AI companies, so that we will, through our
individual choices, choose, you know, elect, buy, patronize
some ethical version of artificial intelligence.
Yeah, I think that's roughly right.
So there's a bunch to say.
That was a lot.
So there's a couple of things to say here.
Let's start with the sort of corporations having different ethical standards thing.
That's sort of, that's generally fine.
I'm perfectly fine living in a country where we have Patagonia and we have Hobby Lobby,
and we have everything in between.
Very different political ideologies, different actions as a result of those ideologies.
I don't want government to shut down Hobby Lobby.
I don't want government to shut down Patagonia.
or anything in between. Now, if we have like, you know, the KKK shop, all right, that's
a problem. But as long as we don't have that, there's as long as things are sort of,
well, even that, though, Reid, that's not against the law, like, in the United States of America.
Yeah, no, that's true. A KKK shop should not, in theory, be shut down by the government.
It should lose by the power of the individual choice and lack of patronization. Yeah, on the condition
that they're not actually engaging in, say, acts of lynching. Correct. Right. Correct. Yeah, yeah.
Yes. And so I do think that, yeah, Twitter is going to have different ethical standards than Facebook, which is going to have different standards than maybe Instagram, even though they're owned by the same company, which is going to have different standards than, et cetera, et cetera. And so then I think consumers are going to spend here versus there. And, you know, that's fine. That's not a nightmare scenario. That's just, you know, the fact that different people have different beliefs and we live in a diverse society with different political and ethical beliefs. Okay. So that's fine.
And so I think that when we say, do I trust the people on the ethics board at Twitter or X or Meta or whatever it is, I don't know.
I trust that they'll do something that's not illegal, that they'll keep things at least within the bounds of legality and that they will hold not, not bonkers ethical views.
That's sort of my first point.
I trust neither of those things.
Okay, okay.
Yeah, I mean, it's a gray area, but, uh, I think I've got the backing of violations of the
First Amendment and bonkers morality from these trust and safety boards over the past five
years on my side. I don't know where your rosy view comes from, but that's my dark view.
I mean, there are definitely views I disagree with, but,
in some cases anyway, I disagree without thinking that they're crazy. But anyway, you know,
we have to get into particular decisions to sort of settle on that. Now, with regards to government,
government's an interesting thing. Right now, you might think about government regulation as being
process-oriented or outcome-oriented. So when you think about outcome-oriented regulations, think about
things like, I don't care what you do, just make sure that this car gets 50 miles to a gallon by,
you know, 2035 or whatever it is, right? It's about the outcome. It's got to be able to go this
far on a gallon of gas. Process-oriented is not like that. It's sort of like,
the law requires you to engage in this kind of process. So think about the criminal justice system
and trial by jury and how a trial has to unfold. Those are process-oriented
laws about how that's got to go. So we regulate, or we make certain processes legal or illegal.
When it comes to AI, there's no way we're going to get, at least for AI, generally, outcome-based
regulations like we do with automobiles. And the reason is that the applications for AI are in the,
I don't know, tens of thousands, hundreds of thousands, millions. I mean, there's tons of
output. So it can't be something like, you've got to produce this with AI. That's not going to happen.
And so right now, what all the regulations are doing, they're focused necessarily on
process. If you're going to build AI, you've got to do this stuff. It's a lot around documenting
the significant decisions that are made, being transparent about those decisions, and then you can still
have a Twitter or an X and a Meta and a Patagonia and a Hobby Lobby, and they just have to sort of
be careful in the way by which they assess their AI as they're building it. But there's nothing
that I've seen that says you can't produce these outcomes, unless those outcomes are already established
as being illegal, for instance, running afoul of anti-discrimination law.
So I would say the only other application of government that I would be interested in figuring out,
we won't today.
I've got to go in just a few minutes here with you, but is government that further empowers the individual
on that third category I gave you of voting with our dollar, and that is government being
very willing to consider antitrust violations.
So the way to protect us is to ensure that no single company and no single individual
can dominate this market, can become the only option.
That has been at play for the last couple of years, where that's what they're setting
their sights on, for instance, breaking up Google, or trying to anyway.
And so, yeah, not having a monopoly is going to be important.
For what it's worth, Google and Microsoft are still in heated competition.
There's a vibrant AI startup world where there's lots of competition.
And so we're still very early on.
So, well, yeah, you might worry about Microsoft.
and Google, Amazon.
Or Sam Altman.
Or Sam Altman, sure.
Dominating, it's too soon for that.
Any more than you would say, I mean, the iPhone has, you know, huge market share, but
there's still Android, right?
So generally I agree with your point.
And there's going to be an empirical question as to when do we get to the point,
or what makes the case that we're at the point, that there's too much power concentrated
in too few corporations.
Or, to your point, individuals. It's a real problem.
I think I've said this in the past. I feel so ignorant on this subject. And yet I feel like without buying into hype and hyperbole, it is, if not the only, the primary story that matters, the one that threatens to change everything about how we live on a daily basis and how we govern ourselves, both collectively and individually.
And, you know, I think you and I will be talking a lot in the future, Reid, about continuing to help me to understand this in its application.
And it seems like we're at the edge of the frontier.
We're staring out across the Great Plains.
We know it's full of hostile Indians.
And yet, we have to push West.
And we've got to figure out the best way to do that.
And as always, like I said, it is terrifying, but also enlightening to talk to you, Reid Blackman.
There you go. I hope you enjoyed that conversation with Reid Blackman. Check out his book, Ethical Machines, or his podcast, Ethical Machines, wherever you get your audio entertainment.
That's going to do it for me today. I will see you again next time.