Bankless - Sam Altman Fired as OpenAI CEO, Joins Microsoft?
Episode Date: November 20, 2023

Sam Altman has been fired as CEO of OpenAI, joining Microsoft this morning and sending the internet into a spiral. Here's everything you need to know about what went down and what it all means for the... world.

-----
🏹 Airdrop Hunter is HERE, join your first HUNT today https://bankless.cc/JoinYourFirstHUNT
------
BANKLESS SPONSOR TOOLS:
🐙 KRAKEN | MOST-TRUSTED CRYPTO EXCHANGE https://k.xyz/bankless-pod-q2
🦊 METAMASK PORTFOLIO | MANAGE YOUR WEB3 EVERYTHING https://bankless.cc/MetaMask
⚖️ ARBITRUM | SCALING ETHEREUM https://bankless.cc/Arbitrum
🔗 CELO | CEL2 COMING SOON https://bankless.cc/Celo
👾 GMX | V2 IS NOW LIVE https://bankless.cc/GMX
💲 USDV | NATIVE OMNICHAIN STABLECOIN https://bankless.cc/usdv
------
TIMESTAMPS
00:00 What Happened?
https://x.com/OpenAI/status/1725611900262588813?s=20
https://openai.com/blog/openai-announces-leadership-transition
https://x.com/sama/status/1725631621511184771?s=20
https://x.com/gdb/status/1725736242137182594?s=20
06:25 OpenAI Board Members
https://twitter.com/ilyasut?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor
https://twitter.com/adamdangelo?lang=en
https://www.linkedin.com/pulse/tasha-mccauley-wikipedia-joseph-gordon-levitt-wife-ali-raza-3gsbf/?trk=article-ssr-frontend-pulse_more-articles_related-content-card
https://cset.georgetown.edu/staff/helen-toner/
11:59 The New CEO
https://x.com/eshear/status/1726526112019382275?s=20
https://x.com/adcock_brett/status/1726484049664057789?s=20
https://x.com/EffMktHype/status/1726558177028935770?s=20
https://x.com/Jason/status/1726492873775079807?s=20
16:32 Microsoft's Role
https://x.com/satyanadella/status/1726509045803336122?s=20
https://x.com/sama/status/1726510261509779876?s=20
https://x.com/balajis/status/1726515841221681619?s=20
22:27 Employee Pushback
https://x.com/miramurati/status/1726542556203483392?s=20
https://x.com/ilyasut/status/1726590052392956028?s=20
25:01 The Future of Silicon Valley
28:03 Decel vs e/acc
https://x.com/hosseeb/status/1726492186953535764?s=20
31:11 Takeaways
------
Not financial or tax advice. See our investment disclosures here: https://www.bankless.com/disclosures
Transcript
Sam Altman was just fired by the board of directors of OpenAI.
Why?
What did he do?
And why are over 500 employees of Open AI signing a letter of protest?
Where is Sam Altman going to go next?
This is a pretty big deal for tech, for Silicon Valley, for the future of the AI industry.
Ryan, have you been following this?
Were you tapped in this weekend?
I have.
But I think high level.
And I know you've been assembling the details here.
And that's what we're going to talk about today.
So, David, give us the details.
What happened to Sam Altman, OpenAI?
What in the world are they doing over there?
Yeah.
End of day Friday, November 17th.
The OpenAI Twitter account tweets out, OpenAI announces leadership transition.
And everyone knows a leadership transition means somebody got fired.
And there's not that many leaders.
There's kind of just Sam Altman.
So everyone reading this automatically goes, what the hell is going on?
There was an entire blog post that was linked in this tweet.
And it was all the explanation of what's going on and what's happening next. So, two main quotes from this blog post. It says,
the board of directors of OpenAI announced that Sam Altman will depart as CEO.
Mira Murati, the CTO of OpenAI, will serve as interim CEO, effective immediately.
Mr. Altman's departure follows a deliberative review process by the board, which concluded
that he was not consistently candid in his communications with the board, hindering its
ability to exercise its responsibilities. The board no longer has confidence in his ability to lead OpenAI.
Pretty big words.
And I think when I was reading this, and I think most people across Twitter and across the tech landscape as well, the reaction was, oh my God, what did Sam do?
That was my immediate reaction.
I was like, okay, he hasn't been candid.
This is code for he's lied to us.
And in order to get fired from a company for lying, it can't be, it seems to me, a small lie, especially for somebody with the press presence of Sam Altman. It has to be a pretty big lie. So my initial reaction was, yeah, something really bad just came out. Some skeleton that Sam had in the closet just popped its head out.
That was my reading of this. But it seems like maybe we haven't gotten full clarity on what he was not being candid about.
And this is now three days later. Is that the case, David?
Yeah, over the weekend, it was just two and a half, three days of chaos, but we do have a lot more clarity as to where this whole mess is going.
it was a tumultuous weekend.
There was speculation going on left and right.
Sam was going to get rehired by the board, but now Sam is at Microsoft.
So let's kind of like run through the details
as to what happened this weekend
because it was just peak drama in Silicon Valley.
First, after Sam was let go by the board on Friday, Friday evening,
not even afternoon, Friday evening.
Sam tweets out, I loved my time at OpenAI.
It was transformative for me personally and hopefully the world a little bit.
Most of all, I loved working with such talented people.
We'll have more to say about what's next later.
Salute emoji.
So actually, like not really any questions being answered here.
Yeah, there's no answer to the why still.
Yeah.
So still kind of confused.
But this is when we start to get a glimpse of like, okay, there is trouble brewing.
All is not right at OpenAI.
And that's when Greg Brockman, who is the president of OpenAI and one of its co-founders, was also removed from the board by the board. Sam Altman was on the board too, and he is the CEO, and he was fired, so fired and removed from the board. Greg Brockman, the president, was removed from the board but not fired. He was just removed. Well, that's a huge slap in the face,
right? Like you're a co-founder. We're going to let you have your job, but you're no longer on the board,
right? I mean, that would be pretty frustrating, I would imagine. And we should probably talk about
what is a board and what does it do?
So Sam Altman and Greg Brockman are executives.
They're making the decisions.
They are leading the company.
And the board is generally not doing anything day to day.
They're generally just overseeing governance and high-level direction.
And they have the power to remove people, but not make really any executive decisions.
So the board, the governance over OpenAI, removes Sam Altman, and also removes President
Greg Brockman.
And so Greg Brockman retweets Sam Altman's tweet, the one where he says, I loved my time at OpenAI, basically his farewell tweet from OpenAI. And quoting Sam, Greg tweets out,
After learning today's news,
this is the message I have sent to the OpenAI team.
And it's just a screenshot of him writing in his Notes app.
He goes, hi everyone.
I'm super proud of what we've all built together
since starting this in my apartment eight years ago.
We've been through tough and great times together, accomplishing so much despite all the reasons it should have been impossible.
But based on today's news, I quit.
So he just quits on Twitter.
I mean, maybe he, like, messaged the board, but he basically just resigned via Twitter.
Do you know what this reminds me of?
Do you remember OG Spider-Man?
Norman Osborn, getting fired from Oscorp?
Do you remember that scene?
Right.
He's just like, you can't do this to me.
Do you know how much I've sacrificed?
That's what this reminds me of.
It's like two of the co-founders basically getting fired.
I guess Greg was not fired technically,
but, like, getting pushed out of their own company.
Yeah.
Yeah, that's exactly right.
And so everyone is realizing, okay, the number one and number two of OpenAI, perhaps the most important, defining company of the modern age, are out. One fired, one quitting in solidarity. And so everyone is trying to ask, what the hell is going on? And with Greg quitting in solidarity with Sam, people have stopped asking the questions, what did Sam do? What was he up to? Did he lie? Did he steal? Did he cheat?
And now everyone is like, oh, what is the board up to?
So this is the why being revealed.
At first, the initial inclination is, oh, Sam must have done something really bad.
There's some skeleton in the closet.
But then when you have other employees, other co-founders also resigning in protest, then you start to think, well, maybe the board was doing something.
Maybe this was a division not based on some sort of issue revolving around Sam's character.
Maybe this is some kind of a schism around a set of issues or a direction for the company.
So then this brings up the question, who the hell is the board of OpenAI?
Who are these people that made this governance decision to remove the one and two of OpenAI
and cause just a massive amount of chaos in Silicon Valley?
Because this isn't, Ryan, just about OpenAI.
This is about OpenAI and its Cambrian explosion of startups that are built on top of OpenAI,
because this impacts everything downstream.
So this is people's jobs, this is people's salaries,
this is VC investment.
Since OpenAI is a platform, the governance decisions over OpenAI and what the board is up to are impacting an entire industry
and really the entire city of San Francisco.
And maybe all of society, right?
If AI is going to be the most transformative technology
that we'll see in the decades to come,
this has ripple effects for that.
And it affects maybe nation-state geopolitics. Like, it could affect so much downstream.
Certainly.
So this has brought up the question of who is the board of OpenAI.
There are now four members.
There used to be six.
Two of them are recently gone.
So four left.
Ilya Sutskever, a co-founder and chief scientist at OpenAI, is one of them.
Adam D'Angelo, who is the CEO of Quora and an independent director on the board. It's pretty normal to have independent directors on boards, just to have kind of an arm's-length remove.
Tasha McCauley, who is a tech entrepreneur that not very many people have too many details about.
Like who is she and why is she on the board of directors?
Technology entrepreneur is in her bio on, like, LinkedIn, but no one really knows what that means or what she was an entrepreneur of.
Fun fact, Ryan, she's Joseph Gordon-Levitt's wife.
Ah, the guy from 3rd Rock from the Sun.
Right.
And then the last one, Helen Toner, from Georgetown's Center for Security and Emerging Technology, also an independent director. So this is your board.
These are the people that voted out Sam and then also caused the departure of Greg.
Okay. So one other co-founder and then three independents, it looks like. Exactly. Yeah.
Three independents, one being Joseph Gordon-Levitt's wife. So these are the people that we're trying
to ask, like, yo, can you explain yourselves? Like, why remove Sam? Because to this day, to this moment at the time of recording, Monday morning, noon Eastern time, we have not gotten an explanation other than that Sam was not being appropriately candid with the board, which, again, leaves a lot to be desired. And so this is where we were left all this weekend, just kind of asking, what the hell is going on? Then Sunday afternoon, Sam tweets out this funny picture of him holding up a visitor badge to OpenAI with the caption, first and last time I ever wear one of these.
Kind of like a polite troll. Like, Sam's a nice guy. He doesn't cause anyone any trouble; he's just not, like, a bully or anything. But he's definitely leaning into some of the drama. It's like, I'm wearing a visitor's badge to OpenAI, and it'll be the last time that I wear one of these things.
To me, I'm like, is he coming back to Open AI?
Like, are they bringing him back?
Like, why tweet this?
And just to bring this up because I thought this was funny.
Someone tweets a photo of him taking the photo.
Just kind of funny.
Behind the influencer?
This is the best part.
Yeah, I thought it was great.
Someone saw him do this, okay.
Yeah.
Where I found this photo was actually,
he just retweeted it,
so he thought it was funny too.
And so the rumors, Ryan,
start going around on Sunday
that the board wants to do an oopsie,
a reverse Uno card.
And they're like, okay, okay, no, we made a mistake.
Let's get Sam back.
A nice little takesy-backsy.
Right.
They're sorry?
So the board had instated the CTO, Mira Murati, as interim CEO.
And Mira was like, okay, well, as CEO, I plan to rehire Sam Altman and also President Greg in a capacity that's yet to be finalized.
Yeah, but he was CEO, right?
So you're going to rehire him under, like, working for someone else?
Well, I think the deal is, Mira, who's now interim CEO, is like, well, the board just effed up. I'm bringing him back. I'm in charge of this company. I'm going to do what's best for this company, and what's best for this company is getting Sam back into the company. These are the rumors that are going around Sunday. So then everyone's like, oh, okay, Sam was fired, but now he's going to get rehired inside of, like, 36 hours. And that's the drama.
And again, this is just rumors coming out. Emily Chang put out this tweet that linked to this
Bloomberg article that had one important paragraph that started to indicate why Sam was fired. Quoting from the article: Altman,
who spearheaded efforts to transform OpenAI from a nonprofit into a commercially viable business, clashed with board members who were concerned that he was moving too quickly without sufficient concern for the safety implications of a technology that, left unchecked, could create content
capable of harming the public. So this, Ryan, was our first indication that this is an AI safety
issue. Now we're starting to see it. It's actually sort of a schism around a belief system, right?
which is like one of let's accelerate,
let's move faster into the AI frontier.
And the other is like, let's slow down.
Let's be careful.
Let's, you know, halt our progress
and make sure it's safe before we push updates into the world.
So the speculation becomes,
okay, Sam and Greg, the board members of Open AI,
were the gas pedal part of the board.
And the rest of the board were the brakes,
were the AI safety people.
Interestingly, the next decision by the board was to hire Emmett Shear as the new CEO of OpenAI. Who is Emmett Shear?
Wait, wait, wait. So just really quick, though, the previous CEO, Mira was only CEO, like,
very temporarily?
For the weekend.
Yeah.
Just like a weekend, you could do that?
And now they're replacing her with yet another CEO?
I guess you always have to have somebody calling shots at all moments of time.
You can't have like a void.
That's just chaos.
Okay.
So she was very much interim, like 48-hour interim CEO.
Yeah, I don't even know if she made it 48 hours.
Because the board brings in Emmett Shear.
And who is Emmett Shear?
He is a part-time partner at Y Combinator, former Twitch CEO.
So another tech executive.
And importantly, Emmett Shear is an AI decelerationist.
So here's a tweet from Emmett that says,
I specifically say I'm in favor of slowing down,
which is sort of like pausing AI development, except it's slowing down.
We're at a speed of 10 right now.
A pause is reducing to zero.
I think we should aim for a one to two instead.
So this is the conversation about how fast we develop AI versus how safe we want to be.
And Emmett Shear is just placing himself on that spectrum at, like, a one to two, which, if I put it in miles-per-hour terms, is like a solid 15 miles an hour. Because the context is that this AI technology is dangerous. At least
many people believe it's dangerous. And this has been a debate that's been raging quite publicly for
the last year or so between decelerationists, AI decelerationists and regulators and even the White
House has weighed in on this, and accelerationists who say, no, we've got to move faster.
We can create a better future. This is key to productivity. Jobs. This is the next wave of tech
that's going to set our species free, all of these things.
And so Emmett is very much a decelerationist, I suppose.
But I mean, if you are all the way decelerationist, why in the world would you work at OpenAI?
You're just like, I mean, there's one extreme here, which is just shut open AI down entirely.
And then what happens?
Does the rest of the world stop?
I guess that's the meta question here.
I think that's been some of the takes that have been going around the Twittersphere lately: well, the board of directors' vision for AI is for OpenAI to, like, self-destruct, to not exist. So this is kind of the divide-by-zero approach, and the board's governance is very much in conflict with, well, A, its employees, B, Sam and Greg, and C, general Silicon Valley tech innovation at its very core. Silicon Valley is always like, go, go, go, go, go.
And so this is just a conflict of interest in direction and the future.
And it's weird to have OpenAI, no matter what you believe. Like, maybe you do believe AI is dangerous and that deceleration is good, but you can still
accept the take that a company that's in charge of developing AI should not have an AI decelerationist
as its leader. Like, that's just a conflict. That's a conflict. But this is why it's kind of a microcosm
of the debate and sort of proves, I think, one side more right than the other, because we actually
get to test what happens when you start to shut the doors of accelerationism. What happens? And I think
the next thing that you're going to explain, you know, tells us exactly what happens. But the first thing that happens, and this is a take from Jason Calacanis, is a lot of value was just destroyed, at least value from OpenAI. So shareholders in OpenAI cannot be happy. This is Jason Calacanis saying, the
employees at OpenAI just lost billions of dollars in secondary share sales that were about to happen
at a $90 billion valuation. That's over, done. I think OpenAI will lose half their employees, their 12-to-18-month lead, and 90% of their valuation in 2024.
Just insane value destruction.
So that is kind of the investor VC take.
The question, though, is where does this talent go?
Where does this energy around AI accelerationism?
Where does it go?
Does it just stop?
Because OpenAI has said, we're going to put the brakes on our organization.
And the board has decided to slow things down.
Is that the end of the story here?
Or like, what happens next here?
Oh, no, this is about halfway over.
Interestingly, when it was announced that Sam was removed as CEO, Microsoft is one of the bigger owners of OpenAI shares.
They're one of the biggest investors.
And if Jason has this paper-napkin-math valuation of OpenAI right, well, actually, if you look at Microsoft stock, it decreased by a valuation commensurate with the paper math that Jason is putting forward here.
So the market is repricing Microsoft, a major owner of OpenAI, down because of the destruction of value that Jason is talking about.
Why? Because Microsoft is providing cloud services to OpenAI? Is that it? It owns the equity. It owns shares of the company.
Okay. Yeah. So OpenAI is an independent organization, but it has investors, and one of the largest investors is Microsoft.
Yeah. So Microsoft, the price goes down. It doesn't tank, but it goes down. That's our public market view, right? Because we can't see OpenAI's valuation directly. It's still private. It doesn't have any shares on a public exchange. So Microsoft is a proxy for it, then.
Right.
But Ryan, this morning at 3 a.m. Eastern time, Satya Nadella tweets out,
we remain committed to our partnership with Open AI.
Not only is Microsoft, one of the larger shareholders of Open AI,
they also have a partnership with them, a business deal.
We'll talk about that.
So they remain committed to their partnership with Open AI
and have confidence in their product roadmap
and their ability to continue to innovate with everything announced at Microsoft.
We are extremely excited to share the news that Sam Altman and Greg Brockman,
together with colleagues, will be joining Microsoft to lead a new advanced AI research team.
Oh, my God.
So I'm guessing over the weekend, Satya, the CEO of Microsoft, was like, this is my moment. There's a free agent, his name is Sam Altman, and I want him on Team Microsoft.
I'm going to burn the midnight oil
and get whatever is going to be let go by OpenAI
and they're going to come to Microsoft.
This is maybe the greatest acqui-hire of all time here.
It's an acqui-hire, but without...
So one of the big conversations going around is like
Microsoft would have never been able to buy OpenAI
because of antitrust regulations.
It would have been too big.
But when OpenAI just fires Sam Altman
and then Greg leaves,
and then you could only presume that some amount
of employees are going to follow. Well, Microsoft is like, well, this is free real estate,
and we don't even have to go and ask for permission from the feds. And so this is just like
routing around antitrust, and now they're going to take the largest talent pool around AI
and bring it in-house. So huge win for Microsoft. So David, is this all confirmed? Is Sam on board
with this? Well, Sam Altman retweets this tweet, saying, the mission continues. So you
can only imagine that Sam is on board with this. Yeah.
Yeah, he says, Satya wins. Reflexes of a startup CEO,
again, just talking about moving fast, moving quickly,
the resources of a trillion-dollar company.
Pretty sure Microsoft is the number one
highest-valued company that exists.
And pulling this all together in 48 hours from a cold start, getting it signed and done before the markets opened.
Again, this announcement got tweeted out
3 a.m. Eastern time,
about six hours before the markets open this morning.
And so that hole in the Microsoft valuation, the one-to-two-percent drop as a result of OpenAI losing Sam Altman, Microsoft stock just rebounds. So it's like, oh, well, Microsoft lost, but then it recovered it. And actually, it recovered it in a more internal fashion: rather than just being an owner of OpenAI, it's now an owner of the talent.
Do you see this meme?
Oh, I did.
Okay, so somebody's photoshopped a Microsoft badge onto, you know, Sam Altman's earlier visitor badge photo.
Yeah, that's exactly right.
And so now the memes are like, well, now Sam Altman works at Microsoft.
Okay.
So that's the story of Sam Altman.
And that is as much information as we have about where Sam Altman is going next along with
Greg.
They're going to Microsoft.
There's not much more information around that.
But there's big questions about, okay, what's left for Open AI?
Well, like, what's the next step of the direction?
If you scroll through Sam Altman's Twitter account, there's this line: OpenAI is nothing without its people. And so many employees of OpenAI are tweeting out this line. OpenAI is nothing without its people. Mira Murati, the CTO that we talked about, tweeted it out. Brad Lightcap, the COO of OpenAI, tweeted it out. And Sam is just retweeting the exact same phrase. OpenAI is nothing without its people.
They're all, it's like a chorus.
It's almost like a collective social media chant at this point.
Open AI is nothing.
without its people. Yeah, it's like a call to solidarity of OpenAI employees. And so this is OpenAI employees, in my mind, banding together in protest, like, yo, if you do not align with the employees, the people of OpenAI, then you're going to lose.
You're going to lose out. Well, that's why this is an interesting schism, because it's a schism
based on what you believe, right? And, you know, to take a lesson from crypto, it seems like this all settles down to kind of the social layer, what we call in crypto the layer zero, basically. And so if Sam is saying, hey, no, we want to be an eight or a nine on the
accelerationism, right? And the board is saying, no, we want to be a one or a two, right? Ultimately,
who actually decides? It's the tech developers. It's the employees. It's the people at OpenAI,
because if OpenAI is not going to provide them an eight or a nine, they'll go somewhere that does.
Maybe they'll go to Microsoft.
And that sounds like exactly what Sam and his fellow co-founder are doing in this case.
And I think it got even more explicit than that when, at 8:48 a.m. Eastern time this morning, 550 of the 700 employees of OpenAI wrote a letter to the board telling them to resign.
Wow.
550 of 700 employees.
That's basically all of them.
That's basically all of them.
And you can imagine that the remaining 150 might just be, like, too lazy to sign, or have some other reason.
The post is pretty damning.
We don't have time to read it all here,
but there's a link in the show notes.
I'll just read the last paragraph.
Your actions have made obvious
that you are incapable of overseeing OpenAI.
We are unable to work for
or with people that lack the competence,
judgment and care for our mission and employees.
We, the undersigned, may choose to resign from OpenAI
and join the newly announced Microsoft subsidiary
run by Sam Altman and Greg Brockman.
Microsoft has assured us that there are positions
for all OpenAI employees at this new subsidiary, should we choose to join.
Wow. Wow.
Okay, and 550 of the 700 employees of Open AI,
signing this thing.
Yeah.
And so this is when Balaji was saying, yeah, like, Microsoft is a trillion-dollar, a multi-trillion-dollar company.
They have all the resources.
They can pay these people.
Yeah.
They can pay all of these salaries.
Well, that's the thing.
I mean, the OpenAI board was trying to fork in a different direction. And the runners of the OpenAI node, namely the employees, said, no, we're going to maintain this.
We're not going to allow you to do that.
It's a massive pushback here.
Interestingly, look at the signers on this letter. It says it's signed by 550, but we can see the first 12. Number 12, Ilya Sutskever, is one of the guys on the board. He signed the letter against his own board's actions...
Look at this, David.
This just happened.
He tweeted out, I deeply regret my participation in the board's actions.
This is Ilya, fellow board member, one of those remaining four that we were talking about earlier in the episode.
I never intended to harm open AI.
I love everything we built together, and I will do everything I can to reunite the company.
Wow.
Basically, Ilya is saying, oops, we effed up.
So those are the details as it stands, Ryan.
As of Monday morning, this is all that we have.
Half, over half of OpenAI employees are threatening to leave to go to Microsoft,
where Sam and Greg already are.
But really, we need to talk about what does this all mean?
What is the future of AI in Silicon Valley?
Why is this accelerationism versus decelerationism defining the landscape?
I want to unpack this part with you because I think this is going to have probably
very sweeping broad implications for this entire country and really the future of this entire
globe because we all know how big of a deal AI is. So we're going to get to all of those thoughts
and reflections. But first, a moment to talk about some of these fantastic sponsors that make
the show possible. Kraken knows crypto. We are all on the journey of building a better financial system. And Kraken has been leading that charge for over a decade. Crypto is world-changing tech, and it's Kraken's mission to accelerate the adoption of crypto so that you and the rest of the world can achieve financial freedom. Head over to kraken.com to see what crypto can be.
And once you buy your assets on Kraken and you need to start exploring DeFi, make sure you explore it through your MetaMask Portfolio. A deeper, more expansive way to use MetaMask that gives you the battle station you need to navigate the bull market. You can buy, swap, bridge, and stake your crypto assets with ease. I already know that you have a MetaMask wallet, so go check out your MetaMask Portfolio.
Did you know that Arbitrum is the fourth largest chain by economic activity in crypto?
How did Arbitrum get there?
Well, with low fees and fast transactions, of course.
With over 600 apps on Arbitrum, the Arbitrum ecosystem has a solution for you, whether you're into DeFi, NFTs, or you simply need a fast chain, or even if you want your own dedicated throughput with an Arbitrum Orbit chain. Visit arbitrum.io to get started on your journey with one of the most active chains in crypto. And if you want to try out a newer layer two
to the Ethereum family, try out Celo, a battle-tested EVM layer one that has recently decided to move to Ethereum. Celo is the mobile-first, carbon-negative blockchain built for the regenerative economy. With the Celo layer two, gas fees will stay low, and you can even pay for gas using ERC-20 tokens. Follow CeloOrg on Twitter and visit celo.org to shape the future of Ethereum.
Uniswap Labs just released the Uniswap mobile wallet for iOS. The newest, easiest way to trade tokens on the go. You can easily create or import a new wallet, buy crypto on any available exchange with your debit card, and you can seamlessly swap on mainnet, Polygon, Arbitrum, and Optimism. So you can now go directly to DeFi with the Uniswap mobile wallet. Safe, simple custody, from a trusted team in DeFi. Download the Uniswap wallet today on iOS.
Are you launching a token?
Is it already live?
How are you managing the legal and tax
for providing token awards to your team?
Toku simplifies everything about managing token grant compensation.
And you can get started for free.
With Toku, you'll have access to top-notch legal and tax support
to handle the distribution and management of tokens for your team.
Toku understands every grant structure and caters to every step of the compliance process.
Visit them at Toku.com slash bankless.
And last up, GMX.
But specifically, GMX V2, offering even faster on-chain trading for DeFi liquidity providers. GMX is a permissionless, decentralized exchange that offers perpetual futures and spot trading.
Liquidity providers receive 63% of all of GMX's protocol fees,
and GMX users get a referral link to lower fees for you and your referrals.
Try out GMXV2 now at app.gmx.io.
Now, on to the show.
All right, David, so now the question is,
what does all of this mean and how will it define the decades to come?
So it seems like this is planting the seeds for maybe a schism in tech, and a schism just in general with how the world addresses this technology. And so Sam being fired was just a symptom of decelerationism versus accelerationism and their
outlook on artificial intelligence.
So what do you think this means for the future of AI?
This Atlantic article came out just now that I thought put it very, very well, or at least
defined the playing board very, very well.
So I'll read two quick passages here.
To truly understand the events of this past weekend,
one must understand that OpenAI is not a technology company.
OpenAI was deliberately structured to resist the values that drive much of the tech industry:
a relentless pursuit of scale,
a build-first, ask-questions-later approach to launching consumer products.
In this conception, Open AI would operate much more like a research facility or a think tank.
The company's charter bluntly states that OpenAI's primary fiduciary duty is to humanity,
not to investors or even employees.
And when you understand that, Ryan,
I feel like you can actually justify the board's governance decisions
if that is the vision of Open AI.
If that's what they did,
that sounds in alignment with what their original vision is.
It just sounds like the 550 employees that signed that letter
don't give a rat's ass about that alignment, that vision.
They are all here to build tech.
So the second passage that I'll read here:
in conversations between The Atlantic and 10 former and current employees at OpenAI,
a picture emerged of a transformation at the company that created an unsustainable division among
leadership. Together, the accounts of OpenAI employees illustrate how the pressure on the
for-profit arm to commercialize grew by the day and clashed with the company's stated mission,
until everything came to a head with ChatGPT and the other products that launched rapidly after.
After ChatGPT, there was a clear path to revenue and profit, said one of the employees.
You could no longer make a case for being an idealistic research lab.
There were customers looking to be served here and now.
I think this defines the schism pretty damn well.
OpenAI was once upon a time a research division trying to research AI for humanity, to improve humanity with AI.
And then they made ChatGPT and, like, well, this is a very commercial product.
And now we are in the race of Silicon Valley.
and Silicon Valley equity incentives.
And so I think with the advent of ChatGPT,
when did ChatGPT come out,
like maybe two years ago?
No, in public, yeah,
about just over a year ago, David,
ChatGPT came out and was available to the public.
And so I'm guessing at that moment,
there was a divide that was created,
which was like one part,
let's research AI for humanity.
And the other part was like,
let's make an extremely valuable tech company.
And it looks like that forking of the company
finally has come to a head, and now we're going to see which part gets unbundled and which goes where.
I guess I have a few takeaways from this, but maybe the main takeaway is there's no stopping
this train, is there?
Like, this tech is going to be developed, isn't it?
Unless some greater force actually puts a stop to it.
I think it would take at this point in time a government threatening to throw people in jail
for this to stop, at least in the United States.
And then this is the game theory of it.
If you stop in the United States,
why wouldn't China continue?
Why wouldn't Europe continue?
Some other nation-state apparatus.
And I think this is what,
maybe there's a way to view this.
Okay, so your takeaways from this
will all be about how optimistic
versus pessimistic you are on artificial intelligence.
If you are Eliezer Yudkowsky,
and you think that this is a world-ending,
humanity-ending technology, then this is exactly what you would have predicted. This is exactly
what you are afraid of because you are essentially saying that this is a runaway train and Silicon
Valley capitalist incentives will make sure that this technology gets developed to its
utmost because to not develop this technology is to lose out on a lot of shareholder profit,
a lot of money, or maybe a competitive advantage versus all of your other, you know, all the
other countries. And so this is what Eliezer was afraid of. On the other side of things, if you are
very optimistic, maybe you're a techno-accelerationist yourself. If you're somebody like Marc
Andreessen, you're investing in this space, but you also believe that this technology is going
to ultimately be good for the world and we will navigate these thorny problems. And the best way through
is maximum competition, like AIs being developed in various places, with one AI
checking the power of another AI, so nobody gets a competitive advantage or gets captured
by a regulatory agency. Maybe that is the more optimistic view. Then this is also what you would have
predicted, but you see this as a win. You see this as a gain, and you see OpenAI trying to artificially
pump the brakes on progress. For AI accelerationism, you look at this and you say, we told you,
this is exactly what's going to happen. You can't stop this. The people don't want it stopped.
this technology is going to manifest.
So you better have it in the borders of the United States
or else it'll go elsewhere.
And this is what somebody like Marc Andreessen would probably say as a result.
So the one thing that I'm coming to a conclusion of is there's no stopping this train.
And I don't know how it stops.
And it's not clear to me, David.
I still haven't made my own mind up on whether I'm, like, more team Eliezer Yudkowsky
or more team Marc Andreessen.
Like I just don't know.
And I don't know if anyone does.
Those are my takeaways as well. I think there's one thing we can be sure of. It goes back to the
Balaji tweet that we saw earlier where, I mean, OpenAI at $90 billion of value, if all those
employees go to Microsoft, and who are the employees that are going to go to Microsoft? They're going
to be the accelerationists who want to move fast and break things and make shareholder profits.
And that is exactly the fear that people like Eliezer Yudkowsky have. It's like, well, yeah, all the
profit-aligned people are going to make a profit-aligned company, and they're going to accelerate
AI, and then of course, if you're also,
like Eliezer, someone who thinks that
the acceleration of AI leads to the doom of
humanity, it is
exactly how you would predict it. Now we have
accelerationists no longer being
buffered by decelerationists, because
all the accelerationists have
forked off into the pro-accelerationist
arena, which is now Microsoft.
I think Haseeb put this
really, really well. Haseeb of Dragonfly.
He says, this weekend, we all witnessed how
a culture war is born.
Effective accelerationists now have
their original sin that they can point back to. What he's referring to is the destruction of
$90 billion of shareholder profit, of shareholder equity. This will become the new thing that
people will feel compelled to take a side on. Accelerationism versus decelerationism, and
nuance or middle ground will get punished. So I think this is the schism that is unfolding that
we are watching people take sides on. And it's unfortunate. If you're in the Eliezer
Yudkowsky, AI-will-bring-us-doom camp, you're seeing all
of the people who want to increase shareholder equity, to increase market cap,
increase value, going to the accelerationist side. And those on the decelerationist side are
losing because they're destroying value and no one is on that side anymore. It is the
prisoner's dilemma trap of like, well, everyone's going to choose to profit and therefore
they're going to choose to accelerate. That's the takeaway. And this is the schism. And what an epic
schism it will be. Because I think through the events that just played out, both sides will come
away from this feeling like they're right and they're vindicated, and all of their concerns and,
you know, the energy and the momentum of their arguments were just deemed valid. And so, yeah, it's going
to be quite the religious fight ahead. And I do call it a religious fight because I do think that
there is some dogmatic belief associated with the extremes of both sides. I'm hopeful, David, we're
able to find some happy medium that gives us the promise and benefit of technology like
artificial intelligence and doesn't cause the collapse and destruction of all of humanity.
That is certainly the hope.
And it would be great if we were able to thread that needle, right?
Yeah, it really would.
It would be more than great.
It would be kind of critical.
But at least that ends this current saga of this drama of what is now the fight of Silicon
Valley, which is accelerationism versus decelerationism.
Bankless Nation, we'll have some more episodes for you on this divide, I'm sure,
in the near future.
Hope you enjoy the show.
If you're not familiar with Bankless, make sure you like and subscribe.
We'll get more of this content coming your way.
