Big Technology Podcast - Sam Altman To Microsoft, OpenAI In Flux — With Dan Primack
Episode Date: November 20, 2023

Dan Primack is the business editor at Axios and author of the daily Axios Pro Rata newsletter. He joins Big Technology Podcast to break down ex-OpenAI CEO Sam Altman's apparent move to Microsoft, whether he'll stay there, how the board structure enabled this, and how the competitive balance in the AI field shifts now. Stay tuned for the end, where we rate winners, losers, and a bold prediction from Primack.

You can subscribe to Big Technology Premium for 25% off at https://bit.ly/bigtechnology

Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice.

For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/

Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
Sam Altman is heading to Microsoft to build an advanced new AI research team
along with the former president of OpenAI Greg Brockman
and what it seems like is going to be many former OpenAI employees.
We'll break it down right after this.
Welcome to Big Technology podcast.
We are apparently a daily show as long as this OpenAI drama is going on.
Dan Primack is here.
He is the business editor at Axios and the author of the daily Axios Pro Rata newsletter.
Dan, welcome.
to the show. Thanks for having me. It's good to be here. I wasn't expecting to be here,
but good to be here. Right. Yeah, I feel like when our listeners hear your voice, it means the
shit is hitting the fan in Silicon Valley. And here we are again. Yeah,
that's where we are. Absolutely. Let me start with this question. We now know that Sam
Altman is apparently going to Microsoft with Greg Brockman and a bunch of other OpenAI employees.
There's, like, 500 or more OpenAI employees who are still trying to get Sam to return to the company.
Is this podcast going to be obsolete, like, the moment that we post it?
Or do you think he's actually sticking at Microsoft?
It may be obsolete.
I mean, there's basically two schools of thought here.
One is that Microsoft has just had it with OpenAI and the board, despite the, you know, we-support-them statements, et cetera.
And it's just like, you know what, we can build, if not this exact thing, then something like it, in house.
We already have access to ChatGPT.
You know, there's no patents on it, et cetera.
So let's just bring the brains in and let's build our future because AI is obviously
going to be part of our future.
You know, Microsoft at its core is a productivity company.
And obviously, AI at its core, particularly things like ChatGPT, is a productivity tool.
The other theory is that Satya Nadella has just basically called the bluff of OpenAI's
board and is basically using the hiring of Sam and Greg Brockman as leverage against it,
which is basically to say, hey, if
you don't hire them back, which is what we, the people who have given you, you know,
$12, $13 billion, want, we'll take all your people. So, you know, you can maintain
and continue to be the board of directors, but the board of directors of what, exactly? Because
it's not just rank and file. As you said, it's 500-plus of around 700 employees. It's
almost all the senior executives, including the woman they put in as interim CEO just on Friday,
and including the other co-founder who was part of the board coup. He signed a letter basically saying,
fire me from the board, do it by end of day. Yeah, so I guess there is a possibility that they might
still go back. But if you're Microsoft, you're basically just like, this is going to accomplish all of
our goals right now. I mean, we'll have to pay some salaries, but we're going to basically intake
the world's leading research house. Yes, we did this investment of $13 billion, but a lot of that was
Azure credits, so effectively we can now take them, put them inside Microsoft, and have them continue on
their work. It's going to take some time to train the next models, but do it without any strings
attached. Pretty good outcome. It is a good outcome. I'd say the only caveat to that, though, is I'd
kind of been under the impression that one of the reasons that Microsoft, for example, hadn't tried
to buy OpenAI outright. And I know part of that was Sam didn't particularly want it to be sold.
But one of the reasons Microsoft wasn't trying to necessarily buy it was that it gave Microsoft
some plausible deniability when it came to these safety and guardrails issues. Right. Microsoft is one
of these legacy big tech companies. They have an ethics bureaucracy, which is very strong inside the
company. And what Satya Nadella, the CEO of Microsoft, was able to kind of do was let OpenAI
make stuff. And I know there was obviously a conflict inside of OpenAI about how fast or how slow they
should go. But if there were things that maybe would have been objectionable inside of Microsoft,
well, Satya was able to let that development go because it was a third party at arm's length.
And he could say, well, that's not us, that's them. So now, by bringing Sam and Greg in house,
if that's how this plays out, he's kind of bringing
that ethics bureaucracy onto them, and they're kind of the folks who were go, go, go, go, go.
So I could see a conflict there inside of Microsoft.
Yeah, I got a text from an AI CEO saying that they cannot believe that these guys are going to
stay inside Microsoft for too long, although then they'd have to like, I mean, not be crazy,
right, shift the employees again to another company.
Like, it was either going to be startup or go.
It seems nuts, but also, I mean, if you had
told me, you know, a couple days ago that Sam Altman, you know, forget about the circumstance
here, but Sam Altman would want to go work for a big tech company, I would have been stunned.
I mean, that doesn't strike me as him even a little bit.
Yeah, I wrote a tweet that the world's greatest tech leaders are destined to be middle management
at Microsoft and it was only half joking there.
Absolutely.
That's where he is now. So, okay, let's say everything continues according to plan. They're
inside Microsoft. Is there a plan? What's the plan?
The plan is that they're inside Microsoft. They're building this new GPT tool.
I mean, plan, as in plan, hatched like five minutes ago.
But that's what it is.
They're effectively now going to be operating outside the auspices of this safety-focused
AI board that was running OpenAI.
Does that mean that this board was kind of fake all along?
Like the second they took this investment, wasn't that out the window?
Like they had to build to get more money and they had to get more money in order to build.
And so like this whole idea that you could run a company of this magnitude,
in a way that, like, reports to a nonprofit, yeah, a nonprofit board that's focused on
AI safety. Like, they tried to enforce their terms, if that's what actually took place,
and now they're going to basically have Altman and crew go do this without their oversight. Yeah, I mean,
look, it was a screwy structure to begin with, and in particular because, remember,
you know, when we think of boards of directors, their primary job, and they have other jobs, but
their primary job is to ensure or protect shareholder value, whoever those shareholders
may be, public or private. That wasn't this board's job. This board's primary job was to protect
the mission, which is a different thing. But it's also
worth noting the board that acted last week is not the board that OpenAI began this year with.
There were more people who had, for lack of a better term, corporate experience at the beginning
of the year, namely Reid Hoffman, who was on the board and left in March,
either pushed or voluntarily, because he had launched this competing AI company called Inflection.
He also was on the board of Microsoft.
And for all the talk, including for me, that, you know, how the hell is Microsoft making this investment without some sort of eyes and ears on the board?
For a long period of time, Reid Hoffman was those eyes and ears.
And I'm sure he, you know, was, you know, maintained the proper rules and compartmentalized.
But like, I think Microsoft felt that if all hell was really breaking loose at open AI, Reid was, if not going to tell them, at least be the guy who stood
there and put his finger in the dam. But he's not there anymore. So what was left was this weird
group that, with the kind of exception of Adam D'Angelo, who's an early Facebook employee and
founder and CEO of Quora, didn't really have any major corporate board experience, major legal
experience, major marketing experience, major executive experience. And so even if the structure
itself was screwy, it could have maybe worked better if they had had a better board. Right. And so
there's been a through line through this, or a meme about this, that basically
there's people arguing that AI can't be this serious. Like, this would never be the case at
even a pharmaceutical company, that they would have a board like this. And AI can't be
so real if that's the case. What do you make of that? I mean, Theranos wasn't a pharma company,
but it was a medical device company. They had a board, bad board. I mean, you have FTX,
which was technically a financial institution, right? It was an exchange. It was an actual,
I mean, quasi-securities exchange. They had no board at all.
Twitter's, you know, one of the most powerful social media platforms in the world.
It does not have a board of directors at the moment.
So, I mean, I don't think it is a knock or I don't think it's a reflection on the industry.
I think it's a reflection on the organization.
Okay.
And again, to a certain extent, on the investors in it, who were, once again, and we have now
had this story a few times, willing to overlook it.
Right.
I've got to cite this piece of warning that they gave to OpenAI investors.
They basically said investing in OpenAI is a high-risk investment, and investors could lose their
investment and should see it as a donation. But yet Thrive Capital, Sequoia,
Microsoft put money in. Sequoia, by the way, this is now three for them, right? They were in FTX,
they are in Twitter, you know, two companies with no boards, and now they do this one, which has a quasi-board.
I mean, look, they are, historically, the best venture capital firm in the Valley.
If I was asked to put my life savings in a VC firm, I would probably give it to them.
But good God, how many times can you make the same mistake?
Well, when you have three like that, there's obviously some systemic problem there.
What do you think it is?
I think the systemic problem is that they are valuing the founder over the governance,
or they're not considering governance as a significant enough risk factor when they're making
their final decisions.
Is there a zero-interest-rate-policy component to this? Like, all this stuff
was made when, you know, companies were much looser with their money. Do you think this is
going to change now that we're in a different environment? I mean, some, but I think this was more a
FOMO issue, right? You know, OpenAI is the leader. You know, if you believe, and most people, I think, in the
valley do, if you believe that AI is the next great platform shift, and maybe a more important platform
shift than was mobile, or maybe even the internet, depending on who you are, well, you've got to make
your bets. And it is a bigger problem to not be on the ship than to be on a ship that
happens to crash. So I had this thought that like now maybe companies will be more careful about
investing, but your perspective seems to be no. I mean, not yet. And look, and part of that might be
how this whole OpenAI thing plays out. Right. If Sam and Greg and basically most of these
important employees at OpenAI end up at Microsoft, then for the VC investors,
Thrive, Sequoia, Khosla, et cetera, they're holding a bag that's worth basically nothing, and they get
absolutely screwed. I mean, all the talent walks out the door, and then what, exactly? They're quasi-shareholders
in a small nonprofit think tank. Like, that's not good for them.
On the other hand, if this is Microsoft putting the screws on the board, really with Thrive
and Sequoia, et cetera, cheering them on and a piece of this too, and Sam and Greg end up back,
and then, you know, by this Friday, kind of everybody's there and the board is gone.
Then maybe it works out well for them, maybe it even works out better for them, because the
folks who are advocating for those guardrails are gone.
That might not be good for the world, but it's probably good for these investors.
Right. Reed Albergotti had this interesting thread about the shell that's going to be left. I mean, Sam and Greg are going to be at Microsoft, you know, if everything continues. And OpenAI can only use Microsoft to train their models as part of the deal. So when they're training, that data will go to Sam and Greg. Microsoft is playing hardball right now. I mean, for all of
the statements of, we value our partnership. I mean, with friends like these. Right. Do you think, one of the
things that I've heard is that, you know, this kind of shows an interesting look into Microsoft
culture, how fast they were able to move. Like, people have been telling me, listen, like, Sundar would
never move this fast. Even Apple might not move this fast. It's so funny because Microsoft has looked at
as this like big clunky company, but their speed and their execution here is pretty impressive.
And it really has been since Satya took over, right? I mean, you even think of some of the stuff he's
done in gaming. Yeah, this is very impressive. And let's also not diminish the idea that they're
angry. Like anger is a good motivator for people, right? Like they found out, you know,
Ina Fried of Axios, my colleague, she wrote that Microsoft was given a heads up of Friday's
announcement, about Sam, by a minute, one minute. You know, we gave you $12 billion, granted, some in credits.
But, like, you give a 60-second heads up that this guy is gone? You don't even have the courtesy
to tell us. I mean, F you. And we're angry, and we're going to
work the weekend to fix this. Yeah, and it made Satya look silly at the outset. But then,
it absolutely did. Kind of a rebound. Yeah, so far. And you're right. Look, Microsoft,
Microsoft has kind of always been in the background. And they, they, you know, they've obviously
always been a big tech company in terms of market cap and stuff. But they were viewed for a very
long time as the sleepy company when, you know, there was Meta and Google, et cetera, and Apple
and Amazon. And they are absolutely at the forefront now. What Satya has done
is probably, to me, the most impressive kind of big tech CEO tenure that's
existed outside of founders. So just thinking again about OpenAI, do they just kind of
wither now? It seems like that's the case. Come back to me at 5 p.m. today when we see do 500
employees really quit? And the board has an interesting decision to make here because
if their job is to protect the mission, the mission isn't safety. The mission is the advancement of AI.
Safety is a piece of that.
So does the board decide that it existing in its current form is basically more important
than those 500-and-change employees staying with the company?
I mean, if so, I would suggest that's a pretty narcissistic point of view, considering
that none of these people, with the exception of Ilya, are AI engineers.
And Ilya, the AI engineer, says the board should resign.
But how does Satya even let this, you know, go back now?
Oh, because it would be great for Satya, because they've already made the investment.
They already have the partnership with ChatGPT.
And again, if Open AI is at arm's length, they don't need to deal with all the thorny ethics issues.
Let Sam and Greg deal with those.
And they don't have to be, Satya doesn't have to be the one who gets called in front of a Senate committee to talk about this crap.
Sam does.
But there's no embarrassment of like, oh, I said they were coming in and now they're actually not coming in and they're still going to be, well, I guess if you rearrange the board, then you don't have to deal with that issue.
If you rearrange the board, and assuming this is true, if you get out the narrative that Satya organized this, that Satya knew
exactly what he was doing, and he forced their hand, and it worked. Yeah. That will be,
you know, because one thing we do know, Alex, is that over the weekend, and maybe
actually starting Friday night, Satya slash Microsoft and the major investors in OpenAI
have been pressuring the board to bring Sam and Greg back. And it really looked as of
Sunday afternoon like that's exactly what was going to happen. Then OpenAI's board named
somebody else, Emmett Shear, the former Twitch CEO. And then the Microsoft announcement of Greg and Sam comes, what,
12 hours later or something like that.
So I think this was, I do get the sense that this is Microsoft's attempt to put them back in.
And the worst thing that happens for them is that they work in house.
Oh, my goodness.
Yeah.
I mean, I feel like we are just ready for one more twist.
The podcast feed will be, like: Sam fired, what we know about Sam's firing,
Sam to Microsoft, Sam back to OpenAI.
I mean, the lesson is don't invest in startups founded by guys named Sam.
It is messy.
It is complex.
Speaking of which, what do you think about this through line of the effective altruists in both arenas?
Yeah, that's an interesting piece of this.
And I guess, again, like everything else, let's see how this all plays out.
But I find that to be an interesting piece.
Obviously, OpenAI and FTX are very different organizations with very different people.
But yeah, the EA thing has once again kind of reared its head in, call it a very counterproductive way.
Yeah. And then what about, again, the safety thing? Let's say they do go through with this, and either they go to Microsoft or they go back to OpenAI where, you know, there's a new board in place.
Like, actually, there was really no mechanism to, if there were real safety concerns. Like, Sam Altman last week was telling Laurene Powell Jobs,
like, you're not going to believe what we have coming up. It sort of changes the barrier of comprehensibility and all these things.
Like, are we even able to develop this stuff safely?
Doesn't seem like it.
I don't think so.
I mean, to me, that horse has already left the barn.
I mean, no matter what happens at OpenAI and happens at Microsoft, I mean, and even this talk
about responsible and safe development.
Like, look, there is value in putting guardrails in place as things are getting built, right?
Because when you build a foundation more safely, then the rest of the house is less likely
to fall down.
However, that said, there are so many different organizations working on this stuff that I'm not
sure that one organization going slower or faster necessarily matters. You know,
maybe this year, next year, and the year after, but, you know, if we're talking about this
10, 15 years from now, I think this will be looked at as kind of a quaint conversation.
Exactly.
So a couple questions for you about the broader AI world.
So first of all, Sarah Guo, the VC who runs Conviction, has talked about how it's now going to be
open season on AI, and that everyone who, like, thought they didn't have a chance
against OpenAI, now they will.
You have Mustafa Suleyman, who was the co-founder of DeepMind. He tweets, utterly insane
weekend, so sad, wishing everyone involved the best. One sentence later:
in the meantime, we finished training Inflection-2 last night.
It's now the second best LLM in the world, and we're scaling much further.
So is it now going to, like, kind of level the playing field, given that they've had this
turbulence?
I mean, it may, although, you know, Inflection raised a ton of money.
Anthropic, which was founded by some people who left OpenAI in part because they thought
Sam was kind of moving too quickly on the commercialization side, you know, Anthropic has
raised an enormous amount of money, including from Google, including from Amazon. And there's a
bunch of, you know, other companies out there on the infrastructure side, you know, the Hugging
Faces of the world. I think it's a gold rush right now. There's a lot of companies
out there. Some are going directly at each other, kind of, with these foundational models. Some are
playing on other pieces of it. I don't think that OpenAI, you know, this is a little
bit to me like the early dot-com internet days, where some very, very big players, like
kind of fundamental, foundational players, think Netscape, right, you know, disappear eventually or get
diminished, not for reasons like this, not with all the soap opera drama, but kind of
disappear, but the, but the trend, the revolution, the platform shift goes on anyway because
there's so many people working on it in so many different ways that, that no one organization
owns this. Right. And then also just like from an infrastructure perspective,
I would imagine now you're having this
this great model agnostic movement
where any company,
you know, think about, like, the companies that built
on top of OpenAI. Now they're going to do
whatever they can to hedge, because they realize they can't
tie their ship to this, you know, one company.
So I do think that's kind of an undercover
so far part of this in part because it's only been a couple
days, but you know, you're right.
Companies have built, if not their entire business models,
definitely products, on top of ChatGPT, on top of OpenAI.
And look, by the way, that stuff's still working.
It's not like the thing shut down and now, you know, they're getting 404 codes on everything.
Stuff's still working. Microsoft, you know, Azure is still powering everything.
But it definitely has to give people pause.
And candidly, any time you as a company build on someone else's platform, look, we're in the media space.
I think we have learned this lesson over and over again from Facebook and others.
You have a massive, massive risk.
And diversification in some way is essential.
Wow. I've been thinking about Meta's research lab. I mean, they are trying to get to AGI, they're not worried about existential risk, and they're building open source. Did they end up coming out looking even better now than they did before? Not to mention that Zuck, you know, look at all these other CEOs that have had their issues, Zuck stays in place. Yeah, I think Facebook, and I've written this a couple times, I think Facebook is kind of the dark horse in this, and not because of all this personnel tumult. But there is, to me, this existential risk
for OpenAI, for Anthropic, for others, which is that a lot of the data that they are building their models on is potentially legally questionable, right?
You know, stuff they've basically pulled off the web, you know, that is in some ways copyrighted, et cetera.
There are a couple lawsuits about this, you know, from some music publishers, from Sarah Silverman, the comedian.
And if someone like a Silverman won or some of these music publishers won, every news organization, et cetera, would file a similar suit to try to, you know, get their piece of the pie.
Facebook is different, though.
Facebook is training its model on its own data.
Every time you or I ever publish anything to Facebook or Instagram, Facebook owns that.
That's in the terms of service.
There's no legal question whether Facebook owns that stuff.
So if these other big LLM companies end up having legal problems and end up, you know,
financially having to pay huge amounts for this stuff, Facebook, with Llama, is going to be sitting
pretty on this.
Exactly.
Okay, a couple more things I want to check in on.
I mean, one is, like, there's
this meme that's been going on about how the board was supposed to be, like, predicting
whether AI was going to kill us, but couldn't think three steps in advance in terms of what they
did. What do you make of that? I mean, I am desperate
for an actual interview with any board member on what really happened here, and kind
of what thinking at all they did. Because, I mean, they weren't even playing checkers. I mean, they
weren't even playing Candyland. I mean, this was kind of, unless there's really
something we don't know, but I would say that we're now, what, three-ish days after this. And
these board members each do know reporters. I can, you know, I know that we've reached out individually
to some, if there was a real good story to tell from their point of view, if nothing else, just to
protect their own reputations, something would have leaked by now. Something would have leaked by now
from one of them and nothing has to any reporter. I mean, for all the, you know, billions of words
that have been written by tech press and general press and Twitter and blogs, there's not been a
really good defense of what they went about doing. And that's not to say that their concerns weren't
valid or legitimate. I think, I think it's possible you can make that case. But the way they went about
it was so ham-handed and, again, they really shot themselves in the foot. Do you think it had more to do
with the ambition of using like the AI research or was it Sam trying to raise money for other
companies? I can't imagine it was the Sam-trying-to-raise-money piece. I mean, the story
that's coming out is that Sam was trying to raise money for, kind of, a chipmaking thing.
And I know that Sam's perspective on this for a while has been that we, as America,
but also AI as an industry, are highly, highly reliant on Taiwan as being this chipmaking hub.
And I understand we make some chips here, and Intel's building factories in Arizona. But, you know,
If Taiwan disappeared tomorrow, the AI industry stops tomorrow and open AI for sure stops
tomorrow.
And I think Sam's perspective was, if there can be another global chipmaking hub, and if that's
Saudi Arabia, so be it.
I don't think Sam was trying to create a chipmaking company to build chips
in Palo Alto and have the Saudi PIF fund it.
I think he wanted Saudi Arabia to become that next place, really as a redundancy, so that
if something happens with Taiwan, namely China invades it, well, the industry wouldn't grind
to a basically complete and total halt.
You can't just put up a semiconductor fab overnight.
Did he mislead the board about that?
Maybe we don't know,
although we keep hearing, quote,
no malfeasance.
Was he just not fully honest?
Do you really?
And again, this all goes back to the,
if it was so bad,
like the only reason CEOs usually get fired instantaneously
in the dead of the night,
you know, on a Friday afternoon is something,
you know, we found out that he stole money.
We found out he slept with the wrong person and lied about it.
something really extraordinary, as opposed to really a strategic disagreement. Well, then, if we
want to get rid of you, fine, but we work out a, you know, a transition plan that everybody's
on board with. By the way, you don't announce that the chairman is staying on as president when you
haven't told him that. You sure as hell don't do that. You don't promote someone to the new CEO role
who seems to actually be more in line with the people you just fired. Like, it's mind-numbing
how they went about this. And in the end, if they really wanted things to go slower,
I think the result is that things are going to go faster.
Exactly.
And okay, so Ilya Sutskever, he was part of the coup, you mentioned.
Now he's regretting it.
What do you think is possibly going on there?
I think he's trying to save his job.
Or let me rephrase, I think he's trying to save his job in the perspective of working with this
group of people, whether that is at Open AI or whether that's at Microsoft.
I mean, look, he is not, for lack of a better term, a politician.
And I mean that from a corporate perspective.
He's not a corporate politician.
I don't know him personally, but from those who do.
I don't think he necessarily appreciated the gravity of what he was doing.
I think he believed in, call it, the strategic thinking, right,
which is that Sam and Greg were not doing what he would want to have done on the technology side.
But I don't think he, you know, I don't think he played any game theory from a, you know,
a boardroom drama perspective.
I don't think he ever watched Succession, is what I guess I'm saying.
I don't think he ever figured what was coming next.
Okay.
So let's do, let's end with a couple of winners and losers.
and then quick prediction.
So I'll just go through some of the main players here
and kind of get your perspective, winner or a loser.
And imagining that things stay the same.
OpenAI? I imagine that's a loser.
Loser.
Microsoft?
I'd say loser, but not a huge loss.
They're making the best of their situation.
Okay.
Basically, like they were betting on the stability,
but this is sort of going to set them back,
but they should be okay.
Yeah, and they do look dumb,
not having some sort of protections in place
for this. I mean, again, I think they're playing the last couple of days very
smartly, but, like, I think it's kind of like, all right, that's a really good basketball
coach. His team's down 10, but he's figuring out how to get them back in the game. Yeah. Okay,
Google? God, I guess at the moment, winner. Yeah, sounds like a winner. Amazon?
Same thing. At the moment, a winner, but I think, you know, TBD.
Winner by not being part of this mess. Like, winner by not playing. And how does Sam Altman come out
looking from all this? Right now, as a winner, which is kind of remarkable. I mean,
I don't know of any founding CEOs who get fired from their company overnight and
are looking really good three days later. Sam has all the optionality. And the main reason
I think Sam's coming out looking good. Sam has a tricky reputation. He definitely likes the
spotlight, et cetera. You know, he obviously ran Y Combinator. And then there's always been questions
about what really happened at the end there. But look, the vast majority of open AI employees just
threatened to quit basically in solidarity with him. I would say that there are very few companies
that if the CEO was fired, that most people would be willing to quit in solidarity with their
CEO. That shows remarkable loyalty from the people he was overseeing. Okay. And prediction,
where does this go? Is this the end of the story? No. My prediction is that Sam Altman and Greg
Brockman are working at OpenAI in some way very, very soon.
I do think they probably come back.
I wouldn't bet any money on it, but that's my guess.
And no, I think there's going to, and I think there's going to be more twists and turns.
And I don't know what happens with Emmett Shear.
Yeah.
Well, I hope that it wraps before Thanksgiving.
It's been fun working through one weekend, but I don't want it to go through two.
I mean, can we say, I think they have to get that done?
I think this has to be done by Thanksgiving just for everybody's sanity, including the sanity of the board.
This cannot be fun for them.
Exactly.
And, by the way, let me add, these board members,
they're not founders of this company, they're not employees of this company. They can walk
away, and they'll feel bad about it. But, like, there are times when it's worth walking away, because,
A, you've screwed up, and you don't really have anything invested in it except time. They could just
say, you know what, this is someone else's problem. Exactly. Dan, this has been great. Where can people
find your work? Axios.com, and the newsletter, getprorata.axios.com. Awesome. Dan, great
having you on. Thanks for joining for another emergency
show. I mean, it's always, I feel like we break down these big news events. We speak with you and come
away, feeling way more informed than we started. So thank you for being here. And thanks everybody
for listening. Well, we'll see how this goes. We may have another show tomorrow, another show Wednesday,
or we might wait until Friday, just depending on the cadence of the news. If Dan's right,
we'll be back on the air quite soon. So stay tuned and thanks so much for listening. We'll see you
next time on Big Technology Podcast.
Thank you.