Big Technology Podcast - Will AI Save The World + Azure's Origins — With Bob Muglia
Episode Date: June 14, 2023
Bob Muglia is the former CEO of Snowflake, author of THE DATAPRENEURS: The Promise of AI and the Creators Building Our Future, and ex-head of Server and Tools at Microsoft. He joins Big Technology Podcast to discuss the latest in artificial intelligence, including Marc Andreessen's essay, Why AI Will Save the World, and the lessons in his book. We also go deep into the emergence of Azure at Microsoft, which he was present for, and how Satya Nadella was able to push it forward despite the cultural challenges. Enjoying Big Technology Podcast? Please rate us five stars ⭐⭐⭐⭐⭐ in your podcast app of choice. For weekly updates on the show, sign up for the pod newsletter on LinkedIn: https://www.linkedin.com/newsletters/6901970121829801984/ Questions? Feedback? Write to: bigtechnologypodcast@gmail.com
Transcript
LinkedIn Presents.
Welcome to Big Technology Podcast, a show for cool-headed, nuanced conversation of the tech world and beyond.
Bob Muglia is our guest today.
He's the former CEO of Snowflake, a $50 billion-plus technology company.
And he was also the head of server and tools at Microsoft.
which is the division that produced Azure, Microsoft's cloud services offering,
and it's also the division that Microsoft's current CEO, Satya Nadella, came out of.
Muglia has a new book out called The Datapreneurs,
The Promise of AI and the Creators Building Our Future,
and he's here to talk about it.
But first, we're going to dive into Mark Andreessen's new essay,
Why AI Will Save the World,
and dive into its arguments and merits.
In the second half, Muglia and I will have a discussion
I've been looking forward to having for a long time.
We're going to speak about the formation of
Azure and the conflicts inside Microsoft before it emerged.
This is definitely one of the tech stories I've wanted to get to the bottom of,
and we're going to hear it from Muglia, who was right there.
So here we go.
My conversation with Bob Muglia is coming up right after this.
Bob, welcome to the show.
Good to be here, Alex.
Great to have you.
So I emailed you in March 2019, and I was writing this book about Microsoft, actually all
the tech giants, but specifically with a focus on reinvention at Microsoft.
You ran the server and tools business there,
Satya Nadella eventually took it over
and transitioned it to a cloud business.
And we're going to talk about that.
We'll start with AI.
We'll talk about that.
And I never heard back.
And I almost feel bad for sending it at that time
because I think you were in the middle of being pushed out
as leader of Snowflake.
I was transitioning out of Snowflake.
And I don't remember the email.
I apologize now. I try and respond to these things, but I don't remember,
and I apologize if I didn't get back to you on that.
No apology necessary.
And when I saw your name come through, that you had a book in the works,
I was like, all right, this is finally the conversation with Bob.
Here we are.
Well, we get a chance to do it now.
Of course.
And you're at a very interesting point.
You're releasing a book.
You actually wrote the book, which has some parts about ChatGPT and DALL-E in it, basically
a month after they came out.
And now it's being published this month.
And it's been amazing what's happened with your previous company, Microsoft,
Snowflake's obviously involved, and the discussion has just evolved in so many different ways.
So in your book, towards the end, you have some discussions about AI ethics and sort of the rules we want to set for AI.
And recently, Marc Andreessen, the head of the VC firm Andreessen Horowitz, who I'm sure our listeners are familiar with, came out with this post and basically said,
we don't really need to worry about any of the worst-case scenarios about AI.
I mean, I'm simplifying it a bit, but point by point, taking the AI doomers' concerns and basically saying these are actually fallacies. I know you read the post.
So let's just start there.
I'm curious what you thought about Andreessen's central thesis.
Well, generally speaking, I think I agree with Mark.
Let me start by saying that.
I mean, in the sense that I am an AI optimist, and he is also an AI optimist.
I'm a technology optimist, and clearly Mark is as well.
And I think that AI is going to do incredibly positive things for the world. And we're going to see an incredible set of advances
over the next few years, you know, with these co-pilots or assistants that are going to help us
in so many ways. AI is going to make us smarter and more productive, and that's a good thing.
And in general, I agree with Mark that there's, you know, a lot of reason to be optimistic about
jobs as well. Although Mark did, you know, simplify it a little bit, in that
while technology has historically always created more jobs than it's displaced, it's very
inconvenient and challenging, really a problem, actually, for the people that are displaced.
And to me, those are the sorts of issues that are still going to be very real
societal issues over time. Even if Mark is fully correct in that more jobs are created, you know,
we'll have to deal with the reality that many people will lose some jobs. And I think we've seen
that over time. Perhaps, you know, the transition goes well and there's great jobs ahead
for everyone. I'm hopeful about that, I am, a little bit.
Let me stop you there
for a minute, because we are definitely going to get to the jobs situation. But I'm going to start to read some of these arguments that Andreessen makes, and I actually want to get your perspective on them.
Yeah, sure. Because he made a lot of different arguments.
Oh yes, he did. He was very thoughtful in what he put together.
Exactly. So he actually splits the people that are criticizing AI into two different categories, and I think it's very interesting. You're shaking your head; I'm going to read it anyway. He calls it the Baptists and the bootleggers. And the Baptists, he says, are the people that
actually have serious concern and really believe in it. And the bootleggers, he says,
are people who are kind of doing this to profit off of the fear, to basically say that AI is
dangerous, to say that AI is going to take jobs, and to put themselves cynically in a better
position. And he says something very interesting. He says the bootleggers always win,
because the Baptists are just going to raise the alarm and be, you know, concerned, while
the bootleggers, with their malicious intent, are going to go and take that fear and actually
turn it into something that furthers their self-interest. What do you think about that?
Well, I mean, Mark sort of talks about two extremes of people's approaches to things.
And he's not wrong about what he's saying. But I think he oversimplified that
a wee bit. It's a little bit too simplistic. I have a slightly different view. What I would say is,
And I think Mark referenced this, but not explicitly.
And I think it's really worth referencing this explicitly and distinguishing between two
kinds of fears that people have about AI.
One is the fear of AI as a tool, that it's a tool people will use, and people will use it
for bad things.
And Mark very clearly said in his piece that that's going to happen.
And I agree with him in that.
In general, as humanity, we have built tools from the very beginning.
Tools have enabled us to be as productive as we are in society, and obviously it's how we live our
lives. And many tools have positive and negative uses. And AI is absolutely one of those tools
that can be used for ill and can be used for a lot of good. And AI is a very powerful tool. And so
the good and the evil could be very powerful. But I have a lot of belief that we will fully control
that, that the right things will happen. Mark says that no new legislation is required. I'm not
quite as optimistic about that as he is, that no legislation is required. I think there may
need to be some. But I do share some of Mark's concerns that we could over-legislate this
and put too many restrictions in. And the worst concern I would have, and Mark reflects this
in his blog, is that we restrict all of the innovation that is happening in the open-source
community right now. Because I think that's where the most interesting innovation is happening
in AI. And I think it would be a real bad mistake if perhaps well-intentioned but
misapplied government regulation makes it so that only the largest companies have the money
to deal with all of the regulatory concerns. So I do have some of his concerns on that. But this is
AI being used by people as a tool.
Exactly. And the second thing that Mark addressed, and he kind of waves this off, and I'm not with him on this, is this idea that AI will eventually become
an intelligent entity that is effectively the equal of us. Mark says that's not going to happen.
He certainly seems to imply that that's not going to happen. I'm not so sure about that.
And when I talk to a bunch of the AI researchers, they're not so sure about that either.
So that's an interesting question. And that's where I think the bigger and broader concerns of
existential risk come in. This is not a concern, I think, in the next five or even 10 years.
But if you go out longer, as AI gets smarter and smarter, could it do some things to deceive us?
Are there ways we could lose control of it? While I am an optimist that we will not lose control
and that we can always pull the plug, because these things are, in fact, digital circuits running
in data centers that at least for now we still control, I do think there is something that
that needs to be thought about there.
So that's one area where I disagree with Mark.
Yeah.
And I think he does say that they are going to be,
I don't think he precludes the idea that they're going to be on par with us intellectually.
I just think that he believes that there is, you know, a kill switch because you can
unplug it the same way you can unplug a toaster.
And let me just read his argument to what you would say, and we can go back.
Because by the way, like, I think what he's trying to say broadly by writing this piece is that
startups, you know, because that's who he funds, are going to be hurt by this fear,
which he calls a moral panic, because what's going to happen is the established companies will
use it to legislate barriers and then eventually lead, that will lead to regulatory capture.
I'm sure you, you know, who ran a company basically from close to no revenue to something
that's now, you know, just a few years later, a $59 billion market cap, you understand the
idea that the bigger companies want to box out the smaller companies by
manipulating these fears. Absolutely. And I also understand as a very large company
what can happen when government comes down on them, because I was very much at Microsoft
and part of Microsoft during the DOJ lawsuit. So I'm quite familiar with the
regulatory regime and some of the downsides associated with it. And so I have, again,
I share some of those concerns with Mark, and I think it would be terrible if that happened.
And I don't think there's a reason for it to happen either.
But it can happen. I was just talking with someone today about how these things
logically might not make sense, right? Regulation that boxes out smaller companies
might not make sense. But narratives are really important here, because they are the stories
that end up, you know, either pushing something forward or getting it blocked, and then you
can end up with suboptimal outcomes, even though there's like consensus. Consensus means nothing
when it comes to government and regulation.
I do agree, there's concern about this. I think that when we really want to focus on issues with AI, I am much more concerned not about
what's going to happen in the next five years, but what will happen, say, 10 to 15 or 20 years from now,
as these things get smarter and smarter. And we have to think about how to treat them
and how we work with them. Because I do think they'll get smarter. And while I do agree that we can
pull the plug, eventually maybe we can't. I mean, there may come a day when they control enough
of what they're doing that we won't be able to pull the plug. And that's why I do think we need to
make sure that their values are aligned with what society believes in general are appropriate values.
Values are complicated. Everybody has different values, but there are societal values and societal
norms. And it's appropriate as we create these things, and we are creating them, that as they
get smarter and smarter, we make sure that they accept the norms that humans think are appropriate,
which is why I think that Asimov's laws are quite relevant
in the conversation about, you know, AI that is AGI and potentially superintelligent AI.
Exactly. And he does go through some of the arguments. Let me just read his argument to you in terms of why he believes that the idea that we might lose control of this is not feasible.
So he basically says, AI is math. And he goes: my response is that their position is non-scientific. What is the testable hypothesis? What would falsify the hypothesis? How do we know when we are getting into a danger zone? These questions go mainly unanswered apart from "you can't prove it won't happen." And in fact, he says their position is so non-scientific and so extreme, a conspiracy theory about math and code, and is already calling for physical violence, that I will do something I normally do not do and question their motives as well.
Just respond a little bit to what he's saying in that part.
Well, first of all, I am heartened that there is an outcry of people that are asking questions about this.
I think that's a good thing.
And the conversation that we've been having across society for the last five months, six months,
because it really all started in December, right after ChatGPT shipped.
That's when this conversation got into high gear.
I think it's a helpful conversation.
The concerns about regulation that Mark has, I think, are all very relevant.
And while I do recommend that we be thoughtful about this, in particular as the very large
language models get developed, say GPT-5 as an example, and its ilk that are coming out
from the other tech companies, that we think through what those capabilities are and be
thoughtful about how they benefit all of us and don't take us down these negative directions.
I do think all of those conversations are important.
I also worry about the potential for over-regulation and overreaction.
And it's sort of funny because my position on this is shifting relatively rapidly as time goes on
because in January I was worried that people wouldn't think this was important enough.
And I clearly no longer have that concern because there's been so many articles and so much
outcry about this.
And frankly, so many scientists that are really deep into the technology expressing their concerns.
Now, Mark can assign motives for this or not.
I'm not trying to assign any motives associated with it.
I think people are just genuinely concerned, with the realization that
this is very, very powerful technology that will affect society.
That's why I always come back to saying, you know, there's a lot to learn from Isaac Asimov
and the things that he talked about because he really did project a society where we coexisted
with intelligent machines.
And he thought about it.
In his stories, he wrote about many of the challenges that are associated with that.
They're really parables that provide us with some guidance here.
And so I think that's all very relevant.
I do think that it's appropriate for government to be worried about this and talking about this.
Should regulation be coming down now?
I think it's a little early.
I definitely think it's early because I don't think we know what to regulate.
And I do think we have to be cautious.
I think we'll learn a lot as well from what other parts of the world are doing.
Europe has historically taken a more active regulatory role in technology, and it would not surprise
me if that's also the case here.
And we already saw Italy make some moves in that regard.
Well, they rolled that back, though.
So, so you mentioned the job thing.
So let's talk about the job thing for a second because I do want to, let's get into this
discussion a little bit about his arguments against being worried.
I'm not saying I'm taking a side.
I just think it's interesting to discuss.
So he talks about how automation can kill jobs.
And he goes: the core mistake the automation-kills-jobs doomers keep making is called the lump of labor fallacy.
This fallacy is the incorrect notion that there is a fixed amount of labor to be done in the economy at any given time.
Either machines do it or people do it.
And if machines do it, there will be no work for people to do.
And he goes, when technology is applied to production, you get productivity growth, an increase in output generated by a reduction in inputs.
The result is lower prices for goods and services.
As prices for goods and services fall, we pay less for them,
meaning that we now have extra spending power, which we would use to buy things.
This increases demand in the economy, which drives the creation of new production,
including new products and new industries,
which then creates new jobs for the people who were replaced by machines in prior jobs.
The result is a larger economy with higher material prosperity,
more industries, more products, and more jobs.
Your thoughts.
I agree with all that.
I mean, he's totally right.
The only thing is there's one element he left out, which is time.
Yeah.
And these things happen over a period of time.
And what's happening now, and this is an incredibly important point, is they're happening
very quickly.
When, you know, when in the past, when technology affected jobs, it happened over years
and decades.
It's not going to take decades.
It's just not.
It's days, months, and maybe years we're talking about.
And so, you know, if in fact we have automation to the
point where autonomous self-driving cars become a reality, and Uber and all of the other car fleets
replace their drivers with robots, that is going to displace the jobs of a lot of people that
use that particular field as almost, you know, either it's their primary income or it's a last-ditch
income. You know, so there's real human displacement issues that I think are quite real and cannot be
ignored in this. I mean, it's very, very important. Over time, Mark is correct. I believe more jobs
will be created, but will those people be able to be trained to do some of those jobs? Classically,
there's always been an issue in human history with this. And the speed at which the potential
for this displacement is happening makes it more acute than perhaps transitions in the past.
That I agree with 100%.
The way that he ends his piece,
I mean,
he ends his piece with this like cute thing
about how the people who build
the AI before are legends
and the people who are building AI today
are heroes,
which is kind of an interesting assignment, you know,
to say that each and every one of them
fits into that category.
I think it cheapens the words legend and hero.
But that's a lecture for another day.
That's just the ending of his piece, though.
I mean, you know, it's a little cute ending.
I'm curious, he makes a point right before that, talking about how the U.S. and China are basically interlinked in a war over
AI. And his theory is basically: go ahead and win, from the U.S. standpoint. Your perspective on that?
First of all, you know, again, I said I have some concerns of an existential nature, you know, over a long-term period, as AI gains
superintelligence in particular, assuming that happens. No guarantee it happens, but it sure looks like it's going to happen. I have bigger concerns,
honestly, about the situation with China and the United States. I think, you know, I'm a Cold War baby,
right? I was born in the 1950s. I ducked and covered when I was a child, right, in elementary
school. I mean, I remember the nuclear fear that we had. I grew up with that over my life, you know,
is one of the things that people were afraid of. It's been almost a non-issue since 1990
after the Berlin Wall fell. We've really forgotten about what that means to live in a world
where we have these large existential enemies, because that is an existential situation. Nuclear war
is an existential thing. The Taiwan situation is an extremely scary situation right now.
and it has a potential to put us in conflict with China.
Right, but the broader AI thing is sort of...
AI fits into all of this, though,
because in a way, AI is the solution through all of this,
you know, if it progresses humanity forward collectively.
I generally speaking agree, however, with Mark,
that we must progress AI quickly
because the Chinese, you know, they will have a different approach to this.
And, as I say, values are critical.
Values will determine what is created.
We are creating these systems, whether they're tools or entities, we are creating them,
and they have the values that we put in them.
And there's no question that China will have a different set of values.
So I do agree that we need to move quickly.
I hope that through all of this, AI will help to bring people together, humanity across the
world together, because this conflict that I see coming with China doesn't seem to help
anybody. And it does make me very concerned. Okay, I'm with you on that also. Throughout this
conversation, you've said that you think AGI, artificial general intelligence, is inevitable,
and it certainly seems that way in your book. So I'm curious what makes you so sure of that.
And, you know, is that something we actually want? Well, first of all, whether we want it or not
may not matter, because I think we're on a trajectory to get it.
No, no, of course it matters. It definitely matters. You can't just throw up your hands and say it doesn't matter. We built them, though.
Yes, we did. Yes, we did. Just like we will build AGIs if we have
the ability to do so. I think that we need it. When we're working towards technological progress,
it's not, you can't just throw up your hands and say it's going to happen. This is a decision
that people make.
People, that is a very broad statement, right? I mean, you can have your decision.
Look, all these scientists signed that thing that said we should put a six-month moratorium on AI.
It went nowhere, absolutely nowhere.
People ignored it completely.
I mean, short of government regulation, which I completely do not believe is appropriate,
I believe that would be disastrous, I don't know how to stop it.
Now, you can tell me, you can disagree and smart people can say, hey, this trajectory
doesn't result in what one would call an AGI, which is a computer system that has the intelligence of a median human.
That's kind of the general definition of an AGI.
And I think we're proceeding on a very well-defined path towards that right now.
It's certainly not provable that it's going to happen.
And I don't have proof that it'll happen.
And I will also tell you that when I started writing the book, I had no idea that it would happen in my lifetime.
And I still don't know for certain that it will happen.
But now I believe it.
The difference is, I've lived, you know, over 60 years of my life
having spent a lot of time reading science fiction as a kid, and Asimov, spending a lot of time thinking about intelligent robots.
And my entire career has been about making computer systems more intelligent and more productive for people.
And I've been able to use the tools that are at my disposal, the chips and the power that they give, the database software, the application software, the development tools, the languages, all that stuff,
to work with teams to build some really cool products over time. Now, intelligence is available.
For God's sakes, computers can respond effectively to English. English is now an API.
Oh, my gosh, my head explodes. It's stuff like this. So, to be straight, you know,
I'll say you can disagree with it if you like, but it's what I believe.
I spent 60 plus years of my life believing that all this technology was interesting,
but I wouldn't see a world where intelligence was actually available to be applied to a broad
set of problems, and now it is.
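To make that "English is now an API" idea concrete, here is a minimal sketch of what it looks like in practice. It assumes the OpenAI Python client, and the model name and prompt are illustrative stand-ins, not anything named in the conversation.

```python
# A minimal sketch of "English is now an API": the instruction to the
# computer is a plain-English sentence rather than a formal function call.
# Assumes the OpenAI Python client (pip install openai) with OPENAI_API_KEY set;
# the model name is an illustrative stand-in.
from openai import OpenAI

client = OpenAI()

def ask(instruction: str, data: str) -> str:
    """Send an English instruction plus raw text; return the model's answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": "Follow the user's instruction exactly."},
            {"role": "user", "content": f"{instruction}\n\n{data}"},
        ],
    )
    return response.choices[0].message.content

# The "API call" is just English:
print(ask("List every company name, comma-separated.",
          "Microsoft built Azure while Amazon grew AWS and Snowflake went public."))
```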
Will that result in an AGI? Given the trajectory we're on, I totally believe it.
Will it result in superintelligence?
That's a fairer question.
That's a fair question.
And then does it result in what people sometimes refer to as a technological singularity,
which is an acceleration of progress, kind of beyond anything.
we can imagine. And no one knows that, of course. No one knows that. Was it just ChatGPT and
DALL-E that convinced you?
It was all the pieces. The first thing that got me, honestly, was Copilot, Microsoft's Copilot.
Okay, because that codes alongside you.
And when I realized that, look, of all the things that we expected, you know, technology advances to drive, productivity of developers like that just came out of left field from my perspective. That all of a sudden a computer could assist in writing code, to the point that a developer could get 40% or more of their work done by a copilot AI machine, I didn't see that coming.
You know, I do speak with CEOs, I guess you've seen it in action. I've spoken with some CEOs who said, yeah, we all use it. And I say, well, what sort of productivity increase have you seen? And they're like, still waiting. But the CEOs that you speak with, when it comes to Copilot in particular, are they in that same boat, or have they actually seen that 40% increase?
What I hear from the Microsoft people is that people are using it all over the place, and that, you know, a lot of the code that's being written is written by the AI, by Copilot. Now, it's certainly true that most of what Copilot is writing today is the lower-value code.
Right, right.
That doesn't require the level of intelligence. But, you know, that's still an incredible
productivity improver. And, you know, we haven't even begun to see these.
You know, right now, what we have are...
Yeah, no, you're setting me up for what I'm going to come to, which is the will-AI-save-the-world question, since Andreessen's post is Why AI Will Save the World. So I'm going to give you a minute to speak about that in a moment. But it's very interesting, I've never heard someone refer to AI, sorry, artificial general intelligence, as a median human intelligence. Sort of like, I feel bad for the people who are just under the cut, like, AI is as smart as you.
That came from Sam Altman, by the way.
But it's interesting. I wonder why you think it's possible to reach that point and then not reach superintelligence. Because if an AI, which has access to all the information in the world, can become as smart as one median human being, how does that stop? Like, can it possibly stop there?
I don't think it does stop, right?
I don't think it does stop.
It'll just get smarter.
So I'm just saying we haven't proven that we're going to hit what people would call
AGI.
We certainly haven't proven that we've hit super intelligence.
But I agree with you, if we're on a trajectory to AGI, it'll continue to get smarter.
And that's where the concerns kind of come in, right?
That's where I do think there are some concerns because these entities will move beyond
us in some ways.
And I think that's where we need to make sure that the values that we imbue into them
are consistent with what will protect humanity
and protect our lives
because I still find that very important.
Yes.
Okay, so let me now sort of end this segment
talking about some of the benefits
that Andreessen sees
and why he believes AI will save the world.
And I'm curious if you agree with him.
You know, it seems like you are open
to a little bit of the idea that AI could destroy the world,
but let's see if it will save the world
after I read a few of these.
So he says, every child will have an AI tutor
that is infinitely patient, infinitely compassionate,
infinitely knowledgeable, infinitely helpful.
Every person will have an AI assistant, coach, mentor, trainer, advisor, therapist
that is infinitely patient, infinitely compassionate, knowledgeable, and helpful.
Every scientist will have that assistant that can greatly expand their scope of scientific research and achievement.
Every leader of people, CEO, government official, nonprofit president, athletic coach, teacher, will have the same.
The magnification effects of better decisions by leaders across the people they lead are enormous.
So this intelligence augmentation may be the most important of all.
He talks about productivity growth, scientific breakthroughs,
the creative arts going through an evolution, and AI even improving warfare,
because leaders will have a realistic understanding of how many of their people will get killed
before they go to war, and not be optimistic about it, and be able to make better decisions.
So having heard these arguments, and you read it this morning,
is AI going to save the world?
AI is a huge part of our world.
And together we will work with AI, you know, to determine the future of society and humanity.
It is going to have enormous benefits to people.
I totally agree with Mark.
All the things you describe, though I won't go to the extent that he does.
Let's talk about killer robots separately.
That one is kind of its own thing, sort of. AI and warfare is a fascinating conversation.
And again, I'm not such a purist to try and say that's never going to happen.
I mean, your old company is working with big contracts with the Department of Defense,
so they clearly believe that something is up there.
There's a lot that will happen there.
You know, Asimov outlawed that essentially with his laws of robotics.
He made it impossible, and unfortunately, that will not happen in reality.
That we will not see.
In terms of the incredible benefits to society, these copilots or assistants Mark described,
tutors, the ability to help people in learning and training in school. I believe all of those
things are going to happen. It will drive a huge change in society, and I think,
you know, we have to be thoughtful about that, because we saw large benefits, as an example, with
social media. We also see some downsides associated with it. And if we're spending all of our
time as humans interacting with AIs, I do wonder what that will do to our relationship amongst all
of us and no one knows because no one has experienced it. Society is going to undergo a massive
change. I do believe at some level that we are merging with these things. I mean, that's our
long-term outcome, is that they are becoming part of us. It's already happening. All of us,
you know, who can walk through a day without looking at their phone a thousand times?
Kids on average spend, what, seven hours on a screen and 15 minutes outside playing these days.
I mean, we are already engaging really deeply with these devices, and we see some negative impacts
from the way people are developing, and particularly children.
So those are questions that I think are still unanswered.
In general, I believe that this will be incredible, this will provide incredible benefits to society.
It's also interesting, I'll just say, one of the fascinating things that I learned through
this process of writing the book is that when I was a kid, I read a whole lot of what Isaac
Asimov wrote. He wrote over 450 books, 470 books almost. And I've had a chance to re-read a bunch of
what he wrote. He had a very thoughtful, nuanced view of how robotics and society would
interact. And in fact, in Asimov's writings, robots never established themselves on Earth in a major
way, not in society. They were out in the agricultural fields, but they were not part of day-to-day
society. And in the long run, Asimov rejected robotics as the future for humanity because he felt
people would become too dependent on them. And in his stories, he talked about what happened as people
got more and more dependent on robots and actually stopped innovating in his world. So it's
really interesting because while I do agree with Mark that all of these things will happen,
and I think they're super positive, I personally can't wait to have my calendar assistant to help me fill
out my calendar and, you know, get things done. I had an executive assistant for most of
my life, and I don't have that now. And there's a lot of benefit to having some help with
that. But I do think there are going to be a lot of really interesting societal implications to this and
the way people work with each other. And we'll have to see how that goes.
So basically, I'm going to summarize your position as: AI could save the world, but not necessarily, and it sort of depends on how we handle it.
What does save the world mean? What I actually struggle with is the phrase save the world. With the exception potentially, where I do think
it is really interesting, which is going to be our relationship with China and how AI impacts
that. I think that's a fascinating dynamic that will play out, you know, in part, you know,
I will see some of it and younger people will see almost all of it, I think. But I think that
AI is part of our world and it is going to become an embedded part of society. Certainly by 2030,
we will not know how we live without it.
That's for sure.
The question really, I mean, I think what he's saying by save is improve it dramatically.
Well, it's the opposite of what people are saying, that AI will destroy the world.
He just used the opposite there.
He's trying to write something provocative.
It's a response to the doomerism.
So, okay, Bob Muglia is here with us.
He is former head of Server and Tools at Microsoft.
I can't wait to speak with him about that on the other side of this break.
Also former CEO of Snowflake,
and the author of the book The Datapreneurs: The Promise of AI and the Creators Building Our Future.
Well, very appropriate subtitle.
We'll be back right after this.
Hey, everyone.
Let me tell you about The Hustle Daily Show, a podcast filled with business, tech news, and original stories to keep you in the loop on what's trending.
More than 2 million professionals read The Hustle's daily email for its irreverent and informative takes on business and tech news.
Now, they have a daily podcast called The Hustle Daily Show, where their team of writers
break down the biggest business headlines in 15 minutes or less and explain why you should care
about them. So, search for The Hustle Daily Show in your favorite podcast app, like the one you're
using right now.
And we're back here on Big Technology Podcast with Bob Muglia, author of
The Datapreneurs: The Promise of AI and the Creators Building Our Future. Bob, let's do a little bit of
biographical stuff. I want to ask you some of the questions that I wanted to ask you
when I was writing the book and now we have a chance because we're sitting down with each other
today. So here's the story as I heard it. Server and Tools effectively built servers for
companies that they would maintain on premises, and built that into an exceptionally successful
division, largely thanks to your leadership. In fact, it became so successful that when
people started to think about the move to the cloud, Microsoft didn't want to go there.
When Amazon built AWS, the company basically said, we don't believe in the cloud,
in the beginning, because it didn't want to lose what it had. I mean, you can cite the numbers,
right? Multi-billion dollar revenue coming in from installing servers in people's offices.
It did not want to be in a world where, you know, that business could be threatened by putting
those servers, you know, outside the office. And even the customers, the chief
technology officers and chief information officers inside those companies,
didn't want Microsoft to transition away from the old model, because they would lose
their autonomy and lose a lot of their power, which came with setting up and maintaining
those servers in the office. So it was actually a big struggle inside the company to actually
move to the cloud until Satya Nadella came in and said, all right, well, we
know that this is where the future is heading, and that's sort of the origin. He ran
Server and Tools, and that's the origin story of Azure. That's what I've heard. I haven't been
able to speak with Satya about it, but I do have you here with me today. So what is the actual
story that happened inside Microsoft that gave birth to Azure?
That's totally wrong.
Why don't we start by saying that? It's totally wrong.
I wish you would have picked up the phone before the book went out, but okay, let's hear it.
The reality was that Microsoft, you know, while Steve ran Microsoft, it was Windows-centric.
Yeah.
Windows included Windows Server to Steve, right? And in fact, you know,
he and Bill, who was still involved at the company as a director at that point,
were very aware. They were always very paranoid, right? They were the classic
only-the-paranoid-survive. And so they were looking hard at what was happening
at AWS. And they saw the writing on the wall. In 2009, I think it was 2009, Rolf Harms,
who was working in the business development and strategy area, wrote a paper about how many
data centers there would be in the world, and how many companies that ran data centers
there would be in the world. And they had it at, you know, a handful. And the logic
was it would all consolidate into these large systems, these large data centers in the cloud,
because of the cost economics associated with it.
I didn't fully agree with Rolf,
because I thought that private clouds would be meaningful.
And that could have happened potentially.
But I actually think what happened in the world was that Microsoft shifted in that 2008, 2009 time period
to move towards the cloud.
And our strategy was Azure.
It was Azure.
No question about it.
And that was under Steve and that was under me.
In fact, you know, Azure was originally developed under Ray Ozzie, and it was developed in a group that was, you know, sort of not part of Server and Tools.
And in my last year at the company, I left in 2011, but the last year I was really running the division was 2010, because Satya took over in February of 2011.
And, you know, the Azure group worked for me.
At that time, it was Windows Azure, though.
Wow.
Because Steve had a Windows-centric view of the world.
And here's the crazy thing about it.
And here's what went wrong.
The group that built Windows Azure built a product that did not run Linux and it did not run Windows server.
Let me say that again.
It didn't run Windows Server.
Windows Azure in 2010 did not run the version of Windows Server that we sold at retail.
And that version, of course, ran just fine in AWS.
And so we built the wrong product.
We built a PaaS product that was targeted essentially internally.
It was mostly internally targeted at first.
And with the idea that we could sell these internal services that were a whole new generation,
that's not what happened.
What happened was Amazon was infrastructure as a service.
And we incorrectly built a platform as a service, a PaaS.
And we literally built a system that in 2010
was incompatible with everything on the planet.
So we were totally losing.
In fact, my leaving Microsoft was really about that and my belief as to where that needed
to go, which was to be more compatible.
And the leader of that group disagreed with me.
And Steve actually sided with that leader.
And that was what caused me to leave the company.
That's when I decided to leave Microsoft.
And what happened was that, fortunately, you know, when Steve asked me who I thought should replace me, I said, you know, there's one guy to do it, and his name is Satya.
And Satya replaced me in February 2011 and immediately was able to effect the strategy I was unable to effect.
I mean, in essence, the person that I was struggling with left the company.
Okay.
Because he wanted the job and he didn't get it, which is a good thing.
And what happened is Sacha being the new guy in there was free to pursue the correct strategy.
And Satch and I were totally aligned on that strategy.
I've never had one ounce of misalignment with Satch on it.
I definitely had misalignment with Steve back in 2010, but I think everyone would agree that, in fact, building an infrastructure as a service cloud and then adding PaaS services on top of that was the way to go.
I mean, we literally just built the wrong product.
And Satya did some really good things.
I mean, in particular, he put Scott Guthrie in charge of that.
And that was, you know, I was not free to do that at the time.
I mean, it was literally not one of my choices.
I mean, because the person who was running it was in that job,
and Steve had a strong relationship with him.
And once Scott took over, all the right things happened.
I mean, basically, at that point, the product went great, you know, went in the right directions.
So, you know, kudos to Satya for doing all of that.
But it really wasn't that disagreement.
It was really a disagreement about infrastructure.
It was about a weird technical thing.
Mostly it was about people, honestly.
These things are always ultimately they come down to disagreements with people.
And Steve was betting on one guy.
And frankly, he made the wrong bet.
Yeah.
And this is sort of like the origin story of cloud.
And really Microsoft's reinvention.
So appreciate you telling us.
I'm curious what you think the lessons have been for Satya inside that company
about how important it was for Microsoft to reinvent beyond Windows.
Because, you know, this is another thing that I learned, that Windows and cloud are sort of two different things.
They're sort of competing in some ways.
If you make everything available on the browser, for instance, it doesn't really matter what operating system you use.
We succeeded in killing it.
You know, one of the saddest stories of Microsoft history, and we're getting now into some of this,
none of this is covered in the book,
we're talking about the issues that Microsoft had,
one of the saddest stories in my opinion that ever happened at Microsoft was the death
of the client API, the Windows API. And, you know, to me, the event that did it was when we killed
Silverlight. I don't know if you remember, Scott Guthrie had built Silverlight. It was cross-platform.
Steve misunderstood it; he viewed it
as a Flash competitor, because it actually ran on the Macintosh, but it was a native Windows
.NET API to write Windows applications.
And sadly, I was actually forced to kill it.
I mean, it was one of the saddest days of my career.
It was late in my career at Microsoft.
What happened is, when we killed it, we killed all momentum
that existed on development for Windows, and we were never able to regain that.
So the primary Windows programs today are browser programs.
And we've really lost all developer traction on the client in that time frame.
And what that means really is that Windows is really a device.
It's an operating system that runs on a device.
But the apps that run on Windows, by and large today, what people use mostly are browser apps,
other than Office, basically.
Yeah.
Well, I think that's, I mean, on Mac as well, largely that's the case.
It is the case on the Mac as well.
It is, it's true.
But what was different is that we had a huge developer
franchise that we threw away.
And that was really a sad day.
But, okay, we could go on that forever.
So the question I had for you is about Satya, which is, do you think he learned some lessons
about reinvention?
Sure.
That he's carried through to the CEO job.
Because that is sort of my way of asking, what do you make of this
basically reinvention of Microsoft through AI right now?
I mean, it's not like a complete discard of the old model. But it's a very big bet on a new technology. So what's your perspective on all this?
Well, I have infinite respect for Satya. I mean, I sometimes call him the Yoda of the
technology industry, and I think there's a lot of truth to that. He, you know,
the move that he has made, I mean, I think he's done a lot of good things in terms of understanding
that Microsoft is first and foremost a services company. And that means that you deliver your
product across every device. They understand that now. They really do understand that now.
They don't always execute as well on it as I'd like them to, but they definitely understand that now. And Satya has that religion in place. What he was able to do in building the relationship with Sam Altman and OpenAI was one of the most amazing moves that I've ever seen a leader do in the technology industry in my career. You know, if you'd asked anyone two years ago, or even a year ago, frankly, you know,
Where does Microsoft fit in the technology race for AI?
Most people would have put them, you know, relatively low on the stack.
They wouldn't have said, you know, that Microsoft is a leader.
And now, you know, everyone is acknowledging them in a leadership role.
And they really turned the tables.
You know, he really was able to change the game on Google.
And I think it was Google's reticence to leverage some of this technology that they own,
and a little bit of conservatism, that put them in this position.
I think what Satya is doing across Microsoft and the apps is exactly
the right thing to do. By working with OpenAI, they got a huge head start, because they had access
to these models basically a full year before any of the rest of us did. And so, you know, that's put
them in an incredibly strong position to release a whole plethora of products this year, and I think we'll see some incredible new things coming from them.
Do you have inside knowledge of what's coming? Can you share?
Some of it I do. And the things I do know, I can't share, because I do work with those guys. You know, I'm not as
close to the Office group.
What I'm most interested in is what the Office group is doing, to be
honest, and that's mostly just personally.
What about Office is interesting?
I mean, Google did just release, and I think this is amazing.
I can't wait to use this.
In Google Docs, apparently there's this button that you can hit.
That's experimental now, but eventually it's going to roll out to everyone where you're
writing a document and it just finishes it for you.
I mean, that's amazing.
Well, the kind of stuff I've seen Microsoft announce that I think is mind-blowing is Teams will do a summary of the meeting,
you know, your meeting notes.
It will write up your meeting notes
and summarize the action items
at the end of a meeting by taking the verbal,
I mean, it's pretty remarkable.
Taking the voice,
transcribing that into English text,
and then using, you know,
GPT to summarize it.
It's a pretty amazing thing.
And so I look forward to seeing things like that,
having been in so many meetings
where note taking wasn't done
or it was done incorrectly.
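For the curious, the flow Bob describes, voice to text to summary, can be sketched in a few lines. This is a minimal sketch assuming the OpenAI Python client; the model names whisper-1 and gpt-4o-mini are illustrative stand-ins, not what Teams actually runs.

```python
# A minimal sketch of the transcribe-then-summarize pipeline described above.
# Assumes the OpenAI Python client (pip install openai) with OPENAI_API_KEY set;
# model names are illustrative, not the actual Teams stack.
from openai import OpenAI

client = OpenAI()

def meeting_notes(audio_path: str) -> str:
    # Step 1: voice -> English text.
    with open(audio_path, "rb") as audio_file:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=audio_file
        )
    # Step 2: text -> summary with action items, via a GPT model.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize this meeting and list the action items."},
            {"role": "user", "content": transcript.text},
        ],
    )
    return response.choices[0].message.content

# Hypothetical usage: print(meeting_notes("weekly_sync.mp3"))
```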
Well, I would love that, but Teams, it's got to be faster.
If Teams could be faster and put this into place, that would be interesting. You put Teams next to this.
I don't know what you mean by the Teams test.
To use Teams versus Zoom, Teams is slow compared to Zoom.
And I don't even like Zoom.
Okay, anyway, last question for you.
You wrote candidly about, and again, correct me if I'm wrong, but candidly about being pushed out as CEO of Snowflake because they wanted someone to take the company public.
Talk a little bit about what happened and why you felt ready to write about it.
Well, I mean, it's been, you know, at this point, it's four years.
It's a long time.
So, you know, it was, it was tough.
It was a tough transition for me.
You know, I mean, ultimately what happens is what always happens in these things, which is disagreement between people.
That's always the root of this.
And in this case, I mean, the board at Snowflake, you know, had a real long-term relationship with Frank Slootman.
And they saw what Frank could do in terms of taking companies public, and the board would have loved to have Frank in that role. So that's
what caused the transition. You know, it was partially because there were people issues
between me and some of the board members, and partially because the board really had a candidate
that they could put in the role. It was a tough transition for me, and it took
me a while, you know, to get fully back and doing things. But pretty quickly I started working
with small companies. I realized through that that although I worked at a huge company,
Microsoft, for most of my career, I was actually doing entrepreneurial work while I was at Microsoft.
I was often going to the new thing and building the new thing at Microsoft, and that's why
I had connected with all these entrepreneurs around data, and ultimately the datapreneurs.
And so I realized that there was a good opportunity to help people in the early stages of
companies, and that's what I've been doing mostly for the last four years. I'm on a handful
of small private boards, a full handful, by the way, five of them. So they keep me pretty
busy. And, you know, I still stay very connected with the industry. So yeah, that's what happened.
The book is The Datapreneurs: The Promise of AI and the Creators Building Our Future.
It's available in all bookstores right now. So go check it out. Bob Muglia, thank you so much
for joining. Really appreciate having you here. Thanks. I've really enjoyed the conversation.
I did as well. And great to finally get the true inside story of the battle inside server and tools and
the emergence of Satya. So thank you for that.
No apologies necessary.
We had a great podcast, so really appreciate that.
All right, everybody, thank you so much for listening.
Ranjan Roy and I will be back again on Friday to talk about the week's news.
And you can stay tuned for our next flagship interview this upcoming Wednesday.
I've got some really good ones to announce and I'll do it.
Well, let's try to aim for Friday.
I'll do it on Friday.
All right, thanks, Nate Gwattany, for handling the audio, and LinkedIn for having me as part of your podcast network.
And all of you, the listeners, we'll see you next time on Big Technology Podcast.
Thank you.