Screaming in the Cloud - Episode 9: Cloud Coreyography
Episode Date: May 9, 2018

Microsoft has experienced a renaissance. By everything that we've seen coming out of Microsoft over the past few years, it feels like the company is really walking the walk. Instead of just talking about how it's innovative, it's demonstrating that. Microsoft has been on an amazing journey, making the progression from telling customers what they need to listening to them and responding by building what they ask for. Today, we're talking to Corey Sanders, Corporate Vice President of Azure Compute at Microsoft.

Some of the highlights of the show include:

- Customers are asking Microsoft to help them through support and enabling platforms
- Storytelling efforts through advocates, who play a double role: engaging with and defending Microsoft
- Customers moving to the cloud are focused on a continuum and progression; they have stuff to move from one location to another and want all the benefits: better agility, faster startup time, etc.
- Virtual serial console into existing VMs; this is how people are using the cloud, and Microsoft is going to, if not encourage this behavior, at least support it
- Microsoft is the only cloud with a single-instance SLA
- Serial consoles: Windows has seen less usage, partly due to the operational aspects of Windows vs. Linux. It's not a GUI; it's scripting. Does the operating system matter?
- From a cloud perspective, it shouldn't have to matter; you should be able to deploy it the way you want
- Edge enables much more complex and segregated scenarios; that combination with cognitive services running locally will make it accessible anywhere
- Branding challenge as customers start to notice that devices are smarter and more complex; will they lose awareness that Microsoft Azure is powering most of these things? They shouldn't care
- An awareness of not just what's possible, but what's coming; the democratization of AI
- The education and fear gap of trying something new and taking that first step; make products and services stupid simple to use
- Customers return to add cognitive services and AI capabilities to existing, running deployments, environments, and applications
- Multi-cloud solutions can be successful, but there's a caveat; they're actually built on a service-by-service perspective
- Azure Stack offers consistency, but some people may place blame on it for poor data center management practices; some expectations and regulations may be frustrating to some customers, but they let Microsoft offer a consistent experience
- Freedom and flexibility have been challenges for Microsoft and other products for private clouds
- What people need to understand about Azure, including from a durability and reliability experience
- To some extent, scale becomes a necessary prerequisite for some applications
- Microsoft has taken many steps and is the leader in various areas

Links:

- ReactiveOps
- Microsoft Azure
- Corey Sanders on Twitter
- The Robot Uprising Will Have Very Clean Floors
- Kubernetes
- Cassandra
- Azure Stack
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, cloud economist Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
This week's episode of Screaming in the Cloud is sponsored by
ReactiveOps, solving the world's problems by pouring Kubernetes on them. If you are interested
in working for a company that's fully remote and is staffed by clued people, or you have challenges
handling Kubernetes in your environment because it's new, different, and frankly, not your
company's core competency, then reach out to ReactiveOps
at reactiveops.com. Hello, and welcome to Screaming in the Cloud. I'm Corey Quinn.
I'm joined this week by Corey Sanders, who is excellently named. He was the head of product
at Microsoft Azure Compute, and he's now the corporate vice president of that same group,
meaning that everything that happens in Azure Compute is, in one way or another, your fault. That's right. That's right. Both good and bad.
Perfect. We'll keep our slings and arrows. There you go. I'm ready.
So how long have you been with Microsoft? Approaching 14 years, actually. So I started
right out of college, and I started actually as a developer. So I was in the Windows Serviceability
team, and I fixed bugs in Windows in the early days. And then I moved over and became a product
manager for Azure. I was, I think, the fifth PM on the team, so very early days and been on it since.
Yeah, it's been a lot of fun. Wonderful. What has it been like over that time period? I mean,
you go back to 14 years ago, that's 2004. That was after the days when Microsoft was in the world of,
oh, yeah, this internet thing, that's not going to happen.
And more into the days of, oh, cloud, that's not really what we do either. But in the last decade,
decade and a half, there's been a tremendous renaissance. I think that Harvard Business
School and the rest will be studying this for years as a fantastic transformation story of a
company reinventing itself. Everyone claims to want
to be able to do this, but by everything that we've seen coming out of Microsoft over the past
few years, it feels like you're really walking the walk. Instead of just talking about how you're
innovative, you're demonstrating that. How does that work? How did you get to where you are from
where you were? Yeah, I mean, it's been an amazing journey, a roller coaster even, as we've sort of made this progression.
And I do think a lot of it originated from the work we were doing in the cloud.
And I'd probably say the biggest thing is sort of the rate at which we learned and listened from our customers changed.
And that, I think, is probably the biggest thing. I had some retrospective on the plane here,
thinking hard about sort of how did this all happen, right?
I've been in Azure now for close to nine years.
You know, how did this all happen?
The way in which we listen to our customers and respond
is what really, I think, changed the most, right?
In the early days of Azure, just candidly,
it was very much, we know what's right. We know the platform that you guys should be using, and let us show
you that platform. And over the years, it quickly changed into, we actually may not know inherently
what's right. We may need to actually build what people are asking for. And really put simply,
that is where the springboard of sort of everything
else came from. So if you look at things like support for Linux, support for things like
Postgres and MySQL, support for sort of many of the open platforms, many of the other solutions,
even support for things like Edge and sort of our Intelligent Edge platform, all of these really spin
off from what are customers asking
for us to help them do, right? What are customers asking for us to enable? And I think that's really
been the crux of that transformation. And then the freedom from a management perspective and
a leadership perspective to go do that work. From a customer story perspective, something that has
been incredibly noticeable in the community for the last two or three years has been Microsoft more or less hiring every dev advocate, developer relations, dev rel person
that they can find to tell stories. Is that storytelling aimed internally at Microsoft?
Is it aimed externally to the community? Is it helping to build the story before it winds up
getting out there? Yeah, you know, I think the best advocates that we have are the ones that are actually looking both ways,
and they're harsh both directions, right?
And so what I mean by that is, you know, our best developer advocates are really mean to me
about sort of the things that I've got wrong, the parts of my service that are no good, right?
The things that people are struggling to use, right?
And then they're also pushing back hard in the community and saying,
hey, you guys are being overly judgmental, right? You're being overly harsh. And so they're sort of playing this double role, right? Sort of a double agent,
if you will, both engaging with us and yelling at me while also defending the platform externally.
It's a tough job, actually. It's, you know, I think that I really respect a lot of those folks.
It's a hard job.
Yeah, it is definitely not for the faint of heart.
Yeah, that's right.
And it's also a bit of a challenge in that back when you first started staffing up developer advocates, I'm going to say Azure was not as great as it is today. Is that diplomatically
acceptable to frame it that way?
I would like to say that every day that's true from the previous day.
Exactly.
We are closer than ever.
It's uplifting and meaningless all at the same time.
That's right.
And it felt at the time that, okay, they're bringing in a lot of people to tell a story,
but that story right now isn't great.
I don't think people have that same criticism anymore.
So I'm wondering how much of that was, oh, now that their story is better, they're
bringing in more dev advocacy folks versus they brought in the right dev advocacy folks. And now
as a result, things are much better than they were. It's a little bit of both. It's a little
bit of both. I think, you know, the developer advocates definitely have a major impact on what
we build, right? I think, especially, the emphasis is on the developer advocates whose areas we may not, inside Microsoft, have as much experience with.
So the strongest pushback, the Java and the Node developer advocates that come in and say,
whoa, you guys completely missed the boat on this.
That has been the biggest push for us because it's just a voice that we haven't heard as strongly.
And so making sure that we're hearing that has been really important. And definitely getting the right developer advocates has been a big part of it as
well. You know, we always, there's more room to go, right? I think that if you sat down with the
developer advocates, they'd tell you, no, there's quite a bit still that needs to get fixed. And so
I think we've still got a long ways to go, but definitely better. And I think they've had a big
hand. Yeah. Okay.
That's a very fair answer.
Yeah.
Changing gears slightly, the keynote this morning and the ones that are upcoming over the rest of this week are focusing on a lot of the higher level, flashy, wonderful services.
And we will get there.
But 85% of global spend these days, as per several vendors, is not based upon these high-level
platform-as-a-service offerings. It's based on compute, storage, maybe a managed database here
and there, an object store, the data transfer between all of these things. And as you take a
look at customers who are moving into the cloud, either from an existing on-premise environment
or looking to migrate from either
another cloud provider, or in many cases, they're startups who are, all right, time to get started.
We have this idea. Turns out in 2018, building a data center is not in your top five list of
things to do. Let's go with a cloud provider. How do you see that from your side? Do you find
that customers are interested and very, I guess, all in when it
comes to these high-level AI, ML, edge type of services? Or are they, this is great and we want
to talk about this, but first we need a really big box to go ahead and crunch some numbers for us.
Yeah, yeah. Yeah, no, I mean, absolutely. I think I have a skewed perspective certainly since I'm
the compute team, right? So certainly the focus that I get is, I need that gigantic box.
Can I get that and can I get it now?
But for me, the conversation with customers
is almost always about a continuum.
It's almost always about a progression.
And as you've stated,
most customers today have a bunch of stuff
that they're running on-prem, right?
Or running in another cloud or what have you.
And their first order of business is moving that stuff from one location to another location, right, and getting all the benefits
they're in, right, better agility, faster startup time, whatever, right, but the first step is just
moving the stuff, right, and we have a bunch of services and tools to try and help that for
exactly that reason. I think the key reason why, you know, you look at some of the keynote today
and some of the future direction, it's really important because there's a lot of value there, but nowhere near as much value when you look at the broader PaaS services, when you look at sort of the new AI opportunities that exist, when you look at sort of the opportunity to deploy things onto the edge.
There's so much more value, business opportunity, and cost savings in those higher level services that seeing the
vision there is really important, right? And so, you know, when I look at the overall continuum
for customers, they're going to be along this path. And some services, they're going to lift
and shift and stop and say, thank you very much. That's my day. But some, they're going to say,
this is business critical. This is where I want to go really innovate. And I'm going to carry
that all the way through. And I'm going to start looking at these new services and capabilities and
things that I heard about at Build or other such conferences, right? And so, you know, I think
that that continuum is important, but you're absolutely right. I mean, for a lot of customers,
the first step is just getting into the cloud and then sort of taking it from there.
To that end, you had a blog post about a month and a half ago about a feature that wasn't widely recognized, but I saw that and I'm doing backflips personally.
It was like more tweets than I've gotten on anything else ever, by the way.
Absolutely. It's a great offering. It is a virtual serial console into existing VMs.
That's right. And other providers have pushed back against this exact thing by saying,
oh, you should be viewing cloud instances as ephemeral.
They should be cattle, not pets.
Start over again.
Yeah, which is a very polite way of telling your customer to go screw themselves.
It's easy to write a whiteboard diagram with a perfect architecture that makes sense for what a customer should be doing, but they have to get there from where they are.
And as much as we like to tell people, oh, don't have any single
instance be something that needs to be cared for lovingly, that's not the reality today. So people
have had to do all kinds of unfortunate workarounds. This seemed almost like a tacit acknowledgement
that this is how people are using this, and we're going to, if not encourage this behavior,
at least support it. Yeah, right. Was that difficult to win hearts and minds around internally before release?
It's interesting.
I think if you had asked me that eight years ago, the answer would have been yes.
It would have been, that's not our model.
That's not how we do cloud, right?
In the last year, it wasn't, right?
I think the hardest part was implementing it in a secure way.
I think that it's actually just a complex thing, even though the experience we
produced is pretty simple. Building it was a little bit harder than we thought it was going to be.
But, you know, I think to your point, it really helped establish my place in the world that,
again, this was this sort of serial console, the most basic of the basics, right? It's sort of
this core infrastructure access thing got more excitement than any other announcement that I've ever done.
It was very telling to me of the world in which our customers are living.
And that's great.
I think that there's no problem.
Similarly, the single-instance SLA.
We're the only cloud with a single instance SLA.
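To make an SLA concrete: the percentage translates directly into an allowed-downtime budget per month, which is why the gap between a single-instance and a multi-instance guarantee matters to customers taking that first step. A rough sketch of the arithmetic (the SLA percentages below are illustrative examples, not Azure's published figures):

```python
# Rough sketch: convert an SLA percentage into allowed downtime per month.
# The SLA figures below are illustrative examples, not official numbers.

MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes in a 30-day month

def allowed_downtime_minutes(sla_percent: float) -> float:
    """Minutes of downtime permitted per 30-day month by a given SLA."""
    return MINUTES_PER_MONTH * (1 - sla_percent / 100)

# e.g. a 99.9% single-instance SLA allows roughly 43 minutes a month,
# while a 99.95% multi-instance SLA allows roughly 22.
for sla in (99.9, 99.95, 99.99):
    print(f"{sla}% -> {allowed_downtime_minutes(sla):.1f} min/month")
```

The point of the sketch is that a single-instance SLA, whatever its exact figure, gives that lone pet server a contractual downtime budget instead of no guarantee at all.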
And it's a similar sort of interesting thing where, to your point,
customers have this today and they want to take that first step,
right? And so how do you make sure that first step isn't totally painful and a huge step
so that they can take the following steps after that? In many large accounts, you see customers
who are proudly up on conference stages talking about how they have no single points of failure.
Any instance can be terminated with no business impact. And then you go out for
a drink afterwards and you pour six of them in when they're not looking. And then they become
really honest and say, yeah, we have no single points of failure except that one. And that's
always a database server or it's the Jenkins box. And typically mission critical. Yes, incredibly
so. And if it goes down, everyone's having a terrible day. That's right. I mean, I'm old
enough to remember working in data centers where I was thrilled
to get serial consoles in because I no longer
had to frantically drive to the data center
at three in the morning to figure out what broke.
Pull up a little keyboard. Yeah, exactly.
So seeing this come to the cloud, it's, oh, yes,
I'm old enough to remember why that's
important and, I guess,
experienced enough to understand that as
much as we like to say this is the perfect
design pattern,
nothing is perfect in the real world. Everything's a tire fire.
That's right. And what's funny about serial consoles, we've seen it in action.
It's had a huge uptake on the Linux side, a huge amount of usage. And this is one of the more interesting things about our base. The Windows
side, we've actually seen less usage. And that's also partly some of the operational aspects of Windows versus Linux.
It's not a GUI, right? It's scripting, right? And so we've seen less usage on the Windows side.
The Linux side has actually been a huge amount of growth, huge amount of pickup. And so we now
also need to go back a little bit and think harder about our Windows support here, because I feel
like we nailed it on the Linux side and gotten sort of the same response that you're sort of
evincing here.
But the Windows side, we've got more work to do.
And so this is, it's kind of an interesting world for us.
Again, I go back in time years ago,
and I would never have believed
I'd be able to ask this question
without being thrown out of a Microsoft event.
But in 2018, does the operating system matter?
If you're using a serverless function
that fires off a piece of code that you give it,
or we're talking to a database that's managed for you,
you likely don't care about what operating system
is under the hood.
You care about the code that's executing
and you care about the business outcome.
How does Microsoft think about that,
given that they have for a very long time
been the operating system company?
Right.
I mean, there's sort of two sides to this conversation, right?
I think from a cloud perspective
and enabling scenario perspective,
it shouldn't have to matter, right?
It shouldn't have to, right?
You should be able to deploy the language that you want,
should be able to deploy it on the platform that you want,
and you shouldn't have to care, right?
And this is especially when you look
at those higher-end PaaS services,
things like functions, right? Things like Postgres when you look at those higher-end PaaS services, things like functions, right?
Things like Postgres, MySQL, Cassandra-based services.
It doesn't matter what's running underneath, right?
You're getting a service and you're taking advantage of it.
Even something like the Intelligent Edge and the IoT Edge solutions, right?
Where you're running containers.
It really shouldn't have to matter what's running there, right?
It's enabling you to be able to deploy your services and solutions, right?
So very much so when you look at these platform services and solutions.
Now, certainly, I think there is an aspect of, because of our long history with Windows, right,
we do have a pretty strong belief that we run Windows very, very well, right?
Well, if you can't, I think it's time to give up.
This is certainly an important factor,
right? So making sure, so I think, you know, when we talk with Windows customers, we do strongly
believe that aspects of both our licensing and our support model and so on, obviously Windows
runs incredibly well, again, for those customers in that first phase. But when you look at that
sort of later application platform, that edge solution, it shouldn't matter. And that's really how we design our solutions. Okay. Yeah. That sounds a good transition point.
Today, there was a lot of talk about compute being done at edge, a lot of IoT stories,
a lot about machine learning slash AI slash math, if it's just two engineers talking to one another,
not trying to raise money. Yes. How do you see that evolving and, I guess, driving the growth
of Azure as it continues to embrace,
I guess, a world-spanning computer, as Satya Nadella said this morning?
Yeah, I think we have a really unique perspective on how the cloud is evolving, right? Certainly, we have the most global cloud of any of the clouds, right, with 50 announced regions and
a comprehensive set of platforms. But the realization and understanding that as more devices
and more computation is required at the edge,
whether it be small devices like Raspberry Pis or Azure Spheres,
or whether it be large computational devices like something like an Azure Stack,
enabling you to take the same cloud model, the same developer experiences,
the same containers, and this is why we've really, really centralized our focus on containers being this portable
object that could deploy anywhere, taking those same ones, building, creating them in the public
cloud, developing them in the public cloud, and then sending them out into the edge to be able
to run the compute as close to the end user as possible, I think it's going to enable a set of scenarios that we've only dreamed of before. And in fact, the demos on stage really,
I think, were really exciting. That taking a camera that doesn't need to communicate back at all
to be able to do a cognitive service, to be able to do a visual recognition and fire a function
based on that cognitive service, all without leaving the device,
is astounding, right? It really, it's something that I just, I think it's so cool.
And I think, you know, with the edge not necessarily being 24 by 7 connectivity to the public cloud and still enabling it to run, I think enables much more complex scenarios, much more
segregated scenarios, again, from sort of
top to bottom. So I think it's, again, unique. I think it's very exciting. And I think the
combination of that with the cognitive services running locally is going to make the edge
accessible to almost anyone. Do you think there's going to be a branding challenge as people start
to notice that devices around them, from doorbells to cameras to wristwatches, et cetera,
are smarter and able to do incredible levels of compute complexity?
Do you think that people are starting to wander away from the idea
of these things even being tied to a cloud provider at all?
It's not, for example, I was talking on this show earlier with iRobot
and some of the stuff that they're working on.
People don't think in terms of their provider and the services they're using. They just think of it as a vacuum that no longer hurls itself down
the stairs to its own death. Do you think that people are going to lose awareness of the fact
that it is Microsoft Azure technology powering most of these things other than the folks who
are building it? And if so, is that a bad thing? I've always found that one of the more magical
aspects of technology is people don't need to care.
For your end customers that are using these devices, it doesn't matter.
They're getting the value that they're getting from whoever they bought that device from.
And so being a platform company and offering this technology, it's not about jamming ourselves into people's lives.
It's about helping people's lives without them having to care, right? And so, you know, I think that the future of these devices and customers
interacting with them is going to be exactly as you say. It's just going to, everything's just
going to get smarter. And people are just going to start taking that for granted, which is amazing,
amazing future, right? Because suddenly you take that for granted and that's just the way things work then, right?
But it doesn't necessarily matter to me
whether people then say, oh, thank God Azure's there, right?
Being a platform company, it's fine, right?
We're there to support that service
and that end customer and that end device
and hopefully make it better and easier to build
and make better and better and better.
But I don't think that there's a branding risk there.
I think that actually the benefit of technology is that they don't have to care.
And there's also one of the most striking moments from the keynote this morning
was AI for accessibility.
The idea that this technology isn't just about ephemeral business outcomes.
And yes, we see this beautiful chart that's generated for us automatically.
Great.
That adds value, but it doesn't make people sit up and take notice the same way of I'm blind,
my child is not, but now I can work with them on their homework. That is one of those gripping,
very compelling, sentimental stories that I think shocks people into an awareness of not just what's
possible, but what's coming. And it's fascinating.
Or what's even here right now.
Exactly.
If you had asked me about that three days ago, I would have said, oh, yeah, that'll
probably come in a couple of years.
It's here today.
And that's something that's incredible to wrap your head around.
It's got to be an interesting experience watching the future arrive rather than always
being this thing that's one day coming and maybe your great grandkids will
have a flying car. Right. Yeah. I mean, what a heart-wrenching video that was too. It's touching.
I think, you know, when I look at that, the thing that Satya talks a lot about is the democratization
of AI, right? Is sort of how do you get AI into the hands of everyone and anyone, right? And then
when you look at those types of services, it drives it home, right?
I think that, you know, the demo that Jeff did today
where it takes a picture of Scott
and gives a checkbox, right?
It's cute, and it's a great demo
to show off the power of it, right?
But it doesn't really drive home
why democratization matters,
why giving people who understand accessibility the ability to take
advantage of AI to improve the lives of hundreds, thousands, maybe a billion people in this
world can fundamentally change their lives. I think that, again, is a touching opportunity for
technology. And I think it really drives home where AI is taking
the world. And exciting. I think a really exciting opportunity. I wound up conducting a survey
somewhat recently. Okay. And as a result of it, I've started using the term AI and machine learning
less and less because I was asking people, is this something that folks are actually using?
Or is this one of those far future pipe dream things that, yeah,
a couple of people are doing interesting things with and duping VCs out of giant piles of money,
but it's not here yet? And the responses were fascinating, not just because of the breadth and
depth of responses, and they were all across the board of what people were working on,
but the fact that they almost universally started with some form of the statement,
well, I'm not a data scientist, but.
It's something that people feel like, oh, I just have this silly thing
that isn't taking advantage of any of that, but here's what it does,
and they're wrong.
What they're doing is fantastic.
They have effectively become someone who is capable of wielding these tools,
but there is an education and fear gap of, well, I'm nowhere near good enough for that.
I fought for 10 years against being called a developer despite the fact that I spent 80% of my time writing code.
So it was a difficult mental barrier to get over.
I think that we're going to see less of that with technologies like this as they continue to evolve and become used for interesting world-changing technology applications like we've just seen.
But it is interesting to me,
how do you get people to take that first step?
How do you get people onboarded into a,
yes, we machine learn and you can too.
Yeah.
I mean, just like with anything with developers,
you make it stupid simple to use, right?
I mean, like this is where you look at our cognitive services
and the ability to just take REST endpoints,
API endpoints, right?
Train them yourselves by just uploading a set of images, right?
And then being able to just dump that into a container and run it.
There's sort of like, why aren't we all using AI, right?
And I'm air quoting.
People can't see the air quotes.
Why aren't we all using AI, right?
I mean, I think that there's an aspect of it's just so simple to take advantage of in this form.
And sure, we have the very complex, the massive amounts of GPU and FPGAs and so on and so forth.
And that all is there too, right?
But to your point, I'm not a data scientist, right?
And so I'm excited to go just write a logic app, write a function that's going to go do a video analysis and be on with my day. And I'm
AI-ing, as I like to say. I like that. It's going to be very difficult to pronounce when reading it,
but it works better when spoken. Spoken, yeah, people will probably misunderstand. Yeah, anyway,
we don't need to get into that right now. No, no, that'll be an after show. We should spend
some time on that and some cocktails. Absolutely. So as far as people who are adapting this and using
this, are they generally, in the abstract, I don't need specific NDA violating business ending
stories here, but do you find that people who are adapting this and working with this are already
Azure customers or are they coming in with, we use either other cloud providers or we're from
the stone age and run our own data center after we mine our own tin to build the server? Where are your customers coming from?
They're using visual cognitive services to find the right tins to go mine.
Yeah. No, I mean, I think I'd probably say a lot of them are coming in as Azure customers, going back to a little bit of the
previous conversation, right. I think a lot of them are coming in as, my first step is migrating my current stuff
to my new stuff, right?
And then I'm going to start adding some services to it, right?
And so we see a lot of the customers coming in
as adding cognitive services, adding AI capabilities
to existing running deployments, environments,
and applications, right?
And so it becomes some of the easiest, again,
coming back to a little bit of the sort of accessibility
of just being able to use it.
The easiest way to use it is to append to an existing app.
So we see a lot of that.
There is some, as we're seeing more and more of serverless
and sort of multi-cloud serverless pickup,
which has been kind of an exciting trend
that we've started to see pick up.
There's a little bit more of taking advantage
of our cognitive services in conjunction
with other cloud-based serverless products
and kind of bringing them all together.
I think that's also a very exciting
multi-cloud opportunity.
But a lot of it is sort of adding and building
and supporting upon existing services
and solutions that they have.
Do you see multi-cloud as being a driver these days?
I'm somewhat bearish on the concept myself.
Oh, really?
I find that when
companies have specific workloads that they want
to be able to deploy to multiple providers,
it invariably turns into a
world where they're no longer taking advantage of
any higher-level service whatsoever.
Lowest common denominator. Exactly. At that point, they're running
VMs with storage, maybe a database,
and that's as far as it goes.
So they spend a lot of time reinventing things that they
could get, quote-unquote, for free
if they wound up picking a single provider
to work with for that workload.
Look, I see a lot of successful multi-cloud solutions,
but there's a caveat.
The successful multi-cloud solutions I see
are ones that don't fall into your trap, right?
Where they actually build it
on a service-by-service perspective, right?
So they'll take one service here and one service here,
and they will take advantage
of all of the deep platform capabilities on each side, right?
And so this is where capabilities
like Azure Kubernetes Service, right?
This is where the Kubernetes platform,
being able to span across multiple solutions,
offers this flexibility, this portability,
while still taking deep advantage of past services.
Even Cosmos DB, multi-model data solution,
you can write Cassandra-based workloads here,
fully managed platform, you can move them over here
and still have your Cassandra work seamlessly, right?
And so this is where I do see success stories,
but by not limiting yourself to the lowest common denominator, and instead finding portable
platforms that can go up the stack.
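A rough illustration of the Cosmos DB point: the Cassandra workload itself stays the same and only the driver's connection settings change. The hostname below is a placeholder; the Cosmos DB port and TLS requirement match what Microsoft documented for its Cassandra API at the time, but treat the details as a sketch.

```python
def cassandra_connection_config(target):
    """Return driver connection settings for the same Cassandra workload.

    The hostname is a placeholder; port 10350 plus required TLS is the
    documented shape of Cosmos DB's Cassandra API endpoints.
    """
    if target == "self-hosted":
        return {"contact_points": ["10.0.0.5"], "port": 9042, "ssl": False}
    if target == "cosmosdb":
        return {"contact_points": ["myaccount.cassandra.cosmos.azure.com"],
                "port": 10350, "ssl": True}
    raise ValueError(f"unknown target: {target}")

# The CQL itself is identical against either backend -- that is the
# portability argument: the application code does not change.
INSERT_CQL = "INSERT INTO readings (sensor_id, ts, value) VALUES (?, ?, ?)"
```

The design point is that portability lives at the protocol layer (CQL), so moving between a self-managed cluster and a managed multi-model store is a configuration change rather than a rewrite.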
Wonderful.
Data gravity, of course, also factors into that heavily.
It does.
Oh, we're going to save 20 cents by running our container over here for $2,000 in data
transfer to get the four terabytes it needs over to it.
That's right.
The model tends to break somewhat, unfortunately.
And yeah, you have to be intelligent about it.
You can't just be sort of blindly multi-cloud. You've got to be intelligent about
how you approach it. Yeah. Semi-relatedly, I would consider multi-cloud to encompass on-premise as
well, to an extent. And there are a lot of migration stories there. To that end, I wanted
to ask you a little bit about Azure Stack. For those who don't know, and please correct me if
I'm wrong on any of this, it effectively lets you run an Azure-style platform on your own hardware. I'm sure I'll
get hate mail for this, but imagine an OpenStack that isn't terrible and you're pretty close.
So it's the idea of being able to take effectively the same primitives, the same API calls, and have
things running in your own environment as well as in Azure. I've seen a lot of interest in it.
Are you seeing a lot of adoption?
Yeah, both.
We are actually seeing both.
And it's definitely not terrible.
But I will stay away from other comments about OpenStack.
But I think the thing that we've seen,
and actually, you buy the full stack,
so you buy the hardware with sort of the software layer on top.
We have seen a lot of excitement and interest, but in very specific usage cases.
Actually, Edge has become a very interesting usage case where you do need the full cloud model, the full computational model that's offered in public cloud.
But you want it in proximity, or in a low-connectivity environment, right? The example on
stage was oil rigs, right? Being able to do sort of a fair amount of analytics while still being
able to run in that sort of isolated environment. And so we've seen a lot of excitement and energy
there. And then certainly regulated environments or regions where we're not deployed with regulated
requirements, we see quite a bit of interest and excitement there.
And the key value is that consistency.
You've got that same application model that exists in both.
And this is the same reason why we're excited,
I'm excited about the IoT Edge as well.
Similar sort of consistency model
where there it's this container-based application model
with Azure Stack, it's the full API and portal model.
But it's a similar sort of consistency that I think developers are really looking for: write once, deploy to both public
cloud and the intelligent edge.
Interesting. One of the questions I have around the idea of
Azure Stack is, on the one hand, it's easy to hand wave over it and imagine, oh, it's just like
having an Azure data center inside of my own data center, and this is awesome.
There's a, to put it very directly, definite skills gap between the caliber of engineer who spends all day every day running an Azure data center and someone who does this across three racks in a colo somewhere.
Having been that person running three racks, I assure you, I was not an Azure-caliber sysadmin
running those things.
To some extent in Azure,
if there's a hardware failure,
things will automatically migrate,
and there are a lot of those edge case failures
that aren't visible to me.
In my data center, the exact opposite is true
and I have to care about that.
Do people dipping their toes into the waters of working with Azure Stack
sometimes mistakenly blame Azure or Azure Stack
for effectively their poor data center management practices?
We haven't seen that much, but I'll explain why, because it's important.
There's a lot of aspects of Azure Stack
that are deliberately controlled
and controlled in such a way to make sure
that sort of the random ways
in which people run their own data centers,
which is fine for their own data centers,
doesn't create a limited experience
when they're deploying on Azure Stack.
Because their expectation with Azure Stack is that it looks and feels and touches just like public Azure,
which means there are a set of expectations and requirements and things that we don't allow.
A great example, you can't just run it on your own hardware.
One of the things that we learned very early in sort of the early days of Azure Stack was
that if it ran on arbitrary hardware, you'd have all kinds of random failures and random issues and so on and so forth.
And you get into a lot of difficulties where customers' expectations are not being met.
And so this is where we came in and said, no, no, no, look, we're going to have to enforce this.
And it's going to upset some people because they want to go take it as software.
And we're saying, but to get the experience that you're looking for, this is the way you need to do it. And so some of those expectations, those regulations, while frustrating to some customers,
enable us to get that consistent experience. And I would argue that this has probably been the
challenge for both us and other private cloud products: the freedom and flexibility
actually come with a huge tax.
And there's a set of customers that will get it, right?
They're super sharp and super strong,
and they have a lot, a lot of developers working on it.
But to just pick it up and run it,
and then also have flexibility,
becomes sort of a shoot yourself in the foot experience, right? And so, I mean, you're spot on that that is a challenge
and something that we are very careful about enabling,
while also making sure that quality of service is spot on for the Azure Stack environment.
Durability and reliability is always going to be a hard problem.
I don't know that you can ever get away from that entirely.
So I guess a question for you.
As you see people discussing Azure in the general sense throughout the community, throughout the industry, what do you wish people understood about Azure that they seem to not be grasping the way that you do?
From a durability, reliability perspective?
Across the board.
Okay.
Well, let me start with durability and reliability as kind of a good kickoff, and then we can
take it from there since you mentioned that explicitly.
You know, I think one of the things that going from sort of the beginning, early days to
now, that has been, I think, most exciting for me, especially on the infrastructure and compute side,
is our ability to take the amount of data we have about running our own system
and actually use AI for our own internal purposes.
This is something that's really been sort of an exciting,
and not something we necessarily sell as a product,
but it's something that's an experience that we enable.
Great example of this, we have a huge amount of data coming in around hardware failure
and being able to predict hardware failure. And the key aspect of having so much hardware and so
much data about that hardware and being able to say, this is showing signs that we've seen before.
And given that, we need to go make sure we move customers off this before they see the impact, right? Before they see that machine die. And so this is some of the,
you know, when I think about the benefits of AI, for me, it's very close to home because I can
build a better service because I'm able to use all this data that I've got and take advantage of it.
Same things apply for our security-based services, right? Being able to take the huge amounts of
data we've got about security models and security solutions and apply it.
And the reason I think this is so interesting is that it's exactly why what we're able
to do is different from what customers can do on their own.
It's why we offer this service that I think is so compelling because we have this amount
of data.
We have this amount of sort of AI-minable information.
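As a toy sketch of that fleet-telemetry idea: learn what normal looks like across many hosts, then flag the outliers to drain before they fail. Azure's real predictive models are far richer; the metric, the z-score approach, and the cutoff here are purely illustrative.

```python
from statistics import mean, stdev

def hosts_to_drain(error_rates, threshold=2.0):
    """Flag hosts whose error rate is anomalously high versus the fleet.

    error_rates: {host_name: errors_per_day}. A host is flagged when its
    z-score against the fleet exceeds `threshold` -- an illustrative
    stand-in for the learned failure signatures described above.
    """
    rates = list(error_rates.values())
    mu, sigma = mean(rates), stdev(rates)
    # Guard against a perfectly uniform fleet (sigma == 0): nothing to flag.
    return sorted(host for host, rate in error_rates.items()
                  if sigma > 0 and (rate - mu) / sigma > threshold)
```

The operational step that follows is the one described in the conversation: proactively live-migrate customers off the flagged hosts before the predicted failure lands.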
Scale is an incredible asset.
It's a huge asset.
And I think it's all about
again, in the early days, we didn't really
have scale, and so we didn't get it, right?
But at this point, being able to
apply that scale to create a better
service is really, really exciting.
Especially, you know, we talked earlier
about the serial console. Being at the infrastructure, being
at sort of the bottom of the stack, there's
always this question of, well, what's going to make it better? And that, the better security,
better reliability, better availability, all of that being built with AI models using the huge
amount of data we've got, I think that's a super unique and interesting opportunity for us.
I would agree. I think this is the sort of thing that takes time to emerge as a use of these
things. Whenever a new technology comes out, it seems that the first applications of it are generally ridiculous.
Take a look at the Internet itself.
When that first became somewhat consumer mainstream, it felt like the only thing on the Internet was Star Trek trivia and adult content.
And people were struggling for a long time to figure out what that was.
I have no idea what you're talking about.
Oh, exactly.
I wasn't interested in either of those.
What's the old line?
Oh, yeah, there's this trick I learned once.
It was called buffering.
Yeah.
No, there's a definite story of ridiculous edge case toys demonstrating the problem that then start to inform business decisions.
I agree.
The idea of having AI working on hardware failure rates, on looking
for patterns, finding signal and noise. A common complaint about AI is that many shops don't have
enough data to do anything particularly innovative or groundbreaking with that. So to some extent,
scale becomes a necessary prerequisite for some applications of this technology.
Yeah, absolutely. And I mean, you've also hit sort of a very interesting point about just the progression of technology innovation, right? Going from viewing a cat to viewing a pipe, right? If I remember,
a year ago we had sort of an edge device that could tell whether the
picture was a cat or not, right? I mean, I think the level of sort of reality that's coming into
our edge-based solutions based on the innovation, both on the AI side, but then also on the ability
to run these things on the device and sort of the power that comes with it, it's changing, right? It's changing to being real
world things, where there's not a lot of companies out there that are like, I really wish I had a way
to tell if something was a cat or not. Like, that's not actually necessarily a real world scenario.
But the amount that are saying, I wish I could tell if a pipe was broken, it's quite a few,
right? And I think, so, you know, we are getting to the point
where these become real scenarios. And I think that's an exciting progression. And obviously,
seeing that cat or not step into pipe broken step, seeing that progression, that's sort of where
we come in, right? And can offer those services and that capabilities to grow.
Right. And if you roll out the, let's say you do all this in a lab somewhere
and never publicize it,
and you skip the cat step,
and you just have the,
is this pipe broken or not?
People look at you and first,
do they burn you at the stake?
Because what you're doing looks like magic.
They have no idea how you got there.
That's right.
But it also means that you're developing things
in-house at that time,
rather than letting the community weigh in.
That's right.
And I would argue that 10 or 20 years ago, Microsoft very well would have been the kind of company to do all of this in
house. Today, I think it's blindingly apparent that you're not. You're embracing the community
and heading in very interesting directions, so that even if those of us on the periphery don't
always agree with it, it's at least much more understandable what's driving your decisions.
Right. Yeah. I mean, I think it's just the constant amount of innovation, incremental innovation, versus this big drop of, like, here's the new world. Instead, we're making incremental steps. I mean, we made
announcements last week, made a bunch this week. We'll make some more in the coming weeks, right?
This is not sort of a, here's everything and we've solved the world's problems.
It is look at the step that we've taken, right?
Look at how in the areas that we're leading, look at the things that we're excited about.
And by the way, we'll be back and we'll have more for you and give us feedback, right?
Wonderful.
Yeah.
Thank you very much for joining me.
This has been Screaming in the Cloud.
I'm Corey Quinn.
This has been Corey Sanders.
And I'll talk to you next week.