The Changelog: Software Development, Open Source - The impact of AI at Microsoft (Interview)
Episode Date: July 4, 2018. We're on location at Microsoft Build 2018 talking with Corey Sanders and Steve Guggenheimer — two Microsoft veterans focused on artificial intelligence and cloud computing. We talked about the direction and convergence of AI, ethics, cloud computing, and how the day-to-day lives of developers will change because of the advancements in AI.
Transcript
Bandwidth for Changelog is provided by Fastly. Learn more at fastly.com. We move fast and fix
things here at Changelog because of Rollbar. Check them out at rollbar.com and we're hosted
on Linode servers. Head to linode.com slash changelog. This episode is brought to you by
Airbrake. Airbrake is full stack real-time error monitoring. Get real-time error alerts plus all
the information you need to fix errors fast. And in this segment, I'm talking with Joe Godfrey, the CEO of Airbrake, about taking the
guesswork out of rollbacks. Joe, imagine a developer has found a serious problem and they're trying to
figure out what to do, whether to roll back yes or no. And if they roll back, what do they roll back?
You know, because they may have released 10, 20, or even 50 new commits that day, and they're just
not 100% sure which commit caused the problem.
That's exactly where a tool like Airbrake comes into play.
So instead of guessing and trying to figure out, well, this one seems like it's the code that's most likely to be related to what's happening.
And first customer reported about two o'clock, so maybe it went out a little bit before that.
Airbrake has this great tool called the Deployment Dashboard that literally will show you
every single revision, who did the deployment,
and it will show you which errors
were tied to each deployment.
So you can pretty quickly figure out,
aha, this error started happening
only after this code was deployed.
And by the way, it will also tell you
which errors were fixed.
So if you go ahead and roll back a revision
or try to put out a fix for a bug,
you can then look and say, did it actually get fixed or is it still happening and do I need to take different action?
I just know from my experience, we've spent a lot of time guessing about what to do and
a lot of time arguing about whether those guesses were any good or not.
And I would have loved to have just had some real anecdotal evidence that proved, no, this
bug was caused by this release.
We're either going to fix it real quick based on all the information that Airbrake gives, or
we're going to roll it back.
And we're not going to guess and roll back a whole day's worth of work
just to make sure we catch this bug.
All right, check out Airbrake at airbrake.io slash changelog.
Our listeners get Airbrake for free for 30 days,
plus you get 50% off your first three months.
Try it free today.
Once again, airbrake.io slash changelog.
Welcome back, everyone.
This is the Changelog, a podcast featuring the hackers, the leaders, and the innovators of open source.
I'm Adam Stacoviak, Editor-in-Chief of Changelog. On today's show, we're back at Microsoft Build,
talking to two Microsoft veterans focused on artificial intelligence and cloud computing.
Corey Sanders, Corporate Vice President for Azure Compute with 14 years of experience at Microsoft,
and Steve Guggenheimer, Corporate Vice President of AI Business, 24 years at Microsoft.
We talk about the direction and the convergence of AI, the ethics behind AI,
cloud computing, and how the day-to-day lives of developers will change because of the advancements in AI. And by the way, we're releasing this AI-focused show in conjunction with the launch
of Practical AI. This is our newest podcast hosted by Chris Benson and Daniel Whitenack.
This show is focused on making artificial intelligence practical, productive, and accessible to everyone.
Learn more and subscribe at changelog.com slash practical AI.
Okay, so we're joined by Corey Sanders,
head of Azure Compute.
And Corey, first of all, thanks for joining us.
Of course, great to be here.
Thank you for joining us at Build.
Well, we're great to be here. Thank you very much for having us.
So for the completely uninitiated, never heard of Azure, perhaps I've been living under a rock: what is Azure Compute? Just give us the high level.
Yeah, totally. It is a set of services, right? So it's not just a single service,
a set of services that offer,
on the sort of core VM side,
offer you sort of agile infrastructure, right?
So you can spin up virtual machines,
you can run your services on it.
But then, you know, the services in my team
go well beyond just the core compute.
We also, in my team, do the Azure Kubernetes service,
which we talked a lot about today in the keynote.
We also have Service Fabric,
which is a sort of managed PaaS service.
And then higher-level services,
some of the eventing services, messaging services,
are also in my team.
So Event Grid is also my team,
sort of more of an event-based PubSub type solution.
So they're all a part of our overall cloud offering.
I don't know how much broad spectrum you want me to give,
but a cloud offering on Microsoft.
So, yeah.
So let's talk specifically about maybe about Event Grid first.
Let's get deep.
It seems like that's a good place to dig in.
Sure.
What exactly does it offer?
What is that?
You mentioned it's event-based, but that's about all.
You said PubSub.
Yeah, yeah.
Go into it more.
Totally. With the growth and sort of excitement that we're seeing from serverless-based application models, we sort of identified that there was a gap in some of the model, where certainly there's the ability to run just arbitrary code functions, right? Which is basically an event fires and you run some code. Great. There's also sort of workflow: an event fires and we run through a workflow. It's called our Logic Apps service. And these are
kind of our core serverless platforms. The thing that we sort of identified was those are great,
but there's no sort of platform in the cloud period that offered sort of an event-first platform to make it very easy to launch those
services. So basically, you know, a publishing and subscribing model where different services,
whether inside Microsoft or outside Microsoft, can create an event, right? So let's say every time
you drop an image into a storage account, this will create an event using our event service.
And then this event service will fire whoever's subscribing, whoever is saying, I will now listen when you publish.
And so you can make that something like Azure Functions or so on.
And the reason why it's so important for serverless is that sort of your classic eventing model is a polling model, right?
It's like, okay, I got an event, now listen.
But for serverless, that's nonsense because the whole point of serverless is you don't launch until you need to launch.
So now you need something to call into you.
And so that's where sort of this idea came from.
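To make that push model concrete, here's a minimal sketch of a function that Event Grid could call into, assuming the azure-functions Python library; the event shape is simplified and the field values are illustrative, not a definitive implementation.

```python
# Sketch: a function invoked when Event Grid pushes an event to it,
# e.g. an image dropped into a storage account. No polling: the
# platform calls the code only when the event fires.
import azure.functions as func

def main(event: func.EventGridEvent):
    payload = event.get_json()
    print(f"Event type: {event.event_type}")
    print(f"Subject:    {event.subject}")
    print(f"Blob URL:   {payload.get('url')}")
```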
But what's really cool about Event Grid is that it's not strictly tied to our services. So you can have third-party calls
that will create events, and you can also call out to third-party services or applications that
you write. So you can basically write your own serverless application services using Event Grid
as sort of the backbone. In fact, even just last week, we launched an open source model called Cloud Events, which is basically a CNCF-led open source model that will allow pretty much any service anywhere to call into Event Grid and take advantage of it.
It's very cool.
What's the name of that?
Cloud Events was the CNCF solution, and then Event Grid is the service that we have in action.
So did it go through the natural incubation period?
Is it a project?
It's version 0.1
announced last week at
KubeCon. We missed that.
We weren't there. I'm sorry.
Where was it? In Copenhagen.
I was going to say it's far away.
We came here instead.
I owe you.
I have people who were at both.
People who were at both?
So we're slacking.
I didn't want to say it, but you said it. Uh, yeah,
we had people who were there Friday for this announcement and then flew in this weekend.
But, um, anyway, congratulations. That's awesome. Yeah, it was a cool thing. So that is cool. You
should, if you want to see something super cool Friday morning, the keynote had a demo that had,
it had us, it had, I think it had Google, it had AWS.
I mean, it had basically all the cloud providers
taking part in this sort of end-to-end event experience
using this new cloud events thing
that we're part of the work group for.
So it's very, very cool.
So this is open source then?
Yeah.
Is it an extension of Event Grid?
Is it a class event?
Cloud Events is a spec that now we've implemented.
So we were actually the only cloud that did publishing: you can publish out of Cloud Events,
but with many of the cloud providers you could subscribe into it.
So with Event Grid, you can both send out in Cloud Events format and listen in Cloud Events format.
But it's a specification that now you can use.
And it's a nice thing, you know, with where sort of events are going and where IoT is going,
having sort of a standard that you can go write your events to is really nice
because it means you're not tied in with any cloud.
Exactly.
Right?
Which is awesome.
So we're pretty excited to sort of support that.
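For reference, an event in the early CloudEvents format is just a small JSON envelope; the sketch below uses the v0.1 field names with made-up values.

```python
# Illustrative CloudEvents v0.1-style envelope (values are invented).
# Any producer emitting this shape can interoperate with any
# subscriber that understands the spec, Event Grid included.
cloud_event = {
    "cloudEventsVersion": "0.1",
    "eventType": "Microsoft.Storage.BlobCreated",
    "source": "/storageAccounts/myaccount",
    "eventID": "b85d631a-101e-005a-02f2-cee7aa06d148",
    "eventTime": "2018-05-07T18:41:00Z",
    "contentType": "application/json",
    "data": {"url": "https://myaccount.blob.core.windows.net/images/cat.png"},
}
```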
We like that as end users because we think it makes the clouds compete on a level that we would prefer them to compete on.
A quality of service.
Exactly.
Versus a ability to lock in.
Exactly.
So that's spectacular.
Is there any other examples of that going on?
Tons.
So Cosmos DB is a great example of this, right?
So Cosmos DB, if you saw this morning.
We did, but give us the rundown.
So the exciting thing about Cosmos DB is it's a NoSQL offering, right?
It's a global offering.
It has some really interesting things.
It's got multi-master writes,
so you can have writes happening sort of all over the world
and figure out how to sort of reconcile that.
You've got different consistency models,
so you can decide how, whether it's strictly consistent,
so automatically write all the time,
or eventually consistent, it'll get there.
And then it has some crazy SLAs, right?
A latency SLA and a five-nines SLA for availability.
There's a latency SLA.
There's a latency SLA.
It's the only service that has a latency SLA.
Yeah, yeah, yeah.
It's really cool.
How does that work?
Like you say, I need like 50 millisecond latency.
It's a commitment of sort of when contacting the service,
this is how long it will take you to get reads and writes out.
Yeah.
Huh, okay.
But the really cool part about it is that it's multi-model,
which means you can write to it using a wire protocol with Cassandra or Mongo or Gremlin,
which are all open source-based solutions, right?
So you can basically write a Cassandra-based application, point it at Cosmos DB,
get this high SLA, get this multiple region replication,
get sort of this consistency model. And so you're not locked in, you've got portability, right? You
can move to another Cassandra deployment if you want to, but the service that we offer, I think,
is, you know, the best around. And so you'll hopefully want to stay because it's such a great
service, right? And so this is, it's one of these types of, you know, the Kubernetes service that we talked about a lot this morning as well,
a similar category of sort of...
Yeah, absolutely.
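As a sketch of the wire-protocol portability Corey describes: an app written against a MongoDB driver keeps working when pointed at a Cosmos DB account that speaks the Mongo protocol. This assumes the pymongo library; the connection string is a placeholder.

```python
# Same driver, same code -- only the connection string decides whether
# this talks to a self-hosted MongoDB or a Cosmos DB account.
from pymongo import MongoClient

client = MongoClient(
    "mongodb://<account>:<key>@<account>.documents.azure.com:10255/?ssl=true"
)
db = client["inventory"]
db.items.insert_one({"sku": "widget-42", "qty": 7})
print(db.items.find_one({"sku": "widget-42"}))
```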
Let's go back to serverless in this event grid.
Sure.
Because I've taken a few stabs at serverless.
I think it makes tons of sense for the IoT space,
especially when there's, like, maybe not single-use devices,
but, like, simple-use devices.
Sure, sure.
Areas where I get into confusion are, like, I'm like, wow, am I
doing this wrong or what's the deal?
I start shoving all the stuff into a single function
or I start wondering,
how do I get my, as a developer...
Where do you draw the lines?
Yeah, we believe
in single responsibility principle
and refactoring into smaller functions, blah, blah, blah.
How do you architect
these serverless things and
do they scale up?
Or is it only good in the small?
It's like the old porcupine joke, very carefully.
Yeah, exactly.
Exactly.
So, you know, I think that this is one of the challenges
that actually we as an industry have with serverless, right?
I think serverless is fantastic when it comes to IoT,
as you mentioned, right?
And going to be a core capability there,
because it really, it breeds that scenario, right? It's like, you've got your edge solution,
you've got your IoT solution, when you need it, it fires, when you don't, it's quiet.
Exactly. I also think it breeds, actually, it's a great scenario for automation, right? This is
where we've seen a lot of usage for serverless, in fact, maybe even more than IoT, which is every
time a VM gets created, do this thing, right?
Every time a storage thing happens, do this thing, right?
And so this type of automation,
this type of sort of DevOps experience,
I've seen a lot of usage of serverless,
a very exciting scenario.
The one that I think is still pending, frankly,
is an entire app being written with serverless.
It's very hard.
It's very hard because, one, the tool is not there, right?
I think the tooling is not there.
And I feel like we're furthest along in some of these things.
Like you can do local debugging of functions, right?
So you can basically bring a function down.
You can run it locally in your box.
You can debug.
You can put breakpoints and so on, right?
Just like you'd expect for a normal app.
But still, function chaining, which you need to write a large application, is very hard.
Monitoring a function chain,
where does the function fail?
It's really, really hard.
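A toy sketch of what "function chaining" means here: written as plain Python below, the chain is trivial to follow, but in a serverless platform each step is a separately deployed function with its own logs, which is exactly why finding where a chain failed is hard. The names are purely illustrative.

```python
# Three steps of a pipeline. Locally this is one stack trace; split
# across three serverless functions, a failure in the middle leaves
# you stitching together three separate logs to find it.
def resize(image: str) -> str:
    return f"resized({image})"

def classify(image: str) -> str:
    return f"label-for({image})"

def notify(result: str) -> str:
    return f"sent({result})"

print(notify(classify(resize("cat.png"))))
```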
And I think back to the early days
of object-oriented coding,
where the way you would debug object-oriented coding
was you'd actually go into the C language
and look at what your variables were, right?
Like none of it actually worked, right?
It was sort of, and so you basically had to fall back down to...
It was a facade.
Exactly, it was a facade.
Exactly right, exactly right.
And with pound defines, right?
I mean, that really was how it was originally written, right?
Until you sort of got to models that were actually built
with sort of object-oriented first class,
I think we'll get there.
I just don't think
we're there yet. And that would be at least my perspective on the serverless world.
Well, that's heartening because I just thought maybe I was-
Maybe it was just you.
An idiot. I just don't get the serverless. I mean, I get it.
I mean, that may be true too. I don't know. We just met. So I don't know. I don't want to speak
on behalf of-
What's some of the tooling out there for this scenario, like serverless?
Well, I mean, I think that some of the things, the integration with some of our dev tooling,
I think is a good example, like VS Code,
which is nice open source, run anywhere,
sort of the ability to take it and debug it locally, right?
That's some things that can really help.
And then monitoring, I think that the monitoring tooling
is getting there, right?
Both us and other clouds, I think,
are getting sort of this ability to monitor between it.
Right.
But you really need, I think you need
some sort of programming model abstraction on top that is going to understand function chaining and sort of take care of it for you and sort of be able to build out sort of how to go monitor and debug it along the path.
I feel like there's constructs that are still missing.
And frankly, we're working on some things.
I know some of our friends, cloud friends out there are also working on some things.
I think we'll see a lot of improvements over the next year in this area.
Do you believe that it is inevitable that serverless will be the way to go for large-scale applications,
and we're just not there yet?
Or do you think perhaps that's just a round peg in a square hole?
No, I think it's going to be a fundamental part of the
overall programming model.
I don't know how long it will take to get there.
I actually do
return back to the object-oriented
world, where the early days of
object-oriented, you ask them that question, they say,
I think probably
C is going to stick around for a long time.
Yeah, exactly.
I mean, it has, right? There's a lot of stuff going on.
Exactly. So, you know, I think that there will be a long
time before we say, you know, a majority of the world is even written in this way. But I do think
we will start seeing out because the benefits are so quick. I was going to ask, what are the virtues?
I think both the agility and the cost. I mean, it's just, you know, imagine having,
effectively having a program that you've written, an application that you've written, and split up into functions today.
Now, imagine in your mind you only pay for each one of the functions when they get called and never any other time, right?
And so it's, you know, it's never sitting on a server.
It's never sitting anywhere in a pass service.
It's priced in milliseconds, right?
It's priced in milliseconds by the lines of code that are executed.
I mean, it's hard to get much more agile in your pricing than that.
Yeah, that's as minute as you get.
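A back-of-envelope sketch of what pay-per-execution pricing means in practice; the rate below is invented for illustration, since real consumption pricing varies (typically per GB-second plus a per-execution charge).

```python
# Made-up numbers: a million invocations a month, 200 ms each,
# 128 MB of memory, at an illustrative per-GB-second rate.
executions = 1_000_000
duration_s = 0.2
memory_gb = 0.128
rate_per_gb_s = 0.000016  # illustrative, not a quoted price

print(f"~${executions * duration_s * memory_gb * rate_per_gb_s:.2f}/month")
# -> ~$0.41/month at these assumptions: you pay only while code runs.
```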
I do think...
Next is nanoseconds, though.
That's right.
Nanoseconds.
You do...
That's right.
And then picoseconds?
I don't know.
We've got to get to picos.
We've got to get to picos.
Give me some pico.
Either way, it's getting smaller, so keep it going.
That's right.
What comes below pico?
I should know this.
Zetto.
I don't know.
I made it up.
I just made it up.
Zetto may be right.
No.
Something like that.
Well, Zetto's the big side, but then it starts getting, yeah, yeah, yeah.
Anyway, you're going to look it up?
I'm going to look it up while you go.
He's Googling.
Jared's Googling right now.
Smaller than Pico.
Either way, I do think.
So I think that that's going to be.
And then the other aspect of that is that you look at something like a microservice model.
And we showed this a little bit today with some of the cool new development tools that we talked about when it
comes to Kubernetes and being able to basically take a microservice and take it out and build
sort of updates to that microservice while leaving the rest of the service untouched.
With serverless, that's even easier, right? You started saying, you know, your function chaining,
you start saying, great, take this function, just update it. And it's like that updated and calling it into it. And suddenly
you've sort of got, you know, an entirely new path for your application going through that new
function. And so the agility and the cost reductions, I think will drive it there. But then
you start thinking to yourself, all CICD, I mean, what CICD pipelines have you seen that really have deep chaining of functions and sort of pipelining of functions? They don't really exist. Like,
we do have CICD updating with functions, but it's still pretty primitive. So, I think we're still,
I think we're getting there. And I think we will get there. And I think it's going to be a core
part of many services. But I think it's going to see the progression like object-oriented saw, right?
Real-time update.
A picosecond is one trillionth of a second.
Makes sense.
Smaller than that is a femto.
Femto!
I didn't know that.
Smaller than that, even, is an atto, A-T-T-O, second,
which I've never heard of in my life.
And I can't even pronounce what that means,
so we'll move on.
Zetto doesn't even exist, then.
I think it's the other side. It's the big one. I made it up.
It was pretty, it sounded on point.
It sounds like something from Superman, I think.
So that makes sense. Corey agreed
in the moment. In the moment. There you go.
I'm a trusting soul.
Zepto-second, I think
you're referring to. One sextillionth.
You've got to be careful how you say that one.
One sextillionth of a second. All right. So we've covered that.
This sounds good. I'm glad we got to the bottom of this.
This is an educational show.
We've all learned. We've all learned in port.
Now we know.
Now we know.
One of the slides in Satya's keynote, I think it was almost an opening slide, was
the world is a computer.
Yes.
And I'm in this. I get it, but I didn't really consider that
being the truth, right? Like that seems so profound to see on that screen. What does that mean to you
in compute? Yeah. So this is where, when you look at some of the services and things we talked about
today with like the IoT edge, this is really where you start seeing this come become a reality,
where you start seeing with the sort of explosion of IoT,
right, where just everything's going to have some level of intelligence to it, right?
And that sort of pushes down this whole edge concept of, great, now that you've got that,
you're going to need these edge components to be able to do some work, right? You can't just have them all calling home and saying, tell me what to do.
They need to be able to do some work.
And so this concept of taking computation,
pushing it out to the edge, and then,
but then the principle of that is just a part of the cloud.
Right, that's not creating something different
from the cloud, but actually creating a sort of a-
Extends it.
Extends it, exactly.
And so then how do you take a consistent programming model,
a consistent application model, and make it possible to deploy on that edge?
And this is where IoT Edge comes in, which we open sourced today, which is very exciting.
So that means, again, portability.
You don't need to feel locked in with our cloud.
Come back and tell us exactly what that means, but keep going.
Yeah, I will. I will.
And then the ability to deploy, perhaps even more important, the fact that all the components that deploy into IoT Edge are containers,
which means, again, they have portability, they can
deploy anywhere, and so you take our cognitive
services, you take our
function platform,
also open source, you can take
those, you can containerize them, you can deploy
them in this IoT Edge, and suddenly
that IoT Edge can run disconnected,
and so we can start doing intelligence
and using the cognitive services that you built in the cloud
and you developed in the cloud,
but it can run without talking to the cloud, right?
With the open source or the runtime portion of it.
With the runtime portion of it, exactly.
And so that is, I think that is a super compelling,
and then Azure Stack sort of up the chain
where it can run even the full cloud in that environment.
It's super compelling when you look at sort of that picture,
you know, sort of the cloudy picture
where it's got the center cloud and then sort of the edge cloud.
It's just super compelling to say, you know, look, you write once and you can deploy anywhere.
It's just a very, very exciting world that that could be.
Yeah.
Let's go back to the open sourcing of the runtime.
Yeah.
You know, what is that runtime?
What's it look like?
Yeah.
So it's, you know, it's fairly simple, right? I mean,
it effectively runs containers and manages sort of the health of the containers running inside that
Edge device, right? So on the example today, sort of the camera, right? It could run containers
inside the camera and the camera had enough computing power to be able to sort of do that
and do that work. And so that included cognitive services, it included functions, right?
It included sort of those aspects.
And then the IoT Edge was basically taking the containers that were built in public Azure
and accepting them and deploying them, right?
And sort of managed them so they could take updates and they could communicate back
and they could communicate health of the containers and so on.
So it ends up being sort of a hosting platform for the actual containers.
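To give a feel for what "a hosting platform for containers" means here, below is a hedged sketch of the shape of an edge deployment: a list of container modules the runtime should pull, run, and keep healthy. The field names only approximate the real manifest format, and the image names are placeholders.

```python
# Sketch of an edge deployment: the runtime on the device is told
# which container images ("modules") to run and report health for.
deployment = {
    "modules": {
        "camera-capture": {
            "type": "docker",
            "settings": {"image": "myregistry.azurecr.io/camera:1.0"},
            "restartPolicy": "always",
        },
        "image-classifier": {
            # e.g. a vision model trained in the cloud, containerized,
            # and pushed to the device so it can run disconnected.
            "type": "docker",
            "settings": {"image": "myregistry.azurecr.io/classifier:1.0"},
            "restartPolicy": "always",
        },
    }
}
```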
Kind of like an operating system or actually?
Yeah, it's sort of a layer above the operating system, right? Because those things have an
operating system. It's more of a PaaS type environment. Like it's more of a container
orchestrator, but on the local host. But on the local host. Yeah. What
technologies is that built with? So it's built with a lot of technology,
right? Inside Azure, it uses components of actually service fabric, which has got some sort of
capabilities to do container management, but it also has actually Kubernetes based support as well.
So it's ending up trying to be kind of an enablement of whatever type of sort of orchestration
you want, but it also shouldn't matter, right? So it's built with those sort of core functionality
capabilities, but in the end, the customers just see that their containers are deploying. And so,
you know, bully.
And we saw that today running on a Raspberry Pi.
Yeah, you did.
Exactly.
Raspberry Pi, you saw it on the Copter.
The Mavic, I think it was. Yeah, exactly.
The DJI, and then you saw it running on the camera directly, right?
So the Qualcomm camera.
All of those were running that runtime directly.
It's an interesting partnership, too, to see you guys work with DJI.
I mean, they're obviously the number one, you know, drone.
Yeah, I called it a copter, and I'm embarrassed because I don't think that's actually what we call it.
Drone is the word.
I'm embarrassed.
I'm not correcting you.
I'm just saying the proper term.
Don't edit that.
People should know that I'm embarrassed.
Go on.
There's all sorts of people using drones.
Yes.
From the DJI's perspective, you've got, I'm a drone user.
We do some filming and stuff like that. It's a lot of fun.
I guess, where do you see, the example they gave was like agriculture, like in industry, you know, examining pipelines or, you know, going over fields.
This is an interesting place to put that.
Do you see, like, say, agricultural companies
becoming software companies and using
this runtime on the edge?
Who's using this runtime?
Is it DJI or is it farmers?
It's interesting, yeah.
One of the points that we do talk about a lot
is that we think every company's becoming a software company.
I think as you sort of look at both AI
and you look at sort of the cloud capabilities,
there's a little bit of every company's getting involved in technology because they have to.
That's the only way to compete.
Specifically in this case, it can be a combination of both of these.
I think that I would actually expect in this case that DJI is creating the environment and the platform
to be able to deploy and control the drone, right, and engage with it.
But the end business requirements are going to come from that end customer.
Right.
How I use a drone is how I want to use a drone.
Yeah, exactly.
And then what sort of machine learning am I doing?
Like, am I detecting broken pipes or am I detecting broken fields, right?
I mean, I presume that's a different model, right? And so, you know, even if it is the demo
that we showed on stage,
take a bunch of pictures and identify it
and learn the model in Azure,
take the container and deploy it,
you know, that should be fairly easy.
It doesn't take a lot of development skills to do it.
It's so interesting to put that kind of power
in, like, general developers' hands.
Yeah, it's unbelievable.
I mean, like, it's astounding.
I can only come up with that one.
I mean, like, it takes so much to train these models and do all this interesting stuff around machine learning and neural networks and all the necessary things.
And you've, you know.
We use the word democratize all the time.
Democratizing AI, making sure that whether you're a data scientist or an entry-level developer, you can take advantage of these tools.
Tell him, Jared.
We're going to make a practical round.
We're practical AI people around here.
Tell him.
Yeah, we have a show in the works, a brand new podcast called.
Is this a secret?
Are we not talking about it?
No, no.
We're announcing it right now.
This is it.
This is announcement.
Nobody knows this but you.
Live announcement.
Okay, good.
Maybe a few other people.
And the people that may be listening.
And everybody else who's listening to this.
It's known.
Other than that.
Go on.
It's called Practical AI.
So it's a brand new podcast.
It's not an episode.
It's a whole new show.
A whole new series.
A spin-off series, as it were.
That's right.
And we're quite excited about it.
So it's going to focus on Practical AI.
Yeah, it's going to be.
Making it practical.
You know, making it accessible.
The word democratizing it.
I mean, there's a lot.
It's very mystical to many of us still.
And there's so many high-level concepts that need discussing.
There's ethical implications.
There's privacy security.
There's like the nitty-gritty of how you deploy it to the edge and whatnot.
And there's so many conversations that would take over the changelog if we were to have all that.
Enable it here.
Right?
And so it's like, well, let's give it another outlet.
So anyways.
You saw the video at the end of Satya's keynote.
Very touching.
But a good example of sort of the power of AI, which is just being able to change the rules of accessibility.
The two parents who are visually impaired, who are blind, raising a child who's not,
and using machine learning, computer vision to tell them, you know, what's
happening in their surroundings.
I mean, it's just, it's changing our lives in front of us.
It's a, anyway, I'm sorry, I'm getting, I'm getting, I'm getting choked up.
Well, I don't know whether it's the video or the beer or what, but anyway.
But it's exciting stuff.
I mean, there's a lot going on, especially in the robotic space, in the AI space, in
the serverless space.
That's just very, like, it's very exciting.
What else are you excited about that we haven't asked you about or haven't talked to you about today?
So I think we talked about, gosh, a lot of the things I'm excited about.
So we talked about IoT Edge.
We talked about the overall Edge strategy and where we're going, which I think is exciting.
We talked about AI and the opportunities there.
And we talked a lot about the open source work, right?
I think we talked about Cosmos DB.
You know, I think maybe one that we didn't spend as much time around
was the Azure Kubernetes service,
which has been really exciting to see, you know,
our container-based implementation of this,
and then the developer tools around it, right?
Some of the things that we showed today, the developer tools,
things like doing live, you know, Live Share, being able to edit sort of a single container in a microservice
environment without touching the rest of the environment.
Particularly hard to demo that one because it's kind of invisible.
Exactly.
If you don't look closely.
If it works, it just should just work, right?
You shouldn't see anything wrong.
And then even one of the things that we didn't show today, but is very cool, is IntelliCode,
which you should go take a look at as well if you haven't yet.
This goes in.
It's actually built as part of Visual Studio.
And it goes in and it uses AI, again, using AI for practical purposes,
to try and detect bugs that you may have in your code.
It's like the pitch that I use for your stuff.
I love that. I love how you put it that I use for your stuff. I love that.
I love how you put it in there.
Practical AI.
I love it.
We nod.
I like that.
So that practical usage case there.
Now you have to get these guys on your show to talk about this.
It goes in and can predict bugs that you're writing based on other ways that you've written your code.
And so it'll come in and say 92% chance that you meant this when you wrote this in your code.
And it can even use sort of external,
sort of, hey, we've seen other people do this
and they always use this instead.
It's just, it's really pretty cool, actually.
What if you don't write bugs?
Like, asking for a friend.
Then you become the model that AI needs to learn from.
Oh, I like this.
You are the patient zero. You are patient zero.
You are patient zero for the AI system.
So we need to study you, I think.
Okay.
Okay.
That's interesting.
I'm very expensive to study.
However, one downside potentially to this
is the feeling of over the shoulder as you're coding.
Sure.
Coding is kind of an intimate, personal thing.
Sure.
You get in your zone, put your EDM on, whatever,
and you kind of feel like your editor is always watching you.
Well, here's the thing I'd say.
Yeah, fair.
Cue the Police song.
The way it's been, this is actually the deep discussion on this point,
this exact point, and the way it's been implemented
is very similar to IntelliSense, which is why we use the same name.
Right, so when you're using it,
IntelliSense comes in
and tries to auto-complete your thing.
You don't mind.
You don't mind.
Not creepy.
And so this is the whole point.
It's basically, it's like spell check.
But it's better.
When you're typing,
squiggle, squiggle, squiggle.
Hey, are you sure this is what you meant?
Yeah, done.
Got a word for you then.
I wouldn't do that if I were you.
Bug check.
Spell check, bug check.
Spell check.
Yeah, that means something else though.
What did you say?
Bug check.
Bug.
Bug check.
Okay.
Bug check.
Yeah.
That means blue screen in my world.
So that's a bad thing.
That's a bad connotation.
I'll take it back then. Yeah.
I mean, let's think harder.
I mean, I don't mean to be critical, but you're better than that.
That's all I'm going to say.
What's the official name for it then? So if it's IntelliSense, what is it?
IntelliCode.
IntelliCode.
That's a slightly better name.
It's a good name. Better than BugCheck.
BugCheck. This went sideways quickly.
This is clearly an evening show.
Either way, though, you've got the feeling
of somebody looking over your shoulders.
You can turn it off.
What I mean by that is, this could turn into a counseling session.
That's right. It's okay. Turn it off.
Making it like that makes it more approachable, to not feel like somebody's watching.
Oh, and I have a real person watching.
Yeah, that's right. That's right. Someone's here to help.
It's, you know, it's the Clippy. My question is, can we get Clippy in there?
Oh gosh, I'm not allowed to bring that up, am I?
No.
Yeah.
We're hitting the borderline here.
Adam, do you have any more questions?
I'm going to get the knife here.
I think, you know.
I love Clippy.
That's it.
That's it.
Okay.
Close it out gracefully, then.
It's been fun.
He doesn't seem convinced.
Obviously, we have a show coming up where we can talk a lot about AI together. So I would say, any conversations you think practically, we'll talk practically in a future show.
All right, I love it. Thanks, Corey. You guys, this was fun.
Thank you for having me. Thank you.
This episode is brought to you by Linode, our cloud server of choice.
It's so easy to get started at linode.com slash changelog.
Pick a plan, pick a distro, and pick a location,
and in minutes deploy your Linode cloud server.
They have brag-worthy hardware, native SSD cloud storage,
40 gigabit network, Intel E5 processors,
simple, easy control panel, 99.9% uptime guaranteed.
We are never down.
24-7 customer support, 10 data centers, 3 regions, anywhere in the world they got you covered.
Head to linode.com slash changelog to get $20 in hosting credit.
That's 4 months free.
Once again, linode.com slash changelog.
And by GoCD.
GoCD is an open source continuous delivery server built by ThoughtWorks.
Check them out at gocd.org or on GitHub at github.com slash gocd. GoCD provides continuous
delivery out of the box with its built-in pipelines, advanced traceability, and value
stream visualization. With GoCD, you can easily model, orchestrate, and visualize complex workflows
from end to end with no problem. They support Kubernetes and modern infrastructure with elastic
on-demand agents and cloud deployments.
To learn more about GoCD, visit gocd.org slash changelog.
It's free to use, and they have professional support and enterprise add-ons available from ThoughtWorks.
Once again, gocd.org slash changelog. So we're joined by Stephen Guggenheimer,
Vice President of Microsoft AI.
Steve, thanks for coming on the show.
Thanks for having me. Excited to be here.
So we're excited. We love AI. We're very interested. We're kind of outsiders in terms
of we're not using it in our day-to-day lives yet. I feel like a lot of people are in that
circumstance, right? Well, I think you are. You just don't know it. You know, the sort of the
core origin of almost all of the search engines that are out there, be it Google or Bing or even Amazon, there's AI at the core of that,
sort of looking at very large sets of data
and trying to proactively give you a little bit of help.
You may not think of that as AI, but it's there.
Yeah, it's deep down underneath the covers, right?
Exactly, and those building blocks are now
finding their way into lots and lots
of software and programs.
It's still early, make no mistake.
We're really high on the hype cycle
and sort of low on the it's broadly available, but it is there. And I think people are just
starting to understand that. So when you speak to lay people about what you do and the progress
you're making with artificial intelligence, what's the way that you go about describing it,
like maybe even defining what AI is to somebody who's not on the inside of the scene?
Yeah, I think there are a lot of definitions
for AI, but in general, how do you, on one hand, take large sets of data and find information from
that? And then more importantly, how do you sort of move from a reactive, hey, I know something and
let me help you, to a proactive, let me proactively help somebody in any activity that they're doing.
And so the most useful case I think people think about is robotics. And when people see movies,
it's sort of a physical instantiation of AI, something that can communicate with me, hear me,
see me, and interact with me on what feels like a natural level. In some way, AI is trying to
bring those different capabilities to life. Whether it's a virtual agent on a website,
whether it's proactively giving you ideas on what to buy next
or what movie to go see,
whether it's sort of the ability to have ambient computing around you
so that ability to have speech-to-text translation
or language-to-language translation,
it's woven in lots of ways.
But at the end of the day,
how do we allow people to interact naturally
with the computing environment around us?
A lot of our space that we're speaking to in this podcast is developer, but I think I
still sit back and think like, when is AI a threat? You know, being responsible about it,
when can it be like Skynet, which is the typical thing. You know, you talked about responsibility
in terms of the way you approach that kind of thing. What's Microsoft's stance towards
responsibly deploying artificial intelligence,
making frameworks, making it accessible to practical for the users out there?
Well, I'll start by saying the good news is for all the hype on AI, we're still pretty early.
So we've got a ways to go before it gets to the levels you see in most movies and most commercials.
I think for us, you know, we're trying to take a leadership position in enabling the conversation.
When the web first came out, you know, we all sat around and go,
oh, the web's going to change the world, and it has.
We didn't sit around and say, okay, now let's have a conversation on the ethics of the web
and sort of how we want to manage it and how we should work.
With AI, I think, you know, we can see the potential for the transformation ahead,
and in that light, it's not one company that's going to define it.
Frankly, it's not one government that's going to define it. And it's no particular group in society. So how do you
create a conversation between sort of society, you know, as a whole government and industry
to have the conversation? We've been trying to, you know, get that proactively out there. We
published a book, our chief counsel and our head of AI published a book called The Future Computed.
And it's, it's meant to sort of start the conversation. We have a council inside of Microsoft on the ethics of AI, and it works across the
entire company. And on the ethics side, we have sort of published a set of base level things to
think about for the ethics of AI. So there's seven areas, things like transparency and removing bias.
And so we're trying to drive that conversation proactively and get ahead of it.
Again, it's not up to us to define per se, but if we're not, you know, sort of in a healthy
way trying to move forward, we're all going to collectively sort of not get to the point
we want.
So that's our approach right now.
What are your thoughts on organizations like OpenAI, for example?
Like, you know, just doing the research behind things, kind of putting the information out there
in an unbiased way.
I think that's what we're all after,
trying to get information out there
in an unbiased way,
and honestly in an educated way
to your statement earlier
about a layperson who's not living and breathing AI.
How do you help people understand what's possible?
And this is tough.
I mean, to some degree it's generational.
If I look at my parents and trying to help them with computers, like they didn't grow up with computers.
So there's not a super comfort level there.
If you take even our generation, you know, they didn't grow up with AI.
And so, you know, I talk to people, my friends, you know, that are our age who worry about data privacy.
They worry about sort of the things they hear about different providers and their information.
You take the next generation. You take my 19-year-old and my 20-year-old,
they understand a lot more about how their data is used. They understand how these things work,
how to turn them on and off. And so part of this is generational. Part of this is sort of trying to help people understand what's there and making sure the tools and the
conversation are going forward. Yeah. It seems like there's a,
we're in the current state of the world
with these technologies,
there's, maybe it's an uncanny valley.
It's like, it's almost like it's,
there's an uncanny valley.
Are you familiar with that term?
Where it's from like CGI graphics.
Okay.
Where if we see computer animated things
and they don't look that much like,
like they're not trying to look like humans,
it's fine.
Right.
But then if they try to get to a certain point looking like a human.
And they don't quite get there.
And it's almost worse.
It almost looks like a monster or something.
It's like the Turing test for things.
Yeah, exactly.
Can I interact with something and not know it's a computer versus a human?
Right.
And we're in the place with AI where there's insights that,
that these systems can make the proactive stuff that you're talking about.
But it's not so useful that humans perceive it as helpful.
We perceive it as creepy sometimes.
I'm thinking specifically of specific ad targeting based on data models and profiling and those kind of things. And I guess the question is, what steps is the community taking to get over those hurdles?
Right. Is that like a thing that soon enough it will be less creepy or is it going to get more creepy as these systems learn more and more about us?
You know, I think that's up to every person to sort of define what's creepy to them and what makes them comfortable or uncomfortable.
It's a little like sort of the tradeoff between privacy and security.
People want more security, but to do that, you often give up a little less of your privacy.
And so there's this balancing act where you have to find your own personal comfort level.
You know, people like free stuff.
And so what is their willingness to trade off their sort of information for free?
Yeah, exactly. You know, for me, when I think about it, I tend to think about,
look, if you do a good job infusing AI into systems, people don't know you've done it.
They just work better.
And so it's all about do things work better?
And if they work better, they don't necessarily feel creepy.
And so for us with Office, you know, the fact that we can filter out spam,
a lot of people would find that positive.
But that's sort of AI in the background.
Yeah, yeah. The ability to sort of help people, you know, when writing a paper know, hey, that sentence
looks like this sentence and maybe you want to change it. That's a very comfortable laying out
pictures. Like the ability to take a set of photos, throw them on a slide and have it give you four
or five layouts and give you a subcaption. That's AI. That just, it feels like PowerPoint's just
working better. It doesn't feel like, hey, that's a creepy AI thing. And so again, it's sort of how it gets used relative to the scenario it's in.
And does it feel natural or does it feel unnatural? Yeah. Going back to the point about
responsibility and kind of the community deciding, maybe not self-regulating or determining what's the responsible ways to go about these things.
Are there any efforts across organization, Microsoft, Google, Apple,
the people that are working, making huge progress in AI to kind of standardize and work together, share?
Definitely sharing, in particular on the ethics and the AI conversation.
Look, sort of like this podcast, there's very few forums you go to
where you don't get some combination of the,
you know, if it's all developers,
you get more depth on the technical side,
but you do get these sort of
social conscience type questions.
Yeah.
And so trying to, you know,
have that conversation in a vacuum isn't too useful.
So we do talk with the Facebooks of the world.
We talk with Amazon.
We talk, you know, I get a lot of interesting questions
from other large corporations
and we connect them
with the folks in our company who are sort of leading
that dialogue on behalf of Microsoft.
And what you want to try and do is have
an industry conversation.
Doesn't mean everybody agrees with the approach,
but this notion of what are sort of the ethics of AI,
what are the seven principles we have,
what are the, I don't know, n number of principles
a different company has, how do we have
a unified conversation, how do we, you know, have How do we have a conversation across borders, across countries?
Can you speak to what you put out, the seven principles, what book that is from?
You put out a manifesto, wasn't it?
Yeah, the Future Computed is the name of the book.
And it's sort of about, hey, what are the things we should be thinking about with AI?
For developers, should we have, you know, I forgot what the oath is that doctors take.
Hippocratic oath?
Yeah, Hippocratic oath, is there something equivalent
for developers in the future?
Is there, you know, they talk a little bit about
is there a Geneva Convention for the use of data?
Right.
You know, when you think about the things
that have traditionally, you know,
had boundaries for countries, which are borders,
data doesn't quite work that way,
AI doesn't quite work that way, It doesn't quite work that way.
So what are some of the new tools we might need in terms of how we think about this?
And that's why it comes back to some combination of government and companies and society, because
there's not one group that's going to sort of be able to make it work across sort of
the world we live in.
Well, this is the second time we've been to a Microsoft conference in the last year.
We were at Connect last November. Oh, nice. Welcome. We're glad to have you back. Thank you.
And artificial intelligence was a big conversation there. It's a big conversation to build.
Obviously, this is more towards a developer, a developer conference for Microsoft, right? So
I guess maybe the devs out there are thinking, like, how is my job going to change because of
artificial intelligence? How are the tools, the things I'm making? You know, how will this impact developers' lives over the next, you know, decade?
Well, I think one of the things we're working on is, how do you take what's traditionally lived in the research world, you know, computer vision as a research area, or natural language processing as a research area, how do we sort of create a normalized set of tools for developers so that they can think now of a new layer in the developer stack specific to AI.
So traditionally, you know, we have the cloud as a layer,
you have data as a layer.
We have these cognitive services.
Here's how you as a scientist could work
with computer vision, or here's how you could take
CNTK or TensorFlow and build your own models.
We're starting to build a normalized set
of cognitive services, the ability to understand speech, the ability to talk, the ability to see, the ability to reason, and taking them from being sort of individual research areas
and projects to a developer toolkit where there's documentation, where there's consistent set of
APIs, where there's sample code, and then making them more enriched over time. So now as a developer,
I can say, hey, can I infuse sort of sight into
my application or listening into my application or reasoning? So how can I start to infuse AI
into the applications I'm building? And how do I have that as a tool set where I can pull it from
Visual Studio or whatever tool set I want to use? I can use them against the cloud. I can use them
at the edge. So you're moving from a world that's been pretty heavy research
and been able to pretty much do it on your own
to a set of tools that are more standardized so every developer can use AI
or every data scientist now can more easily work with data and create models.
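As a sketch of what "infusing sight" looks like from the developer's side, here's a hedged example of calling a Cognitive Services-style vision endpoint over REST with the requests library; the endpoint shape, API version, and key are placeholders to check against current docs.

```python
# Hedged sketch: one HTTP call to "see" an image -- no model training,
# no math, just an API key and a URL. Endpoint and key are placeholders.
import requests

endpoint = "https://<region>.api.cognitive.microsoft.com/vision/v2.0/analyze"
resp = requests.post(
    endpoint,
    params={"visualFeatures": "Description"},
    headers={"Ocp-Apim-Subscription-Key": "<your-key>"},
    json={"url": "https://example.com/field-photo.jpg"},
)
# Returns captions like "a tractor in a field" with confidence scores.
print(resp.json())
```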
Almost kind of reminds me of JavaScript sprinkles in a way, like AI sprinkles.
Yeah.
Sprinkles for sure.
And I've been waiting for this for a while because as a developer, I'm just saying,
just give me the API call, give me the SDK.
I don't want to go
learn the math necessarily
or the deeper concepts required
to deploy these things. I just want to be able
to sprinkle some AI onto my
application and immediately affect my
users. So are we there? Are we coming
to that point? We're getting there. We're at a point
now where I always say it's too early to do everything with AI,
and it's too late to do nothing.
Yeah, I like that.
And it's been an interesting time because I liked your notion of Sprinkles.
I stole that, by the way.
I know. It's been around for a while.
That's somebody else who got that.
Everything's a remix.
It's a remix.
It's a remix.
This notion of starting to work with AI and infusing it into what you do,
that is the point in time we're at.
And sometimes I get asked, hey, where do I go get AI developers?
Generally, you don't.
You can go find AI researchers who are really deep in computer vision or really deep.
But AI developers, that doesn't really sort of exist if you go look at them.
But if you say, hey, are we at a point in time where developers can start to infuse AI into what they're doing?
The answer is yes.
You need to go spend some energy on it.
It's sort of a new set of tools.
And there's sort of a different approach if you're trying to create a solution that's sort of more probabilistic in nature versus linear coding.
But we're there.
And so now is a good time for developers to start to at least spend some energy on this.
Yeah. So we've seen this migration of, as you mentioned, these technologies and lines of mathematics and research in academia, right, in the labs for years.
Many of these things existed 25 years ago.
And then finally, the practical use of that in industry and it's transforming everything.
Are there any other things going on in the labs where it's like, this is not ready for prime time yet, but five years from now, ten years from now, are there new technologies in AI?
Because it seems like we have this curve where we have like a massive growth and then like a flattening out,
and maybe we're at a plateauing of a phase.
No.
No, the research is still super active.
I mean, if you look at, there's three things that get us here today to start with.
That ability of sort of, you know, compute storage and networking at scale of cloud,
us, Google, Amazon, et cetera,
means every developer now can build
and work with models at scale that you couldn't do that for
unless you sort of worked somewhere
that had that sort of core.
The second is data, right, the growth of data.
We just didn't have that for building models
and working with it.
And all that research to bring these APIs to life.
So now you're at a point where
everybody can start to play with it,
but it's actually just fueling more research, so for example, if you look at
deep neural networks, there's not sort of one approach
to a deep neural network.
People are discovering new types or new approaches
to deep neural networks for speech versus vision
versus how do I apply this to music?
I want to replicate music or I want to replicate art
or we're working on ambient computing,
I want to be able to mix multiple sensors together
and computer vision to come.
So the research is actually accelerating. And when we look at the solutions team,
we actually, part of the team is researchers, like keeping up with machine reading and comprehension because it's changing so quickly, we need the latest. And then developers, because
frankly, I got to go build something that I can ship. And so you're seeing, you're actually
seeing an acceleration of the research and you're seeing these records about sort of
human ability,
so the ability to recognize as well as a human or the ability to speak or understand.
Or now we just did one that was sort of English to Chinese translation.
You know, we just like three weeks ago, you know, beat the record.
And, you know, around Christmastime, the record relative to reading comprehension, equal to a human for a specific test, was matched.
So you're actually seeing it accelerating right now.
So you sound excited about this stuff.
I am.
What's the angle into it that makes you the most excited, that you're most bullish on?
Well, if you think of this, there's patterns relative to AI,
and one of them is sort of AI assisting humans.
So this notion of how do you help people, how does Microsoft and the industry
and AI help us as humans in any field is incredibly powerful. And so the one that always catches you
the most is healthcare. Like the Project Emma, if you go look that up and what we're doing
relative to Parkinson's, seeing AI, the ability to help people who are sight impaired.
I mean, the ability to help people,
the work going on in genomics and radiomics.
You know, look, I'm getting old.
Like, this stuff is pretty cool.
I might be needing some of it soon.
You might be using it sometime soon.
But it is that ability to amplify human ingenuity
or amplify human capability that is so powerful.
And I feel comfortable because, look, we're early enough in it that I worry less about the Skynet scenario.
And the truth of the matter is the kids coming out of university today, they're going to grow up with a new set of tools.
They're going to grow up with AI, blockchain, virtual or mixed reality, IoT.
So you're going to grow up with a new generation of people with a new generation of building blocks,
which will vastly change the future.
And it's kind of fun to be at the beginning of that.
We grew up with compute storage and networking.
And we've been from client, to client-server, to internet, to cloud and mobile, to now intelligent edge and intelligent cloud.
They've got a whole new set of building blocks.
They have a different attitude towards life.
It's pretty cool to be on the front edge of that.
And I'm there at the beginning; I won't be there at the end, unless, of course, some of this healthcare stuff comes through.
But it's fun to watch.
And you can just see people light up when they start thinking about what's possible.
Your notion, though, of the AI-assisted human, that's what you said, and I think it's a pretty interesting notion, because if we as humans could have more information, faster and more relevant, considering our context and surroundings, maybe even the particular tool we're working in or the thing we're working on, whatever it might be. I can't name any particular scenario, but humans are pretty quick at thinking logically about the next advancement. We wouldn't have to rely on the machine to do it for us; it could feed us the necessary information at the right time to make a good decision.
Well, it's that curation of information. We're drowning in stuff today. Between your social networks and your work networks and all the information that's out there, having it curated for you proactively based on you as a human, and having it made easier, whether I'm at a small screen or a big screen, like a TV screen, or at work, having it proactively help me work with all that information so that I can leap ahead further is, again, an incredibly powerful concept versus me going out and searching.
Bill Gates made a statement, I don't remember the exact phrase, but it was something along the lines of: today we all go to computers and have to know how to use them; we're hitting the right point with AI when computers know how to work with us. It's the opposite approach.
And computing is becoming more of a fabric as opposed to a device.
So there's a lot of big players in AI.
Microsoft is definitely one of the big ones.
When you look at the landscape and you look at your competition,
what gives Microsoft an advantage in moving faster and delivering more than the other guys?
You know, I think there's a couple things that differentiate us,
and we can decide whether, you know, it's about moving fast or not.
But I think, first off, it's those core building blocks: compute, storage, and networking at scale.
Azure's obviously, you know, one of the large cloud providers.
There's only three of them on the planet, and it's growing well.
The second is not only do we understand data, but we have some very unique data assets across Bing, LinkedIn, and the Microsoft Graph. And third, look, Microsoft Research has been there for 25 years; we've been doing research. So we have the core in place. Beyond that, I think the next thing, frankly, is a customer and commercial ethos. When you talk about people using AI, there's the notion of security and privacy and management, and of solutions that span on premises, edge, and cloud.
I think that ethos of a commercial entity, and how you apply this in a business setting, makes people a little less nervous about us. We have stricter rules on how we use data; we always have. And we're not inserting ourselves into certain industries that others are. So that combination of a commercial ethos, core fundamentals that are very sound, staying non-competitive, and taking a proactive approach, for example, on the ethics, I think gives people a comfort level in saying, hey, we're looking for help in AI, because a lot of people are, and you seem like a good set of folks to talk to about it.
Yeah.
I think with these keynotes, like today with Satya's keynote, the first thing he said was privacy. I mean, that's essentially how it opened.
But it wasn't like, here's how awesome we are and what we're doing.
It's here's how responsible we are with the data.
And here's how we apply data to problems.
And when you take that, you say, well, here's artificial intelligence laid on top of big data or cloud software, things like that.
You really have to look at Microsoft. It's definitely a point of view: that's what you came out of the gate with, not here's our latest tool, here's all the cool science.
Yeah.
I mean, at the end of the day, look, you need the science, but it comes back to that ethos.
What's your ethos as a company?
You know, how do you think about sort of the commercial landscape versus the consumer?
Right.
How do you think about helping other companies use this technology?
How do you think about doing it responsibly?
Look, I've been at Microsoft 24 years, you know, been through a lot of sort of ups and downs.
You learn a lot over that period of time.
You know, how you work with developers, how you work with open source, how you work with data,
how you work with governments, how you work globally. There's a lot to learn there. I think that
helps us. What's it like doing what you do, in your role? VP, Vice President, right?
Yeah. It's a big role. It's overarching, right? You've got to be in the weeds.
What does that look like in the day-to-day sense? What's the day-to-day in your life?
A good chunk of it is spending time with customers and partners.
You know, before this role, I managed our developer evangelism.
Before that, I managed our OEM ecosystem.
So I'm used to community and partners.
So I spend a lot of time with customers and partners just having this kind of conversation.
And then working across Microsoft, look, we're still a big company.
How do you orchestrate the sets of work going on, you know, between AI and the research
group, between AI and the platform team, between AI and the product groups, and then managing
people? How do you build and grow people, leaders in the company, anyone from fresh out of college to other leaders? So, you know, my time is split between internal
and external and trying to be a good advocate for the company. I think because AI is such a
cross-cutting concern, like you said, it's going to be infused into everything.
It doesn't lend itself well, I wouldn't think, organizationally, to a siloed approach or a functional approach.
Very integrated.
You probably have to integrate into lots of teams.
Is that a challenge?
You end up working across teams.
Look, you have a set of people that are working on the platform,
the things that developers would use,
and then you have a set of people who are using it.
And really what you want to do is enable the conversation, so that the platform teams learn from the people who are actually using it, you feed things back into the platform where you need to, and the teams that are busy advancing stuff are making the most of it in the products. Then you've got researchers on the other side.
So a lot of that's enabling the conversation.
It's helping get ready for events like this. It's helping bring it to life for customers. So there's a decent amount of orchestration. That's where having been there a long time actually helps, knowing your way around. But, you know, it's just fun. I always like to be in the center of the whirlwind, and AI is definitely the center of conversation and energy today. So that's where I like to be.
Maybe as a closing question: what are the biggest challenges moving forward for AI?
I think one is...
What are the hurdles?
Expectation setting. You've got two ends of the spectrum. You have the, gosh, it should be able to do all these great things, I've seen that, side. It's not that easy; we're not there. And then you have the other side, which is, oh my gosh, this is scary.
Yeah.
And so you end up working on both sides, on the over-expectations of what it can do, from both a good perspective and a discomfort perspective. And once you can get the expectation setting right, which is one of the biggest hurdles by far, then it's helping people pick the right path.
How do you set expectations? How do you apply them?
You know, on the not-overdoing-it side, it's just sitting down and talking to people. I don't know, in any new technology area or any new conversation, unless you're in the dialogue, unless you're in the community, in with the developers, you're just not there. And so I think our biggest way to help is to be part of the dialogue and part of the conversation.
Again, it's not one person or one company, but you got to show up. You got to show up ready, ready to have the dialogue. And some of the dialogues are easy and some of them are hard.
It's my favorite advice. Just show up.
Just show up. You got to show up, right?
I agree. Having been here a long time and having worked a lot with it, you just got to show up consistently.
All right. Well, speaking of the dialogue, we have a brand new podcast.
Yes. Practical AI. I love it.
It's a weekly show.
It's my kind of show.
All about making AI accessible, practical.
And fun.
And fun for everyone.
We'd love to have you back on that show.
Keep the dialogue going around these topics.
Love to be there.
And I just did the precursor for the Practical AI blog today.
Awesome.
So kick that one up, and we'll do two more in the next two weeks.
Beautiful.
Thanks, Steven.
All right.
Thanks, guys.
Appreciate it.
All right.
Thank you for tuning in to today's episode.
If you enjoyed it, you know what to do.
Share with a friend.
Go on Twitter, tweet a link.
Go on Overcast and favorite it.
If you use iTunes, go give us a rating. It helps us grow this show and share it with more developers.
And of course, thank you to our
sponsors, Airbrake,
Linode, and GoCD.
Also, thanks to Fastly, our bandwidth partner.
Head to Fastly.com to learn more.
And we move fast and fix things here at
ChangeLog because of Rollbar. Check them out
at Rollbar.com slash ChangeLog.
And we're hosted on Linode cloud servers at linode.com slash changelog.
Check them out.
Support this show.
This show is hosted by myself, Adam Stachowiak, and Jerod Santo.
Editing is by Tim Smith.
The music is by Breakmaster Cylinder.
And you can find more shows just like this
at changelog.com.
When you go there, pop in your email address,
subscribe to our weekly email.
We'll keep you up to date with the best news
and podcasts for developers. Thanks for tuning in. We'll see you next week.