PurePerformance - Red Hat and OpenShift Innovation Update with Chris Morgan
Episode Date: January 29, 2019
We catch up with Chris Morgan of Red Hat here at Dynatrace Perform 2019, and he shares several cool things about the latest versions of OpenShift, automated operations, and integrations with Dynatrace.
Transcript
Coming to you from Dynatrace Perform in Las Vegas, it's Pure Performance!
Hello and welcome back to Pure Performance and PerfBytes at Dynatrace Perform 2019.
Of course we have Mark Tomlinson and James Pulley with us.
And we have a very special.
And Brian Wilson.
And one of our awesome sponsors.
Our celebrity guest star.
Celebrity guest.
From, you might have seen him on, what was the old show with the celebrities doing the
sporting event and the relay races from the 70s?
Battle of the Stars.
Battle of the Network Stars.
Battle of the Network Stars.
You might have seen them on Battle of the Network Stars.
Chris Morgan from Red Hat.
How are you doing, Chris?
Welcome back.
It's been 365 days since last you were on our podcast.
Exactly.
And in IT terms, it might as well be 10 years.
Yeah.
No, thanks for having me back.
How are things going?
They're going really well.
You know, it's been a lot of fun, you know, especially Red Hat with what we do with open source and all the different things that are going on.
You know, and we've seen a big adoption since we spoke last, especially around an understanding in general of Kubernetes and containers and what they can do for people.
And so, yeah, it's just been a lot of fun.
Very cool. And anything special this year that has changed for Red Hat?
I don't know, like a large company acquisition or anything?
We can't really talk about it, but that's big news.
I have no idea to what you are referring.
It's business as usual.
Yeah, you know, honestly, to your question, nothing's really changed for us.
I mean, that's not just speak.
It's still business as usual.
Right.
You know, but, you know, we'll see where things go and how that evolves.
But, you know, I just think maybe something that calls that out is some of the success we're seeing, especially, you know, in these emerging areas, particularly with OpenShift, which is our Kubernetes distribution,
and the things we do.
And we've seen a lot of great adoption.
We actually have a lot of customers using it in production,
not just kind of experimenting and doing some other things.
And I mean, that's part of what brings us here as well,
because they are running it in production
and they want to know how well it's doing.
You know, OpenShift's kind of really landed.
It's taken over.
It kind of won, right?
Like, there's a lot of the Kubernetes orchestration things going on
and it was always the question of, well, we need to know about all these.
And suddenly it's like, well, yeah, we just kind of really need to focus on OpenShift now
because everything else kind of fell to the wayside for the most part.
Well, that's nice of you to say, but I will never say we've won.
Yeah.
Especially not in 2019.
There's never a win, right?
Right.
You know, you always have to worry about
certain large book resellers.
Yeah.
Oh, yeah.
Kind of jumping in and taking over some things.
But no, we are excited, you know,
and the customers are excited.
I think that's been the most fun part.
But they're asking us now for things.
You know, to me, the sign of a good platform, and I think even the academics agree with this,
is one where people start using it in ways that you didn't anticipate them using it.
Right.
And what's really been fun for us on the team...
So that's really a maturity issue.
Yes, a lot of times, right?
You know, I think Apple's famous for it, right?
Well, they'll bring up some blind guy that wrote an app to help him see, you know, things like that, things they never imagined.
And so what we're kind of looking at now is that next phase of, you know, what are the specific
workloads that customers are asking us for and how can we now optimize those for this,
for the environment, right? I mean, you can already bring any app, but hey, you know,
there's a lot of NoSQL out there. There's a lot of SQL.
There's a lot of Elastic.
And how do we optimize the platform
for those types of workloads?
Cool, cool.
What is the latest version of OpenShift?
And again, we talked last year that people could get,
there were trials and things.
There are all sorts of ways for people to get hooked up.
Well, it's actually even a really exciting thing
for you to ask.
So we're currently in the beta phase of something we have called OpenShift 4,
which is going to be the first version.
We made an acquisition over the last year.
In fact, what I announced at Perform, oddly enough, was CoreOS.
Oh, right, yeah.
The CoreOS acquisition.
And so OpenShift 4 is the first one that has full integration of all the CoreOS tooling.
Cool.
Which was, what was really cool about that for us is we'd always, you know,
put a lot of heavy concentration on the developer persona,
but what CoreOS did was really great concentration on the operations persona, right,
which is a lot of folks at this event.
Yeah.
And so kind of combining those best of breeds together, people are really excited.
And so you can go to try.openshift.com now and actually see that OpenShift 4 experience
and deploy it yourself with kind of a more opinionated installer, if you will, where
you don't have to think as much and stuff just kind of works.
So yeah, that's kind of where things are heading with that: to finally have the operational tools to take advantage of things like over-the-air updates.
And that allows, as everyone knows, of course, it allows you to install the operator that's going to go ahead and take the OneAgent and
push it everywhere into the OpenShift ecosystem so that you light the whole thing up. But
on the idea of these operators, what else have you been seeing, how have people been taking
advantage of this operator idea?
So it's awesome because, so operators,
and you'll have to indulge me for a moment
in a bit of a history lesson, I think.
You know, the Linux kernel, it's not very smart by default.
Right.
It doesn't know anything until the advent of kernel modules.
All of a sudden now I could write a kernel module
and it told the Linux kernel how to do things, right?
It gave insight into what was actually running.
You plug it into the kernel, it could do stuff.
Kubernetes is very similar, right?
It was designed to just say,
hey, I need three pods running.
No matter what, run three pods.
It has no clue as to what those pods actually do.
When you look at custom resource definitions
in the Operator Framework, well, it's doing the same thing
that kernel modules did to Linux kernels, providing the how. So, you know, you mentioned
the life cycle, if you will, of the OneAgent. Once it's deployed and it's
operator-enabled, the life cycle's just handled now. And so if you put yourself
into a classic kind of infrastructure admin, this becomes really interesting.
Because now there's going to be all these workloads that can run, and it's up
to the ISV vendor to put their intelligence into it. Like a common example would be databases.
Sure. Right. If a database goes down, it's not enough to just bring it up. A lot of times it's
clustered. So you have to rebalance the data. So an admin has to know how to do that. With operators,
and we have database vendors doing this, it just automatically does that because it knows how to
react to an event. But more importantly, every single database within that cluster gets the same operation,
so it creates a consistent environment.
So that's where we've seen the biggest uptake, and we're actually doing a pretty significant
investment on our side to make sure the ecosystem is enabled with these.
Yeah.
That's great.
And also the sophistication of when you use just the word how, how to do something, the
context of that would be how to remediate a problem.
That's right.
So you're not just like how to add a disk,
how to do basic stuff in the kernel or in the cluster,
but how to add an index where I know there's a missing index.
Hey, we started scanning, add an index.
Why do I have to type that?
Well, because without that, right, I mean, the platforms,
that's been the beauty of them is they were designed,
both Linux and Kubernetes, to be dumb.
I mean, I don't know any other way to put it,
but that allows you then to educate it in ways that are specific
to the services that use this kind of plug-in.
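To make the operator idea a little more concrete: an operator is essentially a control loop that watches a custom resource and keeps nudging observed state toward the declared spec, applying domain knowledge, the "how", along the way. The sketch below is a toy, plain-Python illustration of that level-triggered reconcile pattern using a made-up clustered-database example; it is not the Operator SDK or any vendor's actual operator, and names like DatabaseSpec and rebalance are purely illustrative.

```python
# Toy sketch of the level-triggered reconcile loop an operator runs.
# Everything here is illustrative: the "spec" plays the role of a custom
# resource, and rebalance() stands in for the domain knowledge ("the how")
# a real database operator would encode.

import time
from dataclasses import dataclass, field

@dataclass
class DatabaseSpec:            # desired state, as a custom resource would declare it
    replicas: int = 3
    indexes: set = field(default_factory=lambda: {"users_by_email"})

@dataclass
class DatabaseStatus:          # observed state in the cluster
    replicas: int = 0
    indexes: set = field(default_factory=set)

def rebalance(status: DatabaseStatus) -> None:
    print("rebalancing data across", status.replicas, "replicas")

def reconcile(spec: DatabaseSpec, status: DatabaseStatus) -> None:
    """One pass: converge observed state toward the declared spec."""
    if status.replicas < spec.replicas:
        status.replicas += 1                 # "bring the pod back up"
        rebalance(status)                    # ...and do the clustered-DB work too
    for idx in spec.indexes - status.indexes:
        print("adding missing index", idx)   # remediation, not just a restart
        status.indexes.add(idx)

if __name__ == "__main__":
    spec, status = DatabaseSpec(), DatabaseStatus()
    for _ in range(5):                       # a real operator loops on watch events
        reconcile(spec, status)
        time.sleep(0.1)
    print("converged:", status)
```

A real operator runs this loop against the Kubernetes API in response to watch events on its custom resource, but the shape is the same: you declare the what, and the controller keeps re-applying the how.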
Yeah, very, very cool.
It almost sounds as if that's going to be, that could potentially be the ticket to getting people to use it in ways that it wasn't designed for, because it certainly is.
Opens the door.
Well, you know, we like to joke sometimes that, you know, the big kind of buzzword,
if you will, from 2016, maybe through 2018 was people worried about getting Ubered.
Well, now you're kind of worried about getting Amazoned, right?
But if you're an ISV now and you can put your intelligence in your service, well, you can
get that Amazon experience anywhere.
Yeah.
That's a very good point.
And that's kind of what we were going for.
That's great.
So Chris, one of the other announcements that we heard this morning was about the AIOps
piece.
Sure.
And so I'm just curious, now that maybe you're playing with IBM Watson, are Red Hat
and IBM going to do some AI in the OpenShift world?
Well, as a partner, we already kind of do some pretty cool stuff with the IBM Cloud Private
and its integrations with Watson today.
But there's so many open source projects when you look at AI.
Our kind of foray into that space is going to really be around all the things you need to enable AI.
When you think about data science and things like Knative and OpenWhisk,
I think that's what you're going to see evolve with us.
As far as us having an AI offering, I don't know.
And you mentioned KubeVirt?
Yeah, KubeVirt.
So when you think of the four footprints,
public cloud, private cloud, virtualization, and bare metal,
KubeVirt's a really key piece to allow you to run an entire VM
within a container itself.
Right.
And so that's kind of, if you think about, really, migration, you can now all of a sudden
decide, hey, maybe there are certain things I don't need to containerize, run it as an entire
VM, because maybe I don't know what it does.
Or maybe it's just some really old code that you still need.
Like an old COM+ app.
Yes, an old COM+ app.
With some DTC in the background.
Woohoo!
Now we're talking.
2003 is alive!
Yes.
But that's kind of the premise behind it, right?
We want to no longer make folks have to rip and replace to adopt new technologies like
we have had to do with every other shift.
Let's literally help you kind of make it a revolution of, or an evolution rather, of where you are today and where you want to get to.
Yeah, definitely.
So I know standards are kind of fluid in this whole deployment of VM-type stuff across different implementations.
Are you hopeful that by containerizing the deployment of VMs that you're able to set some standards there where it's been really fluid?
Well, you know, what's interesting is we have a lot of great services partners and all as well
that I'd like for them to start to develop that, right? That's a really big practice for them,
you know, at the end of the day. So I think for me, it's more a matter of, you know,
there's already standards around containers, right? And so a VM in a container, it's just now another workload.
It's just now it's a very specific one that's going to allow you to kind of continue your business in ways that are familiar
and hopefully save some money with regards to both CapEx and, more importantly, OpEx.
Because now I don't have to manage, hey, here's managing my VM environment,
and now here's managing my container environment.
I manage them with one fabric.
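As a rough illustration of "a VM is just another workload": with KubeVirt, a VM is declared as a custom resource and handled by the same API machinery and tooling as any other Kubernetes object. The snippet below builds a deliberately minimal manifest as a plain dictionary; the apiVersion and kind follow KubeVirt's public API, but everything else is trimmed down and should be checked against the current KubeVirt documentation rather than read as a complete spec.

```python
# Minimal, illustrative KubeVirt-style manifest: a VM expressed as just
# another Kubernetes resource. Field details beyond apiVersion/kind are
# intentionally sparse; consult the KubeVirt docs for the full schema.

import json

def legacy_vm_manifest(name: str, namespace: str = "default") -> dict:
    return {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "running": True,      # start the VM; newer releases prefer runStrategy
            "template": {
                # domain (CPU/memory), disks, and volumes would go here,
                # e.g. a containerDisk volume wrapping the legacy image.
                "spec": {},
            },
        },
    }

if __name__ == "__main__":
    # The same tooling that applies Deployments can apply this manifest,
    # which is what lets VMs and containers be managed with one fabric.
    print(json.dumps(legacy_vm_manifest("old-complus-app"), indent=2))
```

The point is less the individual fields than the lifecycle: the VM is declared, applied, and reconciled the same way as any container workload.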
Yeah, in one actual piece. And then apply some AI within that piece as well.
Yeah. I mean, I think there's opportunity there when you saw the AI announcement to
maybe analyze an existing environment and determine, hey, here are some things maybe
we should consider moving first. I think that's something that we should look at.
Well, that's awesome. And again, you have a talk coming tomorrow?
No, it's this afternoon at 1 o'clock with Peter Hack.
We're going to talk a little bit about some cool things.
You mentioned the Operator Framework earlier.
We're going to talk a little more about that and some things that we've been doing together.
So it should be fun.
Cool.
Awesome.
Awesome.
All right.
Well, good to see you.
Yeah, good to see you guys.
I'm sure I'll see you next year.
We'll talk about how they don't need any of us
because the machines
are doing it all themselves.
We'll all be energy cells
for the machines.
That's right.
I'm going to change my name
to SkyNet Sarah Connor.
Yeah.
I don't want to be Sarah.
I'll be Neo.
Who's Sarah?
All right.
Cool.
Thanks a lot, guys.
Take care.
Thank you very much.
Bye.