PurePerformance - How to scale k8s operations from a single to thousands of clusters

Episode Date: November 2, 2020

We are sitting down with Sebastian Scheele (@sscheele), CEO and co-founder of Kubermatic, to discuss the challenges organizations face as they move their workloads to k8s and realize that managing, scaling and operating k8s does not get easier the more k8s clusters you allow your application teams to spin up or down. We learn more about the Kubermatic Kubernetes Platform, the open source project which centrally manages the global automation of thousands of Kubernetes clusters across multi-cloud, on-prem and edge with unparalleled density and resilience.

Thanks Sebastian for answering all the questions we threw at you – questions we have received from many organizations that are moving to k8s but get surprised by the complexity of properly operating and managing k8s.

Sebastian Scheele on Twitter: https://twitter.com/sscheele
Kubermatic Kubernetes Platform: https://github.com/kubermatic/kubermatic

Transcript
Starting point is 00:00:00 It's time for Pure Performance! Get your stopwatches ready, it's time for Pure Performance with Andy Grabner and Brian Wilson. Hello everybody and welcome to another episode of Pure Performance. My name is Brian Wilson and as always, my co-host, the one and only Andreas Grabner. Andy, I was going to call you Andreas Griemer, but I figured that was last week's recording. How are you doing? Not too bad, actually. It's the first time in months that I'm not working from either my kitchen or from the Dynatrace office in Linz, where I've been to maybe four times now in September. But today I'm actually in Vienna. It's kind of like my first trip in COVID
Starting point is 00:01:06 times, so very interesting, in a hotel room like the old days, recording a podcast. Wow. Oh wow, that's... wow. And when you say Vienna, besides thinking of, you know, some of the foods and stuff and all, I think of Vienna Calling, which was, I forget who did that. Did Falco do Vienna Calling? That was his too, of course. Yeah, yeah, yeah. Who else, it was either Falco or Mozart, one of the two, right? And is there really much of a difference between Falco and Mozart, really, when you think about their genius? Anyhow, we got colder weather coming in. Let's talk about the weather now. It's nice, it's cooling off here, it's just so awesome. I love falls and springs in Denver. We get nice, cool evenings, warm days.
Starting point is 00:01:54 So it's just really, the mood has changed from the oppressive heat to just some nice welcome weather. So I'm not traveling. I'm not in fancy Vienna. I'm not in a hotel room, but I am at least enjoying some of the outside a little bit this week and last. So that's where I'm at, Andy. We're not here to, people don't really care how I feel, right? They might care about you, but I think they care more about the topics, right? Exactly. And maybe as we've been talking about the weather,
Starting point is 00:02:17 I wonder, asking Sebastian, how the weather in Hamburg is these days. Yeah, the weather is great. So typically, I mean, stereotype, Hamburg is rainy. But the last days and today, sun is shining. So really enjoying the weather. And unfortunately, not in a hotel, or luckily not in a hotel. Working from home currently.
Starting point is 00:02:44 And yeah, the weather is really nice, especially for Hamburg at that time in the year. You know, I'd be really remiss, I'd be really shirking my duties if I didn't point out that there is a kind of a cookie called a Vienna finger, the dessert. And then Hamburg, obviously you have hamburgers. There's nothing really Denver,
Starting point is 00:03:03 so I'm the only non-food. And it kind of shows because the food here is pretty bad. But I just had to point that out. And Hamburg! I think that's awesome that you're there because being a big Beatles fan, that's where they got their start. Yeah, exactly. You know? So
Starting point is 00:03:17 that's some place I hope to visit someday. And yeah, Andy, take over, please. Yeah. Hey, Sebastian, so come in, one more punt on Hamburg. Hamburg, obviously, for those people that have never been, take over, please. Yeah. Hey, Sebastian. So come in like one more punt on Hamburg. Hamburg, obviously, for those people that have never been, have a big harbor, a lot of ships, a lot of containers being moved around on a day-to-day basis. You're so clever, Andy. You're so clever. into Sebastian's previous podcast that he actually did with the Kubernetes podcast with Google, where you were invited
Starting point is 00:03:47 to actually talk about the company that you run. You are CEO of Kubernetes. And I thought it was interesting because these two guys, Craig and Adam, they did a little, you told them a little story
Starting point is 00:04:00 with being a port city. And then I think you had an event a webinar and people and you announced containers and then was it a container company that actually ships physical containers they registered and you were wondering if they know that this is a software webinar and not a container webinar is that right yeah we like five years ago when we started the company, we started a conference called Container Days. And so we will start promoting it. And one of the tickets we sold was sold to a container shipping company. And we were like, oh, shit, did they really understand the purpose of the conference?
Starting point is 00:04:41 And it's software container and not real containers. And so we called them and explained, hey, look, potentially the conference is something different. What do you expect? And they said, no, no. We are from the IT department of the company and we are really looking into IT containers. And we said, oh, good.
Starting point is 00:05:02 It was quite interesting. And I mean, another funny story about Hamburg. So, yeah, I'm CEO of Kubernetes, but formerly we were called Luzi. And why we chose the name was really like, I mean, we all know Docker. We know Kubernetes, the health of a ship. And so we were thinking, okay, we are from Hamburg. We know containers, real containers, at least quite good. So let's stick to the pattern. And Lutze is a lower German word for navigation pilot of a ship, which comes like at the end when you go into the haywire. And we said, okay, that's pattern matching well. And so we said, okay, let's go with this name.
Starting point is 00:05:46 That's pretty cool. It's great that we all find our Nordic names for, not naughty, Nordic names, for things that are related to Kubernetes. Sebastian, not sure if you know, but we've also launched an open source project in the Kubernetes space, in the CNCF space, and we call it Keptn,
Starting point is 00:06:04 because we kind of want to steer the containers safely into the Kubernetes space, in the CNCF space, and we call it Captain because we kind of want to steer the containers safely into the next harbor, which we believe is production. So that's why, in case you want to read up on what we are doing, we also used the German phonetic version of a Captain, so K-E-P-T-N.
Starting point is 00:06:21 But back to you, Sebastian, the way we actually got introduced, I was actually invited to speak at one of your meetups that you run, the Kubernetes online meetup. And I think you have meetups around at least Germany and I think around Europe and maybe even around the world. And so we actually got to talk. And then I thought, wow, this is really interesting what these guys are doing and especially the experience that you have in helping organizations automate Kubernetes and operate Kubernetes. But instead of maybe coming from my words, maybe you can give a quick introduction on
Starting point is 00:06:56 which problems you have actually seen out there as organizations move to Kubernetes and what problems you are solving with your company? Yeah. So when we started, I started really early with Docker 0.5, something around. And so we really started working with Docker and Kubernetes, oh, it's five years ago, something around that, even more. So really early on on the journey and so we used it first for for some uh own experiments and um we said hey that's really interesting we want to do
Starting point is 00:07:33 something with it and then we started the company and uh what we see so quite early is like kubernetes is not easy um it's uh it makes a lot of things easier, but in general, the system, it's a distributed system. It's a quite complex system and how everything works together is quite challenging. And so what we saw was like, hey, how can we help customers to really adapt this new modern ideas about cloud native
Starting point is 00:08:03 and how to use cloud native to build new modern products, but also how they could potentially move existing workloads to Kubernetes and using us with our experts there to enable them on the one side with trainings and consultings, but on the other side also also, we built a product, Kubernetes, which helps customers to really managing Kubernetes cluster across different cloud providers on-prem. Because what we saw from the beginning, one of the biggest challenges was, how can I spin up Kubernetes cluster?
Starting point is 00:08:37 At that time, the only cloud provider managed service which was available was GKE. And it was easy. You press a button. But we wanted to have a similar experience on mostly all providers. And so we came up with the idea, can we build something like this? And we came up with the idea, can we be leveraging the operator pattern to manage this? And so our goal is really, how can we help and enable customers to use cloud native and moving into
Starting point is 00:09:05 the cloud native world. Now you bring up, I need to bring up another conversation I just had earlier today because you said, you know, from the outside, Kubernetes looks like this cool, very easy to use and operate thing, platform. But then if you take a closer look at it, and especially you have done this for the last couple of years, you'll see how complex it is. Now, Brian and I, we both have a colleague who has been heavily invested in PCF and Cloud Foundry over the years. And I just have to quote him because he just mentioned this to me today. He said, I don't understand why Kubernetes is so popular
Starting point is 00:09:45 because it feels like we're making a step five years back as compared to the experience that some people had with PCF as a very opinionated platform. And I obviously think he is not yet as deep in Kubernetes as he is with PCF. So I'm pretty sure Kubernetes these days is much easier to handle, especially with services and tools that you offer. But do you kind of agree with it that it took a while for Kubernetes
Starting point is 00:10:19 to really become more developer-friendly, end-user-friendly, operations-friendly? Yeah, absolutely. I mean, first, at the beginning, Kubernetes was quite limited in functionality. You only had, at that time, you had pods and replica set. No, not replica, replica controller. There were even not stateful applications. Now, but on the other side, now as more and more functionality
Starting point is 00:10:51 is coming inside of Kubernetes itself. But also if you're looking into the CNCF landscape, it gets more and more. Yeah, there are so many projects out to figure out what are the right components I want to use. And I think what your colleague is mentioning, Cloud Foundry is really a quite opinionated way how to do this. And there's only one way.
Starting point is 00:11:18 With Kubernetes, there are a lot of ways. Of course, we have the common ground to say, okay, we have the Kubernetes API. I know that is standardized, but on top of this, there are a lot of ways. Of course, we have the common ground to say, okay, we have the Kubernetes API, and that is standardised, but on top of this, there are a lot of capabilities. And I think it will be also interesting to see where this goes in the next years, what workloads will end on Kubernetes. But I think now on Kubernetes, you already have much more capabilities and possibilities, but typically this brings also a lot of more complexity in this game. Yeah, I mean, especially as I think you pointed it right out there, it is there's so many different ways of doing things in Kubernetes
Starting point is 00:11:58 and achieving one thing where in other platforms there might have just been one opinion in a way which made it very easy to do and felt very good. But then it obviously limits you to that opinion. What we see is more and more organizations obviously want to move on Kubernetes because I think it's clear Kubernetes is one or it kind of is the popular platform for container-based systems. Now, do you see working with your organizations, with your customers, are people still or primarily moving existing applications on Kubernetes
Starting point is 00:12:39 and just trying to find a different platform to run this on? Or do we actually see more and more organizations actually really building applications, the cloud-native ways to really leverage the power of these platforms? What we are seeing is mostly, especially when people are starting, especially in the early days,
Starting point is 00:13:03 it was more like, okay, we want to build a new modern architecture and not directly migrating existing stuff. Especially, hey, we want to build microservice architecture. Let's build this completely containerized and run this on Kubernetes. But then the next step was like, okay, let's also move
Starting point is 00:13:19 existing workloads on top of Kubernetes. And I think still most of the workload was currently going on Kubernetes is still like own built custom software, which the customer wrote on their own or with some agency wrote for them. I think one of the big moves
Starting point is 00:13:40 was currently still not happening, but I think that will come in the future, will like that vendor software will run more and more on Kubernetes so that different applications are built from the scratch for running on Kubernetes and you have complete certified Kubernetes on this certified solution to run it on Kubernetes
Starting point is 00:14:00 instead of running it on VMs. And I think that is one of the next steps, what I think will happen, especially that customers are also pushing the vendors to provide them a way how they can run this on Kubernetes. But yeah, then I think what a lot of customers try to do is really with Kubernetes standardizing their operation for new applications,
Starting point is 00:14:25 but also that it helps to move legacy applications so that they in the future can potentially have one platform which can run all the workloads they have. And I know this is obviously a tough question to answer, but do you think that this is a future state that we will achieve in the next two years, five years, ten years? How long do you think it will take until we actually really see also legacy?
Starting point is 00:14:54 So let's say, when will we see a Kubernetes platform that runs all the workloads? I think it's... I mean, when all the workloads i think it's i mean when all the workloads uh will move i think that will take some time i guess it will definitely take five to ten years uh that a lot of workloads is migrating but what we see now and we did already a survey to our containerer Days participants, but also to customers. Because we wanted to figure out how far are they in their cloud-native journey. And we see now that first customers have adapted Kubernetes, have done first things into production, and now really start migrating more and more workloads to Kubernetes. And I think in the next two to three years, we will see
Starting point is 00:15:45 a lot more workloads moving to cloud-native and Kubernetes. And also then, I guess, also more legacy or existing application will move to Kubernetes. Which doesn't every time mean that it's like the same application will
Starting point is 00:16:01 move to Kubernetes. It could also mean that the application will be rewritten or new architectured to run in the future in a cloud-native way. Now, as you said, if more and more people are moving over to Kubernetes, and obviously we see this with the people we work with, with the organizations that are our customers, also within our organization. We are building more and more stuff on Kubernetes.
Starting point is 00:16:31 I think questions come up is, what are the boundaries of an individual Kubernetes cluster? Do you need a cluster for a certain environment, for a certain set of software, for, I don't know, for... I mean, the question is really what belongs into one cluster versus whether you need
Starting point is 00:16:56 different clusters, what are the boundaries, and also who is responsible for these clusters? Because eventually, if you have a large organization that is building software and you have, let's say, a thousand applications and a thousand application development teams,
Starting point is 00:17:16 do they all own and run and manage the wrong Kubernetes clusters in their dev, in their pre-prod and in their prod environments? Or is there going to be a traditional operations team or platform team that is then
Starting point is 00:17:32 providing these things as a service, but they manage everything? Do you have some insights in what large organizations are doing and especially how they're organizing and how they're organizing their clusters and who owns them and who is responsible for it? Yeah, so I mean, that's exactly
Starting point is 00:17:49 where everything started with our company. So we strongly believe in that you need to manage a lot of Kubernetes cluster across multiple providers, in the cloud, on-prem. And the one thing Kubernetes is not really good is really multi-tenancy. So multi-tenancy in a way
Starting point is 00:18:11 that it's really hard multi-tenancy. Of course, you have like namespace and airbag capability there, and you can do some stuff, but it has quite fast some limitations. And I think every organization needs to think about how they want to manage this. Sometimes it could be cut based on applications.
Starting point is 00:18:31 Sometimes it's more like based on their organization. So different teams have their own cluster. But I think what you really need is also like container as a service operation team or a team which can really help inside of the organization to operate all this Kubernetes cluster and to manage all this Kubernetes cluster and also ensure that the operation of these clusters
Starting point is 00:18:57 or that how these clusters are rolled out are secure and in a way that is compliant with the organization and that not every developer needs to figure out how do I provide specific policies. I mean, similar what we did in the past was like databases. Not every developer is running their own databases. So we either using managed services from the cloud provider or we have a database team inside of the organization who are taking care about this.
Starting point is 00:19:27 And I think similar can be applied to a general Kubernetes platform architecture so that you want to have a team who's taking care in general about the Kubernetes cluster, but then that's a developer API driven and having the capability to easily spin up and manage the capability to easily spin up and manage the kubernetes cluster yeah i think i guess i i concur with you on this i guess it makes sense right because in the end
Starting point is 00:19:54 it's just another service that you want to consume because it should not be the core responsibility or the core um i'm not sure what the right english term now is the core, I'm not sure what the right English term now is, the core competency of developers to also figure all these things out. There has to be either, if you go with a cloud provider that provides the whole thing as a service, then they're taking care of it. Or if you do it within your organization, you have a part of the organization that provides that container platform as a service and making sure that the things like security policies governance that this is all taken care of um yeah exactly i mean i think you said exactly the word um you want to provide this as a service and as you
Starting point is 00:20:36 consume the service from an external provider which could be a cloud provider or like an outsourcing provider or internally you want to provide this as a service to your own organization now i know we i guess i have to come back to it we see a lot of people that that you know are moving to kubernetes and a lot of people have already played around with it somehow just talk with with folks this week uh they have in the magnitude of you know a thousand plus kubernetes clusters running in their environment and managing them um i don't know what additional let's say management layer they use on top of it but from a scale perspective and you must see a lot of organizations running Kubernetes,
Starting point is 00:21:33 do you see what's kind of the spread from organizations that are running 10, 100 clusters, 1,000 clusters, 10,000 clusters? Is there kind of like the size of the enterprise typically tells you how many Kubernetes clusters they will have? Or is it more like how their software architecture looks like, how their organization structure looks like? Do you have any thoughts and ideas on how enterprises typically end up with
Starting point is 00:21:54 in terms of number of clusters and how they manage them? Yeah, so I think it really depends how far are they on their cloud-n native journey. So if they are more in the beginning, they run potentially a few clusters, two, three, five, up to 10. If they are already like, especially if it's a bigger organization, and they really have
Starting point is 00:22:20 this service idea and really having the capability to provide this to their organization. I think it scales quite fast up so that around 20, 50, hundreds of clusters. What we see, especially when you look into like the edge, that you easily come up with use cases and rollouts where they're talking about a thousand or ten thousand of clusters which needs to be managed. Could you quickly define for me edge? So what are these use cases? What type of apps are running and where? I think for the edge there are different use cases. One is like, for example, you have a factory and or you have a few big data centers, but you also want to run stuff inside of your factory with Kubernetes. So you need to deploy Kubernetes in the factory. Another edge use case would be also if you have supermarkets and you want to run in each of the supermarkets,
Starting point is 00:23:26 Kubernetes cluster. So you need to run a lot of clusters there. And I think also up to running individual devices on, for example, trucks or inside of specific machines, even like with like Raspberry Pi, this kind of stuff so that you're leveraging containers and the API of Kubernetes to manage your software and lifecycle management of your application.
Starting point is 00:23:56 Yeah, I remember also one use case from one of our customers. They run, funnily enough, Kubernetes clusters on their cruise ships. So they have data like floating data centers, basically, because they run all of their obviously software that manages everything on that boat and platform of choice is Kubernetes. That's also kind of fun. I mean, obviously these days it's a little bit challenging for them, hopefully they will bounce back after this pandemic is over.
Starting point is 00:24:27 But I thought that's an interesting use case. We need to get a shipping company using it. Putting a data center into a container and then putting the container onto a ship. Yeah, exactly. Maybe a literal container. Containers and then put the container onto a ship. Yeah, exactly. It would be a literal container. Containers and literal containers. Yeah. Then you'll have to get captain.
Starting point is 00:24:52 Exactly, because you need somebody that steers it into the right port, making sure that everything is... That'll be a new requirement to be able to captain a ship. You'll have to know captain. And then you need a load for the last uh miles to get you in the in the hay bar yeah yeah um so sebastian i hope it's okay that i fire off these questions because i really this is just fascinating having somebody
Starting point is 00:25:17 like you on the other side and i can just do this i hope you're still you're still good with me yeah absolutely uh um yeah um so the the other thing um uh now i need to pause i just lost my train of thought i think that's what it called what's what's called brian this will be the first time when you need to cut me out no uh so don't don't do it don't cut me out this is just it don't do it. Don't cut me out. This is just, it should be on record. It took me a while. I lost my train of thought, but now I'm back. So if what we've been, I remember I did a lot of talks over the years. Both Brian and I are performance engineers,
Starting point is 00:25:57 and we saw a lot of things that people just did wrong. It's like, why when you build an application, do you make 1000 database statements if you can do it with a single call? Why are you allocating memory in that way, which shouldn't be done and then lead into large garbage collection? Now, I assume there's something like this as well with the organizations that you work with that maybe start with Kubernetes in a certain way. And then they see it's not scaling. It doesn't work for them.
Starting point is 00:26:27 It's slow. They're frustrated. They need something new to manage. And then probably you come in and then you see maybe what organizations have done wrong from the beginning. So my question is, if something like this is true for the way you see the IT world in the Kubernetes space, what are the, I don't know, two or three things
Starting point is 00:26:47 that you see people are doing wrong from the beginning so we can give our listeners a little bit of advice on when they're going down the Kubernetes train and they want to eventually scale this across the enterprise.
Starting point is 00:26:59 What do they need to figure out from the beginning? What do they need to do right? Yeah, I think one of the interesting facts is now everyone is saying okay i do cloud native i do kubernetes and i sometimes have the feeling they expect now all the it problems are solved with this and even not questions themselves if potentially kubernetes is the right choice for it, or does it still make sense to run the application inside of a VM,
Starting point is 00:27:29 potentially because the application is not scalable or it's not there. And so really think first, okay, what are my problems? And then think about what are the solutions I'm using and not starting with like, okay, this is my solution and where are my problems and then think about what are the solutions I'm using and not starting with like, okay, this is my solution and where are my problems? So really having the idea, what really can Kubernetes gain me on benefits so that you can really leveraging this. And then I think the next thing is also if I use it, Kubernetes really think about what is my architecture, is my application really already built for this kind of architecture or potentially what I need to change on my application itself,
Starting point is 00:28:14 that it's really scalable and can Kubernetes can provide me the scalability expectations. What I have initially thought about. That's what we've seen quite a lot, that people throwing their existing application on top of Kubernetes and that even the application cannot do, for example, rolling upgrades or cannot run HA. And so when they do an upgrade,
Starting point is 00:28:44 it goes down and they have an outage. And they say, yeah, but I'm running it on Kubernetes. Yes, but it's your application. It's not capable to do this. And it's not Kubernetes. You first need to re-architecture and think what needs to be changed on the application. And then you can leverage this scalability from Kubernetes that it can do rolling up or scale up when you get more workloads. Yeah, that's great advice. So kind of don't prematurely pick Kubernetes just because you think it's also a problem. It's like I think the analogy that we often use is right.
Starting point is 00:29:24 If you get a hammer, everything becomes a problem. It's like, I think the analogy that we often use is if you get a hammer, everything becomes a nail, but that might not be the problem that you need to
Starting point is 00:29:29 solve. And then the other thing is with your apps, I remember the same example. One of the
Starting point is 00:29:37 people that I talked with, they lifted and shifted one of their Java Enterprise apps to Kubernetes. They basically packed a four-gigabyte memory-heavy WebSphere application server into a container and deployed it on Kubernetes.
Starting point is 00:29:54 And then we said, that's great. Now let's scale it up. Well, they said, well, let's roll over to the next version. And I said, well, this takes about an hour or half an hour because of the the way the jvm was sized and then we said well then you will never be able to reap any of the benefits of kubernetes it just becomes an additional layer of complexity that you don't want um exactly i mean that's exactly a good use case or probably we see like putting your existing application and the application needs like 5, 10, 20 minutes to really warm up and to get ready. And so to really then scale it, if you have peak traffic, for example,
Starting point is 00:30:36 the traffic is already gone until your application is ready and also like rolling updates, it's quite slowly because if you have already three replicas and takes 20 minutes, you need one hour to roll it really out to all the application. And I think that's really, you first need to think about, is it really there?
Starting point is 00:30:55 Can I use this? Or have the understanding to say, okay, I know I have some drawbacks in my architecture. I'm doing already the move to Kubernetes, but I cannot expect more out of this. Now I'm running it on Kubernetes, but in the same time, I'm now starting re-architecturing it.
Starting point is 00:31:15 Yeah. So it's hopefully we will have, hopefully the people are listening in and that are thinking about moving to Kubernetes really think this through and then do that to deal with the chance and do the research on the apps that they're moving. Just because you are moving on Kubernetes doesn't mean, as you said, not all of your problems go away. The other question that I have, so now obviously you founded your company, and I think I heard this in the podcast.
Starting point is 00:31:50 You really tried to listen to people that you worked with, organizations that you worked with, and where they ran into challenges, and then really around that build a product that actually solves problems. Can you explain a little bit for people that have never heard about Kubernetes and kind of the type of problems you solve with your solution? What are the biggest things you're actually solving? Because people may have never heard about that a tool like yours exists and there might also be others in the market.
Starting point is 00:32:22 So we want to be open here, right? That there might be other solutions out there, but i think we want to hear from you now what are the what are the key things you're solving when it comes to to operating and managing large-scale numbers of clusters yeah so when we started um the company four and a half years ago, it was, we saw a pattern with our customers. So they wanted to have a cluster, then they want to have, of course, upgrades of the clusters. And some of them wanted to already have multiple clusters.
Starting point is 00:32:55 And some of them also wanted to have like already as a service. So want to provide the developer, in best case, a UI or API, where the developers can spawn up the clusters. And at that time, the only solution where you get this out of the box was Google with GKE. And so we were thinking, yeah, we like the service, but we want to have it anywhere. And so we came up with the idea, can't we really build a solution which can help to manage tens or even hundreds or thousands of clusters across different cloud providers, also on-prem and now more and which was really focusing also on day two operation,
Starting point is 00:33:47 not like spinning up only the cluster, how can I keep the cluster up and running, that everything is automated, that we do automatically backups of the clusters if something is happening that I can recover. And that in best case, I can run a lot of Kubernetes clusters with really less amount of manual work or people involved so that you can really run this in an automated way. Because especially if you're now thinking about you want to run thousands or 10,000 of clusters at the edge, where you not easily can go and send an engineer to this really needs to be fully automated and so that's what what we are really believing also if you think about enterprises you want to roll out some kind
Starting point is 00:34:35 of yeah corporate blueprint where you say okay a production cluster needs to have like a specific set of requirements for example potentially you don't want to expose services on HTTP. It should be encrypted in HTTPS, or you don't want to run privileged containers so that you can really force policies in the cluster. And then you know, okay, all my clusters have at least a specific set of requirements which is which is fulfilling um and that's what we doing with our uh kubernetes platform uh to provide this uh solution and on the other side uh what we also built is cube one where you can really managing individual clusters so we had this so kubernetes was built that it's it's built as an operator and it's running completely on
Starting point is 00:35:26 Kubernetes. And so we had every time this chicken egg problem, we need to underline Kubernetes cluster. And so we said, okay, we have mostly all the solutions out, or we have mostly all the components, what we need to run also single clusters. And then we started to one, two, really managing individual clusters so that you can easily spin up a cluster,
Starting point is 00:35:45 manage a cluster and also having another tool to do this where you have less complexity but in an easier way to manage this in operators. So if I understand the architecture right, that means when you are managing or launching a new cluster, you use an operator to then deploy your management piece of it that is then making sure that the right things are then really getting rolled out. As you said, maybe even launching a cluster from a template or I'm not sure what you called it earlier, but making sure it's properly configured. And then obviously this component is then, you know,
Starting point is 00:36:28 communicating back to a central Kubernetes platform that is then managing and taking care of all the different clusters where the operator runs. Do I understand this correctly? Yeah, exactly. So because what we were thinking, like when we started building the application, we were saying, okay,
Starting point is 00:36:49 what is the best way to manage Kubernetes? And we came up with the idea, can't we managing Kubernetes in a cloud native way with Kubernetes? And so at that time, like third party resource and our custom resource definition were not available,
Starting point is 00:37:04 but we had the idea. This reconciling and this controller mechanism from Kubernetes are working quite well. Can't we copy this over and use this also to manage our own stuff? And yeah, we tried around and it was working. And so we said, hey, that's a great way to manage this because with this, we can, we on our own already can scale this because it only depends on the
Starting point is 00:37:46 underlying cluster and then we can scale this up to a lot of Kubernetes clusters and what we also do is like instead of running the Kubernetes control plane, SVMs, we decided we want to run them containerized on top of a Kubernetes cluster so that in case of errors or something
Starting point is 00:38:02 is breaking, it's running in a pot and Kubernetes takes care and it's restarting the components automatically without that we need to build additional logic around this. So your control plane runs on Kubernetes and then obviously you leveraged all the benefits of Kubernetes if you're actually building and architecturing the software the cloud-native way.
Starting point is 00:38:23 That's pretty cool. Yeah, exactly. and architecturing the software the cloud native way that's pretty cool yeah exactly yeah um so that's that's fantastic i looked at um i'm looking at your website right now and there's just one thing that intrigues me a little bit i gotta say um it's uh devops automation um because the reason why it just you know brings in jumps into my eyes is because I've been I've been talking a lot about DevOps over the last couple of years I think it's a big term and a lot of people
Starting point is 00:38:53 maybe mean different things with it but never heard of it let me google this for you Mr. Wilson but I just wonder just out of curiosity, is this where you are integrating, is your story that when you're rolling out,
Starting point is 00:39:16 when you're integrating with existing CICD solutions, that through your API, you make sure that the target clusters, the target environments are there so that these tools can deploy into these clusters? Or do you also actually have some type of continuous delivery solutions as part of Kubernetes? So our story is really that we want to provide a standardized API which existing solutions can use. And I think what we are seeing also is that we want to enable developers in the future also to create more clusters,
Starting point is 00:39:56 for example, and throw them away so that you think about new patterns and instead of having a staging cluster, potentially create a cluster for each run, spinning up, deploy the components, and after the run, you destroy it. And everything is going in a quite fast way so that you every time really can start from scratch. But also on parallel, potentially you want to have a staging environment where you can test the existing upgrades but that you can do both or if you need to do a load test or something like this that you quickly can spin up a new environment deploy everything on top of this and then you do the load test and afterwards you throw everything away and
Starting point is 00:40:42 it don't takes you a days to to get everything together and sort it out uh to do so and so our goal here is really what we're providing is uh the openness and uh access of uh upstream kubernetes api and then having an api to manage the cluster so that you can provision this clusters which is mostly based on cluster API from Kubernetes. That's pretty cool. This sounds like we definitely should also talk around integrating this
Starting point is 00:41:14 with our open source project, Keptn, because Keptn is an events driven control plane where we are orchestrating a delivery process for progressive delivery and also for automating operations. So in the way we do it, we then integrate with other tools to do a particular job along
Starting point is 00:41:32 that process. So if your process is delivery, then the first step is typically provision the environment, make sure the environment is dead and deploy the app in there, then run the tests. And then we do the evaluation of the test. And I can see that an integration with your API would be great to make sure before we deploy that the environment is there, that we can deploy it. And in case we need to run tests and massive tests, let's say that we could even provision the clusters for the testing tool.
Starting point is 00:42:01 We talked about load testing earlier. So sometimes these tests obviously then need dedicated environments. So that would actually be really cool. So we definitely need to have a follow-up conversation on that. Yeah, we definitely should look into how an integration could look like.
Starting point is 00:42:19 I mean, Kubernetes is also open source. We could think about how we could combine both to really gain these benefits. Very cool. Sebastian, is there I know this was kind of me asking, firing off
Starting point is 00:42:38 a lot of questions. Hopefully, it all kind of made sense and there was not too many stupid questions in there. But is there anything that we missed? Is there anything that you want to make sure people know about Kubernetes, people know about Kubernetes, people need to know about problems that you're solving? Yeah, I think one project we didn't talk about is CubeCarrier,
Starting point is 00:43:04 which is our new open source project. So what we're seeing now is like, yeah, a lot of companies now know, okay, they need to manage Kubernetes clusters. And there's our solution, but also a lot of other solutions out there. But one of the next challenges, what we think are coming up more and more is like how can i manage the workloads across all these clusters especially also with this operator pattern which is now coming up more and more and when we think about we want to run 1 000 clusters and potentially per cluster i want to run 10 applications. So I need to manage 10,000 applications.
Starting point is 00:43:46 In most cases, it's then like cloud, hybrid cloud, on-prem, potentially at the edge. And what we really believe in there is like, okay, the next level would be like, how can we help customers to manage this complexity? Because this is what we believe is like really needs to be also heavily automated, because otherwise this whole workload will not really work. And we also really think about,
Starting point is 00:44:19 we need to think also there more in this idea of providing services so that different teams can provide a team can provide database as a service another team can provide monitoring as a service so that it gets easily consumable and not every area needs to figure out on its own how to plug everything together that's exactly where we started building cube carrier so that we can help customers to deploy and manage these workloads across different environments in a scalable way that you can, for example, have a database in the future, have a database cluster, have an application cluster, and that potentially you even could connect them. How do you spell cube carrier? Oh, I found it. Cube carrier. Okay.
Starting point is 00:45:03 Perfect. We want to make sure that we also put these things into the proceedings yeah exactly you find it uh when you go to github.com there you'll find all of our open source projects very cool um brian yes brian Brian. I'm just sitting here. Oh, I just muted my, I was on mute and then I unmuted. I was, I was muted. I was unmuted. Then I muted. I was opposite.
Starting point is 00:45:34 Yeah. No, I've just been sitting here, um, stewing over all this. So Sebastian, as you could probably tell, there is, I'm on the show because I'm the engineer and Andy's the brains on the show. But I, no, I'm absorbing all this. And I think a lot has really become much more solidified, clear. we hear quite often when we're talking to various people is whether it's you're looking to move to microservices or functionless you know serverless functions or moving to kubernetes or whatever the idea is you don't do it unless you have a reason right it shouldn't be that hey i'm an organization kubernetes is hot we need to move to kubernetes or
Starting point is 00:46:27 microservices are hot we need to move to microservices the first thing is always the idea of evaluating like why do we need to move to that what's it going to give us and i think today's conversation really helped me understand just even conceptually like obviously kubernetes is to me it's pretty obvious conceptually, but it really helped frame it in another way. It's not that Kubernetes makes things easy, because far from it. Kubernetes is very complicated. You're managing all of your ingresses, egresses, your network connections, all these different little things going on in it. You still have a humongous role for what would be the sysadmins. But what Kubernetes is doing is giving you flexibility
Starting point is 00:47:06 to use more modern architecture. It's going to be hard to run microservices on bare metal or even virtual machines. So if you're moving to these more modern deployment things, Kubernetes makes sense. But why I was overwhelmed listening today is that if you're just running bare Kubernetes, it's still going to be overwhelming. You still need a lot of, for lack of a better word, sysadmins. I know a lot of them like to call themselves DevOps admins, but the people
Starting point is 00:47:38 running that system who know the system and knows it has to be done, whether or not it's done programmatically, I guess most of it in Kubernetes is, but it's really just, to me at least, I'm going to go out on a limb here and say Kubernetes is your new data center model, right? It could have been on bare metal before, could have been on VMware now, now you're running it on Kubernetes, which is probably running on one of those two anyway, but that's the next thing that you have to manage. And I think it's really great that, you know, Cubematic is coming in and taking a lot of that hard work out of it. Because I also see a parallel, Andy, between, you know,
Starting point is 00:48:18 I see running, and correct me if I'm wrong, Sebastian, too. I see running Kubernetes on your own versus leveraging the help of the Cubematic is similar to when it comes to monitoring using a well-established APM tool like Dynatrace or doing the open source, we're going to do it ourselves, right? If you're going to do it yourselves, you can,
Starting point is 00:48:44 but there's a lot of work and there are do it yourselves you can but there's a lot of work and there's there are people out there who are figuring out the management layer the of this piece to help you out to make something like kubernetes easier to use um because it just yeah it's just it just really seems that it can be there's there's the the of, I see someone like a CTO somewhere says, we're going to move to Kubernetes. Everyone's like, yay. And then they move and they're like, holy crap, we need to hire four more people.
Starting point is 00:49:12 We all need to train up on this and that, you know, and it's, it's a lot. It's a lot. So I think this today's episode really just helps solidify that to be, unless I completely missed it. Did I get it right, at least? I think... Yeah, absolutely. I think we can also see this with Linux itself.
Starting point is 00:49:36 Potentially in the beginning, you built your own Linux distribution, but nowadays, I think most of us are not trying to build our own distribution. We're using existing tooling, either completely open source distributions
Starting point is 00:49:53 or you want to have vendor support for it. And I think similar patterns applies also for Cloud Native. But I think what's also important is um i think in the whole discussion about cloud native uh the tooling is one component of this i think what people often forget is like you need to change your processes
Starting point is 00:50:19 if you try to run communities in the same way as you do it before, it will not work, at least not well. And also, you need to train your people because it requires completely new skills. So Kubernetes can only really be, or Cloud Native can only be really successful if you're also touching the other both components, processes and your employees and your people to get them on this new journey. Because otherwise, I think the chance is high that it will fail. Yeah. It seems like, you know, I would think that some people might think Kubernetes is sort of the easy button, but it seems that it's not. You know, the easy button is move all of your software to SaaS vendors, right?
Starting point is 00:51:04 But then you're completely restricted to only what they can do. It's not a real solution. I mean, for some things it is, right? I mean, if you're going to do, if you're a smaller shop and you want to set up a commerce store, there are plenty of commerce SaaS products out there that are going to be great. But if you're, you know, if you're aiming to be the next Amazon or something, you're going to have to build your own, right? You're going to have to do that. So, um, uh, obviously the easy button is not going to be a solution for, for a lot of people. Um, and Kubernetes is not an easy button, but again, thanks for takes the people like you, Sebastian, your company that, that helps that, that make it easy. Um, yeah, no, it's just
Starting point is 00:51:44 overwhelming and it really makes me grateful that I'm on the pre-sale side of Dynatrace and no longer on the side that our customers sit on that have to deploy all this stuff. I get to just talk about it and have fun. It's so much nicer. Anyhow, that was my thoughts about the topic. I really, really appreciate you being on.
Starting point is 00:52:04 It was really awesome. Hey, to kind of conclude today's, That was my thoughts about the topic. I really, really appreciate you being on. It was really awesome. Hey, to kind of conclude today's session, Sebastian, I know in these days where we don't travel anymore, we do a lot of events virtually, everything basically virtually. I know you are a CNCF ambassador, at least I believe so,
Starting point is 00:52:24 when reading through your bio and a frequent conference speaker. Are there any places to go where people can find out more about you, what you're doing, where you speak, or how to get in touch with folks from your organization or your open source projects? What are the best places to get started with if they want to get in touch with you? Yeah, I mean, to really cover this problem,
Starting point is 00:52:56 how to migrate existing applications to Kubernetes, we have, I think, in the next two weeks, a webinar. So if you go to our website, you can find this there, where I will talk with one of our engineers about this topic. I'm regular on conferences. You can also find me on Twitter, sscheele... Wait.
Starting point is 00:53:30 I need to check. Every time I'm mixing up Twitter and GitHub, because I have not the same. It's vice versa. We will make sure that we will add the link to sscheele. I think it's sscheele. Yeah, it's sScheele. Yeah, it's S-Scheele.
Starting point is 00:53:47 Exactly. Because my Twitter one is Scheele S. So I needed to flip around and I never know which one I have where. Yeah, and of course you find me also on LinkedIn.
Starting point is 00:54:03 Perfect. I will make sure to link to link to all these resources um with that um i think that that it was as just to echo what brian said very insightful thanks for uh answering all these questions uh none of them were scripted that's why i'm also so happy that uh you just took all these these questions without thinking about it and just talking about and answering them and not kind of backing out of it. So that's great. And it seems we found some additional to-dos for us in the future,
Starting point is 00:54:35 which we will talk about later on once we stop the recording here. But yeah, that's all from me. Thank you so much. Thank you. Yeah, thanks for inviting me really appreciate it and I hope we can find some ways where we can do some
Starting point is 00:54:51 joint open source work yeah that would be awesome cool awesome well thanks everyone for listening if you have any questions comments you can reach us at pure underscore dt or email us at pure performance at dynatrace.com. And we'll have links to everything Sebastian in the description of the show. So make sure
Starting point is 00:55:10 to check those out. Thanks everyone for listening until next time. Bye-bye. Bye.
