Screaming in the Cloud - The Non-Magical Approach to Cloud-Based Development with Chen Goldberg

Episode Date: November 15, 2022

About Chen

Chen Goldberg is GM and Vice President of Engineering at Google Cloud, where she leads the Cloud Runtimes (CR) product area, helping customers deliver greater value, effortlessly. The CR portfolio includes both serverless and Kubernetes-based platforms on Google Cloud, private cloud, and other public clouds. Chen is a strong advocate for customer empathy, building products and solutions that matter. Chen has been core to Google Cloud's open core vision since she joined the company six years ago. During that time, she has led her team to focus on helping development teams increase their agility and modernize workloads. Prior to joining Google, Chen wore different hats in the tech industry, including leadership positions in IT organizations, SI teams, and software product development, contributing to her broad enterprise perspective. She enjoys mentoring IT talent both in and outside of Google. Chen lives in Mountain View, California, with her husband and three kids. Outside of work she enjoys hiking and baking.

Links Referenced:
Twitter: https://twitter.com/GoldbergChen
LinkedIn: https://www.linkedin.com/in/goldbergchen/

Transcript
Starting point is 00:00:00 Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Forget everything you know about SSH and try Tailscale. Imagine if you didn't need to manage PKI or rotate SSH keys every time someone leaves.
Starting point is 00:00:41 That'd be pretty sweet, wouldn't it? With Tailscale SSH, you can do exactly that. Tailscale gives each server and user device a node key to connect to its VPN, and it uses the same node key to authorize and authenticate SSH. Basically, you're SSHing the same way you manage access to your app. What's the benefit here? Built-in key rotation, permissions as code, connectivity between any two devices, reduced latency. And there's a lot more, but there's a time limit here. You can also ask users to reauthenticate for that extra bit of security.
Starting point is 00:01:15 Sounds expensive? Nope. I wish it were. Tailscale is completely free for personal use on up to 20 devices. To learn more, visit snark.cloud slash tailscale. Again, that's snark.cloud slash tailscale. Welcome to Screaming in the Cloud. I'm Corey Quinn. When I get bored and the power goes out, I find myself staring at the ceiling, figuring out how best to pick fights with people on the internet about Kubernetes, because I'm basically sad and have a growing collection of personality issues.
Starting point is 00:01:49 My guest today is probably one of the best people to have those arguments with. Chen Goldberg is the general manager of cloud runtimes and VP of engineering at Google Cloud. Chen, thank you for joining me today. Thank you so much, Corey, for having me. So Google has been doing a lot of very interesting things in the cloud. And the more astute listener will realize that interesting is not always necessarily a compliment. But from where I sit, I am deeply invested in the idea of a future where we do not have a cloud monoculture. As I've often said, I want "what cloud should I build something on in five to ten years?" to be a hard question to answer,
Starting point is 00:02:30 and not just because everything is terrible. I think that Google Cloud is absolutely a bright light in the cloud ecosystem and has been for a while, particularly with its emphasis around developer experience. All of that said, Google Cloud is sort of a big unknowable place, at least from the outside. What is your area of responsibility? Where do you start? Where do you stop?
Starting point is 00:02:54 In other words, what can I blame you for? Well, you can blame me for a lot of things if you want to. I might not agree with that. We strive for accuracy in these things. But that's fine. Well, first of all, I've joined Google about seven years ago to lead the Kubernetes and GKE team. And ever since, continued at the same area.
Starting point is 00:03:14 That has evolved, of course: Kubernetes and Google Kubernetes Engine, leading our hybrid and multi-cloud strategy as well with technologies like Anthos. And now I'm responsible for the entire container runtime, which includes Kubernetes and the serverless solutions. A while back, I, in fairly typical sarcastic form, wound up doing a whole inadvertent start of a meme where I joked about there being 17 ways
Starting point is 00:03:44 to run containers on AWS. And then as that caught on, I wound up listing out 17 services that you could use to do that. A few months went past, and then I published a sequel of 17 more services you can use to run Kubernetes. And while that was admittedly tongue-in-cheek, it does lead to an interesting question that's ecosystem-wide. If I look at Google Cloud, I have Cloud Run, I have GKE, I have GCE if I want to do some work myself; it feels like more and more services are supporting Docker in a variety of different ways. How should customers and/or people like me,
Starting point is 00:04:22 though I am sort of a customer as well since I do pay you folks every month, how should we think about containers and services in which to run them? First of all, I think there's a lot of credit that needs to go to Docker for making containers approachable. So Google has been running containers forever. Everything within Google is running on containers, even our VMs, even our cloud is running on containers. But what Docker did was create a packaging mechanism to improve developer velocity. So that on its own is great. And one of the things, by the way, that I love about Google Cloud's approach to containers and Docker is that, yes, you can take your Docker container and run it anywhere. And it's actually really important to ensure what we call interoperability,
Starting point is 00:05:08 or a low barrier to entry to a new technology. So I can take my Docker container and I can move it from one platform to another, and so on. So that's just to start with on containers. Between the different solutions, well, first of all, I'm all about managed services. You are right that there are many ways to run Kubernetes. But the best way is always to have someone else run it for you. Problem solved. Great. The best kind of problems are always someone else's. Yes. And I take a lot of pride in what our team is doing with Kubernetes. I mean, we've been working on that for so long. And there's a term we coined, I think back in 2016: there is success disaster, but there's also what we call sustainable success. So thinking about how to set ourselves up for success and scale, I'm very proud of that
Starting point is 00:05:58 service. That said, not everybody, and not for all your workloads, needs the flexibility that Kubernetes gives you, or the ecosystem. So if you're starting with containers for the first time, you should start with Cloud Run. It's the easiest way to run your containers. That's one.
Starting point is 00:06:17 If you are already in love with Kubernetes, we won't take it away from you. Start with GKE. OK? Go all in. OK, we are all in love with Kubernetes as well. But what my team and I are working on is to make sure that those will work really well together. We actually see a lot of customers do that.
Starting point is 00:06:37 I'd like to go back a little bit in history to the rise of Docker. I agree with you, it was transformative, but containers had been around in various forms, depending upon how you want to define it, dating back to the 70s with logical partitions on mainframes. Well, is that a container? Is it not? Well, sort of. We'll assume yes for the sake of argument. The revelation that I found from Docker was the developer experience, start to finish. Suddenly it was a couple of commands and you were just working, where previously it had taken tremendous amounts of time and energy
Starting point is 00:07:08 to get containers working in that same context. And I don't even know today whether or not the right way to contextualize containers is as sort of a light version of a VM, as a packaging format, as a number
Starting point is 00:07:24 of other things that you could reasonably call it. How do you think about containers? So first of all, I'm going to take a small detour. I actually started my career as a systems mainframe engineer. And I will share that when, you know, I learned Kubernetes, I was like, oh, we had already done all of that, orchestration and workload management, on the mainframe, just as an aside. The way I think about containers is as two things. One, it is a packaging of an application. But the other thing, which is also critical, is the decoupling between your application and the OS. So you have that kind of abstraction, allowing you to be portable and move it between environments.
Starting point is 00:08:02 So those are the two things, when I think about containers. And what technologies like Kubernetes and serverless give on top of that is the manageability: making sure that we take care of everything else that is needed for you to run your application. I've been, how do I put this, getting some grief over the past few years in the best way possible around an almost off-the-cuff prediction that I made, which was that in five years, which is now a lot closer to two, basically nobody is going to care about Kubernetes. And I could have phrased that slightly more directly because people think I was trying to say, oh, Kubernetes is just hype.
Starting point is 00:08:43 It's going to go away. Nobody's going to worry about it anymore. And I think that is a wildly inaccurate prediction. My argument is that people are not going to have to think about it in the same way that they are today. Today, if I go out and want to go back to my days of running production services in anger, and by anger, I of course mean in production, then it would be difficult for me to find a role that did not at least touch upon Kubernetes. But people who can
Starting point is 00:09:14 work with that technology effectively are in high demand and they tend to be expensive. Not to mention that thinking about all of the intricacies and complexities that Kubernetes brings to the foreground, that is what doesn't feel sustainable to me. The idea that it's going to have to collapse down into something else is by necessity going to have to emerge. How are you seeing that play out? And also feel free to disagree with the prediction. I am thrilled to wind up being told that I'm wrong. It's how I learn the most. I don't know if I agree with the time horizon of when that will happen, but I actually think it would be a failure on us
Starting point is 00:09:54 if that won't be the truth, that the majority of people will not need to know about Kubernetes and its internals. And, you know, we keep saying that, like, hey, we need to make it Kubernetes and its internals. And, you know, we keep saying that, like, hey, we need to make it more like boring and easy. And I've just said, like, hey, you should use manage. And we have lots of customers that says that they're just using GKE and it scales on their behalf and they don't need to do anything for that. And it's just like magic. But from a
Starting point is 00:10:20 technology perspective, there is still a way to go until we can make that disappear. And there will be two things that will push us into that direction. One is, you mentioned it as well, the talent shortage is real. All the customers that I speak with, even if they can find those great people that are experts, there are actually more interesting things for them to work on. Okay, you don't need to take all the people in your organization and put them on building the infrastructure. You don't care about that. You want to build innovation and promote your business. So that's one. The second thing is that I do expect that the technology will continue to evolve and our managed solution will be better and better. So hopefully with these two
Starting point is 00:11:03 things happening together, people will not care that what's under the hood is Kubernetes, or maybe not even, right? I don't know exactly how things will evolve. From where I sit, one of the early criticisms I had about Docker, which I guess translates pretty well to Kubernetes,
Starting point is 00:11:24 is that they solve a few extraordinarily painful problems. In the case of Docker, it was, well, "it works on my machine." As a grumpy sysadmin, the way I used to be, the only real response we had to that was, well, time to back up your email, Skippy, because your laptop's going to production then. Now you could effectively have a high-fidelity copy of production basically anywhere, and we've solved the problem of making your Mac laptop look like a Linux server. Great. Okay. Awesome. With Kubernetes, it also feels on some level like it solves for very large-scale, Google-type problems where you want to run things across at least a certain point of scale. It feels like even today it suffers from not having an easy Hello World-style application to deploy on top of it. Using it for WordPress
Starting point is 00:12:05 or some other form of blogging software, for example, is stupendous overkill as far as the Hello World story tends to go. Increasingly, as a result, it feels like it's great for the large-scale enterprise-y applications, but the getting-started story of how do I have a service
Starting point is 00:12:21 I could reasonably run in production, how do I contextualize that in the world of Kubernetes? How do you respond to that type of perspective? I will start with maybe a short story. I started my career in the Israeli army. I was head of a department in one of the technology units, and I was responsible for building a PaaS. In essence, it was 20-plus years ago, so we didn't really call it a PaaS, but that's what it was.
Starting point is 00:12:48 And at some point, it was amazing. Developers were very productive. We got innovation again and again, and then there was some new innovation; it was the beginning of the web at some point. And there were actually two things I noticed back then. One, it was really hard to evolve the platform to allow new technologies and innovation.
Starting point is 00:13:14 And the second thing, from a developer perspective, it was like a black box. The other development teams couldn't really troubleshoot the environment. They were not empowered to make decisions, or to rely on the platform. And you know, when we had just started with Kubernetes, by the way, in the beginning it only supported 100 nodes, and then 1,000 nodes. Okay, so it was actually not about scale.
Starting point is 00:13:41 It actually solved those two problems, which is, by the way, where I spend most of my time. So the first one: we don't want magic, okay? It should be clear to us what's happening. I want to make sure that things are consistent and I can get the right observability. So that's one. The second thing is that we invested so much in the extensibility of that environment that it's, I wouldn't say easy, but doable to evolve Kubernetes. You can change the models, you can extend it, there is an ecosystem. And you know, when we were building it, I remember I used to tell my team, there won't be a Kubernetes 2.0, which, for a developer, is frightening. But if you think about it and you prepare for that,
Starting point is 00:14:30 you're like, huh, okay, what does that mean for how I build my APIs? What does that mean for how we build the system? So that was one. The second thing, I keep telling my team, please don't get too attached to your code, because if it's still there in five or ten years, we did something wrong. And you can see areas within Kubernetes, again, all the extensions, I'm very proud of all
Starting point is 00:14:50 the interfaces that we've built, but let's take networking. This keeps evolving all the time on the API and the surface area that allows us to introduce new technologies. I love it. So those are the two things that have nothing to do with scale, are unique to Kubernetes, and I think are very empowering and critical for its success. One thing that you said that resonates most deeply with me is the idea that you don't want there to be magic,
Starting point is 00:15:18 where I just hand it to this thing and it runs it as if by magic. Because again, we've all run things in anger in production. And what happens when the magic breaks? When you're sitting around scratching your head with no idea how it starts or how it stops, that is scary. I mean, I recently wound up re-implementing Google Cloud distinguished engineer Kelsey Hightower's Kubernetes The Hard Way, because he gave a terrific tutorial that I ran through in about 45 minutes on top of Google Cloud. It's like, all right, how do I make this harder? And the answer is to do it on AWS and reimplement it there. And my experiment there can be found at kubernetesthemuchharderway.com
Starting point is 00:15:56 because I have a vanity domain problem. And it taught me an awful lot. But one of the challenges I had as I went through that process was at one point, the nodes were not registering with the controller. And I ran out of time that day and turned everything off, because surprise bills are kind of what I spend my time worrying about. Turned it on the next morning to continue, and then it just worked. And that was sort of the spidey-sense-tingling moment of, OK, something wasn't working and now it is. And I don't understand why, but I just rebooted it and it started working, which is terrifying in the context of a production service. It was understandable, kind of, and I think it's the sort of thing that you understand a lot better the more
Starting point is 00:16:35 you work with it in production. But a counter-argument to that, and I've talked about it on this show before, is that for this podcast I wind up having sponsors from time to time who want to give me fairly complicated links to go check them out. So I have the snark.cloud URL redirector. That's running as a production service on top of Google Cloud Run. It took me half an hour to get that thing up and running. I haven't had to think about it since,
Starting point is 00:17:00 aside from a three-second latency that was driving me nuts and turned out to be a sleep hidden in the code, which I can't really fault Google Cloud Run for so much as my crappy nonsense. But it just works. It's clearly running atop Kubernetes, but I don't have to think about it. That feels like the future. It feels like it's a glimpse of a world to come that we're just starting to dip our toes into. That, at least to me, feels like a lot more of the abstractions being collapsed into something easily understandable. First of all, I'm happy you say that. When talking with customers and we're sharing like,
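To give a sense of how small that kind of service can be: a URL redirector of the sort described here fits in a few dozen lines of Go. This is a hypothetical sketch, not the actual snark.cloud code; the redirect table is invented, and the only Cloud Run-specific assumption is that the container listens on the port given by the PORT environment variable that Cloud Run sets.

```go
package main

import (
	"log"
	"net/http"
	"os"
)

// A made-up redirect table; the real service's mappings are not public.
var redirects = map[string]string{
	"/tailscale": "https://tailscale.com/",
	"/duckbill":  "https://www.duckbillgroup.com/",
}

// redirect looks up the request path and issues an HTTP 302 if it is known.
func redirect(w http.ResponseWriter, r *http.Request) {
	if target, ok := redirects[r.URL.Path]; ok {
		http.Redirect(w, r, target, http.StatusFound)
		return
	}
	http.NotFound(w, r)
}

func main() {
	// Cloud Run tells the container which port to listen on via $PORT;
	// fall back to 8080 for local runs.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	http.HandleFunc("/", redirect)
	log.Printf("listening on :%s", port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```

Containerize that and a single command along the lines of `gcloud run deploy` gets it serving, with scaling and TLS handled for you, which is roughly the half-hour, hands-off experience being described.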
Starting point is 00:17:37 First of all, I'm happy you say that. When talking with customers, you know, yes, they're all on Kubernetes, and when we start talking about Cloud Run and serverless, I feel there is a confidence level that they need to overcome. And that's why it's really important for us in Google Cloud to make sure that you can mix and match. Because sometimes, you know, with a big retail customer of ours, for some of their teams it's really important to use a Kubernetes-based platform, because they have their workloads also running on-prem and they want to have the same playbooks, for example. How do I address issues? How do I troubleshoot? And so on. So that's one set of things, but some want cloud only, as simple as possible. So can I use both of them and still have a similar developer experience, and so on? So I do think that we'll see more of that
Starting point is 00:18:26 in the coming years. And as the technology evolves, then we'll have more and more serverless solutions, of course. But it doesn't end there. We also see databases and machine learning. There are so many more managed services that are making things easy.
Starting point is 00:18:43 And that's what excites me. I mean, that's what's awesome about what we're doing in cloud. We are building platforms that enable innovation. I think that there's an awful lot of power behind unlocking innovation from a customer perspective. The idea that I can use a cloud provider to wind up doing an experiment to build something in the course of an evening. And if it works great, I can continue to scale up without having to replace the crappy Raspberry Pi level hardware in my spare room with serious enterprise servers in a data center somewhere. The on-ramp and the capability and the lack of long-term commitments is absolutely magical.
Starting point is 00:19:19 What I'm also seeing that is contributing to that is the de facto standard that's emerged of most things these days supporting Docker, for better or worse. There are many open source tools that I see where, how do I get this up and running? Well, you can go over the river and through the woods and way past grandmother's house to build this from source, or run this Dockerfile. I feel like that is the direction the rest of the world is going. And as much fun as it is to sit on the sidelines and snark, I'm finding a lot more capability stories emerging across the board. Does that resonate with what you're seeing, given that you are inherently working at very large scale, given the nature of where you work? I do see that. And I actually want to double down on the open standards,
Starting point is 00:20:06 which I think is also something that is happening. At the beginning, you talked about wanting it to be very hard to choose a cloud provider. But innovation doesn't only come from cloud providers. There are a lot of companies, and a lot of innovation happening, building new technologies on top of those cloud providers. And I don't think this is going to stop. Innovation is going to come from many places and it's going to be very exciting. And by the way, things are moving super fast in our space. So the investment in open standards is critical for our industry.
Starting point is 00:20:43 So Docker is one example. Google is, generally speaking, investing a lot in building those open standards. So we have Docker, we have things like, of course, Kubernetes, but we're also investing in open standards for security. So we are working with other partners around SLSA, defining how you can secure the software supply chain, which is also critical for innovation. So all of those things that reduce the barrier to entry are something that I'm personally passionate about. Scaling containers and scaling Kubernetes is hard,
Starting point is 00:21:16 but a whole other level of difficulty is scaling humans. You've been at Google for, as you said, seven years, and you did not start as a VP there. Getting promoted from senior director to VP at Google is a, shall we say, heavy lift. You also mentioned that you previously started with, I believe it was a seven-person team at one point. How have you been able to do that? Because I can see a world in which, oh, we just write some code and we can scale the computers pretty easily. I've never found a way to do that for people.
Starting point is 00:21:46 So, yes, I started actually, well, not seven, but the team was 30 people. And you can imagine how surprised I was when I joined Google Cloud to lead Kubernetes and GKE. And it was a pretty small team at the beginning, in those days. But the team was already actually on the edge of burning out. You know, pings on Slack, the GitHub issues. There were so many things happening 24/7. And the team was just doing everything. Everybody was doing everything.
Starting point is 00:22:18 And one of the things I did in my second month on the team: I did an offsite, right? All managers, that's what we do. We do offsites. And I brought the team, the leadership team, to talk about our team values. And at the beginning, they were a little bit pissed, I would say. Like, hey, Chen, what's going on? You're wasting two days of our lives to talk about those things while we're not doing other things. And I was like, you know, guys, this is really important.
Starting point is 00:22:44 Let's talk about what's important for us. It was an amazing workshop, by the way. That work is still the foundation of the culture in the team. We talked about the three values that we care about and what that would look like. And the reason it's important is that when you scale teams, the key thing is actually to scale decision-making. So how do you scale decision-making? I think there are two things there. One is what you're trying to achieve. So people should know and understand the vision and know where we want to get to. But the second thing is how do we work? What's important for us? How do we prioritize? How do we make trade-offs? And when you have both the what we're trying to do and the how, you build that team
Starting point is 00:23:29 culture. And when you have that, I find that it sets you up more for success in scaling the team. Because then the storyteller is not just the leader or the manager; the entire team is a storyteller of how things work in this team: how we work, what we're trying to achieve, and so on. So that's something that has been critical. So that's just the methodology of what I think is the right way to scale teams. Specifically with Kubernetes, there were more issues that we needed to work on. For example, building or recruiting different functions.
Starting point is 00:24:07 It cannot be just engineering doing everything. So hiring the first product managers and information engineers and marketing people. Oh my God. Yes, you have to have marketing people because there are so many events. And so that was one thing, just, you know, from people and skills. And the second thing is that it was an open source project and a product, but what I was personally doing with the team was bringing some product engineering practices into the open source. So can we say, for example, that we're going to focus on user experience this next release? And we're not going to do all the rest. And I remember my team was worried about, like, hey, what about that?
Starting point is 00:24:52 And what about this? And they were juggling everything together. And I remember telling them, imagine that everything is on the floor. All the balls are on the floor. I know they're on the floor. You know they are on the floor. It's OK. Let's just make sure that every time we pick something up, it never falls again.
Starting point is 00:25:10 And that idea, as a principle, then evolved into no heroics, and it evolved into sustainable success. But building things toward sustainable success is a principle which has been very helpful for us. This episode is sponsored in part by our friends at Uptycs. Attackers don't think in silos, so why would you have siloed solutions protecting cloud, containers, and laptops distinctly? Meet Uptycs, the first unified solution that prioritizes risk across your modern attack surface, all from a single platform, UI, and data model. Stop by booth 3352 at AWS re:Invent in Las Vegas to see for yourself and visit uptycs.com.
Starting point is 00:25:59 That's U-P-T-Y-C-S dot com. My thanks to them for sponsoring my ridiculous nonsense. When I take a look back, it's very odd to me to see the current reality that is Google, where you're talking about empathy and the no heroics and the rest. That is not the reputation that Google enjoyed back when a lot of this stuff got started. It was always, oh yeah,
Starting point is 00:26:25 engineers should be extraordinarily bright and gifted, and therefore it felt at the time like our customers should be as well. There was almost an arrogance built into, well, if you wrote your code more like Google does, then maybe your code wouldn't be so terrible in the cloud. And somewhat cynically, I thought for a while that, oh, Kubernetes is Google's attempt to wind up making the rest of the world write software in a way that's more googly. I don't think that that observation has aged very well. I think it's solved a tremendous number of problems for folks, but the complexity has absolutely been high throughout most of Kubernetes' life. I would argue on some level that it feels like it's become successful almost in spite of that rather than because of it.
Starting point is 00:27:06 But I'm curious to get your take. Why do you believe that Kubernetes has been as successful as it clearly has? I'll touch on two things. One, about empathy. So yes, Google engineers are brilliant and they're amazing and that's all great. And our customers are amazing and brilliant as well. And going back to the point before, everyone has their job and where they need to be successful. And we, as you say, we need to make things simpler and enable innovation.
Starting point is 00:27:39 And our customers are driving innovation on top of our platform. So that's the way I think about it. And yes, it's probably not as simple as it can be yet. But starting in the early days of Kubernetes, we have been investing a lot in empathy, and customer empathy workshops, for example. So I partnered with Kelsey Hightower. And you mentioned yourself trying to start a cluster. The first time we did a workshop with my entire team, so then it was like 50 people, their task was to spin up a cluster without using any scripts that we had internally.
Starting point is 00:28:19 And unfortunately not many folks succeeded in this task. And out of that came what we call an OKR, which was our goal for that quarter: that you are able to spin up a cluster in three commands and troubleshoot if something goes wrong. That came out of that workshop. So I do think that there is a lot of foundation on that empathetic engineering.
Starting point is 00:28:42 And the open source and the community helped our Google teams to be more empathetic and understand the different use cases that they are trying to solve. And that actually brings me to why I think Kubernetes is so successful. People might be surprised, but the amount of investment we are making on orchestration or placement of containers
Starting point is 00:29:04 within Kubernetes is actually pretty small. And it's been very small for the last seven years. Where do we invest time? One, as I mentioned before, is on what we call the API machinery. So Kubernetes has introduced a way that is really suitable for cloud-native technologies, the idea of the reconciliation loop, meaning the way Kubernetes works is... Kubernetes is like a powerful
Starting point is 00:29:35 automation machine, which can automate, of course, workload placement, but can automate other things. Think about it this way: the Kubernetes API machinery is observing what the current state is, comparing it to the desired state, and working towards it. Think about it like a thermostat, which is a different kind of automation versus the if-this-then-that model, where you need to anticipate different events. That idea of the API machinery and the way that you can extend it made it possible for different teams to use that mechanism to automate other things in that space. So that has been one very powerful mechanism of Kubernetes, and that enabled a lot of innovation. Even if you think about things like Istio, as an example, that's how it started, by leveraging that kind of mechanism. The same with storage and so on.
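For readers who want to see the shape of that reconciliation loop, here is a minimal, self-contained Go sketch of the thermostat pattern: observe the current state, compare it with the desired state, take a step toward convergence, and repeat. It is an illustration only, not Kubernetes code; the Reconciler interface and the toy replica counter are hypothetical names, and a real controller watches API objects rather than a local struct.

```go
package main

import (
	"fmt"
	"time"
)

// Reconciler is the generic contract of a control loop: observe the world,
// report the desired state, and take one step toward it. New resource types
// can plug into the same loop by implementing this small interface, which is
// the extensibility idea described above. (Hypothetical names, not k8s APIs.)
type Reconciler interface {
	Observe() int  // current state
	Desired() int  // desired state
	Act(delta int) // one corrective step
}

// replicaReconciler is a toy reconciler: it tracks how many replicas of a
// pretend workload are "running" versus how many the user asked for.
type replicaReconciler struct {
	running int
	want    int
}

func (r *replicaReconciler) Observe() int { return r.running }
func (r *replicaReconciler) Desired() int { return r.want }

// Act converges one step per tick, like a thermostat heating or cooling a
// room, rather than trying to anticipate every possible event up front.
func (r *replicaReconciler) Act(delta int) {
	if delta > 0 {
		r.running++ // "start" one replica
	} else if delta < 0 {
		r.running-- // "stop" one replica
	}
}

// reconcileLoop keeps comparing current and desired state and acting until
// they match; a real controller runs forever and re-checks on every change.
func reconcileLoop(r Reconciler, tick time.Duration) {
	for {
		current, desired := r.Observe(), r.Desired()
		fmt.Printf("observed=%d desired=%d\n", current, desired)
		if current == desired {
			fmt.Println("converged")
			return
		}
		r.Act(desired - current)
		time.Sleep(tick)
	}
}

func main() {
	// The user "applies" a desired state of three replicas; none are running yet.
	reconcileLoop(&replicaReconciler{running: 0, want: 3}, 10*time.Millisecond)
}
```

The operators mentioned next follow the same shape: they bring their own notion of observe and act for a database or another stateful system, and let the loop do the rest.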
Starting point is 00:30:27 So there are a lot of operators; the way people are managing their databases or stateful workloads on top of Kubernetes, they're extending this mechanism. So that's one thing that I think is key and that built that ecosystem. The second thing: I am very proud of the community of Kubernetes.
Starting point is 00:30:44 Oh, it's a phenomenal community success story. It's not easy to build a community, definitely not in open source. I feel that the idea of values that I was talking about within my team was actually a big deal for us as we were building the community, how we treat each other,
Starting point is 00:31:02 how do we help people start. We were talking before, like, am I going to talk about DEI and inclusivity and so on? One of the things that I love about Kubernetes is that it's a new technology. There's actually, well, maybe not, no, even today, there's no one with 10 years experience in Kubernetes. And if anyone says they have that, then they're lying. Time machine, yes.
Starting point is 00:31:29 That creates an opportunity for a lot of people to become experts in this technology. And by having it in open source and making everything available, you can actually do it from your living room sofa. That excites me. You know, the idea that you can become an expert in this new technology and you can get involved and you'll get people that will mentor you and help you through your first PR. And there are some roles within the community that you can start, you know, dipping your toes in the water.
Starting point is 00:32:05 It's exciting. So that makes me really happy. And I know that this community has changed the trajectory of many people's careers, which I love. I think that's probably one of the most impressive things that it's done. One last question I have for you is that we've talked a fair bit about the history and how we see it progressing with a view toward the somewhat recent past. What do you see coming in the future? What does the future of Kubernetes look like to you? Continue to be more and more boring.
Starting point is 00:32:38 The promise of hybrid and multi-cloud, for example, is only possible with technologies like Kubernetes. So I do think that as a technology, it will continue to be important by ensuring portability and interoperability of workloads. I see a lot of edge use cases. If you think about it, the edge is just lagging a bit behind the innovation that we've seen in the cloud. Can we bring that innovation to the edge?
Starting point is 00:33:09 This will require more development within the Kubernetes community as well. And that really actually excites me. I think there are a lot of things that we're going to see there. And by the way, you've seen it also at KubeCon. I mean, there were some announcements in that space. In Google Cloud, we just announced work with customers like Wendy's and Rite Aid as well. So taking advantage of that technology to allow innovation everywhere. But beyond that, my hope is that we'll continue to hide the complexity. Our challenge will be to not make it a black box,
Starting point is 00:33:50 because that will be, in my opinion, a failure pattern. It doesn't help those platforms. That will be the challenge. Can we scope the project, ensure that we have the right observability? From a use case perspective, I do think edge is super interesting. I would agree. There are a lot of workloads out there that are simply never going to be hosted in a cloud provider region for a variety of reasons
Starting point is 00:34:14 of varying validity, but it is the truth. I think that the focus on addressing customers where they are has been an emerging best practice for cloud providers. And I'm thrilled to see Google leading the charge on that. Yeah, and you just reminded me, the other thing that we see more and more is definitely AI and ML workloads running on Kubernetes, which is part of that, right? So Google Cloud is investing a lot in making AI and ML easy. And I don't know if many people know,
Starting point is 00:34:42 but like even Vertex AI, our own platform is running on GKE. So that's part of seeing how do we make sure that that platform is suitable for these kinds of workloads and really help customers do the heavy lifting. So that's another set of workloads that are very relevant at the edge. One of our customers, MLB, for example, two things are interesting there.
Starting point is 00:35:07 The first one, I think a lot of people sometimes say, okay, I'm going to move to the cloud and I want to know everything right now, how that will evolve. And one of the things that has been really exciting about working with MLB for the last four years is the journey and the iterations. So they started at one phase, and then they saw what's possible, and then moved to the next one, and so on. So that's one. The other thing is that really they have so much
Starting point is 00:35:29 ML running at the stadium with Google Cloud technology, which is very exciting. I'm looking forward to seeing how this continues to evolve
Starting point is 00:35:50 would, but it'll be really interesting to see what shakes out as far as things that deliver business value and are clear wins for customers versus a lot of the speculative stories that we've been hearing for a while now. Maybe I'm really wrong on this, and this is going to be a temporary bump in the road, and we'll see no abatement in the ongoing excitement around so many of these emerging technologies. But I'm curious to see how it plays out. That's the beautiful part about getting to be a pundit, or whatever it is people call me these days,
Starting point is 00:36:17 it's at least polite enough to say on a podcast, is that when I'm right, people think I'm a visionary, and when I'm wrong, people don't generally hold that against you. It seems like futurist is the easiest job in the world, because if you predict and get it wrong, no one remembers. Predict and get it right, you look like a genius. So first of all, I'm optimistic. So usually my predictions are positive.
Starting point is 00:36:39 I will say that, you know, what we are seeing, and also what I'm hearing from our customers, is that technology is not for the sake of technology. Actually, nobody cares, even today. Okay, so nothing needs to change for nobody to care: even today, nobody cares about Kubernetes. They need to care, unfortunately. But what I'm hearing from our customers is,
Starting point is 00:37:01 how do we create new experiences? How do we make things easy? The talent shortage is not just with tech people. It's also with people working in the warehouse or working in the store. Can we use technology to help inventory management? There are so many amazing things. So when there is a real business opportunity, things are so much simpler. People have the right incentives to make it work. Because one thing we didn't talk about, right, we talk about all these new technologies,
Starting point is 00:37:30 and talk about scaling teams and so on. A lot of times the challenge is not the technology. A lot of times the challenge is the process. A lot of times the challenge is the skills, or the culture. There are so many things. But when you have something, going back to what I said before, how you unite teams, when there's something, a clear goal, a clear vision that everybody's excited about, they will make it work. So I think this is where having a purpose for the innovation is critical for any successful project. I think and I hope that you're right. I really want to thank you for spending as much time with me as you have. If people want to learn more, where's the best place for them to
Starting point is 00:38:10 find you? So first of all, on Twitter, I'm there, or on LinkedIn. I will say that I'm happy to connect with folks. Generally speaking, at some point in my career I recognized that I have a voice that can help people, and I have experience that can also help people build their careers. So I'm happy to share that and mentor folks, both in the company and outside of it. I think that's
Starting point is 00:38:38 one of the obligations on a lot of us once we wind up getting to a certain position in our careers to send the ladder back down for lack of a better term. I've never appreciated the perspective, well, screw everyone else, I got mine. The whole point is the next generation should have it easier than we did. Yeah, definitely.
Starting point is 00:38:55 Chen Goldberg, General Manager of Cloud Runtimes and VP of Engineering at Google. I'm cloud economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice. Whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an angry rant of a comment talking about how LPARs on mainframes are absolutely not containers, making sure it's at least far too big to fit in a reasonably sized Docker container.
Starting point is 00:39:27 If your AWS bill keeps rising and your blood pressure is doing the same, then you need the Duckbill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duckbill Group works for you, not AWS. We tailor recommendations to your business, and we get to the point. Visit duckbillgroup.com to get started. This has been a HumblePod production. Stay humble.
