Screaming in the Cloud - Spanning the Globe with Jaana Dogan

Episode Date: August 11, 2020

About Jaana Dogan

Jaana Dogan is working on Spanner at Google to make state not your problem. She has 15+ years of experience in building infrastructure, developer platforms, and tools. Jaana's current work is focused on storage systems, observability and performance tools, and helping customers with architectural design tradeoffs.

Links Referenced:

Recommended book: https://www.amazon.com/Designing-Data-Intensive-Applications-Reliable-Maintainable/dp/1449373321
Twitter: https://twitter.com/rakyll
https://jbd.dev/
https://rakyll.org/

Transcript
Starting point is 00:00:00 Hello, and welcome to Screaming in the Cloud, with your host, cloud economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Learn about the dancing flames of EKS cluster security, evade the toxic dumpster of the standard controls, and tame the wild beast of best practices for minimizing the risk around cluster workloads. Become renowned for your feats of daring as you learn the specific requirements for securing an EKS cluster and its associated infrastructure. To learn more, visit snark.cloud slash stackrox. That's snark.cloud slash stackrox.
Starting point is 00:01:13 This episode is brought to you by Trend Micro Cloud One, a security services platform for organizations building in the cloud. I know you're thinking that that's a mouthful because it is, but what's easier to say? I'm glad we have Trend Micro Cloud One, a security services platform for organizations building in the cloud, or, hey, bad news, it's gonna be a few more weeks, I kinda forgot about that security thing. I thought so. Trend Micro Cloud One is an automated, flexible, all-in-one solution that protects your workloads and containers with cloud-native security. Identify and resolve security issues earlier in the pipeline and access your cloud
Starting point is 00:01:50 environment sooner with full visibility so you can get back to what you do best, which is generally building great applications. Discover Trend Micro Cloud One, a security services platform for organizations building in the cloud at trendmicro.com slash screaming. Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by Yana Dogen, staff engineer at a small company called Google. Yana, welcome to the show. Hi, how are you? I am very well, and I'm better now that I get to talk to you. One of the, I guess, not one of, the best database in the world is DNS. And that is a hill I will die on.
Starting point is 00:02:31 Almost as impressive is a product that you work on, namely Spanner. What is Spanner and why would someone care about it? Spanner is our relational, transactional, and globally scalable database. So historically, or even today, it's just actually really hard to make transactional, relational databases scale. Google actually has humble background in databases as well. Lots of people are thinking about Google as this large company that only cares about large-scale problems. But in the beginning, it started very small. There's this very typical example, a typical story around our MySQL usage.
Starting point is 00:03:18 Specifically, AdWords, our ads business, had been heavily dependent on MySQL. And they got to this point that there was neither shards of MySQL instances. And they've been dealing with like resharding things. It was causing outages and so on. And around this time, people decided to maybe take another look at the storage in general. And they figured out
Starting point is 00:03:36 we definitely need something transactional because, you know, we're doing a lot of money transactional related things. Consistency is really important for us because you want to be consistent, especially if it's about money. And they need the relational capabilities because there are a lot of relational problems they had.
Starting point is 00:03:55 So Spanner came as a result of these problems, but it didn't appear in a day or two. It took them like six years of experiments to figure out the right thing. And it's been largely in use in a day or two. It took them like six years of experiments to figure out the right thing. And it's been largely in use in a lot of systems at Google. And one of the things that I really like about it is it does a lot of work on behalf of you. It gives some sort of promises
Starting point is 00:04:18 and you as a user don't have to think about those problems that much. We can talk a little bit about maybe some of the higher level promises it makes. One of the interesting things that came out of the original Spanner paper was that in the world of databases, there are the idea of cap theorem, where you have either consistency, availability, or partitioning. It's one of those good, fast, cheap, you can only ever have two. What made Spanner so groundbreaking was that, yeah, we've decided that we can actually cheat
Starting point is 00:04:46 and hit all three of those things, which normally one laughs and makes fun of and then goes back to doing serious work. But this wasn't Twitter for pets announcing this. This was Google. You folks generally tend to hire smart people who are right about these sorts of things. So that was definitely eye-opening.
Starting point is 00:05:05 I guess, first off, how is that possible and how does it do it? Firstly, maybe I should explain to you my mental model about the CAP theorem. Because according to Eric Bruver, this is a way that you think about these problems. You think about compromises in distributed systems, and there are three things that you care at the very extreme cases, consistency, availability, or network partitioning. You just pick two of these, right? But according to him, when you're getting closer to 100,
Starting point is 00:05:34 things are changing so much. You can't do all of those compromises. When Spanner was launched, they come up with this idea that we are almost 100 for all of these, but not 100. What Eric Brewer was telling was launched, they come up with this idea that we are almost 100 for all of these, but not 100. What Eric Brewer was telling was like, Captheorem is really great if you're very close to 100. But it depends on how close they are. For example, Spinner says we have five nines of availability. Is that close enough to 100%? Or it like 8-9's that we should consider
Starting point is 00:06:06 to get to what Eric Brewer is talking about so I think the controversy or the discussion around whether Spanner is actually breaking the cap theorem or not is what does cap theorem actually means or what Eric Brewer was trying to achieve by talking about those extreme cases it also feels like this is much more of a, you must be at least this far along the maturity curve
Starting point is 00:06:31 before you begin to worry about these sorts of things. I know, for example, when I build databases, what compromises do I make on cap theorem? I hit none of those three points because everything I build is fundamentally awful. At some point, you start caring about this sort of thing only after you've gotten to a point of your site doesn't fall over on its
Starting point is 00:06:47 own every 20 minutes. Absolutely. And lots of people don't need actually five nines, right? Lots of large businesses just are on three nines. And as a random business, you probably don't need that much availability as well. Like three nines is still at a level that like cloud providers
Starting point is 00:07:03 are trying to achieve. So five nines or beyond is just a really extreme. So I think what Eric Bruver was trying to explain in the CAP theorem mental model that like, he was trying to give people a way to think about these extreme problems and extreme sacrifices you have to make. So maybe in practice, you know, it doesn't necessarily fit well, because we never can achieve or there's no real reason for us to achieve that level of extreme availability or, you know, consistency or network partitioning problems. Feel free to opt out of answering this question, but Spanner was always one of those,
Starting point is 00:07:41 ooh, we're doing something at Google that we can't generally talk about similar to Borg. But then one day there was an announcement out of Google Cloud, a division I tend to spend a fair bit of time tracking, and announcing that you were releasing something called Google Cloud Spanner, which, oh, Spanner, that word sounds vaguely familiar. What is the relationship between Spanner as we've been talking about it and Cloud Spanner that is something that I can go out and buy with somebody else's credit card? So I think one of the differences, a lot of people keep asking me this question, is it like the case where we open source Kubernetes, which looks like an equivalent of Borg, but is actually like completely different systems? What is the relationship with the cloud product and the internal product? What Cloud Spanner does is it packs the internal solution we have
Starting point is 00:08:28 and deploys it to the user's nodes. In order for us to have isolation, we have to make sure that we are not sharing the same deployment environment. So there are a lot of cases that Cloud Spanner is completely behaving similarly, but it's running on our Cloud stack. So networking-wise, maybe it might be going through some different hops, but it also is trying to achieve
Starting point is 00:08:56 something similar, which is dedicated networking, similar operational model, and so on. So they are way more similar than what people think. Of course, there is always the difference between the fact that I can buy Cloud Spanner, whereas if I want to buy actual Spanner, I probably have to acquire Google, which is currently not on the roadmap for at least the next 18 months. Yeah, there are cheaper options, probably. One or two. It's interesting because whenever I've worked with various environments where I was running ops teams or, heaven forbid, being a operations engineer type, the database was always the root of all problems in the sense of, okay, we're doing disaster contingency planning.
Starting point is 00:09:42 We want to have multiple availability zones so we can wind up taking rack loss or building loss. And then expanding beyond that into going multi-region. Oh, now you have a problem because sure, you can read from databases from anywhere, but when you start having something that has a lot of rights and you want those rights to be consistent, now you're having to make a whole bunch of determinations
Starting point is 00:10:03 that all come down to something you talk about in your bio, which I will quote from now, you spent a lot of your time helping customers with architectural design trade-offs and everything that I've ever seen around databases and most other things as well are built from trade-offs. So how does that inform how you see the world? A while ago, a coworker of mine said this very useful quote that I can completely relate to. He said, any useful system has some state. And in any architecture, when I was working with any customer at large, I realized that there's no way that you can ignore data problems.
Starting point is 00:10:42 Even in systems where data is not the intensive work, a lot of things are designed around data problems. And databases world is far more complicated than anything else, even though it becomes such a fundamental thing. Like lots of people are joking about like, it's just as complicated as JavaScript frameworks, but you know, that's such an underestimation of the problem. We see audits, data loss, revenue loss
Starting point is 00:11:08 on a daily basis everywhere in the industry. Just because you don't understand or you necessarily can't pick the right solution, you see a very complex application layer, poor maintainability, low velocity, not being able to open to change, decline morale among developers and everything. It's just at the core of the system problems.
Starting point is 00:11:29 I realized that over the couple of years, I see myself recommending Martin Kleppman's book on data-intensive applications for people who are asking for architectural catalog of problems. I just realized that there's such a big overlap in terms of hard system problems as well as data problems. And one of the biggest problems with databases is databases don't keep their promises. Their edge cases, the way they implement some of the features have a lot
Starting point is 00:12:00 of edge cases and they don't necessarily are transparent about what's out there. And there are so many choices. If you think about the whole spectrum of databases, we have relational databases and then we have NoSQL. We have key value stuff. We have different storage engines. We have different persistency options. And then you have niche databases where you have document DBs or graph DBs, whatever. Basically, you have to know a lot about your problem and a lot about databases in order to get things right at the first time. So I've seen that if I can go and explain people the overall trade-offs and give them some guidance about data, it really reflects on their progress, on their overall system design. Because data keeps being always the bottleneck. I'm really surprised that we are talking a lot about
Starting point is 00:12:48 this infinite scalability when it comes to Kubernetes or Lambda or whatever. But in the end of the day, your biggest bottleneck, you're going to hit that bottleneck with your data system. And the way you handle data from your modeling, from the way that you operate your database is just really impacting the whole design of the system. For me, one of the reasons I always stayed away from databases, to be perfectly honest, is that if I screw up a web server, well, that's funny.
Starting point is 00:13:23 Everyone gets to point and laugh, we'll redeploy it, and things are back to normal. If you screw up a database, in some cases, you don't have a company anymore. And I am whatever the digital version of accident prone is. So first, this taught me to do very good backups. And two, it taught me to hire professionals to wind up handling anything with persistence, by and large, which has led to some very interesting beliefs and structures in my world around, for example, DNS as a database. What do you find that, from a customer perspective, the biggest misconceptions are that require architectural trade-offs? I think the biggest problem is, especially with cloud,
Starting point is 00:13:58 they believe that resources are infinite and it's easy to auto-scale. Some of the customers are like coming from this really dynamic workload type of environments and they believe that over on cloud we have no capacity issues plus we can just auto scale and we can dynamically resize our pool. And I think most of our compute products are sort of like making this a bigger issue because we made auto scaling too easy
Starting point is 00:14:23 without necessarily considering what it means to the overall limitations of the design. So I see a lot of people coming from that and realizing that that's not the case. And then they start to see everything more holistically. Maybe they realize that they need to start about understanding the limitations at the database layer. And that's also a very complicated problem because the things that they are to start about understanding the limitations at the database layer. And that's also a very complicated problem
Starting point is 00:14:46 because the things that they are looking at, like latency and throughput, and these are very superficial numbers to take a look at. They still have to realize, they have to still identify large specific operations and how they're going to work against their database, particular loads and so on. They're kind of like getting lost because the spectrum is really high in terms of like what to measure. And the existence studies
Starting point is 00:15:13 around standard benchmarking or standardized stuff just doesn't really help their particular use cases. So they have to do a lot of prototyping. They have to evaluate a lot of things before they are somewhat happy about their overall initial design. And this is like, if you're building things from scratch, if you're migrating over, it's just getting much harder. In some ways, it feels like working at Google puts you in a position where something that the rest of us have to struggle with, but Google doesn't. Specifically, whenever I build something, it probably doesn't need to scale until suddenly it's absolutely going to have to scale because it turns out I built something that
Starting point is 00:15:55 resonates with people. That doesn't seem like it's a Google problem because if you slap the word Google on a product that gets launched, on day one, you'll have millions of people using it. So anything you build has to scale. Therefore, it removes the entire debate of, do we build this right or do we just slap something up there and go back and refactor it later? I would think. Am I right or am I wrong? It's true that we design for scale because we expect, let's say, this number of millions of users on the day first. This is mainly true for large products that we are going to release. So Google is a very large company. We have like all these different systems that doesn't do
Starting point is 00:16:38 any consumer market thing. So there's actually a variety of different scales. But for consumer market problems, yes, it's true. We have this large XX million expectation on the first day. That's why we specifically pick this type of trade-offs, pick this type of solutions, and everything is more in an open-ended way. But there's a large spectrum of other problems inside Google that doesn't necessarily need that type of scale.
Starting point is 00:17:06 And internally, for example, we have a lot of database solutions, a lot of general storage solutions. And there is like a huge also a decision chart internally that which one you have to pick. And it really depends on the type of problem you have. So even at Google, it's true that product teams especially are more biased towards very large scale. There are a lot of small scale problems too.
Starting point is 00:17:31 In what you might be forgiven for mistaking for a blast from the past, today I want to talk about New Relic. They seem to be a relatively legacy monitoring company, and I would have agreed with that assessment up until relatively recently. But they did something a little out there. They reworked everything. They went open source, they made it so you can monitor your whole stack in one place, and most notably, from my perspective, they simplified their pricing into something that is much more affordable for almost everyone. There's even a free tier, with one user and 100 gigs per month, totally free. Check it out at newrelic.com.
Starting point is 00:18:07 In many cases, what's right for Google is absolutely not going to be right for other people. But at a certain point of scale, certain things change. And if you take a look at all of the big tech companies out there, they've all built their own programming languages. For example, Microsoft has a whole bunch of them,.NET, ASP.NET, C Sharp, etc. Facebook came out with Hack, their custom PHP thing. Apple came out with Swift. Amazon came out with CloudFormation. And Google came out with Go, which is something you were deeply involved in before a relatively recent shift over to work on Spanner. What did you do for Go? And what made you decide it was time to go stare at databases
Starting point is 00:18:46 instead? I was working on Go after the 1.0 release. So I started, I think, around 2012. The funny story is, when Go was released, I was not working at Google. And I was working in telecoms. We were working on like message parsing systems. These are like highly concurrent systems. So we were just basically looking after what else is coming, especially in the languages and runtime space to make our jobs much easier. And I was looking at Go around that time and I didn't truly understand the type system or anything. And I felt like this could be something that I can consider in the long term maybe. But I don't really feel like this language is really the best choice for my personal things that I like in a language and so on. So I just really didn't do much work.
Starting point is 00:19:41 But after I joined, I was by chance sitting right next to the compiler team in the Zurich office in Switzerland. And I was just kind of like, you know, in the conversations because there were all like language enthusiastics around me. And I started just like taking a look at things. And at that time I was working on Google Drive. So we had a bunch of migration projects with lots of network in an IO. And I started just kind of writing small things and trying out things. And as a result of that, I started to publish some of my open source tools and so on and realized that the community is just really amazing in Go. And I realized in a couple of years that maybe I should do something as a part of my full-time job. And I joined the Go team to work on generally our external API, client libraries, some tooling around them,
Starting point is 00:20:33 gRPC. gRPC was just coming around at that time. So we did a lot of work on gRPC as well. We had some sort of project to unify our stubby internal API stuff with gRPC. So I did some Go specific things. I did all these reviews for all these cloud products who wanted to support Go. I actually initiated one of the earlier projects for our cloud to support Go as a first-class language. So back then, nobody was interested in Go.
Starting point is 00:21:03 This was back in 2012. Go was still kind of like a smaller, you know, language in a community. So I initiated a lot of like, you know, the bunch of small things and they got funded. Luckily, now there are like teams actually working on those things that I initiated as a 20% funnel. That's how I started my journey with the Go team. I was necessarily just handling more of the cloud support related things. And then I switched some sort of like my interest. There was a project that was trying to make the Go runtime working on Android and iOS. I briefly worked on it, also contributed to some of the tooling. And recently before my switch I was really interested in
Starting point is 00:21:45 instrumentation and performance and debugging tools and that sort of and there was a small subset in the Go team that was handling a lot of performance related stuff I'm not sure if you're familiar with Go has Pprof support, we have an execution tracer we were thinking about maybe establishing some sort of primitives for distributed tracing.
Starting point is 00:22:09 We're thinking about some metrics APIs now. There are a bunch of small things, so we were trying to see what is the overall larger picture, what else we can do. And so I worked on that team for a while and then left that team to work on instrumentation at Google at large, because I realized that a lot of things that I was trying to do in Go was actually larger than
Starting point is 00:22:31 just Go problems. So maybe I should just go and work on the instrumentation team to get some more exposure to that problems. And then I can go back to Go and like apply them. But then I ended up being on the Spanner team because, you know, instrumentation team at Google was sponsored by the storage system. So I was really involved in a lot of storage problems as well as networking problems. That was a really gradual switch from, you know, go to other things.
Starting point is 00:22:59 But it's funny, sometimes you have to do what you can do and what's important and the most priority thing. And I like to be able to switch back and forth at Google. We have this like very loose way of collaboration. And so, I mean, we don't have to necessarily go through interviews or anything. You can just switch projects and contribute.
Starting point is 00:23:22 And sometimes overlapping different skills are very useful for the project that you're going. Like you're bringing a completely different background. On the Spanner team, for example, I have some experience before coming to Google. I actually left my previous company because of the database problems that we have. So I had a lot of experience migrating us to different systems,
Starting point is 00:23:47 like designing and evaluating databases. That was my previous job. And at some point, I don't want to name a database, but we were losing data. And it turned out to be a very fundamental issue. At a database, I don't want to name, but we spent weeks. And I spent- I will fight you if the answer to that database
Starting point is 00:24:04 that you don't want to name is DNS it is not DNS thankfully DNS is a better database than that database I'm pretty sure the funny thing is I honestly don't know the answer but I can think of at least five that fit that profile probably you can tell like if you have
Starting point is 00:24:19 maybe a short list of like two you will be able to tell which one it is I don't want to tell it like I don't think that I want to be in that fight. The thing is, I just gradually ended up leaving my previous company just because I was so tired of the storage
Starting point is 00:24:35 problems. And I joined Google because they gave me an offer first. And the second thing is, I can actually learn about storage maybe at this company because they seem to know what they were doing. Yeah. I mean, again, there's a lot of criticisms that you can lobby at a whole bunch of different companies. Google is right in that list too. And I do have a counted list that I don't have the time on this episode to read and blame you personally for all of them. But one thing that Google has always gotten very right has been
Starting point is 00:25:03 fantastic technical solutions to incredibly hard problems at scale. It's easy to bag on companies, but there's a lot of hard work that goes into making these global world scanning systems. And I think that that's something that often gets forgotten. I mean, there was a time where Google was light years ahead of absolutely everyone else. And now it seems that, oh, well, what do they do? They built this world-spanning thing that's super fast under where you are on the planet. Yeah, here's my five lines of YAML and 20 cents a month, and I could do something like that too.
Starting point is 00:25:34 We all stand on the shoulders of giants. And it's easy to forget that Google's 25 years old. They've built a lot of these things that have been transformative to the entire industry. But now it's, well, what have they done for me lately? I agree with this. I feel like we still need to do a better job in terms of understanding what it means to scale to small, right? We have these aspirational experiments. Maybe we're running on behalf of other people because
Starting point is 00:26:00 we had some unique problems in the past and we built these systems that works for us. Some of those experiments could be aspirational for other things rather than like completely translating. The interesting thing before joining to the Spanner team, I had this concern. Is this like, you know, we don't want to be this new shiny thing that nobody cares about that only works at Google scale. A lot of people on the team have been telling me they had the same concern. They didn't want to actually release Cloud Spanner. The more they were talking about customers, largely about database systems, they were asking them to release Cloud Spanner because they wanted a solution. They didn't want to deal with
Starting point is 00:26:43 consistency issues the way they used to do. Like in traditional databases, especially relational ones, it's so hard to scale rights, for example. It's such a fundamental problem, right? Like you can't scale your rights, but you want to launch this large game and you want to focus on your game.
Starting point is 00:27:01 You don't want to deal with your database layer. So just because they've been so hard on them, on the team, if you have an end solution to this, you should share as a product. And then that's how Cloud Spanner actually comes around. I was very impressed by the fact that Spanner team is thinking too much about the customers.
Starting point is 00:27:23 This is like something that I have to tell. At Google, maybe this is the only team that I've been working at. And at every meeting, I think we keep talking about customers and customer issues all the time. And that is such an incredible thing, how much they actually care about customers and necessarily prioritizing what they want.
Starting point is 00:27:45 So before you were on Spanner, you worked on Go. And you mentioned that you did telco work before that. What is the story? Generally speaking, Google staff engineers do not spring fully formed from the forehead of some god. What was the journey that got you to where you are in your career? I started actually before telecoms, I started at a small company based in London. It's headquartered in London called Multimap. They were an online mapping company that sort of like was Google Maps before Google Maps was
Starting point is 00:28:16 Google Maps. And they've been acquired by Microsoft. So my first experience in life was actually like working at a company that was acquired by Microsoft. And it was very interesting for me because I've seen two phases of the same thing, right? You have this small company that cares a lot about their business problems in a smaller scale, as well as there's this giant coming in and trying to see what kind of differentiated value that acquisition can bring. And they have a completely different scale of systems and different trade-offs and so on. So that was like a really eye-opening thing.
Starting point is 00:28:52 And just going through a lot of discussions about how things work or why things don't work in the new scale and so on really helped me a lot. After that, I started to work mainly in telecoms. And as I said, I was working on this highly concurrent message parsing rule engine type of systems. They're actually very boring, but they really helped me a lot. Just prototype and evaluate and understand the overall system problems. What are some of the limitations that I should take a look like?
Starting point is 00:29:23 What are some of the failure modes that I should care? And it was in a very actually fast-paced environment that I was able to, you know, prototype, build, change significant things, push things to production, see some oddage, iterate. So I've seen a lot of interesting things, being able to touch a lot of different things. We were also trying to modernize parts of our stack. So there was a lot of work in terms of going and discovering new things and stress testing a lot of new tools or a lot of new libraries or language runtimes and so on. As I said, I was looking at Go as a part of that work. So that really helped me to see everything in a broader sense.
Starting point is 00:30:08 And I really liked the fact that I've worked so much more outside of Google than years that I've spent at Google because I've seen problems of different scale. And I've seen different levels of flexibility when it comes to introducing new technology. And I've also seen a lot of different types of organization with different types of problems in terms of scaling the organizational issues.
Starting point is 00:30:32 That really is helping me right now to help our customers because, I mean, I can relate to a lot of things, large majority of the customers I'm going through. After telecoms, by the way, before coming to Google, I've worked for two small companies. They were trying to bootstrap. They were just at the initial design phase for lots of things, but they also had some sort of established business going on. So, you know, I had the chance to see the both sides of the things and more of a playground area where we can go and try out new stuff, as well as a lot of established problems and legacy issues and scale problems and large organizational problems that we have to tackle. It's always interesting to me to see how people come to where they are from various different places. So my last question before we wind up calling it a show
Starting point is 00:31:27 is what advice would you give to people who are looking to go from where they are in their careers to a job that looks a little bit more like yours? My biggest concern when I was earlier in my career was I was like thinking I am wasting too much time. I was feeling like I'm wasting too much time all the time. I'm working on these problems that actually doesn't make any sense in the larger scheme of the things and so on. And I was very frustrated. I was feeling very demotivated.
Starting point is 00:31:57 And if I was able to go back to that person and tell her something, I would say that just don't worry. Because at some point, all of that little experiences really just gets you to a point that you can overlap different experiences and bring some different perspective. What I realized is over time that, especially on the Go team, there were a lot of very senior engineers. But in some cases that I realized that my particular background and all of this weird stuff gave me a huge, very niche thing, but at the same time, a very general perspective that I can apply anywhere on some of the topics. And in some cases, I was the only one in the room
Starting point is 00:32:36 that actually have any experience in all across and was able to say something when we're thinking about design or like some of the goals or some of the trade-offs. So I would say like people shouldn't worry too much. And especially if they learn, if they think that they are learning, there is nothing tedious about, you know, learning, trying out, going after new shiny thing. Most people think that going after the new shiny solutions is the only way to kind of have any job security
Starting point is 00:33:06 I also disagree with that just work on the tedious things most of the problems in the world are very tedious and become the person who can recognize and identify the tedious parts and the common patterns of problems think through them and that's going to contribute to your career
Starting point is 00:33:23 or your growth more than just going after every shiny new thing. Thank you so much for taking the time to speak with me today about basically all of this. If people want to hear more about what you have to say, where can they find you? I have a Twitter account. I usually am trying to be very public about what I am working on, which helps me to hear other voices. So you can find me on Twitter. And I try to write a lot. Nowadays, I'm not writing that much,
Starting point is 00:33:53 but I have two blogs. So I'll give you the links. You can probably link them. Excellent. And I will put links to those in the show notes. Jana Dogen, staff engineer at Google. I'm cloud economist, Corey Quinn. And this is Screaming in the show notes. Jana Dogen, staff engineer at Google. I'm cloud economist Corey Quinn, and this is Screaming in the Cloud.
Starting point is 00:34:09 If you've enjoyed this podcast, please leave a five-star review on Apple Podcasts. Whereas if you hated this podcast, please leave a five-star review on Apple Podcasts and then leave a comment telling me what I got wrong written in Go. This has been this week's episode of Screaming in the Cloud. You can also find more Corey at Screaminginthecloud.com or wherever Fine Snark is sold.
Starting point is 00:34:42 This has been a humble pod production stay humble
