Screaming in the Cloud - Episode 28: Serverless as a Consulting Cash Register (now accepting Bitcoin!)

Episode Date: September 19, 2018

Is your company thinking about adopting serverless and running with it? Is there a profitable opportunity hidden in it? Ready to go on that journey? Today, we're talking to Rowan Udell, who works for Versent, an Amazon Web Services (AWS) consulting partner in Australia. Versent focuses on specific practices, including helping customers with rapid migrations to the cloud and going serverless.

Some of the highlights of the show include:

- Australia is seeing an increase in developers using serverless tools and services, and in serverless being used for operational purposes
- Serverless seems to be either a brilliant fit or not quite ready for prime time
- Misconceptions include keeping functions warm by setting up scheduled invocations
- Simon Wardley talked about how the flow of capital can be traced through an organization that has converted to serverless
- The concept of paying thousands of dollars up front for a server is going away
- Spend whatever you want, but be able to explain where the money is going (dev vs. prod); companies will re-evaluate how things get done
- Serverless is framed as either an evolution or a revolution; it's transformative to a point
- We risk winding up with a large number of shops where, when something breaks, people don't have the experience to fix it; practical experience can be gained through sharing
- Seek developer feedback and perform testing, but know where and when to stop
- With serverless, you have little control of the environment; focus on automating the parts you do control
- The serverless movement: people have opinions and want you to know them
- Understand the continuum of options for running your application in the cloud, learn the pros and cons, and pick the right tool
- The reconciliation between serverless and containers will need to play out; changes will come at some point
- Blockchain + serverless + machine learning + Kubernetes + service mesh = raise an entire seed round

Links:

- Rowan Udell's Blog
- Rowan Udell on Twitter
- Versent on Twitter
- Lambda
- Simon Wardley
- Open Guide to AWS Slack Channel
- Kubernetes
- Aurora
- DigitalOcean

Transcript
Starting point is 00:00:00 Hello, and welcome to Screaming in the Cloud, with your host, cloud economist Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. This week's episode of Screaming in the Cloud is generously sponsored by DigitalOcean. I would argue that every cloud platform out there biases for different things. Some bias for having every feature you could possibly want offered as a managed service at
Starting point is 00:00:37 varying degrees of maturity. Others bias for, hey, we heard there's some money to be made in the cloud space. Can you give us some of it? DigitalOcean biases for neither. To me, they optimize for simplicity. I polled some friends of mine who are avid DigitalOcean supporters about why they're using it for various things, and they all said more or less the same thing. Other offerings have a bunch of shenanigans around root access and IP addresses. DigitalOcean makes it all simple. In 60 seconds, you have root access to a Linux box with an IP. That's a direct quote, albeit with profanity about other providers taken out. DigitalOcean also offers fixed price offerings. You always know what you're going to wind up paying this month,
Starting point is 00:01:22 so you don't wind up having a minor heart issue when the bill comes in. Their services are also understandable without spending three months going to cloud school. You don't have to worry about going very deep to understand what you're doing. It's click button or make an API call and you receive a cloud resource. They also include very understandable monitoring and alerting. And lastly, they're not exactly what I would call small time. Over 150,000 businesses are using them today. So go ahead and give them a try. Visit do.co slash screaming, and they'll give you a free $100 credit to try it out.
Starting point is 00:02:00 That's do.co slash screaming. Thanks again to DigitalOcean for their support of Screaming in the Cloud. Welcome to Screaming in the Cloud. I'm Corey Quinn. I'm joined this week by Rowan Udell. Welcome to the show. Thanks, Corey. Long-time listener, first-time caller. Really excited to be on the podcast, given the quite impressive list of attendees you've
Starting point is 00:02:23 had so far. Well, that's the trick, is some people say that when you wind up bringing in guests, you should wind up making sure that there are people who are much better than you are at this stuff. For some of us, that's not so much a strategy as it is our only option. Nice, words of wisdom. So what are you up to these days? So I work for an AWS consulting partner here in Australia, Sydney specifically. Versant is an AWS-only premier partner, so we don't do any other clouds.
Starting point is 00:02:56 Our angle on this is when you have a heart problem, you want to go to a heart surgeon. You don't just go to a general practitioner. We want to be the heart surgeons for the AWS cloud. There's about 15 ways we could take that in an unfortunate metaphor direction, and I'm going to resist temptation. But I see what you're saying. There's something to be said for specialization. To that end, do you wind up focusing on particular aspects of the AWS environments your customers are running? Is it just anything involving Amazon on the tin?
Starting point is 00:03:20 You'll go ahead and dive into it. Where's the line? Yeah, so we have a number of specific practices that we focus on. We do help our customers with some rapid migrations, just wholesale moving data centers into the cloud. My personal area of
Starting point is 00:03:35 focus is in the serverless space. We do a lot of stuff in between that. The cloud market in Australia is probably a little bit different to the cloud market in the United States. So we work with what our customers want to do. To be fair, I've never been to Australia. It's on my short list of places that I want to visit, but haven't quite made it to, mostly because I tend to think in terms of cartoons. Therefore, anything in Australia is kangaroos, incredibly venomous, and I will probably die as soon as I arrive.
Starting point is 00:04:05 How accurate is that naive, ridiculous, I guess, stereotype catalog description? Yeah, look, it's a tough one because there aren't kangaroos hopping through the streets of Sydney, unfortunately. I hate to disabuse you of the notion, but there are indeed a lot of poisonous creatures here and we just kind of get along with them. So, you know, I don't want to set expectations too high. You do have to watch out for some small animals, but it's a pretty nice place. Perfect. So in the grand scheme of things, it's probably less hazardous than say hanging out in US East One. Yeah,
Starting point is 00:04:41 yeah. I think you'd be right there. that, at least to my consultant brain, sound like, hey, there's a very profitable opportunity hidden in here somewhere. It feels like that might be a little bit ahead of the curve, at least for the markets that I tend to spend my time in. What are you seeing there? Yeah, look, I think you're on the right track for where we are now. I'm definitely optimistic about the opportunities. So we work a lot in the enterprise space here in Australia, and we're definitely not yet seeing widespread serverless adoption for their flagship application, as much as I would love to see it sooner rather than later. But what we are seeing is a real pickup of the serverless tool services by the developers themselves. You know, it's a real kind of grassroots movement.
Starting point is 00:05:42 We're also seeing a lot in the operational space. This idea of having to run a server just to run a scheduled job every so often is starting to go away. And as Amazon comes out with more and more features in this space, we're seeing it being picked up more and more for the things around the core business applications. In my experience so far, the enterprises are mostly adopting this initially, as you mentioned, it's going to replace the cr business applications. In my experience so far, the enterprises are mostly adopting this initially, as you mentioned, it's going to replace the cron job. It's going to wind up turning into something you can dip your toe into and see what winds up happening. And oh, okay, the learning curve was a little steep, but once we're there, wow, there's a lot less for us to worry about. And that starts to
Starting point is 00:06:21 turn into, well, what else can we, this is an amazing hammer. What else can we hit with it? What else looks like a nail in our environment? And you see some aspects of that where it becomes a brilliant fit, and others where it's, how do we put this, not quite ready for prime time. We're going to replace our mainframe batch scheduled jobs with a bunch of lambda functions. Yeah, that's 30 years of code you're staring and moving. Maybe that's not the first project I would tackle. Oh, for sure. There's definitely a large piece of the maturity story there that needs to still be explored. And organizations need to, quote unquote, go on that journey.
Starting point is 00:07:01 The things that I'm seeing a lot is, you know, there's still a lot of misconceptions. You know, one of the ones that I frequently see and like to kind of call out is this idea of keeping functions warm, you know, setting up a scheduled invocation to ping the function so that you don't hit that cold start time. And that for me is a real code smell. You know, I really want to ask the question, do you really want to use serverless for this? And in many cases, the answer is still yes, it's still probably appropriate. But you want to ask the question, do you really want to use serverless for this? And in many cases, the answer is still yes, it's still probably appropriate. But you need to change the way you approach it. People are still approaching it with a kind of an old world way of thinking.
Starting point is 00:07:35 And so they need to kind of move towards a more event-based and item potent approach to how they handle their events and their inputs. I think that right now we're seeing a sort of evolution as far as how people interact with this. So, oh, cold starts are a problem, so we're going to have a second Lambda function that just hits that on a schedule. That winds up, I think you're right, I think that's a code smell,
Starting point is 00:07:56 but I also would argue it's probably a temporary phenomenon. I believe I've seen multiple AWS employees talking publicly about how reducing cold start time is a priority for them. And given that it's one of the first things people start complaining about, I can't imagine that it wouldn't be. We've decided to go into a room and ignore everything every customer has ever said to us isn't really the Amazon ethos. No, they're pretty good at doing what their customers ask. Yeah, look, I think about it, even in the last couple of years that serverless has become a thing. And even in that timeframe, I've seen the whole cold start discussion really drop down in terms of the amount of airtime it's getting.
Starting point is 00:08:37 It used to be a lot worse than it is right now. And with the increased competition in this space, I can't imagine it's going to slow down the rate of improvement at all. What I found interesting about ServerlessConf somewhat recently was that Simon Wardley got up on stage and talked about how you can now trace the flow of capital through an organization that's converted to serverless. And that's a really neat idea. The counterpoint is that even talking to startups that were born cloud-native, that never had any servers to speak of, and you take a look at what their AWS bills look like, which is my bread and butter,
Starting point is 00:09:14 and every time I see thousands of dollars in Lambda or API gateway charges, I see hundreds of thousands of dollars or more in EC2 instances. It isn't anywhere on the radar right now for any shop I've ever looked at as far as a major cost driver for what they're doing. Even in shops that are pure Lambda and API gateway, they're seeing orders of magnitude more spend just from things like their CDN or data transfer or data storage. Yeah, totally.
Starting point is 00:09:45 And the way I kind of explain this, I was giving a talk at one of the local meetups here. The days of the business handing over brown paper bags full of cash to the IT department to, quote unquote, solve their problems, if not gone, going the way of the dinosaurs. The idea that you would pay tens of thousands of dollars for a server up front, if you think about where these technologies are heading, it's just going to seem ludicrous in another couple of years compared to how it was, say,
Starting point is 00:10:18 just five, 10 years ago when you had to buy all the tin yourself. So I think we're only going to see that process speed up as the businesses get used to it. Once a business has gotten used to paying for their compute by the millisecond, they're not going to want to go back. It's just not going to make any sense. I think you're right. Increasingly, there's a sense in IT corporate culture where you're no longer just throwing money over the wall to IT and letting them do whatever they want to do. This is sometimes being misinterpreted as, oh, now finance is cracking down because we're spending too much money. And I don't think that's the case. What I see that manifesting as instead is spend whatever you want, but you've got to be
Starting point is 00:10:59 able to explain what that money is going to. How much of it is dev versus prod, though they call it R&D and cogs? How much of that is tied to each project? And how do we wind up doing a showback model to understand what impacts our current business? I think that as that continues to grow as a cultural phenomenon, that we're going to start seeing significant advances in serverless shining a light on this. That said, today, I think we're very far from a point where the spending on serverless is material in many shops. Yeah, for sure. And I think this is where it comes back to the fact
Starting point is 00:11:34 that this is becoming more and more a go-to tool for the developers out there. So as developers play with these technologies on their own and get more familiar with it, I like to think about what's going to happen to the next generation of developers, those that have started their development careers with serverless as an option.
Starting point is 00:11:53 And for them, I think it's going to be, obviously, just much less of a big deal as those of us who remember the time before serverless is. And so I think we'll see that shift, depending on how you measure developer lifetimes. And that's obviously another conscientious topic. It's going to accelerate as those people hit the job market and become the decision makers. I think this is one of those transformative moments where companies start to reevaluate
Starting point is 00:12:21 how things get done. There have been evolutionary moments and there have been revolutionary moments. And I think in some ways, serverless tends to wind up having a foot in each of those worlds. Yeah, you should talk to my marketing department. I think our latest tagline is evolution or revolution. So yeah, we'd have to agree with you 100% in that space. It's transformative to a point. I think that there's going to wind up being a lot of false starts in this space where people wind up looking at things like,
Starting point is 00:12:53 okay, now we have developers writing code and throwing it over the wall. There's no need for anyone in an operational capacity. All we need is people who know how to write code. There's no value to historical sysadmins or similar. That winds up being dangerous past a certain point. There's always going to be a role for having things interrelate, for understanding what's going on under the hood. And when you're this many layers of abstraction above what's going on at the hardware level, diagnosing complex failure modes
Starting point is 00:13:26 is going to require a tremendous degree of experience, expertise, exposure to this type of problem space. What I wonder about is whether we're at a position relatively soon where we're suddenly going to wind up with large numbers of shops where when things break, people don't really have that same level of hands-on experience or exposure to it. Just because the path that most of us walked to become senior engineers or their equivalent today isn't really there in the same way. I mean, I talked about this in some of the early episodes of the show when I had a lot fewer listeners. So yeah, I'll hit it again. Why not? There's a question of where the pipeline's coming from. There aren't too many jobs like there were when I was coming up, where you go work at a help desk at an ISP, or you go ahead
Starting point is 00:14:11 and start working in a data center and very quickly realize that humans don't work well in giant neon-lit rooms that are incredibly loud and cold, and you can never tell what day it is or what time it is, and everything feels the same. That's not a great work environment, but you learn a lot in those moments. Where do you go to learn those things now? Yeah, look, I think it's a really good point. And I've got a similar kind of background as yourself,
Starting point is 00:14:36 spending some time in operations and then moving into more of a dev-based role. And yeah, it's going to be tough in general because the kind of work that uh the operations staff are doing is going to change as well as that undifferentiated heavy lifting gets moved towards the cloud providers so i think there's a changing story there no matter which way you look at it you know this whole idea of managing fleets of servers by hand is well on its way out at the scale and the enterprise end of town. As to how people will get that practical experience, I feel like we're seeing a lot more
Starting point is 00:15:11 sharing in the community around that. You've obviously had this topic in your podcast before, and there's so much good material out there today in terms of observability and things in the monitoring space. And so I'd like to think, and maybe I'm being a little bit optimistic, that the community itself will help drive this out there. When you see problems of this nature, you'll be able to go search and see other people that have had problems of this nature. One of the first things that comes to mind is the Open Guide for AWS Slack channel and also the AWS Slack channel.
Starting point is 00:15:45 These are really good resources for bouncing ideas off people and finding out, hey, what do I do? Where do I look? You don't have to do it on your own as much these days, I'd say. I think that that's always been something of the case where you learn from the mistakes and experiences, trials and tribulations of others. The challenge has always been, where are those people hanging out? And I'll throw a link to the OpenGuide Slack team into the show notes for the Amazon official Slack team. Apparently it's invite only, but if you talk to people on Twitter, they can usually wind up cracking open an invitation.
Starting point is 00:16:19 It's a somewhat strange model. Yeah, not the usual scalable approach by Amazon, but anyway, I'm sure it'll improve if people complain loud enough. Absolutely. Amazon does that for a living. Every time they wind up releasing a new service, they're in a position where they're going to go ahead
Starting point is 00:16:35 and release something new and shiny that's kind of okay. And over time, the features improve to the point where it goes from something that's relatively shaky to something you might actually trust your bank to run. And how long it takes depends greatly upon which particular service we're talking about. I would argue that some that have been around for a decade now are still borderline unfit
Starting point is 00:16:56 for purpose. But no need to pick fights. I don't need to. The other bit I wanted to talk a bit potentially about is the whole serverless development story. You've been very forthright and open with your serverless development experience, both in your newsletter and on Twitter, the highs and lows. And I think that's been a really good thing for a lot of people to see, especially those with less of a development background. Those kinds of stories coming out through the community in all its forms, I think that gives people a real corpus of work to lean upon and do it for themselves. And at the end of the day, there's no substitute for doing it yourself. Oh, absolutely. I think it's one of those
Starting point is 00:17:36 problems where it's very easy to wind up in an unfortunate place where you're suddenly talking about something that you've never touched yourself. And that becomes a very inauthentic story. One of the things that I found most exciting about my own experience with working with serverless is I'll mention how I built something, and then people who are active in the space, people who work on serverless technologies at AWS, look at me like I'm nuts when I would say something like, well, here's a code path so I can go ahead and run my Lambda function locally on my laptop for
Starting point is 00:18:12 testing. And they would say, why in the world would you do it that way? Just go ahead and run it in the Lambda environment, and that's all you need to do. Well, that's because I don't necessarily want to dive in face first when I don't know how deep the water is. I want to be able to run this thing in a traditional environment, if it turned out that not being in a serverless environment was a constraint I had to deal with. In time, I've gotten away from that pattern. But early on, it felt like a very clear thing to do. And I was very surprised by the pushback I got from that model.
Starting point is 00:18:45 Yeah, look, and I'm really glad you mentioned it because it's one of the things I feel particularly conflicted about in this space. And part of that is because of my geographical location. You know, I definitely think that trying to mock the cloud on your local machine is a big anti-pattern, you know. Oh, I believe in mocking the cloud in my newsletter. That I'm okay with. But we've seen a number of different projects out there, even AWS's own SAM CLI, that let you run at least parts of the cloud locally. And for me, I'm always looking
Starting point is 00:19:19 for ways to get developer feedback as soon as possible. It know, it just makes sense on so many levels. By the same token, I think it's a bit of a slippery slope. You know, if you start mocking out your Lambda runtime and your API gateway environment, then the next step would be DynamoDB, and that opens up a whole can of worms that I wouldn't recommend to any developer. So it's one of those things where,
Starting point is 00:19:43 obviously, some level of testing and maybe mocking is appropriate, but it's really hard to know where you should stop and then just move into the actual environment that you're going to run in. And it's probably one of those cases where the first time you do it, you don't know where that line is. It's only after you've been burnt and after you've got experience that you learn where that line is, where that diminishing returns on investment is for the amount of setup and maintenance that you do in your dev and test environment. For me, I think a lot of people fall down that slippery
Starting point is 00:20:16 slope of trying to get everything running locally and quickly tie themselves in knots. I think if you think about the testing pyramid where you have your base of unit tests, a smaller layer of integration tests, and finally a very small segment of end-to-end tests, you know, that really hasn't changed in the serverless model. Some people think that maybe you should have kind of more integration testing if you can,
Starting point is 00:20:42 since most of what's running is not actually your code, it's other services. But I would argue that, you know, testing those other services is something that's probably not going to be worth the maintenance required in the long run. And you should still really focus on that unit testing layer. That's where the most return on investment for your time as a developer is going to be. Yeah, I think part of the challenge is generally going to be understanding the model when you first start. I mean, my first Lambda function was, okay, I hit a remote API, I grab a bunch of data out of it, and then I shove that into DynamoDB. Well, since those are two external, it's an external input, and it's an external output, it doesn't really matter where I run that thing. My initial inclination was to run that in a cron job on an instance I parked somewhere. I already have a few of those sitting around, not that big of a deal, but let's go ahead and try this. But to that end, the code that I wrote was still very much aimed
Starting point is 00:21:33 at the possibility of being able to run it in that original environment. Now, as you wind up going down the road to other alternative models where, okay, now you wind up having this operate on a request that comes in at the edge. Well, okay, at this point, you are well past the point where you're going to be able to mock that locally to any reasonable fashion. Increasingly, I'm starting to arrive at a model where running it outside of the AWS environment makes less and less sense, given the nature of what I'm doing. And now that I understand where the guardrails are, as well as the limitations and constraints imposed by that environment, it seems to make sense now to just completely avoid local development. Provided, of course, you have reasonable deployment methodologies where I change a line of code, I press a button, how long until I see the output of that test run? That's where it starts to get hazy.
Starting point is 00:22:29 Yeah, look, and I think the piece of advice that I generally have for developers getting started in that is definitely to focus on that automation. In the serverless model, you control so little of the actual environment that you're running in. If you look at the number of configurations option on a Lambda function, there's only really about half a dozen that make an impact on it. There are some extra config options that aren't actually on the function itself,
Starting point is 00:22:54 things like concurrency limitations and that. So they kind of push the boundary a little bit, but really focusing on automating the parts that you do control, that deployment model, like you mentioned, are definitely worth the effort that it takes to do. When you're starting to really integrate with the AWS services, like you're talking about, you know, particularly Lambda at Edge,
Starting point is 00:23:15 for me, I think the piece of advice for developers getting started there is to really change the structure of your code and isolate those touch points. A lot of people think that they can write just the same kind of code that they run locally and then throw it up in the cloud. And while that will run, definitely, you're going to have a harder time
Starting point is 00:23:34 testing it and troubleshooting it. So really letting the serverless model impact how you write code is a good thing. And it's something that I don't see enough of in a lot of the material around it. I actually did see an AWS presentation that mentioned it specifically just the other day. So I think these ideas are getting a bit more traction.
Starting point is 00:23:53 And what we'll realize is that we can still use all of the good things that we've learned about software development over the last 10, 20 years. We just need to make it fit the new model. Right now, I think part of the challenge is if you talk to any different group of people, they're going to have their own opinions on what a best practice looks like. The only consensus is that everyone else is wrong. I think that this is something that still needs to emerge as far as, I guess, a global somewhat half-baked understanding
Starting point is 00:24:22 of what you should be doing versus what you shouldn't be doing. Otherwise, you very rapidly fall into a model where, well, every other shop except this one's doing something wrong, probably because they're terrible. And it winds up turning into schisms and arguments about stuff that frankly doesn't advance the state of the art any further than it already is. I don't find those arguments to be compelling. I don't find them to be interesting. I'd rather focus about workflows than spend time having pointless arguments about what framework you should be using to deploy your functions. Yeah, look, I couldn't agree more. I guess the cynic in me said what you've described sounds like software development in a nutshell, not even specific to serverless or any particular area. People have opinions and they want you to know that they have them,
Starting point is 00:25:06 for better or worse. And you see it now with people who are focusing on containers and Kubernetes who are sneering down their nose at the serverless movement as if it's some completely foreign idea that there's absolutely no commonality between. There's a lot of points of commonality. I think there are different points on a spectrum,
Starting point is 00:25:23 and increasingly I think we're going to see more and more integration points between them. But right now, there just seems to be this giant argument pile that isn't getting people anywhere. Yeah, I see that a lot with some of our customers. It feels to me like they have what I've affectionately coined the term a kube hammer. They've heard about Kubernetes. They think it's going to solve all their problems. And the reality is it probably will solve many of the problems they have. But like most pieces of technology, it brings with it its own set of challenges. And when you get caught up in the hype, you can sometimes forget that. And I think it's just about understanding that continuum of options that people have when it comes to how they run their
Starting point is 00:26:10 applications in the cloud, learning the pros and cons, and picking the right tool for the right job. And that's always been a challenge. Do you find that right now there's anything approaching a reconciliation between the container folks and the serverless folks? Or is that still going to wind up taking entirely too long to evolve? Yeah, unfortunately, I think it's going to take a while. I think we're still in the early days of both these technologies. And we really have to play it out to see where it's going to go. It does feel kind of strange when people are having
Starting point is 00:26:45 these big arguments because these things are more similar than they are different. You know, it's just a slight difference in where the levels of abstractation are and also how you orchestrate. But these things are much of a muchness in the grand scheme of things. I think we'll see hopefully a better agreement on what jobs belong in what particular approach. And that will make things easier in the kind of short to medium term once people really get a handle on which kind of workloads perform better or maybe more cost
Starting point is 00:27:22 effective in which particular approach? I still think that we're in the early days of serverless. There's going to be a lot of changes that wind up hitting in the relatively near future, not just in how people think about this, but to the capabilities of the platform themselves. I have not heard anything from the Lambda team that implies that they are considering it a finished service that's now going to sit there as it is forever. Of course, there are going to be new enhancements made. Of course, old constraints are going to move. It's interesting watching that evolve, and it's interesting seeing what it turns into as that continues to grow.
Starting point is 00:27:56 But from the other side of it, you often wonder how much of what we're doing today with serverless is going to look like a constrained child's toy in a few years? Definitely. I think we are very much in the early days still, even though it's been a couple of years. Some of the examples coming out in the last few weeks, things like serverless Aurora, a really interesting change,
Starting point is 00:28:20 are going to give people a bit more to think about when they make that decision between servers, containers, or serverless. I also think, as some of your guests on the podcast have already talked about, there's this whole story of observability and monitoring in the serverless space, when you don't control so much of the stack that you're running on. It's really early days in that space, and I'm really looking forward to seeing what comes out. There's a few startups in the space.
Starting point is 00:28:50 The other story is obviously security. Security is always incredibly important, and there's so much room for improvement there that, yeah, again, I think when we see the next generation of developers come along who at least have this as their starting point rather than racking and stacking servers as their kind of default approach, it's going to look completely different. And it's going to happen in the next five years, I'd say.
Starting point is 00:29:16 What astounds me more than anything else that I've seen in the entire serverless space was Tim Wagner, the GM of Lambda, leaving Amazon at the height of the serverless craze to go work at Coinbase to sell Bitcoins to people. I'm sure he has his reasons. I'm sure that there are a wide variety of things that make perfect sense once you see it. I've never known people at AWS to make foolish decisions intentionally, but I've got to say, as something of a cryptocurrency skeptic, I am not seeing it. Yeah, look, maybe he just wants to live right on the edge.
Starting point is 00:29:57 I'd be a little bit afraid of getting cut, but that's what some people like. It really feels like you can tie the words blockchain, serverless, machine learning, Kubernetes, and, oh, I don't know, service mesh, all together into one sentence and
Starting point is 00:30:13 raise an entire seed round with nothing more than that sentence on a slide. I'm exaggerating, but I'm not sure I'm doing it by much. Look, I think you've just made a few enterprise architects out there foam at the mouth, hearing those words in one go.
Starting point is 00:30:29 Sorry, we'll have to put an explicit warning on this one. I'm tempted to do it in a proposal just to see how it goes, but I'd be afraid of it getting accepted. Exactly. It's one of those, yeah, we don't even need to see your pitch. Here's a check.
Starting point is 00:30:41 And that's just a disturbing reality. I'm not sure I want to play in yet, but we'll see. So where can people find more about what you're working on? How can they find you on the internet? So I blog occasionally at rowanudell.com. I'm on Twitter as well. That's probably the easiest way to get a hold of me. Perfect. We will throw a link to Twitter and your blog both into the show notes and see what winds up coming out of it. Anything else you'd like to share? Anything else you're working
Starting point is 00:31:10 on that makes for a good story that people should look into or be aware of? Yeah, look, just trying to get the whole development story out there for the serverless spectrum. I think there's so many new bits and pieces coming out in this space that it's really hard to keep up.
Starting point is 00:31:25 So I encourage everyone to get involved and share what they're working on, share what they're learning, because that's how we're going to move forward. Sounds like a plan. Rowan Udell, I'm Corey Quinn, and this is Screaming in the Cloud. This has been this week's episode of Screaming in the Cloud. You can also find more Corey at screaminginthecloud.com or wherever fine snark is sold.
