Screaming in the Cloud - Serverless Observerless with Aviad Mor

Episode Date: May 25, 2021

About Aviad: Aviad Mor is the Co-Founder & CTO at Lumigo. Lumigo's SaaS platform helps companies monitor and troubleshoot serverless applications while providing actionable insights that prevent business disruptions. Aviad has over a decade of experience in technology leadership, heading the development of core products at Check Point from inception to wide adoption.

Links:
Lumigo: https://lumigo.io/

Transcript
Starting point is 00:00:00 Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. This episode is sponsored by ExtraHop. ExtraHop provides threat detection and response for the enterprise, not the starship.
Starting point is 00:00:41 On-prem security doesn't translate well to cloud or multi-cloud environments, and that's not even counting IoT. ExtraHop automatically discovers everything inside the perimeter, including your cloud workloads and IoT devices, detects these threats up to 35% faster, and helps you act immediately. Ask for a free trial of Detection and Response for AWS today at extrahop.com slash trial. This episode is sponsored in part by our friends at Lumigo. If you've built anything from serverless, you know that if there's one thing that can be said universally about these applications, it's that they turn every outage into a murder mystery. Lumigo helps make sense of all of the various functions that wind up tying together to build
Starting point is 00:01:28 applications. It offers one-click distributed tracing so you can effortlessly find and fix issues in your serverless and microservices environment. You've created more problems for yourself? Make one of them go away. To learn more, visit Lumigo.io. Welcome to Screaming in the Cloud. I'm Corey Quinn. I periodically talk about how I bolt together a whole bunch of different serverless tools in horrifying ways to write my newsletter
Starting point is 00:01:58 every week. At last count, I was up to something like four API gateways, 29 lambda functions, and counting. How do I figure out if something's broken in there? Well, honestly, I just keep clicking the button until it works, which is a depressingly honest story. Now, not that it doesn't work for everyone. Today's promoted episode is brought to us by Lumigo. And my guest today is Aviad Moore, their CTO and co-founder. Aviad, thanks for taking the time to suffer my slings and
Starting point is 00:02:25 arrows. Thank you, Corey. I'm very glad to be here today. So let's begin at, I guess, the very easy high-level question. What is Lumigo and is Lumigo an accepted alternate pronunciation? So Lumigo is a monitoring and debugging platform for serverless environments. And yes, you can call it whatever you want as long as it's Lumigo. What we do is we integrate with the customer's AWS account. We do a very quick connection to its lambdas, and then we're able to show him exactly what's going on in his system, what's going well, what's going wrong, and how we can fix it. So let's make sure that we hit a few points here at the beginning. It is
Starting point is 00:03:12 AWS specific at this time? Yes, it is. We're not officially exclusive with AWS, but right now we see the most interesting serverless environments in AWS, so it's a pretty easy call. But we are keeping our eye open to Google, Microsoft, even Oracle. Oracle Cloud has some phenomenally interesting serverless stories that I don't think the world is paying enough attention to yet. But one of these days, I'm hoping that that's going to change just because they have so much savvy locked up in that platform. Right. They do have serverless functions.
Starting point is 00:03:46 They acquired the Iron.io folks a while back, and those people were way ahead of Lambda at the time. Right, right. So we're waiting for the big breakout of serverless and Oracle, and then we'll build the best monitoring solution for them. So something else that I think that you have successfully navigated as far as, I guess, the traps that various observability tooling falls into, you also talk on your site about monitoring AWS Lambda as the center around which everything winds up being captured. You also, of course, integrate with the things that tie directly into it, such as API Gateway or API Gateway, as I'm sure they mispronounce it at AWS. But that's sort of where you stop. You don't also show all of the container workloads that folks are running. And, oh, hey, while we have access to your API, here's a whole story about EC2 and RDS and all the rest. And eventually, it feels like everything in the fullness of time tries to become Datadog version two. And that
Starting point is 00:04:46 always drove me nuts because I want something that's going to talk to me specifically about what it is that I'm looking at in a serverless application context, not trying to be all things to all people at once. Is that a fair assessment of the product strategy you folks have pursued? Right. So we're very focused on serverless. We think there's a lot of interesting things that we can do there, and we're actually seeing more and more use cases of serverless.
Starting point is 00:05:13 And it is important to say that when we say serverless, it's very clear what is serverless. So Lambda, of course, and API, Gateway, DynamoDB, S3, and so on, there's a lot of services in that ecosystem and seeing them all being tied together in a serverless cloud application we're able to do all of that to monitor it not only monitor it at the high level but also you know get into the details and show you things which are very specific because like this is what we do all day and sometimes all night. And then there are those boundaries of where do we go beyond serverless.
Starting point is 00:05:53 So there are some hybrid environments out there. And when I say hybrid, the easy hybrid, which is you have two different applications which just happen to be on the same AWS account. One of them is completely serverless, and then the other one is EC2. So that's kind of hybrid. But the more interesting hybrid is those applications which start with an API gateway and a Lambda, and then are directly connected to something else, which is maybe Fargate, ECS, EKS, and so on. So we are very much focused on serverless,
Starting point is 00:06:31 but we are getting also a lot of requests from our customers. So show us also the other parts. We're starting to look at that, but we're not losing our focus. Our focus is still very much on the serverless while allowing you to tie together if you do have some other aspects in your environment to see them all together. So you've done a number of things
Starting point is 00:06:55 that I would consider best in class as you've gone through this. First and foremost, let's begin with the easy stuff. It doesn't appear that your dashboard, your tooling itself is purely serverless itself. I can tell this because when doesn't appear that your dashboard, your tooling itself is purely serverless itself. I can tell this because when I click around in your site, the site loads super quickly. It's not waiting for cold starts or waiting for the latency inherent to the Lambda. It's clear that you have not gone so far down the path of being, I guess, religiously correct
Starting point is 00:07:20 around everything must be serverless all the times in favor of improving customer experience. That's something that I've seen a number of different vendors fall into the trap of. Why is the dashboard so slow to load? Ah, because everything is itself a Lambda function. Is that accurate, or have you just found a way to improve Lambda function performance in an ungodly way? We are serverless. We call ourselves serverless first, but the customer is always, you know, he's really the first. So if there's a place where serverless is not the best solution, we're going to use whatever is the best solution. But the truth is we're, I'd say something like 99% serverless and specifically anything which is dashboard facing, customer facing,
Starting point is 00:08:02 that's actually completely serverless. So we did have to put in a lot of work. But also, I have to say that AWS went a very long way in the last two years, allowing us to give much better latencies in different parts of the dashboard. So all of that is serverless. And it goes together with the new features of Lambdas and API gateways and a lot of small things we had to do in order to provide the best experience to the customer. The next thing that I think was interesting
Starting point is 00:08:35 as far as capturing the way in which people use these things, one of the earliest problems I had in the early days of this new breed of serverless tools was getting everything instrumented correctly. It felt like it took, in some cases, more time to get the observability pieces working than it did to write the thing in the first place. So you're integrating out of the gate with a lot of the right things, as best I can tell. Your website advertises that you integrate with the serverless framework. You integrate with a bunch of other processes as well. Chalice, which I haven't seen used in anger too much,
Starting point is 00:09:07 but okay. Terraform, which everyone's using, Stackery, et cetera. Is AWS's SAM on the list as well? Yes, it actually is. And once we started seeing more and more users using SAM, we had to provide a way to allow them to easily do the integration.
Starting point is 00:09:24 Because one of the things that we learned is, you know, our users are developers. And just like you said, they don't want to spend time on anything which is not like doing the thing that they want to do, especially in serverless, right? Because the whole serverless premise is, you know, work on what you do best and don't spend time on everything else. So we actually spend a lot of time ourselves in order to make the integrations as easy as possible, as quickly as possible. And that also means that working with a lot of different tools to fit all the different environments our users are using out there.
Starting point is 00:09:59 It looks like you're doing this through the application judiciously of a bunch of custom layers. In other words, whatever you wind up using winds up being built as an underpinning of the existing Lambda functions. So it's super easy to wind up grabbing the necessary dependencies for anything that you folks support, and without having to go back and refactor existing applications. Is that directionally correct? Right, that's correct. We're using layers in order to, on one hand, to do this deep integration we do with the Lambda,
Starting point is 00:10:29 allowing us to do different instrumentations, collecting the data that's being passed into the Lambda, being passed out of the Lambda on one hand, but on the other hand, so the developer doesn't have to make any code changes and he can do whatever changes he wants to do, doesn't have to think about Lumigo at any point, and serverless layer does everything for him automatically. How do you handle support of the Lambda at Edge functions, which seem an awful lot like regular Lambda functions, except they're worse in nearly every single way every time I've tried to use them. In fact, in my experience,
Starting point is 00:11:07 the best practice has been to immediately rip out Lambda at Edge and replace it with something else. Recently, it was formally disclosed that they only ran in a subset of 13 regional cache locations, and they still took a full CloudFront distribution update cycle
Starting point is 00:11:23 every time you did a deployment, which dramatically slowed everything down for deploying it. They were massively expensive to run at significant scale. And they would log to whatever region was closest. So it was a constant game of whack-a-mole to figure out what was going on. But, you know, other than that, they were great. How do you approach those? Lambda and Edge are not very easy to use.
Starting point is 00:11:44 And they're like, let's say they're full of surprises because not everything they do is exactly what you find in the documentation. But again, since our users are using them, we had to make sure that we give them proper support. And giving them the proper support other than running and collecting the data is things that you mentioned, like the fact that it will log to the specific region it's running in. So you have to go and collect all this data from different places and you don't really know exactly where it's going to run. So the main thing here is just to make things easy. It's a bit of a mess when you're looking at it directly and taking all the information, putting it in one place so you as a user can just go ahead and read it
Starting point is 00:12:31 and you don't care where it's running and what it's doing. That was the main challenge, which we worked on and added to the product. So across the board, it seems like you folks have been evolving in lockstep with the underlying platform itself. Have you had time to evaluate their new CloudFront functions, I believe is what they're calling it, or is it CloudFront workers? I can never quite keep it straight. Between all the different providers, all the words start to sound alike, but the thing that
Starting point is 00:12:56 only runs for a millisecond or two, only in JavaScript, only in the actual, all the CloudFront edge locations, et cetera, et cetera. Rather than fixing Lambda at Edge, they decided to roll something completely different out. And I haven't looked into anything approaching the observability story yet because I'm still too angry about it. Right.
Starting point is 00:13:16 So, you know, there's a lot of things coming out and we're also like very close partners with AWS. So in many cases, we're actually beta users of new services or new functionality in Lambda. And one of the hardest parts is in the end, we cannot spend all our time checking everything new. So this is one of the things which is still in the to-do list. We're going to check it very close in a very close time.
Starting point is 00:13:43 I think it's interesting to see how we can actually use it and is it as quickly as they say. What they say usually works. We'll see if it works already today or do we have to wait a little bit until it works exactly like they said. But no, that's one of the things that are on my to-do list. I'm really looking forward to checking it out.
Starting point is 00:14:03 So it looks like once I set this up and it starts monitoring my account or observing my account, I know, I know, observability is just hipster monitoring, but no one agrees with me on that, so I'm going to keep rolling with it anyway just to irritate people. It looks like I can effectively more or less click a button
Starting point is 00:14:18 and suddenly you will roll out an underlying Lambda layer to all of my existing Lambda functions. How does that get maintained whenever I wind up, for example, doing a new deployment with the serverless framework or something like it that isn't aware of that underlying layer? So it presumably would revert that layer itself in the definition. Or am I misunderstanding how that works? No, no, you're actually getting it right. So unless you, for example, are using a serverless plugin,
Starting point is 00:14:45 so this is an integral part of your deployment, one of the things that we need to do is to automatically identify that a deployment is happening so we can automatically update the Lambda layer to be the right one. So you won't miss anything. And this deep integration, which is happening you know
Starting point is 00:15:07 without the user having to know anything about it this is i think one of the most important parts because in serverless as you know you have so many components and very easily you can reach you know hundreds of lambdas which are things that we we're seeing so if a user has to take care and maintain something across 100 lambdas or more, you can be sure that it won't be maintained because he has something much more important to do. So behind the scenes, we immediately, as a deployment is happening, we can recognize that it's happening
Starting point is 00:15:41 and update the layer that's required. And by the way, now the layers have a new part called extensions, which allow us and everybody else to do a lot more with those layers, basically allowing the code to run in parallel to the Lambda. So this is a new thing that AWS has started to roll out, and we think will allow us to give even better experience to our users. As I look across the, I guess, the ecosystem of different approaches to this stuff, one thing that
Starting point is 00:16:13 has always annoyed me about a whole raft of observability and monitoring tools is they wind up charging me whatever it is they charge me. It's generally fine. And I don't really have a problem with that. You know in advance going in what things are going to cost you. Incidentally, what is your pricing model? So our pricing model is according to the number of invocations you have. So we have basically two models which we're using right now, and each one can decide what he wants better. So if you want to know in advance exactly how much you're going to pay, you can go with the tiered model, meaning I want to pay for, let's say,
Starting point is 00:16:51 a million invocations each month, and then you're sure that you're paying exactly for what you have a budget for. And it's always related to how much your AWS account is working, so similar to how much your AWS account is working. So similar to how much you're paying for your Lambdas. And then there's another way, which is dynamic pricing, which is very similar to serverless payment.
Starting point is 00:17:20 So it's really according to the number of invocations you have. You don't need to decide in advance. And at the end of each month, according to the number of invocations you have, you get the bill. And that way, it's not based on the invocations in general. It's exactly according to the number of invocations. And let's be clear, if I wind up exceeding the number of invocations under my plan, it just stops tracing and observing these things. It doesn't break my app. Yeah, right. Always good to triple check those things. It seems like it might hurt.
Starting point is 00:17:50 That's very important. You're totally correct. And we never do anything bad to your lambdas. That's written on the top of our door, never hurt a lambda. And we make sure that nothing bad happens. We just stop collecting data. And by the way, even as you pass your limit, we still collect the basic metrics so you can see what's going on in your system, but you won't be able to see the rich information, all the information allowing you to do the debugging or seeing the full traceability end-to-end of all the invocations and see how they're connected to each other. This episode is sponsored in part by Thinkst. This is going to take a minute to explain, so bear with me. I linked against an early version
Starting point is 00:18:37 of their tool, canarytokens.org, in the very early days of my newsletter. And what it does is relatively simple and straightforward. It winds up embedding credentials, files, that sort of thing, in various parts of your environment, wherever you want to. It gives you fake AWS API credentials, for example. And the only thing that these things do is alert you whenever someone attempts to use those things. It's an awesome approach. I've used something similar for years. Check them out. But wait, there's more. They also have an enterprise option that you should be very much aware of.
Starting point is 00:19:10 Canary.tools. You can take a look at this, but what it does is it provides an enterprise approach to drive these things throughout your entire environment. You can get a physical device that hangs out on your network and impersonates whatever you want to. When it gets NMAP scanned or someone attempts to log into it or access files on it, you get instant alerts. It's awesome. If you don't do something like this, you're likely to find out that you've gotten breached the hard way. Take a look at this. It's one of those few things that I look at and say, wow, that is an line with what I would expect. But the thing that irritates me then is, great, I know what I'm going to be paying you folks on a monthly basis, and that's fine. And then I use the monitoring tool, and it costs me over three times as much in AWS charges, both direct and indirect, where it's, oh, now CloudWatch is going to suddenly be the largest component of my bill. And data transfer
Starting point is 00:20:17 for sending everything externally winds up spiking this into the stratosphere. What's your experience been around that? So since we are collecting data and we are doing API calls, it will affect your AWS bill. But because we don't want to irritate you or anybody else, we are putting a lot of focus to see that we're doing the absolute minimal possible effect on your system. So for example, as we're collecting data from your Lambda, we're doing our best to add only milliseconds to the running time of your Lambda
Starting point is 00:20:54 so you don't end up paying a lot more for the runtime. Or for the API calls or data transfer, we have a lot of optimizations that we did. So the billing on your AWS account is really very, very small. It's not something that you will notice. And sometimes when people do ask us, we go together with them into their account and show them exactly how their billing was affected by Lumigo. So they'll have assurance that nothing crazy is going on there. Which is, I guess, one of the fundamental problems of the underlying platform itself. I have a hard time blaming you for any of this stuff.
Starting point is 00:21:34 This is the perpetual joyless story of getting to work with a variety of AWS services. It's not something that I see that you folks have a good way around, just on a basis of how the underlying platform works. Yeah, and then there are a lot of different prices for a lot of small things that you do, and you need to be able to collect it all in order to have the big picture of the effect. And yeah, we don't have a silver bullet for it,
Starting point is 00:22:02 but we can show exactly where we're going, what we're adding to show how low it is. One of the things that I think is not well understood for folks who are not into the serverless ecosystem is just how these applications tend to look. In most organic environments, you'll see a whole bunch of Lambda functions that are all tied together with basically spit and bailing wire. They talk to each other either directly on the back end, which is an anti-pattern in many respects, let's not kid ourselves, or alternately, they're approaching it through a lens of, we're going to now talk to each other through hardened REST APIs,
Starting point is 00:22:38 which is generally preferred, but also a little finicky to get up and running. So at some point, you have a request come in and it winds up bouncing around through a whole bunch of different subsystems. Tracing and a lot of the observability story around serverless is figuring out, all right, somewhere in that rat's nest, it winds up failing.
Starting point is 00:22:56 Where did it break? What was it that actually threw the exception? What was it that prevented something from working? Or alternately, adding latency. Where is the bulk of the time serving that request being spent? And you would think that this is the sort of thing that AWS could handle itself. And they've tried with their x-ray distributed tracing option, which more or less feels like a proof of concept demonstrating what not to do. And if you take a look from their application view and all the rest, it is the best sales pitch I can possibly imagine for any of the serverless
Starting point is 00:23:31 monitoring tools that I've seen. Because it is so badly articulated. You have to instrument all of your stuff by hand. There's none of this, oh, I'll look at it and figure out what it's talking to and build a automated trace approach the way that Lumigo does. And that's always frustrated me because I shouldn't have to turn every weird analysis into a murder mystery. Am I missing something obvious in how this could be done using native tools directly? Or is it really as bad as I believe it is? I won't say it as bad as you're saying it is. I think X-Ray is a great place to start with. So, you know, if you have to get hard, especially if you're trying to do it yourself. That's usually the wow part when people start using Lumigo. When we show them a demo, it's seeing how everything is tied together.
Starting point is 00:24:36 So once you see how everything is tied together, the whole system, which components are talking to each other, and how they're affecting each other. And for example, if one of them goes down, does it mean that the whole system now is not working? Or maybe it wasn't that important and everything is working, I'll fix it next week. But I think the most important part is actually what we call the transactions. So as you said, there's an API call at the very beginning with an API gateway or AppSync, and then it can go through dozens of components. Some of them are not directly related. So it's like a Lambda calling, putting something into a DynamoDB, which triggers a DynamoDB stream. And then another Lambda is being called and so on and so on, it's crucial to be able to see how everything is connected,
Starting point is 00:25:27 both very visually, so you can understand it. There's only so much you can understand when looking at a list as a human being, right? You need to see it visually, how everything is connected. But then after you understand how everything is connected
Starting point is 00:25:43 in this specific transaction, if, for example, you have an issue in a specific invocation, you need to understand the story of that invocation. And, you know, maybe you're looking at a Lambda which starts to throw an exception and you didn't change anything in its code today, yesterday, or the day before that. So take care of that exception. But the root cause is probably not in that Lambda. It's probably upstream. So you need to be able to understand exactly what was the chain of events, all the calls being made until that specific Lambda was called to see the data being passed, including the data that that Lambda maybe passed to
Starting point is 00:26:26 a third-party API like Stripe or PayPal and what it got in return. And only when you're able to see all of that, you're able to solve an issue quickly, not a murder mystery like it might be, time over time without having to think about how will I make sure that I make all the code changes in order to keep getting these transactions. So taking a look at the somewhat crowded space, if I'm being perfectly honest with you, of the entirety of, let's call it the serverless observability space, or observerless as as I'm a big fan of calling it. What is it that differentiates Lameego from a number of other offerings
Starting point is 00:27:10 that people could wind up pulling out of a hat? Right, so that's a great question. And every time somebody asks me, the first thing I can say is, the more I see people getting into this space, I think that that's a great sign. Because that means there's more serverless activity. There's more companies doing serverless and it means that our serverless
Starting point is 00:27:32 space is interesting. People see an opportunity there and they want to try and solve the issues that we're seeing there. And I think that there's a few things. One of them is the serverless expertise. So if you look at a lot of the big companies, like I'll mention Datadog and New Relic, they're doing a lot of great things. But in the end, in the serverless environment, there are very specific things which you need to know how to do in order to be able to do that distributed tracing.
Starting point is 00:28:06 The distributed tracing, which allows you to correlate specific transactions together and then bring in those metrics which are relevant and bring in the logs which are relevant for a specific transaction. That's a lot of hard work which we put in in order to be able to do the transactions with the distributed tracing in the best way possible, and then showing it to you in the simplest way possible. And today, I think that Lumigo does that in a very good way. And also, if we're looking around at other players, which are not only the big ones, also players which are doing more specifically serverless, I think that if we're looking at companies which are very focused on serverless and serverless is the thing they do, you'll still see that Lumigo is the one which is
Starting point is 00:28:55 doing serverless the most, let's call it. So as serverless is expanding, we're still not becoming generic, something that we mentioned before. And this allows us not only to do the best distributed tracing, but also allow us to show you out of the box a lot of issues which might be hiding in your environment. So it's not only, okay, you have an exception here. It's also more specific things to serverless, like for example, because it's event-driven. So sometimes you'll get duplicate events. The Kinesis or SQS might send you over and over the same event. The fact that we can show you it automatically and put a spotlight on it can save
Starting point is 00:29:39 you a lot of time in trying to understand why things are not working the way you think they should be working. And allowing us to scan your environment and show you misconfigurations, which are specific to serverless. This is the kind of things that once you use Lumigo, you get automatically without having to do anything special. And that can save you a lot of time. I think that's a relatively astute position to take.
Starting point is 00:30:07 I'm a big believer in getting the right tool for the right job. I don't necessarily want the one single pane of glass to look at everything. I think that that is something that is very often misunderstood. Yeah, I might be using three or four different clouds in different ways.
Starting point is 00:30:23 I don't need to see a roundup of all of them. I don't necessarily care what the billing looks like on all of them. I don't necessarily want to spend my time thinking about different aspects of these things juxtaposed to one another. And it's a pain in the butt to have to sort through to find the thing I actually care about. So yeah, on some level, I definitely want there to be a specific tool. And let's be clear, you have a terrific stack of things that you integrate with for alerting, for opening tickets, for remediation, or issues as the case may be. Nomenclature is always a distraction. Don't at me. But yeah, across the board, I see that you're doing a lot of things right. That if I were going to be entering this
Starting point is 00:30:58 space, I would make a lot of those decisions very similarly and then expect to hear it from the internet. You've been around for years now and are continuing to grow. What's next for you folks? So that's a great question, which I ask myself every morning. I'll actually take together the two things that you mentioned. One is how we're focused on serverless. And the second is where do we want to grow from there? And when you do this great focus, you have to make sure that what you're focusing on is big enough. So as we're growing, we're very happy to see that the serverless is growing with us. We're seeing more and more places using serverless. We see a lot more users, companies, developers going into serverless.
Starting point is 00:31:46 And we see new types of users. So it's not only those bleeding edge technologies that people want to use and they are really trying to find out how they can use it. We're seeing more and more places which are like, for example, enterprises that had maybe one architect in the beginning that said, okay, I'm going to use serverless. And now a year or two afterwards, they see that it's working and it's saving them money. They're able to build faster. And now it's both spreading virally to other teams, which are starting to use that. And also the initial project, which was started two years ago, is now growing and becoming bigger and more complex. And also that team, which was just starting with serverless two years ago, now has maybe a second
Starting point is 00:32:41 and third product. So what we're doing is we're looking how we can give serverless better and better monitoring for the new services that are entering that field. And also we're very strong believers in that developers today are doing much of that monitoring or observability. You can choose whatever you want. And that means that it goes all the way into debugging. So we think that doing those two together, bringing together the monitoring and debugging is a great opportunity for our users just to save them more time because it's the same person who's going to do both those things and trying to keep being best of breed in serverless and doing those two
Starting point is 00:33:27 together. I think that's going to be hard. And that's exactly the challenge that we're taking. And we want to see how we're doing it the best. And I think that that is probably the best way to approach it. If people want to learn more about what you're up to, how you view these things, and ideally kick the tires on Lomigo and see for themselves where can they find you. So easiest thing you can do, just search for Lumigo in Google. You'll get to lumigo.io. And from there, it's very easy to try us out.
Starting point is 00:33:57 And we'll, of course, put links to that in the show notes. Thank you so much for taking the time to speak with me today. I really appreciate it. Thank you, Corey. It was great fun and looking forward for taking the time to speak with me today. I really appreciate it. Thank you, Corey. It was great fun and looking forward for the next time. Absolutely. Avi Admor, co-founder and CTO at Lumigo. I'm Chief Cloud Economist Corey Quinn, and this is Screaming in the Cloud.
Starting point is 00:34:18 If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice. Whereas if you hated this podcast, please leave a five-star review on your podcast platform of choice. Whereas if you hated this podcast, please leave a five-star review on your podcast platform of choice, along with a long, rambling comment telling me how very wrong I am on the wonder that is Lambda at Edge. If your AWS bill keeps rising and your blood pressure is doing the same, then you need the Duck Bill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duck Bill Group works for you, not AWS. We tailor recommendations to your business,
Starting point is 00:34:58 and we get to the point. Visit duckbillgroup.com to get started. This has been a HumblePod production. Stay humble.
