Screaming in the Cloud - Bringing Visibility to Cloud Backups with Chadd Kenney

Episode Date: May 27, 2021

About Chadd

Chadd Kenney is the Vice President of Product at Clumio. Chadd has 20 years of experience in technology leadership roles, most recently as Vice President of Products and Solutions for Pure Storage. Prior to that role, he was the Vice President and Chief Technology Officer for the Americas, helping to grow the business from zero in revenue to over a billion. Chadd also spent 8 years at EMC in various roles, from Field CTO to Principal Engineer. Chadd is a technologist at heart who loves helping customers understand the true elegance of products through simple analogies, solution use cases, and a view into the minds of the engineers that created the solution.

Links:
Clumio: https://clumio.com/
Clumio AWS Marketplace: https://aws.amazon.com/marketplace/pp/prodview-ifixh6lnreang

Transcript
Starting point is 00:00:00 Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at the Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. This episode is sponsored in part by ChaosSearch. As basically everyone knows, trying to do log analytics at scale with an ELK stack is expensive, unstable, time-sucking, demeaning, and just basically all-around horrible.
Starting point is 00:00:49 So why are you still doing it, or even thinking about it, when there's ChaosSearch? ChaosSearch is a fully managed, scalable log analysis service that lets you add new workloads in minutes and easily retain weeks, months, or years of data. With ChaosSearch, you store, connect, and analyze, and you're done. The data lives and stays within your S3 buckets, which means no managing servers, no data movement, and you can save up to 80% versus running an ELK stack the old-fashioned way. It's why companies like Equifax, HubSpot, Klarna, Alert Logic, and many more have all turned to ChaosSearch. So if you're tired of your ELK stack falling over before it suffers, or of having your log analytics data retention squeezed by the cost, then try ChaosSearch today and tell them I sent you.
Starting point is 00:01:39 To learn more, visit ChaosSearch.io. This episode is sponsored in part by our friends at Lumigo. If you've built anything from serverless, you know that if there's one thing that can be said universally about these applications, it's that it turns every outage into a murder mystery. Lumigo helps make sense of all of the various functions that wind up tying together to build applications. It offers one-click distributed tracing so you can effortlessly find and fix issues in your serverless and microservices environment. You've created more problems for yourself?
Starting point is 00:02:14 Make one of them go away. To learn more, visit lumigo.io. Welcome to Screaming in the Cloud. I'm Corey Quinn. Periodically, I talk an awful lot about backups and that no one actually cares about backups, just restores. Usually, they care about restores right after they discover they didn't have backups of the thing that they really, really, really wished that they did. Today's promoted guest episode is sponsored by Clumio, and I'm
Starting point is 00:02:39 speaking to their VP of product, Chad Kenney. Chad, thanks for joining me. Thanks for having me. Super excited to be here. So let's start at the very beginning. What is a Clumio? Possibly a product, possibly a service, probably not a breakfast cereal, but again, we try not to judge. Awesome.
Starting point is 00:02:58 Well, Clumio is a backup as a service offering for the enterprise focused in on the public cloud. And so our mission is effectively to help simplify data protection and make it a much, much better experience to the end user and provide a bunch of values that they just can't get today in the public cloud, whether it's invisibility or better protection or better granularity. And we've been around for a bit of time, really focused in on helping customers along their journey to the cloud. Backups are one of those things where people don't spend a lot of time and energy thinking about them until they are, I guess, befallen by tragedy in some form.
Starting point is 00:03:34 Ideally, it's something minor, but occasionally it's, oh yeah, I used to work at that company that went under because there was a horrible incident and we didn't have backups. And then people go from not caring to being overzealous converts. Based upon my focus on this, you can probably pretty safely guess which side of that quick chasm I fall into. But let's start, I guess, at your positioning. You said that you are backup for the enterprise. What does that mean exactly? Who are your customers? We've been trying to help customers like into their cloud journey. So if you think about many of our customers are coming from the on-prem data center. They have moved some of their applications, whether they're lift-and-shift applications or whether they've stalled doing net-new development on-prem and doing all net-new development in the public cloud.
Starting point is 00:04:16 And we've been helping them along the way in solving one fundamental challenge, which is, how do I make sure my data is protected? How do I make sure I have good compliance and visibility to understand? Is it working? And how do I be able to restore as fast as possible in the event that I need it? You mentioned at the beginning, backup's all about restore, and we 100% agree. I feel like today you get this clomped together series of solutions, whether it's a script or it's a backup solution that's moved from on-prem or it's a snapshot orchestrator. But no one's really been able to tackle the problem of help me provide data protection across all of my accounts, all of my regions, all of my services that I'm using within the cloud. And if you look at it, the enterprise has transitioned dramatically to the cloud and don't have great solutions to latch on to to solve this fundamental problem.
Starting point is 00:05:05 And our mission has been exactly that, bring a whole bunch of cool innovation. We're built natively in the public cloud. We started off on a platform that wasn't built on a whole bunch of EC2 instances that look like a box that was built on-prem. We built the thing mostly on Lambda functions, very event-driven, all AWS native services.
Starting point is 00:05:23 We didn't build anything, proprietary data structure for our environment. And it's really been able to build a better user experience for our end customers. I guess there's an easy question to start with of why would someone consider working with Clumio instead of AWS Backup, which came out a few months after reInvent,
Starting point is 00:05:42 I want to say 2018, but don't quote me on that, may have been 2019, but it has the AWS label on the tin, which is always a mark of quality. Well, there's definitely a fair bit to be desired on the AWS backup front. And if you look at it, what we did is we spent, you know, really before going into development here, a lot of time with customers to just understand
Starting point is 00:06:02 where those pains are. And I've nailed it, kind of the four or five different things that we hear consistently. One is that there's near zero insights. I don't know what's going on with it. I can't tell whether I'm compliant or not compliant or protecting not enough or too much. They haven't really provided sufficient security
Starting point is 00:06:20 on being able to air gap my data to a point where I feel comfortable that even one of my admins can't accidentally fat finger a script and delete, whether the primary copy or secondary copy. Restore times have a lot to be desired. I mean, you're using snapshots. You can imagine that doesn't really give you
Starting point is 00:06:36 a whole bunch of fine-grained granularity and the timeframe it takes to get to it, even to find it, is kind of a time-consuming game. And they're not cheap. The snapshots are at $0.05 per gig per month. And I will say they leave a lot to be desired for that cost basis. And so all of this complexity kind of built in as a whole has given us an opportunity to provide a very different customer experience.
Starting point is 00:06:58 And what the difference between these two solutions are is we've been providing a much better visibility just in the core solution. And we'll be announcing here on May 27th, Clumio Discover, which gives customers so much better visibility than what AWS backup has been able to deliver.
Starting point is 00:07:15 And instead of them having to create dashboards and other solutions as a whole, we're able to give them unique visibility into their environment, whether it's global visibility, ensuring data is protected,
Starting point is 00:07:26 doing cost comparisons, and a whole bunch of others. We allow customers to be able to restore data incredibly faster at fine-grained granularities, whether it's at a file level, directory level, instance level, even in RDS, we go down to the record level of a particular database with direct query access. And so the experience as a whole
Starting point is 00:07:45 has been so much simpler and easier for the end consumer that we've been able to add a lot of value well beyond what AWS Backup uses. Now, that being said, we still use snapshots for operational recovery at some level where customers can still use what they do today. But what Clumio brings is an enhanced version of that by actually using air gap protection inside of our service for those data sets as well. And so it allows you to almost enhance AWS backup at some level, if you think about it, because AWS backups are really just orchestrating the snapshots. We can do that exact same thing too, but really bring the air gap protection solution on top of that as well. I've talked about this periodically on the show,
Starting point is 00:08:23 but one of the last, I guess, big migration projects I did when I was back in my employee days before starting this place was a project I'd done a few times, which was migrating an environment from EC2 Classic into a VPC world. Back in the dark times before VPCs were a thing, EC2 Classic is what people used. And they were not just using EC2 in those environments, they were using RDS in this case. And the way to move an RDS database is to stop everything, take a final snapshot, then restore that snapshot, which is their equivalent of backup, to the new environment. How long does that take? It is non-deterministic. In the fullness of time,
Starting point is 00:09:03 it will be complete. That wasn't necessarily a disaster restoration scenario, so much as it was just a migration. And there were other approaches we theoretically could have taken, but this was the one that we decided to go with based upon a variety of business constraints. And it's awkward when you're sitting there just waiting indefinitely for,
Starting point is 00:09:21 it turns out, about 45 minutes in this case. And you think everything's going well, but there's really nothing else to do during those moments. And that was, again, a planned maintenance. So it was less nerve wracking than the site is down and people are screaming. But it's good to have that expectation brought into it. But it was completely non-transparent. There was no idea what was going on. And actual disasters, things are never that well-planned or clear-cut. And at some level, the idea of using backup restoration as a migration strategy is kind of a strange one, but it's a good way of testing backups. If you don't test your backups, you don't really have them in the first place, at least as that's always been my philosophy.
Starting point is 00:09:59 I'm going to theorize, unless this is your first day in business, that you sort of feel the same way, given your industry. Definitely. And I think the interesting part to this is your first day in business, that you sort of feel the same way, given your industry. Definitely. And I think the interesting part to this is that you have the validation that backup's occurring, which is you need visibility on that functioning at some level. Like, did it actually happen? And then you need the validation that the data is actually in a state that I can recover. Task failed successfully. Exactly.
Starting point is 00:10:21 And then you need validation that you can actually get to the data. So there's snapshots, which give you this full entire thing. And then you got to go find the thing that you're looking for within it. I think one of the values that we've really taken advantage of here is we use a lot of the APIs within AWS first to get optimization in the way that we access the data. So as an example, on your EC2 example, we use EBS direct APIs and we do change block tracking off of that. And we send the data from the customer's tenancy into our service directly, right? And so there's no egress charges. There's no additional cost associated to it. It
Starting point is 00:10:54 just goes into our service and the customer pays for what they consume only. But in doing that, they get a whole bunch of new values. Like now you can actually get file level indexing. I can search globally for files in an instance without having to restore the entire thing, which seems like that would be a relatively obvious thing to get to. But we don't stop there. You could restore a file. You could go browse the file system. You could restore to an AMI. You could restore to another EC2 instance. You could move it to another account. In RDS, not an easy service to protect, I will say. You get this game of, I've got to restore the entire instance and then go find something to query the thing. And our solution allows you direct query access. So we can see a schema browser. You
Starting point is 00:11:37 can go see all of your databases that are in it. You can see all the tables, the rows in the table. You can do advanced queries to join across tables, join across tables to go get any results. And that experience, I think, is what customers are truly looking for to be able to provide additional values beyond just the restoration of data. I'll give you a fun example that a SaaS customer was using.
Starting point is 00:11:57 They have a centralized customer database that keeps all of the config information across all of the tenants. I used to do something very similar with Route 53 and everyone looks at me strangely when I say it, but it worked at the time. There are better approaches now, but yeah, very common pattern.
Starting point is 00:12:11 And so you get into a world where it's like, I don't want to restore this entire thing at that point in time to another instance, and then just pull the three records for that one customer that they screwed up. Instead, it would be great if I could just take those three records from a solution and then just import it into the database. And the funny part to this is that the time it takes to do all these things is one component. The accidentally forgetting to delete all the
Starting point is 00:12:35 stuff that I left over from trying to restore the data for weeks at a time that now I pay for in AWS is just this other thing that you don't ever think about. It's like inefficiencies build in with the manual operations that you build into this model to actually get to the data sets. And so we just think there's a better way to be able to see and understand data sets in AWS. One of my favorite genres of fiction is reading companies' DR plans for how they imagine a disaster is going to go down. And it's always an exercise in hilarity. I was not invited to those meetings anymore after I had the temerity to suggest that maybe if the city is completely uninhabitable and we have to retreat to a DR site, no one cares about this job that much. Or if US East 1 has burned to the ground over in AWS land, that maybe your entire staff
Starting point is 00:13:25 is going to go quit to become consultants for a hundred times more money by companies that have way bigger problems than you do. And then you're not invited back. But there's usually a certain presumed scale of a disaster where you're going to swing into action and exercise your DR plan. Okay, great. Maybe the data center is not a smoking crater in the ground. Maybe even the database is largely okay. What if you lost a particular record or a particular file somewhere? And that's where it gets sticky in a lot of cases, because people start wondering, do I just spend the time and rebuild that file from scratch, kind of? Do I do a full restore? All I have is either nothing or the entire environment. You're talking about row-level restores, effectively,
Starting point is 00:14:07 for RDS, which is kind of awesome and incredible. I don't think I've ever seen someone talking about that before. How does that map as far as, effectively, a choose-your-own-disaster dial? There's a bunch of cool use cases to this. You've definitely got disaster recovery, so you've got the instance where somebody
Starting point is 00:14:23 blew something away and you only need a series of records associated to it. Maybe the SQL query was off. You've got compliance stuff. Think about this for a quick sec. You've got an RDS instance that you've been backing up. Let's say you keep it for just even a year. How many versions of that RDS database has AWS gone through in that period of time? So that when you go restore that actual snapshot, you've got to rev the thing to the current version, which should take you some time to get up and running before you can even query the thing. And imagine if you do that, like years down the road, if you're keeping databases out there and your illegal teams asking for a particular thing for discovery, let's say, and you know, you've got to now go through all of these iterations to try to get it back.
Starting point is 00:15:03 The thing we decided to do that was genius on the Hinge team was we wanted to decouple the infrastructure from the data. So what we actually do is we don't have a database engine that's sitting behind this. We're exporting the RDS snapshot into a Parquet file and the Parquet file then gets queried directly from Athena. And that allows us to allow customers to go to any timeframe to be able to pull not specific database engine data into whether it's a restore function or whether I want to migrate to a new database engine. All I can pull that data out and reimport it into some other engine without having to have that infrastructure be coupled so closely to the data set. And this was really kind of a way for customers to be able to leverage those data sets in all sorts of different ways in the future with being able to query the data directly from our platform. It's always fun talking to customers
Starting point is 00:15:54 and asking them questions that they look at me as if I've grown a second head, such as, okay, so in what disaster scenario are you going to need to restore your production database to a state that was in nine months ago? They look at me like I've just asked a ridiculous question because, of course, they're never going to do that. If the database is restored to a copy that's back up more than 15 minutes or so in the past, there are serious problems. That's why the recovery point objective or RPO of what is your data window of loss when you do a restore is so important for these plannings.
Starting point is 00:16:24 That's great. Okay, then why do you have six years of snapshots of your database taken on an interval going back all that time if you're never going to ever restore to any of them? Well, something, something compliance. Yeah, there are better stories for that. But people start keeping these things around
Starting point is 00:16:41 almost as digital pack rats. And then they wind up surprised that their backup bill has skyrocketed. I'm going to go out on a limb and presume, keeping these things around almost as digital pack rats. And then they wind up surprised that their backup bill has skyrocketed. I'm going to go out on a limb and presume, because if not, this is going to be a pretty awkward question, that you do not just backup management, but also backup aging as far as life cycles go. Yeah. So there's a couple of different ways that are fun for us. We see multiple different tiers within backup, right? So you've got the operational recovery methodology,
Starting point is 00:17:05 which is what people usually use snapshots for. And unfortunately, you pay that at a pretty high premium because it's high value. You're going to restore a database that maybe went corrupt or got somehow updated incorrectly or whatever else. And so you pay a high number for that for, let's say, a couple of days, or maybe it's just even a couple hours. The unfortunate part is that's all you've got really in AWS to play with. And so if I need to keep long-term retention, I'm keeping this high value item now for a long duration. And so what we've done is we've tried to optimize the data sets as much as possible. So on EC2 and EBS, we'll dedupe and compress the data sets and then store them in S3 on our tenancy. And then there's a lower cost basis for the customer. They can still
Starting point is 00:17:45 use operational recovery. We'll manage that as part of the policy, but they can also store it in an air gap protected solution so that no one has access to it, and they can restore it to any of the accounts that they have out there. Oh, separating access is one of those incredibly important things to do, just because, first, if someone has completely fat-fingered something, you want to absolutely constrain the blast radius. But two, there is the theoretical problem of someone doing this maliciously, either through ransomware or through a bad actor, external or internal, or someone who has compromised one of your staff's credentials. The idea being that people with access to production should never be the people who have access to, in some cases, the audit logs or the backups themselves in some cases. So having that gap, an air gap, as you call it, is critical.
Starting point is 00:18:28 The only way to do this really in AWS, and a lot of customers are doing this and then they move to us, is they replicate their snapshots to another account and vault them somewhere else. And while that works, the downside, and it's not a true air gap in a sense, it's just effectively moving the data out of the account that it was created in. But you double the cost. So that sucks because you're keeping your local copy and then the secondary copy that sits in the other account. The admins still have access to it.
Starting point is 00:18:53 So it's not like it's just completely disconnected from the environment. It's still in the security sphere. So if you're looking at a ransomware attack, trust me, they'll find ways to get access to that thing and compromise it. And so you have vulnerabilities that are kind of built into this altogether. So what's your security approach to keeping those two accounts separated? The sheer complexity that it takes to wind up assuming a role in that other account, that no one's going to be able to figure it out because we've tried for years and can't get it to work properly. Yeah, maybe that's not plan A. Exactly. And I feel like while you can club these
Starting point is 00:19:21 things together in like various scripts and solutions and things, people are looking for solutions, not more complexity to manage. If you think about this, backup is not usually the thing that is strategic to that company's mission. It's something that protects their mission, but not drives their mission. It is our mission, and so we help customers with that. But it should be something we can take off their hands and provide as a service versus them trying to build their own backup solution as a whole. translate well to cloud or multi-cloud environments, and that's not even counting IoT. ExtraHop automatically discovers everything inside the perimeter, including your cloud workloads
Starting point is 00:20:12 and IoT devices, detects these threats up to 35% faster, and helps you act immediately. Ask for a free trial of detection and response for AWS today at extrahop.com slash trial. Back when I was an employee, if I was being honest, people said, so what is the number one thing you're always sure to do on a disaster recovery plan? My answer is I keep my resume updated because on some level you can always quit and go work somewhere else. That is honest, but it's also not the right answer. In many cases, you need to start planning for these things before you need them. No one cares about backups until right after you really needed backups. And keeping that managed is important.
Starting point is 00:20:52 There are reasons why architectures around this stuff are the way that they are, but there are significant problems around how a lot of AWS implements these things. I wound up having to use a backup about a month or so ago when some of my crappy code deleted data. Imagine that from a DynamoDB table. And I have point-in-time restores turned on. Cool. So I just rolled it back half an hour, and that was great. The problem is, is there was about four megabytes of data in that table, and it took an hour to do the restore into a new table and then migrate everything back over, which was a different colossal pain. And I'm sure there are complicated architectural reasons under the hood, but that is almost as slow as if someone just retyped it all by hand,
Starting point is 00:21:29 and it's an incredibly frustrating experience. You also see it with EBS snapshots. You back up an EBS volume with a snapshot. It just copies the data that's there. Great. Every time there's another snapshot taken, it just changes the delta, and that's the storage it gets built to. So what does that actually cost? No one really knows. They recently launched direct APIs for EBS snapshots. You can start at least getting some of that data out of it if you just write a whole bunch of code, preferably in a Lambda function, because that's AWS's solution for everything. But it's all plumbing solution, where if you're spending all your time building the scaffolding and this tooling, backups are right up there with monitoring and alerting. For the first thing, I will absolutely
Starting point is 00:22:08 hurl to a third party. I 100% agree. I know you're a third party. You're hardly objective on this. But again, I don't partner with anyone. I'm not here to show for people. You can't buy my opinion on these things. I've been paying third parties to back things up for a very long time because that's what makes sense. The one thing that I think we hit on at the beginning a little bit was this visibility challenge. And this was one of the big launch around, including Discover, that's coming out on May 27th there, is we found out that there was near zero visibility. And so you're talking about the restore times, which is one key component.
Starting point is 00:22:48 Then you restore after four hours, you don't have what you thought you did. And so I would love to see, am I backing things up? How much am I paying for all of these things? Can I get to them fast? I mean, the funny thing about the restore that I don't think people ever talk about, and this is one of the things
Starting point is 00:23:03 that I think customers love the most about Clumio, is when you go to restore something, even that DynamoDB database you talked about earlier, you have to go actually find the snapshot in a long scroll. So first you have to go to the service, to the account, and scroll through all of the snapshots to find the one that you actually want to restore with. By the way, maybe that's not a monster amount for you, but in a lot of companies, that could be thousands, tens of thousands of snapshots they're scrolling through,
Starting point is 00:23:35 and they've got a guy yelling at them to go restore this as soon as possible, and they're trying to figure out which one it is. They hunt and peck to find it. Wouldn't it be nice if you just had a nice calendar that showed you, here's where it is, and here's all the different backups that you have on that point in time and then just go ahead and restore it then? Save me from the world of crappy scripts for things like this that you find on GitHub. And again, no disrespect to the people writing these things, but it's clear that people are scratching their own itch. That's the joy of open source. Yeah, this is the backup script or whatever it is that works on the 10 instances I have in my environment. That's great. You roll that out to 600 instances and everything breaks. It winds up hitting rate limits as it tries to iterate through a bunch of things rather than building a queue and working
Starting point is 00:24:12 through the rest of it. It's very clearly aimed at small scale stuff and built by people who aren't working in large scale environments. Conversely, you wind up with sort of the Google problem when you look at solving it for just the giant environments. Great that you wind up with sort of the Google problem when you look at solving it for just the giant environments. Great that you wind up with this over-engineered, enormously unwieldy solution. Like, oh yeah, it's the Continental saw. We use it to wind up cutting continents in half. I'm trying to cut a sandwich in half here. What's the problem here?
Starting point is 00:24:37 It becomes a hard problem. idea of having something that scales and has decent user ergonomics is critically important, especially when you get to something as critical as backups and restores. Because you're not generally allowed to spend months on end building a backup solution at a company. And when you're doing restore, it's often 3am and you're bleary-eyed and panicked, which is not the time to be making difficult decisions. It's a time to be clicking the button. I 100% agree. I think the lack of visibility, this being a solution, less a problem I'm trying to solve on my own, is I think one area no one's really tackled in the industry, especially around data protection. I will say people have done this on-prem at a decent level, but it just doesn't exist inside the public cloud today. ClueMe Discover, as an example, is one thing that we just heard constantly.
Starting point is 00:25:26 It was like, give me global visibility to see everything in one single pane of glass across all my accounts, ensure all of my data is protected, optimize the way that I'm spending in data protection, identify if I've got massive outliers or huge consumers, and then help me restore faster. And the cool part with Discover
Starting point is 00:25:45 is that we're actually giving this away to customers for free. They can go use this, whether they're using AWS Backup or us, and they can now see all of their environment. And at the same time, they get to experience Clumio as a solution in a way that is vastly different than what they're experiencing today. And hopefully they'll continue to expand with us as we continue to innovate inside of AWS. But it's a cool value for them to be able to finally get that visibility that they've never had before.
Starting point is 00:26:12 Did you know that AWS users can have multiple accounts and have resources in those accounts in multiple regions? Oh yeah, lots of them. Because the reason you know that apparently is that you don't work for AWS backup, where last time I checked, there are still something like eight or nine regions that they are not present in. And you have to wind up configuring this, in many cases, separately and, of course, across multiple accounts, which is a modern best practice, separate things out by account. There we go.
Starting point is 00:26:39 But it is absolutely painful to wind up working with. Sure, it's great for small-scale test accounts where I have everything in a single account and I want to make sure that that data doesn't necessarily go on walkabout. Great. But I can't scale that in the same way without creating a significant management problem for myself. Just the amount of accounts that we see in enterprises is nuts. And with people managing this at an account level, it's unbearable. And with no visibility, you're doing this without really an understanding of whether you're successfully executing this across all of those accounts at any point in time. And so this is one of the areas that we really want to help enterprises with. It's not only make the protection simple, but also
Starting point is 00:27:20 validate that it's actually occurring. Because I think the one thing that no one likes to talk about in this is the whole compliance game, right? Like doing something is next to useless. You've got to prove that you're doing the thing. Yeah. Yeah. I got an auditor who shows up once a quarter and says, show me this backup.
Starting point is 00:27:35 And then I got to go fumble to try to figure out where that is. And oh my God, it's not there. What do I tell the guy? Well, wouldn't it be nice if you had this like global compliance report that showed you whether you were compliant or if it wasn't, which, you know, maybe it wasn't for a snapshot that you created, at least would tell you why like an RPO was exceeded on the amount of time it took to take the snapshot. Okay. Well, that's good to know. Now I can tell the guy something other than just make something up because I have no information. So you'd have multiple snapshots in flight
Starting point is 00:28:03 simultaneously. Always a great plan. Talk to me a little bit about Discover, your new offering. What is it? What led to it? I love talking to customers, for one, and we spend a lot of time understanding where the gaps exist in the public cloud. Our job is to help fill those gaps with really cool innovation. And so the first one we heard was, I cannot see multiple services, regions, accounts in one view. I had to go to each one of the services to understand what's going on in it versus being able to see all of my assets in one view.
Starting point is 00:28:39 I've got a lot of fragment reporting. I've got no compliance view whatsoever. I can't tell if I'm over protecting or under protecting. Orphan snapshots are the bane of many people's existence where they've taken snapshots at some point, deleted an EC2 instance, and they pay monthly for these things. We've got an orphan snapshot report. It will show you all of the snapshots that exist out there with no EC2 instance associated to it, and you can go take action on it. And so what Discover came from is customers saying, I need help.
Starting point is 00:29:08 And we built a solution to help them. And it gives them actionable insights globally across their entire set of accounts, across various different services, and allows them to do a whole bunch of fun stuff, whether it's actionable and help me delete all my orphan snapshots to I've got a 30 day retention period.
Starting point is 00:29:27 Show me every snapshot that's over 30 days. I'd like to get rid of that one too. Or how much are my backups costing me in snapshots today? Yeah, today the answer is. And imagine being able to see that with effectively a free tool
Starting point is 00:29:42 that gives you actionable insights. That's what Discover is. And so you pair that with Clumio Protect, which is our backup solution, and you've got a really awesome solution to be able to see everything, validate it's working, and actually go protect it, whether it's operational recovery or a true air gap solution, of which, you know, it's really hard to pull off an AWS today. One problem that's endemic to the backup space is that from a customer perspective, you are either invisible or you have failed them. There are remarkably few happy customers talking about their experience with their backup vendor. So as a counterpoint to that,
Starting point is 00:30:16 what do customers love about you folks? So first and foremost, customers love the support experience. We are a SaaS offering and we manage the backups completely for the end user. There's no cloud infrastructure the customer has to manage. There's a lot of these fake SaaS offerings out there where I better deploy a thing and manage it in my tenancy. We've created an experience that allows our support organization to help customers proactively support it. And we've become an extension to those infrastructure teams and really help customers to make sure they have great visibility
Starting point is 00:30:48 and understanding what's going on in the environment. The second part is just a completely new customer experience. You've got simplicity around the way that I add accounts, I create a policy, I assign a tag, and I'm off and running. There's no management or handholding that you need to do within the system. The system scales to any size environment and you're off and running. There's no management or handholding that you need to do within the system. The system scales to any size environment and you're off and running. And if you want to validate anything, you can validate it via compliance reports, audit reports, activity reports, and you can see all of your accounts, data assets in one single pane of glass. And now with Clumio Discover, you get the ability to be able to see it in one single view and see history, footprint, and all sorts of other fun stuff on top of it.
Starting point is 00:31:29 And so it's a very different user experience than what you see in any other solution that's out there for data protection today. Thank you so much for taking the time to speak with me today. If people want to learn more about Clumio and kick the tires with themselves, what should they do? So we are on AWS Marketplace. So you can get us up and running there and test us out. We give you $200 of free credits.
Starting point is 00:31:52 So you can not only use our operational recovery, which is kind of snapshot management, similar to AWS Backup, which is free. You can check out Clumio Discover, which is also free and see all of your accounts and environments in one single pane of glass with some awesome actionable insights, as we mentioned. And then you can reach out to us directly on Clumio.com, where you can see a whole bunch of great content, blog posts and the like around our solution and service. And we're looking forward to hearing from you. Excellent. We'll, of course, throw links to that in the show
Starting point is 00:32:19 notes. Thank you so much for taking the time to speak with me today. I appreciate it. Well, thank you so much for having me. I had an awesome time. Thank you. Chad Kenney, VP of Product at Clumio. I'm cloud economist Corey Quinn, and this is Screaming in the Cloud. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice. Whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with a very long-winded comment that you accidentally lose because the page refreshes and you didn't have a backup. If your AWS bill keeps rising and your blood pressure is doing the same, then you need the Duck Bill Group. We help companies fix their AWS bill by making it smaller and less horrifying.
Starting point is 00:33:06 The Duck Bill Group works for you, not AWS. We tailor recommendations to your business, and we get to the point. Visit duckbillgroup.com to get started. this has been a humble pod production stay humble
