Screaming in the Cloud - Creating Value in Incident Management with Robert Ross

Episode Date: December 5, 2023

Robert Ross, CEO and Co-Founder at FireHydrant, joins Corey on Screaming in the Cloud to discuss how being an on-call engineer fighting incidents inspired him to start his own company. Robert explains how FireHydrant does more than just notify engineers of an incident; it also helps them effectively put out the fire. Robert tells the story of how he “accidentally” started a company as a result of a particularly critical late-night incident, and why his end goal at FireHydrant has been, and will continue to be, solving the problem rather than simply choosing an exit strategy. Corey and Robert also discuss the value and pricing models of other incident-reporting solutions, and Robert shares why he is surprised that nobody else has taken the same approach FireHydrant has.

About Robert

Robert Ross is a recovering on-call engineer and the CEO and co-founder at FireHydrant. As co-founder, Robert plays a central role in optimizing incident response and ensuring software system reliability for customers. Prior to founding FireHydrant, Robert contributed his expertise to renowned companies like Namely and DigitalOcean.

Links Referenced:

FireHydrant: https://firehydrant.com/
Twitter: https://twitter.com/bobbytables

Transcript
Starting point is 00:00:00 Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud. Developers are responsible for more than ever these days. Not just the code they write, but also the containers and cloud infrastructure their apps run on.
Starting point is 00:00:38 And a big part of that responsibility is app security from code to cloud. And that's where Snyk comes in. Snyk is a frictionless security platform that meets development teams wherever they work, automating security controls across the AWS application stack to scan for vulnerabilities using Snyk on AWS CodePipeline,
Starting point is 00:00:57 Amazon ECR, Amazon EKS, and several others. Deploy on AWS, secure with Snyk. Learn more at Snyk.co slash morning. That's S-N-Y-K dot C-O slash morning. Welcome to Screaming in the Cloud. I'm Corey Quinn, and this featured guest episode is brought to us by our friends at FireHydrant. And for better or worse, they've also brought us their CEO and co-founder, Robert Ross, better known online as Bobby Tables. Robert, thank you for joining us. Super happy to be here. Thanks for having me. Now, this is the problem that I tend to have when I've been tracking companies for a while,
Starting point is 00:01:39 where you were one of the only people that I knew of at FireHydrant. And you kind of still are. So it's easy for me to imagine that, oh, it's basically your own side project that turned into a real job sort of side hustle that's basically you and maybe a virtual assistant or someone. I have it on good authority, and it was also signaled by your Series B, that there might be more than just you over there now.
Starting point is 00:02:03 Yes, that's true. There's a little over 60 people now at the company, which is a little mind-boggling for me, starting from side projects, building this in Starbucks, to actually having people using the thing and being on payroll. So a little bit of a crazy thing for me. But yes, over 60. So I have to ask, what is it that you folks do? When you say fire hydrant, the first thing that I think of is when I was a kid getting yelled at by the firefighter for messing around with something I probably shouldn't have been messing around with.
Starting point is 00:02:31 So it's actually very similar where I started because I was messing around with software in ways I probably shouldn't have and needed a fire hydrant to help put out all the fires that I was fighting as an on-call engineer. So the name kind of comes from what do you need when you're putting out a fire, a fire hydrant. So what we do is we help people respond to incidents really quickly, manage them from ring to retro. So the moment you declare an incident, we'll do all the timeline tracking
Starting point is 00:02:56 and eventually help you create a retrospective at the very end. And it's been a labor of love because all of that was really painful for me as an engineer. One of the things that I used to believe was that every company did something like this. And maybe they do, maybe they don't, but I'm noticing these days an increasing number of public companies will never admit to an incident that very clearly ruined things for their customers. Now, sure, they're going to talk privately to customers under NDAs and whatnot, but it feels like we're leaving an era where it was an
Starting point is 00:03:29 expectation that when you had a big issue, you would do an entire public postmortem explaining what had happened. Is that just because I'm not paying attention to the right folks anymore, or are you seeing a downturn in that? I think that people are skittish of talking about how much reliability they, are issues they may have, because we're having this weird moment where people want to open more incidents, like the engineers actually want to say we have more incidents and officially declare those. In the past, we had these like shadow incidents that we weren't officially going to say it was an incident, but was pretty a big deal and, but we're not going to have a retro on it. So it's
Starting point is 00:04:04 like it didn't happen. And kind of splitting the line between what's a SEV1, when should we actually talk about this publicly? I think our companies are still trying to figure that out. And then I think there's also opposing forces. We talk to folks and it's public relations. So sometimes get involved. My general advice is you should be probably talking about it no matter what. That's how you build trust. Trust with incidents is lost in buckets and gained back in drops. So you should be more public about it. And I think my favorite example is a major CDN had a major incident and took down the
Starting point is 00:04:38 UK government website. And folks can probably figure out who I'm talking about. But their stock went up the next day. You would think that a major incident taking down a large portion of the internet would cause your stock to go down. Not the case. They were on it like crazy. They communicated about it like crazy.
Starting point is 00:04:56 And lo and behold, people were actually pretty okay with it as far as it could be at the end of the day. The honest thing that really struck me about that was I didn't realize that that CDN that you're referencing was as broadly deployed as it was. Amazon.com took some downtime as a result of this. It's, oh, wow, if there are that many places, I should be taking them more seriously, was my takeaway. And again, I don't tend to shame folks for incidents because as soon as you do that, they stop talking about them.
Starting point is 00:05:25 They still have them, but then we all lose the ability to learn from them. I couldn't help but notice that the week that we were recording this, there was an incident report put out by AWS for a Lambda service event in Northern Virginia. It happened back in June. We're recording this in late October. So it took them a little bit of time to wind up getting it out the door, but it's very thorough, very interesting as far as what it talks about, as far as their own approach to things. Because otherwise, I have to say it is easy as a spectator slash frustrated customer to assume the absolute worse. Like you're sitting around there like, well, we have a 15 minute SLA on this,
Starting point is 00:06:05 so I'm going to sit around for 12 minutes and finish my game of solitaire before I answer the phone. No, it does not work that way. People are scrambling behind the scenes because as systems get more complicated, understanding the interdependencies of your own system becomes monstrous. I still remember some of the very early production engineering jobs that I had where, to what you said a few minutes ago, oh yeah, we'll just open an incident for every alert that goes off. Then we dropped a core switch and Nagio sent something like 8,000 messages inside of two minutes. And we would still, 15 years later, not be done working through that incident backlog had we done such a thing. All of this stuff gets way harder than you would expect as soon as your application or environment becomes somewhat complicated. And that happens before you realize it. Yeah, much faster. I think that in my experience, there's a moment that happens for companies where maybe it's the number of customers you have, number of services you're running in production, that you have this like, oh, we're running a big workload right now in a very complex system that impacts
Starting point is 00:07:09 people's lives, frankly. And the moment that companies realize that is when you start to see like, oh, process change, you build it, you own it. Now we have an SRE team, like there's this catalyst that happens in all of these companies that triggers this. And it's, I don't know, from my perspective, it's coming at a faster rate than people probably realize. From my perspective, I have to ask you this question, and my apologies in advance if it's one of those irreverent ones, but do you consider yourself to be an observability company? Oh, great question.
Starting point is 00:07:46 No, no, actually. We think that we are the baton handoff between an observability tool and our platform. So, for example, we think that's a good way to kind of, as they say, monitor the system, give reports on that system. And we are the tool that based on that monitor, maybe going off, you need to do something about it. So for example, I think of it as like a smoke detector in some cases, like in our world, like that's the smoke detector is the thing that's kind of watching the system. And if something's wrong, it's going to tell you, but at that point, it doesn't really do anything that's going to help you in the next phase, which is managing the incident, calling 911, driving to the scene of
Starting point is 00:08:30 the fire, whatever analogies you want to use. But I think the value add for the observability tools and what they're delivering for businesses is different than ours, but we touch each other very much so. Managing an incident once something happens and diagnosing what is the actual root cause of it, so to speak, quote unquote root cause. I know people have very strong opinions on that phrase. Yeah, say the word. Exactly. It just doesn't sound that hard. It is not that complicated. It's more or less a bunch of engineers who don't know what they're actually doing and why are they running around chasing this stuff down is often the philosophy of a lot of folks who have never been in the trenches dealing with these incidents themselves. I know this because before I was exposed to scale, that's what I thought.
Starting point is 00:09:20 And then, oh, this is way harder than you would believe. Now, for better or worse, an awful lot of your customers and the executives at those customers did, for some strange reason, not come up through production engineering as one thing that you've got in your back pocket, which I always love talking to folks about, is before this, you were an engineer, and then you became a CEO of a reasonably sized company. That is a very difficult transition. Tell me about it. Yeah. Yeah. So a little bit background. I mean, I started writing code. I've been writing code for two-thirds of my life so i'm 32 now i'm relatively young and my first job out of high school skipping college entirely was was writing code i was 18 i was working at a web dev shop uh was making good enough money i said you know what i don't want to go to college that sounds i'm making money why would i go to college and i think it was a good decision because I got to be able, I was right kind of in the centerpiece of when a lot of really cool software things were happening. Like DevOps was becoming a really cool term and we were seeing the cloud kind of emerge at this time and becoming much more popular.
Starting point is 00:10:38 And it was a good opportunity to see all this confluence of technology and people and processes emerge into what is kind of like the base plate for a lot of how we build software today, starting in 2008 and 2009. And because I was an on-call engineer during a lot of that and building the systems as well that I was on call for, it meant that I had a front row seat to being an engineer that was building things that was then breaking and then literally merging on GitHub. And then five minutes later, seeing my phone light up with an alert from our alerting tool. I got to feel the entire process.
Starting point is 00:11:19 And I think that was nice because eventually one day I snapped. After a major incident, I snapped. I said, there's no tool that helps me during this incident. There's no tool that helps me run a process for me because the only thing I care about in the middle of the night is going back to bed. I don't have any other priority at 2 a.m. So I wanted to solve the problem of getting to the fire faster and extinguishing it by automating as much as I possibly could. The process that was given to me in an outdated Confluence page or Google Doc, whatever it was, I wanted to automate that part so I could do the thing that I was good at as an engineer, put out the fire, take some notes,
Starting point is 00:12:05 and then go back to bed and then do a retrospective sometime next day or in that week. And it was a good way to kind of feel the problem, try to build a solution for it, tweak a little bit, and then it kind of became a company. I joke and I say on accident, actually.
Starting point is 00:12:20 I'll never forget one of the first big hairy incidents that I had to deal with in 2009 where my coworker had just finished migrating the production environment over to LDAP on a Thursday afternoon and then stepped out for a three-day weekend. And half an hour later, everything started exploding because LDAP will do that. And I only had the vaguest idea of how LDAP worked at all. This was a year into my first Linux admin job. I'd been a Unix admin before that. And I suddenly have the literal CEO of the company breathing down my neck behind
Starting point is 00:12:53 me trying to figure out what's going on. And I have no freaking idea myself. And it was feels like there's got to be a better way to handle these things. We got through. We wound up getting it back online. No one lost their job over it, but it was definitely a touch and go series of hours there. And that was a painful thing. And you and I went in very different directions based upon experiences like that. I took a few more jobs where I had even worse on-call schedules than I would have believed possible until I started this place, which very intentionally is centered around a business problem that only exists during business hours. There's no 2 a.m. AWS billing emergency. There might be a security issue, masquerading as one of those, but you don't need to reach me out of business hours because anything that is a billing
Starting point is 00:13:39 problem will be solved in Seattle's timeline over a period of weeks. You leaned into it and decided, oh, I'm going to start a company to fix all of this. And okay, on some level, some wit that used to work here wound up once remarking that when an SRE doesn't have a better idea, they start a monitoring company. And on some level, there's some validity to it because this is the problem that I know and I want to fix it. But you've differentiated yourselves in a few key ways. As you said earlier, you're not an observability company. Good for you.
Starting point is 00:14:11 Yeah, that's a funny quote. Pete Cheslock, he has a certain way with words. Yeah. I think that when we started the company, we kind of accidentally secured funding five years ago. And it was because this genuinely was something I bought a laptop for because I wanted to own the IP. I always made sure I was on a different network if I was going to work on the company and the tool.
Starting point is 00:14:35 And I was just writing code because I just wanted to solve the problem. And then some crazy situation happened where an investor somehow found FireHydrant because they were like, oh, this SRE thing is a big space some crazy situation happened where like an investor somehow found fire hydrant because they were like, oh, this SRE thing is a big space and incidents is a big part of it. And we got to talking and they were like, hey, we think what you're building is valuable. And we think you should build a company here. And I was like, you know, the Jim Carrey movie,
Starting point is 00:15:02 yes, man. Like that was kind of me in that moment. I was like, sure. Carrey movie, Yes Man. That was kind of me in that moment. I was like, sure. And here we are five years later. But I think the way that we approached the problem was, let's just solve our own problem. And let's just build a company that we want to work at. And I had two co-founders join me in late 2018. And that's what we told ourselves. We said, let's build a company that we want to work for,
Starting point is 00:15:25 that solves problems that we have had, that we care about solving. And I think it's worked out. You know, we work with amazing companies that use our tool, much to their chagrin, multiple times a day. It's kind of a problem when you build an incident response tool
Starting point is 00:15:40 is that it's a good thing when people are using it, but a bad thing for them. I have to ask, of all of the different angles to approach this from, you went with incident management as opposed to focusing on something that is more purely technical. And I don't say that in any way
Starting point is 00:15:59 that is intended to be sounding insulting, but it's easier from an engineering mind, having been one myself, to come up with, here's how I make one computer talk to this other computer when the following event happens. That's a much easier problem by orders of magnitude than, here's how I corral the humans interacting with that computer's failure to talk to another computer in just the right way. How did you get onto this path? Yeah, the problem that we were trying to solve for
Starting point is 00:16:25 was getting the right people in the room problem. We think that building services that people own is the right way to build applications that are reliable and stable and easier to iterate on. Put the right people that build that software, give them the skin in the game of also being on call. And what that meant for us is that we could build a tool that allowed people to do that a lot easier, where allowing people to corral the right people by saying,
Starting point is 00:16:53 this service is broken, which powers this functionality, which means that these are the people that should get involved in this incident as fast as possible. And the way we approached that is we built a part of our functionality called runbooks, where you could say, when this happens, do this. And it's catered for incidents. So there's other tools out there you can kind of think of as like a workflow tool, like Zapier, or just things that like fire webhooks at services you build, and that ends up being your incident process. But for us, we wanted to make it like a really easy way that a project manager could help define the process in our tool. And when you click the button, say declare incident LDAP is broken. And I have a CEO
Starting point is 00:17:30 standing behind me. Our tool just would corral the people for you. It's kind of like a bat signal in the air where it was like, hey, there's this issue. I've run all the other process. I just need you to arrive and help solve this problem. And we think of it as like, how can FireHrant be a mech suit for the team that owns Incinincin and is responsible for resolving them? There are a few easier ways to make a product sound absolutely ridiculous than to try and pitch it to a problem that it is not designed to scale to. What is the, you must be at least this tall to ride envisioning for FireHydrant? How large slash complex of an organization do you need to be before this starts to make sense? Because I
Starting point is 00:18:11 promise as one person with a single website that gets no hits, that is probably not the best place for to imagine your ideal user persona. Well, I'm sure you get way more hits than that. Come on. It depends on how controversial I'm being in a given week. Yeah. Also, I have several ridiculous nonsense apps out there, but honestly, they're there for fun. I don't charge people for them, so they can deal with my downtime
Starting point is 00:18:35 till I get around to it. That's the way it works. Or like spite visiting your website. No, for us, we think that the must be this tall is when do you have sufficiently complicated incidents? We tell folks, if you're a 10-person shop and you have incidents, just use our free tier.
Starting point is 00:18:54 You need somebody that opens a Slack channel? Fine. Use our free tier. Or build something that hits the Slack API that says create channel. That's fine. But when you start to have a lot of people in the room and multiple pieces of functionality that can break and multiple people on call,
Starting point is 00:19:11 that's when you probably need to start to invest in incident management. Because it is a return on investment, but there is a minimum amount of incidents and process challenges that you need to have before that return on investment actually comes to fruition. Because if you do think of an incident that takes downtime, or you're a retail company and you go down for, let's say, 10 minutes, and your number of sales per hour is X,
Starting point is 00:19:37 it's actually relatively simple for that type of company to understand, okay, this is how much impact we would need to have from an incident management tool for it to be valuable. And that waterline is actually way, it's way lower than I think a lot of people realize. But like you said, you know, if you have a few hundred visitors a day, it's probably not worth it. And I'll be honest, sir, you can use our free tier. That's fine. Which makes sense. It's challenging to wind up sizing things appropriately. Whenever I look at a pricing page, there are two things that I look for. Incidentally, when I pull off someone's website, I first make a beeline for pricing
Starting point is 00:20:14 because that is the best way I've found for a lot of the marketing nonsense words to drop away and it get down to brass tacks. And the two things I want are free tier or zero dollar trial that I can get started with right now, because often it's two in the morning and I'm trying to see if this might solve a problem that I'm having. And I also look for the enterprise tier contact us because there are big companies that do not do anything that is not custom, nor do they know how to sign a check that doesn't have two commas in it. And what differs between those two, okay, that's good to look at to figure out what dimensions I'm expected to grow on and how to think about it. But those are the two tentpoles. And you've got that, but pricing is always going to be a dark art. What I've been seeing across
Starting point is 00:20:55 the industry, and if we put it under the broad realm of things that watch your site and alert you and help manage those things, there are an increasing number of, I guess what I want to call component vendors, where you'll wind up bolting together a couple dozen of these things together into an observability pipeline style thing. And each component seems to be getting extortionately expensive. Most of the wake up in the middle of the night services that will page you, and there are a number of them out there, at a spot check of these, they all cost more per month per user than Slack, the thing that most of us tend up living within. This stuff gets fiendishly expensive fiendishly quickly. And at some point, you're looking at
Starting point is 00:21:35 this going, the outage is cheaper than avoiding the outage through all of these things. What are we doing here? What's going on in the industry other than money printing machines stop going brr in quite the same way? Yeah. I think that for alerting specifically, this is a big part of the journey that we wanted to have in FireRider was we also want to help folks with the alerting piece. So I'll focus on that, which is I think the industry around notifying people for incidents, texts, call, push notifications, emails, there's a bunch of different ways to do it. I think where it gets really crazy expensive is in this per seat model that most of them
Starting point is 00:22:15 seem to have landed on. And we're per seat for the core platform of Fire Hydrant. So before people spite visit Fire Hydrant, look at our pricing pitch like we're per seat there because the value there is like we're the full platform for the service catalog retrospectives run books like there's a whole other component of higher status pages but when it comes to alerting like in my opinion that should be active user for a few reasons i think that if you're gonna have people responding to incidents and the value from us is making sure they get to that incident very quickly because we wake them up in the middle of
Starting point is 00:22:51 the night, we text them, we call them, we make their hue lights turn red, whatever it is, then that's the value that we're delivering at that moment in time. So that's how we should probably invoice you. And I think what's happened is that the pricing for these companies, they haven't innovated on the product in a way that allows them to package that any differently. So what's happened, I think, is that the packaging of these products has been almost restrictive in the way that they could change their pricing models. Because there's nothing much more to package on. It's like, cool, there's an alerting aspect to this, but that's what people want to buy those tools for. They want to buy the
Starting point is 00:23:29 tool so it wakes them up. But that tool is getting more expensive. There was even a price increase announced today for a big one that I've been publicly critical of. That is crazy expensive for a tool that texts you and call you. And what's going on now? People are looking at the pricing sheet for Twilio and going, what the heck is going on? To send a text on Twilio in the United States is fractions of a penny. And here we are paying $40 a user for that person to receive six texts that month because of a web hook that hit an HTTP server and is supposed to call that person. That's kind of a crazy model if you think about it. Engineers are kind of going, wait a minute, what's up here? And when engineers start thinking,
Starting point is 00:24:17 I could build this on a weekend, something's wrong with that model. And I think that people are starting to think that way. Well, engineers, to be fair, will think that about an awful lot of stuff. I've heard it said about Dropbox, Facebook, the internet. Dropbox is such a good one. BGP. Okay, great. Let me know how that works out for you. What was that Dropbox comment on Hacker News years ago? Like, just set up
Starting point is 00:24:37 NFS and host it that way and it's easy. Or RSA or something. Yeah. What are you going to make with that? Who's going to buy that? Basically everyone for at least a time. And whether or not the engineers are right, I think, is a different point. It's the condescension and dismissal of everything that isn't writing the code that really galls on some level.
Starting point is 00:24:54 But I think when engineers start thinking about like, I could build this on a weekend, that's a moment that you have an opportunity to provide the value in an innovative, maybe consolidated way. We want to be a tool that's your incident management ring to retro, right?
Starting point is 00:25:10 You get Paige in the middle of the night, we're going to wake you up. And when you open up your laptop, groggy eyed, and like you're about to start fighting this fire, fire hydrants already done a lot of the work. That's what we think is like the right model to do this. And candidly, I have no idea why the other alerting tools in the space haven't done this. I've said that, and people tend to nod in agreement and say, yeah, it's kind of crazy how they haven't approached this problem yet.
Starting point is 00:25:37 I don't know. I want to solve, you've been teasing on the internet for a little bit now, is something called signals, where you are expanding your product into the component that wakes people up in the middle of the night, which in isolation, fine, great, awesome. But there was a company whose stated purpose was to wake people up in the middle of the night. And then once they started doing some business things such as, oh, I don't know, going public, they needed to expand beyond that to do a whole bunch of other things. But as a customer, no, no, no, you are the thing that wakes me up in the middle of the night. I don't want you to sprawl and grow into everything else
Starting point is 00:26:20 because if you're going to have to pick a vendor that claims to do everything, well, I'll just stay with AWS because they already do that. And it's one less throat to choke. What is that pressure that is driving companies that are spectacular at the one thing to expand into things that frankly, they don't have the chops to pull off? And why is this not you doing the same thing? Oh man, the end of that question is such a good one. And I like that. I'm not an economist. I'm not like that's, I don't know if I have a great comment on like, why are people expanding into things that they don't know how to do?
Starting point is 00:26:54 It seems to be like a common thing across the industry at a certain point. Particularly generative. We've been experts in this for a long time. Yeah, I'm not that great at dodgeball, but you also don't see me mouthing off about how I've been great at it and doing it for 30 years either. Yeah. I mean, there's a couple of ads during football games I watched. I'm like, what is this AI thing that you just tacked on the letter X to the end of your product line, and now all of a sudden it's AI? I have plenty of rants that are good for a cocktail at some point. But as for us, I mean, we knew that
Starting point is 00:27:25 we wanted to do alerting a long time ago, but it does have complications. Like the problem with alerting is that it does have to be able to take a brutal punch to the face the moment that AWS US East 2 goes down. Because at that moment in time, a lot of webhooks are coming your way to wake somebody up for thousands of different companies. So you do have to be able to take a very, very sufficient amount of volume instantaneously. So that was one thing that kind of stopped us. In 2019 even, we wrote a product document about building an alerting tool and we kind of paused.
Starting point is 00:28:03 And then we got really deep into incident management. And the thing that makes us feel very qualified now is that people are actually already integrating their alerting tools into Fire Hydrant today. This is a very common thing. In fact, most people are paying for Fire Hydrant and an alerting tool. So you can imagine that gets a little expensive when you have both. So we said, well, let's help folks consolidate. Let's help folks have a modern version of alerting. And
Starting point is 00:28:31 let's build on top of something we've been doing very well already, which is incident management. And we ended up calling it signals because we think that we should be able to receive a lot of signals in, do something correct with them, and then put a signal out and then transfer you into incident management. And yeah, we're excited for it, actually. It's been really cool to see come together. There's something to be said for keeping it in a certain area of expertise. And people find it very strange when they reach out to my business partner and me asking, okay, so are you going to expand into Google Cloud or Azure or increasingly lately, Datadog, which has become a Fortune 500 board level expense concern, which is kind of wild to me, but here we are. And asking if we're going to focus on that, our answer is no, because it's very, well, not very, but it is relatively easy to be the subject matter
Starting point is 00:29:27 expert in a very specific, expensive, painful problem. But as soon as you start expanding that, your messaging loses focus. And it doesn't take long, since we do view this as an inherent architectural problem, where we're saying we're the best cloud engineers and cloud architects in the world. And then we're competing against basically everyone out there. And it costs more money a year for Accenture or Deloitte's marketing budget than we'll ever earn as a company in our entire lifetime. Just because we are not externally boosted, we're not putting hundreds of people into the field.
Starting point is 00:30:00 It's a lifestyle business that solves an expensive, painful problem for our customers. And that focus lends clarity. I don't like the current market pressure toward expansion and consolidation at the cost of everything, including, it seems, customer trust. Yeah, that's a good point. I mean, I agree. I mean, when you see a company and it's almost getting hard to think about what a company does based on their name as well. Like names don't even mean anything for companies anymore. Like Datadog has expanded into a whole lot of things beyond data. And if you think about some of the alerting tools out there that have names of like old devices that used to attach to our hips, that's just a different company name that represents what they do.
Starting point is 00:30:46 And I think for us, incidents, that's what we care about. That's what I know. I know how to help people manage incidents. I built software that broke. Sometimes I was an arsonist. Sometimes I was a firefighter. It really depends. But that's the thing that we're going to be good at. And we're just going to keep building in that sphere. I think that there's a tipping point that starts to become pretty clear when companies focus away from innovating and growing and serving customers into revenue protection mode. And I think this is a cyclical force that is very hard to resist. But I can tell even having conversations like this with folks, when the way that a company goes about setting up one of these conversations with me,
Starting point is 00:31:26 you came by yourself, not with a squadron of PR people, not with a whole giant list of talking points you wanted to go to. Just let's talk about this stuff. I'm interested in it. As a company grows, that becomes more and more uncommon. Often I'll see it at companies a third the size of yours, just because there's so much fear around everything we say must be spoken in such a way that it could never be taken in a negative way against us. That's not the failure mode. The failure mode is that no one listens to you or cares what you have to say. At some point, yeah, I get the shift, but damn if it doesn't always feel like it's depressing. Yeah. These are such good questions. Because I think that for the way I think about it is,
Starting point is 00:32:07 I care about the problem. And if we solve the problem and we solve it well, and people agree with us on our solution being a good way to solve that problem, then the revenue happens because of that. I've gotten asked from VCs and customers, what's your end goal with Fire Hydrant as the CEO of the company? And what they're really asking is, do you want to IPO or be acquired?
Starting point is 00:32:31 That's always the question every single time. And my answer is maybe, I don't know, philosophical, but I think if we solve the problem, one of those will happen. But that's not the end goal. Because if I aim at that, we're going to come up short. It's like how they tell you to throw a ball, right? Like they don't say aim at the glove. They say, like, aim behind the person.
Starting point is 00:32:51 And that's what we want to do. We like we just want to aim at solving a problem and then the revenue will come. You have to be smart about it, right? It's not a field of dreams. Like if you build it, like revenue arrives. But so you do have to be conscious of the business and the operations and the model that you work within, but it should all be in service of building something that's valuable.
Starting point is 00:33:11 I really want to thank you for taking the time to speak with me. If people want to learn more, where should they go to find you other than to their most recent incident page? No, thanks for having me. So to learn more about me and you can find me on twitter on or x what do we call it now i call it twitter because i don't believe in dead naming except when it's companies yeah twitter.com slash bobby tables if you want to find me there if you want to learn more about fire hydrant and and what we're doing to help folks with incidents and instant response and all the all the fun things in there it's firehydrant.com or firehydrant.io. But we'll redirect you to firehydrant.com. And we will, of course, put a link to all of that in the show
Starting point is 00:33:54 notes. Thank you so much for taking the time to speak with me. It's deeply appreciated. Thank you for having me. Robert Ross, CEO and co-founder of Fire Hydrant. This featured guest episode has been brought to us by our friends at Fire Hydrant. And I'm Corey Quinn. If you've enjoyed this podcast, please leave a five-star review on your podcast platform of choice. Whereas if you've hated this podcast, please leave a five-star review on your podcast platform of choice, along with an insulting comment that will never see the light of day because that crappy platform you're using is having an incident that they absolutely do not know how to manage effectively. If your AWS bill keeps rising and your blood pressure is doing the same, then you need
Starting point is 00:34:38 the Duck Bill Group. We help companies fix their AWS bill by making it smaller and less horrifying. The Duck Bill Group works for you, not AWS. We tailor recommendations to your business, and we get to the point. Visit duckbillgroup.com to get started.
