Screaming in the Cloud - Honeycomb on Observability as Developer Self-Care with Brooke Sargent
Episode Date: May 25, 2023
Brooke Sargent, Software Engineer at Honeycomb, joins Corey on Screaming in the Cloud to discuss how she fell into the world of observability by adopting Honeycomb. Brooke explains how observability was new to her in her former role, but she quickly found it to enable faster learning and even a form of self-care for herself as a developer. Corey and Brooke discuss the differences of working at a large company where observability is a new idea, versus an observability company like Honeycomb. Brooke also reveals the importance of helping people reach a personal understanding of what observability can do for them when trying to introduce it to a company for the first time.

About Brooke
Brooke Sargent is a Software Engineer at Honeycomb, working on APIs and integrations in the developer ecosystem. She previously worked on IoT devices at Procter and Gamble in both engineering and engineering management roles, which is where she discovered an interest in observability and the impact it can have on engineering teams.

Links Referenced:
Honeycomb: https://www.honeycomb.io/
Twitter: https://twitter.com/codegirlbrooke
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
This promoted guest episode, which is another way of saying sponsored episode,
is brought to us by our friends at Honeycomb.
And today's guest is new to me.
Brooke Sargent is a software engineer at Honeycomb.
Welcome to the show, Brooke. Hey, Corey. Thanks so much for having me.
So you're part of, I guess I would call it the new wave of Honeycomb employees, which is no
slight to you. But I remember when Honeycomb was just getting launched right around the same time
that I was starting my own company. And I still think of it as basically a six-person company versus, you know, a couple of new people floating around. Yeah, it turns out,
last I checked, you were, what, north of 100 employees and doing an awful lot of really
interesting stuff? Yeah, we regularly have, I think, upwards of 100 in our all-hands meeting.
So definitely growing in size. I started about a year ago. And at that point,
we had multiple new people joining pretty much every week. So yeah, a lot of new people.
What was it that drove you to Honeycomb? Before this, you spent a bit of time over at Procter &
Gamble. You were an engineering manager; you went from IC to management, and now you're IC again. There's a school of thought, which I vehemently disagree with, that that's a demotion. I think they are orthogonal skill sets to my mind, but I'm curious to hear your journey
through your story. Yeah, absolutely. So yeah, I worked at Procter & Gamble, which is a big
Cincinnati company. That's where I live. And I was there for around four
years. And I worked in both engineering and engineering management roles there. I enjoy both
types of roles. What really drove me to Honeycomb is my time at Procter & Gamble. I spent probably
about a year and a half really diving into observability and setting up an observability practice on the team that I was on, which was working on connected devices, connected toothbrushes,
that sort of thing. So I set up an observability practice there and I just saw so much benefit
to the engineering team culture and the way that junior and apprentice engineers on the team were
able to learn from it that it really caught my attention.
And Honeycomb is what we were using.
And I kind of just wanted to spend all of my time working on observability type of stuff.
When you say software engineer, my mind immediately shortcuts to a somewhat outdated definition of what that term means.
It usually means application developer to my mind.
Whereas I come from the world of operations, historically sysadmins, which it still is, except now with better titles, you get more
money. But that's functionally what SRE and DevOps and all the rest of the terms still fundamentally
are, which is, if it plugs into the wall, congratulations, it's your problem now to go
ahead and fix that thing immediately. Were you on the application development side of the fence?
Were you focusing on the SRE side of the world or something else entirely? Yeah, so I was writing Go code in that
role at P&G, but also doing what I call AWS pipe connecting. So a little bit of both:
writing application code, but also definitely thinking about the architecture aspects and
lining those up appropriately using a lot of
AWS serverless and managed services. At Honeycomb, I'm on the APIs and partnerships team, and I find myself definitely writing a lot more code and focusing a lot more
on code because we have a separate platform team that is focusing on the AWS aspects. One thing that I find interesting
is that it is odd in many cases
to see first a strong focus on observability
coming from the software engineer side of the world.
And again, this might be a legacy
of where I was spending a lot of my career,
but it always felt like
getting the application developers
to instrument whatever it was that they were building
felt in many ways like it was pulling teeth. And in many further cases, it seemed that
you didn't really understand the value of having that visibility or that perspective into what's
going on in your environment until immediately after you really wished you had that perspective
into what was going on in your environment, but didn't. It's similar to no one is as zealous
about backups as
someone who's just suffered a data loss. Same operating theory. How was it that you came from
the software engineering side to give a toss about the idea of observability?
Yeah, so working on the IoT, I was working on the cloud side of things. So in Internet of Things, you're keeping a mobile application, firmware, and cloud synced up. So I was focused on the cloud aspect of that triangle.
And we got pretty close to launching this Greenfield IoT cloud that we were working on
for P&G. We were probably a few months from the initial go-live date, as they like to call it.
And we didn't have any observability. We were just kind of sending things to CloudWatch logs. And it was pretty painful to
figure out when something went wrong from hearing from a peer on a mobile app team or the firmware
team that they sent us some data and they're not seeing it reflected in the cloud that is syncing it up. Figuring out where that went wrong just using CloudWatch logs was pretty difficult,
and syncing up the requests that they were talking about to the specific CloudWatch log
that had the information that we needed, if we had even logged the right thing.
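A minimal sketch of the gap Brooke is describing, assuming Go's standard log/slog package; the field names and request ID are hypothetical. Attaching one shared correlation ID to every structured log line is roughly the floor for tying a mobile team's report to the right CloudWatch entry:

```go
package main

import (
	"log/slog"
	"os"
)

func main() {
	// JSON output so CloudWatch Logs Insights can filter on fields
	// instead of grepping free-form text.
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

	// Hypothetical ID: the same value the mobile and firmware teams
	// attach to their payloads, so one sync request can be followed
	// across every team's log group.
	requestID := "req-1234"

	logger.Info("sync received",
		slog.String("request_id", requestID),
		slog.String("device_type", "toothbrush"))

	logger.Error("sync failed to persist",
		slog.String("request_id", requestID),
		slog.String("error", "conditional write failed"))
}
```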
And I was getting a little worried about the fact that people were going to be going into stores
and buying these toothbrushes, and we might not have visibility into what could be going wrong or even being able to be proactive
about what is going wrong. So then I started researching observability. I had seen people
talking about it as a best practice thing that you should think about when you're building a system,
but I just hadn't had the experience with it yet. So I experimented with Honeycomb a bit and ended up really liking their approach to observability.
It fit my mental model and made a lot of sense. And so I went full steam ahead with implementing
it. I feel like what you just said is very key: the idea of finding an observability solution that keys in to the mental model that someone's operating with.
I found that a lot of observability talk sailed right past me because it did not align with that
until someone said, oh yeah, and then here's events. But what do you mean by event? It distills
down to logs. And oh, if you start viewing everything as a log event, then yeah, that
suddenly makes a lot of sense. And that made it click for me in a way that, honestly, it's a little embarrassing that it didn't before then.
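A minimal sketch of that "everything is a wide event" model, assuming Honeycomb's libhoney-go SDK; the write key, dataset, and fields are placeholders rather than anything from the episode. Instead of scattering log lines, each request emits one wide record carrying everything you might later query on:

```go
package main

import (
	libhoney "github.com/honeycombio/libhoney-go"
)

func main() {
	// Placeholder credentials; a real service reads these from config.
	libhoney.Init(libhoney.Config{
		WriteKey: "YOUR_WRITE_KEY",
		Dataset:  "sync-service",
	})
	defer libhoney.Close() // flush any buffered events on exit

	// One wide event per unit of work: request metadata, timing, and
	// outcome all land on the same record.
	ev := libhoney.NewEvent()
	ev.AddField("request_id", "req-1234")
	ev.AddField("endpoint", "/v1/sync")
	ev.AddField("duration_ms", 87)
	ev.AddField("device_firmware", "2.4.1")
	ev.AddField("error", nil)
	ev.Send()
}
```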
But I come from a world before containers and immutable infrastructure,
and certainly before the black boxes that are managed serverless products,
where I'm used to, oh, something's not working on this Linux box.
Well, I have root.
So let's go ahead and fix that and see what's going on.
A lot of those tools don't work either at scale or
in ephemeral environments or in scenarios where you just don't have the access to the environment.
So there's this idea that if you're trying to diagnose something that happened and the container
that it happened on stopped existing 20 minutes ago, your telemetry game has got to be on point
or you're just guessing at that point. That is something that I think I
did myself a bit of a disservice by getting out of hands-on keyboard operations roles
before that type of architecture really became widespread.
Yeah, that makes a lot of sense. On the team that I was on, we were using a lot of AWS Lambda
and similarly tracking things down could be a little bit challenging. And
emitting telemetry data also has some quirks with Lambda.
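One of those quirks, sketched under assumptions: Lambda freezes the execution environment the moment a handler returns, so spans sitting in a batching exporter may never leave the box unless they are flushed explicitly. A minimal sketch assuming the OpenTelemetry Go SDK and aws-lambda-go, with exporter configuration omitted:

```go
package main

import (
	"context"

	"github.com/aws/aws-lambda-go/lambda"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

// Exporter and resource options omitted; a real setup wires in an
// OTLP or Honeycomb exporter here.
var tp = sdktrace.NewTracerProvider()

func handler(ctx context.Context) error {
	ctx, span := tp.Tracer("sync-service").Start(ctx, "handle-sync")
	// ... the actual work goes here ...
	span.End()

	// The environment can be frozen as soon as we return, so force
	// buffered spans out now rather than trusting a background flush.
	return tp.ForceFlush(ctx)
}

func main() {
	lambda.Start(handler)
}
```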
There certainly are. It's also one of those areas where, on some level, being stubborn about adopting it
works to your benefit because when Lambda first came out, it was a platform that was almost
entirely identified by its constraints. And Amazon didn't do a terrific job,
at least in the way that I tend to learn,
of articulating what those constraints are.
So you learn by experimenting and smacking face-first
into a lot of those things.
What the hell do you mean you can't write to the filesystem?
Oh, it's a read-only filesystem, except /tmp.
What do you mean it's only half a gigabyte?
Oh, that's the constraint there.
Well, what do you mean it automatically stops after, I think, back at that point,
it was five or 10 minutes? It's 15 these days. But I guess it's their own creative approach to
solving the halting problem from computer science classes, where after 15 minutes,
your code will stop executing whether you want it to or not. They're effectively evolving these
things as we go. And once you break your understanding in a few key ways,
at least from where I was coming from, it made a lot more sense. But that was a rough couple
of weeks for me.
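Those constraints are easy to bump into, and also easy to probe from inside a function. A small sketch assuming the aws-lambda-go runtime: /tmp is the only writable path (512 MB by default back then, configurable higher these days), and the execution cap shows up as an ordinary context deadline:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"github.com/aws/aws-lambda-go/lambda"
)

func handler(ctx context.Context) (string, error) {
	// The filesystem is read-only except for /tmp.
	f, err := os.CreateTemp("/tmp", "scratch-*")
	if err != nil {
		return "", err
	}
	defer os.Remove(f.Name())
	defer f.Close()

	// The hard execution limit arrives as a context deadline.
	if deadline, ok := ctx.Deadline(); ok {
		return fmt.Sprintf("wrote %s with %v left before the runtime halts us",
			f.Name(), time.Until(deadline).Round(time.Second)), nil
	}
	return f.Name(), nil
}

func main() {
	lambda.Start(handler)
}
```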
Yeah, agreed. So a topic that you have found personally inspiring is that observability empowers junior engineers in a bunch of ways. And I do want to get into that. But beforehand, I am curious as
to the modern day path for SREs, because it doesn't feel to me like there is a good answer for
what does a junior SRE look like? Because the answer is, oh, they don't. It goes back to the
old sysadmin school of thought, which is that, oh, you basically learn by having experience.
I've lost count of the number of startups I've encountered
where you have a bunch of early 20-something engineers,
but the SRE folks are all generally a decade into what they've been doing
because the number one thing you want to hear from someone in that role is,
oh, the last time I saw it, here's what it was.
What is the observability story these days for junior engineers?
So with SRE, that's a conversation that I've had a lot of times on different teams that I've been on: can a junior SRE exist? And I think
that they can. I mean, they have to, because otherwise it's, well, where does an SRE come
from? Oh, they spring fully formed from the forehead of some god out of mythology. It doesn't usually work that way. Right. But you
definitely need a team that is ready to support a junior SRE. You need a robust team that is
interested in teaching and mentoring. And not all teams are like that. So making sure that you have a team culture that is receptive to taking on a junior SRE
is step number one.
And then I think that the act of having an observability practice on a team is very empowering
to somebody who is new to the industry.
Myself, I came from a self-taught background learning to code. I actually have
a music degree. I didn't go to school for computer science. And when I finally found my way to
observability, it made so many light bulbs go off, giving me more visuals to go from "I think this is happening" to "I know this is happening." And then when I started mentoring
juniors and apprentices and putting
that same observability data in front of them, I noticed them learning so much faster.
I am curious in that you went from implementing a lot of these things and being in a management
role of mentoring folks on observability concepts to working for an observability vendor, which is,
I guess I would call Honeycomb the observability vendor. They were the first to really reframe a lot of how we considered what used to be called monitoring and
now is called observability, or as I think of it, hipster monitoring. But I am curious as to when
you look at this, my business partner wrote a book for O'Reilly, Practical Monitoring, and he loved
it so much that by the end of that book, he got out of the observability monitoring space entirely and came to work on AWS bills with me. Did you
find that going to Honeycomb has changed your perspective on observability drastically?
I had definitely consumed a lot of Honeycomb's blog posts. Like that's one of the things that
I had loved about the company is they put out a lot of interesting stuff, not just about observability, but about operating healthy teams. And like you
mentioned, a pendulum between engineering management and being an IC and just interesting
concepts within our industry overall as software engineers and SREs. So I knew a lot of the thought leadership that the company put
out and that was very helpful. It was a big change going from an enterprise like Procter & Gamble to
a startup observability company like Honeycomb. And also going from a company that very much
believes in in-person work to remote-first work at Honeycomb now.
So there were a lot of cultural changes,
but I think I kind of knew what I was getting myself into as far as the perspective
that the company takes on observability.
That is always the big, somewhat awkward question
because if the answer goes a certain way,
it becomes a real embarrassment.
But I'm confident enough having worked with Honeycomb
as a recurring sponsor
and having helped out on the AWS bill side of the world, since you were a reference
client on both sides of that business, I want to be very clear that I don't think I'm throwing you
under a bus on this one. But do you find that the reality, now that you've been there for a year,
has matched the external advertising and the ethos of the story they tell about Honeycomb
from the outside? I definitely think it matches up. One thing that is just different about working inside of a
company like Honeycomb versus working at a company that doesn't have any observability
at all yet is that there are a lot of abstraction layers in our code base and things like that. So
me being a software engineer and writing code at Honeycomb compared
to P&G, I don't have to think about observability as much because everybody in the company is
thinking about observability and had thought about it before I joined and had put in a lot of thought
to how to make sure that we consistently have telemetry data that we need to solve problems
versus I was thinking about this stuff on the daily at P&G.
Something I've heard from former employees of a bunch of different observability companies has
a recurring theme to it. And that is that it's hard to leave. Because when you're at an
observability company, everything is built with an eye toward observability. And there's always
the dogfooding story of we instrument absolutely everything we
have with everything that we sell the customers. Now, in practice, you leave and go to a different
company, where that is almost never going to be true, if for no other reason than simple economics.
Turning on every facet of every observability tool that a given company sells becomes extraordinarily
expensive and is an investment decision. So companies say yes to some, no to others. Do you think you're going to have that problem if and
when you decide it's time to move on to your next role? Assuming, of course, that it's not at a
competing observability company. I'm sure there will be some challenges if I decide to move on
from working for observability platforms in the future. The one that I think would be the
most challenging is joining a team where people just don't understand the value of observability
and don't want to invest the time and effort into actually instrumenting their code and don't see
why they need to do it, versus just not having gotten there yet or not having hired enough people to do it just yet. If people are actively against the idea of instrumenting their code, I think that would be really challenging to shift to,
especially after over the last two and a half years or so, being so used to having this extra sense
when I'm debugging problems and dealing with outages.
I will say it was a little surreal
the first time I wound up taking a look
at Honeycomb's environment,
because I do believe that cost and architecture
are fundamentally the same thing when it comes to cloud.
And you had clear lines of visibility
into what was going on in your AWS bill by way of
Honeycomb as a product. And that's awesome. I haven't seen anyone else do that yet. And I don't
know that it would necessarily work as well, because as you said, there, everyone's thinking
about it through this same shared vision. Whereas in a number of other companies, it flat out does
not work that way. There are certain unknowns and questions.
And from the outside, and when you first start down this path, it feels like a ridiculous thing to do until you get to a point of seeing the payoff.
And yeah, this makes an awful lot of sense.
I don't know that it would, for example, work as a generic solution for us to roll out to our various clients and say, oh, let's instrument your environment with this and see what's going on. Because first, we don't have that level of
ability to make change in customer environments. We are read-only for some very good reasons.
And further, it also seems like it's a step one, change your entire philosophy around these sorts
of things so we can help you spend less on AWS seems like a bit of a tall order.
Yeah, agreed.
And yeah, on previous teams that I've been on, there were definitely things where, especially using AWS serverless services, I was trying to get as much insight as possible by adding some of these services to our traces.
Like AppSync was one example where I could not for the life of me figure out how to get an AppSync API request onto my Honeycomb trace.
And I spent a lot of time trying to figure it out.
And I had team members that would just be like, you know, let's time box this.
Let's not like sink all of our time into it. And so I think as observability evolves, hopefully carving out those
patterns continues to get easier so that engineers don't have to spend all of their time carving out
those patterns.
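The general-purpose version of the pattern Brooke was reaching for is W3C trace context propagation: inject a traceparent header on the way out, and the callee attaches its spans to the same trace, provided everything in the middle forwards the header. A sketch assuming the OpenTelemetry Go SDK, with a placeholder endpoint; whether a managed service like AppSync carries that context end to end is exactly the kind of gap she describes:

```go
package main

import (
	"context"
	"net/http"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/propagation"
)

func callDownstream(ctx context.Context) error {
	// Placeholder endpoint standing in for an AppSync GraphQL API.
	req, err := http.NewRequestWithContext(ctx, http.MethodPost,
		"https://example.com/graphql", nil)
	if err != nil {
		return err
	}

	// Write the current span context into a traceparent header so the
	// callee can join the same trace; assumes a tracer provider is
	// configured elsewhere and ctx carries an active span.
	otel.GetTextMapPropagator().Inject(ctx,
		propagation.HeaderCarrier(req.Header))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}

func main() {
	// Register the W3C propagator once at startup.
	otel.SetTextMapPropagator(propagation.TraceContext{})
	_ = callDownstream(context.Background())
}
```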
It feels like that's the hard part: the shift in perspective. Instrumenting a given tool into an environment is not the heavy lift compared to appreciating the value of it.
Do you find that that was an easy thing for you to overcome back when you were at Procter & Gamble,
as far as people already had bought in on some level to observability from having seen it in
some scenarios where it absolutely saved folks' bacon? Or was it the problem of, first, you have to educate people about the painful problem that they have before they realize it is, in fact, A, painful, and B, a problem, and then see that you have something to sell them that will solve that?
Because that pattern is a very hard sales motion to execute in most spaces.
But you were doing it from the customer side first.
Yeah, yeah.
Doing it from the customer side, I was able to get buy-in on the team that I was on. And I should also say the team that I was on was considered an innovation team. We were in a separate building from the corporate building and things like that, which I'm sure played into some of those cultural aspects and dynamics. But trying to educate people outside of our team and try to build an observability
practice within this big enterprise company was definitely very challenging. And it was a lot of
spending time sharing information and talking to people about their stack and what languages and
tools that they're using and how this could help them. I think until people have had that kind of
magical moment of using observability data to solve a problem for themselves, it can be very hard to really make them understand the value.
That was always my approach because it feels like observability is a significant and sizable
investment in infrastructure alone, let alone mental overhead, the teams to manage these things,
et cetera, et cetera. And until you have a challenge that observability can solve,
it feels like it is pure cost, similar to backups, where it's just a whole bunch of
expense for no benefit until suddenly one day you're very glad you had it.
Now, the world is littered with stories that are very clear about what happens when you don't have backups.
Most people have a personal story around that.
But it feels like it's less straightforward to point at a visceral story where not having observability really hobbled someone or something.
It feels like, because with the benefit of perfect hindsight, oh yeah, a disk filled up and we didn't know about that.
Like, ah, if we had just had the right check, we would have caught that early on. Yeah, coulda, woulda, shoulda, but
it was a cascading failure that wasn't picked up until seven levels downstream. Do you think that
that's the situation these days, or am I misunderstanding how people are starting to
conceive of this stuff? Yeah, I mean, I definitely have a couple of stories, even once I was on the journey to observability adoption, which I call a journey because you don't just snap your fingers and have observability.
I started with one service, instrumented that, and gradually over sprints would instrument more services and pull more team members in to do that as well. But when we were in that process of instrumenting
services, there was one service, which was our auth service, which maybe should have been the
first one that we instrumented, that a code change was made and it was erroring every time somebody
tried to sign up in the app. And if we had had observability instrumentation in place for that service, it wouldn't have taken
us like the four or five hours to find the problem of the one line of code that we had changed.
We would have been able to see more clearly what error was happening and what line of code it was
happening on, and probably fix it within an hour.
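For an HTTP service like that auth service, the instrumentation in question can be close to a one-line wrapper. A sketch assuming OpenTelemetry's otelhttp middleware (and a tracer provider configured elsewhere); the route and handler are hypothetical stand-ins:

```go
package main

import (
	"log"
	"net/http"

	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
)

func signup(w http.ResponseWriter, r *http.Request) {
	// ... validate input, write to the user store, and so on ...
	w.WriteHeader(http.StatusCreated)
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/signup", signup)

	// Wrapping the mux gives every request a span with route, status
	// code, and duration, so a deploy that breaks /signup on every
	// call is visible in minutes rather than after hours of digging.
	log.Fatal(http.ListenAndServe(":8080",
		otelhttp.NewHandler(mux, "auth-service")))
}
```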
And we had a similar issue with a Redshift database that we were running, more on the metrics side of things. We were using it to send analytics data to other people in the company. And that
Redshift database just got maxed out at a certain point. The CPU utilization was at like 98%. And
people in the company were very upset and having a very bad time querying their analytics
data. It's a terrific sales pitch for Snowflake, to be very direct, because you hear that story kind
of a lot. Yeah, it was not a fun time. But at that point, we started sending Redshift metrics data
over to Honeycomb as well so that we could keep a better pulse on what exactly was happening with
that database. So here's, I guess, sort of the acid test. People tend to build
software when they're starting out Greenfield in ways that emphasize their perspective on the
world. For example, when I'm building something new, it doesn't matter if it's tiny or for just
a one-off shitposting approach, and it touches anything involving AWS, first thing I do out of
the gate is I wind up setting tags so that I can do cost
allocation work on it. So someday I'm going to wonder how much this thing costs. That is, I guess,
my own level of brokenness.
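For the curious, a sketch of that tag-first habit, assuming the Resource Groups Tagging API from aws-sdk-go-v2; the ARN, tag keys, and values are invented for illustration rather than taken from the episode:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/resourcegroupstaggingapi"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := resourcegroupstaggingapi.NewFromConfig(cfg)

	// Hypothetical ARN and tags: the point is that every resource
	// gets cost-allocation tags on day one, so the bill can be
	// sliced by project later.
	_, err = client.TagResources(ctx, &resourcegroupstaggingapi.TagResourcesInput{
		ResourceARNList: []string{"arn:aws:s3:::my-oneoff-experiment"},
		Tags: map[string]string{
			"project": "oneoff-experiment",
			"owner":   "corey",
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```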
When you start building something at work from scratch, I guess this is part you, part all of Honeycomb. Do you begin from that Greenfield approach of hello world,
of instrumenting it for observability, even if it's not explicitly an observability-focused workload? Or is it something that you wind up retrofitting
with observability insights later once it hits a certain point of viability?
Yeah. So if I'm at the stage of just kind of trying things out locally on my laptop,
kind of outside of the central repo for the company,
I might not do observability data because I'm just kind of learning and trying things out on
my laptop. Once I pull it into our central repo, there is some observability data that I am going
to get just in the way that we kind of have our services set up. And as I'm going through writing code to do this, whatever new feature I'm trying
to do, I'm thinking about what things, when this breaks, not if it breaks, when it breaks,
am I going to want to know about in the future? And I'll add those things kind of on the fly just
to make things easier on myself. And that's just kind of how my brain works at this point
of thinking about my
future self, which is kind of like the same definition of self-care. So I think of observability
as self-care for developers. But later on, when we're closer to actually launching a thing,
I might take another pass at just like, okay, let's once again take a look at the error paths
and how this thing can break and make sure that we have enough information at those points of error to know what is happening within a trace view of
this request.
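A sketch of what "thinking about your future self at the error paths" can look like, assuming the OpenTelemetry Go API; the span names, attributes, and failing call are illustrative only:

```go
package main

import (
	"context"
	"errors"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/codes"
)

func syncDevice(ctx context.Context, deviceID string) error {
	ctx, span := otel.Tracer("sync-service").Start(ctx, "sync-device")
	defer span.End()

	// Added up front: the fields future-you will want to filter on
	// when this breaks. Not if; when.
	span.SetAttributes(attribute.String("device.id", deviceID))

	if err := persist(ctx, deviceID); err != nil {
		// Error paths get the most detail: record the error and mark
		// the span so it stands out in a trace view.
		span.RecordError(err)
		span.SetStatus(codes.Error, "persist failed")
		return err
	}
	return nil
}

// persist is a placeholder for the real storage call.
func persist(ctx context.Context, deviceID string) error {
	return errors.New("storage not implemented in this sketch")
}

func main() {
	_ = syncDevice(context.Background(), "brush-42")
}
```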
My two programming languages that I rely on the most are enthusiasm and brute force. And I understand this is not a traditional software engineering approach, but I've always found that getting observability in involved a retrofit on some level.
And it always was frustrating to me just because it felt like it was so much effort in various ways
that I've just always kicked myself. I should have done this early on, but I've been on the
other side of that. And it's like, should I instrument this with good observability?
No, that sounds like work. I want to see if this thing actually works at all or not first.
And I don't know what side of the fence is the correct one to be on,
but I always find that I'm on the wrong one.
Like, I don't know if it's like one of those,
there's two approaches and neither one works.
I do see in client environments where observability is always, always,
always something that has to be retrofit into what it is that they're doing.
Does it stay that way once companies get past a certain point? Does
observability adoption among teams just become something that is ingrained into them? Or do
people have to consistently relearn that same lesson in your experience? I think it depends
kind of on the size of your company. If you are a small company with a smaller engineering
organization where it's, I won't say easy, but easier to get full team buy-in
on points of view and decisions and things like that, it becomes more built in. If you're in a
really big company like the one that I came from, I think it is continuously educating people and
trying to show the value of why we are doing this, coming back to that why and the magical moment, stories of problems that have been solved because of the instrumentation that was in place. So I guess, like most things, it's an it-depends. But the larger your company becomes, I think the harder it gets to keep everybody on the same page.
I am curious, in that I tend to see the world
through AWS bills,
which is a sad, pathetic way to live
that I don't recommend to basically anyone.
But I do see the industry,
or at least my client base,
forming a bit of a bimodal distribution.
On one side, you have companies like Honeycomb,
including, you know, Honeycomb,
where the majority of your AWS spend
is driven by the application that is Honeycomb, the SaaS thing you
sell to people to solve their problems. The other side of the world are companies that look a lot
more like Procter & Gamble, presumably. Because I think of, oh, what does Procter & Gamble do?
And the answer is, a lot. They're basically the definition of conglomerate in some ways.
So you look at a bill at a big company like that, and it
might be hundreds of millions of dollars, but the largest individual workload is going to be a couple
million at best. So it feels very much like it's this incredibly diffuse selection of applications.
And in those environments, you have to start thinking a lot more about centralization,
things you can do, for example, for savings plan purchases and whatnot. Whereas at Honeycomb-like companies, you can start looking at, oh, well, you have this
single application that's the lion's share of everything. We can go very deep into architecture
and start looking at micro-optimizations here that'll have a larger impact. Having been an
engineer at both types of companies, do you find that there's a different internal philosophy?
Or is it that when you're working in a larger company on a specific project, that specific project becomes your entire professional universe?
Yeah, definitely at P&G, for the most part, IoT was kind of the center of my universe. One thing that I notice as being different, and I think this is from being an enterprise and a startup,
is just the way that thinking about cost and architecture choices kind of happened.
So at P&G, like I said, we were using a lot of Lambda and pretty much any chance we got,
we used a serverless or managed offering from AWS. And I think a big part of that reasoning
was because, like I said earlier, P&G is very interested in in-person work. So everybody that
we hired had to be located in Cincinnati. And it became hard to hire for people who had Go and
Terraform experience because a lot of people in the Midwest are much more comfortable in .NET and Java. There are just a lot more jobs using those technologies. So we had a lot of
trouble hiring and would choose, because P&G had a lot of money to spend, to give AWS that money
because we had trouble finding engineers to hire. Whereas Honeycomb really does not seem to have
trouble hiring engineers. They hire
remote employees, and lots of people are interested in working at Honeycomb. And they also
do not have the bank account that Procter & Gamble has. So just thinking about cost and architecture
is kind of a different beast. So at Honeycomb, we are building a lot more services ourselves, versus just always choosing a serverless or easy AWS managed option in order to think about it less.
Yeah, at some level, it's an unfair question.
It's just because it comes down to,
in the fullness of time,
even Honeycomb turns into something
that looks a lot more like Procter & Gamble.
Because, okay, you have the Honeycomb application. That's great. But as the company continues to grow and offer
different things to different styles of customers, you start seeing a diffusion where, yeah,
everything still is observability focused, but I can see a future where it becomes a bunch of
different subcomponents. You make acquisitions of other companies that wind up being treated
as separate environments
and the rest.
And in the fullness of time, I can absolutely see that that is the path that a lot of companies
go down.
So it might also just be that I'm looking at this through a perspective lens of just
company stage, as opposed to what the internal story of the company is.
I mean, Procter & Gamble is what, a century old, give or take?
Whereas Honeycomb is an ancient tech company, by which I mean it's over 18 months old.
Yeah. P&G was founded in 1837.
Almost 200 years old.
Quite old.
Yeah.
And for some reason, they did not choose to build their first technical backbone on top of AWS back
then. I don't understand why for the life of me.
Yeah, but totally agree on your point that the kind of difference of thinking about cost and architecture definitely comes from company stage rather than necessarily the industry.
I really want to thank you for taking the time out of your day to talk with me about what you're up to and how you view these things. If people want to learn more, what's the best place for them to find you? Yeah. So I think
the main place that I still sometimes am is Twitter; @codegirlbrooke is my username,
but I'm only there sometimes now. I feel like that's a problem a lot of us are facing right
now. Like I'm more active on Bluesky these days, but it's still invite-only. And it feels like it's too much of a weird flex
to wind up linking people to it just yet.
I'm hoping that changes soon,
but we'll see how it plays.
We will, of course, put links to that in the show notes.
I really want to thank you
for taking the time out of your day
to talk with me.
Yeah, thanks so much for chatting with me.
It was a good time.
Brooke Sargent,
software engineer at Honeycomb.
This has been a promoted guest episode brought to us by our friends at Honeycomb and I'm cloud economist,
Corey Quinn. If you've enjoyed this podcast, please leave a five-star review on your podcast
platform of choice. Whereas if you've hated this podcast, please leave a five-star review in your
podcast platform of choice, along with an insulting comment that will somehow fail to post because
that podcast platform of choice, for some reason,
did not opt for a decent observability story.
If your AWS bill keeps rising and your blood pressure is doing the same,
then you need the Duckbill Group.
We help companies fix their AWS bill by making it smaller and less horrifying.
The Duckbill Group works for you, not AWS.
We tailor recommendations to your business, and we get to the point.
Visit duckbillgroup.com to get started.