Screaming in the Cloud - Chronosphere on Crafting a Cloud-Native Observability Strategy with Rachel Dines
Episode Date: November 28, 2023

Rachel Dines, Head of Product and Technical Marketing at Chronosphere, joins Corey on Screaming in the Cloud to discuss why creating a cloud-native observability strategy is so critical, and the challenges that come with both defining and accomplishing that strategy to fit your current and future observability needs. Rachel explains how Chronosphere is taking an open-source approach to observability, and why it’s more important than ever to acknowledge that the stakes and costs are much higher when it comes to observability in the cloud.

About Rachel

Rachel leads product and technical marketing for Chronosphere. Previously, Rachel wore lots of marketing hats at CloudHealth (acquired by VMware), and before that, she led product marketing for cloud-integrated storage at NetApp. She also spent many years as an analyst at Forrester Research. Outside of work, Rachel tries to keep up with her young son and hyperactive dog, and when she has time, enjoys crafting and eating out at local restaurants in Boston, where she’s based.

Links Referenced:

Chronosphere: https://chronosphere.io/
LinkedIn: https://www.linkedin.com/in/rdines/
Transcript
Hello, and welcome to Screaming in the Cloud, with your host, Chief Cloud Economist at the
Duckbill Group, Corey Quinn.
This weekly show features conversations with people doing interesting work in the world
of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles
for which Corey refuses to apologize.
This is Screaming in the Cloud.
Welcome to Screaming in the Cloud.
I'm Corey Quinn.
Today's featured guest episode is brought to us by our friends at Chronosphere.
And they have also brought us Rachel Dines, their head of product and solutions marketing.
Rachel, great to talk to you again.
Hey, Corey. Yeah, great to talk to you too.
Watching your trajectory has been really interesting
just because starting off when we first started,
I guess, learning who each other were,
you were working at CloudHealth,
which has since been acquired by VMware.
And I was trying to figure out,
huh, the cloud runs on money.
How about that?
It feels like it was a thousand years ago, but neither one of us is quite that old.
It does feel like several lifetimes ago.
You were just this snarky guy with a few followers on Twitter,
and I was trying to figure out what you were doing, mucking around with my customers.
We kind of both figured out what we were doing, right?
So speaking of that iterative process,
today you are at Chronosphere,
which is an observability company. We would have called it a monitoring company five years ago,
but now that's become an insult after the dust from the observability wars has settled.
So I want to talk to you about something that I've been kicking around for a while,
because I feel like there's a gap somewhere. Let's say that I build a crappy web app because
all of my web apps inherently are crappy and it makes money through some mystical form of alchemy
and I have a bunch of users and I eventually realize, huh, I should probably have a better
observability story than waiting for the phone to ring and a customer telling me it's broken.
So I start instrumenting various aspects of it that seem to make sense.
Maybe I go too low level, like looking at all the disks on every server to tell me if they're getting full or not, like they're ancient servers.
Maybe I just have a Pingdom equivalent of, is the website up enough to respond to a packet?
And as I wind up experiencing different failure modes and getting yelled at by different constituencies (in my own career trajectory, my own boss), I start instrumenting for all those different kinds of breakages.
You start aggregating the logs somewhere and the volume gets bigger and bigger with time.
But it feels like it's sort of a reactive process as you stumble through that entire
environment. And I know it's not just me because I've seen this unfold in similar ways in a bunch
of different companies. It feels to me very strongly like it is something that happens to
you rather than something you set about from day one with a strategy in mind. What's your take on
an effective way to think about strategy when it comes to observability?
You just nailed it. That's exactly the kind of progression
that we so often see.
And that's what I really was excited
to talk with you about today.
Oh, thank God.
I was worried for a minute there
that you're like,
what the hell are you talking about?
Are you just like some sort of crap engineer?
And yes, but it's mean of you to say it.
But yeah, what I'm trying to figure out
is, is there some magic
that I just was never connecting?
Because it always feels like you're in trouble
because the site's always broken. And like, if the disk fills up, yeah, oh, now we're going to start
monitoring to make sure the disk doesn't fill up. Then you wind up getting barraged with alerts
and no one wins. And it's an uncomfortable period of time. Uncomfortable period of time. That is one
very polite way to put it. I mean, I will say it is very rare to find a company that actually
sits down and thinks, this is our observability strategy. This is what we want to get out of
observability. You can think about a strategy in the old school sense. I think you know I was an
industry analyst, so I'm going to have to go back to my roots at Forrester and think about the
people and the process and the technology. But really, the bigger component here is, what's the business impact?
What do you want to get out of your observability platform?
What are you trying to achieve?
And a lot of the time people have thought,
oh, observability strategy, great.
I'm just gonna buy a tool.
That's it.
Like, that's my strategy.
And I hate to break it to you,
but buying tools is not a strategy.
I'm not going to say like, buy this tool.
I'm not even going to say buy Chronosphere.
That's not a strategy. I mean, you should buy Chronosphere, but it's not a strategy.
Of course, I'm going to throw money
by the wheelbarrow
at various observability vendors
and hope it solves my problem.
But if that solved the problem,
I've got to be direct.
I've never spoken to those customers.
Exactly.
I mean, that's why this space
is such a great one to come in
and be very disruptive in.
And I think back in the
days when we were running in data centers, maybe even before virtual machines, you could probably
get away with not having a monitoring strategy. I'm not going to call it observability. It's not
what we called it back then. You get away with not having a strategy because what was the worst
that was going to happen, right? There was a finite amount that your monitoring
bill could be. There was a finite amount that your customer impact could be.
You're playing the penny slots, right?
We're not in the penny slots anymore.
We're at the $50 craps table.
And it's Las Vegas.
And if you lose the game, you're going to have to run down the street without your shirt.
The game and the stakes have changed.
And we're still pretending like we're playing penny slots.
And we're not anymore.
That's a good way of framing it.
I mean, I still remember some of my biggest observability challenges were building highly
available rsyslog clusters so that you could bounce a member and not lose any log data
because some of that was transactionally important.
And we've gone beyond that to a stupendous degree.
But it still feels like you don't wind up building this into the application
from day one. More is the pity. Because if you did and did that intelligently, that opens up a
whole world of possibilities. I dream of that changing, where one day, whenever you start to
build an app, oh, and we just push the button and automatically instrument it with OTEL. So you
instrument the thing once everywhere it makes sense to do it, and then you can do your vendor
selection and what you send where decisions later in time. But these days, we're not there.
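[Editor's note: to make the "instrument once, decide where to send it later" idea concrete, here is a minimal sketch assuming the OpenTelemetry Python SDK. The service name and the checkout handler are hypothetical, and the export destination is left to the standard OTLP environment variables rather than hard-coded.]

```python
# A minimal sketch, assuming opentelemetry-sdk and opentelemetry-exporter-otlp
# are installed. The service name and handler below are hypothetical.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Describe the service once; every span it emits carries this resource.
provider = TracerProvider(resource=Resource.create({"service.name": "crappy-web-app"}))

# With no endpoint passed in code, the exporter honors the standard
# OTEL_EXPORTER_OTLP_ENDPOINT environment variable, so pointing at a
# different backend later is a config change, not a code change.
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def handle_checkout(order_id: str) -> None:
    # Instrument the code path once; where the span ends up is decided elsewhere.
    with tracer.start_as_current_span("handle_checkout") as span:
        span.set_attribute("order.id", order_id)
        # ... business logic ...
```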
Well, I mean, there's also the question of just the legacy environment and the tech debt. Even
if you wanted to. I actually was having a beer yesterday with a friend who's a VP of engineering,
and he's got his new environment that they're building with observability instrumented from
the start. How beautiful. They've got OTEL. They're going to have tracing.
And then he's got his legacy environment, which is a hot mess.
So, you know, there's always going to be this bridge of the old and the new.
But this is where it comes back to:
no matter where you're at, you can stop and think, like, what are we doing and why?
What is the cost of this?
And not just cost in dollars, which I know you and I could talk about very deeply for
a long period of time,
but like the opportunity costs: developers working on stuff when they could be working on
something that's more valuable, or the cost of making people work around the clock trying to
troubleshoot issues when there could be an easier way. So I think it's stepping back and
thinking about cost in terms of dollars and cents, time, opportunity, and then also impact. Then it's
starting to make some decisions about what you're going to do in the future that's different. Once
again, you might be stuck with some legacy stuff that you can't really change that much, but
you've got to be realistic about where you're at. I think that that is a hard lesson, and to be very
direct, companies need to learn it the hard way,
for better or worse. Honestly, this is one of the things that I always noticed in startup land
where you had a whole bunch of, frankly, relatively early career engineers in their
early twenties, if not younger. But then the ops person was always significantly older because the
thing you actually want to hear from your ops person, regardless of how you slice it, is, oh yeah, I've seen this kind of problem before. Here's how we
fixed it. Or even better, here's a thing we're doing and I know how that's going to become a
problem. Let's fix it before it does. It's the, what are you buying by bringing that person in?
Experience, mostly. Yeah, that's an interesting point you make. And it kind of leads me down this
little bit of a side note,
but really interesting anti-pattern that I've been seeing in a lot of companies is
that more seasoned ops person, they're the one who everyone calls when something goes wrong.
Like they're the one who like, oh my God, I don't know how to fix it.
This is a big, hairy problem.
I call that one ops person, or I call that very experienced person.
That experienced person then becomes this huge bottleneck for solving problems.
They might even be the only one
who knows how to use the observability tool.
So if we can't find a way to democratize
our observability tooling a little bit more,
so that day-to-day engineers,
more junior engineers, newer ones,
people who are still ramping,
can actually use the tool and be successful,
we're gonna have a big problem
when these ops people walk out the door.
Maybe they retire.
Maybe they just get sick of it.
We have these massive bottlenecks in organizations,
whether it's ops or DevOps or whatever,
that I see often exacerbated by observability tools.
Just a side note.
Yeah.
On some level, it feels like a lot of these things
can be fixed with tooling.
And I'm not
going to say that tools aren't important. You ever tried to implement observability by hand?
It doesn't work. There have to be computers somewhere in the loop, if nothing else.
And then it just seems to devolve into a giant swamp of different companies doing different
things, taking different approaches. And on some
level, whenever you read the marketing or hear the stories any of these companies tell, you almost
have to normalize it, translating from whatever marketing language they've got into
something that comports with the reality of your own environment, and see if they align.
And that feels like it is so much easier said than done.
This is a noisy space.
That is for sure.
And I think we could go out to 10 people right now
and ask those 10 people to define observability
and we would come back with 10 different definitions.
And then you throw a marketing person in the mix, right?
Guilty as charged.
And I know you're a marketing person too, Corey.
So you've got to take some of the blame.
It gets mucky, right?
But like I said a minute ago, the answer is not tools.
Tools can be part of the strategy. But if you're just thinking, I'm going to buy a tool and that's going to solve my problem, you're going to end up like this company I was talking
to recently that has 25 different observability tools. And not only do they have 25 different
observability tools, what's worse is they have 25 different definitions for their SLOs and 25 different names for the same metric.
And to be honest, it's just a mess. I'm not saying go be draconian and tell all the engineers,
you can only use this tool, only use that tool. You got to figure out this kind of balance of
hands-on, hands-off. How much do you centralize? How much do you push and standardize? Otherwise,
you end up with just a huge mess.
On some level, it feels like it was easier back in the days of building it yourself with Nagios.
Because there's only one answer, and it sucks.
Unless you want to start going down the path of HP OpenView.
Which, step one, hire a 50-person team to manage OpenView.
Okay, that's not going to solve my problem either.
So, let's get a little more specific.
How does Chronosphere approach this?
Because historically, when I've spoken to folks at Chronosphere,
there isn't that much of a day one story in that I'm going to build a crappy web app that's instrumented for Chronosphere.
There's a certain, you must be at least this tall to ride,
implicit expectation built into the product just based upon its origins.
And I'm not saying that doesn't make sense,
but it also means there's really no such thing as a greenfield build-out for you either.
Well, yes and no. I mean, I think there's no greenfields out there because everyone's doing
something for observability or monitoring or whatever you want to call it, right? Whether
they've got Nagios, whether they've got the dog, whether they've got something else in there,
they have some way of introspecting their systems, right? So one of the things that
Chronosphere is built on,
and I actually think this is part of something,
a way you might think about building out
an observability strategy as well,
is this concept of control and open source compatibility.
So we can only collect data via open source standards.
You have to send us data via Prometheus,
via OpenTelemetry.
It could be older standards like, you know,
StatsD, Graphite, but we don't have any proprietary
instrumentation.
And if I was making a recommendation to somebody building out their observability strategy
right now, I would say open, open, open all day long, because that gives you a huge amount
of flexibility in the future.
Because guess what?
You know, you might put together an observability strategy that seems like it makes sense for
right now.
I actually was talking to a B2B SaaS company that told me that they made a choice a couple of years ago on an observability tool. It seemed like the right choice at the time.
They were growing so fast, they very quickly realized it was a terrible choice. But now it's
going to be really hard for them to migrate because it's all based on proprietary standards.
Now, of course, a few years ago, they didn't have the luxury of OpenTelemetry and all
of these standards.
But now that we have them, we can use them to kind of future-proof our mistakes.
So that's one big area that, once again, both my recommendation and happens to be our
approach at Chronosphere.
I think that that's a fair way of viewing it.
It's a constant challenge, too, just because increasingly, you mentioned the dog
earlier, for example. I will say that for years, I have been asked whether or not at the Duckbill
Group, we look at Azure bills or GCP bills. No, we are pure AWS. Recently, we started to hear that
same inquiry specifically around Datadog to the point where it has become a board level concern
at very large companies. And that is a challenge on some level. I don't deviate from my typical path of, I fix AWS bills and that's
enough impossible problems for one lifetime. But there is a strong sense of you want to record as
much as possible for a variety of excellent reasons, but there's an implicit cost to doing
that. And in many cases, the cost of observability becomes a massive contributor to the overall cost. Netflix has
said in talks before that they're effectively an observability company that also happens to
stream movies just because it takes so much effort, engineering, and raw computing resources
in order to get that data and do something actionable with it.
It's a hard problem.
It's a huge problem.
And it's a big part of why I work at Chronosphere, to be honest, because when I was, you know,
towards the tail end at my previous company in cloud cost management, I had a lot of customers
coming to me saying, hey, when are you going to tackle our dog or our New Relic or whatever?
Similar to the experience you're having now, Corey, this was happening to me three, four years ago.
And I noticed that there was definitely a correlation between people who are having
these really big challenges with their observability bills and people that were
adopting like Kubernetes and microservices and cloud native. And it was around that time that
I met the Chronosphere team, which is exactly what we do, right? We focus on observability for these cloud native environments where observability data
just goes wild. We see 10x, 20x as much observability data, and that's what's driving
up these costs. And yeah, it is becoming a board level concern. I mean, and coming back to the
concept of strategy, like if observability
is the second or third most expensive item in your engineering bill, like obviously cloud
infrastructure is number one, and number two or number three is probably observability. How can you not
have a strategy for that? How can this be something the board asks you about, and you're like, what are
we trying to get out of this? What's our purpose? Troubleshooting?
Right, because it turns into business metrics as well.
It's not just about, is the site up or not?
There's a, like, one of the things that always drove me nuts, not just in the observability space,
but even in cloud costing, is where,
okay, your costs have gone up this week.
So you get a frowny face or it's in red,
there's traffic light coloring.
Cool. But for a
lot of architectures and a lot of customers, that's because you're doing a lot more volume.
That translates directly into increased revenues, increased things you care about.
You don't have the position or the context to say that's good or that's bad. It simply is.
And you can start deriving business insight from that. And I think that is the real observability
story that I think has largely gone untold at tech conferences, at least.
It's so right. I mean, spending more on something is not inherently bad if you're getting more value
out of it. And definitely a challenge on the cloud cost management side. My costs are going up,
but my revenue is going up a lot faster, so I'm okay. And I think part of it is, like, you
know, we put observability in this box of like, it's for low-level troubleshooting, but really if you step back
and think about it, there's a lot of larger, bigger picture initiatives that observability
can contribute to in an org, like digital transformation. I know that's a buzzword,
but like, that is a legit thing that a lot of CTOs are out there thinking about. Like,
how do we, you know, get out of the tech debt world? How do we get into cloud native?
Maybe it's developer efficiency.
God, there's a lot of people talking
about developer efficiency.
Last week at KubeCon, that was one of the big, big topics.
I mean, and yeah, what about cost savings?
To me, we've put observability in a smaller box
and it needs to bust out.
And I see this also in our customer base.
Customers like DoorDash use observability not just to look at their infrastructure and their applications, but also to look at their business.
At any given minute, they know how many dashers are on the road, how many orders are being placed, cut by geos, actually down to the second.
And they can use that to make decisions. This is one of those things that I always found a little strange coming from the world of running systems in large environments to fixing AWS bills.
There's nothing that even resembles a fast, reactive response in the world of AWS billing.
You wind up with a runaway bill.
They're going to resolve that over a period of weeks on Seattle business hours. If you wind up spinning something up that
creates a whole bunch of very expensive drivers behind your bill, it's going to take three days
in most cases before that starts showing up anywhere that you can reasonably expect to get
at it. The idea of near real time is a lie unless you want to start instrumenting everything that
you're doing to trap the calls and then run cost extrapolation from there.
That's hard to do. Observability is a very different story where latencies start to matter,
where being able to get leading indicators of certain events, be they technical or business,
start to be very important. But it seems like it's so hard to wind up getting there from
where most people are. Because I know we like to talk dismissively about the past, but let's face it. Conferenceware is the stuff we're the proudest of.
The reality is the burning dumpster of regret in our data centers that still also drives giant
piles of revenue. So you can't turn it off, nor would you want to, but you feel bad about it as
a result. It just feels like it's such a big leap. It is a big leap. And I think the very first step,
I would say, is trying to get to this point of clarity and being honest with yourself about
where you're at and where you want to be. And sometimes not making a choice is a choice,
right, as well. So sticking with the status quo is making a choice. And so as we get into things
like the holiday season right now, and I know there's going to be people that are on call 24-7 during the holidays,
potentially to keep something that's just duct taped together,
barely up and running, I'm making a choice.
You're making a choice to do that.
So I think that's like the first step is the kind of,
at least acknowledging where you're at, where you want to be.
And if you're not going to make a change,
just understanding the cost and being realistic about it.
Yeah, being realistic, I think, is one of the hardest challenges because it's easy to wind up
going for the aspirational story of in the future when everything's great. Like, okay,
cool. I appreciate that you need to plant that flag on a hill somewhere. What's the next step?
What can we get done by the end of this week that materially improves us from where we started the
week? And I think that with the aspirational conferenceware
stories, it's hard to break that down into things that are actionable, that don't feel like they're
going to be an interminable slog across your entire existing environment. No, I get it. And
for things like, you know, instrumenting and adding tracing and adding OTEL, a lot of the time,
the return that you get on that investment is, it's not quite like I put a dollar in and get a dollar out.
I mean, something like tracing, you can't get to 60 percent instrumentation and get 60 percent of the value.
You need to be able to get to like 80, 90 percent and then you'll get a huge amount of value.
So it's sort of like you're trudging up this hill, you're trudging up this hill.
And then finally you get to the plateau and it's beautiful.
But that hill is steep and it's long and it's not
pretty. And I don't know what to say other than there's a plateau near the top, and the
companies that do this well really get a ton of value out of it. And that's the dream: we want
to help customers get up that hill. But yeah, I'm not going to lie. The hill can be steep.
One thing that I find interesting is there's almost a bimodal distribution in companies that I talk to.
On the one side, you have companies like, I don't know, a Chronosphere is a good example of this.
Presumably, you have a cloud bill somewhere, and the majority of your cloud spend will be on what amounts to a single application,
probably in your case called, I don't know, Chronosphere. It shares the name of the company. The other side of that distribution is the large enterprise conglomerates where they're spending, I don't know, $400 million a year on cloud, but their largest workload is $3 million.
And it's just a very long tail. And my perspective of the product you have (not you in this metaphor, which gets confusing) is that it feels easier to instrument a Chronosphere-like company that has a primary workload that is the massive driver of most things, get that instrumented, and start getting an observability story around that, than it does to try and go to a giant company and say, okay, 1,500 teams need to all
implement this thing that are all going in different directions. How do you see it playing
out among your customer base if that bimodal distribution holds up in your world? It does
and it doesn't. So first of all, for a lot of our customers, we often start with metrics.
And starting with metrics means Prometheus. And
Prometheus has hundreds of exporters. It is basically built into Kubernetes. So if you're
running Kubernetes, getting Prometheus metrics out is actually not a very big lift. So we find that we
start with Prometheus, we start with getting metrics in, and we can get a lot. I mean,
we have a lot of customers that use us just for metrics and they get a massive amount of value.
But then once they're ready, they can start instrumenting for OTEL and start getting traces
in as well.
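[Editor's note: for readers who want to see how small that starting lift is, here is a minimal sketch using the official prometheus_client Python library; the metric names, label, and port are placeholders, and any Prometheus-compatible backend could scrape the resulting /metrics endpoint.]

```python
# A minimal sketch, assuming the prometheus_client package is installed.
# Metric names, the label, and the port are illustrative placeholders.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["route"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds", ["route"])

def handle_request(route: str) -> None:
    # Record one request and how long it took.
    with LATENCY.labels(route=route).time():
        time.sleep(random.random() / 10)  # stand-in for real work
    REQUESTS.labels(route=route).inc()

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for any Prometheus-compatible scraper
    while True:
        handle_request("/checkout")
```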
And yeah, in larger organizations, it does tend to be one team, one application, one
service, one department that kind of goes at it and gets all that instrumented.
But I've even seen very large organizations, when they get their act together
and decide like, no, we're doing this, they can get OTEL instrumented fairly quickly.
So I guess it's just like lining up. It's more of a people issue than a technical issue a lot
of the time, like getting everyone lined up and making sure that like, yes, we all agree we're
on board, we're going to do this. But usually it's a start-small thing, and it doesn't have to be
all or nothing.
We also just recently added the ability to ingest events, which is actually a really beautiful thing. And it's very, very straightforward. Basically, we just connect to your existing other DevOps
tools. So whether it's like a Buildkite or a GitHub or like a LaunchDarkly, anytime
something happens in one of those tools, it gets registered
as an event in Chronosphere.
And then we overlay those events over your alerts.
So when an alert fires,
then the first thing I do is I go look at the alert page
and it says, hey, someone did a deploy five minutes ago
or there was a feature flag flip three minutes ago.
I solved the problem right then.
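[Editor's note: the deploy-marker idea generalizes beyond any one product. Below is a hedged sketch of the CI side; the EVENTS_ENDPOINT URL, token variable, and payload fields are hypothetical stand-ins, not Chronosphere's actual events API, so the real contract would come from the vendor's docs.]

```python
# A hedged sketch of emitting a deploy event from a CI/CD pipeline.
# EVENTS_ENDPOINT, EVENTS_API_TOKEN, and the payload fields are hypothetical;
# they are not Chronosphere's actual API.
import json
import os
import urllib.request

def emit_deploy_event(service: str, version: str) -> None:
    payload = {"type": "deploy", "service": service, "version": version, "source": "ci"}
    req = urllib.request.Request(
        url=os.environ["EVENTS_ENDPOINT"],
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ["EVENTS_API_TOKEN"],
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        resp.read()

if __name__ == "__main__":
    # Called right after a rollout succeeds, so the marker lines up with any
    # alerts that fire a few minutes later.
    emit_deploy_event("crappy-web-app", os.environ.get("GIT_SHA", "unknown"))
```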
I don't think of this as,
there's not an all or nothing nature to any of this stuff.
Yes, tracing is a little bit of a, you know, like I said, it's one of those things where you have to
make a lot of investment before you get a big reward. But that's not the case
in all areas of observability. Yeah, I would agree. Do you find that there's a
significant, easy early win when customers start adopting Chronosphere? Because one of the problems
that I've found, especially with things that are holistic, and as you talk about tracing,
well, you need to get to a certain point of coverage before you see value. But human psychology
being what it is, you kind of want to be able to demonstrate, oh, see, the mean time to dopamine
needs to come down, to borrow an old phrase. Do you find that some of those easy wins start
to help people see the light?
Because otherwise,
it just feels like a whole bunch of work
for no discernible benefit to them.
Yeah, at least for the
Chronosphere customer base,
one of the areas where we're seeing
a lot of traction this year
is in optimizing the costs,
like coming back to the cost story
of their overall observability bill.
So we have this concept
of the control plane in our product
where all the data that we ingest hits the control plane.
At that point, the customer can look at the data,
analyze it and decide, this is useful, this is not useful.
And actually not just decide that,
but we show them what's useful, what's not useful,
what's being used, what's high cardinality and high cost
but maybe no one's touched.
And then we can make decisions around aggregating it, dropping it, combining it, doing all sorts
of fancy things, downsampling it.
We can do this on the trace side.
We can do it both head-based and tail-based.
On the metric side, it's as it hits the control plane and then streams out.
And then they only pay for the data that we store.
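[Editor's note: the control plane itself is Chronosphere's product, but the kind of decision being described can be illustrated generically. The toy Python below is not Chronosphere's API; the thresholds and field names are invented purely to show the keep/aggregate/downsample/drop idea.]

```python
# A toy illustration of control-plane-style routing decisions for metric series.
# This is not Chronosphere's API; thresholds and fields are invented for the example.
from dataclasses import dataclass

@dataclass
class SeriesStats:
    name: str
    cardinality: int        # number of distinct label combinations
    queries_last_30d: int   # how often anyone actually queried it

def routing_decision(s: SeriesStats) -> str:
    if s.queries_last_30d == 0 and s.cardinality > 10_000:
        return "drop"        # expensive and unused
    if s.queries_last_30d == 0:
        return "aggregate"   # keep a rolled-up version just in case
    if s.cardinality > 100_000:
        return "downsample"  # used, but too granular to store raw
    return "store"           # someone depends on it; pay for it

if __name__ == "__main__":
    for s in [
        SeriesStats("http_requests_total", 1_200, 450),
        SeriesStats("per_pod_debug_gauge", 250_000, 0),
    ]:
        print(s.name, "->", routing_decision(s))
```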
So typically, customers
come on board and immediately reduce
their observability data set by 60%.
Like that's just straight up, that's the average.
I've seen some customers get really aggressive,
get up to like in the 90s where they realize
we're only using 10% of this data.
Let's get rid of the rest of it.
We're not going to pay for it.
So paying a lot less helps in a lot of ways.
It also helps customers get more coverage of their overall
stack. So I was talking recently with an autonomous vehicle company that came to us
from the dog, and they had made some really tough choices and were no longer monitoring their pre-prod
environments at all because they just couldn't afford to do it anymore. It's like, well, now they can
and we're still saving them money. I think that there's also the downstream
effect of the money saving too. For example, I don't fix observability bills directly,
but why is your CloudWatch bill through the roof or data egress charges in some cases?
It's, oh, because your observability vendor is pounding the crap out of those endpoints
and pulling all your log data across the internet, etc.
And that tends to mean, oh, yeah, it's not just the first order effect.
It's the second, third, and fourth order effects this winds up having.
It becomes almost a holistic challenge.
I think that trying to put observability in its own bucket on some level, when you're
looking at it from a cost perspective, starts to be a, I guess, a structure that makes less
and less sense in the fullness of time.
Yeah, I would agree with that.
I think that just looking at the bill from your vendor is one very small piece of the
overall cost you're incurring.
I mean, all of the things you mentioned, the egress, the CloudWatch, the other services it's impacting. What about the people?
Yeah, it sure is great that your team works for free.
Exactly. Right. And, you know, it makes me think a little bit about that viral
story about that particular company with a certain vendor that had a $65 million per year
observability bill. And that impacted not just them,
but it showed up in both vendors' financial filings.
How did you get there?
How did you get to that point?
And I think this all comes back to the value
and the ROI equation.
Yes, we can all sit in our armchairs and be like,
well, that was dumb.
But I know there are very smart people there
that just got into a bad situation
by kicking the can down the road
and not thinking about the strategy.
Absolutely.
I really want to thank you
for taking the time to speak with me
about, I guess, the bigger picture questions
rather than the nuts and bolts of a product.
I like understanding the overall view
that drives a lot of these things.
I don't feel I get to have enough
of those conversations some weeks.
So thank you for humoring me.
If people want to learn more,
where's the best place for them to go?
So they should definitely check out
the Chronosphere website.
Brand new, beautiful spanking new website,
chronosphere.io.
And you can also find me on LinkedIn.
I'm not really on the Twitter so much anymore,
but I'd love to chat with you on LinkedIn
and hear what you have to say.
And we will, of course, put links to all of that in the show notes.
Thank you so much for taking the time to speak with me.
It's appreciated.
Thank you, Corey.
Always fun.
Rachel Dines, Head of Product and Solutions Marketing at Chronosphere.
This has been a featured guest episode brought to us by our friends at Chronosphere.
And I'm Corey Quinn.
If you've enjoyed this podcast,
please leave a five-star review on your podcast platform of choice.
Whereas if you've hated this podcast,
please leave a five-star review
on your podcast platform of choice,
along with an angry and insulting comment
that I will one day read
once I finish building my highly available
rsyslog system to consume it with.
If your AWS bill keeps rising
and your blood pressure is doing the same,
then you need the Duckbill Group.
We help companies fix their AWS bill
by making it smaller and less horrifying.
The Duckbill Group works for you, not AWS.
We tailor recommendations to your business and we get to the point.
Visit duckbillgroup.com to get started.