Screaming in the Cloud - Insights from a Vendor Insider with Ian Smith
Episode Date: September 19, 2024

It turns out, you don’t need to step outside to observe the clouds. On this episode, we’re joined by Chronosphere Field CTO Ian Smith. He and Corey delve into the innovative solutions Chronosphere offers, share insights from Ian’s experience in the industry, and discuss the future of cloud-native technologies. Whether you're a seasoned cloud professional or new to the field, this conversation with Ian Smith is packed with valuable perspectives and actionable takeaways.

Show Highlights:
(0:00) Intro
(0:42) Chronosphere sponsor read
(1:53) The role of Chief of Staff at Chronosphere
(2:45) Getting recognized in the Gartner Magic Quadrant
(4:42) Talking about the buying process
(8:26) The importance of observability
(10:18) Guiding customers as a vendor
(12:19) Chronosphere sponsor read
(12:46) What should you do as an observability buyer
(16:01) Helping orgs understand observability
(19:56) Avoiding toxically positive endorsements
(24:15) Being transparent as a vendor
(27:43) The myth of "winner take all"
(30:02) Short term fixes vs. long term solutions
(33:54) Where you can find more from Ian and Chronosphere

About Ian Smith
Ian Smith is Field CTO at Chronosphere where he works across sales, marketing, engineering and product to deliver better insights and outcomes to observability teams supporting high-scale cloud-native environments. Previously, he worked with observability teams across the software industry in pre-sales roles at New Relic, Wavefront, PagerDuty and Lightstep.

Links
Chronosphere: https://chronosphere.io/?utm_source=duckbill-group&utm_medium=podcast
Ian’s Twitter: https://x.com/datasmithing
Ian’s LinkedIn: https://www.linkedin.com/in/ismith314159/

Sponsor
Chronosphere: https://chronosphere.io/?utm_source=duckbill-group&utm_medium=podcast
Transcript
We hold ourselves to a really high bar internally, so we feel comfortable at these things.
Maybe not entirely comfortable. It's still uncomfortable when someone says,
well, tell us about a downtime incident. We're human. We've still had some downtime
incidents, but we have incredibly high reliability.
Welcome to Screaming in the Cloud. I'm Corey Quinn, back after a hiatus.
This promoted guest episode is
brought to us by our friends at Chronosphere, and they have brought a returning guest. When last I
spoke to you, Ian Smith, you were the field CTO at Chronosphere, and now you're the chief of staff.
Congratulations or condolences? Oh, definitely condolences. Definitely for Chronosphere, at least.
Complicated environments lead to limited insight.
This means many businesses are flying blind
instead of using their observability data to make decisions,
and engineering teams struggle to remediate quickly
as they sift through piles of unnecessary data.
That's why Chronosphere is on a mission
to help you take back control with end-to-end visibility
and centralized governance to choose and harness the most useful
data. See why Chronosphere was named a leader in the 2024 Gartner Magic Quadrant for Observability
Platforms at chronosphere.io. So if you talk to a bunch of different folks who are various
chiefs of staff, chief of staves, however it pluralizes, you're going to get
more answers than people you ask as to what the role is. What is a chief of staff in the
Chronosphere context? At least in the Chronosphere context, the chief of staff component of my role
is really focused on the overall effectiveness of the executive team, making sure they're set up for
success so that they can focus on the departmental needs and obviously the company's needs.
And then there's a second layer to it where I sort of have this hybrid role between chief of staff and head of strategy.
I look overarchingly at, you know, what is the company doing? How are we gauging the market?
Not just how are we selling, but what are we presenting to the market? What's our narrative
there? And directionally, where are we going over the next one, two, three, five years?
So it's a very interesting role, multifaceted. And I get to do a whole bunch
of cool things like this podcast. Which is always fun and exciting. Other cool things you've done
lately include making an appearance in the Gartner Magic Quadrant for, I'm sorry, I forget which one.
They have so many quadrants these days that at this point, it feels like they're inventing new
dimensions. They've gone three-dimensional. Why they're still talking in terms of two,
I don't know. But first, congratulations.
That's a big deal.
Yeah, thank you.
Yeah, not just debuting in the Magic Quadrant, but debuting as a leader in the observability
space, given the relatively short tenure of the company overall, it's been a huge amount
of work by the team.
I definitely don't take any credit for that.
But it's been very gratifying to see that work pay off and obviously see the interest
from the market based off that recognition by Gartner. It's always interesting. I used to
take the idea with, I guess, a fair amount of salt: oh, this is whatever all the cool kids are talking
about. And okay, if someone's in the quadrant, is that just pay-for-play? What is it? And the more
I talk to people, the less I believe that that's accurate. There's value in, especially when you're a large company,
figuring out what other companies are doing in a realistic way.
And frankly, having something to point at
that helps shore up and justify that decision
with something a little bit more scientific
than just vibes.
Yeah, absolutely.
And sometimes it can be a good starting point.
And there's a lot of depth of analysis
that Gartner goes through.
I can say from our experience,
they definitely make you jump through a lot of hoops. They introspect not just what the product
does, but how are you positioning it? Where are you going? There's obviously a vision component
to the Magic Quadrant as well, but all of that should condense down into not just that nice
graphic, but obviously the depth of analysis behind it and the report. And the report itself isn't
just a pathway to what do you buy,
but it's what's going to be relevant to you. And obviously, as you consider solutions,
you'd be thinking about, okay, well, what's relevant to me? And then take that into
reading something like the Gartner report and identify, well, they're strong at these things,
but maybe these things aren't super important to me. Maybe someone who is placed elsewhere
in the quadrant is an ideal fit for me based off what I need.
You've been talking a bit lately about the buying process, and that resonates with my philosophy on
things. For a little bit of context on this, one of the things I do as a consultant is help
negotiate AWS contracts on behalf of customers with AWS. And it's not necessarily that I'm
a fantastic negotiator so much as it is that,
as a customer, you deal with this every, what, year, maybe two years, if you have a number of
different contracts, AWS deals with this multiple times a week. So do we. So at some point you wind
up sort of solving for that experience differential where, oh, you're doing this all the time. We're
doing this all the time. Great. We can pick out the ebbs and flows and the sea changes that happen. Whereas if you only do it the way you buy
a car, heading out once every five years, or ten, however often you do it, and me, I buy
things and drive them into the ground. But at that point, it's great, but let's
figure out, how do I do this? And I feel like a babe in the woods every time I go through that.
Having someone who does this day in, day out is valuable.
I hadn't thought about doing that from an observability perspective, which is what makes
this a bit new.
Tell me more.
Yeah.
I mean, just like the car analogy, I, like you, buy a car.
I've spent a lot of time and effort on it.
It can be quite stressful.
And I might keep the car in this case.
I think I've actually had my most recent car for about 10 years.
And I've been thinking about buying a new one recently.
One of the things that I went looking for was, well, what material is out there for people
like myself?
And obviously there's an explosion of content out there.
I found an interesting trend on YouTube that there are these people who've spent 30, 40
years in the car selling industry who now have pivoted to essentially be content creators.
And they talk
about, well, these are all the tricks, here's all the hints. And yes, there's aspects around
negotiation, right? And everyone has a procurement team when you work at a company, but there's those
aspects of like, well, how do you really think about what you need and how do you translate that
into your purchasing and then going and getting that outcome that you want, which is maybe it's
a car that holds six people. Maybe it's a car that allows you to take those great road trips. And I think in a similar
fashion, there's a lot of things in the observability space that are somewhat stacked
against you. You mentioned the fact that, you know, a vendor, let's say Amazon, or even like
Chronosphere or other vendors in the observability space, we do this all the time. I've spent 10
years in a pre-sale sense. I've worked with so many different customers.
And I'll be honest, there are some tricks that we perform.
And there are some things that we bank on in terms of that unpreparedness for the buyer,
generally the technical decision maker.
So it's not someone who buys for a living, unlike procurement.
But even that buying for a procurement thing, they are deferring a lot to the technical
decision maker.
They're saying, you're the expert,
you're the one who's figured this out.
Well, we can hope anyway.
Sometimes that doesn't hold true
and no one likes the results there.
Exactly.
But you're going through this process,
you're trying to collect requirements,
you're understanding where this solution
might fit into the business.
And you're essentially making the case
that we need to go spend money
and not just spend money,
but spend time and effort doing implementation. Most organizations already have an observability solution, so there's a migration
component that's very top of mind for people, plus the disruption to all of the end users, which are
generally all of engineering. So there's a lot of weight behind this. But when you think about it,
just like with the car, every 10 years in my case, how often are you buying an observability solution?
And maybe there are multiple solutions in the mix, but if you're buying a platform, let's say, that has
all of your key components in it, how often are you really doing that? As an individual,
maybe across a 10-year period, you might do that once, twice, three times if you're hopping between
a lot of organizations that really need improvement. And meanwhile, on the vendor side,
they're doing this all the time. Are you packaging
all of this stuff up neatly? Do you have a clear process? Or are you just going and talking to
people and seeing sort of who checks out? You talk about the car analogy, and I think it holds
very well. I mentioned the contract negotiation analogy. What makes observability special in this
sense, as opposed to effectively any form of enterprise SaaS? I mean, I think there are lots
of commonalities, but observability in particular, for me, I think about the impact observability has on really the
core of what the company is doing. Most software companies, most even enterprise businesses really
rely on software to deliver experiences to their customers and generate revenue. And it has a big
impact on a lot of their employees. It has a big impact on their customers. For me, the
reason why I've spent so long in observability is its ability to go and impact the industry and sort of the digital
society we have as a whole. But are we really doing the right things as an observability sector
in the industry? Are we doing the right things in terms of buying the right tools and pattern
matching? If you get the observability solution wrong, if, say, for example,
you buy something that's incredibly unreliable for you,
it was great in the pilot,
but it doesn't hit the reliability that you need
because you then need to be able
to provide reliability to your customers.
Your business is going to suffer.
The quality of life of your engineers is going to suffer.
Your observability team is going to suffer as well.
And why?
Like we don't need that to happen.
And ultimately, if the observability industry as a
whole doesn't get better, then the software industry is not going to get better. That's me
maybe tilting at windmills a little bit, but that's part of why I've been really passionate about
observability. So as we think about that sort of purchase and move to observability solutions,
picking the right solution, and look, I work for Chronosphere, but I'm not saying Chronosphere
is the right fit for every single person, picking the right thing, using resources like Gartner,
coming in prepared, focusing on those outcomes rather than tick-box features, that's really
important to getting that rock-solid software and facilitating development of all the things
that we rely on on a daily basis.
I don't disagree with what you're saying,
but you also work at a vendor.
You clearly have a vested interest
in that decision going a particular way.
So is talking about that experience differential,
just you taking a victory lap?
Are you trying to educate people
as far as how to handle this?
Is this just a, hee hee, here's what we're doing
and there's nothing you can do to stop us?
How are you envisioning this?
Definitely not. And I would say that, you know, as we work with really sophisticated customers,
it makes us better. And Chronosphere is focused on large organizations, organizations with very
sophisticated needs. And we take a very partnership driven approach, right? We're not going out there
and trying to plunder these customers and try and trick them because, you know, we have amazing
retention, all those kinds of things. That's the victory lap component. But at the end of the day,
us listening to what the customers want, like them being able to push us in certain directions has
led to us having a more robust experience. And it means that we can be more guiding to these
customers. So when a customer comes to us and says, okay, I just want to sort of evaluate and
take a look at something and that's going to be great.
It's like, well, have you thought about requirements?
Have you thought about all the different stakeholders in your organization?
And oftentimes that's a great way of us building good credibility with the customer and also making sure that we are the right solution for them.
The last thing as a vendor that you want to do, particularly as someone who provides a lot of white glove engagement, is you don't want to go and spend a lot of time, effort, energy. And in our case,
we stand up dedicated infrastructure for our customers and even our pilots. We don't want to
go and spend those things unnecessarily. So if we can have those good conversations, if we can
help people be prepared, right time, right place, right ideas, then the buying process is actually
going to be easier for us. And at the end of the day, if the whole industry is sort of lifted up and the bar is raised,
then, since we're already there, we're going to be in a good place as a vendor.
But naturally, as I said before, I've spent the last decade in this space.
I want the industry to get better.
And I think that leads to better outcomes and happier customers.
Complicated environments lead to limited insight.
This means many businesses are flying blind
instead of using their observability data to make decisions,
and engineering teams struggle to remediate quickly
as they sift through piles of unnecessary data.
That's why Chronosphere is on a mission
to help you take back control with end-to-end visibility
and centralized governance to choose and harness the most useful data.
See why Chronosphere was named a leader in the 2024 Gartner Magic Quadrant for Observability Platforms at chronosphere.io.
So, if I'm a customer and I'm looking at an observability solution, I can generally say, from my own experience living the SRE life,
that a few things are almost certainly true.
One, I have a problem.
It might look like a reliability problem,
but blessed few places start on day one
building an app with an eye toward,
and as we go, we're going to instrument this thing.
And even if they do,
they get it hilariously wrong from positions of scale,
how the product winds up evolving, et cetera.
So there's a painful problem that they have to deal with. And if there's one area that I think observability vendors
compete in the most, and this is obviously, as someone who runs a sponsored podcast, more visible
to me than some others, it seems to be with marketing dollars positioning what they do
differently, more so than it is, in many cases, technical differentiation. I'm not saying that
you or any given company necessarily falls into this, but it does seem like there's a lot of, all of these options are
pretty good along some baseline stuff and then tend to differentiate around the margins. So if
you're a babe in the woods buyer going in to purchase a solution in this space, because you
have an expensive problem and you're getting yelled at most likely, what should they do?
What should I
do as a new buyer from your perspective? Yeah. I mean, as you point out, there are very solid,
I would say, commoditized capabilities across the board, so you can think of
the margins as becoming more important. A lot of organizations, or a lot
of technical evaluators, think about products and technical features, but instead you should think about the outcomes. We've talked about this before, but
yes, the features lead to outcomes, but how do you evaluate that kind of thing? A lot of times
people go, okay, give me a list of features. And then in the pilot, I'm going to check those
individual features. A good pattern I see, it can be hard to set up, but a good pattern I see from
a sophisticated buyer is, hey, we want to go put data from a production system into the pilot environment. And then we want to be able to compare
the solution that we have now with a vendor or maybe multiple vendor solutions. And let's say
that we had an incident last week. What if we tried to investigate that thing from a workflow
perspective? And importantly, and there's layers to this, right? You might think, oh, that's a good
idea. But then who do you put in the seat to investigate that thing? One of the anti-patterns
that I see a lot of is, oh, we'll put the most sophisticated observability user in the company
in that hot seat. They already know about the incident. They're an expert in the current tool.
They're intended to be an expert in the future tool. And that's a signal.
But your experts are not indicative of the entire set of engineers in your organization.
This is a common problem that I've encountered where, especially when observability was new,
it felt like, great, can you even explain to me what your company or product does?
And the answer is yes.
But first, go and do the tutorial, which is three to five years of experience as an SRE in a scaled-out environment; then you'll understand what we're talking about. Now, the state
of explaining these things has dramatically improved, which is necessary,
as someone who's trying to forget those three to five years of experience myself. But it was always a,
you must be at least this smart to even understand what we're talking about. So you have the experts
inside of a company going for this, and then expecting a bunch of disparate teams
to suddenly all get on board with this.
It's a recipe for disappointment.
Right.
And so this comes to, I guess,
maybe the summation of the point that I make on this,
which is you need to think about what outcome
that you're trying to get.
And at the end of the day, it's generally about people.
It's not about how many logs can I go send through a system. It's
the value they're trying to get. The value I'm trying to get is that my engineers can respond
to issues and investigate issues. Well, what do my engineers look like? How is that representative
of my engineers? How do I get that kind of signal? And then you think about this further. Okay,
well, when do I need it the most? I need it most when things are busted and broken.
And obviously, Chronosphere is a SaaS solution, for example.
But, and I'm slightly biased here, if you think about it, a lot of smart engineers are like,
well, I love to tinker with things.
Maybe I should go deploy it in my own environment.
Okay, if my environment's having issues and my observability solution is co-located on the same infrastructure as the application having issues, am I going to have visibility at all? It's the bootstrapping problem, is how I like to think about that, where I was
working at a web hosting company many moons ago, and I was brought in as the voice of experience
on a relatively junior team that was running the data center ops. Great. Awesome. So where's the
runbook on how to get this place up from a cold start? Oh, it's in Confluence. And where's
Confluence? Oh, it's on that server over there. And I looked at the person and they said,
we should print that out and put it over here. Like, there we go. That's the fix.
Make sure that, for example, if your hypervisor needs DNS servers to come up,
maybe don't put the DNS resolvers on the VM that lives inside of a guest on that hypervisor.
It's the idea of make sure that if nothing else,
you can figure out that things are down.
Every cloud provider that I am aware of
has something, somewhere, running on competitors
that just gives the outside perspective of,
suddenly, are we just internally all talking to ourselves?
Let us know if somehow from the rest of the internet,
we drop off.
That's one of the first things I would always build out when I took a role:
something running on my Linux box at home.
It was great.
Can I wind up hitting the website?
If all else fails, let me know.
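To make that concrete, a check like this can be tiny. Here is a minimal sketch in Python of the kind of outside-in probe Corey describes, with the URL and the alert action as stand-in placeholders rather than anyone's real setup:

import time
import urllib.request

SITE_URL = "https://example.com/"  # placeholder: your public endpoint
CHECK_INTERVAL_SECONDS = 60

def site_is_up(url, timeout=10):
    # Returns True if the URL answers with a non-error HTTP status.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return 200 <= response.status < 400
    except Exception:
        # DNS failure, timeout, connection refused, HTTP error: all count as "down"
        return False

while True:
    if not site_is_up(SITE_URL):
        # Swap this print for a pager, SMS, or email -- anything that does
        # not depend on the infrastructure you're trying to observe.
        print("Site unreachable from the outside world")
    time.sleep(CHECK_INTERVAL_SECONDS)

The only design constraint that matters is the one from the anecdote: the probe, and whatever it uses to alert you, must live somewhere your own outage can't take down with it.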
Right.
And so at the end of the day, this is a reliability problem, right?
And so again, you have many layers to it.
Okay.
Do I want to be personally responsible for the observability solution's uptime when I'm also simultaneously trying to solve
my own application's uptime?
Maybe, maybe not.
Okay, but then if I do go down the SaaS pathway,
what does reliability mean?
What kind of uptime am I guaranteed?
That stuff's on paper.
So have you talked to someone
who has actually used this solution for a long time
and what has their experience been?
And there's nuances to this.
It can be, okay, what happens when things go down? What kind of guarantees does the solution
provide you? What kind of mitigation? Do they even notify you, right? Because if the
observability solution goes quiet in the dark and you don't know about it, you can ask the whole,
you know, does a tree make a sound if it falls in the forest, question. It's a sketchy thing to rely upon.
But again, there's many layers.
What does reliability mean to you and how do you get signal from it?
And on that signal piece, regardless of reliability or anything else, I think one of the most
important things you can do is actually go and talk in depth with a customer that hopefully
looks like you and prioritizes the same things as you.
My philosophy, and maybe this is actually a question for you from your perspective,
having been on the selling side of it, I tend to be more skeptical of customer references that
I'm pointed at by the vendor than ones that I find myself organically. And it's not necessarily
that I'm out there like, all right, who has an ax to grind against this particular company?
Invariably, when someone has too much of an ax to grind, they're generally a former employee
and things didn't end well.
But I want to hear the real story, not something that you're doing out of some contractual
commitment, because on the back end of enterprise deals, there's always a, we can use you as a
reference and testimonial and you'll say positive things, style approach.
I really want to get the real dope.
How do you avoid the problem of, I guess,
in the perception that anyone you introduce me to is only going to say glowing things?
I think there's, again, layers to it. One, there's the worst case scenario in my mind,
which is the written case study. You can't ask questions. Everything's been heavily edited,
right? Not the one that you're- What I want to see in the written case study is the name of
the person. Then I'm going to go track that person down
and say, what's the real dope here?
Like, wait, I have a case study?
Great.
Now we're cooking with gas.
Right.
So on the references thing, like a live reference,
one of the things that I think every customer should expect
is you should be able to ask for a reference
pretty early on in the process, right?
It shouldn't be, oh oh this is the last thing
before we sign the contract because from a tactics perspective i've worked at companies that do that
and the explicit tactic that's described internally is look even if that call doesn't go super well
they're already so far down the process like it's just it's just got a ticker box it doesn't have to
be a ringing endorsement they don't need to get all of their questions answered. They're so close to signing. But if you have that reference earlier,
that could have a much bigger impact. Another expectation you should have is you shouldn't
expect to have anyone from the vendor on the call, right? Oh, absolutely. No one is going to,
well, not no one. I'm a jerk, but most people are not and will not talk smack about a vendor
directly in front of the vendor. Whereas I have always found the most honest approaches that I've gotten have been over
beers outside of an office one-on-one.
And to be clear, I don't want to give the wrong impression here because it sounds like,
all right, tell me everything that's terrible about it, because there is some of that.
But it's also people are generally pretty even-handed and fair about these things.
And if not, even if you wind up with an overwhelmingly negative person, I found it useful to, okay, now continue to tell me something positive about it.
And if the answer is no, then okay, that tells me something.
And also, you need to do this more than once.
You cannot make an informed decision based on one data point.
Right.
Other things that I would think about is, does this customer look like me?
In the sense of, do they prioritize the same things?
Are they at a similar scale?
Are they using the same product features?
How long have they been a customer?
If they've been a customer for three months,
it's probably still the honeymoon phase.
Has it been long enough to see potentially bad behaviors?
And to that point, what are the questions you're going to ask?
If you just go in and say,
okay, I'm going to do this reference call. One of the things that a vendor is going to do, and I've
done this, is brief the reference really heavily, right? These are the things that I want you to talk about.
But if you, as the prospective customer, are talking to the reference, say, look, that's great,
but I have some very specific questions. Questions that I would recommend asking include, for example,
hey, tell me what happened when something actually went wrong. Because you can talk about the happy path all you want, but
when, hey, tell me when an outage happened. Tell me when they didn't manage to solve a problem for
you. Like you had a, maybe in the observability space, you had a big outage and ultimately the
tool didn't help you at all. When you took that problem back, what was the dialogue like?
What happens when you
file support tickets? What happens when you have a massive surge in data volumes? What is that
experience like? How has the negotiation been? Have you dealt with overages? So all of those
sort of potentially negative circumstances, and everyone's had them, right? If you work with
vendor tools, you've had negative circumstances. As you say, just go and ask about what is it like when,
and have a list of those, the things that you should be worried about for any given vendor,
but particularly for observability vendors, have that subset. If their answer is like,
that's never happened, maybe be a little suspect. Either they don't want to tell you, or it really
hasn't happened, so they can't give you the signal you're looking for.
It is interesting to me, just as an aside, that you are advocating for this perspective, because it doesn't matter who the vendor is.
This is a very agnostic thing. At Chronosphere, if a customer or prospective customer goes through
this process, they will uncover negative things about Chronosphere by definition. That is the
approach. I don't think that anything you're saying particularly advantages Chronosphere
versus any other vendor, other than what are the
legitimate customer experiences that you have had in the marketplace? So it's laudable. I'm sort of
surprised you're allowed to do it. You're the chief of staff. You're allowed to do whatever
you say you are. My mistake. This sounds like a great way for a junior employee to get themselves surprise fired.
The theory ultimately, as I mentioned before, is that it's raising that bar, right? And we hold ourselves to a really high bar internally. So we feel comfortable at these things, maybe not
entirely comfortable. It's still uncomfortable when someone says, well, tell us about a downtime
incident. We're human. We've still had some downtime incidents, but we have incredibly
high reliability and we have things that we can point to that lead to that. It's not
a matter of luck. We've been supporting customers in production for years at this point and some of
them continuously. Being able to say, hey, we've provided between four and five nines of uptime to
that customer, not on a monthly basis, but for the entire duration of the customer's lifetime,
that is not a mistake. And so if, for example, reliability is a really important thing to you,
we have those data points to point to. And not just the, well, it's been this number, but this
is why. And we can explain the why and the effort and investment that we put into these things.
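For a sense of scale, "four to five nines" is a very specific downtime budget. A rough back-of-the-envelope sketch, assuming a 365-day year rather than any particular vendor's contractual SLA:

# Rough arithmetic for what "N nines" of uptime allows per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for uptime in (0.999, 0.9999, 0.99999):
    downtime = MINUTES_PER_YEAR * (1 - uptime)
    print(f"{uptime:.3%} uptime -> about {downtime:,.1f} minutes of downtime per year")

# Three nines allows roughly 8.8 hours a year; four nines about 53 minutes;
# five nines barely five minutes.

That's also why the monthly-versus-lifetime distinction Ian draws matters: a monthly SLA resets the budget every 30 days, while a cumulative number over years leaves no room to hide a bad quarter.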
And everything should be backed up. So if we are willing to put our money where our mouth is
and do these things, then everyone else should.
And as I said before, it's like you've raised the bar
and you set people's expectations high.
Our belief is that we will be able to clear the bar,
and for the types of customers that care about the things
that we've built our product, and not just the product
but also the vendor experience, around,
we will continue to be successful as we have been.
But ultimately, as I mentioned before,
there's a slightly selfish aspect
beyond the success of Chronosphere,
which is if customers come prepared,
it actually makes our lives easier.
What I see a lot of the time
is that a customer will come to us
and we'll have some of these conversations.
They'll be like, great, we hadn't thought about that stuff.
We need to come back to you in three months' time.
Yeah, it's also sometimes, I guess the worst possible scenario for a vendor is when you
have one prospect talking to an existing customer on the reference check approach, and they
convince the existing customer to go in a different direction.
I mean, that's got to be a weird experience where it's like, hey, that's a good point.
They were terrible at this.
We should look into something else.
I don't imagine that happens all that often,
but the idea is funny.
It doesn't happen a lot,
but at the end of the day, as I said,
you don't want to waste as a company, as a business,
you don't want to waste resources
on something that shouldn't have happened
in the first place.
And if you think about the opposite,
if you think about someone
who's maybe going to a competitor of Chronosphere,
maybe hears this, maybe takes some of this advice and drives all of this stuff and goes and does research. As you said, maybe go and find some of your own references, and go talk to someone who maybe evaluated that product and chose someone else.
Maybe that someone else chose Chronosphere, and maybe they get a sense of, why did they do that?
So the hope is, okay, we may deflect one or two who may not have been a good fit for us, but for every one
or two that we deflect, maybe three or four come back in. There's a definite, I think, sense as
well that it is not winner-take-all in the observability space. Everyone I talk to has
what could charitably be called an observability pipeline, but in practical terms, it's generally
something of a mess where there's a huge number of products already deployed. And as much as they say, oh, this new vendor will help us consolidate some of these
things, it's, nope, we're just adding one more to the count. Yes. Yeah. And it's definitely,
I think, a very valid desire to want to consolidate down. But particularly for the
enterprise, I see a fairly large problem in the sense of oftentimes there are point solutions or maybe solutions that
are targeting a relatively narrow use case. They might be platforms, there might be multiple
products in the suite, but they may be targeting a relatively narrow use case. Maybe it's, hey,
we still have some stuff in a data center in Europe, and that all has to be on-prem. You're
like, well, I want something that can be on-prem and SaaS as well. Is that a really good approach?
Because now you're starting to get to lowest common denominator approaches.
And fascinatingly for enterprises
who are in this multifaceted transitionary phase,
I would say two years ago,
for those companies who had historically
large investments in APM
and were looking to sort of the future
and also looking to things like open telemetry,
there was a sense of,
well, what I should look for
is something that could do everything
that my APM solution did, but is very open-source compatible, very cloud-native suitable, and makes your youngest, most tech-savvy engineers, in terms of microservices and whatnot, very, very happy.
I want that.
And we saw a lot of RFPs and we saw a shotgun kind of blast.
And we had conversations with some, some came on board as
customers. There's a large portion who were like, well, we're going to try something else. And
recently in the last, let's say six to nine months, we've seen a lot of them, names that you
would recognize, finance, healthcare, who are like, wow, we tried this thing of either going really
far in the future and consolidating everything there or leaning very much on our historical
vendor who was like, hey, we're going to add open source capabilities.
And it's not worked out, because lowest common denominator is not it.
And this, I think, is a great example of what we've been talking about,
which is what are the outcomes you want?
Not what does the solution look like?
Does it look like a box of donuts?
No, no.
You want to figure out how to feed people.
Yeah.
What is the actual problem you are attempting to solve for here,
as opposed to checking boxes?
I do want to go back, I guess come full circle here, to the car analogy, where I think people often get confused here.
When you're buying a car, that is generally a one-off transaction.
Once the contract is signed and maybe the cooling off period has expired, it's over, it's done.
It has to basically burn your house down in order to be unwound in any
realistic way. Enterprise software observability in particular is not like that. Yes, there's some
sunk cost in building it out and instrumenting things, but if it's bad enough, people will leave
and they certainly are going to complain about it. So what you said a few minutes ago about the idea
that you don't want to have a solution in place that never should have been there, because it causes you more harm down the road. There needs to be a longer-term view than
this quarter's numbers. Yeah, absolutely. And as these environments get more and more complex,
it's not, I sign and I'm done. The cost isn't just whether I churn off or not;
I also have this big implementation cost. And then I've got, well, if I go somewhere else, I've got yet another implementation cost.
It becomes very, very complex. And those are some of the aspects that I think people really
need to think about in terms of just the outcomes and the organization. Observability and the
adoption and the purchase of it is not a technical problem by itself. The technical problems are manifestations
of the organization's pain.
And ultimately, if you aren't thinking
about the organization,
you aren't thinking about the people,
you aren't thinking about the processes,
and you're just focused on the other P,
product or technology,
you're really doing yourself a disservice
and you are very unlikely to be successful.
You might pick the right solution.
The technology might be there, but did you pick the right timing?
Did you have the right people involved?
Did you have the right stakeholders there?
Did you set things up for success for that implementation?
Was it something that you needed to do at a particular time period?
Often, as we go through conversations with customers
who have deeply entrenched SaaS solutions, they realize, oh, I thought I just needed to sign.
And then I have that as close to my previous solution's cancellation period as possible.
But actually, I need multiple months to do some sort of migration or implementation.
And these are factors that may not directly be a, well, what is the product
going to do, question. There can be things that help you import on the product and technology side,
but also from the vendor, do they help you with that? And I'm not talking about getting
nickel-and-dimed by pro services. Do they have experience in doing these migrations? So again,
talk to a customer, a reference, dig into that. What was the migration experience like? Was it
longer or shorter than you expected? How much effort did they expend on their behalf?
Did they provide tooling?
There's all of these things that can really layer on top of just the, great, I have metrics,
logs, and traces.
Done.
It's kind of nice to be able to just check boxes and move on with our day.
But unfortunately, the real world problems don't look like that anymore.
I wish they did.
And that's where, as I said, sophisticated customers have really pushed us to make our buying process and our selling process what it is.
And so we want to be able to push that back down because it does make our lives easier.
If people have clarity about what they're looking for and they can have a real conversation with us
about what that is, it's easy to get alignment straight away and not have to waste a whole bunch of time and resources on something, or invest even more in bringing everyone up to speed, right? So this is part of that
approach of, can we get that message out? Can we get people thinking about it, even just incrementally
a little bit more, before they come and have those conversations? There's obviously benefits, as I said,
potentially people, you know, reflecting back onto us, going, hey, that sounds like an experience that
we want.
But ultimately, if the whole industry gets this way, if every other vendor also promotes this kind of thing, then one, everyone's experience is going to be better. And two, our lives also
get easier. Indeed. I really want to thank you for taking the time to speak with me.
If people want to learn more, I don't know, try and attempt to retain you as a buying consultant
for observability software, where's the best place for them to find you?
Chronosphere.io is our website. I'm always floating around. Our team is absolutely happy
to talk you through this process and not just, hey, here's a demo, but what does a buying process
look like with Chronosphere? What does post-signing look like? What does implementation
look like? These are things that we're happy to talk about upfront. But yeah, you can find me on what used to be
known as Twitter under Datasmithing or find me on LinkedIn. And we'll, of course, put links to that
in the show notes. Thank you so much for taking the time to speak with me. I appreciate it.
Thanks, Corey. Great to talk to you. Ian Smith, Chief of Staff at Chronosphere.
This promoted guest episode has, of course, been brought to us by our friends at Chronosphere. And I'm Corey Quinn. If you've enjoyed this podcast,
please leave a five-star review on your podcast platform of choice. Whereas if you've hated this
podcast, please leave a five-star review on your podcast platform of choice. But I'm still going
to wind up questioning the negative comment you leave by looking for several more testimonials.