No Priors: Artificial Intelligence | Technology | Startups - What happens to Observability If Code is AI-Generated? The Potential for AI in DevOps, with Datadog Co-founder/CEO Olivier Pomel

Episode Date: June 15, 2023

Olivier Pomel, co-founder and CEO of Datadog, the leading observability company, discusses the company's founding story, early product sequencing, platform strategy, and acquisitions. Olivier also shares his thoughts on their more recent expansion into security, and why he's bullish on the potential for AI in DevOps.

** No Priors is taking a summer break! The podcast will be back with new episodes in three weeks. Join us on July 20th for a conversation with Devi Parikh, Research Director in Generative AI at Meta. **

No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode.

Show Links:
Nov 22, 2022: Product-led Growth: Founder 1-on-1 with Datadog and Aiven - Olivier Pomel & Oskari Saarenmaa
May 25, 2022: Datadog, Inc. (DDOG) CEO Olivier Pomel Presents at J.P. Morgan's 50th Annual Global Technology, Media and Communications Conference
Jan 6, 2021: Datadog CEO Olivier Pomel on the cloud computing outlook
Datadog's Official Website
Olivier Pomel LinkedIn

Sign up for new podcasts every week. Email feedback to show@no-priors.com
Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Oliveur

Show Notes:
[00:10] - DevOps and AI Potential
[06:54] - Datadog and Generative AI
[20:40] - Datadog's Acquisition and Expansion Strategy
[31:46] - LLMs in Automation and Precision
[42:35] - Datadog's Customer Value and Growth

Transcript
Starting point is 00:00:00 Welcome to No Priors. Today we're speaking with Olivier Pomel, the co-founder and CEO of Datadog, the company at the forefront of the DevOps revolution. Datadog is a leading observability and security platform for cloud applications. Its execution and ambition have impressed me for years, especially since learning more about the company after it acquired Sqreen in 2021,
Starting point is 00:00:24 a security startup I was a board member for. I'm excited to be talking about the potential for AI in DevOps. Olivier, welcome to No Priors. Thanks for having me. So let's start with a little bit of personal background. You're French. You've been in the U.S. working on startups since '99. How did you start to think about starting a company, and Datadog in particular? Yes. So yes, I'm from France. I guess nobody's perfect. I'm an engineer also. I got into computers largely through computer graphics. And when I was a kid, I used to follow the demo scene in Europe, you know, which was all about 3D and, you know, doing interesting things in real time. This led me later on to be one of the first authors of VLC, the media player, which, you know, I think is mostly used for viewing illegally downloaded videos. And I should say most of the people who made that successful came in after me, picked up the project and did something fantastic with it after I left for the US. And then I moved to the US to work, of all places, for IBM Research in upstate New York, and ended up staying.
Starting point is 00:01:29 I thought I would stay six months, and I've been here since 1999, so it's been a while. I worked for a number of startups through, I would say, the tail end of the dot-com boom; I arrived right in time for the bust. And after that, you know, I worked for, I think, eight years for an education software company, an education startup based in New York that was doing SaaS for schools. And it was at this company that I spent quite a bit of time with the person who's now my co-founder at Datadog. And that's where we had the idea to start Datadog, basically. I used to run the Dev team there, and he used to run the Ops team.
Starting point is 00:02:08 And we each hired everyone on our teams. We tried hard not to hire jerks. We were very good friends. You know, we've known each other since the IBM days, basically. And we still ended up, you know, with Dev and Ops hating each other, people pointing fingers at each other all day long, big fights. So the starting point for Datadog was not monitoring. It was not even the cloud, initially.
Starting point is 00:02:28 It was, let's get Dev and Ops on the same page. Let's give them a platform, some place they can work together and see the world the same way. Yeah, that's actually quite different than thinking of it as, like, a ticketing relationship, right? A quite siloed relationship between the two areas.
Starting point is 00:02:42 I think most people assume that Data Dog comes from a place that was more like metrics or like we knew the cloud was coming. And I'm sure both these things are true, but it's interesting that the core starting point is really around like Dev andOps collaboration. You are, I think, pretty long NYC and have been challenged on, like, building in for an NYC as, I think, to even attempt before. Like, tell me about your original thinking on that or if it even crossed your mind.
Starting point is 00:03:08 Well, I mean, so I stayed in the U.S. because I loved NYC. You know, that's why I ended up staying here. I love the energy. I love the diversity of the city. I also met my wife in NYC, and she's also not French, you know, and she's also not American, so it also made sense for us to stay in the city. So when we started, it made total sense to start a company in New York. We also knew fantastic engineers we could hire in New York. So it was very obvious on that front.
Starting point is 00:03:35 I would say it was less obvious when we started fundraising, because we didn't come from systems management or observability or anything like that. And we were based in New York, which was not seen as a great place to start an infrastructure company at the time. So I would say for most investors, especially Bay Area investors, I think it was considered some form of mental impairment to stay in New York at the time. It made it harder to fundraise. And I think as a result,
Starting point is 00:04:02 it made us more successful, because we were so scared of getting it wrong and so scared of not being able to fund the company any further that we really doubled down on building the right product, for one thing, but also we built a company that was fairly efficient from day one and hovered around profitability throughout its whole existence, pretty much. And I think, you know, in the long run, it's been an advantage.
Starting point is 00:04:24 Like everything else, you know, everything that's a long-run or long-term advantage turns out to be very difficult in the short term, typically. How else do you think New York has benefited you? Because I feel like now it's kind of an obvious place to start a company, and to your point, when you first got started it was very different. It feels like there were always some good talent pools there. I know Google had a giant office there and Meta set up one.
Starting point is 00:04:43 And, you know, it was really sort of flourishing over the last decade. And now it definitely feels like a very strong standalone ecosystem. But are there other aspects of either recruiting or other things that, you know, you've really benefited from by being in New York? Yeah, I would say there are really two things. The first one is, from a customer perspective, we're sort of out of the echo chamber of the Bay Area, which makes it easier, I would say, to latch on to what really matters to customers.
Starting point is 00:05:08 And what's not just a fantastic idea you told three people, that then got repeated to three others, and then it came back to you, and it sounds even better now. And there's a lot of companies, basically many non-tech companies, in New York you can sell to, and you can get a good idea of what they need, basically. The second aspect I think that benefited us is that it's a bit more difficult to recruit in New York. There's less pure tech talent,
Starting point is 00:05:30 There's less deep tech talent in New York Than there is in the Bay Area But the retention is a lot higher So, you know, if you give people You know, great responsibilities, interesting work, treat them well They're going to stay with you for three, four or five years more, you know Which I think in the Bay Area is pretty much on the, you know, very high end of what you can expect What we see from looking at data, we have data from most of our customers are engineers and
Starting point is 00:05:56 most of our users are engineers. And so we see basically when their individual accounts churn at our customers' organizations, and we see that it's not rare for companies in the barrier to have engineers churn every 18 month. We think it's really hard to build a successful company that way. I think you do have to overinvest so you want to do that. So one last background question before we ask you to talk just a little bit about Datadog today. Can you explain the name? So, yeah, so it's interesting because I'm not a dog person, microphone, and I've had any dogs.
Starting point is 00:06:31 So, yeah, it's interesting, because I'm not a dog person, neither is my co-founder, and I've never had any dogs. In our previous company, we used to name production servers after dogs, and Datadogs were the production databases. And Datadog 17 was a horrible Oracle database that everybody lived in fear of, that had to double in size every six months to support the growth of the business, and that could not go down. So for us, it was the name of pain. It was the old world; it was where we were coming from. So we used it as a codename when we started the product.
Starting point is 00:06:59 And we actually called it Datadog 17. Everybody remembered Datadog. So we kept it. We dropped the 17, so it wouldn't sound like a MySpace handle. And then we had a designer propose, you know, a puppy as the logo, among a sea of, you know, alpha dogs and hunting dogs and things like that. And I think the smartest branding decision we've made was to keep the name and to keep the puppy.
Starting point is 00:07:25 Love it. So Datadog is clearly a leader in observability and security for cloud environments. You've had enormous success. I think you're now approaching a $2 billion run rate, 26,000 customers. You're really mission critical to a variety of different folks who spend in some cases more than $10 million a year on you. But for those listeners we have who may be a little bit less familiar, could you give a quick background on what Datadog provides, almost like a Datadog 101?
Starting point is 00:07:50 Yeah, so what we do is we basically gather all the information there is to gather about the infrastructure customers are running, the applications they're running, how these applications are changing over time, what the developers are doing to them, how the users of these applications are using them, you know, what they are clicking on, where they're going next, what the applications are logging, what they're telling us about what they're doing themselves on the systems. So we cover everything end-to-end, basically. We sell to engineers.
Starting point is 00:08:18 So our customers, I would say the folks who buy our product, are typically the ops teams or DevOps teams in a company. And the vast majority of the users are developers and engineers. And then some of our users are going to be product managers, they're going to be security engineers, they're going to be all of the other functions that gravitate around product development and product operations. How do you think about translating some of those products or areas into the generative AI or LLM world? I know that, you know, obviously, cloud spend is now 25% of IT, and there's been these really big shifts now in terms of adoption of AI, and it's very early,
Starting point is 00:08:53 right? It's extremely early days, at least for this new wave of LLMs. Obviously, we've been using machine learning for 10, 20 years. How do you think about, you know, where observability and other aspects of your product go in this sort of emerging new area? Yeah. So we find it quite exciting, actually. I mean, there are two parts to it. One part is the demand side, you know, what's happening in the market that is driving the use of, you know, compute and building more applications and things like that. And the other side is what we do on the solution side with the product and how we can use generative AI there. On the demand side, it's exciting at so many levels. You know, if you think at the highest level possible about what might happen in the long run, we think there are going to be so many more applications written by so many more people.
Starting point is 00:09:32 It's going to improve the productivity of engineers. And at a high level, you know, if you imagine that one person is going to be 10 times more productive, it means that they're going to write 10 times more stuff, but they're also going to understand what they write 10 times less, just because they don't have the time to dig into everything they do. And as a result, you know, we think it actually moves a lot, or transfers a lot, of the value
Starting point is 00:10:05 from writing the software to then understanding it, securing it, running it, modifying it when it breaks, things like that, which ends up being what we do. So we think, you know, for our industry, it's great in general. From a workload perspective, we already see an explosion of the workloads in terms of providing AI services or consuming AI services. You know, it actually consumes a lot of infrastructure to train those models and to run them. So we're going to see, you know, a lot more of that. We also see a lot of new technologies that are being used there, new components,
Starting point is 00:10:38 this whole new stack that is emerging. So overall, it's exciting at every single level. But to your earlier point, it's still very early. So, you know, it's hard to tell what actually is going to be the killer app for all of that, you know, in six months, in a year, in two years. It's possible that some of the things we've seen with LLMs, where all of a sudden everything is a chatbot, it's possible that's not the way people want to interact with everything, you know, two years from now.
Starting point is 00:11:11 You know, for example, when you start your car, you don't want to play 20 questions with it. You know, you just want to start it. It might be the same thing with a lot of the products that are today starting to implement LLMs. But what I think is pretty certain is that we're going to see an expansion of the use cases, an expansion of workloads, and also maybe an acceleration of the transformations,
Starting point is 00:11:32 whether it's digital transformation or cloud migration, that are making all of that possible. If you want to adopt AI, you actually have to have your data in digital form. I mean, it sounds obvious, but, you know, it's still not the case for most companies. And second, you also have to be in the cloud. You know, how else would you do it? Like, if you tried to build everything on-prem today, you wouldn't even know what to buy
Starting point is 00:11:56 because the technology is changing so fast. So I think it's accelerating all those trends, which is very exciting. I've definitely seen a lot of enterprise buyers sort of change their minds almost on a monthly basis in terms of what they view as the primary components of the stack that they're using. And that could be the specific LLM they're using, or whether they should use a vector database or not. The whole set of components seems to be very rapidly morphing. And you mentioned earlier you're sort of seeing this emergence of a new stack. Are there any specific components that you'd like to call out, or that you think, you know, are going to kind of stick? Or how do you
Starting point is 00:12:26 kind of view the evolution of this area? So I would start by saying it's extremely hard to know what's going to stick in the end. And for us, you know, it's actually a very new place to be. As a company, we've been very good over the past 10 years at understanding which trends are picking up and what actually are going to be the winning platforms, you know, when the world went from VMs to cloud instances, to containers, from containers to managed containers with Kubernetes, from that to serverless. It always took, you know, a year, two, or three years for those technologies to gain mass adoption,
Starting point is 00:13:06 and it was very clear what the killer apps and what the winners were going to be. With generative AI, it's not the case. It's changing so fast, and everybody's exploring all of the various permutations of the stack and all the various technologies so fast, that it's really, really, really hard to tell what's going to stick. You can take a guess. I think the one thing that's been the most surprising was the speed at which the open source ecosystem has been innovating and building
Starting point is 00:13:37 Nobody has quite caught up to open AI yet in terms of for the frontier model leads in the maximum level of performance you can get. But I think we've all been surprised by the amount of new technologies that has come out in the open source that does as good or even better on smaller, more identified use cases.
Starting point is 00:13:54 So when we look at our customers and what they're doing, and we do serve the largest providers of AI but also the largest consumers, we see that today everybody is testing and prototyping with one of the best
Starting point is 00:14:09 API-gated models, typically the OpenAI one, there are a few others, but everyone is also keeping in the back of their minds what they can then bring in-house with an open-source model they train themselves and host themselves,
Starting point is 00:14:23 and what part of the functionality they can break down to which one of those models. So I think we probably will have a very different picture a year or two years from now. Yeah, it definitely feels like the most sophisticated users are basically asking, when can they fall back on the cheapest possible solution and when do they need to use the most advanced technology,
Starting point is 00:14:39 and then how do they effectively route a specific prompt or user action or something else against those? So it's very interesting to watch this evolve.
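The fallback-and-routing pattern Elad describes just above can be sketched roughly as follows. This is a minimal illustration under assumptions: the model names, the complexity heuristic, and the stubbed call functions are hypothetical placeholders, not any particular vendor's API.

```python
# Minimal sketch of routing prompts between a cheap model and a frontier model.
# Model names and the complexity heuristic are illustrative assumptions only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Route:
    name: str
    call: Callable[[str], str]      # takes a prompt, returns a completion
    cost_per_1k_tokens: float

def estimate_complexity(prompt: str) -> float:
    """Crude stand-in for a real classifier: long prompts or prompts that
    ask for reasoning get a higher score."""
    score = min(len(prompt) / 2000, 1.0)
    if any(k in prompt.lower() for k in ("why", "explain", "debug", "prove")):
        score += 0.5
    return score

def route_prompt(prompt: str, cheap: Route, frontier: Route, threshold: float = 0.5) -> str:
    """Send simple prompts to the cheap model, hard ones to the frontier model."""
    route = frontier if estimate_complexity(prompt) >= threshold else cheap
    return route.call(prompt)

# Usage, with stubbed model calls standing in for real APIs:
cheap = Route("small-open-source-model", lambda p: f"[cheap] {p[:40]}", 0.0004)
frontier = Route("frontier-api-model", lambda p: f"[frontier] {p[:40]}", 0.03)
print(route_prompt("Summarize this log line: OOMKilled in pod web-7f9", cheap, frontier))
print(route_prompt("Explain why this stack trace suggests a race condition", cheap, frontier))
```

In practice the interesting parts are the classifier and the fallback policy (cost budgets, retries against the larger model when the cheap one is unsure), which is exactly the part of the stack that is still in flux.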
Starting point is 00:14:58 I know that you all released an LLM API monitoring product. Is there a whole new observability and tooling stack needed for AI? And if so, what are the main components, and if not, you know? Yeah, well, first, there is observability needed for the stack, period, right? So you do need to fold that in, because that's just one more component you're using. And as I said earlier, there's a whole new stack that's emerging. So if you're going to use a frontier model from one of the big API-gated
Starting point is 00:15:15 providers, you need to monitor that. You need to understand what goes into it. You need to understand how it responds to you, whether it responds right or wrong, and how it interacts with the rest of your application. Then if you use a vector database, or if you host a model yourself and you use specific compute infrastructure for that, and you have
Starting point is 00:15:32 GPUs and things like that, you also need to instrument and observe all of it and figure out how you can optimize it. So there's a whole new set of components from this new stack that needs to be observed. And it can be observed pretty much the same way anything else can be observed, which is with metrics, traces, and logs, and that sort of stuff.
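As a concrete illustration of the "metrics, traces, and logs" point above, here is a minimal sketch of wrapping an LLM call like any other observed dependency. The helper names (emit_metric, log_event, call_llm) are hypothetical placeholders, not Datadog's actual LLM monitoring API.

```python
# Hedged sketch: surround an LLM API call with the same metrics/logs treatment
# as any other dependency. emit_metric, log_event, and call_llm are stand-ins.
import time
import uuid

def emit_metric(name: str, value: float, tags: dict) -> None:
    print(f"METRIC {name}={value} {tags}")          # stand-in for a real metrics client

def log_event(payload: dict) -> None:
    print(f"LOG {payload}")                         # stand-in for structured logging

def call_llm(prompt: str) -> str:
    return f"(model output for: {prompt[:30]})"     # stand-in for the real API call

def observed_llm_call(prompt: str, model: str = "frontier-api-model") -> str:
    trace_id = str(uuid.uuid4())                    # would normally come from the tracer
    tags = {"model": model, "trace_id": trace_id}
    start = time.monotonic()
    try:
        completion = call_llm(prompt)
        emit_metric("llm.request.duration_ms", (time.monotonic() - start) * 1000, tags)
        emit_metric("llm.request.prompt_chars", len(prompt), tags)
        emit_metric("llm.request.completion_chars", len(completion), tags)
        log_event({"trace_id": trace_id, "prompt": prompt, "completion": completion})
        return completion
    except Exception as exc:
        emit_metric("llm.request.errors", 1, {**tags, "error": type(exc).__name__})
        raise

print(observed_llm_call("Summarize today's deploy errors for service checkout"))
```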
Starting point is 00:16:02 I would say there's probably also a whole new set of use cases around, you know, what used to be called MLOps and now might be called LLMOps. And that's a field, by the way, we've only been watching from afar over the past few years. And the reason for that was that, you know, we saw a hundred different companies do that, but few of them, you know, reaching true traction. And the reason for that was that the use cases were all over the map, and the users tended to be very small groups of data scientists who also preferred to build things themselves in a very bespoke way.
Starting point is 00:16:27 So it was very difficult to actually come up with a product that would be widely applicable and that would also be something you can sell to your customers. I think today that has changed quite a bit, because LLMs are the killer app. Everybody is trying to use them,
Starting point is 00:16:44 and the users, instead of just being a handful of data scientists at every company, end up being pretty much every single developer. And they are less interested in the building of the model itself than they are in making use of it in an application in a way that is reliable, makes sense,
Starting point is 00:17:00 and that they can run day in and day out for their customers. So I think there's a whole new set of use cases around that that are very likely to emerge and be very valuable to those developers. And these have more to do with understanding what the model is doing, whether it is doing it right or wrong, whether it is changing over time, and how the various changes to the application improve or don't improve the model. It seems like one of the things that has given Datadog enormous nimbleness
Starting point is 00:17:28 is this unified platform that you've built, which is both a big advantage and a big investment. And, you know, my understanding is a pretty large proportion of the Datadog team is working on the platform right now. How do you think about resources being allocated towards the main platform and maintaining it, versus new initiatives like AI and LLMs? Yeah, so the rule of thumb for us is about half the team is on the platform. But that relates to what we do, right?
Starting point is 00:17:53 We sell a unified platform. So internally, you know, as I said, half is on the platform, and the other half is more on the product side, like specific use cases. But even the way we organize those use cases: the teams that work on them tend to be more focused on problems that are forward-looking, or where the market's
Starting point is 00:18:09 going. Whereas what we sell to our customers tends to be more aligned to categories that are more backward-looking, which is how people are used to buying stuff. And that's very important. You know, when you talk to customers in our space, there are like 12, 15, 20 different categories,
Starting point is 00:18:25 always with interesting acronyms, and they correspond to things that customers have been buying for 10, 15, 20 years, and that's how they understand the market. So we sell into that, while at the same time delivering everything as part of a unified platform
Starting point is 00:18:39 that is itself shaped more towards where we think the world is going to be. So it's very possible, even likely, that five or ten years from now the SKUs of our products will have changed drastically, because they correspond to the evolution of the market as opposed to being pinned to very specific,
Starting point is 00:18:55 static definitions of categories. An example of that: observability is emerging as one big super-category that really encompasses what used to be infrastructure monitoring, application performance monitoring, and log management. And we still sell those three as different SKUs today, but I think it's very likely that, you know, five or ten years from now, you don't even think of them as separate categories anymore. Like, they really become part of one, you know, super integrated category.
Starting point is 00:19:22 I would say there's a specific cost, you know, when it comes to maintaining a unified platform, which is that we also do some M&A and we acquire companies, such as, you know, Sqreen, the company Sarah was on the board of and gracefully signed the order to sell. But when we do so, the first thing we do is we actually re-platform the companies we've acquired. So we spend the first year, really, post-acquisition, rebuilding everything that the company had built on top of our unified platform. So it's an extra cost. But again, what we deliver to our customers is end-to-end integration, bringing everybody into the same conversation, into the same place,
Starting point is 00:20:02 more use cases, more people, more different teams into the same place. And we see it as a necessary part of maintaining our differentiation there. You've made a handful of other acquisitions to expand the product suite and, I think, the talent group at Datadog. What else has made them successful? It seems to me that it has really continued to drive useful product innovation at Datadog, which is not always true with acquisitions.
Starting point is 00:20:31 Yeah, I mean, as you know, making an acquisition is easy. Like, signing a piece of paper and wiring money is super easy. Anybody can do that. The problem is what happens next, you know? So now you've done it. Now you have to merge two things and make them work. I think, you know, in general,
Starting point is 00:20:49 the way we approach acquisitions is they always correspond to product areas we want to develop. And, you know, we're fairly ambitious. Like, there are a lot of different product areas we're going to cover in the end, that span from observability to security to a number of other things. At the end of the day, we're ready to build them all. But if we can find some great companies to, you know, get us two or three years of a head start in a specific area, you know, we'll do it whenever we can. So we start with a very broad pipeline, a very large funnel of companies, and then we focus basically on the ones that are going to be fantastic fits for us post-acquisition,
Starting point is 00:21:30 meaning there are teams that want to build, and entrepreneurs that can really take us to the next step there with the experience they've gained in a specific area. One thing we're very, very careful of is we select for entrepreneurs who want to stay and want to build, as opposed to entrepreneurs who are tired and want to sell out. It's a fine reason to sell your company; it's not a great reason for us to buy. So, you know, that's what we're going to do. The other thing we do is when we close the acquisition, but before we've closed,
Starting point is 00:22:00 we have a very, very specific and very short-fuse integration plan for after that. So we have a plan, basically, that calls for shipping something together within three months of the acquisition, which is very short. You close an acquisition, people celebrate a little bit, you know, and then they have to get oriented, new HR systems and whatnot. Three months is a very short time. We don't really care
Starting point is 00:22:24 what gets shipped within three months, but we care that something gets shipped within three months. And what it does is it forces everyone to find their way. It also makes it very easy for the acquired company to start showing value, which then builds trust. The main issue you have when you acquire companies, or the main risk, is not that you waste the money of the acquisition. It's that you demoralize everybody else in the company, because they see this new company being acquired
Starting point is 00:22:54 and they don't understand the value, or they wonder why you would pay new people a lot of money instead of paying the people you have a lot of money to do the same thing. And it's very, very important to show value very, very quickly for that. And we put a high emphasis on doing that. So far we've done it well. I'm still expecting us to make some mistakes someday, but so far it's worked out for us.
Starting point is 00:23:15 I want to go back to something you were saying around, like, calling, let's say, environmental technology changes well, like, you know, the progression from VMs to containers to here we are with managed Kubernetes and such. But Datadog as a company has always been amazing to me, because the spectrum of things that you are ambitious to go after is very broad, right? Like, I was at Greylock investing in companies that were APM companies and, you know, logging companies, and you have this platform advantage, but you also are still attacking many different customer problems and different categories.
Starting point is 00:23:52 Can you talk a little bit about how you organize that effort, like in your mind or as a leadership team, and how you sequence it? So in terms of what we go after, first of all, it took us a very long time to go beyond our first product. I think we spent the first six or seven years of the company on our first product.
Starting point is 00:24:11 And the reason for that is it was really hard to catch up, you know, with the demand for that product. We also realized after the fact that, you know, we were fairly lucky in terms of when we entered the market. We had an opportunity to enter what's a sticky market that's hard to displace, because of the replatforming that came with the cloud. So we could start with a smaller product and then expand it as customers
Starting point is 00:24:33 were themselves growing into the cloud. Everyone really was new to the cloud at the time. So we had to spend that time getting to, I would say, the minimum full product for infrastructure monitoring. After that, what has driven the expansion of the platform was really what we saw our customers build themselves. So before we started building APM, we saw a number of customers build a poor man's APM on top of Datadog.
Starting point is 00:24:57 We didn't have the primitives for it, you know, but our product is open-ended enough that they could actually build and script around it and do all sorts of things. So we saw it, and it made perfect sense. If it made sense for them to build it, and for us to be part of the solution there, we thought it would make sense for us to build it for them. So in great part, that's what guides the development
Starting point is 00:25:19 of the platform. The first threshold was really to get from a single-product company to having two or more products that were successful. It was not easy. Once we had done that, that's what gave us, for one thing, the confidence to take the company public, because we understood we could grow it a lot for a very long time. But also, it really opened us up to: let's look at what our customers are doing with our product,
Starting point is 00:25:45 what problems we can solve for them, and use the secret weapon of the company, which is the surface of contact we have with customers. We deploy on every single one of the servers they have, because we start with infrastructure monitoring, so we deploy everywhere, and we touch every single engineer, so we're used by everyone every day.
Starting point is 00:26:01 And that surface of contact is then what lets us expand, solve more problems for the customers, and build more products. You have, I think, over 5,000 security customers now, but relative to the overall Datadog base or the security industry, it's still a newer effort. You know, I've been on the boards of companies that sell to IT and security, but the hard conventional wisdom is that they're quite different audiences, even if, as you said, the surface area is there, or it makes architectural sense to consolidate the tools because a lot of
Starting point is 00:26:31 the data is the same. I think, you know, the world or the investor base might see this as a bigger jump than some of the monitoring products you guys have released before. What do you need to do as a business to succeed in security? Yeah, it's a great question, and it is true it is a bigger jump, because you can argue that most of the other new products we've released outside of security were part of the same larger category around observability; the users were the same, and the buyers, to a large extent, were the same. With security, we get new types of users and new types of buyers. Our approach was that there's actually no shortage of security solutions today.
Starting point is 00:27:10 There's tons of technology. It's typically sold very, very well in a top-down fashion to the CISO and everything. What it's not doing well is actually producing great outcomes. Everybody's buying security software; nobody is more secure as a result. So our ambition there is to actually start by delivering better outcomes. And for that, we think we need a different approach. We think that if you sell a very sharp point solution to a CISO, which is what's done today,
Starting point is 00:27:39 you're not going to get those good outcomes. On the flip side, if you rely on the large numbers of developers and operations engineers to operationalize security, and you deploy it everywhere on the infrastructure and in the application, at every single layer, you have a chance of delivering better outcomes. The analogy I would make is that there are great medicines today for security; it's great technology.
Starting point is 00:28:04 But for it to work, you need to inject it into every single one of your organs every day, and nobody's doing it. And the way we intend to do it is: we can deliver it to you in an IV, and that's it. You know, you're going to have it always on, and it's going to be fine. So, again, it requires approaching the market fundamentally differently, because we are building on usage and deployment.
Starting point is 00:28:25 We are building on ubiquity, not on great sales performance at the top level. It's possible that later on we need to combine that with great sales performance at the top level, because that's how it's done in large enterprises. But for now, our focus is really on getting to better outcomes. I want to go back to sort of the AI opportunity for you guys, as Elad was touching on. So, like, if you just take one sort of very naive example: anomaly detection on logs, on metrics, on security data has existed for a really long time. You guys have this Watchdog intelligence layer. I'm sure you're working on lots of interesting things with classical ML approaches in security as well.
Starting point is 00:29:01 Like, how would you rank the AI opportunity within your products in these different domains? So there are so many new doors that we can open. That's really exciting. One thing I would say is, in general, we've been careful about not overmarketing the AI. And the reason for that is we think it's very easy to overmarket AI, and it's very easy to disappoint customers with that. And that's the one thing I find a little bit worrying with the current AI explosion: the expectations are going completely wild, you know, in terms of what can be done with it. And I think, you know, there's going to be maybe a little bit of disillusionment after that, though I actually am a believer that we can deliver
Starting point is 00:29:46 things that were impossible, you know, just a few years ago. So I don't think the old methods are going away, because you still need to do numerical reasoning on data streams. For example, you know, you mentioned Watchdog: watching every single metric all the time, everywhere, for, you know, statistical deviation. There are methods that work fantastically well for that, that don't involve language or language models or transformers or any of that. I mean, it's possible that we see some new methods
Starting point is 00:30:17 emerge using transformers, because there's so much work being done on that today. But that's not yet the case. So I think those methods are not going anywhere. Those methods are also a lot more precise than what you get with large language models. So I'll give you an example specifically for operations. If you talk to a customer and ask them: would you rather have a false positive, where you decide whether it's right or wrong but the computer brings up new situations for you, or just nothing, where if we're not sure, we won't tell you?
Starting point is 00:30:46 Customers will all tell you, give me the false positive and I'll decide. The reality is, you send them two false positives at night in the same week, and they'll turn you off forever. So the reality is you need to be really, really precise. And with operational workflows like ours, you're making judgments a thousand times a minute. So if you're wrong even 2% of the time, it becomes really painful really quickly. So you need to set the bar really high. And for that, those methods are not going away.
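A minimal illustration of the statistical-deviation methods he is describing (not Watchdog's actual algorithm, just a rolling z-score with a deliberately conservative threshold). The arithmetic behind the precision point: at roughly a thousand judgments a minute, a 2% false-positive rate would already mean about 20 spurious alerts every minute.

```python
# Illustrative sketch of statistical-deviation detection on a metric stream,
# in the spirit of the discussion above (not Datadog Watchdog's actual method).
# The threshold is deliberately high: at ~1,000 judgments a minute, even a 2%
# false-positive rate would mean ~20 bogus alerts per minute.
from collections import deque
from statistics import mean, pstdev

class RollingZScoreDetector:
    def __init__(self, window: int = 120, threshold: float = 6.0):
        self.window = deque(maxlen=window)   # recent samples of the metric
        self.threshold = threshold           # conservative: alert only on large deviations

    def observe(self, value: float) -> bool:
        """Return True if this sample looks anomalous versus the recent window."""
        anomalous = False
        if len(self.window) >= 30:           # need enough history to estimate a baseline
            mu, sigma = mean(self.window), pstdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous

detector = RollingZScoreDetector()
latencies = [100 + (i % 5) for i in range(200)] + [900]   # steady traffic, then a spike
print([i for i, v in enumerate(latencies) if detector.observe(v)])   # only the final spike alerts
```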
Starting point is 00:31:24 Now, what's really interesting is that there are a number of other new doors that open up with LLMs. One of them is that there's so much data that was off limits before that we can put to use now: everything that's in knowledge bases, in email threads, and everything else. All of that can actually be used to put the numerical information in context. Another way I've seen LLMs described online, which I thought was very astute: they are basically calculators on language. And you can actually use that really well.
Starting point is 00:31:56 What you can do is structure or bring together metadata from many different places, output from many different numerical models, and use the language models to combine that, maybe with some of the wikis you have internally. And this allows you to combine data in ways that were impossible before. And a lot of new intelligence is going to emerge out of that. Now, the challenge, of course, is you still need to be correct. And I think that's what we're working on right now.
Starting point is 00:32:21 I'll give you one last example. We've obviously been working on this quite a bit. The first thing people do when they see an error in their production environment and they have a ChatGPT window open is they take the stack trace of their error and they ask ChatGPT what's wrong. Does it work? Well, 100% of the time it tells you, oh, this thing is wrong. The problem is, in the majority of cases, it is wrong.
Starting point is 00:32:49 And there's a good reason for it: it just can't know, because if you don't have the program state, you just can't know. What we found, though, is that if you combine that stack trace with the actual program state, you know, what the variables were and what things looked like when it errored out, you actually can get a very, very, very precise answer pretty much all of the time from the language models. So I think, at least in the short term, the magic is going to come from combining the language models with the other sources of data and, I would say, the more traditional numerical models to bring new insights to our users.
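What he describes, pairing the stack trace with the program state before asking the model, can be sketched roughly like this. The ask_llm function is a hypothetical placeholder for whatever model API is in use, and the variable capture is a simplified illustration, not how Datadog's error tracking actually collects state.

```python
# Rough sketch of the idea above: send the model the stack trace *and* the
# program state (local variables at the failing frame), not the trace alone.
# ask_llm is a hypothetical placeholder; no real API is called here.
import traceback

def ask_llm(prompt: str) -> str:
    return "(model's diagnosis would appear here)"   # placeholder

def diagnose(exc: Exception) -> str:
    tb = exc.__traceback__
    while tb.tb_next is not None:                    # walk to the frame that raised
        tb = tb.tb_next
    local_vars = {k: repr(v) for k, v in tb.tb_frame.f_locals.items()}
    stack = "".join(traceback.format_exception(type(exc), exc, exc.__traceback__))
    prompt = (
        "An error occurred in production.\n"
        f"Stack trace:\n{stack}\n"
        f"Local variables at the failing frame:\n{local_vars}\n"
        "What is the most likely cause?"
    )
    return ask_llm(prompt)

# Usage: capture an exception together with its state and ask for a diagnosis.
def parse_order(payload: dict) -> float:
    quantity = payload["quantity"]
    unit_price = payload["unit_price"]               # missing key will raise KeyError
    return quantity * unit_price

try:
    parse_order({"quantity": 3})
except Exception as exc:
    print(diagnose(exc))
```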
Starting point is 00:33:23 That makes a lot of sense. It seems like there's a real opportunity to build, and it sounds like, you know, this is the first step towards building, almost like an AI SRE copilot, or eventually a more automated solution that can really help not only surface different things, but also understand them in real time and, you know, provide an opinion on what the potential issue may be. That's definitely, I think we are at the cusp of maybe doing more automation than
Starting point is 00:33:52 we could before. I would say on the cusp, you know, because we're still not there. I just see quite a bit that needs to happen there. I would say the best test for that is that even at places that are extremely uniform and have very large scale, such as, you know, the Googles of the world, the level of automation is still fairly low. It is there, it's increasing, but it's still fairly low. And if
Starting point is 00:34:13 you use the self-driving cars metaphor, like, you know, Google is all highway. So if that can't be automated, most of the other companies out there can't either, because most of the companies are like downtown Rome, you know, so it's quite a bit more complicated.
Starting point is 00:34:29 What do you think is missing, though? Do you think it's a technology issue, or do you think it's just implementation? Because I feel like LLMs are so new, right? ChatGPT launched six months ago, and GPT-4 launched three months ago. So is it just new technology and people need to adapt to it? Or do you think there are other obstacles in terms of actually, you know, increasing automation through the application of this technology? So for one thing, it's very hard, right? So many of the problems you need to solve, like, if you think of self-driving cars, you know, everybody over the age of 15 can drive, whereas to debug production issues in a complex environment, you need a team of PhDs and, you know, it will take them time and they will disagree. And, you know, so I think it's hard. That's one reason. That being said, I think a lot of it might be possible in the end. The biggest question, I think, not just for observability, but for everything else with LLMs, is: is this the innovation we need, or do we need another breakthrough on top of it to make it to the end?
Starting point is 00:35:29 So if I were to, again, I love analogies, so I keep throwing them at you: with LLMs, we clearly have ignition. Like, there's innovation everywhere; it's happening. We might have liftoff probably in the next year or so with real production use cases, because right now most of this stuff is still not in production. It's still demo-ware and, you know, private betas and that sort of stuff. I think the question there is, do we need a second stage or not? I don't know. And I think that won't be clear for another maybe couple of years, as this current wave of innovation reaches production. I would say, though, there are some use cases for which it's very clearly there.
Starting point is 00:36:06 Like, for example, creativity: generating images, generating text. As humans, we're good enough at debugging what comes out of the machine. Like, you know, you generate an image and the dog has five legs; you immediately, you know, roll the dice again and get something that works. I think when you try and write code or debug an issue in production, knowing when the system is wrong is less obvious. And so that's what we still need to work on. Yeah, makes sense.
Starting point is 00:36:33 Are there any other near-term productivity gains that you think are most likely to occur, either for Datadog or your customers, in terms of this new type of AI? Because I feel like, to your point, there's a lot of forward-looking complexity in terms of some of these problems, and some of them may be like the self-driving example, in terms of you need more stuff to happen before you can actually solve them. And then separate from that, it seems like there's a set of problems, or class of problems, where this new technology actually is dramatically performant. If you look at Notion's incorporation of LLMs into how it's rethinking documents, or, you know, if you look at Midjourney and art creation, to your point on images, and, you know, the subsumption of teams that are generating clip art or marketing copy or other things like that. I'm just sort of curious, like, where do you think the nearest-term gains for your own area are likely to come?
Starting point is 00:37:18 Well, I mean, we see it already in everything that has to do with authoring, writing, drafting. I think it's there. It's already good enough. It hasn't dramatically changed anybody's processes just yet, but I think it's going to happen in the near term. We'll see much more of an improvement when it comes to developer productivity. I think there's a whole class of development tasks that are becoming a lot easier
Starting point is 00:37:57 with an AI advisor. Like, using a new API, for example, used to be painful. Now, you know, it just takes a few minutes to ask the machine to show you how to do it, and that's a real, immediate productivity gain there. I think there are some areas that will probably be completely rewritten by AI, in terms of not just being able to do different things, but also stopping doing some things, because they become inefficient as everybody does them with AI. I'll give you an example: email marketing and things like that. When, you know, you don't need a human to send an email anymore, you can send a million of them from a machine, and everybody's doing it, that whole avenue and the whole field might change quite drastically.
Starting point is 00:38:26 You think we just killed the channel? It will change. It will have to take a different shape, I would say. Someone will have to get through the noise a different way. Super interesting. And I love the analogies. Okay. We're running out of time.
Starting point is 00:38:42 So I want to ask a few questions on leadership to wrap up, if that's okay, because we have a lot of founders and CEOs who listen to the podcast, and Datadog is a company that just keeps executing. The company delivered 30% revenue growth when a lot of people are slowing as they face a different macro environment. You guys released a cloud cost optimization product. You're doing certain things that are very specific to the environment, or maybe they were in the works for a long time. What else are you changing about how you run the business, if anything? So we're actually not changing how we run the business. We've always run the business
Starting point is 00:39:24 with profitability in mind. So we've always looked at the margins, looked at building the system from the ground up in a way that was sustainable and efficient. We never actually built things top-down, thinking, well, we'll get it to work and then we'll optimize it. And the reason for that is we were scared initially that we wouldn't be able to finance the business. But also, we think it's really hard to shed bad habits. You know, once you start doing things in an inefficient way, it's really hard to move away from that. We've definitely seen that around us in the industry. The one thing we're a bit more careful of is we're tuning our engine a little bit differently when it comes to understanding customer value and product-market fit, because customers themselves are more careful about what they buy, and how they buy, and how much of it they buy,
Starting point is 00:40:08 so we need to retune our sights a little bit, or adjust our sights a little bit, so we don't make the wrong decisions based on that. To the point I was making earlier, at the same time we also need to move a lot faster in some areas, such as generative AI, just because the field itself is moving so fast. Which is also causing us to message to teams internally a little bit differently. So we're telling teams, hey, I know you're used to being really good at filtering out the noise and taking three, four quarters to zero in on the right use case. For generative AI, you can't actually do that, because the noise is part of what's being developed. So accept the fact that you might be wrong a little bit more,
Starting point is 00:40:45 but we need to iterate on it with the rest of the market. Do you guys need new talent to go attack these areas, or does it change your view on talent at all? Not really. I think talent is always about, you know, finding people who are entrepreneurial, who want to build, who want to grow, and who are going to prove themselves by making a whole problem area of the business just disappear.
Starting point is 00:41:08 Like, you know, the best people just obliterate problem areas. They're like black holes for problems: you send them problems, the problems disappear, and that's when you know you can promote them. You can easily find them in organizations, because you see all the work going to them. Work avoids people that are not performing, and it finds people that are fantastic. And so following the work is a great way to find the great performers. I want to ask one last question, because I feel like Datadog has so many unique attributes as a company. Another, perhaps sort of less obvious, path that the company took
Starting point is 00:41:45 was serving many different customer segments, right? From sort of very small engineering teams to Fortune 100s. And, you know, my understanding is you guys have done that for quite a long time, and grown up a little bit into the enterprise as everyone does. How did you think about that? Or why did it work for you? Well, I mean, look, the starting point was bringing everybody onto the same page. So we focused on the humans.
Starting point is 00:42:07 We focused on thinking, well, the humans, whether they're in dev or in ops, are wired the same, so let's bring them onto the same page, on the same platform. And it turns out the humans are also largely the same whether they work for a tiny company or for a large, you know, multinational. I think it was made possible also by the fact that in the cloud and open source generation, the tooling is the same for all companies. If you go back 15 years, if you were a startup, you were building on open source, and if you were a large company, you were buying whatever Oracle or Microsoft was selling
Starting point is 00:42:39 and building on top of that enterprise-y platform. Today, everybody's building on AWS with open source. It's the same components up and down the stack, so it's really possible to serve everyone. It's been good for us. It's given us a lot of network effects that you wouldn't find in an enterprise software company otherwise, by giving us this very broad spectrum of customers.
Starting point is 00:43:07 It's a great differentiator, because it's really hard to replicate. You know, you can't just replicate the product; you have to replicate the company around it, which is hard, you know, from a competitive perspective. It also creates some complexities, you know, because as much as the users you serve are humans, and they feel the same across the whole spectrum, commercially you don't deal the same way with individuals and with large enterprises. And it's hard for your messaging not to leak from one side to the other, you know. So an example of that is, I think, recently there were some articles in the news about some of our customers
Starting point is 00:43:40 that pay us tens of millions of dollars a year, and on the individual side there are people who wonder how it is even possible to pay tens of millions of dollars a year, whereas on the high end, obviously, customers do that because it's commensurate with the infrastructure they have, and they do it because it saves them money in the end.
Starting point is 00:43:56 So you do have this balancing act between the very, very long tail of users and the very high end of large enterprises. Awesome. Thanks so much for joining us on the podcast. This was great. Thank you so much. Thank you. This was fantastic. Thanks.
