PurePerformance - What is Dynatrace Grail and Why should you care with Andreas Lehofer

Episode Date: November 21, 2022

Dynatrace recently announced Grail – promising boundless observability, security and business analytics in context. You may think: that's a lot of nice words that other solutions claim as well. So... why should you care about Grail? What is the real problem it solves, and how does it solve it?

Tune in and hear from Andreas Lehofer, Chief Product Officer at Dynatrace, as he boils it down to two critical issues:

* Cost vs value of your data: current approaches are expensive, as you keep 95% of your data without knowing whether you will ever need it!
* Functional limits of siloed observability data: when you need answers, the current siloed approach is slow and limited!

Thanks Andreas for the discussion, the insights on the hidden costs of current approaches, the technical explanation of our architecture, as well as giving us a glimpse of what's coming next.

Show Links:
Dynatrace Grail Announcement: https://www.dynatrace.com/platform/grail/
Andreas Lehofer on LinkedIn: https://www.linkedin.com/in/andreaslehofer/

Transcript
Starting point is 00:00:00 It's time for Pure Performance! Get your stopwatches ready, it's time for Pure Performance with Andy Grabner and Brian Wilson. Hello everybody and welcome to another episode of Pure Performance. My name is Andy Grabner, and as always I have with me my co-host Brian Wilson. And that's super funny, I have to say, Brian, because I just said I had nothing prepared. And that was a total mistake, I couldn't find something. That wasn't even a planned one. That was a complete slip-up. So yes, I'm Andy.
Starting point is 00:00:49 I'm Andy today. You can be Brian. You can make the dumb comments and I'll be the smart guy. We'll just do some role reversal today. So you can just say 'duh' a lot during the show. So that means for once... 'I have one more question', and bundle five questions together. So for once,
Starting point is 00:01:05 you are the smart guy in the conversation. For once, yeah. For once, yeah. No, but yeah. Hey, I'm happy to impersonate you, but it's hard to impersonate your accent, and you are also not doing a really good job of impersonating an Austrian accent.
Starting point is 00:01:18 I did not want to. I did not. Man, I'm not even prepared to do it. I'll go too much into Arnold if I try. So, you know, it's too cliche. It's too cliche. Hey, and I think we just heard a little chuckle, so maybe we are already at least a little bit funny to our guest.
Starting point is 00:01:34 And I don't want to keep him waiting much longer. Today I just learned something new, because I typically say Andy to every Andreas that I meet, since I always assume that Andy is the short version of Andreas everywhere in the world, which it is for me. But it's not always the case. And that's why I will keep calling Andreas, Andreas.
Starting point is 00:01:54 So Andreas Lehofer, welcome to the show. Thank you so much for finding time. And for those people that don't know you: who are you, what do you do, and why do you think you're on the show? Servus, Andy. Servus. I'm actually trying to help here a bit, because we have so many Andreases or Andys in Austria, a very popular name for people of about our age, so it's good to have an anchor to figure out who is who. So, yes, I'm Andreas, and I'm Chief Product Officer at Dynatrace
Starting point is 00:02:28 for the Dynatrace core platform. And the core platform is everything about data storage and analytics, as well as a lot of other functionality that is common to everything that we are building at Dynatrace, such as reporting and dashboarding functionality. Cool. And I think that is related to the topic. It is.
Starting point is 00:02:51 It is. But before we dive in, because you just reminded me: you're right, Andreas is a popular name, but really more from, I guess, the 70s and early 80s. Right. So now people have a clue about our age. Yeah, yes. But I have not been with Dynatrace since that time; I have only been with Dynatrace since early 2009,
Starting point is 00:03:16 which is still a good time. I've been working in different product management roles since then. Yeah. And I think it also just shows, I've been with the company about as long; I started in 2008. It just shows that we love working here. And I think the reason, at least as I say it, is that there's always something new and interesting coming up. We're not stopping innovating, right? And talking about innovation, Andreas, the reason why we obviously have you on the call: we recently launched and announced, I think it was October 4th or 5th in London, Grail. So Grail is the big new thing
Starting point is 00:03:55 that Dynatrace kind of brought to the world. But my real question is, and I'm sure this is also the question for many of the listeners, our customers, partners, and everybody else that looks at what Dynatrace is doing: what is Grail? Why did we build Grail? What problem does it really solve? Why do we need another storage system? Why care? I think that's something that you actually brought up earlier, the question of why you should care that Grail is actually there. Absolutely. This is, I think, a really important question
Starting point is 00:04:26 to understand the value that Grail delivers to our customers. We will get a bit into what Grail is as well; this question is related to the other. But Grail is really solving two of the most crucial problems that exist in and around the observability and security space. The first one is what I'm frequently calling a cost-value problem that customers and businesses have with storing observability data and also security data. This problem is especially huge when it comes to log data, but it is not specific to log data only, to add that immediately. But with log data, I
Starting point is 00:05:15 think it's the best and also the most well-known example. Data is, of course, increasing exponentially. The volumes are growing, the complexity of data is growing all the time, and so is the cost of storing it. But the problem here is that the perceived value that today's typical solutions deliver to their customers is not at all proportional to the spend, and of course that gap widens as data volumes explode. To give you a number to illustrate that: whenever we are talking about this, I ask customers or prospects what they think the percentage of data is that they are storing and never reading. And the figures that I typically get are somewhere between 90% and 95% of the data. And what does that mean? It's, of course, still true that
Starting point is 00:06:29 there is a reason why they are storing all that data: because eventually they might need it. If you have a forensic situation, if you have to do some troubleshooting, you need that data later on. But at ingress, you don't know what that data will be. So there is little alternative to storing everything. But of course there is a huge difference in the value that these 90-95 percent of data, which are very infrequently analyzed if at all, have compared to the five or ten percent of data where you have to
Starting point is 00:07:01 do intensive and very frequent analysis. So the smaller proportion of data has the high value, and the larger volume of data has very, very low value, let's say per gigabyte, which is the interesting unit. And this is one of the big problems, because today you typically have to pay by ingress, at a very high price point, and this is just not appropriate for that situation. We have addressed this in Grail with a completely new architecture that allows us to be very efficient in ingressing data, processing data at ingress, and storing data, long-term storage, short-term storage, whatever customers need, and to do very powerful on-demand analysis when it is needed, without customers having to carry a very high price point on ingress when they don't know whether they will analyze the data at all.
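To put rough numbers on that cost-value gap, here is a back-of-the-envelope sketch in Python. Every figure in it is an illustrative assumption made up for the arithmetic, not Dynatrace's or anyone else's actual pricing:

```python
# Illustrative numbers only: an assumed volume and an assumed $/GB price.
ingested_gb_per_month = 100 * 1024   # assume 100 TB of logs ingested per month
price_per_gb_ingested = 0.50         # assumed pay-by-ingest price point ($/GB)
fraction_ever_read = 0.05            # the 5-10% of data that is ever analyzed

monthly_spend = ingested_gb_per_month * price_per_gb_ingested
analyzed_gb = ingested_gb_per_month * fraction_ever_read

print(f"monthly spend:              ${monthly_spend:,.0f}")
print(f"effective cost per GB read: ${monthly_spend / analyzed_gb:.2f}")
# -> $10.00 per analyzed GB, 20x the nominal $0.50, because the 95% of
#    data that is never read still pays the full ingest price.
```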
Starting point is 00:07:47 So that is the biggest problem from a commercial point of view. There is a second problem I want to mention, and I think we will leave it at those two, which is more a problem from a functional perspective, from a value perspective, assuming that the first problem is solved, of course. It is that most of the solutions today focus on certain types of data only. I think everybody knows a solution that does everything with log data, if possible. So they're looking at everything through a log lens.
Starting point is 00:08:41 There are other solutions that do everything through a time series lens, like all the time series databases. There are very classical APM solutions still in the market that look at everything through traces. And this creates silos, because you then have a set of solutions, all of them specialized in one thing, while real-world use cases typically require having everything, right? And if you don't have that in a single solution, it is very difficult, and the problem of getting the analysis done across the silos remains with the customer.
Starting point is 00:09:23 And beyond the high-volume data I was mentioning, there is also what you could consider the glue between those data points, which Dynatrace calls Smartscape: the topology model, the entity model. It's very important that you also understand where your data is coming from.
Starting point is 00:09:47 So if you have a log file, you want to know which machine, which process, which service is writing into that log file. And you want to do a join from the log data to the trace data written by that process. If you have a time series written on a service level, let's say you have an SLO, which I think is a great example, typically sitting on an application or service level, you want to go from the SLO down to all the data sources that I mentioned before. And this is not possible with what is available today in the market, and it was also a key requirement for us building Grail: to not only solve that commercial business challenge, but to also bring customers access to all the observability data, all the security data, and the business and operational data around it, in a single layer that we are calling Grail now.
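As a reader's aside, here is a toy sketch of the kind of topology-aware join Andreas describes. The record shapes and entity ids are invented for the example; this is not Dynatrace's actual data model, just the idea that a shared entity reference lets a query hop from a log line to the process that wrote it, and on to that process's traces:

```python
# Toy data: each record carries an entity reference (here, a process id).
logs = [{"content": "ERROR payment timeout", "process": "PROC-1"}]
traces = [{"trace_id": "T-42", "process": "PROC-1", "duration_ms": 5103}]

# Toy topology model: the "glue" mapping entities to their context.
topology = {"PROC-1": {"host": "HOST-7", "service": "checkout"}}

def related_traces(log: dict) -> list[dict]:
    # The join over the topology: the shared entity id links the silos.
    return [t for t in traces if t["process"] == log["process"]]

for log in logs:
    ctx = topology[log["process"]]
    print(f"{log['content']} (service={ctx['service']}, host={ctx['host']})")
    print("related traces:", related_traces(log))
```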
Starting point is 00:10:49 Yeah, I fully understand, I mean especially number two. This is also the way I try to explain it when people ask me. Because historically, right, we as Dynatrace started in the APM space with traces, and we stored them in an efficient way. And then we added more metrics because we went into the infrastructure, we went into real user monitoring, we went into logs, we went into security,
Starting point is 00:11:15 and we obviously stored data in whatever storage was the most efficient for that type of data, but then we ran into the same problem that I guess other solutions are also running into, solutions that started maybe, as you mentioned, on logs and then expanded to metrics and traces and so forth. So you have these data silos, and then making efficient queries, getting efficient answers out of this unconnected data, is very hard.
Starting point is 00:11:40 So I do understand this. I have a couple of questions later, but first I would like to go back to point number one, because I want to understand one thing. You said we're optimizing the ingress, because you're charged by ingress. Now my question is, how do we optimize this? Because if I have a terabyte of logs, whether I ingest it into one tool or another, what is the optimization on our end? This is what I'd like to understand. This is going now a bit into the architecture of Grail.
Starting point is 00:12:14 And I think we are good to go a bit into this. So what is happening on ingress? And this is where things deviate depending on what solution you are looking at. Most of the solutions in the market today make use of heavy indexing at ingress, which is a very costly operation, both in CPU cost and in terms of storage required. And you need very expensive storage in order to maintain those indices. There is, of course, a reason why those solutions are doing that: having those indices makes the analysis, the read type of access, highly efficient later on, if the search query is covered
Starting point is 00:12:57 by an index, of course. If you don't have the right indices in place, it's kind of tough luck. I mean, this is then really a problem, because you are completely thrown back: you have to re-ingest the data and re-index everything. But assuming your indices are defined right, the value of that index is that it makes reads easy later on. But it creates a high cost on ingress. And that is exactly the problem that I illustrated earlier.
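For readers who want to see that trade-off concretely, here is a minimal sketch of the index-on-ingest approach being described, in illustrative Python rather than any vendor's actual implementation. Every line pays tokenization CPU and index storage at write time, whether or not anyone ever queries it:

```python
from collections import defaultdict

inverted_index: dict[str, list[int]] = defaultdict(list)
stored_lines: list[str] = []

def ingest(line: str) -> None:
    line_id = len(stored_lines)
    stored_lines.append(line)
    # The costly part: tokenize and update the index for 100% of the data,
    # up front, regardless of whether it is ever read.
    for token in set(line.lower().split()):
        inverted_index[token].append(line_id)

def search(token: str) -> list[str]:
    # Reads are cheap, but only for queries the index anticipated.
    return [stored_lines[i] for i in inverted_index.get(token.lower(), [])]

ingest("2022-11-21 ERROR payment-service timeout calling db")
ingest("2022-11-21 INFO checkout-service request ok")
print(search("error"))  # fast, because the cost was already paid at ingress
```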
Starting point is 00:13:31 If you consider that 95% of the data is never read, you are maintaining all those indices, and you have to create indices for everything that you want to do later on, really for no reason. So in Grail we have minimal indices. There is, of course, a certain level in place for performance reasons; it's not that Grail treats all data identically. There are structural differences between a log line
Starting point is 00:13:58 and a time series record, and a trace span is, again, a bit of a different structure. And we are using those things that we do know. But the difference is we are not forcing people to define indices, and we have an analytics layer that can do every analysis that is thinkable without indices. This is a very modern approach architecturally; it's called massively parallel processing. And if I say massively parallel processing, think about 1,000 cores, that magnitude, right, in the Dynatrace SaaS environment. We don't have any shortcut; we really have to scan the data as we have it. We use very efficient, also in economic cost, hardware as it is available from the hyperscalers. That gives us huge throughput getting the data into our processing nodes, and then we have like 1,000 cores that are doing the search in parallel,
Starting point is 00:15:07 and that is allowing customers to do everything they want to do on read, without indices. So this is schema on read. You can parse the data that you could not parse at ingest, right? I mean, that is the other thing to say. If you think about a complicated troubleshooting scenario, or a forensic scenario in the security space, it is likely that you will only learn which data structures you really want to look at once you start researching, if you stick with that log example. And we can do that on the fly, and we are saving a lot of CPU, and we can support very cost-efficient hardware for data retention by not maintaining all those indices. That is actually the trick. So there is no magic in Grail, I'm sorry about that.
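As a companion to the index sketch above, here is the opposite trade-off in the same toy style; again illustrative Python, not Grail's actual implementation. Raw chunks stay un-indexed and cheap, and an ad-hoc predicate plus on-the-fly parsing (schema on read) is fanned out across worker processes, the way the real system fans a scan out across on the order of a thousand cores:

```python
from concurrent.futures import ProcessPoolExecutor

def scan_chunk(chunk: list[str]) -> list[dict]:
    hits = []
    for line in chunk:
        if "ERROR" in line:                      # ad-hoc predicate, chosen at read time
            ts, level, msg = line.split(" ", 2)  # structure extracted only now
            hits.append({"ts": ts, "level": level, "msg": msg})
    return hits

def parallel_scan(chunks: list[list[str]]) -> list[dict]:
    # Stand-in for the massively parallel processing layer: chunks are
    # scanned independently, so throughput scales with the number of cores.
    with ProcessPoolExecutor() as pool:
        return [hit for part in pool.map(scan_chunk, chunks) for hit in part]

if __name__ == "__main__":
    chunks = [
        ["2022-11-21 ERROR db timeout", "2022-11-21 INFO all good"],
        ["2022-11-21 ERROR cache miss storm"],
    ]
    print(parallel_scan(chunks))
```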
Starting point is 00:15:54 No, but I think for me it is magic. But the word that I misunderstood: when we talk about ingress, for me ingress was just how much data we need to send from the target system to the log system, in our case Grail, and a terabyte is a terabyte. But what you are basically telling me is that it's the total cost of ingress, because one terabyte in a traditional solution is not one terabyte. It's one terabyte plus whatever the index is, right? Yes, in size.
Starting point is 00:16:33 And it's the effort to calculate and maintain that index, which is actually the most expensive thing of all. There is, of course, in Grail, like with other comparable solutions, more in the ingress than that. You can also transform data. You can filter data. We have certain PII use cases supported there. So it's a bit more than just sending the data in and storing it. I think if you come at it from the point of view of a user, it doesn't sound like anything magical. It sounds like something that would be expected, right?
Starting point is 00:17:11 If we think about the magical world computers are supposed to deliver, if we look at how computers are portrayed in movies, you can just use them and they do what you need them to do. And I think what's been lacking in the past is the ability for them to do what you need to do. But as you just explained, you can ask it whatever you want and don't have to think about how we addressed it. If we take a look back to earlier, we mentioned the 70s and 80s, right? I was thinking about the old text-based games, right? The text-based computer game, you know, 'look left', right? And you had certain words
Starting point is 00:17:45 you could use, and it could only understand certain things that you put in. And if you wanted to say 'pick up the dog' and it had it in there as 'canine', right, it wouldn't understand you. And now it's like, imagine a text-based game where you could type anything in and it would understand what you're doing. Exactly. It's really bringing that approach up to speed with the expectations of modern users. You don't want to have to know all the queries ahead of time, especially when you're storing data. And the other thing is, I think nobody is familiar anymore with the idea of spending a lot of time managing data retention, right? I mean, I was thinking about that.
Starting point is 00:18:27 When was the last time you two cleaned up your Outlook inboxes? You don't want to think about that. Or when did you delete data from your OneDrive? So we are not used to thinking that storing data is a very expensive thing. But with observability data, it is, until today. And customers are spending huge amounts of money on exactly that aspect. Many of them have homegrown solutions on top of commercial log monitoring solutions for the sole reason of making sure that they are filtering out data where they think, or where they guess, they will not need it later on. All of that, I think, is a waste of time
Starting point is 00:19:21 and human resources. You would do it nowhere else, right? I mean, where is the filter to delete all the unnecessary and duplicate slides from PowerPoint? That's completely strange, right? But in that space, it's common.
Starting point is 00:19:40 So, first of all, thanks for explaining what we do and how we do it. I also now, I think, have a better understanding of the word ingress. I think this is where I, and maybe others as well, were kind of not understanding, because one terabyte is one terabyte; it's the total cost of storage. I have one question though, right? You mentioned a couple of keywords: no rehydration, no indexes, schema on read. I think some of these terms we are throwing around now. I've got to be honest with you, and I want to challenge
Starting point is 00:20:12 you on this, and I hope you give me a good answer, because I also get challenged when I talk with people. They say, well, you're not the only vendor that talks about this; there are solutions like Snowflake out there that also provide all the stuff that you claim to do on your website, in your marketing material. So why, again, Grail? Why did we build Grail, if other solutions seem to exist that also solve this type of problem? So, against the objectives that I defined earlier, we did long research, and we found no solution in the market that covers them. If the objective is to store the data types that I mentioned in the beginning, at the scale that we need and with the cost effectiveness, nothing exists until today. There are
Starting point is 00:21:07 solutions that are indeed going in a similar direction, which I think also proves that we are going in the right direction with our architecture. You mentioned Snowflake; you can also look at some of the things that the hyperscalers are doing, like OpenSearch, or Kusto from Microsoft. So there are certain similarities on the architectural side of the house, and also certain similarities on the query language side of the house; we did not speak about the Dynatrace Query Language so far today. All of this proves that what we are doing is really a modern approach: reducing indices, supporting massively parallel processing. So there are things in common, but there are also things that are different. Just very briefly: both of the mentioned hyperscaler solutions are very
Starting point is 00:22:00 focused on logs, or not covering the aspects that I mentioned on the functional side of the house. Snowflake is a generic database system, so it's more generic than Grail, which is great if you have those requirements. If you want to fulfill, let's say, a business use case and you want to replace your classical relational database, it's a good thing to do, and that's also why you see it adopted very widely. But they are not specializing in observability and security data types, which Grail is doing. And this is, of course, always a trade-off that you have in software many times: a very generic solution has the advantage of being more generic, but of course it's not as optimized for more specialized use cases and for the super high data volumes
Starting point is 00:22:52 that we have with our customers. If you think about a petabyte of data ingress a day, the specialization does matter, from a performance and also from a cost perspective. So this is why Snowflake and Grail are not a direct comparison; it's a different problem that is solved. Yeah, and I really like that explanation. That's also the one I typically give: we have 15, 20 years of experience with observability data, we know the specifics of how to treat observability data, and therefore we could optimize all of this specifically
Starting point is 00:23:34 for our use cases. And as you said, even if you're just, let's say, 10, 20, 30 percent more efficient than a general-purpose storage solution, in a world where we have problems with efficiency, where we all know the current economic climate and the challenges we have with green IT and sustainability, I think that was the right move. And it's great to see that Dynatrace decided to go down that path years ago, because it's not that we started with this last year. No. So this is for me the phenomenal thing.
Starting point is 00:24:10 And this is why. But I wanted to challenge you, because I wanted to see what you would provide as an answer. Absolutely. And we challenged ourselves on that one, you can trust me. It was, of course, also a big internal discussion. If you bring up a project, hey, we are going to build such a thing ourselves, then that question comes up automatically.
Starting point is 00:24:32 Yeah, and I think where it again really comes in is the scale of environments we deal with. I guess if you have one WordPress instance running around somewhere, then, you know, there might be other solutions as well. There is also one additional dimension or argument that we did not yet speak about. It's not the first thing people usually see, but it's very important for our customers, which is the enterprise grade of Grail as a database. It's a multi-tenancy data store, but we have strict isolation and different encryption per
Starting point is 00:25:17 tenant. We have a built-in security concept that is able to manage data access in an enterprise-ready way. These are all things that are very hard to find if you try to go with what is available in the market. So at the end of the day, there was really no alternative to building it. Cool. Hey, Andreas, you brought up the term Dynatrace Query Language; we also call it DQL. Can you fill us in a little bit on why DQL is also really the big part on top of Grail that will enable a lot of use cases? What's the specialty of DQL?
Starting point is 00:25:57 So, when it comes to accessing data that is in Grail, it's the single interface that we have. All the data access in Grail goes through the Dynatrace Query Language. It's a new query language that we have designed to our needs, which are of course the needs of our customers. The focus of the design was simplicity, first of all, so it's very clean. It is also a very powerful language. To give you a sense of the power: we have about 150 functions in the query language as of today. This is functionality like all the mathematical stuff that you need,
Starting point is 00:26:41 statistical stuff. You can aggregate data, you can parse data; there is a specific parsing language as part of DQL that is very easy to learn, yet very powerful, so far away from regex, and also highly performant, really an antithesis to regex in many ways. And of course it also shows the differentiators of Grail that I mentioned before. There is a built-in element in the language to query topological information. So you can do a query that starts with, let's say, filtering log files, and then you can do a join over the topology model, like: I want to go from
Starting point is 00:27:34 all the hosts that match, where those log files are coming from, over the processes, to the services running, and I want to see certain spans on the service level. And you can put that into a single query language statement and do whatever you need to do. One of the important things to stress with that last bit you just said there, right, how you can join across all the different elements and components: one thing that I think a lot of people take for granted, and I know I'm going to sound like a Dynatrace commercial here, is that you do not have to establish those relationships.
Starting point is 00:28:12 That's part of Smartscape. That's part of what we're also pulling from the other logs and other components. All these relationships are automatically built and maintained. And I think that's one of the most amazing pieces of this: similar to how you don't have to build the indexes, you don't have to build the relationships, which makes it just infinitely more powerful and more convenient
Starting point is 00:28:32 and more, hey, it works as I expect it to work, which under the hood is a huge feat of engineering. But to the end user, it's doing what it logically should do. Absolutely. And from the perspective of an existing Dynatrace user, I think many of the listeners to this podcast will understand the improvement here. Because today Dynatrace has many APIs in the product; you can access all the data that is in Dynatrace. But when you're trying to do exactly what we just described, when you want to go from one type of data through the topology model that Dynatrace has built, and you can query the Dynatrace topology model as of today, you have to
Starting point is 00:29:27 put the pieces together yourself as of today. With DQL, all those limitations are removed. You can do everything in a single query, and you don't have to do things like stitching from one storage to the other in your own logic. It's much easier, and customers have almost unlimited power with that query language. Whatever you want to do, if you have the data in Dynatrace, you get the answer out of Dynatrace with DQL.
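For readers who want a feel for the pipeline style being described, here is a hedged sketch of asking Grail a question from outside. The environment URL and token are placeholders, and the endpoint path and payload shape are assumptions about the platform API rather than a documented contract, so check the current Dynatrace API reference before relying on them. The DQL itself shows the fetch-filter-aggregate style; the full host-to-process-to-service topology join described above is omitted for brevity:

```python
import requests

DT_ENV = "https://abc12345.apps.dynatrace.com"  # hypothetical environment URL
TOKEN = "dt0s08.example-token"                  # hypothetical OAuth bearer token

# A small DQL pipeline: fetch logs, filter ad hoc, aggregate, rank.
query = """
fetch logs
| filter matchesPhrase(content, "timeout")
| summarize errors = count(), by:{dt.entity.host}
| sort errors desc
| limit 10
"""

# Assumed Grail query endpoint; verify against the current API docs.
resp = requests.post(
    f"{DT_ENV}/platform/storage/query/v1/query:execute",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"query": query},
)
resp.raise_for_status()
print(resp.json())
```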
Starting point is 00:29:56 That's awesome. And for the folks that are listening, there are obviously different channels at Dynatrace that we use to educate our user base. I have already planned some sessions, some of my observability clinics, starting in November, on Grail, on log analytics, on that first use case where we really see the power of Grail for most of our customers. So you will learn about DQL and what it can do, and there will also be more material on, I assume, Dynatrace University, webinars, and so on and so forth. Let me maybe add one thing, as we are talking
Starting point is 00:30:36 about DQL, because there is a chance that people will get that wrong. So no worries: Grail exists completely under the hood of Dynatrace. You are not forced to learn DQL when you start using Grail. DQL is used internally in the product. All the views that you have in Dynatrace, all the nice things built into Dynatrace, use DQL to get the data internally. You don't have to worry about that.
Starting point is 00:31:09 Only if you want to access the data in Dynatrace directly can you use DQL, and I think there it is a strong asset. There are also other ways to access the data in Dynatrace that come without DQL. There is a classical log viewer that is just filter-based and completely UI-guided. There are many views in the product that allow you to select metrics and do all the common things with metrics without DQL. But I think, as everybody knows who has a bit of a programmer background or has used a data query language, whether it's SQL or whatever type of query language, there are certain things where it
Starting point is 00:31:54 is actually easier to use a query language than to try things that are a bit more challenging in the UI. At some point the UI gets in the way of a more advanced user, and then you can really do everything that you want to do with DQL. But there is no force; it's not that everything now has to go through DQL in Dynatrace. Right. Yeah. Thanks for the clarification. That's good. We have so many great features in Dynatrace that just make it extremely easy for people to navigate the data and to analyze it, and they stay the same. Underneath the hood they obviously use DQL to access the data, but a great thing to know is that we open it up for everyone, in case you need access to the data yourself and the
Starting point is 00:32:41 power of DQL. Andreas, I know it's always hard to ask a product manager for the roadmap of what's coming ahead. And I obviously don't want you to go too far out of your comfort zone, because as a publicly traded company we are only able to say so much about what's coming. But maybe you can say at least one or two things that excite you about the road ahead. By the time this airs, I think it's going to be probably early to mid November of
Starting point is 00:33:12 2022. So if you think about November and then going forward, what's coming? What excites you about the next couple of months? We have actually already gone a bit into the roadmap between the lines. The current version of Grail that is in GA, as this podcast is going live, is focused on log and event data. This is available as of today, and it's available for AWS SaaS environments. There will be upcoming releases shortly, so we are not talking about a year ahead, but more like spring '23. We will add support for the other data types that were mentioned today to Grail. We will also add support for Azure SaaS environments to Grail. And of course, we will constantly improve the scalability of Grail. So there are many details that we are going to add.
Starting point is 00:34:18 But I think the main thing is to understand the availability when it comes to hyperscalers and also the support for different data types. And there are actually a couple of major announcements for early calendar year '23 that I can't make here, because we want people to join us at Perform, and we have to keep a few things for ourselves until Perform. But I can say without exaggeration that we are going to announce a few things on top of Grail that are at the same level of innovation and of the same magnitude as Grail itself.
Starting point is 00:35:06 So a lot more to come on top of Grail. Yeah, I think we're all looking forward to Perform, and that's a good commercial: if you want to join us, I think it's in the week of February 14th. I know this is Valentine's Day, but maybe you bring your valentine to Vegas; that's a good way of doing it. We do two HOT days, hands-on training days, followed by the conference. Go to perform.dynatrace.com and you'll find all the details. Just one additional thing from a clarification perspective, in case people got this wrong: you mentioned that currently it's supporting AWS environments, and then Azure is coming later. This really means that we are running Grail, we are hosting our SaaS offering on Grail, in either AWS or Azure, and then Google is following as well.
Starting point is 00:35:53 This does not mean our general observability support for AWS, because that's the same as it has been for years. If you run your own systems in Azure, in Google, in AWS, in Alibaba, in your own cloud, it doesn't matter; we've got you covered. It's just that the massively parallel processing capabilities of Grail are something we run in our SaaS environments, and we started with AWS; the other hyperscalers are following soon.
Starting point is 00:36:19 Yeah. Thanks for the clarification. This is important. Cool. Andreas, I think these are exciting times. The observability space is obviously growing and flourishing; on the other side, it's also going to be interesting to see what's happening in the market, right? And I think with what we've been showing over the last at least 15 years that the two of us have been with the company, it seems that we have always done the right things, because otherwise we wouldn't still be here, because we love it, and
Starting point is 00:36:50 Dynatrace wouldn't be in the place where we are right now. So I'm pretty sure what we're doing with Grail, and the next big thing that we announce, will be right as well. Dynatrace has left the observability-only space. We actually left it two years ago, but Grail is really an important element in bringing security, especially application security, and observability together, right? I mean, that starts with all the data that is stored in Grail, which is very relevant, of course, for observability, but also for security. If you think about a forensic use case where a security specialist tries to figure out if something has happened on a machine, what is the difference between such a use case and a use case of a performance expert that is trying to figure out a stability problem of an application, or a performance
Starting point is 00:37:58 problem of an application? It's about the same data. There is no real difference in the data, and there is also no real difference in the type of analysis that you are doing. You need full access to the data; you need these powerful, ad hoc, schemaless capabilities that we described. And so Grail is giving us the back-end power, so to say, to also get Dynatrace into these types of forensic use cases. This is not, I think, the time to talk about the existing support we have
Starting point is 00:38:33 in Dynatrace for application security around things like vulnerability detection, runtime application protection, but it's just an example of what everybody is speaking about. Observability and security are converging markets, and I think it's always important to think about those things in parallel if you think about Dynatrace as a company going forward. Yeah.
Starting point is 00:39:02 I remember when UEM first came out, which was way, way back; this is almost that,
Starting point is 00:39:08 you know, or bigger than that, you know. UEM was then kind of integrated into
Starting point is 00:39:14 observability. I think security is so large, this is likely not going to
Starting point is 00:39:21 happen. Right, right, right. Awesome. All right, Brian, any final thoughts from you before we wrap it up? No, I mean,
Starting point is 00:39:32 I think that, you know, the big takeaway is that as observability was being defined, there were two flaws. You could either get a lot of your data ingested, but then you had a lot of problems using that data, accessing it, doing something with it.
Starting point is 00:39:56 Or you could get some of your data ingested, but you could do cool things with it. And yet the message was always: collect all of your data and then do what you need to do with it. And what I see Grail giving us the opportunity to do is both halves of those: get all the data in cost-efficiently, and then ask of it what you want. The other parallel I go to is way back when my daughter was young,
Starting point is 00:40:28 she was using YouTube kids' videos, and she would speak into the microphone and say 'princess videos', and it would bring up a bunch of princess videos. And she thought, at three or four years old, that she could click the button when she got sick of those princess videos and say 'different princess videos', and it should come back with a list of new ones. Obviously it didn't, because it wasn't designed for that,
Starting point is 00:40:49 but it's the idea of asking for what you want and getting it, which depends on the data and on having a query capability flexible enough to do that. And that's really what I see as the huge undertaking. Again, it's fantastic, I don't want to underplay the fantasticness of it, but we're being given what we expect, or what we should expect, which in a way doesn't seem that overwhelming. But when you look at it and say, but we can't do that now, that is the
Starting point is 00:41:23 big, huge component. I think that's what people really need to realize: where else can you do this? It's amazing. I really, really love what everyone on the product and engineering side has done. It's amazing. So thanks to everyone for that. Cool. Hey, with this, Andreas, we may have you back maybe next year, because I know there's more stuff coming, and I think giving our listeners an update from time to time on our innovation is great, and hearing
Starting point is 00:41:52 it from somebody like you, who is basically leading the product initiatives here, is a great way of doing this. And especially what I like is, we talked a lot about financials and Grail today, but I think we also educated people on the challenges that we're really solving. And let's hope they solve them with Dynatrace, because we built something for them; but if not, at least they're educated, and they know why maybe
Starting point is 00:42:16 at some point they will run into limitations with the current approach. So that's why. Perfect. Yeah, then, Brian, all right, we will wrap it up.
Starting point is 00:42:28 Thank you, Andreas, and thank you, Andy, but who am I talking to with each of those names, I don't know. Thank you for being on today, and
Starting point is 00:42:37 thank you to our listeners for always listening to us and enduring Andy's and my jokes in the beginning. But anyway, if you have any questions or comments, you can tweet us at Pure underscore, no, PureDT on Twitter, or email us at pureperformance@dynatrace.com.
Starting point is 00:42:55 And thanks, everyone, and we'll see you soon. Bye-bye. Bye-bye.
