PurePerformance - OpenTelemetry for the Mainframe and more with Christian Schram

Episode Date: February 13, 2023

Did you know that, almost 60 years after IBM introduced the mainframe, 92 of the world's top 100 banks run mainframes, handling 90% of all credit card transactions? We didn't either, until we recorded this episode with Christian Schram, Solutions Engineer at Dynatrace, who has spent the last 20+ years helping organizations optimize their mainframe environments. Tune in and learn about the mainframe, how the cloud-native project OpenTelemetry has made it to the mainframe, and what the most common performance patterns on the mainframe are.

As discussed, check out the following links in case you want to learn more:

A Brief History of the Mainframe World (Blog)
Modernizing the Mainframe (YouTube)
Eliminating inefficiencies on IBM Z (Blog)
End-2-End IBM Z transactional visibility (Blog)

Transcript
Starting point is 00:00:00 It's time for Pure Performance! Get your stopwatches ready. It's time for Pure Performance with Andy Grabner and Brian Wilson. Hello everybody and welcome to another episode of Pure Performance. My name is Brian Wilson, and with me is my co-host, who's really trying hard not to mock me right now: Andy Grabner. You know what's special about today's episode? What is special about today's episode? Looking at our faces, the three faces, I see a progression: a lot of hair, receding hair, no hair. I hadn't thought about that one, but it's our 175th episode. Wow. Yes.
Starting point is 00:00:59 Very interesting. Interesting. And the thing that amazes me about the 175th episode is that it's one of these topics where, when you first look at it, you're like, really? That's what we're going to talk about? How does that have any relevancy in today's world, right? But that's what we're here to learn today. And based on some of the prep for this show, holy crap, yeah, quite a lot of relevancy. So I'm really excited to get into it. I'll let you do a magic segue like you normally do, but I just have to say right before we start: as soon as you hear the topic, don't run off, people, because it's a heck of a lot more exciting and relevant than you could possibly imagine. I'm excited, and I know I'm going to learn a lot today. Me too, me too. And when you said 175, it's like Back to the Future. I'm not sure if that reference makes any sense now, but we're kind of in the future looking back. Yeah, with the movie, yeah. And after 175 episodes we're not talking about future technologies; we're looking into technology that actually makes a lot of the stuff that works today possible.
Starting point is 00:02:07 Looking at my phone: if I make a bank transaction, I'm pretty sure (and it's not a bad segue) it probably hits the mainframe. And now we finally give our guest the chance to introduce himself. Christian, so sorry that it took that long for the two of us to actually give the mic to you. But Christian, thank you so much for being on the podcast. Could you do me the favor, Christian? Introduce yourself. Who are you? What do you do? And why are you excited about the mainframe, which is the main topic of today? Okay, yeah, thanks Andy for having me on the podcast today about the mainframe. Yeah, Christian Schram from Dynatrace in Austria, located in the Linz lab. I'm working at Dynatrace
Starting point is 00:02:57 for 23 years now and I have a very strong mainframe background. I'm working in that space for almost 30 years. And starting out as a mainframe developer systems programmer, I worked as a performance analyst and consultant. And in 2000, I joined Dynatrace and I'm working with mainframe customers globally with a focus on Europe of course as I'm located here but I'm an interface between the field, our sales organization, our solution engineers and the lab, the product management and give feedback regularly. What is needed on the market, which technologies are hot on the market out in the field and what do we actually need to further improve our solution. Before we go, I got to ask a question because there's something I just learned there, or
Starting point is 00:04:13 maybe because it really confused the heck out of me. Christian, you said you've been with Dynatrace for 23 years. That is new. Was that by way of CompuWare or... Because I know Dynatrace product-wise came in around 2006 or 2007, at least as far as I know. Was Dynatrace the actual company started from development and stuff in that time period? Or were you coming in from Compuware? Yeah, you guessed it correctly. I'm coming from CompuWare. So 2000 originally I joined CompuWare, which was then, which
Starting point is 00:04:47 acquired Dynatrace in 2011. Then I moved over, made the transition to Dynatrace. I thought it was like, holy cow, because I thought the actual inception of the OG Dynatrace was way back then. I was like, I had no idea. So I needed to clarify that for myself. So sorry for cutting you off, Andy. I couldn't let that one rest. My mind was exploding there. Christian, we probably have a lot of listeners that are looking at some of our previous podcasts that they listen to. And we talked about DevOps. We talked about SRE. We talked about serverless. We talk about all sorts of as you said future trend technology or stuff that is currently
Starting point is 00:05:30 hot microservices how does the mainframe still fit into this picture i mean is this and this is the interesting thing right we never think about it. These technologies actually still power so many of our services we use on a day-to-day basis. As I said earlier on my phone, when I make a bank transfer, I'm pretty sure it hits the mainframe. So now to rephrase the question, Christian, for the people that never thought that the mainframe is still a thing and that it's relevant for them in their day- do the live, that they're actually interacting with it. Can you give us first maybe a quick history, a little bit of an overview where the mainframe come from
Starting point is 00:06:11 and also answer why is it still relevant in the applications we're using today? That would be an interesting kind of segue that you could make. Yes, definitely. The mainframe, as you said it has a strong history a long history so the first mainframes were introduced even before 1964 but the really really revolutionary mainframe platform was the IBM S360 that was introduced on April 7, 1964. So very important date, so almost 60 years ago.
Starting point is 00:06:56 And that really revolutionized the IT industry. So the development cost for IBM for that specific platform was around 5 billion. You have to imagine that's really an amazing figure. So today I would guess that would be equal to 40 billion or something like that. So really a massive figure. And yeah, it developed over time. More and more technology stacks were added and Java was added as well to the mainframe with new processor types reflecting that that are able to run on these processor types so the the mainframe was set dead already in the early 90s when i started working on
Starting point is 00:07:57 the mainframe but hey after 30 years it's still there. It's still important. And as you mentioned, it still runs business-critical transactions on the backend, not only banking transactions. I think almost 90% of credit card transactions are processed on a mainframe backend. So that is really an impressive figure. Not only, I would say, traditional applications are running on the mainframe, but also newer technology stacks like Java-based technologies, even OpenShift, can be run on a mainframe and yeah these older technologies like COBOL and PL1 applications that have grown over 40 years and have been developed over time are still made available to the outside world with different technologies. So that is really an important topic today to make these old assets available to the outside world. Recently, I worked with a customer who is calling mainframe transactions from AWS Lambda services. So that was really impressive for me how easy it is to make these assets available to the outside world. So cloud meets mainframe here, old stuff
Starting point is 00:09:39 meets new stuff. And yeah, the mainframe is still important and is running a lot of uh business critical transactions nowadays and the more important it is to make it available to the outside world and also have observability into these mainframe assets i just want one before before we move off of this topic here, I wanted to ask I imagine it would also be very important to continue to educate people
Starting point is 00:10:14 about mainframe and everyone's going into front-end development, back-end development, all these fancier code bases. But I imagine that there is quite some trouble hiring younger generation people into mainframe operations, learning COBOL, learning how to even use and interact. I can imagine a lot of aging out of the workforce who knows how to work with mainframe, and
Starting point is 00:10:42 that's probably another issue going on there and yeah i mean because when you think about mainframe right you're talking about if we go back to the old old ones right if you look at the old science fiction movies those are the ones with the large tape reels spinning around and all little beeping lights and well i guess they probably didn't really be they beeped in the movies right but obviously they're much different these days but it's um yeah i can imagine that's another another big part of this too anyhow i didn't want to sidetrack but just just to throw out to people out there especially if you know you're you're not happy with where you're at now
Starting point is 00:11:15 you probably have a a good longevity job in mainframe yeah you say it. Nowadays, this becomes more and more of an issue, of course. Old main framers are retiring, so there's a skills gap, definitely. to reflect that and educate people inside their organizations. So I recently met a systems administrator of a big insurance company, and I really was surprised that this was a 30-year-old guy. He was educated in-house and is now responsible for business-critical IMS systems of this large insurance company. So, yeah, there's a reaction already, but still, there's a lot more to do. A big problem is, of course, that COBOL PL1 normally is not taught anymore on universities and schools. I know that there are some schools who do it again, I would say, but there should be
Starting point is 00:12:41 more, of course. But as you said, it's a big problem and organizations need to react appropriately to fill that skills gap. Talking about COBOL, it just reminds me, and we probably need to redo the screenshot later on, our picture, because I just, over the weekend, I was at my parents' house and I was digging through some of my old stuff and I found the old COBOL programming book, the handbook that I got in my high school. Because when I was in high school in the 90s, we learned COBOL. And back then they also told us, right? Yeah, we're not sure, but I think the mainframe will not be relevant anymore when you're finished
Starting point is 00:13:21 with school. But as you said, 30 years later, it's still very very very relevant christian i have a question because as as brian said when when when i hear mainframe and many maybe hear mainframe they say well this is old hardware old computers but it's actually not right i mean i assume there is new hardware being developed there is also new operating systems is there still in is there still, quote-unquote, going on, or is it more on making sure that the operating system is optimized for the latest hardware? Because obviously hardware needs to be renewed,
Starting point is 00:13:56 and I guess they're building new pieces. They're optimizing and making it more efficient. Is it still happening on a regular basis by IBM? Yes, definitely. There is new development concerning OpenShift with acquisitions. IBM definitely also wants to bring new technologies also to the mainframe. And with the acquisition of Red Hat, of course, OpenShift on the mainframe and with the acquisition of redhat of course open shift on the mainframe is a big topic and more and more customers are using that as well
Starting point is 00:14:33 so there is new development and new technologies coming on the mainframe platform And yeah, of course, the operating system needs to reflect that as well. And yeah, the new technology, of course, is a big topic for IBM as well. But let me ask you a question then. If OpenShift and also Java, you mentioned Java earlier, I can run OpenShift and I can run Java on the mainframe. Why would I pick the mainframe and why would I not run it on somebody, some virtual hardware somewhere in the cloud? Why do people, why do organizations choose the mainframe?
Starting point is 00:15:27 Typically, it's organizations that already have a mainframe. So if they have some traditional applications, they also go down that route to include also these new technologies on the mainframe because it's easy to achieve and IBM has done a good job in the past to make these new technologies more attractive for customers. If we think about the introduction of the SAP processors, which are now SIP processors, it's some specialty processors that have a fixed cost and the Java workload can be shifted over there and is more cost effective for mainframe customers. So that also closes then the skills gap because Java developers are easier, I say cautiously, easier to acquire than COBOL and PL1 developers. But yeah, IBM has done a good job to make the mainframe more attractive also for these new technology stacks. But you're not necessarily getting the speed benefit from the mainframe for languages like Java, correct? I mean, and I only asked with one bit of context
Starting point is 00:16:54 that I had way back. I was working with a financial company and they were writing all of their code in C or C++. I forget which one. Because basically, Java and.NET, because it was like a high-speed trading platform, from just a code execution point of view, were too slow for that volume.
Starting point is 00:17:14 So it's not like you're going to get... Mainframe can run things really fast and really high volume, but languages have a limitation of their own correct so you might get a little benefit from the mainframe but you're not going to get that full throttled mainframe speed with more of a heavy language like java there's that yeah sorry to interrupt you but yeah that was probably an issue in the past with the first releases on Java on the mainframe. Nowadays with these modern WebSphere application servers or Liberty servers running on the mainframe on these specialty processors, that's not an issue anymore. A problem, of course, was in the past if you had some mixture of languages if you were calling java from cobalt or
Starting point is 00:18:09 vice versa then it could have an impact on response times and resource usages but that was in the past nowadays that's not an issue anymore so So that's more that innovation. Yeah, cool. Yes, definitely. So IBM has invested a lot to streamline the newer technologies also on the mainframe. Learn something new, Brian. There you go. This whole episode is new, right? You had outdated data in your memory.
Starting point is 00:18:43 I sure did. It's like JetG did it's like chad gpt because chad gpt only goes until 2021 and if you ask something more recent right but uh hey christian uh we talked about you know you're on a pure performance podcast and performance engineering is obviously also relevant for anything that runs on the mainframe and what we need to do performance engineering is visibility and observability into systems. Can you give us a lay of the land and kind of an overview of how does observability
Starting point is 00:19:13 in general work maybe on the mainframe? What type of data metrics or insights do we get? And to do a little bit of a segue into kind of our quote unquote modern world. Does open telemetry play any role in this, which is kind of like an emerging or the standard nowadays when it comes to distributed tracing? Yes, definitely. That's a topic as well. So when dealing with mainframe applications, of course, it's very important not to only have a very siloed view on the mainframe platform.
Starting point is 00:19:50 It's essential to have end-to-end visibility and to end observability to see which applications are calling into the mainframe which services are hitting mainframe transactions and that observability is key to understand who is stressing the mainframe platform because every CPU cycle on the mainframe is associated with cost and therefore it's in the interest of every organization to reduce MSU usage, which is CPU usage on the mainframe, to reduce the total cost of ownership of the mainframe platform. So therefore, observability from an end-to-end perspective is really key to see who is calling into the mainframe and how often
Starting point is 00:20:48 and where the hotspots are inside the mainframe applications, not only Kicks IMS, but also Java And when you talk about open telemetry, of course, that's a big topic as well on Java on CUS. attributes or if a certain end-to-end observability solution does not support certain protocols to also provide or do context propagation to achieve that end-to-end visibility from the front end into the mainframe backend. So yes, OpenTelemetry plays an integral part. I recently worked with a customer who changed their code, added the OpenTelemetry code to capture some additional attributes, which allows them to find additional information in some other sources and the key aspect of OpenTelemetry of course is that there's no vendor login so organizations can use it with our solution with Dynatr, but they can also send the data into other platforms.
Starting point is 00:22:29 So OpenTelemetry really plays a more and more important part on the mainframe as well. Do you see OpenTelemetry also go beyond Java on the mainframe? Do you see it's being even considered by ibm to be put into some of the other languages and runtimes or is this doesn't make sense uh i don't think so that this is a big topic for the traditional workload um because it's it's native code, COBOL PL1. From a technology standpoint, I assume it's not that easy to provide something similar that we have for Java with OpenTelemetry. Okay, that's good to know. You know, it's interesting too with OpenTelemetry. Another point with the OpenTelemetry, I think, is, and this is probably not as much of a concern today as it was even
Starting point is 00:23:27 a few years ago, obviously with the more modernized workforce and some of the innovations. But I know when I first started in Dynatrace, um, 2011, there was a lot of the talk around mainframe was people like, I don't want to touch the mainframe. I don't want to breathe on the mainframe, right? Cause one wrong thing and it's going to fall over. Right. But even sticking with that fear, which probably is a lot less these days again, if you get the open telemetry instrumentation into what's running there,
Starting point is 00:23:58 you don't have to touch it again. Right. So that's another benefit of you're going to swap out different things like, hey, we'll do it once we're done. We can keep your hands off and not worry about, you know, this supposedly fragile system, which probably is not that easy to add additional code. Of course, it depends on the importance. But for newer developments, why not add it right away to be fit for the future. So adding OpenTelemetry to a new framework that is being developed definitely makes sense. And that's also what I see with customers. If they make new developments, they are, I would say, very motivated to add OpenTelemetry code. It's easier for them to do it with applications that are still under development or only small
Starting point is 00:25:13 web services that they are developing. Just adding that code to be fit for the future, wherever they want to send it. CHRISTIAN BRINKHOFF- Christian, a little focus or explanation, not on the future, but a little bit on the current set and the past. Because obviously, we work for a vendor. We work for Dynatrace. And we have our, we've invested a lot of years in our technology
Starting point is 00:25:40 to actually get insights into the mainframe. So what type of insights and observability does, for instance, our agent get on the mainframe that you would not get, let's say, through OpenTelemetry or other means? What is the specialty that we do? The specialty is that we place agents inside the mainframe applications for the different
Starting point is 00:26:06 technology stacks could be Kix-IMS, could be Java and C-OS, WebSphere application server, Liberty, but also see Linux. So if customers run Linux on the mainframe, they can place agents there as well. And if they also place the agents on the open systems host, they will get end-to-end visibility automatically. So just by placing the agents there, the magic is done under the hood by Dynatrace and will provide end-to-end visibility. And when it comes to the mainframe, we provide, of course, the same metrics, which are important for mainframe customers, response times, of course, CPU times. So it's possible to record the CPU times for each single transaction. So it's easy to identify the hotspots inside the mainframe applications. And on top of that, of course, failures are always an important topic so these ones are recorded as well if transactions fail
Starting point is 00:27:30 with data exceptions for instance when talking about kicks transactions the famous asura events these ones are captured as well and not only do you see these failures inside the mainframe transactions, you also see the impact on the actual users because of the end-to-end visibility back into the applications. And yeah, it's a big difference if only one user is affected for an internal application in opposite to thousands of users countrywide are affected in a business critical app. Yeah thanks for that insight because we you know we we try to to keep this you know podcast always very neutral to the to individual tool implementations, but Dynatrace is in a unique position thanks to also the work that Compuware back then obviously did on the mainframe.
Starting point is 00:28:35 And then with the acquisition, we brought that visibility and observability into the Dynatrace platform. That's just great to know. Christian, Brian and I often, when we talk about performance engineering, we always love to talk about common problem patterns. We always like to talk about what are the classical things that then go wrong.
Starting point is 00:28:56 And then if you look at it and it's like, of course, this is always what we see. Do you have some use cases, some examples of what are the classical things that actually either make the mainframe slow or expensive or just where you then do firefighting and trying to figure out what's wrong? What are some of the common things we see out there or you see out there? Yeah, the common things are of course response time degradations inside mainframe transactions for different reasons. Often DB2
Starting point is 00:29:37 plays a role there because indexes are not designed, I would say, perfectly, or wrong indexes are chosen by DB2. But not only that, that the response time degrades in certain transactions, it's also key to see the impact on the distributed services. I worked with a customer, they really saw performance degradations when a certain amount of user was reached concurrently in a business critical app,
Starting point is 00:30:22 and they had no clue what was the root cause. And by testing it out, it was clearly visible immediately. So they increased during a load test the number of users. And at a certain point in time, the response time degraded. So that was already with 30 users. And the reason was a deadlock in DB2. So with that end-to-end visibility, they saw it immediately that this deadlock in DB2 was then the root cause because multiple transactions were trying to access
Starting point is 00:31:08 the same data, the same pages in DB2 and that was effectively causing that problem. You see the deadlocks in DB2 also on the main mainframe itself but you do not have that visibility where it is actually caused so often it is not even noticed well it's noticed but somebody says, yeah, unless someone shouts out loudly that there's a problem, let's forget about that minus 911, as it is called, this deadlock in DB2. So this end-to-end visibility is key again. Another problem pattern, of course, is high resource usage on the mainframe. It can be in traditional programs where certain information is picked up wrongly or redundantly so that there are hotspots in the programs. So that is also a problem pattern that occurs frequently. What else?
Starting point is 00:32:35 Also that visibility who is calling into the mainframe. So I also had a customer they had no clue why the number of mainframe transactions was so significantly increased in one year because they had no context between mainframe transactions and open systems world. And with the end-to-end visibility, they saw it immediately. Well, on the one hand, of course, new applications hit the mainframe, but also the existing applications had more and more users, and therefore they were triggering more and more mainframe transactions also introduction of mobile apps of course spread the the mainframe platform you have your mainframe actually in your pocket you go to your banking account look at your account, and maybe you pay something that you have
Starting point is 00:33:50 purchased, travel or something else, and on the back end, you hit a couple of mainframe transactions. And maybe not only one or a couple of mainframe transactions. I worked with a customer that identified that with one single click in the browser they were triggering hundreds of mainframe transactions and some of them even redundantly. So they reacted really quickly and made this more efficient to reduce the number of mainframe
Starting point is 00:34:25 transactions that are being triggered. So these are the common problem patterns that we typically see out in the field. I wanted to ask with the costs of mainframe, right? And this is where obviously my knowledge is going to fall apart. I it's somewhat transactional based i forget if it's iops or not and is that the same as per transaction because and maybe you can explain this right so where i'm going with this is the idea that not only is it the performance that needs to be looked at but more so than in the cloud, there's a direct cost to that performance. So whether it's the amount or the amount of compute you're using, this is how customers at least used to get charged for mainframe. I imagine that's still the model.
Starting point is 00:35:15 So is there, if that's still true, is there a balance that needs to be made between response time performance and compute performance in order to keep costs down, maybe at the cost of a little bit of a slower transaction? How does that all come into play? Because I imagine that would be something in certain cases important to look at. Yes, definitely. That's still one of the most important topics on the mainframe to save CPU cycles. So every CPU second that you're using, especially at peak times, is
Starting point is 00:35:55 related to additional costs. On the mainframe you're typically charged for MSUs, not only by IBM, but also by other vendors. In the past, and it's still widespread nowadays, you're charged for the peak of the four-hour rolling average MSUs on the mainframe. So that is normally for financial organizations. It's the first day of the following month where there is the peak usage for month-end processing. And 1st and 2nd of January is even worse because then you have all that year-end processing. So that is really massive concerning peak consumption. And customers are charged for that.
Starting point is 00:36:49 But IBM has already acknowledged that this is an approach which can be streamlined as well for making the mainframe platform more attractive. So they are offering also a tailored fit pricing for a couple of years now, which has the effect that you pay for actually used MSUs, so you're charging for MSU hours when customers choose that tailored fit pricing. And that is, of course, more attractive because you really pay what you are using and you are not paying the peak value in a month, which is, I would say, a little bit unfair. You're paying for the peak and not... You could have a very solid MSU usage throughout the whole month
Starting point is 00:37:57 and one day you have a really high peak and you're paying for that with the traditional approach, but with the newer licensing which we will offer as well very soon so that is to come this year we will also charge for msu hours but i imagine though the challenge then becomes not just looking at like let's let's take the classic andy and i's one of our favorite problem patterns, right? The N plus one problem. Typically, you don't want an N plus one problem in standard programming. When it comes to mainframe, then I imagine the challenge would be, well, let's try with and without N plus one and see what the tradeoff between cost and performance is.
Starting point is 00:38:43 Because if it's going to cost us significantly cheaper to run the n plus one for some reason i don't know why that would be but if that were the case i mean i think before you know with the observability and performance in mind you might not throw out the old performance patterns you'd probably want to do a little bit more investigation first to see what that cost performance ratio is. Whereas in standard non-mainframe computing, oftentimes the compute is so much cheaper. They don't, you know, most people, you know,
Starting point is 00:39:13 we talked about the $4 billion. I can imagine that be auto-scaling cloud costs at some point, right? But it's a lot more, right? So would it be fair to say that we can't just ignore or adhere to not following, or not avoiding the old performance problem patterns, because they may in fact, in some weird case, lower the cost?
Starting point is 00:39:39 Or do we not find any correlation between it? Basically, if it's good for performance, it's typically good for um compute and i don't know if there's a known correlation or if it's if it's something that people have to test thoroughly on that stuff on mainframe yeah of course you need to test that of course but there's not necessarily a correlation between response times and CPU cycles. So you can have slow running transactions which are not using a lot of CPU because they are waiting for something, for some resources that are locked for whatever reason. So there's not always a direct relationship between response times and CPU cycles. But yes, with observability, you have the possibility to make comparisons between different scenarios to see the impact of certain things.
Starting point is 00:40:49 My take on this, because Brian, I'll try to figure out where you, if I could come up with a use case where inefficient algorithms would still lead to more cost-effective execution. But I think in general, it's hard to imagine anything like this, right? The M plus one query or anything else means you have more roundtips, more data to fetch. And eventually you need more compute
Starting point is 00:41:13 to process the data that comes in batches. And therefore, I think that general advice should be optimize your code. The question is, if it's old code, right? If you don't have the resources, it takes really long to get that code changed through. And how much does this cost versus the cost that you have? Because maybe that function is only called twice a month.
Starting point is 00:41:34 Who knows? I mean, that's... I'm just thinking of the processing gets put to JVM running in an EC2 instance, instead of it being put into the database to gather up the large chunk. Yeah, I don't know. Again, this is all academic thoughts, but it does raise this question of cost of operation is much more of a thing on mainframe, and that is a performance consideration. We always talk about the cost of performance in regular cloud, but I think it's a lot more of a focus on mainframe.
Starting point is 00:42:09 Christian, one thing that you brought up earlier, and I think this was for me, it's almost like an eye opener, because you said with building new apps and bringing use cases to your phone, you're completely changing the data behavior. Because I may have gone in the old days, 20 years ago, I may went to the bank once a month. Then maybe I went to the ATM to check my balance maybe once a week. Now I can do it every hour in case I'm waiting for something. Which means... Every second.
Starting point is 00:42:45 Every second, right? Because we are providing services, convenient services to end users. They have a different behavior now, but it still hits the same backend. And I think that's a really interesting thought that system architects, or also whoever designs new apps and use cases they need to they need to keep
Starting point is 00:43:05 in mind right and and i guess there's ways to to counteract this so you can put a cache in there because if i if i refresh every five seconds yeah you know what you know that's what it is right and i get some old outdated data um but yeah it's really interesting. Yeah, correct. So caching, of course, is key here in that context. Yeah, but it's a real issue that you have to take into consideration. So checking the balance by just refreshing inside your mobile banking app. And if you every time trigger hundreds of transactions, that makes no sense. That always gets the same data back.
Starting point is 00:43:55 And yeah, that can be streamlined, of course. But it's important to do it. If you don't do it, it can be a real issue. So if we don't like our bank bank we should refresh our balance a bunch open up a bank account on the on the on the competing bank and then just kill them with the costs on the on yeah hey christian i kind of like getting to the end here because we are running a little bit close on time. I learned that the mainframe has been around since 1964, right? April 7, a date to remember April 7, 1964.
Starting point is 00:44:38 Quite impressive. This has been, well, I wasn't born back then, and I guess many of our listeners have not. But I was close. We have, I know, a lot of resources. You did a great job in preparing for this podcast. You brought a lot of links to articles, blog posts for the Dynatrace community out there. We have different videos, observability clinics, also blog posts with case studies.
Starting point is 00:45:10 So we'll put them all into the description and the summary of the podcast so you can follow up in case you're interested in more. Christian, is there anything else that we missed? Is there any other topic or any parting words where you say, this is what I wanted to get off my chest in this podcast that we didn't discuss so far? No, I think we covered everything which is relevant for an introduction to mainframe and what is important nowadays and how it has evolved over time. So, yeah, it's still important and it's still important to get observability
Starting point is 00:45:53 into these apps as a mainframe platform is really business critical. Brian, what did you learn today? Well, I learned several things, but the one takeaway, right, Andy, we always, at least in the past, we used to talk a lot about leveling up, right? And looking at some of this other information that Christian sent us was some key, some the bigger industries that will leverage mainframe. I'm just going to read it from the slide. Banks, insurance, healthcare, airlines, retail, car manufacturing, government.
Starting point is 00:46:28 Sure, there's others. But especially if you work in those areas, you may be writing system components that are interacting with the mainframe somewhere down the line. So it's a good thing to maybe ask around, hey, do we have mainframe? Maybe talk to those mainframe folks, find out how you're interacting with that mainframe, even if you're not directly, maybe it's something down the line. But it'll also give you a chance
Starting point is 00:46:53 to meet some mainframe people, learn a few things about mainframe and grow your knowledge, but also grow your value, level up. And yeah, mainframe is, what's amazing is that the use of it keeps growing. And when you think about things like chat GPT and all these others,
Starting point is 00:47:12 with the processing power that mainframes have and the speed and the compute, it's hard to imagine this, depending on what side of the fence you're on, utopian or dystopian future where computers are semi-sentient without some sort of mainframe involvement, um, because of the scale and speed that they would have. Um, so it's, uh, and I'm not taking it from that point of view, but just, just, just to illustrate the fact that like the immense power in these now, again,
Starting point is 00:47:43 will quantum computing you know surpass this if that ever becomes a reality reality that's you know feasible who knows but it's it's definitely out there it's definitely important and it's just a great reminder that no matter how old or outdated we think something might be it's probably just because we're ignorant of how it's being used because it's not the fancy, sexy new language that's out there, right? So what's old is new again. And really, really thank you a lot, Christian, for bringing this topic to both Andy and I and our listeners as well because we love learning. And that's what we're doing today. So thank you so much.
Starting point is 00:48:24 Thank you so much. Thank you. Cool. Thanks for having me.
