PurePerformance - 028 Mainframe: Must knows especially for distributed and Cloud Native folks
Episode Date: February 13, 2017
Mike Horwitz (https://www.linkedin.com/in/mike-horwitz-a40a139) has been working with the mainframe since the mid 80s. In this podcast he explains basic terminology and the challenges that come with the interaction with the distributed and cloud native world. Monitoring end-to-end is a critical capability, especially when it comes to cost savings and including the mainframe components in a CI/CD/DevOps environment. If you want to learn more about common mainframe performance and monitoring challenges, check out our YouTube Performance Clinic: https://www.youtube.com/watch?v=8eodOw3gnMA&list=PLqt2rd0eew1bmDn54E2_M2uvbhm_WxY_6&index=55
Transcript
Hey everybody, this is Brian, and before we get to the show, for the first time ever I need to pre-empt the show
just to admit and say I was a bit of an idiot today, and in my early morning, still waiting for coffee to kick in,
I kept calling the mainframe the database. So, yeah, every time you hear me talking about the database,
I was really talking about mainframe. Anyhow, on with the show.
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello and welcome to another exciting episode of Pure Performance. That's right, I did say exciting again. I think last podcast, Andy, I was a little downtrodden, even though it
was a very fun recording. I was like, hello, welcome
to another. I was very reserved. Anyhow, Andy, how are you doing today?
Good, good. Well, I have to also say last episode, it was amazing because I learned
a lot, but I've never felt so embarrassed because I didn't know as much about the topic
as I should have. And, you know, that last episode was about Docker, Mesos, Kubernetes. It was great to have an expert on the line.
And so it's kind of like this new cool technology that we talked about that people need to be aware of what it's doing.
And I believe now, today, we have not the opposite, but from a timeline perspective, we talk about a technology that has been around for a very long, long time. And yet, I don't know as much as I should know about it, even though I believe all of
us, listeners and the two of us, and also Mike, who I will introduce in a second, we
are probably interacting with that type of technology every single day.
What type of magic technology would this be?
What type of magic technology could that be?
Well, I want to now actually give the chance to Mike Horvitz.
Hey, Mike, are you here?
I am here, Andy and Brian.
How are you guys?
Good.
Good, good.
Hey, so, Mike, what is this magic technology that has been around for a long, long time?
Most of us probably don't know what it really does, but yet we're interacting with it probably every single day.
Well, I can only assume that you're referring to the mainframe, Andy.
Exactly.
That's what it is.
So yeah, the mainframe has been around about as long as computers have been around.
Yeah.
So since the Stone Age, huh?
Right.
Yep.
And like you said, pretty much everybody, whether they know it or not,
interacts with it on a pretty daily basis. Most of the financial institutions and the larger health
insurance institutions are all still using their mainframe backends. So how long have you
been working in that field? Because I assume you've been working there for quite a bit. Yeah, so I originally started on the mainframe back in 1985. Wow. And then around 1990 or so, I started making the shift
to what is known as the distributed world. I did both for about 10 years. Then I did purely
distributed for about another 10 or 15 years. And then about five years ago, I joined my current project, which is a blend of both
the distributed and the mainframe world.
Cool.
So you basically started with mainframe when I just left kindergarten.
That's awesome.
We have to give some time reference here.
Right, right.
I'm a seasoned veteran.
And I wanted to know, are they still – they're still manufactured, right?
It's not like everyone's still running like a dusty old thing.
No, that's correct.
And there are updates constantly being made to –
Just kind of setting the level for everybody that this is still –
Yep.
The hardware is being updated.
The software is constantly being updated, both the operating system and, for lack of a better term, the adapters that allow the distributed world to interface.
Because that's really where, you know, once TCP/IP became ubiquitous, especially on the mainframe, that opened up the ability to have the distributed world through either
a mobile front end or a web front end interact with the mainframe backend.
And there's a variety of adapters and transaction managers that allow, you know, a phone, for
example, to eventually get to a DB2 database on the mainframe.
Cool.
Hey, now, so for people like me that, you know, were born after
the whole thing really kicked off, and I've obviously, I interact with mainframe through
our customer base a little bit, but could you fill us in, and especially the audience, on some of the
main terminology, some of the terms that we should know about that, you know, just to give us all background information?
Yeah, so in general, there are two ways that mainframes process data.
One is via batch processing, and those are jobs.
Think of those, you know, like a cron job or a scheduled job on a Windows machine
where they run in the background generally.
They run a lot of times in the evening or overnight, and they do what you would think
of as batch-type processing.
But the real guts of it is the interactive processing that mainframes do on a 24-by-7
basis.
And the two main systems, I guess you could say, are what's known as CICS, which is the Customer Information Control System, and IMS, which is the Information Management System.
And those are two different ways of doing interactive technologies for mainframes.
And in the old days, you used to have what were called green screens. So you'll
still hear people talk about green screens. And that was the old green and black and yellow,
you know, 40 by 24, 80 by 24 entry screens. And for the old school guys, you watch some of them
and they can manipulate those things like magic. They don't even use a mouse. But for a lot of younger people, both developers and users, everybody wanted the fancy web front ends or, like I was saying earlier, the mobile front ends.
So now what you've got are those front ends, and they communicate to these existing CICS and to these existing IMS backends.
So that legacy code on the backend hasn't changed,
but they've taken what used to be that green screen entry screen
and converted it to an API.
So you use web services to interact via that API now.
There are also a series, as I was saying,
IBM has come out with a series of adapter technologies. A very common one is CTG,
which is the CICS Transaction Gateway. And that does exactly what you would think it does. It's a gateway that lets WebSphere, for example, execute a CICS transaction through that gateway, and it routes it to the CICS or the IMS backend, and it will behave just like a regular web service.
And so the development language on the backend, is that old COBOL or is there anything else?
No, COBOL is still a really big one.
Another very popular one is PL/1, or PL/I.
You know, different people call it different things.
And you will see a handful of assembler still being written.
And again, for the same reasons you would expect on any other platform, speed.
The other common language actually is C.
So what people started doing initially before IBM came out with some of these adapter technologies
is they would write C code to do the TCP/IP listening,
and that would be the adapter that would then forward the calls to the COBOL or the PL/1 legacy code
that's running. You will also find a fair amount of Java now on the mainframe. Java is supported
within CICS, but what you'll also find, so CTG, for example, is a JVM. So when IBM developed that
tool, that is actually a Java virtual machine, completely written in Java.
But the legacy code is still primarily COBOL or PL/1.
Go ahead, Brian.
Thanks.
Obviously, I think we want to dive into a little bit more on the mainframe side, but just before we go into that too, for all the more distributed type
developers who are going to be interacting with the mainframe, based on what you were saying about
the service calls and the APIs, it almost sounds like it's just another endpoint that they'll be
sending another service request to. Like a developer or Java developer, Node or anyone
else who's going to eventually be sending something down or making a request to the mainframe doesn't need to know what's behind the curtain, right?
They're just going to, hey, I have to send up a service call and format it in this way, and it'll
get my data back. Is that? That's correct. You know, it's really just another tier in an enterprise application.
And one of the things we'll probably talk about that you guys will probably be interested in is that tier methodology way of thinking fits perfectly into an enterprise APM solution because it really is just another tier.
And as you say, the front-end developers, they're just making an API call.
If it fails, they need to find out where in any number of a back-end tier it failed,
but it would be no different than analyzing it if that back-end tier was another JVM or a .NET server.
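To make that concrete for the distributed folks, here is a minimal sketch in Java of what that looks like from the calling side. The endpoint URL, path, and response shape are invented for illustration; in practice the CICS or IMS transaction would be exposed through whatever adapter the site uses (CTG, z/OS Connect, an in-house listener), but to the caller it is just another HTTP request to another tier.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical client for a CICS transaction that has been exposed as a REST/JSON API.
// The host, path, and payload are illustrative only -- the real adapter defines the contract.
public class AccountBalanceClient {

    private static final HttpClient HTTP = HttpClient.newHttpClient();

    public static String fetchBalance(String accountId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example-bank.com/accounts/" + accountId + "/balance"))
                .header("Accept", "application/json")
                .GET()
                .build();

        // From the caller's perspective this is just another back-end tier:
        // behind this endpoint an adapter drives a CICS or IMS transaction
        // that ultimately reads DB2 on the mainframe.
        HttpResponse<String> response = HTTP.send(request, HttpResponse.BodyHandlers.ofString());
        if (response.statusCode() != 200) {
            throw new IllegalStateException("Balance lookup failed: HTTP " + response.statusCode());
        }
        return response.body();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchBalance("12345678"));
    }
}
```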
Hey, there's one more term that is floating around all the time, which is MIPS.
And I believe MIPS is very critical.
And could you explain what MIPS is and why it is so critical to the mainframe folks?
So the biggest difference between the mainframe and the distributed world in terms of cost of your application is paying for CPU.
So on the mainframe, you actually pay IBM for the CPU that you use.
So if you buy a distributed machine, you've effectively bought that hardware 100%. And no
matter how much CPU you use on it, that's a fixed cost. On the mainframe, you're paying for those
instruction cycles or MIPS that you're using. So it's extremely critical on the mainframe
to be able to monitor that CPU usage and keep it low. Not to say that application response time
isn't important, but that's not a hard monetary value that you would pay for MIPS.
So understanding your MIPS usage and how it's impacted by, as Brian was just saying,
a developer who makes a call into the mainframe now could affect that MIPS usage, both positively
and negatively. And it's very important for the mainframe performance analyst engineers to
understand that. And I think, I mean, so first of all, a clarification: MIPS stands for millions of instructions per second. Is that right?
I believe that is correct. Yeah.
Go ask the Oracle. Yeah, go ask the Oracle. It's called Google.
Right. And, you know, what I think is really interesting is, so this has been the business model from IBM since the beginning, right? I would assume.
As long as I've been around, yes.
And you've been around for a while, so it's been for a while.
So now, if you think about what's happening nowadays with infrastructure as a service, it's the same thing from the cloud providers, because they basically charge me for the resources that I consume. And even though with Amazon EC2, obviously, it is an EC2 instance of the different sizes, the next cool thing is serverless technology, which means, like, functions as a service, Lambda. And there I'm actually getting charged in exactly the same way: how often do I execute code, and for how long? I mean, this is basically going back to what IBM came up with, from a business perspective, a business model years ago, decades ago.
And I think what I like that you said earlier,
this is something that Brian and I have been trying
to educate people on: performance engineering used to be about performance
because you got your hardware and then you did whatever you could do with that hardware to squeeze out the best performance.
And maybe that meant just writing algorithms differently, doing something asynchronously, using more threads, more memory, whatever, to improve performance, response time.
But now we need to think about not only response time but also resource efficiency.
And I believe this is something that is very important, obviously, for the mainframe world, but will become more and more important the more applications we are moving to these environments where we get charged by CPU cycles, by bytes sent over the network, by bytes stored to disk, by transactions made with the database.
So I think – Yeah.
Yeah.
It's a very similar model.
It's kind of eerie.
It is.
And so hopefully we're all smart enough to talk with the mainframe folks because they've been going through this.
And hopefully we can learn from them.
So, Mike, thanks for being on the call today.
Oh, you're welcome.
So I think, Brian, you wanted to kind of segue over now to some of the things we do as performance
engineers with the mainframe, what we should do. And Mike, you and I, and also Christian Schramm,
one of our colleagues, we did a performance clinic a couple of weeks ago where you actually walked through some of the use cases that monitoring solutions actually help to solve.
Can you tell us a little more about the main use cases, or what are the biggest problems that people run into, which then lead to a high number
of MIPS that we should optimize?
Yeah, so the biggest thing that we've noticed
is when these new front ends that I was referring to, when a change is made there,
the impact that that can potentially have on the backend is substantial. What we have found
is that the actual mainframe backend tiers, that legacy code has been around so long and the
testing is in place and the monitoring tools that have been in use there for years do a reasonably
good to a very good job of letting the performance engineers know that this particular
DB2 call or this set of DB2 calls within a given CICS transaction took a certain amount of time
and took a certain amount of resources. They have the tools to do that. The key problem pattern that
we've been seeing in the last couple of years is when a change is made, say, on a mobile front end or a web front end, that now invokes that transaction 10 more times than it needs to accidentally.
So the old mainframe tools are saying, oh, this transaction in itself is still running very smoothly and very efficiently. But what the customers need to know is now, okay, we're now calling that transaction too many times because of a change that happened in the front end.
I think that's the most common one. Two, three, maybe even four years ago, when we came up with the distributed tracing from the distributed world into the mainframe, there was a blog post, I think it was maybe Klaus who wrote it, saying: from mobile to mainframe, a slight configuration change in the middleware Java tier caused duplication of calls going to the mainframe, and therefore doubled the number of MIPS, and therefore obviously exploded the costs, right?
And that's exactly the type of thing that can happen. And the other thing is, it's not only potentially a configuration change, it could even be a coding change.
I don't know if you guys remember there was a Christmas commercial that came out like two years ago.
I think it was for FedEx.
I'm not sure who it was.
But it was guys in a warehouse, and they were going crazy because their sales were going through the roof.
And what it turned out, they flashed back, and a little kid had just gotten an iPad for Christmas and he was just banging on it, not realizing that he was hitting the buy button on his cart. And, you know, it's funny and we laugh, but the biggest thing now is, say, banks: you have people
on their phone who can check their account balance anytime during the day. You know, you don't have to go to an ATM anymore. You don't have to be in front of a computer anymore to do that. You can simply just,
you know, every half hour, if you want on your phone, check your app, that balance eventually
is going to make its way to a CICS or IMS backend that's making a DB2 call.
Yeah. And I know for myself, my bank account interest gets compounded by the minute, so I'm always on it about, you know... Andy, I know you had something there. I just wanted to point out, Andy, and probably people who've been listening are hearing the same exact problem coming up with the mainframe, right? It's the chattiness, or possibly the N plus one. The same issue that we typically see going to a database.
We're seeing it all the time now with the distributed services, where in general there's way too much chattiness and the N plus one. The same exact problem existing, or becoming a new problem for the database, with its renewed life in these situations.
So it's stop doing this.
Please, everybody.
Yeah.
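A minimal sketch of that chattiness / N-plus-one pattern, with an invented stand-in for the mainframe backend so the call counts are visible. The point is that the chatty loop burns one back-end transaction (and its MIPS) per account, while the batched variant burns one per request:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: "MainframeBackend" stands in for whatever adapter drives a
// CICS/IMS transaction per call. Every call it receives burns mainframe CPU (MIPS).
public class ChattinessExample {

    static class MainframeBackend {
        int transactionCount = 0; // stand-in for MIPS consumed

        String fetchBalance(String accountId) {               // one back-end transaction per call
            transactionCount++;
            return "balance-of-" + accountId;
        }

        Map<String, String> fetchBalances(List<String> ids) { // one transaction for the whole batch
            transactionCount++;
            Map<String, String> out = new HashMap<>();
            for (String id : ids) out.put(id, "balance-of-" + id);
            return out;
        }
    }

    public static void main(String[] args) {
        List<String> accounts = List.of("A1", "A2", "A3", "A4", "A5");

        // Anti-pattern (N+1): a front-end loop fires one mainframe call per account.
        MainframeBackend chatty = new MainframeBackend();
        for (String id : accounts) chatty.fetchBalance(id);
        System.out.println("Chatty front end: " + chatty.transactionCount + " back-end transactions");

        // Better: one batched request, so cost grows with requests, not with list size.
        MainframeBackend batched = new MainframeBackend();
        batched.fetchBalances(accounts);
        System.out.println("Batched front end: " + batched.transactionCount + " back-end transaction");
    }
}
```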
So the only thought that I had on this bank example, and I think, Mike, this is what you tried to explain.
Back in the days, maybe 10 years ago, we went to our bank branch maybe, I don't know, once a month, maybe once a week.
And then the teller, he looked up through the blue screen, the green screen, probably.
Hopefully not the blue screen.
Not the blue screen, right?
But the green screen. Whatever they compounded there, that means one transaction per customer once
a month.
Now, potentially, I do this once a day, which means 30 days in a month. So 30 times more MIPS consumed by an individual customer, which obviously increases the price.
So here's my thought now.
What can architects learn, especially the folks that build the front of the mobile apps?
Can they be smarter, like caching some data?
I mean, the account balance doesn't change every minute
right? Is there something in place already from IBM that actually says, well, we do not always have to go through the whole mainframe; if you want to query some type of data, we can put it into a cache and then invalidate the cache for you? Is there something like that already available?
My assumption is that there is.
It's been a little while since I've done pure mainframe application development. But as you say,
it would certainly make sense for them to have it. And I have to believe that this is a problem pattern that's come up in the last five, six, seven years. So the key there is knowing that
there is a solution, recognizing that you have this problem, and then implementing that solution to verify that you fixed your problem.
So as you guys know, there's all kinds of steps, not just finding that caching software to do it.
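Mike hedges on what IBM itself offers here, so the following is only a generic sketch of the idea Andy describes: a small time-to-live cache on the distributed tier, so that repeated reads of slowly changing data (like an account balance) don't each become a mainframe transaction. The five-minute TTL and the loader function are assumptions for illustration:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Generic TTL cache on the distributed tier. Anything served from the cache
// is a mainframe transaction (and its MIPS) that never happens.
public class TtlCache<K, V> {

    private record Entry<V>(V value, Instant expiresAt) {}

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final Duration ttl;
    private final Function<K, V> loader; // e.g. the actual call into the mainframe adapter

    public TtlCache(Duration ttl, Function<K, V> loader) {
        this.ttl = ttl;
        this.loader = loader;
    }

    public V get(K key) {
        Entry<V> cached = entries.get(key);
        if (cached != null && Instant.now().isBefore(cached.expiresAt())) {
            return cached.value(); // served locally, no back-end call
        }
        V fresh = loader.apply(key); // cache miss or expired: go to the back end once
        entries.put(key, new Entry<>(fresh, Instant.now().plus(ttl)));
        return fresh;
    }

    public void invalidate(K key) { // call this when a write makes the cached value stale
        entries.remove(key);
    }

    public static void main(String[] args) {
        // Hypothetical loader standing in for the real mainframe-backed balance lookup.
        TtlCache<String, String> balances =
                new TtlCache<>(Duration.ofMinutes(5), id -> "balance-of-" + id);
        System.out.println(balances.get("A1")); // first call hits the back end
        System.out.println(balances.get("A1")); // second call within 5 minutes is served from cache
    }
}
```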
And with – go ahead, Brian.
I was going to ask, and I don't know if it's – I was going to change the subject a little bit.
But in terms of – obviously, these are all kind of problems you can see external to the database.
A lot of calls coming in.
You might see MIPS increasing on the database.
But what happens when there is a slowdown within the database? How is that triaged, or how do people understand performance within the database itself?
Yeah, there's usually a couple of steps on the mainframe. There's generally a DBA who's strictly
in charge of monitoring the database performance on its own. And like I said earlier, there have been tools that are around forever for DB2 on the mainframe. The problem nowadays is not so much realizing that, oh,
you know, DB2 is hung up or it's slow. What everybody wants to know now is, okay,
what customers were impacted by this? So you've got this DB2 slowdown, which is in turn causing a CICS transaction to be slow,
which is in turn calling who's ever invoking that CICS transaction from the non-mainframe world
to be delayed in its response. And somewhere there's a user who's not happy.
So there's a collection of tools and a collection of people. And the mainframe world is a lot more siloed.
So the involvement tends to be several different people.
You know, you get a database guy, you'll get your CICS guy, you'll have a network guy, you know, at least a distributed guy.
And between the group of them, they have to triage that entire transaction. And so obviously this is a great use case for, let's say, modern APM tools that actually
can really do full, correct end-to-end tracing.
And if there's a slowdown on the DB2 side, it can not only tell that the DB2 statement is slow, but also who was actually originating it, who is the originator of that call. And did this maybe come from a strange use case of the end user, or maybe from a configuration change, a deployment change on the Java side, that is now doing some very weird things, maybe bypassing a caching layer that used to be put in place to avoid this? I've seen this once, actually not too long ago, where, in the example that I brought earlier, the caching layer
was implemented on the distributed side.
So Java caching layer before going into the mainframe, but through a configuration change
that was accidentally done, the caching layer was bypassed and therefore the mainframe was
pounded all the time, reading the same data, even though the data hardly ever changed.
And that caused some huge issues.
And then obviously, people, like the marketing and salespeople, want to know, okay, which customers, as you said, are impacted, because you want to proactively reach out to them and apologize and say, sorry for the inconvenience, we're working on the issue, blah, blah, blah.
Right. Yep.
And like you said, that's where a modern APM tool that can cover all of those tiers at the same time and in one common view is great.
What we've seen is people talk about war rooms.
And a lot of times when these things come up, if you have the proper tooling, you don't need a war room.
Your war room becomes the tool instead of all of a sudden getting 10 people together in a room and a white
board. And then you could get the blame game going also to some extent. And that's where the tooling
is so important. And I think it's interesting, too, the idea of knowing who the person is. When I was living back in New Jersey, I did a lot of work with the financial institutions, and, you know, what I had no idea about... you think about banking, right?
And everyone thinks, oh, it's me going to my bank.
But there's many layers of banking.
You know, Andy, Mike, you, all three of us were like pebbles in the banking world.
If we came or went from a bank, it wouldn't really matter so much.
But then there's this whole side of private banking or the large, large income people, right? Or even just think about a bank CEO, the amount of money they're moving through,
the amount of money they're making transfers with and all this kind of things is astronomical.
And if you can't identify problems with those people, especially like, you know, I'm not saying
banks are evil and they don't care about performance for us, but they care much more for performance for those, you know, high rollers.
And being able to identify if one of those people had a problem is critical.
Right.
I remember, yeah, what happens if the president of the bank complained about performance?
Are you going to be able to find it?
Right. And that's the tie-in with what we were talking about earlier, Brian, is that if you've got just, you know, like you say, a regular user who's,
you know, beating on their phone 10, 20 times a day to get their balance,
whether or not their response is slow may not be a big deal. But if those balance checks are
slowing down the system and somebody else's more critical response is slow, then you run into a
headache. Hey, one thing, and I'm just looking at the slides that we used at our recent performance
clinic. And by the way, if listeners want to see what you guys actually presented,
you can find it on the YouTube channel, bit.ly/dttutorials, and if you search for mainframe, you'll find the recording. Now, one very interesting thing, talking about MIPS and the mobile world: I believe IBM, they have some special pricing, right, for mobile front ends?
Yeah, that's correct. There's a percentage discount on the CPU usage on the mainframe
for transactions that were generated from the mobile world.
Go ahead, Andy.
Yeah, and that's basically,
how would you normally get this type of information
if you are just looking at your mainframe monitoring
tools? That's a great question. And it's almost impossible without tagging it or tying it,
correlating it together somehow. So again, you really need a tool that can do your true end-to-end transaction tracing because you have to prove to IBM that
somebody on a mobile device generated this transaction that's running on the mainframe.
And through, you know, there are ways I would guess in-house that people could do some type of,
you know, using the old IBM ARM technology, you could modify your source
code to do some tagging and things like that.
But the most effective way would be to find, as you call it, a modern APM tool that could
show you or create the input that IBM requires that literally shows you, okay, these DB2 CPU cycles were generated
from the mobile front end or these CICS, because the way it works is you have to actually specify
to IBM the product that was used on the back end. So there may in fact be different price breaks
for IMS versus CICS versus DB2, or even straight Java WebSphere on z/OS.
But the basic concept is you have to prove without a doubt to IBM
that those transactions and CPU cycles came from a mobile front end.
And there's a very specific format that you need this data in,
and it goes into a tool, and then that tool produces another set of information
that you supply to IBM.
Okay. The billing cycle is probably monthly, so do you have to do this on a monthly basis?
Yeah, I know the reporting is hourly. So when you create the input data, it comes out hourly, but as far as the actual billing goes, my guess is that it's monthly. I haven't gotten
involved too much in the billing aspect. For me, it's been more of a technical, okay, you know,
how does this report work? Yeah. Now, besides banking, we talked about banking a lot,
are there other industries that we may not know about that are using the mainframe?
Yeah, the two big ones are financials in general.
So, you know, people think of banking in some of the contexts that we were talking about earlier,
but just general financials, you know, credit card companies, for example.
The health industry and insurance companies are also two very large ones. So generally, if you think about the industries that are servicing millions and millions and millions of people, so it would be things that everybody uses.
So theoretically, everybody should have insurance.
Theoretically, everybody should do some banking at some point.
And, you know, not just car insurance, but you should also have
health insurance. And then you think about hospitals and how many hospitals are there?
How many hospital patients are there? You know, and that type of load and the fact that these
industries have been around forever usually is a good indicator that at some point they were on
the mainframe. Is the travel industry, the airlines, still a lot on mainframe, or have they moved?
Because there have been a lot of issues they've had with some kind of central systems.
I didn't know lately.
I didn't know if that was mainframe.
Yeah, they were for the longest time.
And to be honest with you, Brian, I'm not really sure if they've moved off or not.
But years ago, it's funny because when you grow up on a mainframe
like I did, you notice those green screens. And, you know, in years past, I've been either at,
you know, there was a furniture vendor in town, you know, where I could see the salesperson on
their monitor entering stuff and it was actually a green screen. And the old airlines, you know, when you'd go to the gate to check in, those were actually the old green screens. So it's funny when you still see those.
Hey, talking about, and actually maybe as a nice conclusion also to the whole session.
Now, when I went to school, and I went to school to learn software engineering in the 90s, I heard about the mainframe. Yeah, we actually learned COBOL, but they said, yeah, learn COBOL, but, you know, it's going to go away anyway. So now, 20, 25 years later, we're still here. Any predictions on when this technology will really be replaced, or do you think...
Nope. My prediction is it won't be. So I teach some of the young guys who come out of school and join our company, I teach them sort of what we're talking about today, and I tell them, if you don't mind having to relocate, the mainframe is the
place to go. It's not going anywhere. There are not a lot of people who know how to do it. There
are not a lot of people currently interested in doing it. The issue is there aren't a lot of
mainframe sites. There are open jobs, but you may have to relocate or you may have to work remotely.
But it's not, to answer your original question, it's not going anywhere. You think about all the
major industries that are still using it on the back end, those back ends will still be there.
The part now is as the front end technologies change and get fancy. 10 years ago, there was
no iOS, there was no Android.
Somebody's got to make that stuff talk to the mainframe.
But you don't think there is a chance of, you know, with this whole microservices model coming out, are the jobs and the kinds of things the mainframe is doing something that can be
translated into a microservices kind of framework, or is it just...?
Well, it could, if it can run on the mainframe, so that those sizes of processors can continue to handle the load.
That's really just about the compute is what you're saying, right?
Yeah, it's mostly, it's not just a load issue, it's a security issue.
There's a whole slew of things that the mainframe provides and has provided forever that the distributed world is still trying to model and get caught up to.
So the way that the applications are architected in terms of monolith and microservices, a lot of that has changed in time.
The applications themselves are not as
monolithic as a lot of people have thought. They have actually broken apart those apps
so that you have certain CICS regions that do certain functions and other CICS regions that
do other functions, and they are separated. And you now have these adapters that handle
the incoming data so people have
really applied the proper architectural rules to it. And whether that COBOL, whether that programming language has shifted, so, you know, whether or not you'll have as many COBOL jobs may not be the case, those may move to Java jobs, but they're still going to be mainframe jobs.
Yeah, it's a different mindset, right?
Absolutely.
And I think the biggest thing, obviously,
from a performance engineering perspective is going to be
how can you not only squeeze out the next millisecond,
but how can you also help to reduce the operating costs
because of resource consumption?
And I really like this, that this is a big topic for mainframe,
and I think it's also a big topic for the distributed world
and for the cloud native world.
Actually, now I have one more quick question.
I've been promoting DevOps, CI, CD, continuous delivery pretty heavily.
How would something like continuous delivery work in the
mainframe world? So how does something like DevOps fit? Are there any thoughts, any ideas around that?
It fits perfectly into that same model, and for a lot of the same reasons. Now, if you think of DevOps for your application, your enterprise, at that level it fits perfectly.
What you want to do is get your testing into a continuous integration, continuous delivery model.
That way when you make changes in your mobile front end, your web front end, or any of your middleware, it's a part of all of this. So when those changes
happen, if you're monitoring the transactions and the DB2 calls on the mainframe back end,
as a part of that continuous integration and continuous testing, your DevOps team should pick
up automatically if changes occur. So not only changes in functional specs, so a test still may go green,
but now if you're doing development in a continuous integration with an APM tool,
you can see that, oh, it stayed green.
However, the response time or the MIPS usage shot way up.
Yeah, that's cool.
Yeah, that's perfect.
Right, that's the key.
And the example that I draw in a presentation I give is if you've got a test that is failing all of a sudden because somebody made a change, somebody else may make a change to make that test go green from red. But in that change, they may have caused a MIPS usage issue. And if you're doing CI and DevOps, you will pick that up. And again, it's the same model.
The earlier the feedback you get across the board, the better.
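As a hedged sketch of that kind of automated feedback, here is what a simple cost gate in a CI pipeline could look like: after the functional tests pass, compare the measured back-end call count and CPU cost of a key transaction against a known-good baseline and fail the build on a regression. The metric source, numbers, and thresholds are invented; in a real pipeline they would come from the APM tool that traces the transaction end to end:

```java
// Minimal sketch of a CI gate: after the functional tests run, compare the measured
// back-end cost of a key transaction against a baseline and fail the build on regression.
// The numbers and the "MetricsSource" are hypothetical stand-ins for a monitoring API.
public class MainframeCostGate {

    interface MetricsSource {
        double backendCallsPerRequest(String transactionName); // e.g. DB2/CICS calls per front-end request
        double cpuMillisPerRequest(String transactionName);    // stand-in for MIPS/CPU cost per request
    }

    public static void main(String[] args) {
        // Hypothetical measurements for the build under test.
        MetricsSource metrics = new MetricsSource() {
            public double backendCallsPerRequest(String tx) { return 11.0; } // was 1.0 in the baseline
            public double cpuMillisPerRequest(String tx)    { return 42.0; }
        };

        String tx = "checkAccountBalance";
        double baselineCalls = 1.0;   // from the last known-good build
        double baselineCpu   = 40.0;
        double tolerance     = 1.20;  // allow 20% drift before failing

        boolean callsOk = metrics.backendCallsPerRequest(tx) <= baselineCalls * tolerance;
        boolean cpuOk   = metrics.cpuMillisPerRequest(tx)   <= baselineCpu * tolerance;

        if (!callsOk || !cpuOk) {
            System.err.println("Functional tests may be green, but " + tx
                    + " got more expensive on the back end -- failing the build.");
            System.exit(1); // non-zero exit fails the CI stage
        }
        System.out.println("Back-end cost of " + tx + " is within the baseline.");
    }
}
```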
So we're basically telling the same story.
That's great.
And now, follow-up.
So that means are you seeing companies having their own mainframe that they use in the pre-prod environment,
or are they mocking away these layers by just obviously letting them…
Both. Both.
Okay.
Yeah. So again, it's just like any other tier. You know, anybody who's writing code that's making a call to another tier should at some point have mocked it out, but then at some point, whether it's a sandbox or a test or a pre-prod or whatever it is, you still need to have a full-blown system.
When you speak of a mainframe, remember, and here's another key term that we actually forgot
earlier, think about partitions. And they're called LPARs for logical partitions. So a lot
of sites will have a mainframe, but that singular mainframe can run two, three, four, five, six logical partitions, which are completely isolated from each other with the exception of the physical hardware.
And what they'll do is they'll set up production integration and their DevOps on those test and sandbox systems.
Pricing is the same from an IBM perspective?
Yes. Yeah, they're all cycles.
Yeah. Okay. Cool. Wow.
Brian.
Yeah.
Any other follow-up questions? Any last thoughts, final thoughts?
No more questions, but yeah, definitely some thoughts.
What I found fascinating is for every way that a mainframe is a unique and completely different beast than distributed,
there's another way in which it's exactly the same.
When we're talking about it's just another endpoint, you make your API interface, it's going to make calls back to the database.
And in the testing cycles, as you guys were just talking about at the end there, knowing how many requests are going to it, the monitoring of it and baking it into the CICD lifecycle that all still applies,
just the same as everything else.
But once you get onto the mainframe, that's when everything changes,
and that's where that mainframe expertise comes in.
But if you're just a developer on the outside of the mainframe,
it's basically just another endpoint.
You just need to have an understanding of what impacts the mainframe, you know, from your distributed side.
You know, you need to know that you're interacting with the mainframe because you want to be aware of how many calls you're making to it and how many MIPS you're costing it, right?
So there is a little bit of a difference that you need to know about.
But in many ways, it's more the same.
And it's funny, Brian, because from the mainframe guys' perspective, that's the same way they think of the distributed world, except they think they were first. But, you know, they're like, everything the distributed world has done in the last 15, 20, 25 years basically has come from the mainframe world, it's just been done in the distributed world.
Yeah. I just think it's important for developers out there to be aware of that, right? Because although it is just another endpoint, there are some other slightly different considerations to
keep in mind, especially in terms of that cost of computing, right? Because that's the biggest thing.
That's the single biggest thing, right? I always say, you know, it's like a guy making a call to
a Java backend as opposed to a .NET backend. You know, they're completely different in a lot of respects, but similar in some. But like you just said there at the end, the biggest difference on the mainframe is MIPS.
Yeah. Cool. All right. Hey, Mike, thank you so much for sharing your experience. I mean, at the beginning, people don't know, but when we started recording, you weren't really sure if this was going to be the live recording that we do.
And you were kind of a little scared.
But I think this is an amazing session that we did.
I learned a lot.
Hopefully, our listeners learned a lot as well.
And I love the fact that we are all trying to solve the same problems, making software more efficient in the end.
And with efficiency, eventually also comes performance and happy users.
That's what I was just going to say too, Andy, is the ultimate goal is to keep your users happy.
Yeah.
Perfect.
And you mentioned before we started recording you had a talk at CMG last year.
Any other appearances that you may have?
No, I don't have anything currently lined up.
Most of mine have been with customers directly.
I haven't gotten fully out into the conference cycle just yet.
Well, then let this be a shout-out to the conference people out there.
Mike Horwitz, he has a lot.
I'm available. Yeah.
Yeah. And on our pages, we'll also put a link up to the YouTube video that you all did, to get that out there.
Excellent. And are you a Twitter person that you would like to
broadcast your handle to, so people can follow you and all your wisdom?
No, actually, Andy and I joke about this.
As an old mainframe guy, I don't have any of that social media stuff.
I rely on Andy.
He'll get a hold of me.
Excellent.
Well, for our regular listeners or any—
I am on LinkedIn.
Okay, excellent.
We'll put a thing up on there then for your LinkedIn. I'll find you on there.
But also, if anyone wants to follow us, it's at Pure underscore DT.
You can contact us at PurePerformance at Dynatrace.com.
Any show ideas, feedback, anything else, please contact us.
You can follow me on Twitter at Emperor Wilson.
And Andy, you are?
It's GrabnerAndi. Andy with an I.
That's right.
Always very important. And I think I saw somebody
address you as Andreas in an email
the other day and I chuckled internally.
And speaking of that
right, I am Horwitz with one O.
You just pronounced it
Horowitz. You added the O in the pronunciation.
No, I never do. It's always Horwitz. But it sounds like there's an O in there.
I guess maybe mentally, because it's
Right
And as I said, your brother
Is Adam Horowitz
For anybody who
And what was the other Horowitz?
Oh, my uncle is Vladimir, the classical pianist
Yes
All right.
My check.
That's my beastie boy.
I just gave it away.
Darn it.
All right.
Anyway, thank you, everybody.
Have a great rest of your day, depending on when you're listening to this.
And talk to you next time, everybody.
Goodbye.
Bye.
Take care.
Bye-bye.