PurePerformance - MCPs (Model Context Protocol) are not that magic, but they enable magic things with Dana Harrison
Episode Date: April 14, 2025

MCP (Model Context Protocol) is an open source standard for connecting AI assistants to the systems where data lives. But you probably already knew that if you have followed the recent hype around this topic after Anthropic made their announcement at the end of 2024. To learn how MCPs are not that magic, yet enable "magic" new use cases that speed up engineering efficiency, we have invited Dana Harrison, Staff Site Reliability Engineer at TELUS. Dana goes into the use cases he and his team have been testing out over the past months to increase developer efficiency. In our conversation we also talk about the difference between local and remote MCPs, the importance of keeping resilience in mind as MCPs connect to many different API backends, and how we can and should observe the interactions with MCPs.

Links we discussed:
Anthropic Blog: https://www.anthropic.com/news/model-context-protocol
Dana's LinkedIn: https://www.linkedin.com/in/danaharrisonsre/overlay/about-this-profile/
Transcript
It's time for Pure Performance!
Get your stopwatches ready!
It's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello everybody and welcome to another episode of Pure Performance.
My name is Brian Wilson and as always I have with me my wonderful co-host Andy Grabner.
And I was going to say something disparaging about him, but what I will say is, you know, this is not our official 10-year episode,
but to give listeners and our guests some clout:
by the time we're recording this episode,
we have been recording for at least 10 years now.
Wow. 10 years.
Very impressive. 10 years.
10 years.
Yes. And I believe, check this out, what I'm going to do here.
I believe there's been a lot of amazing topics and ideas we've discussed on these.
We've had a lot of great guests and we love our audience and all this has connected us
with all different kinds of ideas and people throughout time.
And it's given a lot of context to our lives and the technology that we all love and share.
Now that was a really poor attempt.
I was busting your chops, Andy, that after 10 years you still don't know what you're
doing with audio, and after 10
years I still can't do a segue.
So I'm going to hand it over to you but that was my attempt everybody.
That was my piss poor attempt.
Andy do your magic.
It's almost as poor as my attempt to come up with good audio quality and I'm extremely
sorry that today I won't have the audio quality that people deserve.
But I think what people deserve is a good conversation.
It's a good conversation based on a wide variety of data inputs that we now just somehow need
to gather and through a standard make it easier to consume.
Maybe that's a segue, also not really the best, but
today we're talking... maybe today is just neither of our days. So the only hope that we
actually have is that our guest today, who is actually a repeat guest on our show, will tell
us more about what is this whole thing about MCPs, the Model Context Protocol.
And Dana, before I let you speak, you brought this topic to me at Perform.
This was the first week of February this year, and I thought, MCP? Never heard of it.
A day later, I opened up my LinkedIn and it felt like it was everywhere. And either this is just really proof that somebody's listening in and then they are
prioritizing certain topics in my feed, or you were really just at the pulse of time
and then it really took off.
Maybe you, whatever you did that day, you just sparked something.
But, Dana, I thank you so much for being back on the show.
We want to learn from you today about MCPs, what it is,
and also what does it have to do with engineering efficiency
and to get it kicked off, maybe you introduce yourself now.
Absolutely. Thank you for that introduction, and before I even get into that, here's to hoping that in your eleventh year of recording you finally get good segues
and good audio going. And congrats on the ten-year milestone. That's
phenomenal.
So yeah, thank you for having me back. I'm Dana Harrison. I am a staff site
reliability engineer at TELUS, which is one of Canada's largest
telecommunications companies. We do home solutions, internet, home phone,
mobile phone, TV, all of that fun jazz.
And we have,
led by some incredible, incredible people,
been enormously stepping into the AI space. There was a big press release we did a
couple of weeks ago, I think at this point, about how TELUS is actually going
to be standing up Canada's first sovereign AI data center in a partnership with NVIDIA, which is amazing, amazing news. So we
are definitely, I think, way further into this space than a lot of other people,
especially in this part of our industry, but I think just generally and yeah,
Andy, as you said, MCPs,
I first learned about these at the end of January.
I think Anthropic released them in November,
something like that, and we've just been going absolutely gangbusters with them so far.
And I just came back from KubeCon London last week, and there was obviously a lot of talk around AI, around LLMs, new protocols to better and more efficiently
route the traffic to LLM workloads,
because LLM workloads are different from our regular
request response-based workloads that we know
from our classical applications.
I also think at the time of the recording,
Google Next is happening right now in Vegas.
There's a lot of talk around agentic AIs and building agents.
They just introduced an agent to agent protocol.
So there's a lot of things going on, but really for those people, maybe to get
started, that have not heard about MCPs yet and this whole topic of making
AIs more easily available: Dana, what have you learned?
Even though for you it's also a rather, let's say, young journey. Can you quickly explain what MCPs
are all about? I can do my absolute best. So I really like the way Anthropic themselves have
described MCPs, which is as USB-C for AI: you know, the reversible plug that connects to everything.
That's essentially it. There's nothing really that intelligent about an MCP. An MCP is just
the interface for an LLM to be able to interact with external resources.
I've also heard it described as something like JSON over HTTP, and that's really it in its most basic form.
You are enabling an interface for an LLM
to interact with a tool by parsing out the data
that it's getting from it.
So you can use Cline, for example —
this is in the Anthropic ecosystem, so you can use Claude Desktop,
or you can use Cline with
any of the Claude models — but it's starting to become a very, very widely adopted standard, which is fantastic. And I really love to see it. But yeah, that's really it. There's not too much
magic to it. It's just a way for an LLM to interact with external systems and data and then start to interpret that.
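To make "JSON over HTTP" concrete, here is a minimal sketch of an MCP server in TypeScript using the official @modelcontextprotocol/sdk package; the server name and the echo tool are purely illustrative, not something from Dana's setup.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Declare the server: just a name and a version. No intelligence lives here.
const server = new McpServer({ name: "hello-mcp", version: "0.1.0" });

// Expose one tool to the LLM. The description is what the model reads when
// deciding whether this tool fits the user's question.
server.tool(
  "echo",
  "Echoes back whatever text the model sends. Purely illustrative.",
  { text: z.string() },
  async ({ text }) => ({
    content: [{ type: "text", text: `You said: ${text}` }],
  })
);

// Local MCPs typically speak JSON-RPC over stdio; the client spawns this file.
const transport = new StdioServerTransport();
await server.connect(transport);
```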
And the concept is that,
I think, with some of the stuff that I've read up on, right —
there's one example I just saw in a video,
and folks, there are so many videos out there
that explain MCPs.
One was, hey, if I wanna ask a question,
say, what is the chatter in our organization right
now, whether on Slack or in emails, about a particular topic? And then the idea with
MCPs are that you can then build MCP servers that can actually then let's say reach out
to Slack, reach out to email, and then pull back the data for a particular topic and then
have the LLM on top or in front
of it and say, hey, how can I now, I guess, make this more easily consumable for humans? Yeah. So, in that
example, that's a great example. I love it. You could build an MCP server or two separate MCP
servers if you wanted. You could build one for Slack. You could build one for Google Chat or
whatever you're using and say, in the MCP,
that is where you're building out the context
of how do I interact with this tool?
So, like, what gRPC or REST calls am I making?
What database queries?
How am I actually pulling the data out of this?
And then you expose those tools to the LLM.
So you can say, you know, in Cline,
you can say, hey, go get me the chatter
on this particular topic.
You could provide some directive around that
in your Cline rules, in your, you know,
wherever you're making those configurations
and say, if you're getting a question like this,
go use these tools.
Now, for the most part, it'll figure itself out,
but if you're like me and have about two dozen MCPs
installed, it does start to try and conflate things
a little bit, but you can provide it that directive
and say, if I'm getting a question to the effect of
what is the chatter on this topic, go use these tools.
And it will then build out the JSON
as per your MCP definition, go make those calls.
The MCP makes those calls.
The LLM is just essentially building out the JSON
to make the calls for you.
Go get the response back, interpret it,
and then put that in human readable format for me.
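For the curious, "the LLM is just building out the JSON" looks roughly like this on the wire — MCP speaks JSON-RPC 2.0 underneath. The search_chatter tool and its arguments are hypothetical, matching the Slack-chatter example above.

```typescript
// Hypothetical request the client sends when the model picks a tool.
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 7,
  method: "tools/call",
  params: {
    name: "search_chatter", // hypothetical tool name
    arguments: { topic: "MCP rollout", sinceHours: 24 },
  },
};

// Shape of the server's reply: content blocks that the LLM then
// summarizes into something human-readable.
const toolCallResponse = {
  jsonrpc: "2.0",
  id: 7,
  result: {
    content: [
      { type: "text", text: "12 Slack messages mention this topic ..." },
    ],
  },
};
```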
If you go take a look at the public MCP registries,
it is a space that has absolutely exploded
over the past couple of months.
We at TELUS have built out dozens of MCPs at this point,
especially with the power of, you know, outside of MCPs,
just using sort of regular Cline AI magic
or Roo Code AI magic at this point,
it is trivially easy to build your own MCP to interact with whatever service you want.
Yeah — Andy, Brian, the good news there is that you can use this to go onto Slack and say, who's trash-talking me?
Who's to say I haven't already done that? No — if any TELUS listeners are out there,
I absolutely have not done that and do not have the ability or will to do so.
Yeah. My other dumb joke is, is there going to be a micro MCP then?
If it's the USB, yeah —
if they ever introduce, like, micro USB-C, just like the tiniest one.
It's just a barrel jack. Yeah.
As you said, right? I mean, in the end, MCPs are not magic.
It's just a standard that was specified or donated by Anthropic and said,
hey, instead of us and other vendors building all these point-to-point integrations
with Slack, with MS Teams, with your back end database, with whatever
50,000 different systems, we are proposing an open standard so that we can have LLMs
or our AIs connect to any type of back end data source.
And I think it's not only data sources, it's also actions.
What I've seen in some explanations, and correct me if I'm wrong, is that an MCP server can not
only query data;
as a developer you can also say, I want somebody to
send out a message for me, or give me a summary and then send out a message, and
then the MCP server can also execute that action, if I'm correct about that. Yeah, that's
really all based on the constraints of how you've built out an MCP server.
And that's one of the great benefits of this model: it is only constrained by how
far you build out the MCP server.
So if you want the MCP server to be sort of read-only, you're interacting with these remote
resources in a very safe way.
Let's say you're working with Dynatrace and you want to build an MCP server
against your observability platform, but you don't want to expose
all 30,000 API methods to the LLM.
You could build out just a few and then say these are what you get access to
through the MCP and then it's something that is predictable and repeatable.
There's no reason why you couldn't then also say, you know, the LLM is doing the brunt
of the work in trying to figure out sort of how to format this query, etc., and then
sending it to the MCP.
So there's no reason you shouldn't be able to say, you know, I want to send out a message,
make it look like this, and then, using the tools exposed to it from the
MCP, it figures that out. And, you know, if you have a tool available in the MCP
that has the ability to post a message out to Slack, then it will do that for you.
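As a sketch of that read-only-versus-actions distinction: an MCP author might expose just one narrow query tool and one narrow action tool instead of a whole API surface. The observability endpoint below is hypothetical; the Slack call uses a standard incoming-webhook payload.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "scoped-tools", version: "0.1.0" });

// Read-only tool: only the handful of backend calls we deliberately expose.
// The endpoint URL below is hypothetical, not a real Dynatrace route.
server.tool(
  "get_service_errors",
  "Fetch recent error counts for one service (read-only).",
  { serviceId: z.string() },
  async ({ serviceId }) => {
    const res = await fetch(
      `https://observability.example.com/api/errors?service=${encodeURIComponent(serviceId)}`,
      { headers: { Authorization: `Api-Token ${process.env.API_TOKEN}` } }
    );
    return { content: [{ type: "text", text: await res.text() }] };
  }
);

// Action tool: the LLM can trigger side effects, but only the ones the MCP
// author chose to build in.
server.tool(
  "post_summary_to_slack",
  "Post a short summary message to one fixed Slack channel.",
  { summary: z.string().max(500) },
  async ({ summary }) => {
    await fetch(process.env.SLACK_WEBHOOK_URL ?? "", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ text: summary }),
    });
    return { content: [{ type: "text", text: "Posted to Slack." }] };
  }
);
```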
So you just said something that blew my mind here.
I'm talking about, like, you know, just in the context of the Dynatrace APIs,
But I guess this goes for anything, which is
what you were saying all along, the two of you with this example.
You know, I remember there was a time when I was doing a lot of work against the Dynatrace API
years ago, and I'd have to go through it, read through the documentation, figure out all the fields, and all of that. Now, this is just straight-up AI kind of stuff.
But this protocol obviously makes it easier, where I can have the protocol
connect my LLM, it can learn all this stuff for it.
And then, instead of me trying to figure out the API, I can just type into or speak into my thing and say,
fetch me this from the API. Or I might not even have to
tell it from which API — fetch me this from the tool. It's that easy.
Yeah. So that's why you need the MCP. Because if it's not standard, that's going to be a real
pain in the butt. It becomes a huge pain. Even thinking, like, you know, we've been on an AI
journey for a while here and thinking back to maybe eight, 10 months ago, I was trying to connect it
to, like, Google Drive, and it was a huge pain, and vectorizing things. And that is
just all gone now. The ability for us to interact with remote resources has
absolutely exploded since the dawn of MCPs.
Yeah, it's amazing how quickly this came up too. A lot of times when people start
saying, Hey, we should standardize on something, that's
years down the road.
And we're nowhere near that far into accepted or widely adopted AI, and we've already got this
going.
So that's really cool in its own way.
I think Anthropic really scratched an itch.
This is like they found that a lot of people
were trying to do something like this
and figured out the right way to standardize it.
Cause you're right, like it feels like the new selling point
for anything is like this has an MCP server
that you can install.
Like it just is, it's the new sort of like shovelware.
Like, let's just throw an MCP into it
and then it suddenly is groundbreaking.
But it really has been a huge, huge enabler for us.
Now Dana, in the last meeting we had,
so I had you on our automation guild,
and you and your colleague showed how, in Visual Studio Code, as an engineer,
you showed where in Visual Studio as an engineer,
I could say, describe my service
or give me some hints on how I can,
what are the hotspots of a particular service
I'm responsible for.
And the way I remember it,
your setup basically then said,
well, we have traces and metrics in Dynatrace.
So we're pulling that data in
from that service, metrics, traces, but you have your logs, I
think somewhere else.
So it also then goes off to that log system.
And then the LLM takes all this data and then says, hey, it seems that there's certain metrics
that are, there's a response time hotspot, there's a failure hotspot, there's some transactions
that look strange, and there's also some logs that look suspicious.
And then the LLM brought a nice summary of everything that I need to know as an engineer, but as an engineer, I don't care in the end where the data comes from.
I just phrase the question, show me the five hotspots in my service.
And then, through MCPs, the relevant data gets pulled in and then, I guess, gets perfectly
formatted, or whatever the LLM is then doing to portray it and provide it in a
way for the engineers so that they can actually make sense of it without them having to go
to every single tool and trying to find this information themselves.
That's a really... I'm glad you brought up this example because, funny enough, we just recorded a demo internally the other day.
This reminds me, I should send it to you, and I'll talk through it.
But yeah, in the automation guild we walked through a small subset of this, and until now on the podcast we've really been talking about how
you can query or interact with a single MCP or a single tool at once. But there is no reason, with the right directives, why you can't chain commands between MCPs and have
them continually sending output from one to another.
So yeah, in your example, Andy, you're absolutely right.
In one observability solution, we have our traces and metrics, but in another we have
our logs and there are lots of reasons for that.
But now our developers don't need to care. With the fact that we have trace context injection into our logs, our logs have trace
IDs, so you can correlate those two things anyway. It is trivially easy for us to say, you know,
given a particular service, go investigate how this service is doing, pull back all the traces I need
that are, you know, maybe anomalous, and pull back all the signals I need from those traces.
We just built out a demo internally that we're hoping to get our developers on board with where let's say it's 3 a.m., you've got a PagerDuty alert, nobody ever wants a PagerDuty alert,
shout out to that incredible team, but nobody wants a PagerDuty alert at 3 a.m.
You get one, you take that PagerDuty alert number and you pop into your Visual Studio code.
You've got your repo open, you've got Cline open, you've got all your LLMs set up,
and you say, tell me about this problem, give it the problem number. It
pulls it in, it pulls in all the alert details that led to that problem or
that alert in PagerDuty. So let's say you're going Dynatrace to PagerDuty, it
then goes into Dynatrace, pulls the affected entities from that problem card
in Dynatrace, it pulls root cause, it starts pulling back all of the signals on that. And at this point, you could
have interacted with, you know, three different MCPs all at once,
automatically. You've gone to the PagerDuty MCP, and Dynatrace, and
your logs one, and then, based on the output of all that: recommend fixes to my
code. Now you're outside of MCPs — you're just into regular Cline. And I love that Cline is just sort of, oh, it's just AI now. It's not even MCPs.
And from that, you know, make recommendations to my code. Provide further directive to say: once you've made
fixes to my code and submitted your ticket, go submit a GitHub PR. And
now, from PagerDuty to pull request, you've done something in ten minutes.
The ability to chain in between these tools and remove developer context
switching and really enable them to see the data where they are, in their IDE, without having to have, you know, that huge understanding
of like, Oh, well, what is my service named in Dynatrace and where are my logs and I have
to go search for this. It just does it all for you now.
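The "provide some directive" step Dana mentions could live in a rules file the assistant reads (Cline, for example, supports project rules). A hypothetical directive in that spirit, with made-up MCP server names:

```
When I give you a PagerDuty incident number:
1. Use the pagerduty MCP to fetch the alert and incident details.
2. Use the dynatrace MCP to pull the affected entities and root cause
   for the linked problem card.
3. Use the logs MCP to fetch correlated logs via the trace IDs.
4. Summarize the findings, propose a code fix in this repository, and
   open a GitHub pull request that references the incident number.
```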
Yeah, I think that's an awesome use case. Because also the way I understand it, that
means, in your case, Cline also keeps the conversational state, right?
So I start, I guess as an engineer, I start, hey, tell me everything I need to know about this particular problem.
And then it reaches out to various tools, maybe also to your internal communication channels,
because maybe somebody has already had a conversation with external people on the ticketing system.
So bring me everything and then recommend a potential fix for this particular problem
and then show me my source code where I can fix it, even do it.
So that means you would, it's really like you are the one person that is then using all of these assistants that you have
to really get your work done in a much more efficient way and those assistants are driven
by data that comes from different tools. But now with MCPs, coming back to this, these tools are now
basically proxied through a common layer and a common standardized interface.
And that means, for everybody that supports MCPs — both on the MCP
server side, when you build an integration with an external tool, and
also for the ones that are then consuming this data through MCPs — this will be
a lot of time saved in building all these point-to-point integrations.
This also
almost reminds me, and I know I haven't used that word in a while, Brian, but in Keptn,
we actually... Yeah, finally, Keptn. He said the thing.
I said the thing. So with Keptn, we standardized the way we bring observability metrics back into Kubernetes to make automated decisions on deployments.
And we basically standardized the metrics provider
so that any type of tool that you have, whether it's Dynatrace,
Datadog, New Relic, whatever you have,
you can bring this data into Kubernetes and then use it
for automated validations or even for your auto-scaling.
So we standardized so that you only need to build
one integration and not many point to point integrations.
So that was kind of the idea.
Yeah.
Cool.
So MCPs: Model Context Protocol.
Exactly.
I know you said Anthropic is primarily how you're using it?
Yeah, that's primarily who we're using it through, but I think — it's very, very new in terms of other people starting to onboard onto it.
I don't even have the latest information on who all has started accepting MCPs as a
standard,
but while Anthropic introduced it, it has really, really taken off across
multiple, multiple like model providers and things like that,
which is great. It's that plumbing that we need to make all of our
lives easier. I mentioned this: we actually, legitimately, recorded a demo
the other day where we introduced purposeful bugs into our own demo
application for the purpose of the demo, and we were going slowly so
that we could show things for the demo. But in a seventeen-minute video, we got a problem notification and had a fix deployed to prod in seventeen
minutes. And that was moving slowly, and with minimal interaction from us —
like, literally just, go to these tools, you know, provide some directive up front:
Here's all the tools I want you to hit and you only need to set that up once
and then off you go and it just deployed it itself.
It's just a wildly, wildly powerful tool. And I feel like
we're using the old version of MCPs now — you know, as old as something from
November can be.
Things move quickly in this space. We all know that. I feel like I take
a weekend and then I come back on Monday and I'm like, what new thing has been
introduced that has completely invalidated the last fifteen years of my
career?
Now, with remote MCPs, there is even more possibility to interact with these
systems.
Have either of you
read much into remote MCPs,
or what's the work that's been happening there?
No, we're both shaking our heads
for the people that cannot see us.
Yeah, I was about to say,
if you can really shake it, a lot more animated,
then we will be able to hear it, as will all of the listeners.
Remote MCPs are not something I've played with too much at this point,
but sort of
the great thing about MCPs is, especially at TELUS, we have an internal
package registry of MCPs. So it's very easy for somebody to say, all
right, do an npx install of this MCP. You know, everything comes with a README, because we're using Cline to develop
all of our READMEs.
And from there, you know, plug in your API keys for whatever resource you're interacting
with. So let's say I'm interacting
with Dynatrace: give it an API key and you're off to the races.
But that isn't necessarily, for non-developers, the friendliest way of interacting with
these systems. So, for example, I recently
built an MCP — we have three different change management systems at TELUS, it is
a long story. I built an MCP that says, go out to each of
these change management systems, pull all of the changes in whatever time frame I specify
in the LLM, grab those back and do an analysis on it. It's not super complex, but it requires
API credentials that were difficult for me to acquire
through, like, the usual methods and stuff — and rightfully so; they don't want
everybody, like, totally hammering our change management solution and
overloading it. I get it. There are great reasons for those to be
tightly controlled,
but
what that then means is anybody who wants to install that MCP then has to
go get their own API credentials, because you don't want to, like, bundle that necessarily, or you
don't want to put that in the package to be like, here, everybody use the same
API credentials. There are security concerns about things like that. We
don't do that here.
But it does make it more difficult for people to get in on the ground level
if they don't have sort of that base level of knowledge necessarily. And
again, you can add things to a README. Like, with our Dynatrace MCP, our PagerDuty one — those are easy. It's easy to make an API key
and go, and we don't really care about sort of the usage and things like that:
go for it, easy to scope, we can audit that. Legacy solutions are a little harder.
Where remote MCPs come in is, this is something you can build and deploy,
and it can just be coded with all of the, you know, API keys to remote systems and stuff. And to all listeners:
I will apologize now if you hear bumping or biting sounds — my cat just woke up and she will try and go for this microphone.
But with a remote MCP, you can build this, you can deploy it somewhere,
you can point Cline to it
just like everything else — or Claude Desktop, or whatever
your preferred MCP interaction tool is;
Cursor, I think, will do it too.
And then there's authentication you can build
on top of remote MCPs so that you're not just
completely hammering them, but then the centralization
of how to interact with that tool is all done
through the remote MCP.
It makes things a little bit more user-friendly, it makes things easier.
You know, if you want to expose the remote MCP to a web page, for instance,
then you're starting to interact with these MCPs through web pages
and having it do that interpretation. It's a very, very cool way of interacting with these things,
which then also introduces a whole bunch of, like,
wild magic around server-sent events, which I've just been starting to read into — SSE,
a streaming protocol for servers to be able to send events back to the client. In this
case, it's one-way, so that they can continue with request-response. It is a whole new world.
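A single-session sketch of that remote, SSE-based setup, using the TypeScript SDK's SSE transport behind Express; real deployments would track multiple sessions and put auth in front, and the tool definitions are omitted here.

```typescript
import express from "express";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { SSEServerTransport } from "@modelcontextprotocol/sdk/server/sse.js";

const server = new McpServer({ name: "remote-mcp", version: "0.1.0" });
// ...server.tool(...) definitions, exactly as in a local MCP...

const app = express();
let transport: SSEServerTransport | undefined;

// The client opens a long-lived SSE stream here; the server pushes its
// responses back over it (the one-way channel mentioned above).
app.get("/sse", async (_req, res) => {
  transport = new SSEServerTransport("/messages", res);
  await server.connect(transport);
});

// The client's requests arrive as plain HTTP POSTs on a second endpoint.
app.post("/messages", async (req, res) => {
  if (!transport) {
    res.status(400).end("no active session");
    return;
  }
  await transport.handlePostMessage(req, res);
});

app.listen(3000);
```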
For me, this sounds now,
there's a couple of things that I wonder,
at enterprise scale,
if you have these MCP servers
and they're talking to multiple critical systems
in the backend, and if I am Andy in one team
and Dana, you are in a different team
and then Brian is in the third team
and we're all going against the same MCP server,
how would you handle the privilege system?
Like maybe I'm not allowed to see certain things.
How would this, if the MCP server is set up
with a certain token to a backend system,
how do you make sure that I only get to see the data
that I'm supposed to see?
So that means, right... I mean, that's an interesting —
That's a great question.
Again, we haven't really played too much with,
you're talking about remote MCPs in this case,
I suppose.
Well, not only remote, but if you're also
in your organization, your team,
well, I guess if you do this, if MCPs by default
are more like personalized setups,
then you are setting it up for you or your team with your team credentials.
That makes it easier.
You need to get those tokens for your team.
But if you start sharing these things, how do you propagate your credentials, your identity?
Yeah.
So we'll use sort of both examples here.
So if you're using a local MCP, this could be something that is a tool that is shared, or a tool set that is shared, within your organization.
But ultimately, the ability for that MCP to interact with a remote system is scoped to, one, how the MCP was built out, and,
two, how your API token on your local machine is scoped.
So if you only have access to view certain things,
the MCP is only going to have access to view those things.
That is, I would say a little bit easier to control.
Once you get into deploying remote MCPs — remote MCPs, as far as I know,
do support auth, so you can scope things further there and say, you know,
if it's this user, then these are the things they get scoped to.
You could have a list of API keys that are
stored server-side, and each of them has a different set of scopes that an OAuth user would get access
to. That would just be one example of how to control things like that.
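A minimal sketch of that "server-side keys with per-caller scopes" idea, as Express middleware in front of a remote MCP; the token-to-scope table and environment variables are hypothetical.

```typescript
import express from "express";

// Hypothetical lookup: each caller's bearer token maps to a server-side
// backend credential plus the permissions that caller is allowed.
const scopesByToken: Record<string, { backendKey: string; canWrite: boolean }> = {
  "team-a-token": { backendKey: process.env.TEAM_A_KEY ?? "", canWrite: false },
  "team-b-token": { backendKey: process.env.TEAM_B_KEY ?? "", canWrite: true },
};

const app = express();

app.use((req, res, next) => {
  const token = req.headers.authorization?.replace("Bearer ", "");
  const scope = token ? scopesByToken[token] : undefined;
  if (!scope) {
    res.status(401).end("unknown caller");
    return;
  }
  // Downstream tool handlers use the caller's own backend credential, so
  // each user only ever sees what their scope allows.
  res.locals.scope = scope;
  next();
});
```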
But in general — because you also mentioned earlier performance and stability — if we are opening up all of these different tools now
to natural language interaction,
chances are that many more people will now use data and need data from systems that they normally never logged into.
So these MCP servers also need to be engineered in a way that they're not killing the backend API,
that they're building in some caching, that they need to be resilient, I guess.
I'm not sure, how do you run an MCP server?
Is this just a container that you deploy anywhere
or how does it work?
Not even, it's essentially just a file.
We're building ours out in TypeScript.
If you're building them locally,
so if 100 developers have an MCP installed,
it's literally just like a TypeScript build file.
And you're saying this is the way
in which you interact with this tool.
It's not even so complex as something you have to run
on a given port.
All you're saying is this MCP server
is this file essentially.
Like that's really as easy as it is.
And then who runs this file and who runs,
like where does the logic then run?
Where does the...
It's all run locally.
So with traditional MCPs, it's all run locally.
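Concretely, "just a file" means the MCP client spawns that file as a subprocess. Claude Desktop's claude_desktop_config.json uses the shape below, and Cline's MCP settings file is similar; the paths and token are illustrative.

```json
{
  "mcpServers": {
    "hello-mcp": {
      "command": "node",
      "args": ["/path/to/hello-mcp/build/index.js"],
      "env": { "API_TOKEN": "your-token-here" }
    }
  }
}
```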
But you're right, there are considerations to the builders and maintainers of a particular MCP
to then make sure that depending on how that's getting used
or will be getting used by the people in your organization,
that it is being used in a responsible way.
So if you're building out an MCP that's hitting a legacy system, but you've got a hundred
users who are now connecting to this legacy system, I mean, theoretically, you could build
out the MCP to then target a remote cache and you could probably do a whole bunch of
magic like that. But if, let's
say you're running purely locally, then yeah, you'd want to make sure you're
building out that API logic in a safe way with things like circuit breakers in
place. So you're not saying like, hey, this API is down, go slam it a hundred
more times this second so that I get a prioritized response when it finally
comes back online. You do need to consider those things. It's a very, very interesting
way of now, as you said, these users may not normally be logging into these systems because
maybe they find them too challenging. Maybe they haven't necessarily needed the data in a way that
has been easily presented to them before, and this makes it now much easier for them to access that.
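A tiny circuit-breaker sketch of the kind Dana alludes to, so an MCP tool stops slamming a backend that is already down; the thresholds are arbitrary.

```typescript
// Minimal circuit breaker around a backend call. After maxFailures
// consecutive errors, callers fail fast for cooldownMs instead of
// hammering the struggling API.
class CircuitBreaker {
  private failures = 0;
  private openedAt = 0;

  constructor(private maxFailures = 5, private cooldownMs = 30_000) {}

  async call<T>(fn: () => Promise<T>): Promise<T> {
    if (this.failures >= this.maxFailures) {
      if (Date.now() - this.openedAt < this.cooldownMs) {
        throw new Error("circuit open: backend is cooling down");
      }
      this.failures = 0; // half-open: allow one probe request through
    }
    try {
      const result = await fn();
      this.failures = 0; // success closes the circuit again
      return result;
    } catch (err) {
      if (++this.failures >= this.maxFailures) this.openedAt = Date.now();
      throw err;
    }
  }
}

const breaker = new CircuitBreaker();
// Inside a tool handler, with a hypothetical legacy endpoint:
// const data = await breaker.call(() => fetch(legacyUrl).then(r => r.json()));
```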
In our Dynatrace MCP internally at TELUS, we actually
built it out so you can validate and run DQL queries. For those who are, you know,
using DQL, you know that if you're not putting proper constraints in place, that
can get expensive if you're running, you know, enormous queries repeatedly,
etc. So we built logic into our queries to say, you know, validate this, provide the user
with an answer on, like, this is how much this is going to cost to run, heads up,
make sure you're building out DQL in an efficient way, you know, limiting things.
In our example we built out the other day, we're only pulling back, like, the
top fifteen or twenty traces. Like, we only need to examine, you know, just a
little subset of data. So we're trying to build those constraints —
or those, not constraints, but controls and efficiencies — in,
at the MCP level, to say,
if you want to interact with this remote system in a sustainable way,
here's how you can do it.
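In the spirit of that DQL guardrail (reusing the server object and zod import from the earlier sketch): validate first, surface the estimated cost, and only then execute. estimateDqlCost is a stand-in, not a real Dynatrace API.

```typescript
import { z } from "zod";

// Hypothetical cost estimator: in reality this might call a validation
// endpoint or apply local heuristics, such as rejecting queries with no
// time bound or no "| limit" clause. Not a real Dynatrace SDK call.
async function estimateDqlCost(dql: string): Promise<number> {
  return dql.includes("| limit") ? 1 : 100;
}

server.tool(
  "run_dql",
  "Validate and run a DQL query; warns the user before expensive scans.",
  { dql: z.string(), maxCost: z.number().default(10) },
  async ({ dql, maxCost }) => {
    const cost = await estimateDqlCost(dql);
    if (cost > maxCost) {
      return {
        content: [{
          type: "text",
          text:
            `Heads up: estimated cost ${cost} exceeds the budget of ${maxCost}. ` +
            `Add a time range or a "| limit 20" before running this.`,
        }],
      };
    }
    // ...execute the query against the backend here and return the rows...
    return { content: [{ type: "text", text: "Query ran within budget." }] };
  }
);
```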
Go on, Andy, finish your thought there.
It just reminds me, last thought on this,
if you remember back in the day — and I know
they're still around, the mainframes — they were only hit when you went to a bank and
you went to the teller, right?
They were opening up the terminals and then they were the ones that were hitting the mainframe
and then the websites came, online banking, and then mobile banking came. I remember stories from when the first mobile banking apps came out:
obviously, every time I opened up my mobile banking app, it was making a call to the backend mainframe.
And that became hugely, hugely expensive.
And nobody thought in the beginning that instead of 10 people working in the bank,
now you have 10,000 end users potentially
hitting the mainframe and not just once a week when they go to the bank, but once a
day because now everybody wants to see their bank account statements or like their current
amount on the account.
And I think we may end up in a similar situation where, if
we make it now very easy and convenient to access certain data, without the end user even knowing
where this data comes from and how expensive it is, you've got to be very
careful with what you build.
Absolutely you do. I was trying to remember — I used to work in an org that was big on the
mainframe, and we were in the insurance industry, so, still big; they're still big in finance and insurance. For the folks
who didn't know, mainframes are definitely still alive and kicking. And I could not
remember the name of, like, how they discern compute units within an
LPAR in an IBM z/OS mainframe. But yeah, you're absolutely right. We now have to
be more considerate consumers of this data, to which we now have, depending
on how the MCP is built, essentially unfettered access.
Yeah, and Andy, I'll one-up you on that, because you led into another aspect of where I was going.
Dana, earlier in that last bit, you had mentioned the word cache at some point,
and that immediately made me think —
I don't even know if it was in the context of what you were saying —
I was like, do we need caching layers
for some of this data, right?
Because if you're going in
and 20 people are requesting the same data,
because the MCPs make it so easy,
do we want to really query that data set 20 times
within five minutes, or are we doing a caching layer?
And I know I'm saying kind of like query,
but with the way this stuff is being constructed,
are we now introducing N plus one problems?
Like, you know, with the LLM
that's going to be developing this, we suddenly start,
first we have this awesome thing
where AI is doing all this amazing stuff,
but then as it becomes used a lot more,
we go back to introducing potentially many of the same
performance patterns or anti-patterns
that we've seen throughout the years into this new model,
which is, I know, one of Andy's and my favorite topics,
because it always comes back: everything that's old is new again. And just even, you know, you were talking about the different
kinds of loads. So if 20 people have this local file running, obviously, on their own machines,
you don't have to worry about it, but is that going to overload the API? If you have a remote
MCP, how many people are using it at once? Is the machine that it's even running on going to be able to stand up?
Regardless of what kind of load it's going to put on the back end,
and are all these tool developers ready for their APIs to be hit that much?
What optimizations are they going to have to start making?
This thing is just suddenly snowballing in like the last 10 minutes of our conversation
and my head is exploding with this, you know?
Absolutely, it's a great, great point.
I'm glad you brought it up.
I know we had sort of talked about caching earlier.
I hope that if you have a solution in place where,
you know, this server could potentially be,
this remote resource could potentially be getting hit
20 times in a row with the same query,
that that server has some caching or controls built in
to say like, hey, I got this more than a couple of times,
I'm going to go put this into an in-memory cache,
like a Firestore or something.
So I would hope that that solution is more server side
than the client having to worry about it,
but you made a great point.
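That server-side cache could be as small as this: a TTL map so that twenty identical questions in five minutes cost one backend call. A sketch, not tied to any particular product.

```typescript
// Tiny server-side TTL cache: if many users ask the same question within
// the window, only the first request actually hits the backend.
const cache = new Map<string, { value: string; expiresAt: number }>();

async function cachedFetch(
  key: string,
  ttlMs: number,
  load: () => Promise<string>
): Promise<string> {
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value; // fresh hit
  const value = await load(); // miss or expired: go to the backend once
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}

// Usage inside a tool handler, with a hypothetical backend URL:
// const body = await cachedFetch(`errors:${serviceId}`, 5 * 60_000,
//   () => fetch(url).then(r => r.text()));
```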
Especially as we're building out these MCPs using AI,
these AIs are only as good as the patterns
on which they were trained.
So if we have years and years and years of anti-patterns
on like, hey, I'm not going to reuse database connections.
I'm not going to use connection pooling.
I'm not going to use circuit breakers.
And that's what these things have been trained on.
As you're building these out, that is a consideration
you absolutely have to make when you are trying
to be a good
consumer of the data for your MCPs.
Yeah, absolutely.
And it's not like it's the MCP's fault that this is happening, but it's the MCP that's
allowing the ramp-up to all this to accelerate so fast.
And I think a lot of people are going to probably be caught off guard, because they're like,
oh, well, it's going to be a while — oh, wait, we're here.
You know, as you're seeing — and you've been saying this all along —
it's like every single day, almost, things are changing and getting more advanced.
So you know, the performance considerations can't be put off, right?
Yeah, no, it's very, very interesting — very interesting problems.
Absolutely. That's how I feel every morning, Brian, is like, wake up and it's just, oh,
some new thing has been introduced and like, now I have to go learn this. And it is an
exciting time to be a software developer. I have a colleague who was talking about this
like traditionally, and not that I would call myself a software dev, I'm an SRE but I have a dev background,
I have a colleague Ahmed who made this great point
about for years software developers
have been the ones disrupting other industries
and now suddenly here comes AI and these tools
to disrupt our industry.
It is a paradigm shift the likes of which I don't think
we've ever seen in this profession before. Yeah. And huge shout-out to Ahmed,
because I think that was a really, really well-made point. So it's an exciting time, but
there's a lot to keep up with.
Yeah. Yeah. It's almost, in a way, the robotization of factories, where now the factory workers become,
you know, robot and computer maintainers as opposed to the actual factory line workers.
But you know if you have one of those advanced factories you're still employing people but
with a different skill set than if they were actually on the assembly line.
Right.
I'd like to propose that I get a formal job change title.
I am now a prompt reliability engineer.
No longer an SRE, I'm now a PRE.
You heard it here first.
No, it's fascinating.
Do you know, because we're getting close to the end,
do you know if there's any built-in
observability into those MCP servers and how the
client is interacting with them? Because I want to know how expensive every call is — and not only expensive,
but, you know, what's actually happening when an engineer
executes a certain prompt, like what you said earlier — explain or investigate or describe a certain code base — and then it goes
off to multiple different tools and gets all the data.
Do we know, is there any observability?
Not that I've seen, but that would be relatively trivial to build in.
I mean, LLM observability, obviously huge industry.
I know Dynatrace is getting way into that.
When you're working with things locally, like you just have your MCP definition
in a TypeScript file, you could absolutely say,
as you're interacting with these resources,
fire off an event saying,
this is how I interacted with this.
You could also do it server-side.
So given a set of particular keys that you know
are the ones that are interacting with MCPs,
go push your events.
Go show how these things are responding.
So I think there are lots of points
at which you could build in that observability,
but it's not really a consideration that I've seen made yet.
Now, it's very, very possible that,
as with many things in the AI world right now,
I am just super out of the loop on that.
But I think that's a really cool point,
and I would love to see that taken a little bit further.
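One way to take it further, as a sketch: wrap every tool handler so each invocation emits a structured event with tool name, duration, and outcome. emitEvent is a placeholder, not an SDK feature.

```typescript
// Wrap a tool handler so each call fires a structured event, as Dana
// suggests, whether it succeeds or fails.
type ToolHandler<A> = (
  args: A
) => Promise<{ content: { type: "text"; text: string }[] }>;

function observed<A>(toolName: string, handler: ToolHandler<A>): ToolHandler<A> {
  return async (args) => {
    const start = Date.now();
    try {
      const result = await handler(args);
      emitEvent({ toolName, ok: true, durationMs: Date.now() - start });
      return result;
    } catch (err) {
      emitEvent({ toolName, ok: false, durationMs: Date.now() - start });
      throw err;
    }
  };
}

function emitEvent(e: { toolName: string; ok: boolean; durationMs: number }) {
  // e.g. POST to a log-ingest endpoint; console output for local debugging.
  console.log(JSON.stringify({ type: "mcp.tool.call", ...e }));
}
```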
Yeah.
Well, this is still a new topic, but it seems like a lot of
stuff has already happened just in the last couple of months, and I'm sure there's
more happening. Dana, it would be nice to follow your
experience in that matter over the coming months, and
especially with what you're building. Maybe we'll invite you back in a couple of
months and say, hey, what has happened since the time we spoke last?
And I'm sure maybe the next cool thing is coming up.
But who knows?
You know where to find me.
I would love to come back and chat more.
Yeah.
I think this is going to be a lot.
Just based on the speed of the things changing, I think this is going to be one of the more
heavy topics. You know, I think, again, it's like, oh, AI, AI, AI.
But you know, as I was talking about this, this isn't necessarily about AI.
This is about the runnings of AI, which I think is the more interesting side of it.
Right?
Because everybody can talk about, oh, AI is doing this, this and this.
Like, oh, yeah, but how does it run?
You know, and that to me is the more fascinating part., so if you're sick of ai, this isn't ai. This is how ai runs
You know, that is a
fantastic distinction. I've seen so many posts on LinkedIn talking about, oh, MCPs are AI. They're not.
They're just the plumbing for your AI to talk to things.
There's nothing wildly magical happening in the middle.
Yeah.
Yeah.
So all the infrastructure is mundane.
It's what runs on the infrastructure.
That's cool.
Exactly.
Like the first monorail.
Cool, yeah.
Brian, I'm sorry that I disappointed you
in the beginning again with bad audio quality.
I promise in the next 10 years of podcasting, I'll make an effort to always have a good microphone
at hand.
I'll do my best.
Now you're leaning down, talking close to the microphone, changing your audio quality.
That's okay because in that last bit of interaction there, I forgot to unmute.
I'm recording directly to Cubase and I forgot to unmute my mic, so it's going to have suddenly switched to Zoom audio quality for that last bit and then
back to this.
So I make the mistakes too.
See we're human.
We're not AI, Andy.
We cannot be perfect, right?
Nor can I, I guess.
But yeah.
Anyway, it's been fantastic.
Andy, yeah, I guess we need to wrap up here.
We're getting to the end there.
So why don't you do your typicals?
Yeah.
No.
As I said, Dana, I thank you so much.
It's always great to have people on the show that introduce us to something new.
The goal that I had for this call, and I told you this in the beginning, I want to know
more about MCPs than an hour ago.
And I do.
It's great.
It's great to hear these use cases.
I think that's important for people that listen to this,
that wonder what is the real use case for it.
You had a couple of great use cases that you also demoed,
and I'm looking forward to the demo videos that you sent me.
And I'm sure we will see you probably
at one or the other conference in the future,
talking about this or other topics.
And yeah, all the best and keep us posted because it's an exciting topic for everyone.
Will do.
Thanks again for having me, folks.
It was a pleasure chatting with both of you.
Yes.
It's also a pleasure hearing that smooth voice of yours.
Oh yeah.
It's funny too, because I got to say when I was first looking into this, I was like,
boy, this sounds boring. MCPs — and I saw the USB-C thing — like, all right.
But the MCPs themselves — yeah, there's some interesting things in what
they do, but it's what they enable.
And that's what my big takeaway from today is like, this is changing everything because
of just the enablement factor. It's almost like,
I guess, watching the dawn of HTTP or something like that, where there's suddenly a standard of
how everything can communicate and that's going to blow everything up. Maybe both literally and
figuratively we'll have to see what the machines do.
Find out in your 11th year of podcasting if the world literally explodes.
Thank you. Thank you. But yeah, thanks a lot.
Look forward to the future episodes about this.
And thank you so much for being a repeat guest.
Thank you.
The Repeat Club.
All right.
And thank you for all of our listeners.
We'll give you the official 10-year thank you next time.
But thanks for if anyone's been with us this long, thank you.
That's all I can say.
And see you all next episode. Bye bye.
Bye bye. Bye bye.