PurePerformance - 037 101 Series: OpenStack
Episode Date: June 5, 2017
What is OpenStack? Oh – it's not the same as OpenShift? So what is OpenStack? If these questions are on your mind and you want to learn more about why OpenStack is used by many large organizations to build their own private cloud offering, then listen to this 101 talk with Dirk Wallerstorfer (@wall_dirk). We learn about the different OpenStack core controller services (Cinder, Horizon, Keystone, Neutron, Nova …) as well as the core cloud services (Compute, Storage, Network, …) it provides to its users. Dirk also explains why and who is moving to OpenStack and what the challenges and use cases are when it comes to monitoring OpenStack environments – both for an OpenStack Operator as well as for the Application Owners that run their apps on OpenStack.
Transcript
It's time for Pure Performance!
Get your stopwatches ready, it's time for Pure Performance with Andy Grabner and Brian Wilson. Hello, hello, hello to Curse Friday, everybody.
It's another episode of Pure Performance with you, and I promise we will keep it clean.
We just got all of our sailor language out as we were preparing for this.
Andy, how are you doing today?
I'm pretty good, but I'm just laughing that you actually brought this on air.
And I guess people are just now wondering why we call it Curse Friday.
Yeah.
It's actually a much better Friday.
It's Cinco de Mayo.
That's true.
Which is a much better day to celebrate.
And we are actually getting ready here in our office in Waltham, Massachusetts, for the Cinco de Mayo party.
And we already saw the margarita machines.
So that's going to be awesome in the afternoon.
Well, you know what?
Two more things, right?
So a year ago today, we were talking about Cinco de Mayo
and how it's really about this little battle that happened in Mexico
and had nothing to do with drinking, right?
That's just kind of the American bastardized version of it.
But a year ago today, Andy,
and I know this is being played later on and it's not being played on May 5th on Cinco de Mayo.
But as we're recording this a year ago today, our first podcast aired.
So happy anniversary to us.
And for our anniversary, we were kicking off a great series that I'm really, really excited about.
A great 101 series.
So, Andy, without further ado,
do you want to introduce what our first 101 series is and who our wonderful guest is today?
Yeah. So as you said, we wanted to kick it off with a series on what a certain technology
is all about. We call them 101s, and we try to invite experts on the topics. Today, it's about what OpenStack is: a 101 on OpenStack.
And my colleague who has spent a lot of time on OpenStack, who is driving our innovation
on that front at Dynatrace, and who is also working closely with the OpenStack community, is Dirk.
Dirk, hi.
Hi, Andy.
I'm so proud to be part of the anniversary show.
I can't tell.
And I just realized that as we were starting.
Yeah.
So, Dirk, maybe a couple of words about yourself besides the name, some background and what you do at Dynatrace.
Sure.
So I work in the innovation lab at Dynatrace, and my role is called technology lead for OpenStack.
So obviously what I do is I do research on what's going on in the OpenStack communities.
And you know that they're on a six-month release cadence, so there is a lot of things going on, upstream development.
And I'm just following what the technology trends are and consult with our product management team to provide the best possible monitoring solution
for OpenStack that is out there. Cool. And so maybe to dig right into it, what is OpenStack?
Because it might be a term that people have heard, maybe they confuse it with OpenShift,
but OpenStack, what is OpenStack in very simple terms? So OpenStack is a cloud operating system, and it basically allows you to create your own private cloud in your data center.
So you basically take a few servers, install OpenStack on it, and you can launch virtual machines in your own data center as compared to like AWS or Google Cloud.
So it's like AWS at home.
Yeah, so that's interesting because a lot of companies,
obviously AWS especially is, I think, the leader right now.
But it's also Google, Microsoft, they have their cloud offerings
and many others have cloud offerings.
Yes, they do.
But if I don't want to go to the public cloud,
but I want to have the convenience of something like infrastructure as a service,
storage as a service, compute as a service, and all that,
then I can just take my physical machines that I have standing around
and install OpenStack on it,
and then I get the basic feature set of what a cloud is typically offering me.
Is this correct?
Exactly. That's exactly what it is.
So there are like these building blocks that, like, every cloud service is built of.
So there is like a compute service.
There's a networking service and there is a storage service.
So all of these building blocks are also making up like AWS and all these public cloud providers.
And it's just like the same for OpenStack and your private cloud.
So you find all of these building blocks in your private cloud as well.
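To make those building blocks a bit more concrete, here is a minimal sketch (not from the episode) of launching an instance with the openstacksdk Python library, touching the image, compute, and networking services; the cloud name, image, flavor, and network names are placeholder assumptions.

```python
# A minimal sketch, assuming a clouds.yaml entry named "mycloud" and
# placeholder image/flavor/network names. It touches the image, compute,
# and networking building blocks via the openstacksdk library.
import openstack

conn = openstack.connect(cloud="mycloud")

image = conn.image.find_image("ubuntu-16.04")     # image service (Glance)
flavor = conn.compute.find_flavor("m1.small")     # compute service (Nova)
network = conn.network.find_network("private")    # networking service (Neutron)

server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)     # wait until it becomes ACTIVE
print(server.name, server.status)
```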
I was seeing all that stuff in Azure as well,
all those different blocks that you have.
I was just messing around with that, too.
It's basically the same concept for all, let's say,
cloud technologies that are out there,
let's say on the infrastructure as a service side.
And as you were mentioning before, yes,
it happens a lot that it gets confused with OpenShift.
So the main difference is actually that OpenStack is like the infrastructure as a service,
and OpenShift is like the platform as a service.
So you can actually run OpenShift on top of OpenStack.
And OpenShift would then take care of the applications,
while OpenStack actually takes care of like the instances and the virtual machines that are running.
So I usually like to compare it like this:
OpenStack rhymes with rack, and you usually have the hardware in a rack.
So that's, yeah, an easy way to remember the difference.
So now who is OpenStack?
Is this an organization?
Is this a company?
Can you give a little detail?
Sure.
So OpenStack is basically
one of the largest open source projects that is out there right now with like 30,000 active
developers from over 170 countries. So it actually has come a long way since 2010 when it was founded
originally. And the governance of OpenStack itself, it being an open source project, is actually done by the OpenStack Foundation, which is like a group of individuals that get funding from the largest companies that are out there in the world.
So HP, Red Hat, Cisco, they're all heavily invested.
Intel, of course, because they're interested in selling CPUs.
They are all investing in this project.
And yeah, so that is basically who OpenStack is and who's behind OpenStack.
Yeah.
And of course.
Go on.
Yeah.
I was going to ask, wasn't NASA behind it originally?
Yeah.
As well.
Originally, it was founded by NASA and Rackspace in 2010.
And yeah, it's actually come a long way since then.
So it was like this:
I think the first OpenStack Summit, the one that took place in Austin, was like 30 or 90 people.
And we actually have the next OpenStack Summit next week here in Boston.
And they're expecting like 8,000 people.
And it's actually amazing.
So, okay, so I understand there are companies that are driving the project, and it's an
open source project. The main consumers, the main users, I would then assume, are really large
companies that really have their own data center but want to, um, modernize it, right? Instead of,
I don't know, providing different services homegrown, home-built, I guess they are
then using OpenStack, right? Large organizations.
Yeah, yeah.
And I think in a conversation with you earlier, you also talked about telcos.
Yeah. So, on the one hand, it's large companies that actually have hardware standing
around and just want to, like, squeeze it even more, so make more of the stuff that they already
have. Because what OpenStack is actually also about is you have no vendor lock-in. You can use components and hardware from any vendor, so
commodity hardware. So that is cool. And yeah, so large companies use it to basically
also save costs, because apparently, if your environment is large enough, you actually save costs in comparison to public cloud providers.
But, like, the tipping point is somewhere beneath 5,000 or 6,000 VMs or so.
So it has to be really large.
And with regards to the telco business, it's network functions virtualization, which is actually just applying, like, the
cloud paradigm to telco providers. Like, instead of scaling vertically with always
bigger and faster hardware, you just, um, scale horizontally by spinning up 10 additional
instances if you need more computing power. That is basically what they're doing.
So, Dirk, in terms of, if we go back several years ago, people were calling private cloud
just the idea of running VMware in their infrastructure, because it gave them more flexibility
and more machines, and it kind of felt like a cloud.
But nowadays, what we see, you know, what cloud is, is much different.
So how would you characterize the difference between OpenStack as a private cloud versus just using, you know, a bunch of virtual machines internally as a private cloud?
So OpenStack on the one hand, it's an open source project.
So you actually could save a lot of money instead of using VMware. On the other hand, as it is an open source project,
you can actually ask your developers to develop additional features
that you might need on your own,
and then you can actually consider committing it upstream
and just participating in the OpenStack community.
And what you actually also can do,
there is like this self-service concept of OpenStack. So you
can just create tenants for your business units and assign them a certain quota of resources, so
computing power, storage, and stuff like that, and they can actually spin up their virtual machines
and instances to their liking.
So they can just start instances as they go, right? As opposed to the old
way of having to go to ops and say, hey, we need another VM, and what size, and all that. They can
just...
Excellent. Great, great points. So, but to follow up here: what is VMware doing right now? Are
they also, uh, doing something with OpenStack? Do they see it as competition, or
are they participating in the whole open source project?
Well, they are actually doing one thing, which is to run OpenStack on VMware.
So that's VIO, VMware Integrated OpenStack.
Okay.
And I think they're still participating
in OpenStack summits and still,
but I'm not sure if they're contributing anything upstream.
Okay, I don't know that.
Okay, no worries. So, but if we look back at what we said so far: OpenStack
allows me to build my own private cloud, with the flexibility of having APIs to spin up virtual
machines, containers, storage, network, which I think has obviously been very heavily driven
by the public cloud vendors. So
developers know the convenience of the public cloud, but if you can't get it, then you can
build your private cloud. Now, obviously from a monitoring perspective,
we talk about performance and monitoring here, what are some of the things that OpenStack
provides itself in terms of monitoring, and what are some of
the use cases where we actually said, well, this is not enough, and therefore we built
something additionally on top of it?
Maybe let's start with what type of monitoring does OpenStack itself provide?
Okay.
So there is one specific telemetry project.
It previously was called Ceilometer, which is used to track resource consumption of your instances that are
running in your OpenStack cluster. And you could actually use Ceilometer, like, for charging your
business units: so you've used, like, two days of this instance, so you have to pay that and that
amount of money. And you can actually also use Ceilometer to, um, auto-scale your environment.
So there is, like, this concept... there is a project in OpenStack called Heat, which is, like,
similar to AWS CloudFormation. It actually supports the same template type, so it's really
pretty similar. And you could actually, like, create the feedback loop between Ceilometer, like the resource consumption data, and Heat, to spin up new instances if a certain threshold is reached.
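To illustrate the idea of that feedback loop (this is not Heat or Ceilometer code, just a toy sketch of the concept), here is a small Python loop; get_average_cpu_percent and launch_instance are hypothetical placeholders for the telemetry query and the compute API call.

```python
# Toy sketch of the threshold-based scale-out idea described above.
# This is NOT how Heat/Ceilometer are implemented; it only illustrates
# the feedback loop: poll a metric, compare to a threshold, scale out.
import random
import time

CPU_THRESHOLD = 80.0   # percent, an assumption for illustration
CHECK_INTERVAL = 60    # seconds between polls

def get_average_cpu_percent():
    """Hypothetical placeholder for a telemetry query (Ceilometer/Gnocchi in real deployments)."""
    return random.uniform(0.0, 100.0)

def launch_instance():
    """Hypothetical placeholder for a Nova 'create server' call."""
    print("Scaling out: launching one more instance")

def autoscale_loop():
    while True:
        cpu = get_average_cpu_percent()
        if cpu > CPU_THRESHOLD:
            launch_instance()
        time.sleep(CHECK_INTERVAL)
```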
Then there is another project that's called Monasca, which is actually doing collection of more metrics than just, like, resource utilization, and
I think they're also looking into integrating log monitoring as well. Um, what else
is out there? I think Vitrage is a really interesting project right now, because it
actually deals with root cause analysis within OpenStack events.
Yeah, and then there are, like, the usual suspects, so Nagios and, let's call them, old-school tools for resource utilization monitoring that are also, like, still pretty big in this
OpenStack environment.
And so all of these projects, I assume, expose APIs so we can actually get the data out, allowing vendors,
like Dynatrace, for instance, to pull data in.
Is that what we do?
Or is there some underlying OpenStack API that we can tap into
and see how many instances are running, what's the utilization?
Or does this only work in the combination of the projects
that you were referring to?
Actually, for the monitoring solution we have at Dynatrace,
you don't need these projects.
Because what we actually do is, so maybe a step back, there are like six core services in OpenStack.
And all of the services in OpenStack actually have an API where you can pull data from.
Okay.
So for example, there is the Nova project, which actually takes care of starting and running the virtual
machines, and you can, like, uh, gather data from the Nova API: how many instances are running,
and when was a new instance started. And that is actually one thing that we do: we gather data from the
compute API, from the storage API, from the networking API, to have, like, the basic
information. And in addition to that, we install the Dynatrace OneAgent on the control hosts
and on the compute nodes, so basically really on the bare metal machines, and we're gathering, yeah, resource
utilization data. And, like, on the control nodes, where the services are actually running, we also collect
performance data of these OpenStack services, to actually have a great overview of everything
that is going on in your OpenStack cluster.
And that's obviously useful for OpenStack operators.
So these are the people that actually run the quote-unquote private cloud.
They need to know how healthy OpenStack is.
So let's run through them. Nova is for compute.
Yeah.
Then storage.
Storage is Cinder.
Cinder.
Block storage.
Then Neutron is the networking service.
Yeah.
Anything else?
Swift is for object storage.
Yeah. So those are, like, the projects that are mainly used.
Okay. And so that means, for operators of OpenStack, it's important to also monitor these components themselves, because you want to make
sure that OpenStack itself is healthy, right? That when your developers try to
spin up a new virtual machine, it doesn't take forever, but it's actually responding fast. So these are good things.
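As a concrete illustration of pulling such data from the APIs, here is a hedged sketch using the standard Keystone v3 and Nova v2.1 REST endpoints; the controller URL and credentials are placeholders.

```python
# Sketch: authenticate against Keystone v3, then ask the Nova (compute) API
# how many instances are running and when they were created.
# Endpoint URLs and credentials below are placeholders.
import requests

KEYSTONE = "http://controller:5000/v3"
NOVA = "http://controller:8774/v2.1"

auth_body = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "admin",
                    "domain": {"name": "Default"},
                    "password": "secret",
                }
            },
        },
        "scope": {"project": {"name": "admin", "domain": {"name": "Default"}}},
    }
}

resp = requests.post(f"{KEYSTONE}/auth/tokens", json=auth_body)
resp.raise_for_status()
token = resp.headers["X-Subject-Token"]          # Keystone returns the token in this header

servers = requests.get(
    f"{NOVA}/servers/detail", headers={"X-Auth-Token": token}
).json()["servers"]

print(f"{len(servers)} instances")
for s in servers:
    print(s["name"], s["status"], "created", s["created"])
```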
Sorry, I was just going to ask, in terms of that combination, do you know
if customers ever see instances where the platforms that
are running within the OpenStack system are being impacted by performance issues
on the OpenStack base components?
Well, so as Andy was saying,
when errors happen
in the OpenStack control plane,
so the control plane is, like, all of these services
that are there,
then this usually impacts
the start of new instances the most.
But what actually can happen in an OpenStack cluster,
that is a pretty cool example actually,
is the noisy neighbor problem.
So imagine you have one compute node
where you run several virtual machines on top.
So you run several virtual machines
on one bare metal machine.
So you have limited computing power and processing power.
And imagine that one of these virtual machines
takes up all of the CPU time.
So all of the other instances that run on the same bare metal host
actually don't have that much CPU time anymore.
And all of the services that run on this affected host
will actually degrade in performance.
So if that happens, for example, in a public cloud environment,
you will just see like a service performance degradation,
but you would not even know why.
And now with the OpenStack insights we could actually provide,
you not only see the service performance degradation,
you actually have insights into the cloud platform,
and you can really pinpoint actually the root cause why there is a slowdown.
So you see that, yeah,
the CPU utilization is so high
because of another instance.
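One hedged way to spot the noisy-neighbor effect from inside an affected Linux guest (on KVM or Xen) is to watch CPU steal time, the share of time the hypervisor gave your virtual CPU away; a small sketch:

```python
# Sketch: measure CPU "steal" time from /proc/stat inside a Linux guest.
# High steal time is a classic symptom of a noisy neighbor on the same host.
import time

def read_cpu_times():
    with open("/proc/stat") as f:
        fields = f.readline().split()            # aggregate "cpu" line
    values = [int(v) for v in fields[1:]]
    steal = values[7] if len(values) > 7 else 0  # 8th value is steal time
    return steal, sum(values)

s1, t1 = read_cpu_times()
time.sleep(5)
s2, t2 = read_cpu_times()

steal_pct = 100.0 * (s2 - s1) / max(t2 - t1, 1)
print(f"CPU steal over the last 5s: {steal_pct:.1f}%")
```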
But that means it's over-provisioning
that happened here.
And I would have assumed
that something like an OpenShift takes care of that
and doesn't allow me to actually consume more resources
than are virtually available.
You now said OpenShift.
So that is a common mistake.
Perfect.
Thanks for correcting me.
So let me rephrase.
I would have assumed that OpenStack is smart enough to not allow me to put more virtual machines on a box that doesn't, you know,
I shouldn't over-provision because then I basically run into this issue.
Correct.
That is a wrong assumption.
Yeah, okay.
Because the Nova service actually has, so it's called over-commit ratio,
which actually tells you, so the default over-commit ratio for Nova Compute is actually 1 to 16, which in fact tells you for each physical CPU, you can deploy 16 virtual CPUs.
And yeah, but it's configurable, right?
So, for example, Red Hat OpenStack has set the default setting to 8.
So it really depends on your use case, right? Because the assumption actually is
that you do not use 100% of the CPU time in each instance all the time, so that it kind of, like,
flattens out in the end.
It's kind of like airplane booking.
As long as we don't have to drag these virtual machines off, uh, the physical machine
and then, you know, hit them hard.
It's called migration.
So it actually does happen.
Yeah.
So that's when actually you can initiate a migration of one VM, obviously, to another physical box where more resources are available.
And is this something that is built into OpenStack or is it something I need to initiate myself?
I'm not sure if this works automatically,
but of course you can trigger it manually.
But then again, you would need to know that there are some problems.
So that's actually why monitoring is really helpful there.
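If you do decide to move a VM off an overloaded host by hand, Nova exposes migration as a server action; a sketch, assuming an admin token (obtained as in the earlier Keystone sketch) and a placeholder server ID:

```python
# Sketch: ask Nova to (cold) migrate a server to another host chosen by the
# scheduler. Requires admin credentials; NOVA, TOKEN and SERVER_ID are
# placeholders (see the earlier Keystone authentication sketch).
import requests

NOVA = "http://controller:8774/v2.1"
TOKEN = "replace-with-admin-token"
SERVER_ID = "11111111-2222-3333-4444-555555555555"

resp = requests.post(
    f"{NOVA}/servers/{SERVER_ID}/action",
    headers={"X-Auth-Token": TOKEN},
    json={"migrate": None},   # cold migration; "os-migrateLive" is the live variant
)
resp.raise_for_status()
print("Migration requested:", resp.status_code)
```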
And I guess that's one of the use cases
that we, I assume, obviously support a lot
because we actually see
the underlying OpenStack infrastructure,
but also the compute nodes and what happens within the compute nodes.
Yes.
And so we can say, hey, we have a microservice that went rogue,
and therefore we need to move it away, either kill it or fix it
or move it away to some other area where it's not becoming a noisy neighbor.
Exactly.
I like that term.
Noisy neighbor is actually a pretty cool term.
Wow.
So we actually spent some time the last couple of days here in Boston at the Red Hat Summit.
I know next week is OpenStack Summit.
But this week already, I took you aside and tried to get some explanation on what we do and some of the use cases.
So I think you covered it a little bit, but you mentioned earlier: when the time increases when you're trying to, you know,
call an API to OpenStack and you're expecting to get a resource, then the health of the system can
be defined by how fast you get a response.
Yeah.
So this is one of the key metrics that you monitor, right?
Responsiveness of the system itself.
Is there anything else from a health monitoring perspective,
any key metrics that people should look at and be aware of?
Sure.
So with regards to the OpenStack services, we actually are recommending,
and it's like there's actually already literature out there that recommends that you look at the availability of your OpenStack services.
So are the services up and running?
You should definitely look at the performance of the services.
So you can just watch if the response time of these services is like increasing.
This is usually a point where something's wrong. Then, as everyone who knows OpenStack knows, all of these services are heavy loggers.
So you should definitely get some log analytics in place.
Because if you have, let's say, an eight-node OpenStack cluster with high availability in the control plane,
there are at least 100 log files that you need to browse through
if something goes wrong.
So if you had to do that by hand, it probably takes a long time.
And, yeah, of course, the resource consumption,
not only on the control nodes with the compute services,
but also on the compute nodes where the virtual machines
are running, is important: on the one hand to identify, like, the noisy neighbor problem we just
covered, but also, on the other hand, to get insights into, uh, the capacity planning of your
instances. So have you really over-provisioned a certain instance? Does it have too little memory or too much storage? And
can you, like, shave off a few gigabytes to save money in the end, actually to reuse it in another
instance? These are actual insights you would need to efficiently operate your OpenStack environment.
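A hedged sketch of that kind of availability and response-time check, simply timing each service's API root; the endpoint URLs are placeholders for a typical deployment, and a real setup would of course authenticate and look much deeper.

```python
# Sketch: a crude availability / response-time check of OpenStack service
# endpoints, in the spirit of the recommendations above. Endpoint URLs are
# placeholders; a real monitoring setup would authenticate and go deeper.
import time
import requests

ENDPOINTS = {
    "keystone (identity)": "http://controller:5000/v3",
    "nova (compute)":      "http://controller:8774/v2.1",
    "neutron (network)":   "http://controller:9696/v2.0",
    "glance (image)":      "http://controller:9292/v2",
}

for name, url in ENDPOINTS.items():
    start = time.time()
    try:
        resp = requests.get(url, timeout=5)
        elapsed_ms = (time.time() - start) * 1000
        print(f"{name}: HTTP {resp.status_code} in {elapsed_ms:.0f} ms")
    except requests.RequestException as exc:
        print(f"{name}: DOWN ({exc})")
```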
I mean, I assume, and I know, there are obviously software vendors out there that try to solve that problem by shifting things around and making VMs more efficient in how they distribute them.
But it seems we have a lot of great data already.
Do we integrate with some of these vendors that are doing automatic shifting of resources around?
Or, in case we're not doing it yet, at least we have the APIs, obviously, where we
expose all of our data, so we could build integrations.
We have no integration as of now, but as you said,
we have all of this data, and we have an API where third-party vendors could actually pull this data
to, yeah, to do some action on that.
Yeah, but not yet. Cool. Um, so to sum it up a little bit: it's basically, I can build my own private
cloud. It's about compute as a service, uh, infrastructure and storage as a service, network
as a service. I think you also said dictionary as a service? What was the other, what was it?
It was, like, object storage.
Storage, yeah. Cool. Uh, any other terminology that we should be aware of?
There is a lot of terminology, but I think we won't be able to cover this in one podcast.
Okay.
And then the reason why I want to ask, there's one thing you actually mentioned this week,
and it's the term project.
Yeah, okay. So OpenStack has actually the possibility to create projects.
So we covered that before.
So you can create a project on OpenStack for a certain business unit, for example,
and you can assign a certain quota of computing resources to that project.
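As a concrete illustration of what Dirk describes here, creating a project and assigning it a quota, a sketch only, using the Keystone v3 and Nova quota-sets REST APIs; the admin token, URLs, project name, and quota numbers are placeholder assumptions.

```python
# Sketch: create a project for a business unit and give it a compute quota.
# Uses the Keystone v3 and Nova quota-sets APIs; token, URLs and the quota
# numbers are placeholders for illustration.
import requests

KEYSTONE = "http://controller:5000/v3"
NOVA = "http://controller:8774/v2.1"
TOKEN = "replace-with-admin-token"
HEADERS = {"X-Auth-Token": TOKEN}

# 1) Create the project (tenant) for the business unit.
project = requests.post(
    f"{KEYSTONE}/projects",
    headers=HEADERS,
    json={"project": {"name": "marketing", "domain_id": "default",
                      "description": "Marketing business unit"}},
).json()["project"]

# 2) Assign it a quota of compute resources (RAM is in MB).
requests.put(
    f"{NOVA}/os-quota-sets/{project['id']}",
    headers=HEADERS,
    json={"quota_set": {"instances": 10, "cores": 20, "ram": 51200}},
).raise_for_status()

print("Project", project["name"], "created with a 10-instance quota")
```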
And then the business unit can actually spin up their instances as they see fit,
until they have used up their
quota, and then they can request more, obviously.
Yeah.
But that's actually, like, a certain aspect of OpenStack that covers, like, the self-service
capabilities, and yeah, it's actually pretty nice.
Yeah, it's like a tenant
system, obviously, right? I would assume. So you mentioned this internally, within
the big company, business units, or, like the telco example, you have your customers and they all get a piece of your infrastructure.
Yes.
So if you're, like, a public cloud provider, which is actually one common use case for having OpenStack in place.
So let's take T-Systems in Germany, for example. There is the Open Telekom
Cloud. They're actually running a large private cloud running on OpenStack, but make it a public
offering. So, like, they're creating projects for their customers, where they can then make use of
the virtual resources that they are providing.
So it sounds like, with that approach, then, if you have an organization,
and you have your development team, your project teams, as you're talking about,
one of the benefits of that is ensuring that your development team doesn't just start spinning up
VMs left and right and not pay attention to them, because they'll reach a limit,
a hard limit, before you overwhelm your OpenStack system, because they're walled off
into a certain amount of resourcing, right? So if they are not paying attention and not managing
their instances well, they'll find out, without taking down the entire system.
Yeah, right. That's really great. Cool. Um, maybe, maybe to, uh, at the end: I know we talked a lot about what it is, and also a
little bit already about what we do from a Dynatrace perspective.
Can you explain in very simple terms, for a normal Dynatrace customer that wants to monitor their OpenStack environment,
what they do, what they install where, and maybe if there are different options between,
maybe, I don't know, if there's an option between full-stack monitoring versus just compute monitoring, compute node monitoring, what are the different deployment models that
we typically see or expect to see?
Okay.
Let's say, let's focus on OpenStack control plane, or OpenStack infrastructure monitoring,
at first.
What you would need to do is install the Dynatrace OneAgent on the controller nodes and on the compute nodes.
And you would, in addition, need to set up the Dynatrace Security Gateway, because the Keystone service is the authentication project
within OpenStack that actually knows all the endpoints of all other services.
So this is like the central connection point. And you configure the Keystone endpoint and
the credentials and actually then you're ready to go already. So then you have like insights into what is going on with your OpenStack services.
You get the log monitoring of all of the services out of the box.
And you also get some insights on what is the resource utilization of your compute nodes.
Then if you also want to get insights into the applications that run on top of your compute nodes,
you would actually need to install
the Dynatrace OneAgent in those virtual machines.
And yeah, then you not only get insights
into the application,
but we also include the data we get
from the OneAgent from the instance
in the OpenStack view
to get really detailed information
of what's going on with the resources within these
instances.
Cool, that's then really full-stack monitoring. I mean, this is then perfect. This is like...
Yes, it is.
From the bare metal through the service layer into the nodes, into the services,
into the apps.
Yeah, it is.
Um, I know when we go to our Docker support, in order to get insights into the containers, we actually only need to install the Dynatrace OneAgent on the Docker host, and then we automatically inject.
Is there anything like that planned as well for OpenStack, where, if you have the OneAgent installed on the OpenStack infrastructure itself to monitor the OpenStack services, we can then automatically inject into the compute nodes as well?
Well, no.
You need to install the OneAgent on the compute nodes.
But from then on, it's actually launching virtual machines.
And you would need to know what kind of operating system
you launch within the virtual machine, to get the right agent in there.
So it's a tricky thing.
But there are projects out there in OpenStack that actually deal with containers, where we would automatically be able to inject into those Docker containers as they're starting.
And on the other hand, there is also a lot of development going on right now into containerizing the OpenStack control plane.
So all of these networking, storage, computing services, to make it simple,
would run in their own containers, which makes it actually pretty easy for orchestration and
restarting services if they fail, and which again makes it also easy for us to monitor and
instrument those services, because of our Docker capabilities. So that's what's going on there.
Cool. Wow. Um, Brian, anything else that we should cover, any other questions that came up when you
did your research on it, or are we good?
No, I think we're good. You know, to me, again, with
this 101 concept, what I'm really learning here, at a high level, is that
it's very easy to think of OpenStack as an AWS, if you understand AWS or Azure,
if you have any familiarity with that and all the different components, you know, especially what I
saw recently when I was working on some Azure stuff where you have to assign the networking
group and the storage group and all these different components. You know, it's not just a, here's a VM, run it anymore,
or here's a machine and just run it. It's the, there's much more compartmentalized services that
support one another. You're just taking that and bringing it inside. That's, you know,
to really think about it in a basic way, OpenStack is your, your own private AWS,
your own private Azure.
And with that, then, you get the great, wonderful benefit of,
obviously, in AWS and Azure, there are monitoring points you can do.
Obviously, you can monitor your own components, but you can very easily then monitor the entire OpenStack infrastructure,
the health of that, the underlying components, so that you understand when there are issues there that might be
impacting your processes that are running up higher. So I think, Dirk, thank you very, very much. It
was a great explanation, and it really just kind of cemented it finally for me. And the other,
my only other takeaway would be that I think everybody would be much safer
calling it Open S. That way, if you're talking about OpenShift or OpenStack, you're covered.
Cool. Hey, thank you so much, Dirk.
You're welcome. It was my pleasure.
Any... typically, I guess, Brian, you always ask our guests, but any appearances of you?
I know you're speaking next week at the, you know, you spoke this week.
Yes.
At the open, at the Red Hat Summit.
Yeah.
Any other things coming up in the summer months where you are, where people may be able to see you live?
Well, we're currently planning on hitting the OpenStack APAC events.
So I actually submitted some talks to OpenStack days in Japan and China.
So maybe there's an appearance.
And otherwise, there are some events throughout EMEA
where we will definitely attend and be present.
Nice.
And if people want to follow you, are you on Twitter,
where you might be announcing
different appearances,
if people want to keep track of you?
Sure.
So my Twitter handle is
wall underscore Dirk.
I think we can put it
in the description of the video.
Yes, we can.
Absolutely.
And you're also a regular blogger,
obviously, on blog.dynatrace.com.
So people will find information there.
And if people want to know
more in general
about OpenStack, I believe there's a great openstack.org website where they can find everything.
And if they want to find out more about Dynatrace and OpenStack, they could actually reach out to
openstack@dynatrace.com and just ask the question. That's easy.
It is.
Cool. That ends up in your inbox?
It does.
Perfect. openstack@dynatrace.com.
I'm going to start using that.
Yeah.
Here we go.
All right.
Well, that's it for me.
Thank you.
Thank you very much.
Thank you for being on our, well, again, not calendar, but recording day anniversary show.
Excellent to have you.
And we're going to be having – Andy, do you want to do a little rundown for our listeners
of some of the other topics we're going to be covering in the upcoming episodes?
Yeah, sure.
Quickly.
So we have a couple of technologies that are new and we wanted to cover them.
So for instance, there will be a 101 on OpenShift, on Azure, on AWS, on Cloud Foundry.
We will be talking about serverless and Node.js.
We will be talking about Visual Complete and Speed Index.
So some of these technologies and techniques or frameworks that are out there
that are new kind of for the industry,
and we just want to get the experts in and get a session like this.
So I think I covered those that we at least are on the short list.
There might be more coming up.
If anybody has any ones that they want to hear about specifically, let us know.
Yeah.
Where can they contact us, Andy?
Well, I think there are multiple ways. Write an email.
pureperformance@dynatrace.com is one
option. Then on Twitter
it is,
hopefully I get this right, pure underscore DT.
Right. I'm putting you on the spot here. That's fine too.
It's good.
It's good.
And I believe that's the
best way. Or they can catch me on
at Grabner Andy. They can catch you
at Emperor Wilson
and then they should
just check out the other podcast and everything
else we do
and I think what I also want
to ask Dirk is if
we can get you on a
performance clinic one of these days which is
a visual web recording where we could
show some of the capabilities. That would be nice.
Of course. Let's do it.
Yeah, perfect. Dirk, thank you very much again. Um, I am forever indebted to you.
All right. And, uh, I guess we'll, uh, be back in a couple weeks with the next 101.
Hope you all enjoyed it today.
Thank you.
Bye-bye.
Bye.