PurePerformance - The Good, The Bad, and The Ugly of Open Source with Goranka Bjedov
Episode Date: April 27, 2020
Goranka Bjedov has seen the different sides of Open Source while working for organizations such as Google, Facebook, and AT&T Labs. Before she takes the stage at www.devone.at later this year, she gives us her take on Scott McNealy's quote "Open Source is free like a puppy is free." Tune in and hear her thoughts on how to pick the right tools, languages, or frameworks, how to grow an open source project, and what things you should definitely avoid.
https://www.linkedin.com/in/goranka-bjedov-5969a6/
https://devone.at/
Transcript
It's time for Pure Performance!
Get your stopwatches ready, it's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello everybody and welcome to another episode of Pure Performance.
My name is Brian Wilson and as always my co-host is with me, Andy Grabner.
Andy, how are you doing today?
I'm good. You just paused there for like, what is Andy? Is he co-host? Oh yeah.
No, yeah, I'm good.
Because I always said, "with me as always," and I forget what I said. I said something and I was just like, was that grammatically correct? You know, the funny thing is, when I go back and edit these episodes, you hear yourself talk, and you hear when you use words wrong, or you say the wrong, you know, helper word or something, and I just cringe and I'm like, ah, why did I say that?
And, you know, I'm not going to go in and overdub them to make myself sound perfect.
So for a moment, I paused to try to think of, did I say the right thing?
And I think I messed it up.
But it doesn't matter.
It just shows all of our listeners that, yes, I am human.
I'm not a robot.
And I talk awfully.
Anyway, Andy, since you asked, how's your day going?
It's over pretty much.
It's pretty much over.
It's 6 p.m. here at the time of the recording.
There's an interesting thing in our lives, I guess, impacting all of us, which is the COVID virus.
So that's an interesting challenge.
I was supposed to go on a ski trip tomorrow, but that got canceled.
And that means it gives me more time to work the rest of the week.
So wonderful.
I know.
That's all good.
All right.
But Brian, enough about us and what's happening in our lives.
What do you think?
Should we introduce the guest of honor, the reoccurring guest?
Should I?
Recurring.
No, I guess reoccurring, yeah.
There's a difference between recurring and reoccurring,
and I believe you did use it correctly. Again, I'm very excited about our guest. Now I feel like I always have to say it, but I am. That's going to be my running theme for the next few episodes, talking about how excited I am and how obligated I am to be excited. But I really am excited, because this is a very prestigious guest. I'm very honored to have her back on. But Andy, you always do the introductions.
Sure. So welcome back to the show, Goranka Bjedov. We had her on episodes 33 and 34. It's been more than two years since Goranka talked about performance engineering at Facebook, monitoring at Facebook, and how DevOps works. And two years have passed, Goranka, and thank you for being back on the show.
And I know there's a lot of new things
that we want to talk about today.
Hi, Andy. Hi, Brian.
Really looking forward to the conversation.
Yeah, it's been a long time.
You know, I don't know about Andy,
but I often go back to a very reduced message
that you put out in some of those podcasts
where there was the idea of the developers
having to put all of their success criteria into their build.
And then if it didn't meet them, it would just get smacked down
and they would lose their credibility.
Not their credibility, but their standing in releases and priority. And it's always a story I like to relate from that. So thank you for that, because it's come in very handy quite often.
You know, I still haven't changed my opinion on that. I think it is incredibly important
to ask people before they release what their success criteria are. Because once we start seeing data, and once we start seeing how a new feature or a new product is performing, it is very easy to fall back to, well, this is exactly what I expected and this is a success, whereas ahead of time people have much different ideas of what is going to happen. Yeah.
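The practice described here, writing down success criteria before release and then judging the observed data against them, can be sketched roughly as follows. This is a minimal illustration, not anyone's actual release tooling: all metric names and thresholds are made up.

```python
# Sketch of a pre-release success-criteria gate. All metric names and
# thresholds below are hypothetical; the point is that criteria are
# written down *before* the data comes in, so nobody can redefine
# success after the fact.

def evaluate_release(criteria, observed):
    """Compare observed metrics against criteria declared before launch.

    criteria: dict of metric name -> (comparator, threshold)
    observed: dict of metric name -> measured value
    Returns (passed, failures), where failures lists unmet criteria.
    """
    comparators = {
        ">=": lambda v, t: v >= t,
        "<=": lambda v, t: v <= t,
    }
    failures = []
    for metric, (op, threshold) in criteria.items():
        value = observed.get(metric)
        # A missing metric counts as a failure too: if it wasn't
        # measured, the criterion cannot be considered met.
        if value is None or not comparators[op](value, threshold):
            failures.append((metric, op, threshold, value))
    return (len(failures) == 0, failures)


# Criteria the feature team declared before the release:
criteria = {
    "daily_active_users": (">=", 10_000),
    "p99_latency_ms": ("<=", 250),
    "error_rate_pct": ("<=", 0.1),
}

# What monitoring actually observed after the rollout period:
observed = {
    "daily_active_users": 12_500,
    "p99_latency_ms": 310,
    "error_rate_pct": 0.05,
}

passed, failures = evaluate_release(criteria, observed)
print(passed)    # False: the latency criterion was missed
print(failures)  # [('p99_latency_ms', '<=', 250, 310)]
```

With a gate like this, the "this is exactly what I expected" rewrite of history becomes impossible: the release either met the numbers it committed to in advance, or it did not.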
And I got to say, Goranka, you helped me a lot. You don't even know how much you helped me.
But in the last two and a half years or three years,
I think the first time we met was three years ago in New Zealand at WOPR.
And I remember you talking about performance engineering at Facebook.
I believe you gave me some insights into canary deployment, enlightened those of us who didn't know that canary deployment is primarily done, at least back then, in New Zealand. And then you also gave me the analogy of the success criteria. And then if the success criteria are met, you really then start optimizing your code. It's like a fix-it ticket.
I think you gave me some analogies
and I've used these analogies
over the last three years
at a lot of my conference talks.
So thank you so much
for making it so much easier for me
to tell the world about Canary deployments,
about the success criteria,
about fixing things.
And it was really amazing.
Thank you.
Oh, you're very welcome.
I think everybody in New Zealand knows
that they are the first place
where everything gets released.
As a matter of fact,
I was just in New Zealand again for two weeks.
I love the country
and it's my favorite hiking destination.
And so I ended up chatting about things like that with our guides and so on,
and they're like, oh, yeah, we know.
We know.
Everybody deploys.
And they think it's cool, and probably is,
because some features never see the light of day after New Zealand,
and they get to see it all.
But sometimes it's also painful
because a lot of stuff that gets deployed originally
gets basically debugged in New Zealand.
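The New Zealand-first rollout being described can be sketched as a staged routing decision. To be clear, everything in this sketch is a hypothetical illustration, not Facebook's actual mechanism: the regions, stage percentages, and bucketing scheme are all invented.

```python
# Sketch of region-based canary routing: one region gets the new build
# first, then the rollout widens in stages. Regions and percentages
# here are made up for illustration.
import hashlib

ROLLOUT_STAGES = [
    {"regions": {"NZ"}, "percent": 100},        # stage 0: all of New Zealand
    {"regions": {"NZ", "AU"}, "percent": 100},  # stage 1: widen to Australia
    {"regions": None, "percent": 10},           # stage 2: 10% of everyone
    {"regions": None, "percent": 100},          # stage 3: global
]

def serves_new_build(user_id: str, region: str, stage: int) -> bool:
    """Decide whether this user sees the canary build at a given stage."""
    cfg = ROLLOUT_STAGES[stage]
    # Region gate: outside the canary regions, nobody gets the new build.
    if cfg["regions"] is not None and region not in cfg["regions"]:
        return False
    # Stable hash of the user id, so the same user always lands in the
    # same 0-99 bucket and doesn't flip between builds on every request.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < cfg["percent"]

print(serves_new_build("user-42", "NZ", 0))  # True: NZ sees everything first
print(serves_new_build("user-42", "US", 0))  # False: not rolled out there yet
```

The payoff of this shape is exactly what the conversation describes: problems surface, and get debugged, inside a small blast radius before the feature reaches everyone else, and features that fail their criteria in stage 0 never see the light of day anywhere else.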
So is it fair to say that New Zealand would see the good,
the bad and the ugly?
Oh, absolutely.
Yeah, which was a perfect segue to...
That's the title of your talk at DevOne, where, as of the time of the recording, we don't know if it's happening in April or maybe later in the year. But still, Goranka, this is also the reason why we wanted to definitely have you back on the podcast, especially now, because you talk about the good, the bad, and the ugly of open source.
So this is what I would like to hear more about.
So I'm sure you're aware that, you know, when you are talking about the technology space in general, I tend to joke about it being a religious environment. You know, what is your software religion? Are you, you know, an IntelliJ or Eclipse person? Are you a Java or C++ person? And we tend to not be realistic and analyze things. It's been
one of my frustrations through like whatever, 20, 30 years
that I've been in this field. But in reality, you always have to look at what is your environment,
what are your constraints, and then pick the right tool. There is simply no simple, single,
right tool that will solve all of your problems. Or if there is, I certainly don't know of it. And I would wonder
why it hasn't been deployed all over. Right. And so you will meet people who will simply tell you
like, well, clearly open source is the solution for everything. I mean, why would you ever either
buy from a vendor or why would you ever, you know, contemplate developing something in-house for
yourself? Or it's like, this is stupid. You should never do that. You should always go open source. And then there is the other faction that says you should
never use open source. Open source is always terrible. Open source will always have problems
and you should never use it. And I just sort of look at both of those camps and shake my head
and go, you know, I cannot believe that you can be that convinced of the opposition and be that wrong, regardless of which one of those two we are talking about.
And so from my perspective, there are pitfalls with using open source software.
It's not going to solve all of your problems for free. I always like to quote, I think it was Scott McNealy, who
said, open source is free only in the sense that a puppy is free, right? You can pick up a puppy
for free, but then from that point on, you better feed it and give it shots and so on. And it's kind
of like that with open source software. A lot of times I see people saying like, well, I'm just going to use X and it's going to solve all of my problems.
And I'm going like, all right, who's going to maintain it on your end?
Who is going to be deploying it? Who is going to be analyzing it?
Who is going to like, does it even fit on your stack?
You know, and you get back these blank looks of, like, well, it's open source.
I mean, it's just going to work magically, right?
And that's very frustrating because then they try one open source package and it doesn't work.
And then they try the next one and it doesn't work the way they wanted.
And they try the next one and they leave and say, well, open source simply doesn't work, which is also not correct. You know, so I kind of feel like I've had some good experiences where I feel open source
really helped us put something together very quickly and sort of convinced a whole bunch
of people that their internal development was basically a wrong thing to do.
In that particular situation, I was 100% convinced that, oh, my God, doing internal development is absolutely terrible.
And it involved performance testing.
So I will always argue that anybody who is developing internally a load generating tool is making a giant mistake.
Yeah.
You know, call it whatever you want, but in particular load generating tools, performance tools. Because when something goes wrong, how will you know if the problem is in your tool or if the problem is in your product? You just will have to figure that out, and that's not a trivial problem. In most cases, I suspect you will have a problem in both places, but you may find one or you may find the other. And it's very easy to write off the problem in the product, because you are hoping that that's not what is going on. So in cases like that, I would never write my own load generating tool. I don't think that makes sense. Now the question is, which one of the plethora of tools in open source do you use?
And I can't even give you the answer to that question. It really depends on what are you doing
and what is the skill set of the people that will be using this tool.
Yeah, I think that's a really important point too, because Andy and I both have the load testing background and work with other people who do as well.
And a big part of the question is,
what protocol are you using?
What kind of tests do you need to run?
All these kinds of questions.
And I think a lot of this ties into the general idea
that we even see when people are looking
to move things into the cloud.
And like, do I move to Kubernetes?
Do I move to just standard containers and microservices?
The question is not what tool do I pick? It's what do I need to do and what's going to best
get me there? And maybe the answer is, you know, the vendor-based one, maybe even, you know,
we even see a lot of times, a lot of people are like, oh, we're going to lift and shift everything into the cloud. And then we have no plans beyond that.
And then we're going to containerize everything and make it all microservices.
And then they split it out to 500 microservices that are all making one-to-one calls.
And they added a bunch of network latency because they thought they were supposed to do that.
Whereas you need to take that step back and look at what's our goal?
And let's design around our goal.
And I think the point you're probably making there is that the same thing comes into play whether you choose open source, a vendor, or build your own. It all depends on what you need and whether it exists. Is support important enough that you want to pay a vendor? Is it something where you can just rely on the open source community? Or does nothing out there suffice, because you have such an edge case that you're going to have to build this in-house?
Oh, I would add even more to that. You're dealing with a different
situation depending on who you are as a company. I'm going to argue and I am by no means trying to
say, oh my God, unless you work in technology, you are obviously not smart or something like that. Not the case.
But there is a very different situation if you are, you know, any one of the FAANG companies, right?
Or if you are a bank or if you are, God forbid, a healthcare provider in the United States.
It's not just that there are different legal requirements and different risks involved. It is also,
you know, what is your expertise? Let's say you're a Citibank. Do you really want to build
a team of really good technologists that will do for your company what, for example, you know, different people or internal IT is doing for Facebook or for Google or so on.
And I would argue that if you choose to do that, you know, that's going to be a tough proposition.
And you are simply in a different situation if you are a healthcare company. Let's say you are a giant hospital, whether you are Stanford Healthcare or something like that.
Well, holy crap, you are now under all of these laws that really govern the security of all the data and you probably have data that goes back, you know, 50, 60, 70 years, which means it's
in all different database forms. It's a very different situation. And I would say
hiring a company that specializes in, you know, solving problems like that versus trying to build an internal team that will do it for you,
I would probably go with hiring expertise. Anything that isn't my core job, I tend to
prefer to hire that expertise for as long as I can. It's difficult to develop expertise outside of your core space simply because you don't,
you know, you don't even know how to interview people for it.
You don't even, you sort of understand the problem on a very high level, but you don't
understand it on all of the levels in between.
And now you're trying to hire people that will actually know how to solve the problem
that you don't even know how to describe.
Those are the places where I always feel if I were a company, I would always try to get,
you know, the third party security team or the third party performance team, or even, you know,
somebody to do the architecture and do the analysis and tell me, you know, how big of a mess I am in.
So what's interesting, though, and I agree with you, but then it's also a little contradictory to what we hear when you go to some of these conferences, that I also go to and talk at, where we say, in order to be more competitive, you need to in-house a lot of things, because that gives you a competitive advantage. And you need to build up your own teams that can do delivery, that know how to do CI/CD,
that know all these things.
And then the challenge with this, obviously, is we're all, as you said earlier, we don't have an unlimited pool of resources of engineers that we can all of a sudden draw from.
And then we struggle, and then we're kind of bound to fail right from the beginning, it seems,
or at least some organizations, because not everybody can hire all these people that we
would need to hire in order to build up
these expertise.
But it's interesting, right?
If you go to the DevOps days, if you go to some other conferences, it feels like all
these success stories are centered around, hey, we have built this team that is now doing
magic for us.
And now we have the brightest people.
And you have to replicate this in a way in case you want to do this as well.
So I think you have two cases that I can speak of. I believe that both Facebook and Google have obviously built a lot of stuff internally, and I think both have done a phenomenal job. I actually think nobody should look at Facebook and Google and
say, well, this is the average case. They're not. They're outliers. And the one thing that I
remember, especially when I moved from Google to Facebook, is how Facebook's approach was always,
let's not build it until we really have to, until we have no other options. And we realize that two
or three years down the road, we will basically be stuck unless we solve this problem in-house
ourselves. And I think that is a very, very good approach. I think a lot of companies,
let me give you a different example, Netflix. Netflix, I think very few people would argue that Netflix isn't a
successful company. And if somebody is going to take up that argument, I would love to hear it.
I think Netflix is incredibly successful. However, Netflix does everything, to the best of my
knowledge, it does everything on AWS. Most of their stuff is in there. They didn't build their own data centers.
They didn't go the route of Google and Facebook. And when you look at it, you go like, well,
why didn't they? Well, because if you really think about it, their core business and their
core expertise, and where they really want to invest money, and from my understanding they are investing, is in creating and managing content, basically
giving people things that people want to watch, even if that includes commissioning and creating
some of those things themselves. Their core job is not building network all over the world and
trying to make sure that they have enough. So instead, I think their teams have,
and rightfully so, gone and said, well, who can we get the network from and make it incredibly
extensible and available? And they landed on AWS. And to the best of my knowledge,
they're using it to this day. You also have to know that the only stuff that I know about Netflix is not even third-hand, but fourth- or fifth-hand. But I think this is one of the reasonably well-known
facts about them. And, you know, I obviously have that service in my house and I certainly
have no complaints. You know, I know some of the things that they've done, obviously they've put
their edges as close to the customer. They've made deals with, you know, service providers to homes,
and they've put their giant caches and stuff into the, like, Comcast data centers and so on,
which makes perfect sense.
But they didn't go and build those things on their own.
To the best of my knowledge, they didn't build their own machines.
They just basically said, hey, this is good enough.
If you're a brand new startup right now and you contact me and you say, hey, we'd like to
talk with you because we want to build our own machines and we know you were involved
with all of that stuff at Facebook.
How do we go about doing that?
My first comment is going to be, don't do it.
Don't do it.
Facebook didn't build its own machines
in 2004. The first time Facebook built its own machine was actually in 2011. And at that point
in time, it was only one type of a machine. So the first thing that you do is you go into somebody
else's data centers. You go into a colo, because in 2004, AWS was not a thing, right?
Today, I would say, I don't care which one you go, whether it's Azure or AWS or Google Cloud,
but go in there. Figure out first, is your product going to be a success? Or, like it happens for the majority of products, will the audience say, eh, don't care, right?
So before you invest all of your money into infrastructure, make sure that you have a product that can actually make money and support that infrastructure.
And it's astounding to me that I have to kind of argue this with some people.
And it's like, do whatever you like.
I'm just telling you what is, in my mind, the only thing that makes sense.
So that means what you're saying, if I kind of repeat it back to you, is that if Facebook had started not in 2004 but maybe in 2015 or now, Facebook would most likely have started in the public cloud to figure out whether this product, this social network, is really something that people will need.
Absolutely.
I would think anything else would be insane.
I think you have to find a solution that fits the time
and your circumstances, right?
So obviously the whole thing started in a dorm room.
Then once Mark ran out of disk space and so on,
they started adding some disks and stuff like that.
And then eventually you simply had to go to Colo
and put machines in there.
That approach lasted for quite a while, I would say.
So they started in 2004. So February 4th is the official
Facebook incorporation date and so on. And so all the way up to probably 2009,
that was the only thing that was planned for. And then you realize, oh my God,
this thing is really getting big. Now I need to go and look into what I'm going to do for the next
stage. And I can't say when they started looking into that because I joined in 2010. But I would
guess 2008, 2009, you know, they probably started looking and saying, hey, you know, if you're going
to be renting all of these colo spaces, should we think about buying our own and building our own?
And what about the machines?
How happy are we with the ones that we can buy?
And can we do better ourselves?
Because I don't think you just make up your mind to do this one day
and then in a month you have the solution.
And so somewhere in there they started thinking.
And in 2011 they had the first data center.
And it had a subset of machines.
The compute machines were Facebook-built and Facebook-created, Facebook architecture.
And the rest of the stuff was still vendor-provided.
And then over the years, one by one, Facebook is now running on pretty much all of its own hardware.
But it all depends on the circumstances and on the time.
I was going to say, Andy, going back to the comment you made about the conferences, the thought I had on that, which again I don't know if it's 100% valid, but the way I interpret it is, you know, again, the idea that people will go to the conferences and tell everyone to build everything on their own.
I think there's like a combination of things going on there.
I was just talking to my old colleague, Francis.
Hi, Francis, if you're listening.
And we were just talking about the joy of learning, the joy of sharing knowledge, right?
And I think a lot of the core of what goes on in conferences is people get themselves in situations and they tinker and
they learn some things and they figure out how to do some stuff, right? And sometimes it's very,
you know, obviously there's the stories where it really takes off in an organization.
But I think a lot of what goes on in the conferences is knowledge sharing and sharing
that love of learning. And when you attend the conferences and when you listen to them, hopefully you have
in your mindset, all right, this is great stuff to learn, great stuff to absorb. But then you,
you know, internally, you have to know how to apply that rationally in your organization and
in your system. But obviously I think at a conference, people get a little overzealous
just because, you know, maybe you're on stage, it's all exciting and say, this is the best way to go,
and this is the best way to do things. And it would be interesting to see that part of the
message scale back and it really just be about, here's some cool things I learned. So even when
I go back to, you know, when I learned Python to deploy against the API, like in an automated
script, you know, that took me a long time to do.
It's a pain in the butt to maintain, you know, anytime the API changes or anything, I got to go
back and fix the script and make sure it works. If I had something that could automate that and
just like something else that would build it, you know, at that point, I think you look at that
situation like, yeah, let me use that tool instead of doing this on my own. But I'm just curious,
you know, from both of your point of views, at the conferences,
do you see it more like people are saying
this is the way to do it?
Or is it more of a sharing what I learned,
sharing the fun things I got to do
because I had time to play with these tools and do things?
Or do you feel it's really more like
do this to be successful?
So interesting question.
I feel that, you know, on a conference stage, I don't think they force you to say, or they don't say it in a way where you feel forced to repeat what they've done.
But sometimes it feels it gets really hyped in the conference space.
And then maybe if the wrong person hears it, or not the wrong person, but somebody hears it, wherever they are high up in the organizational chain, they believe then that, hey, because they were successful and these are the business metrics that they showed me on stage, now we need to do this as well. And I kind of sometimes feel this might be misleading. And I think you also made a good point. Sometimes when
you get on stage and you talk about your own success, sometimes we exaggerate, not exaggerate, but maybe nice-talk things and, what I'd say is, only focus on the good things and not on the bad things, and therefore not always give the perfectly true picture.
And I'm guilty as charged with this as well sometimes, right?
Because you want to obviously tell the story in the best way.
And so, yeah, that's kind of my my feeling but i i want to
say it's not that people are saying you have to do this in order to make the same i don't know 50
performance improvements and blah blah blah whatever yeah so completely agree um one thing
i will add is who are the people who are making presentations? And this is a general problem, I think, in both research academic communities, but also in our technology communities.
You get to make a presentation if you were successful.
Nobody comes on the stage and says, here is the five things that we have tried and all five failed.
And here is why they failed. Because you don't get to give the talk. Nobody is going to invite you to give the talk. You don't get tenure for those kinds of papers and so on.
And so who do you get? You get people who have succeeded and who are genuinely excited about it.
I think one thing that I will say for my colleagues left and right in the technology space,
people tend to love what they do. They love technology,
they love it to bits, and they get excited about it. I mean, that's why you have these religious
wars because, you know, individuals deeply believe that what they love is the right solution to
everything. And so when they're on the stage and they're giving a presentation, I don't think they
are faking. I don't think they are lying. They are excitedly talking about something that has been a phenomenal success for them.
However, they, by definition, are not the most objective person to analyze and say, oh, this worked because these 15 constraints were all fulfilled and therefore your solution
applied, right?
It's very difficult to analyze those kinds of things.
And especially if you worked on something for six months or nine months, thinking back
to what are those things, you know, a year ago that made you choose a particular tool,
it can be hard.
And I think we tend to forget that. I really wish we had
more examples and more presentations of, you know, here's what we tried and this didn't work. And
here is why it didn't work. Or it worked for a period of time from 20 million users to 500
million users. But then after that, it fell apart.
I think everybody could learn a little bit more about it, but our industry suffers from
another problem that I really hope companies start solving, and that is, look, writing
code from scratch is fun, okay?
When people say programming is hard, what they're talking about is that maintaining somebody else's code is hard. Because you have to read their code, and reading code involves trying to
understand what another human being thought at a particular point in time when they were writing
that line of code. And let's be very honest. Have you seen your own code that you wrote two years
ago and went like, what the hell was I thinking?
Right? And so now you get into the problem of doing that with somebody else's code.
Part of the reason why we start solutions from scratch is because we are lazy. It's hard to
read somebody else's code. And then we are also delusional and go like, of course, this code is
crap. So what I'm going to do is I'm going to write a perfect piece of code that will
never have any problems and everybody will love. And then after nine months, we launch. And now I
don't want to maintain that code anymore. Now I'm just going to dump it onto somebody else and the
cycle repeats. Right. So there is a certain level of excitement in writing my own code, in getting it to work, in solving a problem, and not having to
spend the hard time of maintaining a giant piece or a code base that you're just kind of going like,
what is this doing? Why is this going on this way? And stuff like that. And so what we should do is also require people and not give them credit
until they have spent a year maintaining the feature that they have deployed. You can't walk
away. And then have them give a presentation after that year. Because at that point in time,
the information and the experience would be a lot
more relevant. It's very easy to be enthusiastic if you happen to be one of those people who,
I know the founder, I've been here for a long time, I get to do something new,
and then I dump it on a bunch of poor sobs that get to maintain it.
And let me tell you, every company that I have ever heard of has cases like that.
Andy does it all the time.
I suspected that.
Hey, I wanted to ask, we talked about the pitfalls and benefits of the build-your-own, but what are the pitfalls of open source?
I mean, I know they're somewhat obvious,
but from your point of view,
obviously we're preparing a talk on this stuff and all.
What about the open source side of the house?
We know if it works well, it's great and all that,
but what do you really have to look out for?
And what are part of the decision matrix
that you think for that?
I'll mention a couple of things.
I'm probably going to expand on them during the conference.
But for example, pick the right tool.
And that can be a hard problem because a lot of people don't know how to pick a particular tool.
Let's assume you want to do some performance testing, load generation, stuff like that.
And you want to pick an open source tool.
Well, which one are you going to pick?
There is probably 15 on the list.
And I will tell you the way I pick the tool, I've gotten this reputation as, oh, she's
a JMeter advocate and she believes JMeter is better than any other tool and no tool
could ever come up to JMeter.
Nothing could be further from the truth. JMeter was the right tool for me in 2005, 2006,
when I was working at Google, because I was transferring my work once I would be done after
about a month or two of making sure everything is okay, to the QA teams that in many cases didn't have strong programming skills.
And so the GUI programming interface that JMeter had,
which in any programming environment would be a huge detriment and a negative thing,
was a huge plus
for me, right? I did not necessarily care what was the underlying programming language,
but the ability that I could give this script and that even people who are not spending all
of their time coding could play with parameters, change things, try them out, maybe add a listener
or a sampler or something
like that very quickly. And I could bring them up to speed in no time. That was a phenomenal sort of plus feature to have. And, you know, at the time I was giving talks about what we've
done and how we've done some, you know, components with it. You know, people sort of took it to say,
oh, if I'm doing this, I have to use JMeter.
And it's like, if you give JMeter
to a bunch of really good programmers,
I cannot imagine anything that would be more frustrating,
to be honest,
because it wouldn't be the tool I would choose.
So be careful, pick the right tool.
The other thing that I will say is very frequently
people will come and say, so here's this new open source tool, and I think it's awesome. Well, how many contributors does it have? Yeah, three. Are you really going to, you know, basically stake your future on three people? Will they stick around? Will they go away? How long has it been around? How many users does it have? So I would always say, check how many people are contributing to that particular project. The other thing that is incredibly important is
check how well the project is known. And here is a plug for this tool
that Google developed a very long time ago
and I absolutely love.
And for some reason,
nobody else has ever heard of or uses
and I will never understand why.
You know, you'll also hear now,
I am a terrible product person.
So I have always been completely wrong predicting how well a product is going to do.
Like I could not be more wrong.
But anyway, um, there is this product called Google Trends, and you can find it under Google Labs, which sort of exists or doesn't exist anymore.
I don't know.
I've been gone for a long time. But basically, Google Trends,
let's assume you're analyzing and trying to pick between four or five different tools.
Plug in the names for all of those tools, and Google Trends will tell you how many people are searching for that particular term over a period of time or even in a particular region and so on. Well, usually one or two of those tools will come up at the top.
Caveat, be careful if the tool name matches a common word.
At that point in time, you have to be very careful about how you apply your analysis.
But usually you will find out that there are certain tools, not because they are better,
but just because they have a better word of mouth, they're doing better advertising, whatever.
But there are certain tools that everybody is adopting.
And that's a powerful signal because especially if you're doing performance analysis, right?
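Taken together, the signals mentioned so far — contributor count, age, user base, search interest, and fit with the team's skills — can be folded into the kind of decision matrix Brian asked about. A toy sketch in Python; the criteria, weights, and function names here are purely illustrative, not from the episode or any particular methodology:

```python
# A toy weighted decision matrix for comparing candidate tools.
# Criteria and weights are illustrative assumptions, not a standard.

CRITERIA = {                  # weight for each signal
    "contributors": 0.3,
    "years_active": 0.2,
    "search_interest": 0.3,   # e.g. a Google Trends score, 0-100
    "fits_team_skills": 0.2,  # 0-100, judged per team (the JMeter lesson)
}

def score(tool_metrics, maxima):
    """Normalize each metric against the best candidate, then
    combine with the weights above. Returns a 0-100 score."""
    total = 0.0
    for criterion, weight in CRITERIA.items():
        total += weight * (tool_metrics[criterion] / maxima[criterion])
    return round(100 * total, 1)

def rank(candidates):
    """candidates: {name: {criterion: value}} -> [(name, score), ...],
    best first."""
    maxima = {c: (max(m[c] for m in candidates.values()) or 1)
              for c in CRITERIA}
    return sorted(((name, score(m, maxima))
                   for name, m in candidates.items()),
                  key=lambda pair: -pair[1])
```

Normalizing against the best candidate keeps incomparable units (people, years, trend scores) on one scale; the weights are where your own context comes in — which is exactly why the "right tool" differs per team.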
The reason why I'm very much against internal tools for performance load generation and so on, if I'm selling a product that has to
go to somebody else, let's say the Google Search Appliance. And now I'm trying to tell somebody, okay, my Google Search Appliance, that particular type can do this many queries per second on this
many documents. It is perfectly reasonable for the buyer to say,
okay, I'd like to reproduce your results. Could you please give me a script? Oh, yeah, but you
see, I'm using my internal tool. Well, that's not a very good solution. If I'm a buyer, I am
certainly not going to accept that. I know that everybody who is selling a product will probably
stack some of their searches in
some way or the other, and I will want to verify those results. But if I say, sure,
here is a JMeter script, you set this up, here's how you run it, it's very easy for them to download
JMeter and verify my results. So again, a huge benefit and, in my opinion, a huge plus for stuff
like that. Not just that I didn't have to develop the tool,
but also it's very easy for me to give it out.
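That hand-off is concrete: JMeter can run a shared script headlessly (`jmeter -n -t plan.jmx -l results.jtl`) and writes its results as a CSV `.jtl` file, so the buyer can recompute the headline numbers themselves. A minimal sketch of that verification step — the `elapsed` and `success` columns are JMeter's default CSV fields, but the floor-index percentile and the field handling here are simplifications:

```python
import csv
import statistics

def summarize_jtl(path):
    """Summarize a JMeter CSV (.jtl) results file: request count,
    error rate, mean and 95th-percentile latency in milliseconds."""
    latencies = []
    errors = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            latencies.append(int(row["elapsed"]))   # response time in ms
            if row["success"].lower() != "true":    # JMeter writes true/false
                errors += 1
    latencies.sort()
    # simple floor-index percentile, good enough for a sanity check
    p95 = latencies[int(0.95 * (len(latencies) - 1))]
    return {
        "requests": len(latencies),
        "error_rate": errors / len(latencies),
        "p95_ms": p95,
        "mean_ms": statistics.mean(latencies),
    }
```

With something like this in hand, "could you please give me a script" becomes "run this and compare the numbers" — the reproducibility argument in a dozen lines.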
And so if I am actually using a product that is used by only five or ten people around, and I'm approaching, let's say, Bank of America, I'm trying to sell something to Bank of America,
so the people that have a lot of money,
and I'm now saying, like, well, you know, my tool is great, but you now have to jump through these 55 hoops to verify that my results are
correct. While at the same time, they're hopefully looking at two or three other vendors and those
are coming in there and saying, oh, and here is a self-install, you know, I don't care, OpenSTA package that you can run and verify all these things. And, you know, you can see all of the signatures match. So you don't
even have to install any of this stuff. You know, it's all done for you. Well, obviously that person
is going to have an advantage over me because, you know, on the receiving end, their IT people probably have a
ton of work. This has just been dropped in their lab. Do they really want to spend 10 days
researching and figuring out how to develop a tool that has been around for only three months?
And oh, by the way, the last three builds, don't use them, but use this build? No. So in those cases, I genuinely believe it makes sense to go with a
tool that is incredibly well-known. And even if you dislike some aspects of it, you always have
to look at what is your goal. My goal as an engineer is to do what is right for my company. And if that requires me to learn how to use a tool that I would prefer not to deal with, well, you know, it's part of being a professional.
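The "how many contributors does it have" check can be automated rather than eyeballed. A hedged sketch against the GitHub REST API — requesting one contributor per page and reading the page number of the `rel="last"` pagination link is a common way to get the total without fetching every record; the function names are mine, and real use is subject to API rate limits:

```python
import re
import urllib.request

def contributors_from_link(link_header):
    """Given the Link header GitHub returns for
    /repos/{owner}/{repo}/contributors?per_page=1&anon=true,
    the page number of the rel="last" link equals the total
    number of contributors (one contributor per page)."""
    if not link_header:
        return 1  # no pagination header means a single page of results
    match = re.search(r'[?&]page=(\d+)>; rel="last"', link_header)
    return int(match.group(1)) if match else 1

def contributor_count(owner, repo):
    """Fetch the pagination header from the GitHub REST API and
    count contributors. Makes a real network call."""
    url = (f"https://api.github.com/repos/{owner}/{repo}"
           "/contributors?per_page=1&anon=true")
    with urllib.request.urlopen(url) as resp:
        return contributors_from_link(resp.headers.get("Link"))
```

Run over your shortlist, this answers "three contributors or three hundred?" in seconds — one of the cheapest due-diligence checks there is.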
Yeah.
I got a question now for you.
The million dollar question.
So we are Dynatrace.
We also just released an open source project, and it's called Keptn. And it's an event... Everybody drink! Everybody drink, exactly. Yeah, that's a little game that Brian starts playing with me. Every time I use the word Keptn, you can have a drink. So we started this open source project, and now we are in that stage of kind of making it over the hump. So the question is, how do you, in the beginning, when you strongly believe that you're solving the right problem with this tool, how do you convince people that are following your rules of how many contributors, how well is the project known, how long has it been around,
and they see us and they think, well, this is a great thing,
but we don't really know if we should bet on them.
So do you have any advice on how we can make it over the hump?
So keep in mind you're getting free advice,
so it's probably worth what you're paying for it.
And also, second caveat, I've never been in that situation. So
yeah, this is going to be great advice. But you know, what I would do is I would ask people what
concerns they have. One of the things that I've learned over again, decades in this industry is
if even internally there is an internal product and people have to come to me
and force me to use it, product is crap. It's as simple as that. It is some VP somewhere who has
decided that this team is going to get a pass and we will be pushing this stuff on everybody else
and people will be grumbling. I am yet to see a software engineer that you will approach
and say, hey, here is this thing. It's going to take you 30 seconds and it's going to solve 25%
of your problems. Who's going to say like, you know, I really don't want 25% of my problems
solved. I would rather enjoy all 100% of them. Now, if you have a good tool, people will use it.
A lot of times it happens that people writing the tool do not understand the problem because
they are great tool writers.
They have their own personal image of what the problem is.
And then they go around forcing people to use their tool.
I've seen this many times with people coming and saying,
here, I have this great load testing environment. And it's like, great, why are you talking to me?
I would never use it, right? And well, but it's a great load testing environment, and I'm going to
get promoted if you're using it. Well, that's not my problem. And your promotion is not the problem
that I'm focusing on solving. So, you know, find somebody else.
So what I would say is, A, great thing that you have released it.
Try talking to people that are the most likely users.
I'm hoping that you have two or three companies.
Usually smaller companies will be more open to moving quickly, right?
All you, you know, if you have a company of 15 engineers, all you need is one engineer who is really into it and is working on these types of problems.
And then if you find that person and offer them a little bit of hand-holding, a little bit of extra help, they will more than happily jump on board and try the stuff.
But then you have to deliver.
Like when they come back and say,
hey, this thing is not working.
And so here is my patch
and it's not breaking anything else.
Somebody needs to go and code review that patch
and get it merged quickly and push it out
and thank the person publicly and credit them
and move on
forward. Because if you sit on that patch and I'm now waiting, waiting, waiting, waiting,
and I can obviously make my own branch and do the stuff for myself, but you run into the problem
that then you release a new version and now I have to remember what are all of the patches and port them back.
I think a lot of times it is very important to support the contributors and to actually help them, you know, help you make a better product.
Thanks for that advice.
And I think that's spot on.
That's also what we've seen.
Fortunately, we are over that initial hump
of we have external contributors.
We have the first handful
of public referential external users,
not only internal users,
but also external users.
But I still think that
when people come across our tool
and they try to solve this problem
and then maybe they look at,
okay, how long has it been
around? Oh, only a year and a half. And how many contributors does it have? Okay, it's 10, 15, but there's another tool that has twice the amount. So that's why I'm trying to figure out how can we make something, even though on some of the numbers we might not be there yet, still make it appealing enough to try it out.
So then I think your point is,
if it is a product that can really solve problems,
true problems, and it can be easily evaluated,
then most likely people will still see the benefit
and then maybe give a star on GitHub
or give a shout out and start contributing.
And with that, it may just take off on its own.
I have another suggestion.
Yeah.
Create a subdirectory of examples where you provide sample code or whatever, and invite people to say, hey, we've done this example for this, this example for this.
so on? Ping us, send us a note, and we'll try to support you as much as we can. And then provide
a couple of simple examples so that people can, I think it is much more satisfying to download the stuff and have something running by the end of the day.
And let's face it, all of us tend to start doing something brand new by looking at sample code. So having examples of like, Hey, here is how you, you know, generate some HTTP traffic or HTTPS traffic, or, you know,
I don't know what else people do to be honest.
But those kinds of things that they could literally download startup and start
running all on their laptop within a day, I think that would help.
I think that's great, too. And Andy, what I was going to say, too, the things I've learned from
Captain in terms of how I would in the future look at open source tools if I ever needed to
was number one. And I think you do have those examples out there, which is really, really cool. But I also like the fact that there's a Slack channel, right? So you can go on then to
that Slack channel and see what people are talking about. You know, you're not going to necessarily
see the people who are using it, experimenting with it in the repo because they're not necessarily
contributing. But when you see them in the Slack channel, when you get to see what kind of questions
are being asked, what kind of issues people are having, how quickly the supporting team
is responding to those, that gives you a really good idea of the trajectory of an open source
project.
Are people asking questions and it's sitting around for days before someone gets back to
it?
Or is there a lot of engagement?
Because, yeah, I think that's the hugest issue.
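That responsiveness signal can be put into numbers too. Assuming you have already pulled issue timestamps (for example, via the GitHub API's issues and comments endpoints), a small sketch for the median time-to-first-response — the data shape and function name here are hypothetical:

```python
from datetime import datetime, timedelta
from statistics import median

def median_first_response(issues):
    """issues: list of (created_at, first_response_at) ISO-8601 pairs;
    first_response_at is None for issues nobody ever answered.
    Returns (median response time as a timedelta, fraction answered)."""
    deltas = []
    for created, responded in issues:
        if responded is None:
            continue  # unanswered issues count against the answered rate
        t0 = datetime.fromisoformat(created)
        t1 = datetime.fromisoformat(responded)
        deltas.append(t1 - t0)
    if not deltas:
        return None, 0.0
    return median(deltas), len(deltas) / len(issues)
```

A project where the median is hours and nearly everything gets an answer is on a very different trajectory from one where questions sit for days — exactly the signal Brian describes reading off the Slack channel.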
If you go back to the early days of the smartphones, right?
When the app stores were new,
a new product would come on,
it would be easy to, kind of easy, right?
To get noticed out there.
Nowadays, try getting a new app noticed if you're not a major app company out there with all the promotions.
Same thing I think goes with open source.
Like how do you,
if no one's going to pick you
because you don't have contributors,
well, how do you get contributors?
It's the catch-22. But I think there's a lot of those supporting things, like Goranka was mentioning, and the things like the Slack channels and all those pieces, that would give a user an understanding of what kind of open source project this is, and the people that are running it, how engaged are they with it?
Agree 100%. I think support through all of the social networks is incredibly powerful.
So I would say, interestingly enough, I don't have a Twitter account. I don't use Twitter, but I do think Twitter tends to be really good for these kinds of things. So people can bring
their grievances, problems, questions,
and everything else there.
I also understand that that kind of puts more work on the company
because somebody has to really follow those accounts.
Similar thing with, like, a Facebook group; an open group for it may work reasonably well.
Truthfully, the best tool for open source development,
in my opinion, has been Google Wave.
Unfortunately, nobody else cared to use it for anything else.
But man, that tool was fantastic
for any kind of group development projects.
I actually have no idea what is going on with Wave.
I'm assuming it's completely retired and doesn't exist.
But any place where you can enable other users to also answer questions is a good place.
I think Slack is fantastic.
Love the product, even if I am the old-style person, you know. But I think Slack has a really good product, in my opinion. It just sort of works. They've done some things right, even though I can't quite put my finger on what it is about it that I like so much. There is a lot I like.
Cool. Hey, quick question. So the good, the bad, and the
ugly of open source. I know you're going to talk
about it a lot at the conference,
is there any...
So you
explained a little bit about what you would do to pick
the right tool. Is there any other advice
kind of at the end of the show now
that you want to
give maybe some highlight of your talk, whether it's the good, the bad, or the ugly, something for when people start looking into open source: yes or no, shall we use this library, shall we build it on our own? Is there any other thing you want to share?
So one thing that
comes to mind on the don't do this side, and I know Facebook has been
accused of it on occasion, is if you're a large company and you release something in
open source, as you guys have done right now, don't make it just your project.
Go and look for contributors, look for people that will, you know, pitch in and listen
for feedback. You may be large, you may be, you know, fantastic, and you may know what you need.
But if you're putting something into open source, the fundamental motivation should be to help other
people and to actually share this with other people, which means you have to accept other people's input.
If it's open source and you're not listening to anybody,
then that's not really open source.
You know, if you claim, well, this is still my backyard
and I am going to be the gatekeeper
and nobody is going to, you know, send something in without me,
you know, returning it 15 times
and asking for modifications just because,
and I'm going to refuse to listen to any feature suggestions and stuff,
you are really completely missing what open source is supposed to be about.
Cool.
Hey, Goranka, I know you love hiking, and I know you just said that you just hiked a lot in New Zealand. I am really, though, looking forward to having you in Austria, because I know there's a lot of cool places to hike here as well. And, yeah, I hope that, you know, we will see when DevOne is really going to happen,
but whether it's in the spring as it is settled right now
or whether it's in the fall,
it's going to be great having you back in Linz.
And I think it's going to be your first time in Linz.
Have you been here?
I have never been in Linz.
I've obviously been, what is surprising
to me is that it's really not that far away and not that far off of Salzburg. And obviously I've
been to Salzburg, you know, but really looking forward to it. And I will tell you, I am planning,
I still have Tour du Mont Blanc on my hiking list.
So it's definitely going to happen at some point in time.
So there is a lot of phenomenal hikes in that whole area.
Alps, Dolomites, right?
I'm really, really looking forward to it.
Yeah, very cool.
Brian, is there anything else that you want to have covered
before we summon the Summonerator?
No, I think I'm good.
I think we got a lot of great stuff out of this.
So why don't we go ahead and summon the Summonerator?
Do it now.
All right.
So, Goranka, as you may remember, at the end,
I'll try to summarize what I've learned during the podcast.
Now, first of all, what I learned is that in the beginning, you started off with open
source versus non-open source.
It feels like the religious war of Windows versus Linux.
So I think this is obviously not the right way of looking into this if we just have these
completely opposite sides of thinking.
I think as you made clear, it always depends on your situation and your
environment. I believe you have to figure out what is the problem that you need to solve and
then figure out what can help you solve the problem. One thing, one term that I wrote down
in my notes is whatever you do, whether it's going to be open source or whether you build it
yourself, you always have to evaluate in the beginning, maybe the total cost of ownership.
So how long, you know, what's maybe the initial cost,
but also what's going to be the running costs
if I decide to build something myself
or if I use open source
or if I decide to in-house a service from somebody else.
I think total cost of ownership
in the end to solve the problem is a big thing.
Also, thank you so much for the ideas on how to pick a tool.
You gave a great example with JMeter.
While JMeter for you worked pretty well back then
because you could hand it off to somebody
that is not deep into code every day,
working with JMeter might be a completely frustrating experience for somebody that is coding every day, because there's other ways for them to solve their problems. And in the end, I really like the thing,
don't do this. If you're starting an open source project, don't just make the project to solve your problem, but really make sure it solves problems also of the people that should benefit from the open source project. These are external folks. If you solve problems that help them as well, you will be more accepted, you will get more contributions, and this is the road to success, not if you just solve a problem that you have internally. I think these are kind of the highlights. And as I said again,
very much looking forward to having you at DevOne. I really hope we still meet in April,
but yeah, that's a pretty good summary. Thank you. That's great. Yeah, it's quite a talent. And I just had this thought that it would have been awesome if we did have video accompanying this,
because I would really love for Andy to do his summaries while dancing.
I just think that would really put it over the top.
Like, just, you know, do some salsa.
Anyway, Goranka, thank you so, so very much for coming on.
It's been a great pleasure to have you back.
I do want to mention to any of our listeners, you know, I don't know,
I think this is going to be airing towards the end of April
maybe very early May
but not knowing where things are going to be
with the whole coronavirus thing
again, if you do have
a talk
cancelled because
conferences are being cancelled and you don't have a way to present this
and you've already done all the work
if you want to present it in a podcast format we we'd love to have you on and share it.
Obviously, there won't be any visual aids to be able to assist with it,
and we don't really care if it's not completely on topic on the performance side.
We just really want to, you know, a lot of people did a lot of work,
especially the first-time speakers, to get things ready for conferences,
and some of them are being delayed indefinitely.
So if you are in that situation
and you'd like to at least talk about it
and get it out there, get in touch with us.
You can get in touch with us at pure underscore DT
on Twitter or Grabner Andy on Twitter
or I'm Emperor Wilson,
or you can send us an email at pureperformance
at dynatrace.com.
So if you're in that situation, let us know.
If not, well, hey, maybe you'll get to be at a conference sometime in the future or
just keep up the great work.
And we appreciate all you listeners.
And yeah, well, thanks again, Goranka.
Awesome, awesome having you back on.
And hopefully we'll talk to you soon again.
Thank you for the invite, guys.
Thank you.
All right.
Bye-bye.
Bye.