PurePerformance - What is Liquid Software with Baruch Sadogursky
Episode Date: March 8, 2021

You heard about Continuous Integration, Continuous Delivery and Continuous Deployment. Liquid Software aims to provide the next step towards Trusted Continuous Updates in the DevOps World. In this episode Baruch Sadogursky, DevOps Advocate from JFrog, explains how as engineers we need to add "Updateability" to our non-functional requirements and how product managers and marketing have to forget about traditional releases but think about incremental delivery of value. Baruch (@jbaruch) also promised to send everyone a hard copy of his book "Liquid Software" if you send him a direct message – so – make sure you do that and also check out the details on our discussion of uniquely identifying artifacts through Build-Info.

https://www.linkedin.com/in/jbaruch/
https://twitter.com/jbaruch
https://drive.google.com/file/d/1PUb67FxM-eTtdyLNGPc-fGTcCJii-keE/view
https://github.com/jfrog/build-info
Transcript
It's time for Pure Performance!
Get your stopwatches ready, it's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello everybody and welcome to another episode of Pure Performance. My name is Brian Wilson and as always, my right-hand man, my left-hand man, my front-hand man, my whatever,
my beacon of light in this dark universe, Andy Grabner.
Hi Andy.
Hi, that was a new one, the beacon of light. I've never heard that.
It just came to me.
I was inspired by probably the lights behind you in your video.
Talking about video, we don't have a video,
but we took a screenshot earlier where we will actually let people,
maybe on Twitter, look behind the scenes a little bit,
what's happening here.
Yes.
Well, Andy, I haven't talked to you for 24 hours,
so great to talk to you again.
Yeah.
The beard is still the same length as yesterday.
No, it's grown a little bit.
Yeah, just a little bit, yeah.
But other than that, not a whole lot of things have happened since yesterday.
Great episode, though, on securing the delivery pipelines and kind of making sure that you are protecting, what did he call it, the supply chain, the delivery supply chain of software.
We had Michael Plank or Michi on from Dynatrace explaining how we produce secure software,
which was also great because we talked about continuous delivery
with a security aspect.
And maybe this also gives our guests some additional ideas
on what he wants to talk about today,
because we have, and hopefully I pronounced your name correctly,
Baruch, on the call today.
I will let you introduce yourself, but I just have,
before I let you go, I obviously see you on Zoom,
so I really love the way you look, like your head, your beard.
It's just perfect.
If anybody wants to see him, just open up LinkedIn, search for him,
Baruch DevOps Advocacy at JFrog.
Really cool outfit.
Just love it.
But now, please introduce yourself to the guests, to our listeners.
Thank you very much, Andy. Thank you very much, Brian, for having me. My name is Baruch
Sadogursky, and my business card and my LinkedIn say that I'm the Chief Sticker Officer at
JFrog. And this is what I do in times other than pandemics: give people stickers when I meet them at conferences and meetups
and everywhere.
So yeah, also I'm the head of DevOps advocacy with JFrog.
So I'm leading the team of our DevOps advocates.
And you know, it's a part of developer relations
and everything that comes with it,
interacting with people, hearing what's their pains,
and trying to come up with solutions
that will help them produce software faster,
more reliably, better,
and generally for the pleasure of their users
and their managers.
Pretty cool.
Now, looking at LinkedIn, you live in California, so on the West Coast.
Yes.
But I guess people can guess by your accent that you are not a native Californian.
Where are you from?
Yeah, so I was born in Russia.
I lived most of my life in Israel.
And yeah, now I'm here in the States.
Yep.
See, I thought that was a Valley accent.
The funny thing about Valley accent is that everything is Valley accent.
Yeah.
You know, we have people from all over.
So whatever accent you hear, you can consider it as a Valley accent.
So Baruch, I think the way I reached out to you,
I actually saw a couple of posts on LinkedIn
by one of your colleagues.
And he's been posting a lot of cool content,
educating people on the latest and greatest on Kubernetes
and all sorts of things.
And then I reached out to him.
I said, hey, it would be great to talk. Then he said, well, he's just posting content, but he's really
not the expert, we have to talk to you. And then I said, well, then introduce me to Baruch, because I
want to talk to you. Because then I also found the book that I have in front of me,
or the electronic version of it. It's called Liquid Software, and this is really
kind of the theme of today's podcast. Because, first of all, Liquid Software sounds like
a really cool term. You mentioned that in your role you want to help people build better software
faster, so it's kind of what we all, I think, have tried to achieve. But I really like the
term Liquid Software and I would love to talk about it.
What is this all about?
Also, how does this differ from what other people say when they talk about continuous delivery, continuous deployment?
How is liquid software different, or is it the same, or how does this work? But in the end, what can you, based on your experience, tell our listeners about how they
can, in the end, produce better software faster?
That's a great question.
And obviously, thanks, Pavan, for this introduction and for getting me invited.
And yeah, Liquid software is a concept
that one of JFrog's co-founders, Fred Simon, came up with.
And the idea about it is continuous updates.
We're going to talk about continuous updates in a second,
but on the grand scheme of things
is you update your software so frequently
and with so tiny intervals
and in such small deltas that it almost looks like the software is flowing
from the producer to the consumer.
And this is the idea of liquid software.
This idea, I think he came up with it maybe, I guess, like three or four years ago.
And it became more than just the vision.
It actually became a methodology, which is called continuous updates.
And then a couple of years ago, he, another JFrog co-founder, Yoav Landman,
and yours truly, we wrote a book.
It's called Liquid Software,
How to Achieve Continuous Updates with DevOps.
And this is the book that Andy, you refer to.
And it's a lot like other continuous things in our space.
They obviously all have roots in continuous integration
back in the 90s and then
continuous delivery with Martin Fowler and Jez Humble. And this is a continuation of all those
things. That's the next continuous thing. There are some interesting differences between continuous delivery and continuous updates.
And I would say that continuous updates just shift the focus a little bit.
While continuous delivery is by definition delivery of any software, new versions, new applications and updates, continuous updates focuses on the part where most of your releases, 90-whatever percent of your releases, are not a blank slate, not a clean slate. You are not delivering new software in a vacuum. Instead, on the other side, there is something that is already running, and your goal
now is to update, not necessarily roll out new software. And there are interesting aspects about
that that we try to highlight in the continuous updates methodology: mostly that you need to be aware of the state,
what you do with breaking changes,
and how you work in a way that your customers, at the least, are not even aware
that there is a version change of the software.
Because what we think about and what we envision is something like
whenever you need to update the software,
you are not installing a new version of the software.
It's just something that is changed inside the version that you already have.
Think about all the latest and greatest browsers, Chrome, Firefox, and whatnot.
You are not aware which version of the browser you're running. It's just once in a while,
it gets new capabilities. And you don't think in terms of versions, right? They are on version 84; it doesn't mean much. What matters is,
well, they now have such and such new capability. You know what? Google Meet now
works better. And you don't even know whether Google Meet works better because of the backend changes that they
did, or maybe because your browser now supports a new protocol. You don't even know that.
You don't think about it.
All you know, in terms of consumers, is: well, yes, since yesterday, Google Meet works better.
Does it make sense?
That makes a lot of sense.
So does this also mean... well, a couple of questions here.
The first question is more technical, about the way we approach software engineering.
Do we need to engineer the software we build to be easily updatable by default?
Is this a new capability that we have to bake into our architecture and our software?
Yes.
So there are changes that we need to make. And you know what?
First and foremost, in the concept of how we plan for updates.
And basically, once we give up this notion that we are going to increment in versions
and everybody needs to be aware of them
(we, the customer, other components of our system),
suddenly, you are in a different mindset.
You say, okay, there is a certain state, not a version, but a state of software that might exist in my entire ecosystem. One of the customers can have one state of the software,
the other can have another state of the software.
My dependencies can be in different state,
and then you need to plan ahead for all of that.
It sounds more complicated.
The version sounds easier.
There are compatible versions and not compatible versions, and this is pretty much
all. But on the other side, it also gives you the flexibility to be backwards and forwards
compatible out of the box. You just live within the concept of a single version until there are changes
which you have to make that are breaking.
But when that happens, you just declare,
hey, I have a new software now.
It's not a next version.
It's just completely new software.
It's a little bit like, if you're familiar
with versioning in Go,
how they did it in Go modules with breaking versions:
version two, for the machine, is a completely new dependency, right? The URL in the
import changes, and for the dependency manager it means it's a new dependency.
It doesn't even have to be called the same.
Once the major version changed, it might be called something completely different.
It's exactly the same as any other dependency.
And this is where the difference between updates of an existing software
and installing new software kicks in.
And then we say, well, okay, that's a new software.
Now we installed it.
Let's start operating with continuous updates within this next version.
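To make that Go modules point concrete, here is a minimal sketch, not from the episode, assuming a hypothetical module example.com/widget: a breaking change means a new major version and a new import path, so the toolchain treats v2 as an entirely new dependency sitting next to v1.

```go
// A sketch only: "example.com/widget" is a hypothetical module.
// In Go modules, a breaking change gets a new major version and a new
// module path, so v2 is, to the toolchain, an unrelated dependency.
//
// The consumer's go.mod can even require both lines at once:
//
//     require (
//         example.com/widget    v1.8.3
//         example.com/widget/v2 v2.0.1
//     )

package main

import (
	widget "example.com/widget"      // the old line keeps working untouched
	widgetv2 "example.com/widget/v2" // the breaking change: "completely new software"
)

func main() {
	_ = widget.New()   // hypothetical v1 API
	_ = widgetv2.New() // hypothetical v2 API, adopted where and when you choose
}
```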
Now, how does this work?
A lot of people are moving towards smaller services.
Let's take the term microservices, even though
some people have problems with it. But if we have service A and B, do I then, as a consumer of a
service, if I'm A and B has different versions, do I have to have specific code for all the different
permutations of versions that that service has?
We have worked with loose coupling and encapsulation for, like, what, 40 years now?
This is exactly the same concept.
As long as your APIs are backwards and forwards compatible,
and we know how to do that, you can change your service every day.
And no one around you should feel it.
Yes, it requires more discipline in terms of changes,
but that's not new.
Semantic versioning has a very, very detailed protocol
on what can be broken and when.
Yes, we don't follow that,
but that's because we're humans and we make mistakes.
This is not going to change.
There is no way suddenly when you give up the versions and talk about liquid software that people are going to make less mistakes.
No, we are not claiming that.
We're aware that this problem will remain and it will be the same problem as with semantic versioning. It's just we simplify the idea by saying,
consider it all the same version until it's not.
And what it gives us, once we already do that,
is obviously seamless updates, where people are less reluctant to update,
just because, why not?
It's the same version and whatever changed shouldn't break it.
Will it break it? Probably it will.
Is there any other solution? There is really none.
It's not more risky. It's the same amount of risk.
And the risk comes from the idea that people suck.
We said that in yesterday's recording. Yeah.
We're talking about security and all this stuff going on. It's because people suck. Yeah.
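To make the compatibility point above concrete, here is a minimal sketch, not from the episode or the book, of an additive API change that old consumers simply ignore, versus the kind of rename that would force the "declare new software" step described earlier.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// StatusV1 is what existing consumers already parse.
type StatusV1 struct {
	Healthy bool `json:"healthy"`
}

// Status adds a field. Old clients simply ignore "region", so this change can
// ship within the "same version" (a minor bump in semantic-versioning terms).
// Renaming or removing "healthy" would be the breaking change that forces a
// new major version, i.e. "completely new software".
type Status struct {
	Healthy bool   `json:"healthy"`
	Region  string `json:"region,omitempty"` // new, optional
}

func main() {
	payload, _ := json.Marshal(Status{Healthy: true, Region: "us-west-2"})
	fmt.Println(string(payload)) // {"healthy":true,"region":"us-west-2"}

	var old StatusV1
	_ = json.Unmarshal(payload, &old) // an old consumer still gets what it needs
	fmt.Println(old.Healthy)          // true
}
```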
Now, in your book, do you cover a couple of, let's say, let's call it best practices or
things that developers can do, especially around version discovery, and then,
based on the versions that they discover from the dependent services, how they also change
their behavior?
Are there any best practices around testing?
Because obviously we will have more and more different versions and combinations of versions.
Are there best practices around that as well?
Absolutely. Well, but here again, the more load on versioning doesn't come from continuous updates.
It's the other way.
Well, yes, there are more options of permutations from different integration points,
but this is just the outcome of having smaller batches.
And having smaller batches is not something
which is new, unheard of,
or something that we don't want.
It's the other way around, right?
The entire Agile and DevOps
are about having smaller batches.
So the fact is, the reality is we have smaller batches.
We have more complicated metrics of testing.
And we don't make it more or less complicated.
It's the same idea.
What we just say is expect people to really have all those combinations
and not only the combinations that for some reason you decided that people should have.
At the end of the day, this is a lie.
People will have whatever combinations they will have.
And if you don't test them because you think they are impossible, then you just don't do your tests well enough, not that they're really impossible.
Now, moving away from software engineering and more to the way we plan software, also how we
are releasing things to the market: I think you said something very interesting earlier where you
said, well, back in the day we talked about version one, version two, version three. This is mostly, at least for, like you mentioned, the browser, the Chrome example,
this is gone.
Nobody cares about which version of Chrome I'm on.
At Dynatrace, we also, I mean, we internally have numbers.
These are sprint numbers, but nobody cares about the number anymore because we know when
certain, if a certain capability is there.
Exactly.
But that also means that we have to fundamentally also shift towards
value creation teams, I guess, where we say, hey, we're not planning on this big release that we
call, I don't know, February of 2021, but hey, we have five value streams going on. And as soon as
the individual ones are ready, we may push it out. We may update it and
then give the customer the opportunity to either get it automatically, or maybe I think in the part
of the book that I wrote, you also mentioned, I think, mobile apps, how we on the mobile side,
obviously, we have to click on a button and say, we want to update now. But we don't update to a
certain version. We just say, give me the latest because I get these capabilities.
You nailed it on the head.
The entire version thing is completely marketing, right?
It's the ability for marketing to talk about a bunch of features together
and give them a common name.
This is what it is.
And obviously, it requires a change exactly
there. And I think that
the idea that we see more and more
liquid software and the browsers is still a very good
example is because this is where
the marketing pressure was the lowest.
And this is how they managed to put it through because you don't really do a
lot of marketing around versions of your browser.
For other things, you still are. Think about, I don't know,
whatever software you're using on your desktop; it will always be, hey, there is a new version 9,
and this is three pages of new features,
and for all those features, you want to pay us $99 for the upgrade.
But the good news is that the market is shifting very rapidly to a subscription model.
You just pay money monthly, and then you will just get whatever new features there are.
And this is what enables us to move to liquid software without hurting the bottom line. Because once you already pay for subscription anyway,
marketing doesn't have to make the update
batches large enough and lucrative enough
to kind of convince us to upgrade.
No one cares.
We already pay anyway.
So just give us the versions
and just give us the latest numbers
and the latest features.
And this is where our opportunity lies.
We can steer away from versions because the biggest opposition,
the biggest case for having the bulk updates,
the marketing story, now shifts to subscriptions,
and then they don't care anymore.
I absolutely abhor the subscription model.
I understand your point of view,
but I think the way you described it
as empowering the team
to just be able to continually release
and put features out
that makes absolute sense to me, though. Just personally, if I paid for
my software, I want to pay for it, be done with it, and not think about paying for it again unless
there's actually a reason that I want those new features. And I'm talking about not necessarily
business. Like, I do a lot of audio stuff, right?
Yeah, no, but it's the same. And that's a very valid point, and I would say two things. First,
no one cares what you want or what I want, right? The market, the business, moves to a subscription
model because the companies that provide a steady stream of income from subscriptions are more successful.
And this is why it actually happens.
So there is no way around that.
But also, when you say, you know what, let's try and imagine how we can marry continuous updates with perpetual
licenses, and you say, you know what, I only want to pay when I see value, then I'm telling
you, okay, I'm going to update your software almost every day. Sometimes it will be more than once a day. Sometimes it will be bug fixes.
Sometimes it will be new features.
How do you see yourself staying on top of that and paying only for what you want?
How do you even know if out of three bug fixes that I released today,
you want two and don't care about the third?
I think it's a fair point.
I think it all comes down to the software that you're using.
So in my mind, while you were describing that,
I was thinking about my audio software.
Yeah, no, it's a good one.
And in that case, it's like, if I don't have bugs, if it's working fine,
I don't want bloatware, because oftentimes things get bloated as you go on.
If my system is running and my systems are processing, I don't want to touch them.
That's a different case.
It's a completely different case.
Anyway, this is, I think, a religious argument.
Brian, I love it because it's a very valid objection.
And who owns the software that we're going to
update? Then I would say the solution will be: you can stop paying for the service whenever you
feel your software is feature complete and works as expected, and then you get the perpetual license
for whatever you already have.
If they offer that. That's the key. If you're not paying, you don't get it.
We're talking about the vision, right?
We are talking about how to solve this, how to marry the liquid software vision
with the idea that you want to own your software. So that can be one idea. You can say, you know what,
this audio processing software of mine, I feel it's feature complete.
I don't need new features, I also don't encounter a lot of bugs, I want to freeze it in time, I want
this. Here is the money. Yeah, well, I already paid the money, I don't want to pay anymore, and
I will own what I already paid for. Well, obviously, this is where we kind of disconnect you
from the continuous updates that your software got,
and you have what you have.
Next time you want something, we will install, as I mentioned,
a completely new software on a blank slate,
and we will start continuing the updates from there on.
Yeah, and on the flip side, in terms of the solution,
because I think it's great that you turned that into a solution,
I would just say on the flip side of the solution would be that
if things are subscription-based, they actually get updates
and they get the improvements, and these things are actually going on.
Because I think what happens is a lot of people switch to subscription-based,
but they're not continuously delivering,
which is probably where a lot of the pushback comes from.
I think, again, vision-wise, in a great world, if you are going subscription-based, it's because you can deliver on that promise and you're bringing value.
And if that were the case, then I probably wouldn't have the issue with it that I do now.
So it's a chicken or egg.
Anyhow, we kind of went off on, as Andy would call it, probably a little bit of a religious argument there. No, but Brian, I want to
give an example that actually happened to us. Remember, we went
through two or three different subscription-based services
to record podcasts. If we had paid a big
upfront cost for the first software we used, we would have been really mad,
because we didn't like it.
But we had the ability to say, well, we don't extend for another month and we just jump
to something else.
And then we tried that.
And in the end, it turned out again, we had problems with it.
Right.
But at least I think this model also allows you to switch faster.
I mean, obviously, depending on the software or the service, but I think it also makes
it easier to switch for the consumer.
And this is also, I think the motivation obviously for really successful companies to continuously
improve their service and bring new features faster than their competition, because they
know otherwise they cannot retain their customers.
Yeah.
Yeah.
And I think it's a great idea.
And I think it's, as we say, the concept,
just like when the concept of, let's say, DevOps came out or even Agile, right?
There was the idea, and the idea is way out here,
and everybody's back here towards the beginning
and needs to work towards it.
And I think with the Liquid Software concept,
very similar.
You have some people who are actually delivering on it,
who are actually doing these things well.
And then you have other people who are saying, oh, we'll just charge for that concept, but we're not there
yet. But if that gives them the inspiration to get to that end point, so that we are delivering
better software faster, bringing value to everybody, then I'm all in on it. I guess I
was just looking at it more like, where is it now? Which is just, you know, I'm grumpy today, so sorry.
No, no, but this is great.
That's exactly the kind of problems and questions that we need to answer.
And it's great that someone is asking them, and I love it.
And obviously, there are a lot of combinations of marketing and sales and technical, which are not aligned yet.
Yeah. But the move to services, to subscription, even if we hate it in
some particular use cases, is still a huge enabler for liquid software
because we don't need to fight with marketing anymore.
We don't need to tell – they now won't tell us, but wait,
we need the changelog to be able to sell this upgrade.
And for us, for the technical people, it's critical because liquid software couldn't have any success without the switch in sales and marketing strategy.
So what I'm saying is that there is a great opportunity now to catch up with sales and marketing and business and actually provide value in the way that will make the subscription model make sense,
even for you, Brian.
When you see that your software is constantly getting more value,
you will say, you know what?
I know what I'm paying for.
I don't feel cheated.
And I'm fine with paying whatever, $10 a month.
And if you think about it from a business point of view, that makes a lot of sense too, because in that exact example, I'm not upgrading to the latest version because it's
going to cost me like $150 to upgrade to the latest version of my recording software, and there are no
compelling features in the update. You know what I mean? Like, I don't need any of those features they
put in there; everything else is minor tweaks. I'm like, yeah, you know, sometimes it's great. But to them, from a business point of view, it almost hurts them,
right? Because what they put in that package of that big version didn't have enough to
compel me to put the money into it. Now no one will upgrade, now they're not going to get
the money from me. So how do they keep paying their developers to put out the next stuff? I mean, I'm
one person, but...
Exactly, exactly. Yeah, it's very interesting. I didn't upgrade to the
latest, to the latest version, because none of whatever they put in the upgrade was there for me.
But instead, if they were on, not on services, on
subscription, and I was hooked up on their subscription, because once in a while there
is something that I want, I wouldn't probably cancel just because, well,
in some month or some day I didn't see anything.
Yeah.
Yeah, exactly.
Interesting.
Thanks.
New way of viewing it.
Yeah.
So, Baruch, when I read through your paper, your book,
I took a couple of notes, and I got to say,
I read it before the Christmas break.
So I don't recall every single chapter, but as a couple of items that are highlighted here in my
notes, and one is called metadata. The importance of metadata, metadata about every artifact,
where it came from, where it's running, quality, which security checks were done,
which performance tests were executed. So I took some notes. I'm not sure if they
came one-to-one out of your book or if I then just scribbled down some of my notes because
I highlighted performance tests because Brian and I, we are by heart performance engineers.
So can you tell us maybe a little bit more about this metadata and why it is so important and
what people need to understand
and maybe think of when they're building their containers, their apps and services.
Why is metadata so important?
Why did you highlight it?
Yeah, so metadata is critical because that is all that's left once the version is going
away. When you look at the artifact
at the moment, you will probably be able to know
which artifact it is, because the file name
usually contains some kind of identifier,
be it the build number or version number
or commit SHA or anything like that.
And then you can, after a long and painful investigation, get to what this artifact really is, how it was built, what the source is, where it was during different points in the pipeline.
Basically, if you have any identification of the artifact, you will be able to, at the end of the
day, find what this artifact is and information about it. Now, obviously, even today, if you just have the name of the service,
dash 3.7.5, it's still very hard to find anything.
But if you don't even have that, then what do you have?
If we do away with versions and we say, well, every artifact can be,
you know, a result of any build.
How do we know?
Well, this is where the metadata comes into play.
And we can say that the identification of the file, we know it.
And that will be the checksum of the file.
Assuming the checksums are unique, if you take a certain array of bytes, which is our software or a module, it translates to a unique checksum. And then we can go to our database of DevOps, if you wish, and ask, what is this artifact?
And there is all the metadata that you collected through your pipelines. And it starts with the source control revision
and everything associated with that,
the commit notes and the issues and everything else that came from source.
And then everything that happened to this artifact through the pipeline, how it was built,
what were the environment variables in the CI server, what were the settings that this build was configured with.
And then every step of the way in the testing pyramid in our promotion pipeline, which tests were run, how was the security, how was the performance, how was that?
Everything about this checksum has to be captured.
And this is critical because otherwise you know nothing.
This is why metadata becomes more and more important. With smaller batches, faster releases, and microservices, the information about how
everything works with everything else becomes critical, and this is the metadata that you need
to capture for every artifact that you have, under its checksum, under the only true identifier that every file, every array of bytes,
in the universe has. Does it make sense?
It's almost like its genome.
It is. It's exactly that. It's exactly the same idea. And again, the idea that the only thing you care about is how this array of bytes is represented in a unique fingerprint,
in the checksum, liberates you again from the concept of versions and the entire set of
problems of overridden versions and mutable versions.
And well, are we sure we didn't build again under the same version number, and all this kind of crap.
This crap just doesn't exist anymore.
Once you say, all I care about is the content of this array of bytes,
this is its identifier, its unique fingerprint,
in SHA-256, in SHA-512, whatever you feel comfortable with that won't ever create any conflict with any other array of bytes in your system.
And this is my identifier, to which I first attach all the information that I can possibly find, collect, and know about this artifact.
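As a small aside illustrating that fingerprint idea, here is a minimal sketch, not from the episode, using Go's standard library and a hypothetical artifact file: hashing the bytes yields the identifier under which all the pipeline metadata can be stored.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

// fingerprint returns the SHA-256 of an artifact's bytes: the same bytes
// always yield the same digest, regardless of file name or "version".
func fingerprint(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", h.Sum(nil)), nil
}

func main() {
	// "service.jar" is a hypothetical artifact; its digest would be the key
	// under which all pipeline metadata (commits, tests, scans) is stored.
	sum, err := fingerprint("service.jar")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(sum)
}
```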
And now I can use query language.
I can use UI.
I can use whatever I want to examine this metadata and make smart decisions about it:
how it should be deployed, to which clients it should be delivered, or how a certain problem
happened, and how I investigate it and how I solve it.
Now, are there any standards for this? The reason why I'm asking: as you know, Brian and I work for Dynatrace, so when
we monitor environments we see hosts, we see processes, we instrument code. From a process
perspective we see, you know, what type of runtime it is, which binaries get loaded, which JAR files get loaded.
So we have a lot of metadata about these files.
Or when we talk about container workload monitoring,
we are extracting labels and annotations
from your deployment descriptions.
And I know on Kubernetes, there are some,
let's say, best practices already,
like how you're passing version information,
application affiliation, environment information.
Is there something that you explain here
with like a fingerprint, like your SHAs?
Is there something that is actually kind of a standard for that
where the industry has already started on agreeing
that every time
we deploy a container, the unique fingerprint should be, you know, part of that particular label
or something like that? Because that would be interesting for us as well on the Dynatrace side,
because we can then pick it up and then, I guess, integrate or make a query to your tools, right,
the ones JFrog provides, and say, hey, give me more information about this artifact, because we just detected in production
that there is a performance problem.
Yeah.
So unfortunately, or maybe fortunately, it's an opportunity:
there isn't one yet.
There is no industry standard for how we collect, how we store, and how we use this metadata.
There were some attempts in the past, and I think the most notable was from Google and a lot of
different companies, including JFrog: Grafeas. I mention it in the book, there is a chapter about Grafeas as well, but unfortunately it didn't go anywhere. What we do have covers at least our part of the universe, which is how we label artifacts with the metadata across
the entire pipeline, starting from the source all the way to distribution to the runtime,
whatever this runtime is, container clusters or edge devices or your laptop or whatever.
And we do it in an open standard, well, it's not a standard, it's just our standard,
of what we call the build information and the distribution information.
And these are just documents that describe this metadata.
They are open source and they are machine readable,
obviously for automation around it.
And obviously they are parsed inside our tools for operations with them,
like promotions using the build info,
and obviously also in the UI.
So yeah, we think about it,
and those are early days.
I don't want to commit to anything
that we won't be able to deliver,
but we think about doing more with
those formats and kind of trying to popularize them, and maybe seeing if the community will want to
adopt them, just because they are very well thought out. We have been doing the build-info part, for example, since 2009.
So that will be 11 years that the build info concept and format has been in continuous improvement.
And we'll see if that's something that works for the community and the industry. Maybe that will become the standard, at least de facto, for capturing, managing, and using the metadata about artifacts.
That's pretty cool. It would be interesting to see how we on the Dynatrace side could potentially use this information or, you know, help you with our community
to drive this more towards a publicly accepted standard.
The other area,
and I know Brian,
you will love this now because I will mention the word Keptn.
I was,
I was going to mention it,
Andy.
I was going to say,
I could see you pulling it into Keptn.
Yeah.
Yeah.
So because we have an open source project called Keptn
where we are orchestrating processes around delivery and operations.
And basically we do a similar thing.
We have a unique, we call it a trace context.
So we are tracing the lifecycle of an artifact from its inception
until it's in production.
And with that, when something ends up in production
and something happens, we can trace it all the way back to the initial push of the container, for instance, into a
registry.
Exactly. Exactly. This is it. And that's the whole idea. And that's the big
question, right? If something happens and we have an artifact: what is it, how was it built, why was it promoted, which tests were run?
And from there we can start to investigate and know what the problem was.
Go on, Andy.
I was just saying thanks for sending over the link just now. We will definitely add this to the podcast proceedings, the build info.
So, github.com/jfrog/build-info.
Yeah, and as I mentioned, it's not a formal standard, but it gives you kind of the idea of what information we have there and why. Maybe we should take it beyond being just our own thing and start talking about it more as, if not the standard, then at least an idea of how this metadata can be used and shared
across.
But that's a good start.
Yeah, got to start somewhere.
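For readers who want a feel for what such per-checksum metadata might look like, here is an illustrative sketch only; the field names are assumptions made for the example, not the actual build-info schema, which lives at https://github.com/jfrog/build-info.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ArtifactRecord is illustrative only: a rough shape of per-artifact metadata
// keyed by checksum. The real build-info format is documented at
// https://github.com/jfrog/build-info; these field names are assumptions.
type ArtifactRecord struct {
	SHA256      string            `json:"sha256"`
	BuildName   string            `json:"buildName"`
	BuildNumber string            `json:"buildNumber"`
	VCSRevision string            `json:"vcsRevision"`
	Environment map[string]string `json:"environment"`
	Checks      map[string]string `json:"checks"` // e.g. "performance": "passed"
}

func main() {
	rec := ArtifactRecord{
		SHA256:      "9f86d081884c7d65...", // truncated example digest
		BuildName:   "checkout-service",    // hypothetical build
		BuildNumber: "1042",
		VCSRevision: "a1b2c3d",
		Environment: map[string]string{"CI": "true"},
		Checks:      map[string]string{"security": "passed", "performance": "passed"},
	}
	out, _ := json.MarshalIndent(rec, "", "  ")
	fmt.Println(string(out))
}
```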
I would also say, I would want to put my two cents in to say, please make sure whatever
happens with all this, it's 100% automated.
Because if we rely on people doing it, we're going to be back at the same place we were
with code comments, right?
Where no one's doing it.
Just to pull this into a parallel world, something I never really talk about, obviously on the
podcast, because it's not tech related, but my daughter, she's on cannabis for her seizures. And for years,
I've been making her extracts. And I see this as a very similar kind of a thing, because,
you know, I find one plant that works from one dispensary. Someone says, oh, can't you get it
from another? I'm like, no, I can't. Because that's a name, that's a marketing name,
it's a version number, it's whatever, right?
The reality of what that's in that plant,
whether it's cannabis, any other medicine,
whatever that you're pulling from a plant,
is all the analysis that's done.
So when you do the scientific analysis
to find out which chemical components are in that extract,
which terpenes and other components are all in there.
That's the true marker of what that is.
That's all depending on the grow conditions,
what's fed in the soil and everything else too.
Same thing with your software.
It's not about build number five, release 15,
or sometimes they get fancy names.
It's those deep, deep characteristics
that are in there that define
what that is. And it's really
an awesome concept.
It applies to a lot of things.
And again, you're 100% right.
What's important here
is the unification
of the thing.
It's having the
common language, having the common terms, and
everybody using them, because otherwise it will just be fragmented marketing terms that everybody
invents on their own, right? And we can always compare it to standardized industries like
winemaking or whatnot, like comparing it with cannabis, for example,
where everybody comes up with their own marketing term and you go to the dispensary next door
and they have no idea what you are talking about. When you're going to buy wine and you say, hey, I
want champagne, everybody knows exactly what you mean, and there cannot be any misunderstandings about that. So yeah, I think
that's the ultimate goal, and while we move away from versions, that becomes more and more critical.
Yeah. Hey, I have one more section that I want to highlight, or that I want to ask you about,
kind of at the end of my notes that I had when I read it.
And I want to read it out loud because I really like it.
I think it was one of the captions in your book.
And it says, to err is human, to validate is robot.
And I made some additional notes saying,
rethinking QA and validation: go and no-go decisions
have to be automated based on data processed by
smart algorithms.
It's kind of some notes that I took, and it plays perfectly also into stuff that we've been
talking about, Brian and I, over the last couple of years, how we are using automation
to extract quality data from tools like your testing tools, your monitoring tools, and then making automated decisions based on that data
by comparing it to, let's say, what's happening in production,
to previous builds, and so on and so forth.
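As an illustration of that kind of automated go/no-go decision, here is a toy sketch; the metric names and thresholds are assumptions, not taken from any specific tool or pipeline.

```go
package main

import (
	"fmt"
	"os"
)

// gate is a toy "validate is robot" decision: promote only if the candidate's
// p95 response time has not regressed beyond the allowed tolerance.
func gate(currentP95Ms, baselineP95Ms, tolerance float64) bool {
	return currentP95Ms <= baselineP95Ms*(1+tolerance)
}

func main() {
	baseline := 120.0 // assumed: p95 of the previous build, pulled from monitoring
	current := 131.0  // assumed: p95 measured for the candidate build

	if !gate(current, baseline, 0.10) {
		fmt.Println("no-go: performance regressed beyond 10% tolerance")
		os.Exit(1)
	}
	fmt.Println("go: within tolerance, promote the artifact")
}
```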
Now, in your work, especially with the organizations that you work with,
is there anything where you see that people are still,
like, what can people do to actually get there?
Because I know we talked about it.
It sounds exciting.
It's amazing.
But still, we see that a lot of people still don't automate exactly that,
either because they don't have good enough test automation
so they'll get reproducible results that they can actually compare
and good data or other things. So what can you tell people of how they can actually get to that place where we
can give algorithms enough good data so that we can then make better decisions in that process
of delivery? So I'm very opinionated about that. And I would say that people don't have automated processes and have
those manual steps for two reasons. One is good and valid; the other should go away.
And the good and valid one is: hey, we're getting there, right? Stuff is hard to automate. There are a lot of human knowledge processes and bureaucratic procedures that are not trivial to put in code and in automation. It's just not easy. But we understand that we want to do it, and we automate as we go. And at the end of the day, this should be the goal.
There is another one that I'm much less okay with.
And this is the concept of,
hey, we want to have this last minute human approval.
Now, this one really pisses me off
because the only reason you truly have that is to have a scapegoat.
To say, well, if something went wrong, this is because it was their job to prevent it by not giving the approval.
They went ahead and approved it anyway, and this is now their fault. Now, what really
drives me nuts about it is that I think by now, everybody realizes that machines can do much
better validation work than humans. And what you do by appointing this approver is actually you are putting someone in jeopardy just by doing that.
Because if you have a good automated process, you already checked more than this single person can possibly validate in the 20 minutes that they
will look at it. And if you don't have a good approval process, then it's not this person's
fault that you don't have a good automatic approval process. Just go ahead and improve
your pipelines. So there is really no good reason for making someone an approver on top
of the automated pipelines, because they won't ever be capable of doing the checks that the machine
already did. Think about performance testing. You have elaborate labs of automated performance testing
with Gatling, JMeter, Blazemeter, whatever, you name it. And they hit your software with
extraordinary load. And at the end of the day, your software passes those checks.
And then you have an approval human that you ask, at the end of the day: is this software okay from a performance point of view? What can they do? How can they bless, or how can they reject, the automatic tests? What more do they know as humans than all the automatic pipelines that you have?
The same with security.
You have stuff like JFrog X-Ray
that shifts left all the way to the developer.
The developer adds a new artifact, and their IDE complains that they have a vulnerability.
And then, regardless of what they do, there is another check in the CI,
and there are other checks in the CD.
And those massive systems check tons of security
vulnerability databases in the world and come to the conclusion
that there are no vulnerabilities by the time that the human
needs to approve it.
What could human knowledge possibly contribute to those checks?
No, I agree with you.
And I think it sounds a little bit like maybe you have also listened to some of our stories
and our presentations because we are basically saying the same thing, right?
I mean, on the one hand, it is, I think, the lack of, I don't want to give up a particular
position that I have earned over the years by becoming
the performance expert that can look at two dashboards of two tests and then compare it
within a small amount of time, but still manual.
Maybe some people don't want to give it up.
But on the other hand, I also agree with you.
It is these processes that are still in place in organizations that may not want to change
for whatever cultural
or whatever reason. I completely agree with you. And I think some of the resistance comes from the
fact that people are going to resist initially and they're going to say, how is it going to know,
let's say the test gets rejected because performance was poor. Well, I need to be a
human to sit there and say, well, let me look at why the test result was poor.
Maybe there was a mistake somewhere in there
because we need to get this out
because marketing says it's got to be out tomorrow.
What that really tells me is, though,
that there's a breakdown in the process
and that the humans should be,
instead of spending the time making decisions and researching if there was a problem with that test, that time should be spent making sure that the tests that are developed, the way they're developed, how they're set up, the data they're using is valid, so that when the data comes out for the computer to analyze, everything along that pipeline has already been validated as being done correctly.
And that's where the human comes in,
is in the design flow.
Same thing we see when everything starts moving
to an automated pipeline, right?
You have your operations team who used to deploy code,
who used to order new servers and all.
They start coding the pipeline.
They start managing the pipeline
and figuring out what tools need to fit into this pipeline
so everything could be automated.
It's just shifting that responsibility, right? But if people aren't
initially thinking in that mindset, they're going to panic and be like, well, we can't just put it out,
someone's got to look at it, you know, and do all your checks. But that's, again, this huge cultural
shift. We're in this time where so many of the conversations Andy and I have, and I'm sure you do as well, are all about culture, you know.
Absolutely.
These cultural shifts are where everyone's on the precipice. And I go back to when I was a young
child, maybe eight or nine years old, and I went to the, you know, public swimming pool,
and they had the high dive, which was maybe 15 feet in the air. And I wanted to jump off of it and I was scared.
It was really, really scary.
And the lifeguard saw me struggling and said, okay, just walk to the edge, close your eyes, and when I count to three, you take a step.
And I did it.
And it was the best thing in the world.
But that's where everyone is right now.
They're on the edge of that diving board and they're scared to take that step.
And you know what, I get it. And, yeah, obviously that's fine. Taking the dive, it's one way to do it. Baby steps is exactly the opposite, but also, like, super useful: you automate
parts of your process and you trust those small parts all the way.
And next thing you know, everything is automated.
Yeah.
And you just, you know, you're done.
But what I am upset about is the people that are set up for failure, maliciously,
when we actually understand that this person cannot contribute more than our pipeline.
And they are just there to take the fault when actually the pipeline that we built fails us.
But we cannot blame it on the machine, and obviously not on us.
But here we have a scapegoat that was supposed to find this problem.
How exactly?
Well, by his title he's the approver,
that's his job. And obviously, this is where... it's just that they're there to take the blame.
Yeah. Cool. Hey, but I think we're getting to the end of our show soon. We've been here
for almost an hour. Before we close,
is there anything else
that you want to tell the audience
that hopefully then motivates them
even more so to download your book
or look up what you guys are doing,
either at JFrog
or what you're personally doing?
Any other words?
Yeah, so I'll make the ultimate suggestion to the
listeners: if anyone wants to just get the paperback version of the book, just ping me on
Twitter or on LinkedIn and I will ship you the book. How about that?
That's an awesome offer. Yeah. Very cool.
I hope you will enjoy reading it.
I hope you will have some ideas on how to move towards liquid software and continuous updates
in your organization, in your team. And let me know if I can be of any help. And again,
ping me for the book. I will ship it to you. That's great. That's awesome.
I just want to, in the end, conclude and reflect a little bit.
So first of all, thanks for being on the show.
Thank you for having me.
We'll put the link to your Twitter and LinkedIn and everything.
But I really enjoyed reading, and I'll just read out the title out,
Liquid Software, How to Achieve Trusted Continuous Updates in the DevOps World. I really also thank you for kind of explaining to me that this is really about
the next kind of evolution when we talk about continuous delivery and continuous deployment.
It's about continuous updates. It's about thinking, right before we write code, that the code and
the architectures we build are built for being continuously updated,
but not only from a technical perspective, but also in the way we are defining new features and
capabilities and how we market them, how we bring them to the end users. I think that's another
big thing. I'm very happy that you enlightened us on the whole concept of kind of creating a
fingerprint, or, I think as Brian said, the genome sequence
of software artifacts. And so with that, we can uniquely track the whole lifetime and lifecycle
of artifacts, which will also be helpful for our line of work at Dynatrace. So I really want to
follow up that conversation with you and see how we can also build tighter integrations here. And yeah, with that, I hope we will have you back soon because I'm pretty sure that movement
is not over yet.
There will be many, many, many, many years to come to help the world build better software
and happy to have you back on the show at some point in the future.
Absolutely.
Thank you for having me.
Thank you for the invite.
And yes, we're just getting started. And let's see how we can make it a reality. And obviously, let's see how we can poke any holes in it and see if it still holds water with whatever objections you,
or other people, might have.
So Brian, thank you for bringing the topic of how you don't like
the subscription model.
I think it was great.
Thank you very much again for having me and talk to you soon.
It's funny.
I was just going to thank you for having that subscription model talk
because it
brought me a new vision on subscriptions.
A more accepting one if
what's behind them is done right, which I hadn't considered
before. Here you go.
Liquid software for the win. Yes.
There you go. So really appreciate it. Thanks for
coming on. Thanks for everyone for listening.
If you want to reach out to us, you can
get us at pure underscore DT
on Twitter or pureperformance at dynatrace.com for an email.
We will have all of Baruch's stuff up on the link.
I think you mentioned earlier in the show if someone wants to follow you on Twitter or LinkedIn or something.
Yeah.
Yeah.
So obviously, you're more than welcome to connect with me on LinkedIn, follow me on Twitter.
But on the more actionable thing, ping me if you want the book.
Great. Awesome. We'll have a lot of stuff in the show notes. Thanks everybody.
Thank you again. Bye-bye.