PurePerformance - 080 The AI to Automate Behavior Driven Test Automation with Thomas Rotté from Probit
Episode Date: February 18, 2019
Creating and maintaining test scenarios not only takes a lot of time, but means we are creating artificial test scenarios based on what we think users are going to do versus replicating real users' behavior. In this episode we invited Thomas Rotté, one of our friends from https://probit.cloud, who solved these problems for their work at KBC Bank. Their solution is an AI that learns the behavior of real user traffic, creates a probability model of the most common user journeys, and uses that model to create automation test scripts on the fly for automated, real-user-simulating test bots. We also learn how GDPR and other challenges influenced their solution and how they are now working with other tool vendors and enterprises to bring this technology to the market.
Transcript
It's time for Pure Performance.
Get your stopwatches ready.
It's time for Pure Performance with Andy Grabner and Brian Wilson.
Hello, everybody, and welcome to another episode of Pure Performance.
My name is Brian Wilson and as always I have Andy Grabner virtually by my side.
Hey, virtually, hi.
Hey, so this is just for the listeners, this is our first recording back from Perform.
It's Monday, the week after Perform, and what a week that was, right? We were really, really busy.
Yeah, and it's not only the Monday after Perform, it's also the Monday after the Super Bowl. Just saying, because I happen to be in Boston right now and I'm wearing my Brady jersey. So, just for the football fans out there.
I can't stand football and football culture, but that's me. It was not a good game.
It was not a good game, but, you know, you have to win these games too, I guess.
Yeah, yeah, well, that's called a championship. Yes.
Anyway, so we're back to regular content again, and I think today's episode is going to build on some of the concepts that we were talking about at Perform. A really exciting one.
So why don't you go ahead and introduce our guest
and we'll dive into it.
Yeah, so I was track captain at Perform
for the DevOps Noobs track
and I had Sumit from Intuit speaking,
joined by Sonia Shefre,
one of our technical product managers.
The session was all about shifting left,
and Intuit showed
how they are reusing production monitoring data to create test scripts and also to extract
test data for that.
Sumit was talking about how they do it at Intuit.
And then Sonia brought up the point that we have a partner in Europe, Probit, that is
trying to solve this problem as well, and I think they have successfully solved it.
And that's actually why our guest today is Thomas.
Thomas should be with us from Probit.
And I think the reason why we invited him is because we want to learn a little bit more
about why they actually built the product, what kind of problem they solve,
and without having me now going through this.
Thomas, are you with us?
Yes, yes. Hello, Andy. Thank you for inviting me.
Maybe, you know, before we get started, do you want to quickly give an introduction on who you are?
Also, I think it's not only you, but there's also Frederik.
And just give a little background and then we dive right into the topic.
Yeah, perfect. Frederik couldn't be here, but I'll tell everything about him if you want to.
So, yeah. So, I am Thomas Rotté and my colleague is Frederik Bell.
And we have been working together at KBC Bank for a number of years.
And while we were doing that... KBC has been using Dynatrace for its monitoring for about six years now.
We were first using AppMon and now we're using Dynatrace.
So we've been working with that.
But we were responsible for the quality assurance
within the direct channels, the online applications.
And while we were doing that, we were facing some issues,
especially a lot of issues arising in the testing environments.
So we have a number of testing environments at KBC bank.
We have the development environment,
the first end-to-end testing environment, the UAT environment and so on.
So that goes on.
But these environments are definitely sometimes not representative of what is going on in the
production system. In the production system we have one million users who are using our applications;
in the testing system we have 20 testers who have to test all of this. So that's not representative,
especially when you are looking at performance issues.
If you only have 20 users, the performance will be better.
So that was one thing that we were facing.
But also, because there were not that many users, we didn't have that much data either.
Because you want your site, your web application, to be used in order to have
representative data. And that's, in fact, always a bit the problem with a test environment.
With a test environment, you want it to be stable because you want it to be
comparable to what it was before, but you also want it to be up to date. And this being stable
of your test environment and this being up to date is actually a great paradox. You cannot solve that. If you make it
up to date, it's not stable anymore because it's changing. If you want to make it stable,
it's not up to date anymore. And yeah, this was one of the major problems that we were facing, together with, a few years ago, the GDPR.
In Europe, it's very hot, the GDPR thing, because it can cost you a lot of money.
And GDPR is actually pretty strict in using production data in your testing environment.
It's allowed, but you have to be as secure as in your production environment.
So most of the times it's just seen as you cannot use production data in your test environment.
And to comply with that, we had to find a solution.
So it was just me and Frederik, just two employees, developers in the KBC online application. And we were facing this issue,
wanting to solve this for our own product that we have.
So that was the KBC Touch application.
So let me ask you before you continue.
I think it's obviously a problem that not only you guys have been facing at KBC.
I think it's a problem that everybody has been facing
since the dawn of, I guess, software testing and software engineering, right? I mean, and I think
you brought it up, and it was really nice the way you said it, right? You want to have
stability because you want to do regression analysis from build to build, but on the other
side you want to be up to date. So I like the way you phrased it there. Now, what I want to ask you, though:
in the beginning, when you started your explanation, you said at KBC you were 20 testers,
compared to, let's say, a million users out there in production. But solving that
problem, I mean, that problem has been solved, right? I mean, we have test tools that can
simulate hundreds and thousands and millions of users. So that the load generation itself, like using tools, I would assume that
that problem has been solved for a while, right? Well, yes, indeed, there are a number of load
testing tools and they have been around for quite some time. Now, the issue with this traditional testing software, and you see that with a lot of tools, actually all the traditional test automation software like LoadRunner or Neotys or Selenium or whatever, is that it always works the same way.
You have to think very carefully about your scenario.
What are you going to execute? And you're thinking,
I want to do this and then this click
and then this something else.
And this scenario
is then repeated a lot
and it's repeated
one million times or whatever.
So there you can generate load.
But again,
it is not very representative
compared to what's going on
in the production system,
because in the production system there is a lot more going on, all at the same time, and they are interfering with each
other. And this is something that we miss in traditional test automation, especially that
interference. We want to avoid it as much as possible in traditional test automation.
Now, this is what we want to address.
In production, we have this.
We have a lot of different scenarios.
We tested it in our application, and there are really millions of different scenarios
that are possible.
And it's not possible to test automate all of them.
It takes too much time, too much resources to create it, but especially to
maintain it because maintenance of traditional test scenarios is quite time consuming. And time
consuming is of course money consuming also. Yeah, I think too, a big issue there is even if
you had the resources to do all of that, how do you know and how do you find out
what to set up, right? You have, maybe you have some BI tools that will give you some information
on what people are clicking on. Do you really though understand the full end-to-end path that
users are going through? Or, you know, I know a lot of times in my past, there would be the product
design team and or marketing team saying, well, this is how we envision users using the tool.
So no matter what you're using, it's still a guess.
It's not what the users are specifically doing.
Indeed.
And that is exactly what we know now because of Dynatrace.
Because Dynatrace monitors everything, every click of every user, what they are doing.
And so we know what scenarios the users are actually going through.
So we can use that to replay our sessions in the test environment.
Still, we are facing this GDPR thing also.
So you cannot just replay in your test environment
whatever users are doing in production.
For example, yes, and this was a banking application.
So we were able to transfer money from one customer to another.
So it's not really a good idea to use production data.
So Thomas, before you continue, so just if I can recap.
So the problem that you really solved is basically automating the automation, meaning you solve the problem of automatically creating test scripts that
are reflecting what's actually really happening in production.
That means the real user click flows.
And that means with this, you made sure that you are testing the right things also on a
continuous basis.
So you automated the automation of creating automated test scripts
and then also automatically maintaining them,
I guess, right?
I think that's the big, that's...
That's perfectly rephrased.
We are doing test automation, automation,
but that doesn't sound very good.
Test automation, yeah.
Well, I'm sure there's some,
we need to talk with some marketing folks here
to figure out the sexy term.
But I still like it.
Automate the automation.
I mean, all right.
So how do you do it?
So first of all, we know a lot about what's going on in the production system.
We have these monitoring systems and we know what the users are doing.
Also what the users are doing in different channels, how they interact with issues that are arising.
So we know a lot about it, but it's a lot of data.
It's a huge amount of data, and it's just streaming in continuously.
Dynatrace picks up every click of every user.
So it continues to gain data. And the data is enormous.
Still, we need to be GDPR compliant.
So we cannot use the data of the customer itself.
We just want its behavior.
And what we have added to Dynatrace is actually we let Dynatrace stream its sessions to us.
And we are adding an AI component on top of that.
So it makes a model of the behavior patterns.
And this model of behavior patterns is continuously updated.
So it learns the behavior changes which are done in production.
And each change in the production system
will be reflected in this model that we are building.
And it's the model that actually will be used
in the test automation system to generate the scenarios.
And actually the basis of it is the click paths.
You just say, you click here, you open this,
you scroll, you load some page, and then you are doing whatever.
And below that, we are adding probability distributions.
And that's what the name Probit is from.
It's short for probability unit. So, these probability distributions give you an idea about input values, give you also
an idea about what click paths the different users are following. And so, this input distribution:
for example, again, in a banking application, you have a transfer,
the amount to transfer, and you have a probability distribution over
the amounts being transferred. And adding all this to the model makes it very rich,
but still manageable, both to adapt it as new sessions stream in from Dynatrace
and to use it in your test automation system.
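To make the idea of click paths with probability distributions underneath a bit more concrete, here is a minimal sketch in Java. It is an assumed simplification for illustration only, not Probit's patented model: a Markov-chain-style table of observed transitions between actions, plus recorded input values, from which one synthetic scenario can be sampled.

```java
import java.util.*;

// A minimal, assumed sketch (not Probit's patented model): a Markov-chain style
// click-path model. Each observed action keeps counts of which action followed it,
// and numeric inputs (e.g. transfer amounts) keep their own list of observed values.
public class ClickPathModel {
    private final Map<String, Map<String, Double>> transitions = new HashMap<>();
    private final Map<String, List<Double>> inputSamples = new HashMap<>();
    private final Random random = new Random();

    // Learn from one anonymized session: an ordered list of action names.
    public void observeSession(List<String> actions) {
        for (int i = 0; i < actions.size() - 1; i++) {
            transitions
                .computeIfAbsent(actions.get(i), k -> new HashMap<>())
                .merge(actions.get(i + 1), 1.0, Double::sum);
        }
    }

    // Record an input value seen for a field, e.g. "transfer.amount".
    public void observeInput(String field, double value) {
        inputSamples.computeIfAbsent(field, k -> new ArrayList<>()).add(value);
    }

    // Generate one synthetic scenario by sampling the learned transition counts.
    public List<String> sampleScenario(String startAction, String endAction) {
        List<String> scenario = new ArrayList<>();
        String current = startAction;
        int maxSteps = 100; // guard against cycles in the learned paths
        while (!current.equals(endAction) && transitions.containsKey(current)
                && scenario.size() < maxSteps) {
            scenario.add(current);
            current = sampleNext(transitions.get(current));
        }
        scenario.add(current);
        return scenario;
    }

    // Weighted random choice over the observed next actions.
    private String sampleNext(Map<String, Double> counts) {
        double total = counts.values().stream().mapToDouble(Double::doubleValue).sum();
        double r = random.nextDouble() * total;
        for (Map.Entry<String, Double> e : counts.entrySet()) {
            r -= e.getValue();
            if (r <= 0) return e.getKey();
        }
        return counts.keySet().iterator().next(); // fallback for rounding edge cases
    }
}
```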
So let me ask with the model that you're creating, because I'm trying to understand all this. I mean,
it's awesome. And to me, you're talking about what I, you know, as a former tester, this was
like the Holy Grail, right? I used to say, we need something to create these. Now, when you talk
about this model, is it that every single possible user click path that comes in is going to be
executed by this, or is it going to look at the
commonalities and start, you know, weeding things out, or create a model based on a
certain percentage of the most common click paths?
Well, it will do both, actually, because we are considering each user session
which is coming in to change our system,
to adapt what's going on.
But it will also take into account
that there is a possibility to have noise,
to have some user that did something
which is actually impossible
or some monitoring session that comes in
which doesn't make any sense.
So if it only occurs once, it will be fading out very soon.
And we take into account how often it comes,
how long ago the previous occurrence was.
And for the probability distributions, we should only take it into account if a minimum
number of customers used it, because we don't want it to be traceable to a specific customer.
Because then, yeah, you lose the GDPR compliance. It shouldn't be traceable.
But if it's not traceable, if you have multiple customers adding this data, this means that you will not replay everything.
You will not replay the things that only occurred once.
And also, if they only occur very rarely, they will fade out of the model.
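The fading-out and minimum-customer ideas Thomas describes could look roughly like the following sketch. The decay factor, pruning threshold and customer minimum are illustrative assumptions, not Probit's actual algorithm.

```java
import java.util.*;

// An assumed sketch of the "fading out" idea, not Probit's actual algorithm:
// every update cycle decays all path weights, prunes paths whose weight has decayed
// to near zero, and drops paths seen from too few distinct (hashed) customers so
// nothing stays traceable to a single person.
public class FadingModelUpdate {
    static final double DECAY = 0.99;            // weight multiplier per update cycle (illustrative)
    static final double PRUNE_BELOW = 0.05;      // forget paths once their weight decays this low
    static final int MIN_DISTINCT_CUSTOMERS = 5; // GDPR-style minimum before a path is replayable

    public static void updateCycle(Map<String, Double> pathWeights,
                                   Map<String, Set<String>> customersPerPath) {
        pathWeights.replaceAll((path, weight) -> weight * DECAY);
        pathWeights.entrySet().removeIf(entry ->
                entry.getValue() < PRUNE_BELOW
                || customersPerPath.getOrDefault(entry.getKey(), Collections.emptySet())
                                   .size() < MIN_DISTINCT_CUSTOMERS);
    }
}
```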
Right.
And just speaking on that, because sometimes things occur very rarely, and we see that on the monitoring side, right; there might be like a cron job that runs once a day.
Do you have the ability to say this one that occurs rarely, though, is still an important business path?
Like it might be, you know, I can't think of a specific example, but let's say there was some user action that maybe happens 10 times a day because it's not common for users to do, but it's important that that gets tested.
Is there a way for you to manipulate the model
to make sure you include some of those paths
that might not be,
what's the term we have for that, Andy?
Low occurrence, but high business value?
Yeah, I think that's what I read.
You want to change the priority of certain things, yeah.
Yeah, we're working on that.
So now we have a number of options
to be able to replay our scenarios.
You can say play everything
or play it in a lower amount
because your testing system
is normally a little bit scaled
lower than your production system.
So you can say replay 10%
with the same probability distribution.
For the specific parts,
we're still working on that.
We are a startup.
So we created this thing inside of KBC.
We are now bringing it to the market
because we see that the problem
is not only at KBC,
but it occurs at a lot of different companies.
And so we are adjusting our application to be generic enough to be usable for several companies.
And there are a lot of requests coming in.
And this is one that we heard before.
So for certain scenarios, you should give more weight to them because they are rare but still valuable.
We're working on that, but we don't have it yet.
So maybe in one of the next releases, it will be available.
That's amazing.
Hey, and Thomas, so you have the model.
The model keeps constantly up to date.
And when you say you replay the model,
so that means at a certain point in time,
you say now, please generate the test scripts, the actual test scripts out of that model?
Or is it where you can, at any point in time, say, I want to look at the model from, let's say, yesterday, because you can still go back in history as well and then create the tests?
Or how does this actually work?
How do you get from the model to the actual test?
And what do they look like?
Did you write your own testing engine?
Are you repurposing some existing testing tools?
What does that look like?
Yeah, it is indeed a combination of different tools that we have used.
So one is the thing that is streaming in from Dynatrace.
The other one is the AI system.
Can't go into too many details about the AI because the patent is still pending, but
it's working now. So this model is stored at our
place, so you can always have a look at what the model looks like, because that gives you a lot of
insight into the business intelligence of your application. So that's always available.
What we normally do is we spin up a number of bots, automated processes, and each time a bot is
starting, it will request one scenario to the model. And then the model will just generate
one scenario that the bot has to replay. And afterwards, you can have a view on it. It's
quite graphical on how the bot really did the executions, where it went well, where it went wrong, and gives an overview of that.
So it's generated at runtime, actually.
So that's cool.
That means you can, if I get this right, let's say you have a million users, your model learns from those million real users, and then you launch, let's say, 10,000 bots. So every bot is basically playing a real user
and then says, hey, model,
give me the next test that I should execute.
And then your model is, on the one side, the model,
but it's also a service that generates the next test case
that should be executed by the next available bot.
Is this right?
Yes, that's perfect.
Yeah, that's indeed our intention.
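As a rough illustration of that bot loop, here is a minimal sketch assuming a hypothetical HTTP endpoint ("/next-scenario") exposed by the model service; the endpoint name and the plain-text scenario format are placeholders, not Probit's real API.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// An assumed sketch of a test bot's main loop. The "/next-scenario" endpoint and the
// newline-separated scenario format are illustrative placeholders, not Probit's real API.
public class TestBot {
    private final HttpClient http = HttpClient.newHttpClient();
    private final String modelBaseUrl; // e.g. "https://model.example.internal" (placeholder)

    public TestBot(String modelBaseUrl) {
        this.modelBaseUrl = modelBaseUrl;
    }

    public void run(int iterations) throws Exception {
        for (int i = 0; i < iterations; i++) {
            // Ask the model service to generate one fresh scenario for this bot.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(modelBaseUrl + "/next-scenario"))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    http.send(request, HttpResponse.BodyHandlers.ofString());
            String[] steps = response.body().split("\n");
            boolean success = replay(steps);
            System.out.println("Scenario " + i + ": " + (success ? "ok" : "failed"));
        }
    }

    // In the real system each step would drive a browser via Selenium; stubbed here.
    private boolean replay(String[] steps) {
        return steps.length > 0;
    }
}
```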
So how does, you know,
one of the biggest issues in maintaining these things, right,
is besides creating it and all that,
whether it's a web-driven test
or an HTTP request type of load test,
there's always the issue that testers face
where the development team
or the front-end teams make some kind of change and your script no longer works because something
is not recognized. Have you figured out a way to take that into account, or what happens
in that scenario where maybe a new release comes in? And again, my mind is strictly in the current
or older tool. So it might not, you know, I might not be thinking of how you get around this.
But if it's relying on a specific DOM element
or a parameter being passed
and they maybe change the lettering of the parameter,
how does your system handle that
if it's like a new release gets put in
and you're going to run the previous model?
Yeah, that's indeed one of the main issues that are occurring in test automation in general.
And we also face that, of course.
Now, we see that more and more companies are going towards continuous integration,
continuous deployment also.
So the releases are quite small and frequent, and it works best if our testing system or our
production system is set up in such a way that each change is not that big. So
that's one. Now we have some customers in which the changes are still quite big
and which do releases only four times a year or something like that, and
there we indeed have that problem.
Now, you have to look at it as a whole in your application.
And what often happens is you change something in
your application and something breaks in
an entirely unexpected other spot of your application.
These regression defects are really annoying,
especially for business and for testers,
which say, hey, we have given that
to the developers already 10 times
and again, it is broken.
Now, that's the thing that we want to address,
that your regression should still work.
So you have a new development;
all the other features in your application should still be tested.
But it's very annoying work for a tester
because you have to retest the entire application
while you know that actually nothing has changed.
So we are doing that for you.
Testing the new features is indeed something that we cannot cover yet.
We are also working on an idea on that,
but we are focusing on regression testing most.
So if you say the new things,
that's mostly very often tested by the developer,
by the business that is requesting this change.
So it's tested very often.
And we want to add to that the regression testing on
the entire application. And sometimes things get broken if you remove a certain page or something
or change indeed the naming of some element that happens. And then it will indicate this as, hey,
there is a problem over there. This can be a false positive. And so it will be indicated that there is an error over
there. If you are changing that, especially when you're in a continuous deployment, you know that
this has changed, so I can ignore this. So it will not hold up your next release,
but it will indicate this as an issue. And yeah, the developer or the business guy should be aware of the fact
that this is changed actually.
So it's not completely gone.
It's still an issue
that we are facing.
And I think you make
a good point too, right?
Obviously, this can't,
you know, no tool
that's modeling production traffic
can take into account
like new features, right?
So, and anybody
who does any kind of testing
knows there is a combination
in all different kinds of test styles
for different things that you want to test.
Are you doing a load test?
Are you doing a soak test?
Are you doing a new feature test?
So this just adds to that arsenal of,
all right, so if 80% of our work
is this regression part, with Probit
now you can eliminate all the heavy work you had to do on that 80% part
and concentrate more on the newer features, which you're going to have to manually script and all that,
until, obviously, they go into production and now they become part of this model.
So this sounds really amazing, I got to say.
Obviously, it's not going to be something that solves every single problem in the testing world, right? But this is, wow.
I mean, now, we are not selling silver bullets, because I don't really believe in silver bullets, but this is really an addition to what the testers are doing,
especially also for the stability of your entire application. These testers mostly have
a scenario: hey, you have to test this and this and this. But if they are stuck in the middle because of some outage or a 404 in your application,
it's very annoying for them.
And they cannot work on the work that they really should do.
So we are taking care of the general coverage of your application
and also the stability of your application
so the testers can do the things that they can focus on.
And I think this makes it nice and easy too because one of the conflicts that testers
go through is if they do the standard static regression test, right?
Let's say, you know, not using Probit, but the standard regression static test where
you're using the same load from build to build to build to build, same model, same test scenarios
and looking for, you know, so that's your control. You're looking for a change
in the responses, right? That's a great test. It's still a very valid test. The only issue is that it
doesn't reflect the real world stuff. So oftentimes what we always saw in the past is you would run
your regression test, everything looks great, put it into production. And for some reason,
what users are doing changes and things break.
And then they turn back to the testers, and that's
the big question of what you're doing here.
They turn back to the testers and say, why didn't you catch that?
And the testers will say, well, we're using this static model as our control.
We don't have access to what the real users are doing.
Now they can, if they still want to run a static model to look for, you know, small
control changes.
Okay, that's test number one.
Now let's run the real production regression
to model what the real users are going to do
and see what the performance is.
And you now have this super rich data set
on which to make your decisions on pushing out
or also finding other problems that go into it.
I wish I had this tool like eight years ago
when I was still in performance testing.
Wow.
Sorry, we weren't there yet.
Yeah, let's go back in time.
Hey, Thomas, I got another question.
How do you deal with data, with the test data itself?
Because if you have a million users,
and I understand that you can obviously probability-wise figure out
what's the average amount of money that people are
transferring, so that's input data. But how do you then deal with the underlying test data in
the test system? So for instance, if your bot picks up a use case
that is actually transferring money, then the bot, I assume, is logging in with some test account.
Yeah. And what if that account doesn't have enough money on the bank account
to actually do the transfer?
Is this something that you also factor in?
How does this work?
So the thing is, if you are doing a transfer and you don't have enough money,
there is an error popping up saying, hey, you don't have enough money.
For us, that's perfect.
That's normal behavior that occurs in production
that sometimes occurs in the testing system also.
So actually we don't think that is a problem.
We are continuing the testing
and see that the normal path should also be feasible.
So we have a test set of a few hundred test users.
And if one of them doesn't have enough money
on his account to transfer, that's okay.
It will give an error,
but your application shouldn't crash on that.
Another thing is that the data is actually,
the testing data is the major issue in all testing systems.
And we are trying to solve that, or trying to address it, in the sense that we don't really care about the data.
We care about the behavior patterns.
And we are using some data because you have some probability distributions of input data.
But for example, we have a functionality here which says open your car insurance.
That only works for customers who have a car insurance.
So in our testing system, sometimes this works and sometimes it does not work because you don't have a car insurance.
And we are solving that by keeping a success rate for each action.
And each action is executed a number of times, and we say, hey, this one succeeds in 50% of
the cases, for example.
If this continues to succeed in 50% of the cases, there is no issue. If this drops to 10% of the cases,
then there is an issue,
and then this is flagged for further investigation.
Okay, so basically what you're doing then,
you're comparing the success rate from test run to test run,
because that actually tells you if the system is stable or not.
Yes, indeed.
And if the success rate keeps stable, there is no problem.
If the success rate drops, then we are reporting that this is a possible incident.
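A minimal sketch of that success-rate comparison might look like this; the 50% relative-drop threshold is an illustrative assumption rather than Probit's actual rule.

```java
import java.util.Map;

// An assumed sketch of the success-rate comparison: each action keeps a success rate
// per test run, and an action is flagged when its rate drops well below the previous
// baseline. The threshold below is illustrative only.
public class SuccessRateCheck {
    static final double MAX_RELATIVE_DROP = 0.5; // flag if rate falls below half of baseline

    public static void compareRuns(Map<String, Double> baselineRates,
                                   Map<String, Double> currentRates) {
        currentRates.forEach((action, current) -> {
            Double baseline = baselineRates.get(action);
            if (baseline != null && baseline > 0 && current < baseline * MAX_RELATIVE_DROP) {
                System.out.printf("Possible incident: %s dropped from %.0f%% to %.0f%% success%n",
                        action, baseline * 100, current * 100);
            }
        });
    }
}
```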
Andy, I wanted to build on that one more with one more question.
With the data, with GDPR, there's a lot of issues with that.
And one of the scenarios I always like to bring up is the idea of search, when you're testing a search system.
There are two issues that people commonly run into when testing search.
Either you say, maybe we'll have 10 search terms.
I'm just exaggerating a little.
We'll have 10 search terms, we'll run those. Issue with that is now you've cached the data for those 10 search terms, and you get your
responses back immediately because everything's cached.
If you go to the other extreme, where it's, we're going to have a large number of unique
search terms, then you're never exercising the caching, right, which is unrealistic as
well.
I would imagine using the real search terms or
some of the real data users are inputting would make that model, you know, very, very accurate
to what's going on. But how do you reconcile? I mean, it probably comes down to an individual
data point, but is it easy to tackle this? How easy is it for you all and for customers to tackle
this with GDPR? Is there, you know, over here in the United States, we don't have to deal with that as much, at least not in the local side of things.
If you're an international customer, obviously, yes.
But how do you tackle knowing or is it easy to know which data you can leave as customers inputted versus which ones you have to change?
It's not easy to see that indeed.
And we are having some algorithms on that.
So we are using some algorithms to see if data is really random.
If you have certain amounts or certain strings, then we are checking the randomness of this string and also how much it is reused by different customers.
If you are using the search term, say, hello, and someone else is using the same search term, it won't be specific to you.
If you are searching for your bank account number, then it's specific to you.
No one else will search for your bank account number.
So that's a little bit how we are doing that, but it's not easy. It's one of the hardest things to
actually deal with this data. Therefore, we made the system so that by default it captures a number of things, and you can add more data to it.
And while it is running, it is also adjustable, of course, in our console.
And then you can play around with that.
But it's not an easy problem to solve.
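To illustrate the kind of filtering Thomas hints at here, a crude sketch follows. The minimum-customer threshold and the digit heuristic for identifiers are assumptions for illustration, not Probit's actual algorithm.

```java
import java.util.Map;
import java.util.Set;

// A crude, assumed sketch of the kind of filter described above, not Probit's algorithm:
// an input value is only eligible for replay if enough distinct customers used it and it
// does not look like a personal identifier such as an account number.
public class InputValueFilter {
    static final int MIN_DISTINCT_CUSTOMERS = 5; // illustrative threshold

    // usageByValue maps an observed input value to the set of (hashed) customer ids that used it.
    public static boolean isSafeToReplay(String value, Map<String, Set<String>> usageByValue) {
        Set<String> customers = usageByValue.get(value);
        boolean sharedEnough = customers != null && customers.size() >= MIN_DISTINCT_CUSTOMERS;
        return sharedEnough && !looksLikeIdentifier(value);
    }

    // Very rough heuristic: long, digit-heavy strings (account numbers, card numbers)
    // are treated as customer-specific and never replayed verbatim.
    private static boolean looksLikeIdentifier(String value) {
        long digits = value.chars().filter(Character::isDigit).count();
        return value.length() >= 10 && digits >= value.length() * 0.6;
    }
}
```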
But it sounds like you can at least look at the patterns of the data being used.
And then, with your own data that is not the user's data, model that pattern.
Yeah, indeed.
We have some ways to define this, which is not always sufficient.
And we need to continue working on that.
Pretty cool.
Hey, I got another question.
I think you already alluded to it a little bit.
So you are creating test scripts, and you said you're using existing tools that might already be out there. What are
the types of scripts that you're generating? Are you using Selenium,
or anything else? What do you do?
Yeah, we are using Selenium, because we are focusing
on web applications. I think Selenium is the standard there currently. So we are using Selenium scripts
and then they can be executed on several browsers.
We are using a Selenium grid underlying.
So we can add actually whatever you want.
We are looking into working together
with some providers on that.
We are checking out Sauce Labs or BrowserStack,
which are offering these different
browsers. Now we are doing it in our own system, which is of course more limited.
But because we are working with web applications only, currently Selenium is good for us.
Once we move to mobile, it will be harder, of course. You have Selendroid and so on.
But yeah, that needs some more investigations.
Currently, we're focusing on web applications only,
especially because of the limited amount of resources
we have to implement all changes.
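For readers who have not driven a Selenium Grid before, here is a minimal, generic Selenium sketch of replaying one generated step on a grid. The grid URL, application URL and element locator are placeholders, not Probit's actual setup.

```java
import java.net.URL;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

// A generic sketch of replaying one generated step on a Selenium Grid. The grid URL,
// application URL and element locator are placeholders, not Probit's actual setup.
public class SeleniumStepRunner {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new RemoteWebDriver(
                new URL("http://selenium-grid.example.internal:4444/wd/hub"), // assumed grid address
                new ChromeOptions());
        try {
            driver.get("https://test-env.example.internal/app"); // application under test (placeholder)
            // A generated scenario step maps to a concrete browser action, e.g. a click.
            driver.findElement(By.id("transfer-button")).click();
        } finally {
            driver.quit();
        }
    }
}
```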
Cool.
And any plan on taking it a level down on HTTP level
to generate more load,
even though obviously I know you want to simulate
the real end user by doing real user replay.
But any thoughts on taking it a level down
to HTTP protocol level
so you can just simulate more load?
Yes, actually we are looking into that,
but we are not going to do that again all the way.
We are trying to work together with Neotys;
they are very good at this load testing. And we have been working now on a
first proof of concept where Neotys uses an interceptor, and this interceptor can be placed in between our robot
and the system, and so they intercept our testing and they can enhance that to be used as
load testing. So we are indeed checking this out, but it will be in a cooperation with Neotys.
And we were talking with Henrik at Perform, Andy, and he was telling us, I think this is probably something you all might
collaborate on.
Henrik was talking about how in Neotys
now, they're creating
a capability to create scenarios
as code.
So instead of having to go into the GUI, so then that
would mean that you'd be able to leverage
your AI scenario to generate
a file of what that little...
Yeah, that's indeed how we do it.
That's really awesome.
Yeah.
And actually, I mean, yeah, we are working with him on that.
And I think there's also, he showed it at one of our training days
that we had at Perform, where they have a YAML-based script definition
that you can then run, or you can also write your Java-based
or JUnit-based test scripts.
Yeah, that's pretty cool. Yeah, we are running it in Java because we have a Java background, but actually you
can do it in anything you want. But we are using the Neotys WebDriver wrapper to intercept
it, and so that's pretty cool. We've been working on some other things in Python also, but we have a Java
background. And once you're in Java, it's kind of your style.
Yeah. Hey, and one last thing, because I think this is pretty cool. Once you mentioned the
name bot, test bot, I thought of another session that I did last week at Perform with Nestor
from Citrix. He was talking a lot about chatbots
and how they are using bot technology
or they wrote their own bots
to do things like automated deployment.
So they have a bot that is validating.
Is the build good enough?
Is the environment good enough
that we deploy that new build into?
Wouldn't it be cool
from a kind of a self-service perspective
to provide a chatbot to developers where I can say, hey, I'm Andy.
I made a code change and I want this code change to be tested now. Or: run the 10 most common use cases against my app that I've just deployed in my dev environment, and do it for me.
That would be great. That would be perfect.
And I think that's one of the things that we should do.
I think it's really feasible to include that.
Still, we are limited in the number of resources that we have.
So it might be something for next week or maybe over the weekend.
But it sounds really cool.
So that might be a very good idea.
Thank you, Andy.
Well, if it's something for next week, then that's awesome.
It seems you have a lot of resources if you can take a feature request and implement it next week.
Yeah, I think it will be for the weekend
or for the night or something.
I wanted to ask about that, right?
Because there's all these, you know,
as you're talking about this stuff,
I'm like, this is great.
You know, how fast can we get this?
How big is ProBit right now in terms of, you know, company?
Is it just like you working on this or how many?
No.
So we created this as a project within KBC.
So we created it for the KBC online application only.
That was the first part.
And since end of last year, we are coming out now to see how to generify it and to make it usable for other companies.
And we have a number of early adopters. Actually, there are five which are using our
application, which really have an implementation which is useful, which is really making a
difference in their daily business. And it's very useful for us also to see where we did
program it too close to our initial project.
We want it to be generic, of course.
So now we have to change several things to make it more open.
And actually, you can still get involved as an early adopter, but we are going for the
open public release right now. So if there are possible candidates, we
really want to bring this live in public.
Yeah, that's great. And Andy, it's funny,
because, you know, a lot of times there's this idea of, well, I'm going to automate myself
out of a job, right? But I think this is a great example of, well, if you learn the automation,
you can then start building tools like this, because
that kind of sounds like what you all did, isn't it? Like, you were doing the testing,
but you have the developer knowledge.
Yeah, indeed, we were coming from development and doing testing
and QA and test automation, and then automating the automation.
Yeah, and that's all that leveling up
we always like to talk about. A great example of how, you know, you're not going to automate yourself out of a job, because there's always going to be other things you can do as long
as you build on your skills. So that's really, really awesome.
I guess there's plenty to do.
Yeah. Thomas, is there anything else we missed? I think we talked about why you did it, how it
evolved, what it's currently doing. Any final remarks, anything that we missed?
No, no, I don't think so. I think we covered most of what we were planning to talk about. And yeah, I am of course always available
if you need additional information. Everyone can contact me.
Wait, wait, that's a good point. Where can people contact you? How can they
find you?
Well, first of all, we have a site, which is probit.cloud. At probit.cloud you can find
references to me and my colleague Frederik, who are the main contact persons now. And yeah,
you were asking about how many people are working on it: me and Frederik are the contact
persons, and behind that there are four developers now working on the technical system and setting up the
systems. So as soon as we have more customers, we will be able to scale up.
And you're based out of Belgium, I believe?
Yes, we're based in Belgium, but actually, as this is a SaaS service that can be provided, it's really not an issue wherever you are.
I just want to point it out there in case people start contacting you in the middle of the night.
Thank you for being aware of what your regular business hours are.
I can imagine people being like, I need this now.
Now, yeah.
He said on the podcast, he can do it over the weekend.
Yeah,
yeah.
All right.
Cool.
Hey,
thank you so much,
Brian.
It's time.
It's time to summon the Summarator.
Let's do it.
Come on.
Do it.
All right.
So,
what I learned today,
I still like the term,
you know,
automating the automation
because that's really
what you guys have been doing.
And I believe it's a great approach of analyzing real user behavior and therefore coming up with a model that reflects the real user behavior in a production environment to then execute the behavior against, you know, the next version of your app. I think we learned that you've done, obviously, your initial work with KBC, so a bank in Europe,
making sure that there are no issues with GDPR, which is obviously one of the strictest
laws we have out there.
So that's been battle tested.
It's great to hear that you have your first five early access customers and you
are now more genericizing, if that's the right word, making it more generic, the solution to
also fit in other industries for other software. I really like the idea of just having a way,
especially now for developers that are going to a more continuous deployment model to say,
I want to know if my end users will still have the same experience as before. And I think
instead of having to create and maintain test scripts that are artificially made up,
it is great that you guys built a solution that does this fully automatically by replaying
the same behaviors you see in prod.
I'm very much looking forward to what you guys are doing in the future.
If people want to know more, probit.cloud.
So that's really great.
And yeah, I love it.
You're solving a big problem here.
And Andy, I wanted to add to that. I think it's amazing what people can do with tools these days. You know, I'm glad I'm not on the testing side anymore,
only because it's so nice not being under that pressure.
But I think it's an amazing time to be on that side
with all the integrations and everything coming out.
You know, we were talking last week,
one of the themes we kept on coming across
in talking to people at Perform
was using tools in different ways.
And I think it was Chris Morgan at Red Hat,
he's working on the OpenShift project.
He said a great thing.
His view of OpenShift is they're going to know
they've reached maturity
when people start using OpenShift
in ways that they did not design it for.
So you have the operator that starts opening the world.
We have all these tools now that have this openness that
are giving people like Probit or Neotys or ourselves, anybody,
the way to use data and run things in ways
that we hadn't quite, not necessarily had specifically
in mind when we built, but we said,
hey, let's open this up so that things can happen. And it's amazing to see what is happening. I think we're just hitting the edge of where all
this tool interaction can go. And if you think about with the chatbots, I was joking with Nestor
when we were talking. I'm like, so are Jenkins and Davis going to get into an argument as they're chatting with each other?
But it just blows my mind seeing all the stuff that's coming out.
And I just want to say to Thomas, it's amazing what you're doing.
I wish I had this back when I was doing it.
And I'm so happy you're making it because it's going to make everyone's lives that much easier.
And it's going to make testing that much more fun.
So great, great stuff you're doing here.
And I just want to thank you.
As a former load tester, as a former quality tester, all that, this is amazing stuff.
Thank you very much.
All right.
Thanks, everyone, for listening.
If you have any questions or comments, you can reach us at pure underscore DT.
Thomas, do you do Twitter or anything if people want to follow you or anything going on?
Or is there a Twitter account for Probit that they can follow to keep up on any latest news?
No, we don't have a Twitter account yet.
We have a LinkedIn account.
You also find it on the site, Probit.cloud.
There is a link to the LinkedIn where you can find the latest news on how Probit is doing.
All right, awesome.
Well, wish you the best of luck and can't wait to see the future development with this.
Thank you.
Perfect.
Thank you.
Thank you.