PurePerformance - What's next for Feature Flagging and OpenFeature with Ben Rometsch

Episode Date: February 3, 2025

Feature Flagging - some may call them "glorified if-statements" - has been a development practice for decades. But have we reached a stage where organizations are doing "Feature Flag-Driven Development"? After all, it took years to establish a test-driven development culture despite having great tools and frameworks available!

To learn more we invited Ben Rometsch, Co-Founder of Flagsmith, to chat about the history, state and future of Feature Flagging. He gives us an update on where the market is heading, how the CNCF project OpenFeature and its community is driving best practices, what the role of AI might be, and what he thinks might be next!

A couple of links we discussed during the episode:
Ben on LinkedIn: https://www.linkedin.com/in/benrometsch/
YouTube video on Observability & Feature Flagging: https://www.youtube.com/watch?v=VZakh1_oEL8
OpenFeature: https://openfeature.dev/

Transcript
Starting point is 00:00:00 It's time for Pure Performance! Get your stopwatches ready, it's time for Pure Performance with Andy Grabner and Brian Wilson. Hello everybody and welcome to another episode of Pure Performance. Have you already had a beer? No, no. Well, no, that was not the lie. But as he said, as always. And unfortunately, we have rare moments where we have to record a podcast without each other. Well, it's usually, I think, I don't even know if there's ever been one when I did it without you. So when I'm doing it, it's as always. If I'm saying it, it's true. In this case, it's true.
Starting point is 00:01:04 You're right. Right? How would the computer interpret it? Anyhow, we're not here to talk about language, because this could be interpreted one way or the other. Yeah, we're here to be patriotic. We're talking about flags.
Starting point is 00:01:18 Flags, yeah. Maybe, how can I make the connection now from flags to feature flags? Let's just say it out loud: feature flagging. Feature flagging has been a topic, at least from my side, for the last couple of years. Before I first heard about the term, when I developed software, I had many if statements in my code
Starting point is 00:01:43 that were basically saying: if this, then do this. If a particular Boolean variable was on, then execute a particular piece of code. This obviously matured much more than what we had back then. Some still call feature flagging glorified if statements, but to get a better description
Starting point is 00:02:02 of what feature flagging is, what the state of feature flagging is, where the industry is going, we invited Ben from Flagsmith today. And Ben, thank you so much for being on the show. How are you doing? Yeah, thanks for having me. I'm really good, thanks. Yeah, really good. Hey, so Ben, when I say a feature flag is nothing else than a glorified if statement,
Starting point is 00:02:27 Do you cringe? Do you feel like... No, I mean, we lean into that, I think. The beauty of the pattern is its simplicity and its effectiveness, and how you can get such a large benefit just from, you know, the original, way-back-in-the-mists-of-time concept of doing it. I think the big thing that changed was the idea of having those things become remotely switchable, which is what changed from your and everyone else's experience back in the very early days, when there wasn't a really reliable way of doing that remotely. But yeah, we definitely lean into it, and we see the bulk of the value in the idea and the concept. People who use Flagsmith or
Starting point is 00:03:21 one of the other vendors, or even roll their own - if you can organize yourself and your workflows around them, then they're super powerful, and they're just Booleans, yeah. So that's a good way maybe to also explain the maturity and how things have evolved, because back in the days, as I explained, I remember starting up with an environment variable that initialized a variable I could access in my code, and then I basically said, hey, if true, then do this, if false, do something else.
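[Editor's note] The environment-variable pattern Andy describes can be sketched in a few lines. The flag name and the gated behavior below are invented for illustration, not taken from any real codebase:

```python
import os

# Read the flag once at process startup, the "old school" way described above.
# "FF_NEW_CHECKOUT" is a hypothetical flag name.
NEW_CHECKOUT = os.environ.get("FF_NEW_CHECKOUT", "false").lower() == "true"

def checkout_total(prices: list) -> float:
    if NEW_CHECKOUT:
        # New code path behind the flag: a made-up bulk discount.
        total = sum(prices)
        return round(total * 0.9, 2) if len(prices) >= 3 else total
    # Old code path, unchanged when the flag is off.
    return sum(prices)
```

The limitation Ben points out is exactly this: the value is fixed at process start, so changing it means restarting or redeploying, whereas a flag management backend makes the same decision remotely switchable.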
Starting point is 00:03:54 Now, with modern feature flag management, really one of the features is that you can change these things on the fly, remotely, but I believe it goes much further. You can do a lot of conditional evaluations, where you can define rules. Folks, if you are interested in learning more about feature flagging, there's also a lot of material out there that we will link to. As we go through this discussion today and talk about additional
Starting point is 00:04:23 content that people should look into, you will find the links in the description of the podcast. Now, Ben, to hear it from your perspective: if I look at your website, it says ship faster and control releases with feature flag management. You've been doing this for quite a while, and I would just be interested, because we've been hearing this for a while: what's the real adoption of feature flagging right now out there? I think some people are still not comfortable with it, because changing something dynamically on the fly means it changes the execution of code on the fly. Just give us some confirmation: is this something that people are really doing at scale? Yeah, I mean, 100% they are. You know, we're constantly talking to companies of all
Starting point is 00:05:16 different shapes and sizes. We're a commercial and open source platform, so we're getting everything from hobbyists, people doing stuff in their basements on home labs and such, which it's really well suited to, all the way up to massive, huge multinational corporations, and everyone in between. I do think about the direction of travel in the last kind of 20 years. I mean, I've been writing software professionally for over 25 years now. Back then, there would be this enormous amount of energy behind a single release, and that release might stay in production for many, many months, right? I know it sounds kind of crazy now, but that's how software used to,
Starting point is 00:06:15 well, the software that I was involved in very much used to be designed and built, and that was the kind of velocity of those releases. And the direction of travel ever since then has always been to collapse that time between releases. You've got the kind of pedestal engineering teams, especially 10 to 15 years ago, who were pioneering multiple releases a day. I used to work with testing teams that would fill an entire room,
Starting point is 00:06:54 and there would be human beings that would literally go through the same processes day after day, just testing that stuff. This was before all of the automated browser testing. So that's a long way of saying feature flags are a small part of that concept. I guess back then it sounded counterintuitive to say we're going to deploy our software 12 times a day and that's going to improve quality, reduce defects, reduce downtime, all this sort of stuff. It has taken a while for the seniority level of large organizations to understand that that counterintuition is actually proven again and again, in all different shapes and sizes of teams:
Starting point is 00:07:55 if you can release more often, there's a general correlation between quality and how often you can release software. Because both Brian and I have a huge background in quality engineering, we've both been on the testing side, on the performance testing side, and before performance testing I also did a lot of functional testing. Does this mean that with feature flags,
Starting point is 00:08:20 we no longer need those people that are doing manual testing, that are creating automated tests? Is feature flagging kind of eliminating the need for upfront testing? No, I mean, it's definitely not a silver bullet. I think that's a bit of a common misconception. Sometimes people think that at one stroke they can be in this new world of not having to do a bunch of stuff that is maybe not the exciting part of engineering. But that's not the case. It's one component part of a whole ecosystem of tooling and infrastructure and workflows and processes that can get you to that point of, you know, fix forward, of people almost not knowing that a build's gone into production.
Starting point is 00:09:25 Like, again, that sounds super counterintuitive, and the first time I heard that, however many years ago it was, I was like, this is just nuts. But then, in the UK, there was a challenger bank called Monzo who famously said: we've got one environment, production. We do multiple releases a day. And they got a banking license. So it's definitely part of a lot of tooling around
Starting point is 00:09:50 a whole suite of stuff: things like zero downtime deployments, things like really high quality, reactive observability platforms like Dynatrace, reliable, dependable CI/CD processes, being able to deploy your software without any downtime and being able to do that consistently and repeatedly. And I talk about this a lot: 25 years ago you were going and racking machines into data centers, because that was what the engineering team had to do to get sites live. Doing a deployment without, or with only a few seconds of, downtime was tricky, and often involved switching shells as quickly as you could to turn Apache back on or whatever. And look at where the industry is now, where the discipline is now
Starting point is 00:10:53 and it's kind of almost unthinkable: if I could go back to my 20-year-old self and show them what the tooling is now, how much time it's freed up, how much more efficient and effective it's made teams. And feature flagging is definitely, I believe, an important part of that, but it's definitely not a silver bullet. I mean, Flagsmith itself has a very
Starting point is 00:11:26 comprehensive end-to-end front-end set of tests, and a huge amount of server-side tests. And it's all about the increase in quality that it gives you, and the increase in work efficiency, developer productivity. So there's a bunch of stuff going on. But a lot of this stuff is not about tooling or software or infrastructure either. A lot of it is about people and workflows. And I would say that the pull request page of GitHub is probably as important an invention as anything else. And anyone could write a pull request page, right? Like it's just a bunch of comments and stuff.
Starting point is 00:12:19 So yeah, I find it quite interesting because it's not just technology. It's people and processes and opinions and politics and a lot of that stuff. And that's why it's quite interesting working with different types of organizations. Some of them are very classically slow-moving, non-engineering-industry organizations and things like that. And a lot of the time they're very progressive and trying to get to that point, but just because of their size, that's a 10-year program kind of thing. Well, thanks for all this background. And I think, as you mentioned, feature flags are just
Starting point is 00:13:08 a technical detail in how we need to change the way we think about releasing software. It's about decoupling the deployment from the release. That was a big thing for me. Still, some of the questions are: how do you test? Coming back to the testing aspect, how do you test, when you're deploying something new and the feature flags are all turned off, that you actually put the right code behind the feature flag and you don't miss anything?
Starting point is 00:13:45 Also, if you have cross-dependencies between feature flags, there are still some challenges; there might need to be tooling around it to figure out whether the cross-dependencies are a problem. But overall, it's just a technical detail, how to implement the feature flag.
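[Editor's note] One pragmatic answer to the testing question above is to treat the flag state as a test input and exercise both the flags-off path (the state you actually deploy with) and the flags-on path. A minimal sketch, with an invented flag key and banner text:

```python
def render_banner(flags: dict) -> str:
    """Returns the page banner; the 'new_banner' key is a hypothetical flag."""
    if flags.get("new_banner", False):
        return "Welcome to the new experience!"
    return "Welcome!"

# Run the same check with the flag absent, explicitly off, and on, so the
# default-off deployment path is covered as well as the enabled path.
cases = [
    ({}, "Welcome!"),                     # flag absent: default off
    ({"new_banner": False}, "Welcome!"),  # flag explicitly off
    ({"new_banner": True}, "Welcome to the new experience!"),
]
for flags, expected in cases:
    assert render_banner(flags) == expected
```

The same idea extends to cross-dependent flags: enumerate the flag combinations that matter and run the suite once per combination.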
Starting point is 00:14:04 What matters more is how you're changing your development process and your release process. And I really like your analogy with the pull request page, right? Because the pull request page is a very simple page, but it completely changes the way you think about the software delivery process. I think that's a spot-on analogy. Yeah, I mean, almost all tooling has been going in that direction, right? Observability platforms, source code control and management, deployment, they've all been moving in that direction. And, you know, they're all important.
Starting point is 00:14:42 And, you know, they can all play well together. But just to go back to your point about how you do testing with cross-dependent flags: again, these things are just tools, and anyone can misuse tools. So think about making flags that are kind of functional, right? If you had a purely functional language that only takes inputs and outputs
Starting point is 00:15:15 and can't mutate the state of the rest of the program - well, we generally recommend people think about feature flags in the same way. And that's one of the reasons why we're really great proponents of using them particularly on the front end as well, because it's much easier to make a feature flag purely functional there, in that it's not going to have any knock-on effects on other parts of the application if you're just removing a menu item or showing an additional option or something like that. So, like all these things,
Starting point is 00:15:56 by itself, Git doesn't do anything, right? And you can tie yourself into knots using a version control system, or a CI/CD platform, or feature flags. So a lot of what we work on - I wouldn't describe it as consultancy, but it's definitely speaking to customers, finding out how they're using the product, how big their teams are, whether the teams have cross-dependencies, whether they have microservice architectures or monoliths and all this sort of stuff, whether they have a monorepository.
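[Editor's note] Ben's "purely functional" framing can be sketched as a flag evaluation that only maps inputs to outputs and never mutates shared state. The flag key and menu items below are invented:

```python
def visible_menu_items(flags: dict, base_items: list) -> list:
    """Pure function: the result depends only on the arguments passed in."""
    items = list(base_items)  # copy instead of mutating the caller's list
    if flags.get("show_beta_reports", False):  # hypothetical front-end flag
        items.append("Beta Reports")
    return items

menu = ["Home", "Settings"]
with_flag = visible_menu_items({"show_beta_reports": True}, menu)
without_flag = visible_menu_items({}, menu)
```

Because nothing outside the function changes, turning the flag on or off cannot have knock-on effects elsewhere, which is the argument for why front-end flags like this are easy to reason about.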
Starting point is 00:16:34 Because all those things feed into, and can give people an idea of, what the best way of using flags is. It's really common as well for us to help teams just get started: get that first feature out that they've gated through a flag and turned on in production through the flag. Especially from a people point of view, that can be a lot of work, and can be challenging. At the most extreme level, it can even mean organizations changing their policy on what they define as a release, right?
Starting point is 00:17:17 A good example is financial services in the EU. There's an amount of regulation about when you change your application or your application behavior: who's allowed to change that, and what sign-off you need to do that. And obviously, introducing feature flags can blur that quite a lot. So you could even be getting to the point where just using the pattern requires you to get board-level sign-off: right, we're going to change our engineering policy around this. So, yeah, once you get the first deployment under your belt, then it's a case of: right, how can the rest of the team use this? How can the server-side engineering team use this? How can the SREs use this?
Starting point is 00:18:11 And how can the mobile app teams use this? And again, like most tools, they can be overused. We've seen deployments with hundreds and hundreds or thousands of flags, and sometimes it's just a little bit crazy. Actually, if you open your Chrome browser and you go to chrome://flags, you can see how many feature flags are available in the Chrome browser,
Starting point is 00:18:43 and there's many. Obviously, that's an incredibly complicated piece of software. But yeah, it's different, like all these tools:
Starting point is 00:18:53 how you use them depends on who you are and what you want to get out of them. And there's no real rules that work for everybody, really. Yeah.
Starting point is 00:19:05 Hey, as you mentioned this use case with the regulations: obviously you have the need as an organization, let's say in the EU, to change certain rules at midnight because then a new law comes into effect, and only certain people are allowed to make this change. Thinking broader,
Starting point is 00:19:24 if I have a handful of feature flags, how do I make sure that I also know exactly at which point in time which feature flag was active and also evaluated? Because in the end, a feature flag is like a configuration change. So I always want to make sure, if something breaks or if I get an audit by the European Union, that I can explain that at one hour before midnight,
Starting point is 00:19:52 this code was really not executed, but at midnight, this code got executed. The term SBOM also comes to mind, the software bill of materials. Is this where the whole feature flag configuration also becomes part of your SBOM? How does this work, do you have any ideas on that? Yeah, I mean, that's a big question. For sure, all of engineering is trade-offs, and you are trading off some aspects of reproducibility and understandability in the edge cases of bugs: you know, which code path was lit at 9:57 last night, because something happened, there was some, I don't know, security vulnerability or something like that. And I've been thinking about this a lot recently, actually.
Starting point is 00:20:53 I've kind of come up with this concept that we talk about flags just being Booleans, but enterprise software, you know, it needs this very large, wide, heavy stuff around it, right? Like it needs audit logs and it needs different forms of authentication and it needs integration with different types of, you know, enterprise systems and it needs fine-grained access controls and all of this sort of stuff. And it doesn't matter what you're doing.
Starting point is 00:21:29 It doesn't matter whether you're just doing feature flags or if you're building a platform of the scope of Dynatrace or GitLab or something like that. They all need all of that stuff around them. For us, when we're selling into large enterprises, there's, as I'm sure you've seen, the 500-question Excel spreadsheets of what's your SSL certificate policy, what authentication mechanisms do you have, and all this sort of stuff. And so a lot of what we've been working on in the last couple of years has been answering questions like that.
Starting point is 00:22:14 So yeah, this is where we can talk about OpenFeature and OpenTelemetry. You're completely right that your flag state for that user or that session at that moment in time is part of your configuration bill of materials, whatever you call it. It's what you need to be able to reconstruct things, if you need to go back and look at what happened, why some error or exception was thrown. It does become part of that black box of all the stuff that you'd expect to get in a trace. And so, with things like integrating into OpenTelemetry, those flag states that that particular mobile application had in memory at
Starting point is 00:23:08 that time become part of a trace. And so you can say, okay, yeah, we turned on this feature at midnight last night because we left the European Union or something equally annoying: the regulation changed, the law changed, we had to do something. And that's something you can use a flag management platform like Flagsmith or one of the other vendors to do, to say: right, at 11:59 I want this flag to be false, and at 12 midnight I want it to be true. And so
Starting point is 00:23:46 that obviously needs to be stored somewhere; you need to be able to recover it in an audit log or a trace or something like that, so you can understand it. And so a lot of the work that we've been doing on Flagsmith, the developer experience probably hasn't moved much in the last couple of years because it doesn't need to, because that stuff is there and works. A lot of the stuff we've been doing is around things like integrations with OpenTelemetry or Grafana, where those more unusual, difficult questions of what was the state of the application for this user at this point in time,
Starting point is 00:24:35 we can give a good answer to that. Yeah. For me, that was a nice segue, because initially, when we talked years ago, the reason why we were enriching distributed traces with feature flag information was mainly for troubleshooting and reporting purposes: how many people were now seeing the new feature, and did it have any positive or negative impact, or did it fail?
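[Editor's note] The enrichment described here, recording which flags a request saw on its trace, can be sketched with a stand-in span object. The attribute names are modeled loosely on OpenTelemetry's feature-flag semantic conventions, but check the current spec before relying on exact names; everything else below is invented:

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    """Toy stand-in for a tracing span (e.g. an OpenTelemetry span)."""
    name: str
    attributes: dict = field(default_factory=dict)

def record_flag_on_span(span: Span, key: str, variant: str, provider: str) -> None:
    # Attribute names modeled on the OpenTelemetry feature-flag semantic
    # conventions (feature_flag.key / feature_flag.variant / provider name).
    span.attributes["feature_flag.key"] = key
    span.attributes["feature_flag.variant"] = variant
    span.attributes["feature_flag.provider_name"] = provider

span = Span("GET /checkout")
record_flag_on_span(span, "new_checkout", "on", "flagsmith")
```

With the flag state attached to the span, a backend can later slice errors or latency by flag variant, which covers both the troubleshooting and the compliance questions discussed in this episode.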
Starting point is 00:25:07 And if so, was it because of this feature flag, yes or no? So we are enriching distributed traces, and maybe logs, with this information. The additional use case you mention is the whole compliance use case, if compliance is the right word, I think it is: that you can now prove, with a distributed trace enriched with feature flag information, that a certain user was served a result with this feature flag turned on, to enforce a certain new regulation, let's say. And I think it's not only regulatory bodies. In the e-commerce space, if you think about a new iPhone launch, I'm pretty sure all the vendors have a certain contract
Starting point is 00:25:51 that they are only allowed to put the iPhone out there at a particular point in time. Nobody can sell earlier or tickets or something like this. And so the trace becomes even more important because you can also use this to prove that you have done your job right,
Starting point is 00:26:06 that you are within your contractual obligations. And I never thought about this until you just mentioned this. Yeah, I mean, it's funny you mentioned the iPhone one because there's always one vendor in one country that presses the button and does it like 24 hours too early and then everyone's got the specs. And then they're obviously never going to be able to buy anything from Apple ever again.
Starting point is 00:26:28 Yeah, exactly. Yeah, I mean, especially with distributed systems, ironically, that's gone from a very easy problem to solve to a quite hard one. It used to be like, well, just turn that version of the site off and bring that version of the site up and restart Apache. That's how you used to do that 20 years ago. But with more distributed systems now, it becomes an increasingly non-trivial thing to solve, for sure. Without flags, that is, right?
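[Editor's note] Ben's midnight-cutover example earlier, false at 11:59, true at midnight, with the change recoverable later, can be sketched as a timestamped schedule that doubles as the audit record. The dates and flag semantics are invented:

```python
from datetime import datetime, timezone

# Each entry is (effective-from, value). Kept forever, this list is also
# the audit trail for "what was this flag at time T?".
SCHEDULE = [
    (datetime(2025, 1, 31, 0, 0, tzinfo=timezone.utc), False),
    (datetime(2025, 2, 1, 0, 0, tzinfo=timezone.utc), True),  # midnight cutover
]

def flag_value_at(when: datetime) -> bool:
    """Replays the schedule up to `when` to reconstruct the flag's value."""
    value = False  # default before any entry applies
    for effective_from, scheduled_value in SCHEDULE:
        if when >= effective_from:
            value = scheduled_value
    return value
```

An auditor asking about 23:00 the night before gets False; one minute past midnight gets True. Real platforms persist this in a durable audit log rather than an in-memory list, but the reconstruction idea is the same.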
Starting point is 00:26:59 You brought up OpenFeature in conversation earlier. OpenFeature is an open source CNCF project that was launched three years ago now, at KubeCon Valencia, by a couple of organizations, including Dynatrace. You were also one of the early adopters in supporting OpenFeature. Just for clarification, because I sometimes also see that people get the difference wrong between OpenTelemetry and an observability platform:
Starting point is 00:27:32 OpenTelemetry is a framework that standardizes the way you're collecting data, but it's not an observability platform. You still need to send this data someplace to then analyze it. And the same is true with OpenFeature. OpenFeature is a standard that allows developers to put feature flags in their code without, let's say, a link to a vendor implementation. But still, OpenFeature on its own doesn't do anything without a backend. There might be open source backend implementations, and the same is true for open source observability platforms
Starting point is 00:28:07 that can consume OpenTelemetry data, but you still need that backend. First of all, for clarification: did I miss anything in that explanation? No, that was a great summary, yeah. I mean, the original goal, really back in the early days, was to try and design an interface standard for
Starting point is 00:28:33 the SDKs, so that it was easy not just to switch provider, but also to be able to swap out the flag provider implementation. Just to give people a good example of that: if you're wanting to run isolated unit tests that don't have external network dependencies, being able to swap out a real-time flag provider like Flagsmith with, say, a flat file of flag values that you want to run your unit and integration tests against.
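[Editor's note] The provider swap Ben describes, a real backend in production and a flat file in tests, can be sketched with a tiny provider interface. This is a toy illustration of the pattern, not the actual OpenFeature API; class and method names are invented:

```python
import json
import os
import tempfile

class StaticFileProvider:
    """Test-time provider: flag values come from a flat JSON file,
    so unit tests have no network dependency."""
    def __init__(self, path: str):
        with open(path) as f:
            self._flags = json.load(f)

    def get_bool(self, key: str, default: bool = False) -> bool:
        return bool(self._flags.get(key, default))

# In production, a remote provider (Flagsmith, LaunchDarkly, ...) would sit
# behind the same get_bool interface; here we only exercise the test-time swap.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"new_checkout": True}, f)
    flag_file = f.name

provider = StaticFileProvider(flag_file)
enabled = provider.get_bool("new_checkout")
os.unlink(flag_file)
```

Application code written against the shared interface never knows which provider is behind it, which is exactly the decoupling the OpenFeature standard aims for.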
Starting point is 00:29:15 That's something that is a very reasonable thing to expect and ask for from a feature flagging platform. And one of the things that we kind of underestimated when we started the project was how much energy those SDKs would consume, and the expectations of quality and capability for those SDKs. As a small team, it was very difficult for us: people would come to us and say, I want to be able to swap out my Rust SDK with some static flags. And it's like, well, again, that's reasonable, but we've only got a finite amount of engineering resource. And so OpenFeature, as well as providing a common interface standard that people can write their code to, with the ability to swap out providers, also adds a load of flexibility; the static file unit testing is a
Starting point is 00:30:14 good example of that. One of the other great things about it is that, because it's not the flag implementation itself - it still needs the Flagsmith SDK or the LaunchDarkly SDK or whatever - there are parts of the runtime it has access to, which means it can do things like send that trace into OpenTelemetry without us as Flagsmith, as a vendor, having to do that. So for us, having written those adapters for the different languages to plug our SDKs into OpenFeature, some days I wake up in the morning and look at the OpenFeature repository and it's like, oh, this language's OpenFeature SDK has suddenly acquired a new capability because someone in the community implemented it.
Starting point is 00:31:15 And if you're using Flagsmith as a provider, you get that for free; we get that for free. It's a perfect example of how open source makes the world go round. So there's a bunch of stuff there that, especially for a relatively small team like ours, is just super great. And we just believe that the core philosophy of the project is right.
Starting point is 00:31:56 This is a perfect example of where standards work. I mean, standards, as I would describe them, are just things that generally become the kind of lingua franca of dealing with flags. And so, yeah, it's been great to watch it grow and to see its capabilities grow. And you were asking about where feature flagging is in terms of its journey and its maturity: the sorts of things that we're talking about within the OpenFeature community are probably a good proxy for where that is, because the questions are becoming much more specific, around observability, around driving and reporting A-B testing and things like that. And so, yeah, it's kind of in the same way that
Starting point is 00:32:54 Flagsmith went through that maturity of just being able to do flags, and then taking it to become more enterprise-rich, or being able to drive and integrate with, say, A-B testing platforms or A-B testing data stores. OpenFeature is going through that same process. But it's kind of a little bit more difficult in that regard, because - we talk about this a lot within the OpenFeature team - there's trying to decide where the boundary ends, which is a super critical design decision for the project: when to say, that's out of our remit, there's another project that would be more relevant for that, or that's just not relevant for us to get into. In the first couple of years of the project, there was just a ton of work to do, and it was all very easy decisions and easy choices, almost non-choices really, in that this just needs to exist
Starting point is 00:34:02 and, you know, you maybe argue a little bit about specific schema things or what have you. But now it's getting to the interesting questions of: should we support A-B testing, and to what degree should we support it? Also, the technical solution has become more complicated, because there are different opinions about how you should do that sort of thing. But it's been a super valuable experience for me, watching that project grow, for sure.
Starting point is 00:34:52 I have a final thought or question, an area that I want to go to towards the end of the podcast. Maybe I misheard you earlier, but I think what you said is that feature flagging, technically, is a solved problem. We have simple feature flags, we have complex evaluations, you can do all sorts of things, we have the auditing. Yes, as a feature flag management vendor, you have certain integrations you need to build, but feature flagging itself is a solved problem. Does this mean that innovation, from a tooling and technology perspective, is kind of getting towards an end? But is this the chance - because you said earlier you're now
Starting point is 00:35:25 doing a lot of consulting - you don't want to call yourselves consultants, but you're really advocating and helping large organizations to change the way they do software development and delivery. Is this the next evolutionary step? We have the technicalities down, and now it's about really helping organizations to adopt this technology. It's like going back 20 or 30 years: we had unit testing frameworks, but the frameworks
Starting point is 00:35:55 on its own, they don't do anything. You need to teach developers how to do test driven development. I mean, I think that's a great analogy actually because you're right like um uh you know i don't know how old j unit is um it's got to be it's got to be 20 years old right and at least yeah uh you um it's still you know not uncommon to i mean but actually really really common you know flagsmith came out of an agency that i was running and from time to time we do not uncommon to, I mean, actually really, really common.
Starting point is 00:36:27 Flagsmith came out of an agency that I was running and from time to time we do help companies through software audits, security audits, technical audits, things like that. And I mean, even in large banks, I won't name any names for certain, but it's really, really, really kind of uncommon to find uh software projects with with good unit test coverage right like um and you know if you're writing some little
Starting point is 00:36:53 side project on a weekend, then fine, but if you're writing, I don't know, KYC workflows for a bank, then you probably want to have unit tests. And so, yeah, I think it's a great analogy. Unit tests are a little bit like brushing your teeth, right? When you're a kid, you know you should do it. You don't want to, you don't really get any benefit out of it. But it's just something that you know is a good idea.
Starting point is 00:37:30 Feature flags are a little bit more interesting, because they're more obvious in their payback for engineering teams. If I'm waiting for another team, or waiting for a microservice, to finish this bit of work, and it's going to take them a week, I can put my stuff behind a flag, push it live, get on with my life and go do the next thing, right? Unit tests don't really give you that immediate kind of win. Or if I need to turn this feature off in Australia or Canada, I can do that really quite easily with feature flags. But I do think that, yeah, you're definitely right: it's kind of an advocacy thing now, almost more than, you know, how do they work?
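The payback Ben describes here, shipping unfinished work dark behind a flag or switching a feature off per region, really is just a conditional around the new code path. A minimal sketch in Python; the in-memory flag store below stands in for a real provider SDK (Flagsmith, or any OpenFeature-compatible client), and all of the flag names and fields are illustrative, not any vendor's actual data model:

```python
# Illustrative flag store: in a real system this state lives in the
# provider's dashboard and is fetched by an SDK, not hardcoded.
FLAGS = {
    "new_checkout_flow": {"enabled": False},  # merged, but dark until another team is ready
    "beta_search": {"enabled": True, "off_in": {"AU", "CA"}},  # regional kill switch
}

def is_enabled(flag_name, region=None):
    """Evaluate a flag, failing closed: unknown flags are treated as off."""
    flag = FLAGS.get(flag_name)
    if flag is None:
        return False
    if region is not None and region in flag.get("off_in", set()):
        return False
    return flag["enabled"]

# At the call site the flag is just an if-statement around the new code path.
def checkout(cart, region="US"):
    if is_enabled("new_checkout_flow", region):
        return "new checkout flow"  # deployed to production, but not yet live
    return "old checkout flow"
```

The point of the pattern is that `checkout` can ship to production a week before the other team finishes; flipping `enabled` to `True` later releases the feature without a deploy.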
Starting point is 00:38:33 How do you do them? You know, most engineers now, even if they've not used them themselves, when we do a walkthrough of the platform, there's no, oh my God, I didn't expect it to do that, right? It's like, yeah, that's all fairly obvious: an SDK and an API key and all that sort of stuff. And so the interesting thing is that when it comes to advocacy, everyone in that industry is pulling in the same direction. We're advocating it, and
Starting point is 00:39:07 our competitors are advocating it, and OpenFeature is advocating it, and all this stuff. And so that makes life a little bit less competitive, in a way, because everyone's pulling in the same direction. A lot of this boils down to one of the questions that we'll obviously never get an answer to, but that we're always debating within Flagsmith: what percentage of active software projects in the world are currently using some form of either feature flags or feature-flag-like stuff, right? If you were to squint, you'd go, yeah, that's a feature flag.
Starting point is 00:39:51 And we don't know what that number is, but I still think it's low double digits, right? Quite low double digits. And so that's kind of how we perceive it at the moment, really. And I do think, conceptually and in terms of feature set, most of the platforms out there might do things slightly differently, and they might have slightly different concepts around certain things, but, you know,
Starting point is 00:40:49 we're always trying to make the platform lower latency and more reliable, and make things like unit testing easier: quality-of-life stuff for engineers. Rather than, you know, suddenly there's this new version. We're currently on version two-dot-whatever, and I can't imagine there ever being a three, right? Unless we change the versioning system, which maybe we'll do for that reason. Yeah, but it also proves the point that I made earlier, right? I think technically you've reached almost technical completeness.
Starting point is 00:41:31 but the big hurdle now is to advocate, and it seems the whole industry is advocating for this. It's the same that we are advocating for observability-driven development. You need to, from the start, understand and have a strategy on how to observe your software and you need to define what makes your software healthy or unhealthy because if you don't do this you will patching it later on it's not easy and and and it's just like flying blind and the same is true with feature flags legs feature of leg driven development is something that we probably see
Starting point is 00:42:05 going forward. Yeah, we are almost at the end, and Brian, I haven't heard much of you today. You've just been sitting there, listening and soaking everything in. No, there were several questions I was going to ask that then got answered. But I guess the final question I have, right, so this is the scenario I can imagine: I imagine it's pretty easy to get set up and running on initial feature flags. But the way my mind works, the more I start thinking about it, the deeper I start thinking about it. It's like, well, what should we do as a feature flag?
Starting point is 00:42:38 What should we not do as a feature flag? When you start thinking about dependencies, and then maybe even feature flag debt and all these other things, my mind starts going out of control. So what is out there for people who are looking to do feature flags? Is it consulting services that companies like Flagsmith offer, or do you hire a consultant? Or is it just communities? How do people learn to grow and mature with feature flags so that they don't dig themselves into a hole they can't get out of later? We don't offer consulting services
Starting point is 00:43:14 as a standalone thing. We have customer success folk who help our customers, and then we have a Discord community where, every now and again, people drop in and ask a question related to that. There is some tooling around it that's fairly common amongst providers. For example, being able to label flags as permanent, because there are some use cases where flags are going to live for the lifetime of the project.
Starting point is 00:43:54 And then generally, providers have tooling around stale flags. In Flagsmith anyway, I can speak for Flagsmith: if a flag's not marked as permanent, but it hasn't had a state change for a particular amount of time, I can't remember exactly how long it is at the moment, you can get a list: these flags, no one's touched them for like six months, and you haven't marked them as permanent. That's probably a smell, there's definitely an engineering smell there that you should go and figure out. And then, again, you can associate flags with team members in Flagsmith.
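The stale-flag report Ben describes reduces to a simple rule: any flag not marked permanent whose state hasn't changed within some window is a cleanup candidate. A sketch of that check, assuming flag metadata is available as plain records; the field names and the six-month window are illustrative, not the actual Flagsmith data model:

```python
from datetime import datetime, timedelta

STALE_AFTER = timedelta(days=180)  # "no one's touched them for like six months"

def stale_flags(flags, now):
    """Return names of non-permanent flags untouched for longer than STALE_AFTER."""
    return [
        f["name"]
        for f in flags
        if not f.get("permanent", False)
        and now - f["last_state_change"] > STALE_AFTER
    ]

flags = [
    {"name": "maintenance_mode", "owner": "ops", "permanent": True,
     "last_state_change": datetime(2023, 1, 1)},   # lives for the project's lifetime
    {"name": "new_onboarding", "owner": "andy", "permanent": False,
     "last_state_change": datetime(2023, 6, 1)},   # forgotten rollout flag: a smell
    {"name": "dark_mode_beta", "owner": "brian", "permanent": False,
     "last_state_change": datetime(2024, 11, 20)}, # recently touched, leave it alone
]

print(stale_flags(flags, now=datetime(2024, 12, 1)))  # → ['new_onboarding']
```

Carrying an `owner` field on each record is what makes the follow-up Ben mentions possible: the report can say who to ask about each stale flag.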
Starting point is 00:44:39 So you can go, oh, Andy created this flag like two years ago, no one's touched it for a year, it's in this report, I'm going to ask him what happened. And he'll go, oh yeah, that went live like 18 months ago, I just completely forgot about it. It's fine, we'll put it in the next pull request.
Starting point is 00:45:01 We'll just turn that on and we can remove the flag. So there is some amount of tooling around that. But I'm going to repeat myself again: these things are societal as much as tooling, getting into that habit of part of your development cycle being, what's in that stale flags list? The lovely thing about pull requests and being able to deploy software whenever you
Starting point is 00:45:41 want is that you can just do these little rhythms all the time, and flags help you increase that rhythm. One of the things they add, and that you sort of get used to doing once you start using them, is: write the feature, write the unit tests, get it into production, and then a month later or two weeks later, remove the flag and remove the conditional from your code. And it's when you get that additional step as part of your workflow that it just becomes part of the housekeeping that you do. And, you know, people might be thinking, oh God,
Starting point is 00:46:37 you know, that's the last thing I want, another thing to do. And, you know, I'm not massively into the AI hype train, I know it's a bit more fashionable now to be in that camp, but I'm sure we're pretty close to having Claude or Gemini or GPT being able to look at this, in the same way that they're really good at writing pull request summaries now. That's a great use case for AI.
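In outline, that automation is: walk the flag metadata, find long-stale flags, and draft the cleanup work for a bot or an LLM to turn into a pull request. A very loose sketch, where every field name is hypothetical and the flag records are stand-ins; this is not the Flagsmith API:

```python
from datetime import datetime, timedelta

def draft_cleanup_tasks(flags, now, stale_after=timedelta(days=365)):
    """Draft one cleanup task per long-stale, non-permanent flag."""
    tasks = []
    for flag in flags:
        if flag["permanent"] or now - flag["last_changed"] < stale_after:
            continue  # permanent or recently touched: leave it alone
        tasks.append(
            f"Remove flag '{flag['name']}' (owner: {flag['owner']}): "
            f"delete the conditional in code, then archive the flag in the dashboard."
        )
    return tasks

flags = [
    {"name": "maintenance_mode", "owner": "ops", "permanent": True,
     "last_changed": datetime(2022, 1, 1)},
    {"name": "legacy_export", "owner": "andy", "permanent": False,
     "last_changed": datetime(2023, 5, 1)},  # untouched for well over a year
]

for task in draft_cleanup_tasks(flags, now=datetime(2025, 2, 1)):
    print(task)
```

The hard part a real tool would add is the code side: locating the conditional that reads each flag and generating the diff that removes it.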
Starting point is 00:47:12 Just go: right, I can understand Flagsmith's state, I can see your code, I can see all the changes that are going through there, and this flag hasn't changed for a year and a half. Here's the pull request to remove that flag, and then here's the, I don't know, API call or Terraform script to turn it off in Flagsmith. That's definitely, I'm on that AI train, definitely. That sounds like a great thing. And probably, yeah, I mean, we're probably there in terms of
Starting point is 00:47:40 Like, that sounds like a great thing. And probably, yeah, I mean, we're probably there in terms of the abilities of the lms now yeah cool cool yeah great to end it on a on a hype topic god no i wasn't gonna i said i'm not gonna mention ai this hour but and i think it will also make a good a good soundbite I took a couple of notes today throughout the presentation on which soundbites to use but maybe this one in the end is the one that we will pick Ben, thank you so much
Starting point is 00:48:14 for being on the podcast. It's always amazing how fast time flies, and I know we will most likely see each other at KubeCon in London, or at least we'll figure out a way to meet there. Will Flagsmith be there as well, with a booth? We'll definitely have some folk there. The KubeCon booths are not the cheapest in the world,
Starting point is 00:48:38 so we're not exhibiting, but we will be there in body and spirit, just not in cold, hot cash, I guess. Maybe people will find you around the Open Feature booth because they will have the project pavilion there. Yeah, so there is definitely
Starting point is 00:48:59 that's been another thing just to add: it's been great to see the evolution of OpenFeature in the context of KubeCon, and there are significant OpenFeature-related things going on at KubeCon. If you're interested in hacking on the project,
Starting point is 00:49:22 working on the SDKs, or getting advice on the best way to use the tool, check out the KubeCon agenda, because there's significantly more OpenFeature-related stuff there than there has been in the past. Yeah, and if you get your people at KubeCon to just wear top-to-bottom
Starting point is 00:49:47 Flagsmith logos and everything, they could just be a mobile booth. You can have iPads or tablets on their chests so they can demonstrate, right? Yeah. You don't need to be stationary.
Starting point is 00:50:03 Alright, anyway, thank you so much for being on the show, really appreciate it, it's just fascinating. Yeah, no, thanks for having me, and we look forward to hearing what comes of this in the future.
Starting point is 00:50:13 Glad to hear you said you're all in on AI for everything. Yeah. And thanks everyone for listening. Talk to you soon. Bye-bye. Bye-bye.
Starting point is 00:50:25 Bye.
