Preamble
This post is not an internal environment versus production environment face-off, nor is it an attempt to convince you to change your beliefs about testing in production.
It is an attempt to show that testing in production can be another tool in the tester's arsenal, to be used when the context fits and when the team you're working with has the capability to do so now, or the desire to build that capability for the future.
If it does encourage you to investigate what testing in production might look like for you, share your story; I'd love to hear it.
But you should never test in production…
There is a school of thought which prescribes that testing of an application should only be completed in internal integrated environments.
I’ve recently seen a post discussing control, arguing that new processes should never be tested in production. I often hear about increased risk to the business and customers through shipping unfinished features. Occasionally someone will accuse me of treating our customers as guinea pigs.
Not forgetting data: I’m told that if I test in production, my testing risks skewing production analytics for customer engagement, account tracking and stock levels.
These are all valid risks in their own context, and each carries a varying degree of impact should it be realised. There is nothing wrong with any of these arguments.
Where would you prefer to test?
Ask yourself, given zero risk, would you ever test in production for any reason?
My answer is that, given zero risk, I would test everything I could in production. I would test new features, integrations of new features with old features, and integrations of my application with other applications and dependencies. I would also conduct all of my non-functional testing in production: performance, load, security, etc. Why would I use an approximation of the live environment if I could use the real thing?
But of course zero risk doesn’t exist, so I’m going to take my utopia and start to break it down until I find a level of risk that is suitable for the context in which I would like to test. As part of that exercise, I would need to be clear on what I mean by testing in production.
I define testing in production as covering two distinct activities:
1. Testing in production of an as-yet-unlaunched, hidden version of the application that customers cannot see or use
2. Testing in production following the launch of a new version of the application to customers
Both activities offer their own value streams but solve very different problems.
Everyone can benefit from, and should think about spending some time with, number 2. Your application is live and your customers are using it. If you could learn something new about what you’ve already shipped, or even test some of the assumptions you made earlier, why wouldn’t you want to do that in production? Run a bug bash in production, keep it black box (only things customers can do) if you’re particularly worried, and observe. You may find something that has slipped through your net, and if you do, you’ve proven the exercise’s worth.
Testing hidden features
It’s option 1 that I find most interesting. I’ve recently read an article introducing testing in production from the Ministry of Testing Dojo, “Testing in Production the Mad Science Way”. The article discusses two distinct models that you can implement to provide you with the means to test in production.
We’ve implemented a variation on the circuit breaker method referenced in the article. This gives us the ability to use feature flags to determine which code path the application should take and, therefore, which behaviours the customer has access to.
In its default state a feature flag is set to off, which means the customer sees no change despite the new code having been deployed to production. Once that code is live, our circuit breakers allow us to turn features on client-side. Testers can go to production, set the feature flag to on for the feature they want to test, and happily test against production code for the duration of their session. Once testing is complete and the feature is ready to go live, we change the configuration of the feature flag for all customers, safe in the knowledge that we can turn it off again if something goes wrong. Deploying configuration is quick, and we have two mechanisms to permeate a rollback to our customers: either slowly, as sessions expire, or by forcing it through on their next page load. When rolling forward we only do so as sessions expire.
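To make that mechanism concrete, here is a minimal client-side sketch of the kind of flag check described above. Everything in it is an assumption for illustration: the `/api/feature-flags` endpoint, the `flag-override` sessionStorage key and the flag names are invented, not taken from our actual implementation.

```typescript
// featureFlags.ts — illustrative sketch only; endpoint, storage key and flag
// names are hypothetical.

type FlagName = "newCheckout" | "redesignedSearch";

// Fetch the flag configuration deployed alongside the code; flags default to off.
async function fetchFlagConfig(): Promise<Record<string, boolean>> {
  const response = await fetch("/api/feature-flags"); // hypothetical endpoint
  return response.json();
}

// A tester can set an override for their own browser session only;
// every other customer keeps the default (off) behaviour.
function sessionOverride(flag: FlagName): boolean | undefined {
  const value = sessionStorage.getItem(`flag-override:${flag}`);
  return value === null ? undefined : value === "true";
}

export async function isEnabled(flag: FlagName): Promise<boolean> {
  const override = sessionOverride(flag);
  if (override !== undefined) {
    return override; // tester's session: honour the override
  }
  const config = await fetchFlagConfig();
  return config[flag] ?? false; // unknown flags stay off
}

// At the decision point the application picks a code path based on the flag,
// which is what keeps the new behaviour hidden until it is switched on.
export async function renderCheckout(): Promise<string> {
  return (await isEnabled("newCheckout"))
    ? renderNewCheckout()      // hidden version under test
    : renderCurrentCheckout(); // what customers see today
}

function renderNewCheckout(): string {
  return "new checkout";
}

function renderCurrentCheckout(): string {
  return "current checkout";
}
```

The important property is that the override lives only in the tester’s own session, so customers continue to get the default (off) path until the shared configuration is changed for everyone.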
In shipping this changed code we make the assumption that we’re able to determine that the new code has not had a detrimental impact on the current feature set and customer experience. We derive this confidence from testing during the development phase of the new feature and from our automated checking suite, which runs in our pipeline. We also have a third line of defence: a set of automated checks for our core journeys, created by a central team who own our path to live.
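As a rough illustration of what one of those core-journey checks might look like, here is a hedged sketch using Playwright. The journey, URL and selectors are invented for the example; this is one way such a check could be written, not the central team’s actual suite.

```typescript
// coreJourney.spec.ts — hypothetical smoke check for a single core journey.
import { test, expect } from "@playwright/test";

test("customer can search and reach a product page", async ({ page }) => {
  await page.goto("https://www.example.com"); // placeholder URL

  // Only exercise behaviour customers can already see; the hidden feature
  // behind the flag is deliberately not part of this check.
  await page.getByRole("searchbox").fill("running shoes");
  await page.keyboard.press("Enter");

  await expect(page.getByRole("heading", { name: /results/i })).toBeVisible();
  await page.getByRole("link", { name: /running shoe/i }).first().click();
  await expect(
    page.getByRole("button", { name: /add to (bag|basket)/i })
  ).toBeVisible();
});
```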
This mechanism takes time to mature and we’ve definitely made a few mistakes along the way. With perseverance, though, we’ve been able to ship fairly large changes to our production site with no impact to customers, test those changes, and then turn them on when we’re confident we’re ready.
Whilst we can mitigate concerns such as affecting customer-facing stock levels by being careful to use only items that aren’t low on stock, there are still some structural areas we do not test in production, such as peak load tests and updates to our feature switch and rollback/roll-forward mechanisms. Anything else is considered on a case-by-case basis, discussed during 3 Amigos within the team(s), and agreed before actioning.
My thoughts
For some contexts, I prefer testing in production over testing in internal integrated environments because it provides me with these key benefits:
- The likelihood of my testing being blocked by an issue with a dependency is greatly reduced
- The data is at peak scope, complexity and usefulness
- Any bug that I find in the application under test is a real issue
- Any issues that I find with the environment will be having a real impact on our customers and/or the business
In my experience, these benefits derive from flaws in the practices put in place to build and support internal integrated environments.
Internal integrated environments do provide benefits of their own, and there are scenarios and processes I would be reluctant to test in production; I’ve outlined some of those above. This article also does not discuss inside-out testing techniques that operate directly on the code, such as unit tests and component tests.
Well said, this article is awesome. Typically, in the web analytics space, option (2) is achievable. Using tools like Tealium, we can publish to the environment of our preference, where testing is achieved regardless of a swap in the actual environment. For the web, option (2) can also be achieved by a redirection method within the web config and an ON/OFF switch.
I once worked on a contract for a major professional services company to test the implementation of a minor upgrade to a system for a UK High Street bank client. There was no test environment, so I had to test in the Live environment. This meant that I could not start any testing until 3:30 in the afternoon, once all that day’s transactions from the client’s branches had been processed; by which time all the company’s staff had left for the day. I certainly put in the necessary hours for completion on that contract!