Modelling for Testers

In this post, we’ll go on a short journey through the concept of formal modelling and how you can apply modelling techniques to help you test more effectively.

Firstly, what is a model?

Traditionally used in mathematics and science, a model helps communicate and describe something. A very basic model that we’ve all come across is the one used to calculate the approximate volume of any cardboard box: width × height × length. If we stick some random numbers into the formula, the output is the volume of a randomly generated, virtual cardboard box. If we take a cardboard box and measure each dimension, we can calculate the approximate volume of a real box – that’s pretty useful, especially if you’re a company like Amazon!

[Image: the volume formula – width × height × length]
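Written down as code, that basic model might look something like this (a minimal sketch – the function name and the example dimensions are mine, not anything from a real system):

```python
def box_volume(width: float, height: float, length: float) -> float:
    """Approximate volume of a cardboard box: width x height x length."""
    return width * height * length

# Stick some random numbers into the formula...
print(box_volume(30, 20, 40))  # 24000 -- the volume of one virtual box
```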

Models don’t have to be formulae; they can be diagrams, information flows or physical objects. It took me a little while to make the connection, but that Airfix kit you might’ve built as a child (or grown-up) is called a model because it is a model of the real thing.

Using Provided Models to Aid Testing

Not only are there different ways to model something, there are also different uses for them. Requirements are written models that describe the expected behaviour of a system. This type of model is used to communicate how something should work so that it can be made. These are really useful models for us as testers as they give us an expected outcome. Remember that Airfix kit? Well, here’s a formal model that you use to help build your physical model.

[Image: the instruction sheet from an Airfix kit]

We can test the model before a line of code is written by asking questions of the model and testing its limitations. By doing this early, we can extend the model to be more specific, or we can help correct the model where it is deficient. This helps increase the likelihood that what we build is actually what was required.

When it comes to testing the output of the requirements, we can reuse those models to determine if what we built meets the expectations of the model. Traditionally we might call these Test Cases: for a given requirement (formula) and set of data (dimensions), we expect the output of the software to be deterministic (volume). This is what we testers call checking and it is a prime candidate for automation (but that’s a different topic!). It may be better still to use this as a way to drive the writing of the system’s code – I’m sure you’ve come across TDD and its many variants.
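As a rough illustration of that checking idea, here’s how a requirement plus data might be expressed as an automated check with pytest. The module name boxes is a hypothetical stand-in for wherever the function under test lives; the expected values follow directly from the formula:

```python
import pytest

from boxes import box_volume  # hypothetical module containing the function above

@pytest.mark.parametrize("width, height, length, expected", [
    (30, 20, 40, 24000),  # a typical parcel
    (1, 1, 1, 1),         # the smallest sensible box
])
def test_volume_meets_the_requirement(width, height, length, expected):
    # Requirement (formula) + data (dimensions) => deterministic output (volume)
    assert box_volume(width, height, length) == expected
```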

Creating Our Own Models to Aid Testing

As a tester, even if you hadn’t made the connection yourself before, you should now realise that you’re using models all of the time in your daily routine. Models provided to us as testers are great, but modelling techniques are even more useful.

You may ask, if the models have been provided for me, why would I need to model anything myself? Great question! As testers, our primary responsibility is to provide information about the product, focussing on the risks to its value. If the scope of the testing activities that we partake in is guided only by the models that we’ve been provided, we run the risk of only reporting on what we thought we knew and not the actual behaviours of the system.

To counteract this risk, we testers can look to use modelling techniques ourselves, to explore and describe the system as it is, not just what it was supposed to do. Here’s a really simple modelling exercise that you can do right now.

Now It’s Your Go!

Firstly, pick a web site to model against. If you work on one, pick that, because everything you do in this example will add real value to your testing efforts. If not, pick something relatively simple, maybe your favourite online store.

Next, pick a part of the site to model. Keep it to a single page for now, so if it’s an online store, use the search page or the product details page, and use the mobile version as it’ll reduce the work we need to complete. I’m going to pick Alan Richardson’s Blog to demonstrate the exercise.

Head over to your chosen page in Chrome and, once it’s finished loading, open the developer tools (instructions are here) and click the Network tab. If they’re not there already, add the Domain and Method columns to the network table. Order the list by Domain, either ascending or descending.

[Animation: adding the Domain and Method columns to the Network tab]

Clear the log in case there’s anything in there already, then refresh the page. This will list out all of the network calls that the page makes client side and it’s these we’re going to model.

In your favourite diagramming tool or on a large piece of paper, stick the page we’re modelling in the centre. I’m going to use a mind map for now.

Head back to Chrome and take note of each of the domains your page calls out to. Create a node in the diagram for each domain and that’s the first part of your model finished.

[Image: a mind map with the page at the centre and a node for each domain it calls out to]

You now have a useful, visual record of all of the domains that your site calls out to and you can sit with your friendly neighbourhood developer or architect to determine if there’s anything that looks odd.
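If you’d rather script this step, the Network tab lets you export everything it recorded as a HAR file (the Export HAR option), which is just JSON. Here’s a minimal sketch that pulls the unique domains out of it – the filename page.har is my own assumption:

```python
import json
from urllib.parse import urlparse

# Load the HAR file exported from the Network tab.
with open("page.har", encoding="utf-8") as f:
    har = json.load(f)

# Each entry is one network call; netloc gives us the domain it went to.
domains = sorted({urlparse(entry["request"]["url"]).netloc
                  for entry in har["log"]["entries"]})

for domain in domains:
    print(domain)  # one node per domain in your diagram
```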

A More Useful Model

So that exercise didn’t take too long, did it? The great thing is, it won’t take much longer to make it even more useful. Follow these steps to add to your diagram and make that conversation even more interesting.

For each domain you’ve added to your model, refer back to the Network tab in Chrome and make a note of the type of request made and the address up to any query string. You may find it’s useful to group some types: images and gifs are a good example. You can also see from my example that I’ve called out a redirect, because I’m using the mobile view in Chrome.

[Image: the mind map extended with request types and addresses under each domain, including the mobile redirect]

Don’t stop there, keep adding more information to your model. There’s lots of information that you can add to spark off a conversation. Here are some ideas on what you can add to your model as you explore the software even deeper (a sketch of how you might automate some of these checks follows the list):

  • highlight a domain or address that you don’t recognise, for questioning later;
  • call out a request to the same address for the same payload made more than once;
  • annotate the size of a response if it falls outside some boundaries:
    • a really small response (single bytes);
    • a really large response (varies, but anything over a few hundred KB is worth questioning);
  • flag a response that takes a long time – start with anything over 300ms (set the Waterfall column to show Total Duration);
  • flag a response that returns a 4xx or 5xx error code;
  • note any further redirects;
  • note if the response body mentions an error.
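As promised above, here’s a rough sketch of how some of those checks could be automated against the same exported HAR file. The thresholds are just the ones suggested in the list and are entirely adjustable:

```python
import json
from collections import Counter

with open("page.har", encoding="utf-8") as f:
    entries = json.load(f)["log"]["entries"]

# Count repeated requests to the same address.
repeats = Counter(e["request"]["url"] for e in entries)

for e in entries:
    url = e["request"]["url"]
    status = e["response"]["status"]
    size = e["response"].get("bodySize", 0)  # -1 means the size wasn't recorded
    duration_ms = e["time"]

    if status >= 400:
        print(f"ERROR {status}: {url}")
    if duration_ms > 300:
        print(f"SLOW ({duration_ms:.0f} ms): {url}")
    if 0 <= size < 10:
        print(f"TINY RESPONSE ({size} bytes): {url}")
    if size > 300 * 1024:
        print(f"LARGE RESPONSE ({size // 1024} KB): {url}")

for url, count in repeats.items():
    if count > 1:
        print(f"REPEATED x{count}: {url}")
```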

Now what?

That’s entirely up to you! The important thing to remember is that whilst you’re building this model, you’re actually doing exploratory testing. You’re learning about the product and you’re questioning it. In this simple example we’re only questioning the various calls that the product is making client side, but they are good questions.

Amongst many other examples, this specific activity can help find performance issues through:

  • responses that take too long or are blocking
  • responses that are repeated
  • requests that are pointless and can be canned

It can also help find security issues:

  • if you’re making calls that should be https but they’re not
  • any redirects that are happening that you don’t expect
  • any other calls that don’t make sense or may be sending information that they shouldn’t

You can use the model to decide which calls you should manipulate to help you understand how doing so might impact the user experience (one way of doing this is sketched after the list):

  • try to block a call
  • slow a call down
  • intercept a call and manipulate the response
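Here’s the promised sketch: a minimal example using Playwright to block, slow down and rewrite calls while you explore the page by hand. Playwright is my choice here (any intercepting proxy would do just as well), and the analytics domain, stock endpoint and target URL are all hypothetical placeholders:

```python
import time
from playwright.sync_api import sync_playwright

def delay_then_continue(route):
    # Slow a call down: hold the request for two seconds before letting it go.
    time.sleep(2)
    route.continue_()

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)
    page = browser.new_page()

    # Block a call outright (the analytics domain here is hypothetical).
    page.route("**://analytics.example.com/**", lambda route: route.abort())

    # Intercept a call and manipulate the response (hypothetical endpoint).
    page.route("**/api/stock*",
               lambda route: route.fulfill(status=500, body="injected failure"))

    # Slow down every image to see how the page copes on a poor connection.
    page.route("**/*.{png,jpg,jpeg,gif}", delay_then_continue)

    page.goto("https://www.example.com")  # your page under test
    page.pause()                          # explore the degraded page by hand
    browser.close()
```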

And we’ve only covered client-side requests. By using the same technique, you can take a capability and slowly break it down, level by level, building your knowledge of the product. Share your knowledge with your team to help identify problems, and then use it to give you ideas on where to focus your testing based on dependencies and areas of brittleness.

I haven’t even mentioned how you can use these models to confidently share your test coverage – that’s a post for another time.

Why I like Testing in Production

Preamble

This post is not an internal environment vs production environment face-off, nor is it an attempt to convince you to change your beliefs on testing in production.

It is an attempt to show that testing in production can be another tool in the tester’s arsenal, to be used when the context fits and if the team you’re working with have the capability to do so now, or the desire to build that capability for the future.

If it does encourage you to investigate what testing in production might look like for you, then share your story – I’d love to hear it.

But you should never test in production…

There is a school of thought which prescribes that testing of an application should only be completed in internal integrated environments.

I’ve recently seen a post discussing control – arguing that new processes should never be tested in production. I’ll often hear about increased risk to the business and customers through shipping unfinished features. Occasionally someone will accuse me of treating our customers as guinea pigs.

Not forgetting data: I’m told that there’s a risk that my testing will skew production analytics for customer engagement, account tracking and stock levels if I’m testing in production.

These are all valid risks in their own context and each introduces varying degrees of impact should it be realised. There is nothing wrong with any of these arguments.

Where would you prefer to test?

Ask yourself, given zero risk, would you ever test in production for any reason?

My answer for this is, given zero risk, I would test everything I could in production. I would test new features, integrations of new features with old features, integrations of my application with other applications and dependencies. I would also conduct all of my non-functional testing in production: performance, load, security, etc. Why would I use an approximation of the live environment if I could use the real thing?

But of course zero risk doesn’t exist, so I’m going to take my utopia and start to break it down until I find a level of risk that is suitable for the context in which I would like to test. As part of that exercise, I would need to be clear on what I mean by testing in production.

I define testing in production as an encapsulation of two distinct activities.

  1. Testing in production of an as-yet-unlaunched, hidden version of the application that customers cannot see or use
  2. Testing in production following the launch of a new version of the application to customers

Both activities offer their own value streams but solve very different problems.

Everyone can benefit from, and should think about, spending some time on number 2. Your application is live, your customers are using it. Assuming you could learn something new about what you’ve already shipped, or even test out some of your earlier assumptions, why wouldn’t you want to do that in production? Run a bug bash in production, keep it black box (only things customers can do) if you’re particularly worried, and observe. You may find something that’s slipped through your net and, if you do, you’ve proven its worth.

Testing hidden features

It’s option 1 that I find most interesting. I’ve recently read an article introducing testing in production from the Ministry of Testing Dojo – Testing in Production the Mad Science Way. The article discusses two distinct models that you can implement to provide you with the means to test in production.

We’ve implemented a variation on the circuit breaker method referenced in the article. In doing so, we have gained the ability to use feature flags to determine which code path the application should go through and, therefore, which behaviours the customer has access to.

In its default state a feature flag is set to off. This means that the customer sees no change despite the new code having been deployed to production. Once the code is there, our circuit breakers allow us to turn features on client-side. This means that testers can go to production, set the feature flag to on for the feature they want to test, and happily test against production code for the duration of their session. Once testing is complete and the feature is ready to go live, we can change the configuration of the feature flag for all customers, safe in the knowledge that we can turn it off again if something were to go wrong. The deployment of configuration is quick and we have two mechanisms to propagate the rollback to our customers – either slowly as sessions expire, or we can force it through on their next page load. When rolling forward we only do so as sessions expire.
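Stripped right back, the flag-resolution logic behind that description looks something like the sketch below. This is an illustration of the idea rather than our actual implementation; the flag name and the shape of the config are made up:

```python
# Shipped dark: the code is live in production, but the flag defaults to off.
DEFAULT_FLAGS = {"new-checkout": False}

def feature_enabled(flag: str, central_config: dict, session_overrides: dict) -> bool:
    """A tester's session override wins, then the central config we flip at
    launch, then the safe default of off."""
    if flag in session_overrides:
        return session_overrides[flag]
    if flag in central_config:
        return central_config[flag]
    return DEFAULT_FLAGS.get(flag, False)

# A tester browsing production with an override sees the new code path...
assert feature_enabled("new-checkout", {}, {"new-checkout": True}) is True
# ...while every customer still gets the old behaviour until launch.
assert feature_enabled("new-checkout", {}, {}) is False
```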

In shipping this changed code we make the assumption that we’re able to determine that the introduction of the new code has not had a detrimental impact on the current feature set and customer experience. We derive this confidence through testing during the development phase of the new feature and through our automated checking suite, which runs in our pipeline. We also have a third line of defence: a set of automated checks for our core journeys, created by a central team who own our path to live.

This mechanism takes time to mature and we’ve definitely made a few mistakes along the way. With perseverance we’ve been able to ship fairly large changes to our production site, with no impact to customers, test those changes and then turn them on when we’re confident we’re ready to.

Whilst we can mitigate concerns such as customer-impacting stock levels by being careful to use only items that aren’t low on stock, there are still some structural areas which we do not test in production, such as peak load tests and updates to our feature switch and rollback/roll-forward mechanisms. Anything else will be considered on a case-by-case basis, discussed during the 3 Amigos within the team(s) and agreed on before actioning.

My thoughts

For some contexts, I prefer testing in production over testing in internal integrated environments because it provides me with these key benefits:

  1. The likelihood of my testing being blocked by an issue with a dependency is greatly reduced
  2. The data is at peak scope, complexity and usefulness
  3. Any bug that I find in the application under test is an actual and real issue 
  4. Any issues that I find with the environment will be having real impact to our customers and/or the business

In my experience, these benefits derive from flaws in the practices put in place to build and support internally integrated environments.

Internally integrated environments do provide their own benefits. There are scenarios and processes which I would be reluctant to test in production, and I’ve outlined some of those above. This article also does not discuss inside-out testing techniques, such as those applied directly to the code – unit tests and component tests.

How a Spanish joke reminded me of Software Testing

Before the hustle and bustle of the work day set in, a colleague of mine and I were discussing a picture someone had drawn on a post-it and left on his monitor.

[Image: the post-it note, with a thank-you message and a small drawing in the bottom corner]

The message was clear: someone wanted to thank him for his efforts recently, which is a lovely gesture. What wasn’t so clear was the little picture in the bottom corner. Within the context of the message it was easy: the picture represented someone in the water. However, take the message away and the picture becomes less clear. I described it as one of those balsa airplanes with a rubber-band propeller (indicated below), flying above the clouds.

[Image: the drawing again, with the balsa airplane interpretation indicated]

This led my colleague to introduce me to a wonderful joke that he was told when he was growing up in Spain. The joke goes like this:

Teacher: “Listen carefully: Four crows are on the fence. The farmer shoots one. How many are left?”
Little Johnny: “None.”
Teacher: “Can you explain that answer?”
Little Johnny: “One is shot, the others fly away. There are none left.”
Teacher: “Well, that isn’t the correct answer, but I like the way you think.”

Little Johnny: “Teacher, can I ask a question?”
Teacher: “Sure.”
Little Johnny: “There are three women in the ice cream parlor. One is licking, one is biting and one is sucking her ice cream cone. Which one is married?”
Teacher: “The one sucking the cone.”
Little Johnny: “No. The one with the wedding ring on, but I like the way you think.”

In the first part, the teacher’s context was that of an arithmetic problem. The teacher was likely expecting Little Johnny to say three, but Little Johnny may have been considering a less abstract interpretation of the information provided. Little Johnny hears the word ‘shot’ and his internal model converts that to a gun, a loud noise and the jittery nature of birds. In that context it’s perfectly understandable that Little Johnny gave the answer he did.

What was the trigger for this misunderstanding? I think it was that the teacher, unknowingly at the time, left their question ambiguous in the face of their audience. This is something Little Johnny takes advantage of in the second part, setting a trap for the teacher that leads to the inevitable punch-line. Little Johnny provides very specific information to describe the scene, building a picture or model in the head of the teacher and, in doing so, introducing biases of perspective that the teacher will use to answer the question. Little Johnny then comes in from left field with a question, which the teacher attempts to answer with the specifically limited information Little Johnny has provided, and laughter ensues.

Why is this an important lesson for Software Testers?

Each and every day, the life of a tester is filled with information that is provided either knowingly incomplete or, more dangerously, obliviously incomplete.

It’s very much our role as Software Testers to remember Little Johnny and the teacher when we’re communicating across the disciplines of the teams we work with. As we learn what it is that the product is expected to do, we must force ourselves to remember that it is very likely that our interpretation of the information is incorrect or incomplete, based on our own biases and perspective.

We can counter this by asking questions, even if we feel like it might come across as dumb to do so – I’d almost argue that this is exactly when we must ask questions.

The 5 Whys technique is a useful tool for understanding the primary objectives and drivers of an activity. With it we can challenge the assumptions made by ourselves and others and take them from the implicit to the explicit.

Specification by Example is another technique practised throughout the software industry to provide a consistent language for describing behaviours and expectations; however, I find that it’s rarely used to its full potential. Yes, Given/When/Then (GWT) scenarios can provide a suite of regression checks, but the real power is in the conversation that can be had between a group of people to, again, make the implicit explicit – this will be the subject of another post, so stay tuned!
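To make that concrete, here’s a tiny sketch of a GWT scenario written straight down as a pytest check – the Basket API is entirely hypothetical, and the interesting part is the conversation that produced the Given/When/Then in the first place:

```python
from basket import Basket  # hypothetical module under test

def test_discount_applied_to_large_baskets():
    # Given a basket worth more than £50
    basket = Basket()
    basket.add(item="kettle", price=60.00)
    # When the customer checks out
    total = basket.checkout()
    # Then a 10% discount is applied
    assert total == 54.00
```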

Even if we think we have a complete picture, the reality will be that we don’t. Rarely have I met anyone who can keep a whole network of related systems, dependencies, contracts and expectations in their head or even down on paper, in a sufficiently useful way, to remove any risk of misunderstanding or gaps in understanding.

That’s why for us Software Testers, our most useful tool can be our ability to explore the landscape in front of us, with a specifically chosen context, to build up a more complete understanding of the actual with respect to what we think we know.