I have been exploring how we can successfully incorporate exploratory testing into a very “structured” environment (by which I mean loads of scripted test cases and “auditable” result sets in one very large test management tool).
We started by examining the advantages and disadvantages of scripted testing in our context. There were some discussions and debates. It made me think about how I design my scripted tests, and it has clarified for me some really good reasons why I do it the way I do. I realise too, of course, that very shortly I may come across a project where it absolutely does not fit!
I have always constructed my scripted tests to focus on a single technique. There are a few techniques in my toolbox that get pulled out regularly, but I will try to adjust or swap them wherever necessary. As an example: a single test per input field, focusing only on the validations for that field (a single “domain” test); another test for a single screen, focusing only on trying to execute every action available from that screen; and so on.
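As a rough sketch of what a single-technique “domain” test might look like in code (the field, its 0–120 range, and the validation messages here are all hypothetical, invented purely for illustration):

```python
# Hypothetical sketch of a single-technique "domain" test:
# one scripted test exercising only the validation rules for
# one input field (an imaginary "age" field, 0-120 allowed).

def validate_age(value):
    """Stand-in for the application's validation of an age field."""
    if not str(value).isdigit():
        return "error: not a number"
    age = int(value)
    if age > 120:
        return "error: out of range"
    return "ok"

def test_age_field_domain():
    # Valid boundary values are accepted...
    assert validate_age("0") == "ok"
    assert validate_age("120") == "ok"
    # ...and invalid values just outside the domain are rejected.
    assert validate_age("121") == "error: out of range"
    assert validate_age("-1") == "error: not a number"
    assert validate_age("abc") == "error: not a number"
```

The point is that the test touches nothing but the one field's validation rules, so a failure points straight at that aspect of the application.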
Why do I like this approach?
- I can focus on a specific aspect at a time – “different types of tests, catch different types of bugs”
- Having a number of techniques in the toolbox that I would always try to apply helps me to figure out what it is about the application that I don't yet know and need to learn more about.
- I don't always have to be 100% specific in terms of the steps. I try to write my functional tests in two steps – one that performs the function, and one that verifies it worked, ideally through a second route. This means I don't have to say things like “click this button” or “capture xyz”. It also means that if the screen changes, I only change my “screen-related” scripts, not my functional ones.
- It's easier for me to think of a strategy if I have my test ideas split into a variety of techniques. I can start out nice and get meaner – for example, run my screen type tests first before I start tackling the domains (if that is what is called for).
- Since I am trying to encourage “brain engaged” testing, I want to strike a balance between giving enough guidance on a test idea and being too specific. In a situation where someone else executes my scripts, I have found that a focus on a technique allows me to strike this balance – as long as I take some time to explain what my specific interpretation of the technique is about.
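The two-step functional test idea in the list above (perform the function, then verify it via a second route, with screen detail kept out of the functional script) could be sketched roughly like this – all the class and method names here are hypothetical, invented just to show the separation:

```python
# Hypothetical sketch: keep "screen-related" detail in one layer so the
# functional test describes intent ("create a customer") rather than
# clicks ("press this button"). If the screen changes, only the screen
# layer changes, not the functional test.

class CustomerScreen:
    """Screen layer: the only place that knows about fields and buttons."""
    def __init__(self, backend):
        self.backend = backend

    def create_customer(self, name):
        # In a real UI test this would fill in fields and click "Save".
        self.backend.append(name)

class CustomerReport:
    """A second, independent route used only for verification."""
    def __init__(self, backend):
        self.backend = backend

    def contains(self, name):
        return name in self.backend

def test_create_customer():
    backend = []                      # stand-in for the application's data
    screen = CustomerScreen(backend)
    report = CustomerReport(backend)
    # Step 1: perform the function.
    screen.create_customer("Acme Ltd")
    # Step 2: verify through a different route (a report, not the screen).
    assert report.contains("Acme Ltd")
```

The functional test stays stable because it never mentions buttons or field names; those live only in the screen layer.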
There are probably some more benefits I haven't yet thought about, and probably some downsides too... and I know I am certainly not the only person doing things this way... I would love your comments and thoughts.
p.s. I will still give feedback on how we incorporate Exploratory testing... watch this space!