Friday, December 7, 2007

Adventures and experiments in ET

We have started an experiment on my current project. There are five testers in the team, plus me in a test lead role. The team is made up of diverse personalities but similar experience levels in testing. They have been mostly executing scripted tests for the past year or so and are eager to get into something new.

The context:

The project is new internet- and intranet-based web frontends for an existing (internal-facing only) application. We have identified seven main browsers and a couple of operating systems that we will focus on.

The environment is one where Project Managers and stakeholders are used to receiving numerous statistics: number of tests executed, number passed, number failed, number left, and so on. There are also some specific audit requirements: ensuring there is evidence of coverage of the specifications, evidence of execution of test cases and their results, etc.

In order to achieve our mission, I decided on a hybrid approach. I really want to get more focus and attention on ET, but I think that anything radically new and different in this environment needs to be experimented with and proven as a success in order to get people's attention. (Slowly, slowly...)

So we spent some time scripting tests as I normally would – not detailed scripts with millions of steps, but test ideas with some expected results and coverage of the business rules.

Most of the benefit of doing this has been in clarifying the specifications. When writing a scripted test you need certain information and we were able to log issues and get the information while the developers were still busy developing. So we clarified some things for them too. It will be interesting to see how many bugs we find from our scripted tests.

Over a couple of years of trying to explain ideas and concepts to people (and myself), I have realised that for me there is really only one way to do it: try to explain it, but then also demonstrate it, applied in the context where you are going to use it. Hence, we had a little session together as a team. In the boardroom, with the application projected up on the big screen, we performed an exploratory test session on a (somewhat beta) version of our application. (The developers have been great and released their environment to us while they were still developing!! – something almost unheard of in our environment.) The session was fun, and we were able to try out the concepts and develop some techniques. We made some notes and found some bugs and issues.

Next we needed to consider how we could allocate time for both approaches in our test cycles and how we would get the basic metrics and keep the auditable results required.

We decided to plan for at least one ET session per person per day once we start our “official test cycles”. Until then, we will be writing some charters (single-line ideas) and doing only ET, as the application is not yet complete and the developers have asked us to take a look at the functions they are almost done with.

Interestingly, we are using a commercial (and rather expensive) test management tool to manage our charters and record execution of them. I asked the guys to create a “test case” for each charter and to scan and attach their notes to the test (charter). Then we “execute” the (blank) test to record time spent on the charter, and we can then reference this charter when logging everything we found during that particular session. We have also been using TimeSnapper to record our sessions, and we attach this to the instance of the charter we ran. I am hoping that by doing this, I will have some interesting information in terms of how many times we executed a particular charter and how many defects/issues we found during the session, and be able to compare this with the scripted test execution. We can also build up a repository of charters and re-execute some if necessary – referring to the notes from the previous instance of the session. All of our scripted tests are managed within the same tool.
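The per-charter roll-up I am after from the tool could be sketched roughly like this – a minimal illustration only, with hypothetical charter names and numbers, not how our test management tool actually stores things:

```python
from dataclasses import dataclass


@dataclass
class Session:
    """One recorded run of a charter: time spent and what we found."""
    charter: str
    minutes: int
    defects: int


def summarise(sessions):
    """Roll sessions up per charter: run count, total minutes, defects found."""
    summary = {}
    for s in sessions:
        entry = summary.setdefault(s.charter, {"runs": 0, "minutes": 0, "defects": 0})
        entry["runs"] += 1
        entry["minutes"] += s.minutes
        entry["defects"] += s.defects
    return summary


# Hypothetical sessions, purely for illustration.
sessions = [
    Session("Login error handling", 60, 3),
    Session("Login error handling", 45, 1),
    Session("Browser compatibility", 90, 2),
]
print(summarise(sessions)["Login error handling"])
# {'runs': 2, 'minutes': 105, 'defects': 4}
```

The same totals for the scripted tests would then give a like-for-like comparison of defects found per hour in each approach.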

Environmental constraints...

We have some other environmental considerations that we are looking to work around. We are in an open-plan (somewhat noisy) office, and the guys in my team have a lot of knowledge and experience testing a different application that they are often asked questions about. We set down the following guidelines, which we try as far as possible not to break:

  1. We lock our desk phones so that calls go straight to voicemail and the phones don't ring

  2. Mobile phones must go in the drawer and be switched off

  3. Outlook gets closed and email notifications switched off

  4. We wear a “do not disturb, I am in ET” hat/cap – this means people should try, as far as humanly possible, not to come and ask questions etc.

And so?

I have found that I am encouraging thinking testing, in a way that enhances reflection and communication. Previously, I have tried to encourage thinking even through the scripted tests, but it only works up to a point; after that, the ideas stop – or should I say the totally free thinking stops and the frustration sets in.

I am lucky in that I have a team of really enthusiastic and passionate testers that want to learn and want to grow. I am not sure that we would be successful in our endeavour if that wasn't the case. Our debrief sessions are fun and I think that everyone is really getting the hang of reflection. To my mind, merely taking the time to do that makes us a more effective team.

There are a few other things that I would like to look at once we have finished this experiment. How successful were we at delivering on our mission? Do I end up with better-thinking, better-equipped, engaged and excited testers? How quickly can people become effective as exploratory testers? What can I do to support them and help them to stay passionate about their job and improve themselves? Am I able to plan for and measure ET and give the right kinds of information to allow the Project Managers and stakeholders to make the decisions that they need to? Is the result set that we keep manageable and auditable? Should we minimise scripted testing? Can we get sufficient coverage of specifications through charters to convince stakeholders that we have tested enough, and tested the right things?

There will be more to come when I reflect on these questions...