Friday, December 7, 2007

Adventures and experiments in ET

We have started an experiment on my current project. There are 5 testers in the team, plus me in a test lead role. The team is made up of diverse personalities but similar levels of testing experience. They have spent the past year or so mostly executing scripted tests and are eager to get into something new.

The context:

The project is new internet- and intranet-based web frontends for an existing (internal facing only) application. We have identified 7 main browsers and a couple of operating systems that we will focus on.

The environment is one where Project Managers and stakeholders are used to receiving numerous statistics on number of tests executed, number passed, number failed, number left, etc. And there are some specific audit requirements in terms of ensuring that there is evidence of coverage of the specifications, evidence of execution of test cases and results, etc.

In order to achieve our mission, I decided on a hybrid approach. I really want to get more focus and attention on ET, but I think that anything radically new and different in this environment needs to be experimented with and proven a success before it will get people's attention. (Slowly slowly.....).

So we spent some time scripting tests as I normally would – not detailed with millions of steps, but test ideas with some expected results and coverage of business rules.

Most of the benefit of doing this has been in clarifying the specifications. When writing a scripted test you need certain information and we were able to log issues and get the information while the developers were still busy developing. So we clarified some things for them too. It will be interesting to see how many bugs we find from our scripted tests.

Over a couple of years of trying to explain ideas and concepts to people (and myself), I have realised that for me there is really only one way to do it. That is, try to explain it, but then also demonstrate it, applied in the context where you are going to use it. Hence, we had a little session together as a team. In the boardroom, with the application projected up on the big screen, we performed an exploratory test session on a (somewhat beta) version of our application. (The developers have been great and released their environment to us while they were still developing!! - something almost unheard of around here). The session was fun, and we were able to try out the concepts and develop some techniques. We made some notes and found some bugs and issues.

Next we needed to consider how we could allocate time for both approaches in our test cycles and how we would get the basic metrics and keep the auditable results required.

We decided to plan for at least one ET session per person per day once we start our “official test cycles”. Until then, we will be writing some charters (single-line test ideas) and doing only ET, as the application is not yet complete and the developers have asked us to take a look at the functions they are almost done with.

Interestingly, we are using a commercial (and rather expensive) test management tool to manage our charters and record their execution. I asked the guys to create a “test case” for each charter and to scan and attach their notes to that test (charter). Then we “execute” the (blank) test to record the time spent on the charter, and we can reference the charter when logging everything we found during that particular session. We have also been using TimeSnapper to record our sessions, and we attach the recording to the instance of the charter we ran. I am hoping that by doing this, I will have some interesting information on how many times we executed a particular charter and how many defects/issues we found during each session, and that I will be able to compare this with the scripted test execution. We can also build up a repository of charters and re-execute some if necessary, referring to the notes from the previous instance of the session. All of our scripted tests are managed within the same tool.
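To make the comparison I have in mind a little more concrete, here is a minimal sketch (plain Python, nothing to do with our actual test management tool) of the kind of roll-up I am hoping to get out of the charter records. The charter wording, file names and numbers are invented purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class CharterSession:
        """One executed instance of a charter (an ET session)."""
        charter: str   # single-line charter, e.g. "Explore intranet login error handling"
        minutes: int   # time recorded against the (blank) test in the tool
        bugs: int      # defects/issues logged and linked to this session
        notes: str     # reference to the scanned notes / TimeSnapper recording

    def summarise(sessions):
        """Roll up runs, time and bugs per charter so ET can be compared
        against the equivalent numbers from scripted test execution."""
        totals = {}
        for s in sessions:
            runs, minutes, bugs = totals.get(s.charter, (0, 0, 0))
            totals[s.charter] = (runs + 1, minutes + s.minutes, bugs + s.bugs)
        return totals

    sessions = [
        CharterSession("Explore intranet login error handling", 90, 4, "scan_001.pdf"),
        CharterSession("Explore intranet login error handling", 60, 1, "scan_007.pdf"),
        CharterSession("Tour the new search screen in each browser", 120, 6, "scan_003.pdf"),
    ]

    for charter, (runs, minutes, bugs) in summarise(sessions).items():
        rate = bugs / (minutes / 60)
        print(f"{charter}: {runs} run(s), {minutes} min, {bugs} bugs ({rate:.1f} bugs/hour)")

If the tool will give me time spent and linked defects per charter instance, a roll-up like this is all I really need to put ET numbers next to the scripted ones.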

Environmental constraints...

We have some other environmental considerations that we are looking to work around. We are in an open-plan (somewhat noisy) office, and the guys in my team have a lot of knowledge and experience testing a different application that they are often asked questions about. We set down the following guidelines that we try as far as possible not to break.

  1. We lock our phones so that they go straight to voicemail and don't ring

  2. Mobile phones must go in the drawer and be switched off

  3. Outlook gets closed and email notifications switched off

  4. We wear a “do not disturb I am in ET” hat/cap – this means people should try as far as humanly possible not to come and ask questions etc.

And so?

I have found that I am encouraging thinking testing, in a way that enhances reflection and communication. Previously, I have tried to encourage thinking even through the scripted tests, but it only works to a certain point; after that the ideas stop, or should I say that the totally free thinking stops and the frustration sets in.

I am lucky in that I have a team of really enthusiastic and passionate testers that want to learn and want to grow. I am not sure that we would be successful in our endeavour if that wasn't the case. Our debrief sessions are fun and I think that everyone is really getting the hang of reflection. To my mind, merely taking the time to do that makes us a more effective team.

There are a few other things that I would like to look at once we have finished this experiment. How successful were we at delivering on our mission? Do I end up with better-thinking, better-equipped, engaged and excited testers? How quickly can people become effective as exploratory testers? What can I do to support them and help them to stay passionate about their job and keep improving themselves? Am I able to plan for and measure ET and give the right kinds of information to allow the Project Managers and stakeholders to make the decisions that they need to? And is the result set that we keep manageable and auditable? Should we minimise scripted testing? Can we get sufficient coverage of the specifications through charters to convince stakeholders that we have tested enough, and the right things?

There will be more to come when I reflect on these questions...

Friday, November 9, 2007

My excuses ... and the BBST

There are two reasons for the long time lapse between blog entries. Firstly, there were a few technical difficulties involved in the relocation. These have hopefully all been resolved, but please feel free to report any bugs either as comments or via email.


Secondly, almost all my free time (and some of my not so free time) has been taken up by the BBST (Black Box Software Testing) online course presented by AST (Association for Software Testing http://www.associationforsoftwaretesting.org/).

The material within the course enhances that developed by James Bach and Cem Kaner for the Florida Institute of Technology. It is organised specifically for an online learner.

The way that the course is structured to make one think and learn has fascinated me. The modules begin with an Orientation exercise. These exercises were tough, very tough. For some of them, I had to look up almost the entire question on Wikipedia! The exercises challenged my thinking around specific problems, problems I was not used to solving. This was followed by reading material and videos explaining and elaborating on the topic. The material provided me with additional insight into the problem rather than simple solutions to it (or that is the way I interpreted it). Then there was an open-book multiple-choice quiz that one had to complete. The quiz questioned my understanding, and in some cases my interpretation, of the material. (I never scored higher than 55% for any of the quizzes, and they are open book!)

There was also a Group exercise. We had to collaborate to produce an answer to an exercise. My group members were located in Los Angeles and Bangalore. We had some challenges with the time zones; however, we managed to find one time, around 6pm SA time, when we could all be awake to discuss our answers over Skype. It was fun! As one of my team members put it, “the challenge is to end up with an answer that has both taken everyone's views into consideration and yet still ensure that it does not end up sounding like 3 different answers”. It did, however, strike me how difficult non-face-to-face communication can be. Without body language, and in some cases with one of us talking “American” and another “South African” English, we were often saying the same thing, but in different ways.

Following this, we (as individuals) had to review two other groups' answers. This was hard. I learned so much from the other answers, and that added to the difficulty of critiquing them. Since I had found my own question challenging, I struggled with being presented with an answer that I may never have been able to come up with myself, and then still having to pass judgement on it.

Finally, the exam. There is no hiding in the exam. You have to post your answers in an open forum and you are then reviewed and graded by 2 participants. In turn you have to grade 2 participants. It is SCARY. For all of the exercises and also the reviews that you perform, there are guidelines or “rubrics” to help you form your answer or perform your review.

We are now at the reviewing stage of the Exam answers and I have just received 2 F's... VERY hard to take, but learning from them... :)

For anyone that wants to challenge themselves and their ideas, I recommend this course. I guess I am not really qualified to say (as I wouldn't ever spend the money on certification), but I would see this as infinitely more valuable than something you pay thousands of Rands for.


Thursday, November 8, 2007

I'm mad

I am mad…..

One of the teams I am working with logged a defect recently on an ATM they were testing. The machine made a strange noise while reading the card. The newly added functionality was a chip card reader, so it was different from the readers that had previously been in the machines. The noise was loud… noticeably loud, or they would not have logged it. It wasn’t occurring on all the ATMs with the new reader, but only a couple of them. It was also different from any “card reader noises” that they were used to hearing.

One of our developers had some MAJOR objections to our defect. The first thing he said really made me mad. He said that a “tester’s personal subjective view should not result in a defect”. According to Wikipedia, “Subjectivity - refers to the property of perceptions, arguments, and the language terms use to communicate such, as being based in a subject point of view, and hence influenced in accordance with a particular bias”. Surely, he had a subjective point of view that the noise was not a problem??!

The second thing he said almost made me madder. He questioned the use of the word “funny” as a way of describing the noise. “Where is the ‘funniness’ threshold for mechanical equipment defined?” he asked. What would have made more sense? Is there a scientific term that would have convinced him that it warranted a second look?

Thirdly, he questioned us sending the defect to business after the technicians had decided it didn’t deserve further investigation. We did as we always do, i.e. highlight the problem as we see it and send it to the people who need to be aware and who need to be given an opportunity to question or accept the answer. In my view, the team did everything right.

Anyone disagree? Why do we constantly get attacked for doing our job?


Sunday, September 16, 2007

Non random thoughts on scripted testing

I have been exploring the idea of how we can successfully incorporate Exploratory testing into a very “structured” environment (by that I mean loads of scripted test cases and “auditable” result sets in one very large test management tool).

We started by examining the advantages and disadvantages of scripted testing in our context. There were some discussions and debates. It made me think about how I design my scripted tests and has clarified for me some really good reasons why I do it the way I do it. I realise too, of course, that very shortly I may come across a project where it absolutely does not fit!

I have always constructed my scripted tests to focus on a single technique. There are a few techniques in my toolbox that get pulled out regularly, but I will try to adjust or change them wherever necessary. As an example: a single test per input field, focusing only on the validations for that field (a single “domain” test); another test for a single screen, focusing only on trying to execute every action from that particular screen; and so on. (There is a small sketch of what I mean just below.)
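The sketch below is only an illustration of the “one technique per test” idea, written as Python/pytest-style checks against a made-up screen object. The field, its validation boundaries and the screen actions are all invented and have nothing to do with our actual application.

    import pytest

    class FakeCustomerScreen:
        """Stand-in for a real application screen, just so the sketch runs."""
        def set_age(self, value):
            if not (18 <= value <= 120):   # invented validation rule
                raise ValueError("age out of range")
        def actions(self):
            return ["save", "cancel", "print", "search"]
        def perform(self, action):
            return f"{action} ok"

    # Domain test: one input field, validations only (boundary values).
    @pytest.mark.parametrize("age, should_pass", [
        (17, False), (18, True), (120, True), (121, False),
    ])
    def test_age_field_domain(age, should_pass):
        screen = FakeCustomerScreen()
        if should_pass:
            screen.set_age(age)
        else:
            with pytest.raises(ValueError):
                screen.set_age(age)

    # Screen test: only try to execute every action from this one screen.
    def test_customer_screen_actions():
        screen = FakeCustomerScreen()
        for action in screen.actions():
            assert screen.perform(action) == f"{action} ok"

Each test stays focused on one kind of question, which is what makes the points below possible for me.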

Why do I like this approach?

  1. I can focus on a specific aspect at a time – “different types of tests, catch different types of bugs”
  2. Having a number of techniques in the toolbox that I would always try to apply helps me to figure out what it is about the application that I don't yet know and need to learn more about.
  3. I don't always have to be 100% specific in terms of the steps. I try and write my functional tests in two steps – one to perform the function, and one that verifies that it worked – if there is a second way to do this. This means, I don't have to say things like “click this button”, or “capture xyz”. It also means that if the screen changes, I only change my “screen-related” scripts, not my functional ones.
  4. It's easier for me to think of a strategy if I have my test ideas split into a variety of techniques. I can start out nice and get meaner – for example, run my screen type tests first before I start tackling the domains (if that is what is called for).
  5. Since I am trying to encourage “brain engaged” testing, I want to try and strike a balance between giving enough guidance on a test idea and being too specific. In a situation where someone else executes my scripts, I have found that a focus on a technique allows one this balance – as long as you take some time to explain what your specific interpretation of the technique is about.


There are probably some more benefits I haven't yet thought about, and probably some downsides too... and I know I am certainly not the only person doing things this way... would love your comments and thoughts.


p.s. I will still give feedback on how we incorporate Exploratory testing... watch this space!

Monday, August 13, 2007

Belated explanation of the title

Soon after posting my first blog, I realised that I had neglected to explain the title.... probably a rather significant detail.... especially for those hoping to read something about trapping wildlife.

While at CAST (refer to previous blog for details), I was lucky enough to attend an all-day tutorial with James Bach. During the day, he challenged us to write our own heuristics (“fallible method of solving a problem”). I was terrified. Firstly of writing - there are a few things I am confident about, but writing my own thoughts is not one of them. Secondly, I was terrified of embarrassing myself in front of one of the people I consider to be a guru in Software testing.

I sat with this bunch of blank cards in front of me and thought... and thought some more. Suddenly (and surprisingly), a thought came to me... write what you know. One of my "philosophies" in life is that you can't always change things or accomplish things overnight. It happens one slow step at a time. I realised we have a saying for this... "Slowly Slowly Catch Monkey". I use it when trying to convey a need for (artful?) patience. I guess to me, it means that there is a plan involved. And part of the plan is to make small changes or progress steps all the time - sometimes without anyone even noticing, until finally, the “Monkey” is captured. (Your goal achieved). I have no idea where I heard it first.... I just know we say it. My boss says it, my cousin says it...

So I wrote it down and put my own "slightly testing related" spin on it. I had a heuristic! I wasn't overly confident that this would be something that could work... but then a friend (Adam) in the class came over and read it... he liked it, and that gave me courage – he even offered to put it up on the wall with the others for me. And from there, I guess it's history. James seemed to like it too, which was cool. Very cool. So here we are..... (and I am taking some poetic license with myself cos I am not sure this was exactly how I originally worded it!!)

“Slowly Slowly Catch Monkey” (as it relates to Software Testing)

When trying to make changes to a process or environment, it is sometimes best to tackle things one small step at a time.

Friday, August 10, 2007

CAST 2007

I was recently privileged enough to attend CAST 2007, a Software Testing Conference in Bellevue, Washington. It was the first international conference I have attended, and the experience was one of the highlights of my testing career thus far. So, what did I learn?

I guess the first thing is that I have created this blog page to practice my writing and communication regarding what we do and how we do it. Certainly, something that stood out for me is that I need to get a lot more practical and down to earth when talking about the art and science of testing.

I got the opportunity to compare some of the testing work that we are doing at home with the ideas and innovations talked about at the conference. There are some things that we can do much better, and there are a lot of things that I am quite proud to say we are already doing (though sometimes without formally talking about it).

The tester competition showed me, personally, that I am an ok :) tester but that I need to practice a lot more when it comes to the show-and-tell side of what I do - i.e. just how important it is to be able to explain what, how and why we do what we do in a way that engages and excites people.

The real highlight for me was the all day tutorial with James Bach. After reading so many of his books, articles and papers and really trying to implement those ideas and innovations, it was a somewhat surreal experience to be in a small group, listening and learning. A whole bunch of things resonated with me from this session - how I need to write more (practice practice practice), read more philosophy, introduce more "testing exercises" with my peers and basically carry on learning as much as I can. I have already started with some small steps - created this blog page, joined the Software-testing yahoo group, and joined AST.

So from here, I am in the process of trying to contextualise all of the new things I have learned and trying to put them into practice - see what works, what doesn't work. I guess I will report back soon on how that's going....

Thanks to everyone involved in the conference and in getting me there. I hope to share as much as possible with as many people as possible....

Louise