Wednesday, October 29, 2008

BBST - Bug Advocacy

WOW, I just completed another amazing BBST course. It feels great. Why? Well, because it is one of the most challenging ways of learning I have come across. There are a number of reasons for this, and I talked about them in my first post on the Foundations course.

Bug advocacy covered:


  • Basic concepts
  • Anticipating and dealing with objections
  • Effective advocacy
  • Credibility and influence
  • Writing clear reports

Part of the course required us to register as testers for OpenOffice and evaluate and improve on bugs that had been reported there on OpenOffice Impress. This was a whole new world and challenge for me. When I evaluate bugs on my projects, it's with quite a comprehensive knowledge of the domain and the applications within it. It comes almost naturally. Here, I had to first really try and understand what the person was reporting and try and recreate it. The bugs we were working on were also those that were unconfirmed. If they had been easy to reproduce, they would not have been sitting in this queue very long. For one of the assignments, I spent more than 4 hours just trying to find a bug I could vaguely understand and try and reproduce.


So - What was different from the first time?


  • I didn’t feel quite as nervous about what other people would think of me or what the instructors would think of me.
  • The fact that I knew what the pace would be like and how much work was involved meant that I planned a little better – although still not well enough. I still didn’t manage to do much on the exam cram, and I should have.
  • I studied in a much more structured way this time for the exam. I went back to my old favourite – Mindmapping each section and learning it from the maps.
  • I was more familiar with evaluating other people’s work and what that entails. I think that my feedback may have improved a little… but it still needs lots of work.

What do I still need to improve on for next time?

  • My planning needs to improve - I think next time, I am going to try and produce the Mindmaps with each section’s video session, and then (as suggested) try and answer the exam questions each week.
  • My feedback can always get better. I am going to practice a lot more in my everyday work and try and improve that way.

How have I used it?

  • I had a little “talk” last week with some of the team at my current project. I discussed some of the concepts and, that way, reinforced them for myself.
  • I have encouraged one of my mentees to use the evaluation techniques so that we can then discuss the process.
  • I have evaluated a number of bugs and practiced giving feedback.
  • I have logged some bugs recently with a whole new perspective.


I have said it before, and I will say it again. This to me is far more valuable learning than I would ever get paying thousands of Rands for a “certification”. I am really extremely grateful to all of those at AST and especially Cem Kaner for spending so much time and energy on improving us and thereby lifting our profession. It makes me want to do the same.

Thursday, October 16, 2008

Resources to help you be a better tester...

A friend of mine sent me this mail today and I thought I would post my reply in a blog since it may be of interest to more people....

Hey Lou, how you doing?
I'm alright thanks. I was having a look at the AST web site and saw that you got recognition for your test report, well done!
So, in that regard, I was hoping you would be so kind as to tell me what you do to improve your self as a tester - what books do you read, web sites etc...
It is a broad request but if you could just give some of your favourite resources I would be most appreciative.


Hey R,

There are a few key areas that I focus on that I think hopefully make me a better tester:

There is one and only one conference for me... CAST! There is now a networking site around it.
For details, see next year's conference pages and previous years' archives (there is some fantastic stuff in there).

Try and get to go to CAST - you get to network and you get inspired.
(I am also working on an ROI case for your company sending you to CAST, and will hopefully share that soon on the blog as well.)

Read read read - stuff about testing, stuff about people, stuff about psychology, stuff about business, stuff about learning... and by this I mean skim where necessary - you don't have to read every book cover to cover (something James taught me).

At the moment, I am reading:
Information Dashboard Design (Stephen Few)
The Fifth Discipline (Peter Senge)
Perfect Software: And Other Illusions about Testing (one of the best books on testing... Gerry Weinberg)
The Black Swan – forget the author...
The Tipping Point – Malcolm Gladwell
Time to Think – Nancy Kline
I am also reading some of my dad's old textbooks on Organisational Behaviour, which are quite interesting.
Then I am never without my trusty testing "bible" – Lessons Learned in Software Testing – Kaner, Bach and Pettichord.

When it comes to reading, I also read a whole LOT of blogs (see my blogroll), and I subscribe to (and read most of what goes on in) the software-testing newsgroup on Yahoo.


I also read Stickyminds when I see something interesting or am searching for something specific, as well as Methods and Tools, Testing Experience and STPMAG (so not everything in these, but whenever something catches my eye).

The really important part of my further education as a tester is the BBST course run by AST.

I am busy with my second module - Bug Advocacy. Once again it has been a real challenge, and I have gotten so much out of it it's scary. Some of the material that was used as a basis for the course can also be downloaded for NOTHING!

Then there are my trusty gurus..
Cem Kaner - see articles and publications
James Bach - see all sections - especially download the Rapid Software testing course slides and appendices.
I read everything I can get my hands on by these guys.... normally a lot more than once.

I learn a lot from my colleagues and from my network (my CAST buddies)...... the key here is that you have to ask.... insightful questions.
I learn a lot from whenever I put together some training and give a course and then try and make it better the next time.
I learn a lot every time I try a new concept, see how it works and refine it for next time.
I learn a lot whenever I have to think about why something works or why it doesn't or when I need to solve a problem and take enough time to really think about it....

And I have a couple of fantastic mentors that blow my mind on a regular basis with their insight and abilities.

Hope this helps!






Tuesday, October 14, 2008

Evolution of assessing

The testing industry in South Africa is one that has seen enormous growth over the last 5 years. As an example, when I joined my company 7.5 years ago, there were 6 of us in the organisation. We now have over 120 people working for us.

So what does this mean? Well, it means that we went from having very little interest in testing and not many testing jobs available in SA, to the opposite situation, really quickly. This in turn meant that suddenly the amount of money you could earn as a tester went from almost nothing to quite reasonable salaries. So everyone decided to become a tester. We had to try and quickly train people to meet the demands of our clients, and so a lot of organisations – ours included – started graduate recruitment programs to train graduates in the basic concepts of testing. The bottom line is that we have ended up in a market where people are demanding high salaries but have very little skill and experience to match those salaries. (This, I think, is partly the reason for a lot of companies outsourcing to other countries, but that is a whole other blog post.)

For a few years now, we have been trying to recruit only the best people to ensure that if we are going to pay a lot (and in turn bill a lot) for someone, they are indeed able to deliver to that salary. This is not an easy task. There are a number of steps to this process, and I just want to focus on one aspect that I am trying to refine.

Most of our clients are large financial institutions with heavy waterfall methods of development that rely a lot on specification-driven testing. So one of the key skills that we need is to be able to successfully analyse a specification and come up with tests or test ideas based not only on the specification, but also on the things missing from the specification. To this end, I started to develop an assessment which gave people a specification (which I got as a sample off the internet) and asked some questions. This assessment has now been through a number of iterations and is still evolving, in two areas. Firstly, the way I ask the questions. Secondly, my rubric for assessing the answers. The concept of the rubric was first introduced to me in the BBST Foundations course.

For an example developed by Cem Kaner you can go to http://rubistar.4teachers.org/index.php?screen=ShowRubric&rubric_id=1435804&

The evolution of the assessment thus far has been guided somewhat by the responses I get. This has led me to refine the questions and also try and ask myself what exactly gives me a happy / not happy feeling when I am looking at the response and why. (For me, it starts with an emotional reaction that I then need to analyse and put into critical thinking).
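To make the rubric idea a little more concrete, here is a minimal sketch of a rubric as a scoring structure, in Python. The criteria, level descriptions and weights below are all invented for illustration – they are not the ones from my actual assessment.

```python
# A minimal, hypothetical sketch of a rubric as a weighted scoring structure.
# Criteria, level descriptions and weights are invented for illustration.

RUBRIC = {
    "coverage_of_spec": {
        "weight": 2,
        "levels": {
            1: "Tests only restate the spec's examples",
            2: "Tests cover most stated requirements",
            3: "Tests also probe gaps and ambiguities in the spec",
        },
    },
    "clarity_of_test_ideas": {
        "weight": 1,
        "levels": {
            1: "Ideas are vague or unreproducible",
            2: "Ideas are understandable with effort",
            3: "Ideas are crisp enough for another tester to run",
        },
    },
}

def score(responses):
    """Weighted total for assessed levels, e.g. {'coverage_of_spec': 3}."""
    total = sum(RUBRIC[c]["weight"] * level for c, level in responses.items())
    maximum = sum(c["weight"] * max(c["levels"]) for c in RUBRIC.values())
    return total, maximum

print(score({"coverage_of_spec": 3, "clarity_of_test_ideas": 2}))  # (8, 9)
```

The nice thing about writing it down this explicitly (even on paper rather than in code) is that it forces me to say what each level actually looks like – which is exactly where my early rubrics were too rigid in some ways and not specific enough in others.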

The iterations prior to the ones I am showing here are not worth seeing… (I had to use screenshots as I couldn't see an easier way in Blogger to simply attach the files - if you know of one, please let me know :))

My first real iteration was simply a set of questions and an open section for my comments.

My second iteration, I changed the wording of the questions slightly and played around with a rubric but didn’t like the rubric as I felt it was too rigid in some ways and not specific enough in others. (I know – the role things are just crazy).

My third iteration played with the questions some more. And looked at some guidelines for marking.


My fourth iteration played with the questions a lot more and also added more of an outline for a rubric that I hope to evolve some more.

I am posting the evolution of the assessments with the hope that you may have some insights / suggestions for improvement, or questions that may guide my thinking in this area. I have left out the spec portion as to me it doesn’t matter – you could use any spec…as long as it wasn’t perfect and not too long. (I allocate 1.5 hours to the assessment).

p.s. you are welcome to re-use this if you find it useful

Thursday, September 25, 2008

Observations......Part 1

Note: This post is rather old… has been gathering dust in the pile of ideas, but I think is still relevant in terms of its observations of back then.

I wanted to share some of my observations on my ET/SBT experiment.
The first observation is around People... the way it affects me as a manager and I believe my team members....

The last project that I led (an ATM rewrite) was a really large and complex one, and I was a lot younger and more stupid then :). I led the test strategy and effort based on what I thought would work best for the context. We ended up with around 4000 test cases, and initially we had 12 very junior testers (brand new to testing) to execute them. We also had 5 more senior testers checking the results and updating them in our management tool. The reason we had to split these tasks was to be more “productive”. We were testing on physical ATM machines, and therefore the tests were run from paper “test packs” and then updated after the fact. The defects were also logged on paper and then updated in our management software by the more senior testers.

The problem with this is that we ended up with very bored junior testers. Why? Well, firstly, they didn't log their own defects formally into our defect management tool (and therefore didn’t get to practice this properly). They also didn't update their own results. This meant that they didn’t have to take direct responsibility for their work – someone else would be deciding whether or not to mark the test passed or failed. We also had very bored senior testers – their job basically became administration. Even though this wasn't the desired result – I had set it up that way.

In “Lessons learned in software testing”, the guys have a really important lesson that they share - “if you want your staff to act like executives, treat them like executives”. I was trying really hard in every other way to do this, but the responsibility for testing had to be shared. This created opportunities aplenty for conflict and not much for learning. And I guess, maybe there were also other project factors that were to blame for this. The thing is that even though there was every intention that good mentoring and coaching and skills development would happen by splitting the team, it often didn’t.

Let me contrast it with the situation on the SBT project. Firstly, my experimental project was a LOT smaller. My team was made up of junior testers who all had some experience, but were not super experienced. I met with them every single day. This is the first difference – on my last project I would meet with the more senior team members every day, but only in emergencies (and mentoring sessions) with the rest. I was quickly able to assist and guide the session-based team members in their day-to-day testing activities, mostly through the debrief sessions. They did a great job of logging defects, following up on them, reporting back to me, and recording their sessions and updates for the few scripted tests that were run. Basically, I feel that these testers didn’t suffer from boredom at all, and they all really relished the opportunity to be responsible for a function or a section of testing. In terms of job satisfaction as well as opportunities to learn and grow, I definitely think that my SBT project was far more successful.

Thursday, September 4, 2008

Some thoughts on estimation...


As some of you may know, I am currently working in a test services department that is in the process of outsourcing all testing to an Indian company. During our handover, we were asked if we used any tools to estimate for projects. Hmmm, we thought... tools? Well, if you mean our experience and our brains... then yes, I guess we use a tool. In fact, most of what we do when estimating, we do almost subconsciously. So this gave us an opportunity to really reflect on all the factors we take into account when estimating.

The result is this mindmap (please note these ideas are not just my own - all the TMs at SS and of course all the people I read have contributed). The idea of splitting it into 2 sections came from something Jerry Weinberg said at CAST which really resonated with me: people are always counting fixing time as testing time... this is something that I really want to start communicating more thoughtfully about. I'm sure that this map doesn't have everything and can certainly be improved on... so if you have any thoughts - please share. There also may be some context-specific things that are a little unclear, so if there are questions - please ask.


p.s. the interesting (and somewhat scary for me) part is that the outsource company have included some of the factors that we consider in their estimation tool (an Excel spreadsheet with formulas for all kinds of things), where one would allocate some kind of percentage weighting for each factor that then increases the number of person hours... it gives the impression that there is science in estimation... which takes me to another thing Jerry said in his new book: "Garbage arranged in a spreadsheet is still garbage".
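For what it's worth, the kind of weighting formula I'm describing can be sketched in a few lines of Python. The factor names and percentages below are invented for illustration – the point is how mechanical the arithmetic is, not that these particular numbers mean anything.

```python
# Hypothetical sketch of the percentage-weighting estimation described above.
# Factor names and weightings are invented - the arithmetic looks scientific,
# but the base estimate and the weights are still just guesses.

FACTOR_WEIGHTS = {            # each applicable factor adds a percentage uplift
    "unstable_environment": 0.20,
    "new_domain_for_team": 0.15,
    "poor_specifications": 0.25,
}

def estimate(base_hours, applicable_factors):
    """Inflate a gut-feel base estimate by the summed factor percentages."""
    uplift = sum(FACTOR_WEIGHTS[f] for f in applicable_factors)
    return base_hours * (1 + uplift)

# 100 gut-feel hours, two factors apply: 100 * (1 + 0.20 + 0.25)
print(estimate(100, ["unstable_environment", "poor_specifications"]))  # 145.0
```

Which is exactly the trouble: the spreadsheet output is only ever as good as the gut-feel base hours and the made-up weightings that go in.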

Sunday, August 24, 2008

Highlights from CAST in Mindmap format..


So I promise that I am working on a proper post about CAST 08... it's just hard cos there is so much to tell... in the meanwhile, here is a Mindmap I made of some of the highlights... (it excludes the after-conference chat and drink and social sessions that were maybe the best part!)

Thursday, May 22, 2008

YAY... I am going to CAST

The great news is that my training for the year has been approved and I am off to CAST again!

http://www.associationforsoftwaretesting.org/drupal/CAST2008

Last year was the first time I experienced this amazing conference and I feel extremely privileged and incredibly excited to be going back. I will be hanging out with all the coolest people in Software testing, and once again trying to absorb every little thing I can. The program looks fantastic, with presentations from Cem Kaner, Rob Sabourin and even Jerry Weinberg. The focus at CAST is on “conferring”, so you really have an opportunity to learn and grow through the experience.

So how is it that I am lucky enough to be going? Well.... there are a few things that may have helped:

1) I put it in my personal development plan – we haven't had these very long, but if you don't ask for something, chances are you won't get it. I have made it my mission to tell everyone I can how much I loved CAST and how much it did for me, and how badly I wanted to go back. I have gone so far as to say that CAST is the only training that I would like to go on this year.

2) There has been tremendous support from some key people in the company – there are a number of people who work with me that have been very supportive of me going... so I try and impress them every way I can.

3) After returning from CAST last year, I tried to use what I had learned – there were a few ways that I attempted to do this....through development of our second analysis course, through some of the informal sessions I had with people and also my Session based testing experiment

4) There were some presentations at company meetings on the things that I learned last time

5) I am paying for some of it myself… by committing my own money, I am showing how important this is to me personally

I am sure that my loyalty to the company doesn't hurt either – who would want to send someone to a conference if they weren't sure how long the person might be around....?

In writing this, I would like to encourage those of you passionate about your career and passionate about testing, to consider CAST for yourselves... and hopefully I have given you some ideas of how you can get there too.