Thursday, September 4, 2008

Some thoughts on estimation...


As some of you may know, I am currently working in a test services department that is in the process of outsourcing all testing to an Indian company. During our handover, we were asked if we used any tools to estimate for projects. Hmmm, we thought... tools? Well, if you mean our experience and our brains... then yes, I guess we use a tool. In fact, most of what we do when estimating, we do almost subconsciously. So this gave us an opportunity to really reflect on all the factors we take into account when estimating.

The result is this mindmap (please note these ideas are not just my own - all the TMs at SS and of course all the people I read have contributed). The idea of splitting it into 2 sections came from something Jerry Weinberg said at CAST, which really resonated with me. People are always counting fixing time as testing time... this is something that I really want to start communicating more thoughtfully about. I'm sure that this map doesn't have everything, and can certainly be improved on... so if you have any thoughts - please share. There also may be some context specific things that are a little unclear, so if there are questions - please ask.


p.s. the interesting (and somewhat scary for me) part is that the outsource company have included some of the factors that we consider into their estimation tool (an Excel spreadsheet with formulas for all kinds of things) where one would allocate some kind of percentage weighting for each factor that then increases the number of person hours... it gives the impression that there is science in estimation... which takes me to another thing Jerry said in his new book: "Garbage arranged in a spreadsheet is still garbage."
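To make concrete the kind of arithmetic such a spreadsheet typically encodes, here is a minimal sketch. The factor names, percentages, and the additive formula are all my own assumptions for illustration, not the outsourcer's actual tool:

```python
# Hypothetical sketch of a weighting-based estimation formula, the kind
# a spreadsheet like the one described might encode. Factor names and
# percentages are invented for illustration.

BASE_PERSON_HOURS = 100

# Each selected risk factor adds a percentage on top of the base estimate.
factor_weightings = {
    "unstable requirements": 0.15,   # +15%
    "new technology": 0.10,          # +10%
    "poor spec quality": 0.20,       # +20%
}

def estimate(base_hours, weightings):
    """Multiply the base estimate by (1 + sum of selected weightings)."""
    return base_hours * (1 + sum(weightings.values()))

print(estimate(BASE_PERSON_HOURS, factor_weightings))  # 145.0
```

The output looks precise to one decimal place, which is exactly the problem: if the base hours and the weightings are guesses, the precision is cosmetic - garbage arranged in a spreadsheet is still garbage.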

4 comments:

Anonymous said...

Hi Lou,
Good to see you clearing away the tumbleweeds :)

Project estimation for testing is an interesting dilemma. As I look at your mind map, there are a bunch of things that spring to mind. Let me play Devil's advocate and fire off a few questions at you. I'm going to do a brain dump and let you sort through it because it's Friday afternoon and I'm on my third glass of a really bloody good Japanese whisky --but I digress.

Is the object of the exercise to estimate testing time only? What happens if you blow out your testing time? Which parts of this process are critical to have and which are nice to have? Who is all of this for? Who matters in your project? What matters to them? How does your map relate to what they need to see?

Your map appears to assume a waterfall SDLC or some other BDUF work flow, which probably figures if you're working with a financial institution. If you guys are doing any sort of agile/scrum/xp style of development though, it's going to fundamentally change how time spent on those projects is estimated. If they're using sprints and burn down charts, you won't have scope for any of the BDUF process. Personally I think BDUF is a busted metaphor, but if that's what you have to work with, then you have to make do with what you have.

I suspect you could use an entirely new section on obstacles. Project concurrency? Add 20% time per project per resource for context switching. Outside context problems? You have your own experiences to draw on - who else can you poll? Learning from your own mistakes is good. Learning from other people's is better.
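One hypothetical reading of that 20% rule, sketched out (the exact formula is my assumption - the heuristic as stated leaves room for interpretation):

```python
# Hypothetical sketch of the "add 20% time per project per resource"
# context-switching heuristic. Reading: each concurrent project beyond
# the first adds 20% overhead to a resource's estimated hours.

def with_context_switching(base_hours, concurrent_projects, overhead=0.20):
    """Inflate base_hours by `overhead` for each extra concurrent project."""
    extra = overhead * max(concurrent_projects - 1, 0)
    return base_hours * (1 + extra)

print(with_context_switching(40, 1))  # 40.0 - a dedicated resource
print(with_context_switching(40, 3))  # 56.0 - two extra projects: +40%
```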

What are your developers doing to help? Writing unit tests? Writing testable code? Whose estimate gets eaten into if they deliver a busted build? On that front, what gets squeezed if you look like running over time? Do you lose features? Lose quality? Do you extend the deadline?

Now that you see what's mapped out - is there anything you routinely do because it is expected of you rather than useful? In other words - what fat is there that can be carved out? If specs aren't up to scratch, do you need to update them? How many of the tests that you design up front survive first contact with the enemy? Can you see anything that is testing by rote rather than by identified risk? On the flipside, are there any time wasters you know you won't be able to escape? Do you need to adhere to any standards or other auditable paper trail?

The fact that the team you're outsourcing to has a spreadsheet for test estimation rings alarm bells with me. Do they use it as a checklist from which to provoke thought, or is it a replacement for thinking entirely? Are they going to become better thinkers because of what you provide them, or have a slightly expanded list of stuff they cover off on to cover their arse?

I'd be more inclined to ask them what they do to estimate testing time (beyond the spreadsheet they gave you) - if that's all they have, be afraid.

Hmm, I could go on, but how about you digest that lot and tell me if it's at all useful to you.
-B

Louise said...

Hey Ben,
Thanks for your brain dump. It's much appreciated.
Before I get home and have some wine.. and then answer... what is BDUF?

Anonymous said...

BDUF - big design up front - basically your waterfall and V model types of sdlc :)

Louise said...

Thanks Ben. OK, first let me try and answer some of the questions..

>>Is the object of the exercise to estimate testing time only?

Well, we were asked "how do you estimate for projects?" and our thinking went around the various types of projects we deal with and was really a brainstorm of all of the various elements - testing time, fixing time, and I guess also resourcing and planning as well.

>>What happens if you blow out your testing time?

Hmm, you mean when we asked for 6 weeks and then realised during the project we actually needed 10? Normally in this situation, we would try and see why our estimates were so far out (normally it's one of those factors like - we found more bugs than we expected to) and we would then negotiate with the project team, giving them the various options... we could only test xyz, or add some time to the project, etc.

>>Which parts of this process are critical to have and which are nice to have?

Do you mean the estimating process?

>>Who is all of this for? Who matters in your project? What matters to them? How does your map relate to what they need to see?

Well, this is a rather sad thing. See we have never actually sat with anyone at sponsor level to specifically look at explaining how we estimate and all of the factors that we consider. I guess we do sometimes on a per project basis get to explain some of the reasons our estimates may have been wrong, but I think that is different and this is where the sad part comes in. Because I don't think we have communicated well enough about it, there is a perception that someone else can do it better. And maybe they can. I am just feeling like we haven't done ourselves justice.

To comment on your third para - yes, almost all our projects are BDUF. On the rare occasions we get to work on smaller pieces of work, we get to be a bit more agile and this is like a breath of fresh air for us all... (us testers that is).


I like your ideas on the obstacles section - I guess some of those things I already have under the "considerations" section, but seeing them as obstacles might put a different thinking hat on, so I will try that.

>>What are your developers doing to help? Writing unit tests? Writing testable code? Whose estimate gets eaten into if they deliver a busted build? On that front, what gets squeezed if you look like running over time? Do you lose features? Lose quality? Do you extend the deadline?

Hmm, I could write a separate whining blog about this but I guess it really takes me to another sad story. Again here, we have not said "NO" enough. (Something that really resonated in the JW tutorial). And so we have "tried to make things work" and accepted bad code when we shouldn't have. The end result being that the perception of business is that "testing takes too long and is too expensive". Again - they are thinking that someone else can do it better... and if those other people insist on a higher standard of code to test, then they will probably do a much better job of meeting their own deadlines.

>>>Now that you see what's mapped out - is there anything you routinely do because it is expected of you rather than useful? In other words - what fat is there that can be carved out? If specs aren't up to scratch, do you need to update them? How many of the tests that you design up front survive first contact with the enemy? Can you see anything that is testing by rote rather than by identified risk? On the flipside, are there any time wasters you know you won't be able to escape? Do you need to adhere to any standards or other auditable paper trail?

We do have some rather stringent requirements from audit - but nothing that we can't get around if we do proper session based testing - (something I have been experimenting with). There are a few other obstacles to this (that I still need to blog about), but I guess the truth is that we certainly could do things smarter and add more value... it's just that we will have to do that somewhere else now...


I am hoping that their spreadsheet is a kind of checklist for thinking... but that remains to be seen. I'll let you know :)

Thanks Ben... (I didn't even need the wine).