Tuesday, January 29, 2008

Requirements Revisited

A recent post on the Seilevel Forum asks: What do Testers Test? Is it against the requirements or the design specification? A partial answer is "either or both, it depends". The standard V-model of testing suggests that there is testing appropriate to each level of specification. How much you do of each is a classic trade-off question.

But it seems to me that the received wisdom overlooks an important consideration, one that is generally missing from requirements practices: specifications do not generally specify absolute outcomes (so they are a bit of a misnomer in that regard). What they do specify is a range of valuable outcomes, bounded at one end by what Roger Cauvin has referred to as the "least stringent condition".

Clearly, if the design specification calls for behaviour that is more stringent than the requirement specification, compliance with the design specification implies compliance with the requirement specification, so testing against both involves redundancy. More importantly, though, what you really want to test against is the most stringent condition or, perhaps, the most stringent condition of value.
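
To make that concrete, here is a minimal sketch; the metric, names and thresholds are mine, purely for illustration. Suppose the requirement tolerates a 2000 ms response time and the design claims 1000 ms: the test asserts against whichever condition is the most stringent one we actually value, and passing it covers both specifications at once.

    # Hypothetical figures, for illustration only.
    REQUIREMENT_MAX_MS = 2000   # least stringent condition of value (requirement spec)
    DESIGN_MAX_MS = 1000        # more stringent condition claimed by the design spec

    def condition_to_test_against(requirement_ms, design_ms):
        # If we value the tighter design claim, it becomes the condition we test
        # against; compliance with it implies compliance with the requirement.
        return min(requirement_ms, design_ms)

    def test_response_time(measured_ms):
        limit = condition_to_test_against(REQUIREMENT_MAX_MS, DESIGN_MAX_MS)
        assert measured_ms <= limit, f"{measured_ms} ms exceeds the {limit} ms condition"

    test_response_time(850)   # satisfies both specifications in one test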

This post is entitled Requirements Revisited because that is what we ultimately need to test against. Not what we think is wanted; not what we asked for; but what we're told we've got! Or, at least, as much of what we're told we've got as we think is valuable.

If our super developers claim to have given us extra performance or potentially useful unrequested function, testing provides evidence for such claims (though it is not the only source of evidence). But if we don't need it, why prove it works? Only because (or when) the risk of not doing so exceeds the cost. I have worked on extremely flexible systems where, in some cases, the extent to which a capability had been tested was more of a limitation than its function or performance. Because we lacked a formal approach to revisiting the requirement specifications, we could end up with change requests whose only impact was additional testing or risk assessment.

In short, for each element of any level of specification, there is (or should be) a claimed level of satisfaction. After testing (or other validation), there is a proven level of satisfaction. Any case where the claimed level of satisfaction exceeds the proven level is an identified risk. Signoff on testing constitutes acceptance of the identified risks.
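
One way to picture that, as a sketch rather than anyone's prescribed method, is a per-element comparison of claimed against proven levels. The elements and figures below are invented, and each is expressed so that a higher number means "more satisfied":

    # Invented figures: claimed vs. proven level of satisfaction per element.
    claimed = {"uptime_pct": 99.9, "throughput_tps": 500, "max_concurrent_users": 1000}
    proven  = {"uptime_pct": 99.9, "throughput_tps": 350, "max_concurrent_users": 1000}

    def identified_risks(claimed, proven):
        # Any element whose claimed level exceeds its proven level is an identified risk.
        return {
            element: (claim, proven.get(element))
            for element, claim in claimed.items()
            if proven.get(element) is None or claim > proven[element]
        }

    # Sign-off on testing constitutes acceptance of whatever this returns.
    print(identified_risks(claimed, proven))   # {'throughput_tps': (500, 350)}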

2 comments:

Roger L. Cauvin said...

Good point regarding risks and claimed levels of satisfaction.

Requirements generally have a metric embedded in the condition. Sometimes this metric is scalar. For example, a reliability requirement might specify a minimum uptime (the percentage of time the service successfully delivers functionality). In that case, uptime is a scalar metric.

When testing, it can be useful not just to verify that the product meets its minimum uptime, but to gauge what the actual uptime is. Then the sales and marcom departments can use the figure, and the organization has a baseline for improvement.
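
A rough sketch of that idea, with made-up monitoring samples and a made-up 99.5% minimum:

    # Made-up monitoring samples: True = service delivered functionality, False = outage.
    samples = [True] * 997 + [False] * 3
    REQUIRED_UPTIME_PCT = 99.5   # hypothetical minimum from the requirement

    actual_uptime_pct = 100.0 * sum(samples) / len(samples)

    print(f"Actual uptime: {actual_uptime_pct:.2f}%")                    # 99.70%
    print("Meets minimum:", actual_uptime_pct >= REQUIRED_UPTIME_PCT)    # True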

AlanAJ01 said...

Thanks, Roger. We discussed scalar requirements in a previous post.