Uncertainty of Estimation

Uncertainty in the software estimation process varies in both magnitude and dimension. It can arise from a lack of knowledge that is recognized but has not yet been resolved, or it can be a function of knowledge whose very existence is yet to be discovered. This was very well put:

There are known unknowns... but there are also unknown unknowns – the ones we don’t know we don’t know. – Secretary of Defense Donald Rumsfeld, February 2002

This is precisely the case in software cost estimation, where the uncertainty arising from unknown unknowns poses a greater risk of failure than the known unknowns. Regardless of the array of estimation techniques at an expert estimator’s disposal – expert judgement, Wideband Delphi, top-down and bottom-up models, and algorithmic or parametric techniques – the element of uncertainty is always affected by variation in requirements, inputs and scope definition. Although scope may be strictly defined at the very start of a project in terms of effort, code volume and function points, its variability at later stages (which happens more often than not) creates major challenges for the estimation process. An expert estimator must be able to foresee both categories of unknowns to lend the estimates a degree of credibility and avert the risk of uncertainty to some extent.

Degree of Uncertainty

According to the Standish Group’s CHAOS Report, only about a third of software projects deliver the expected functionality within the scheduled time and budgeted cost. The report, however, does not weigh the degree of functionality delivered; an IEEE study notes that a complex project offering greater functionality may overrun cost and time, while a simpler project with low-level functionality can easily meet its time and cost estimates. Because the very definition of estimation success and failure shifts from study to study, estimation accuracy carries a high degree of uncertainty.[1]

Intrinsic Issues Affecting Uncertainty and Accuracy[2]

  • Software estimates are not fixed variables with precisely quantifiable attributes. They are merely probable measures of the size, duration and degree of the attributes involved in delivering an expected level of functionality, based on the data available at the time.
  • Overly optimistic estimates are a general phenomenon: even an expert estimator might overlook interruptions, changes in requirements and various other factors, and so understate the work. Arriving at realistic estimates the very first time, before historical data exists, is rather difficult to achieve.
  • Expecting different results in the present scenario while ignoring the lessons of past flawed judgements about time, effort and cost is dangerous. As the saying goes, history repeats itself, so it is best to learn from it. Rather than dismissing past failures, historical data can be used to build better estimates.
  • Complexities and missing requirements keep surfacing once design work is under way. A project is seldom as simple and small as originally anticipated and tends only to grow. Estimates that make no allowance for rework or additional work are therefore bound to run into problems later. According to one study, missing requirements, business changes and enhanced scope can demand rework as high as 50%. All of this affects uncertainty and estimation accuracy.
  • The list of unknowns is huge at the beginning of any project: the technologies or tools to be used to implement the requirements may be unknown, and the available skill sets, skill mix and project plan are undecided. As design progresses, the unknown variables diminish, which lowers the uncertainty around estimation variables and hence the risk associated with cost, time and scope; yet teams have to make estimate commitments long before this point is reached (see the sketch after this list).
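How the range around an estimate narrows as unknowns are resolved can be sketched in a few lines of code. The phase names and low/high multipliers below are illustrative assumptions (loosely in the spirit of Boehm’s “cone of uncertainty”), not figures taken from the studies cited in this article:

```python
# A point estimate bracketed by a range that narrows phase by phase.
# Phase names and multipliers are assumed values for illustration only.

POINT_ESTIMATE_PM = 40.0  # hypothetical point estimate, in person-months

PHASES = [  # (phase, low multiplier, high multiplier)
    ("Initial concept",       0.25, 4.00),
    ("Approved definition",   0.50, 2.00),
    ("Requirements complete", 0.67, 1.50),
    ("Design complete",       0.80, 1.25),
    ("Detailed design done",  0.90, 1.10),
]

for phase, lo, hi in PHASES:
    print(f"{phase:22s} {POINT_ESTIMATE_PM * lo:6.1f} - "
          f"{POINT_ESTIMATE_PM * hi:6.1f} person-months")
```

The point is not the particular numbers but the shape: the earlier the commitment, the wider the band the committed figure is drawn from.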


Various Tradeoffs for Accuracy

As pointed out, early estimates are replete with uncertainty. Rather than simply categorizing a project as a failure when its outcome deviates from the set estimates for scope, time and budget, it is important to measure the deviation in estimation accuracy so that the uncertainty of estimation can be quantified. This calculation can assist in handling uncertainty and in improving future estimates.
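One standard way to quantify that deviation (the article does not prescribe a particular metric, so this is just one reasonable choice) is the magnitude of relative error, MRE = |actual − estimate| / actual, averaged over past projects as MMRE. The project data below is made up for illustration:

```python
# Quantifying estimate deviation with MRE and its mean (MMRE).
# The (estimate, actual) effort pairs are fabricated for illustration.

def mre(estimate: float, actual: float) -> float:
    """Magnitude of relative error: |actual - estimate| / actual."""
    return abs(actual - estimate) / actual

history = [(12.0, 15.0), (30.0, 28.0), (8.0, 14.0)]  # person-months

errors = [mre(est, act) for est, act in history]
mmre = sum(errors) / len(errors)
print("per-project MRE:", [round(e, 2) for e in errors])
print(f"MMRE: {mmre:.2f}")  # 0.25 would mean estimates miss by ~25% on average
```

Tracking MMRE release over release turns “are our estimates improving?” into a measurable question rather than a matter of opinion.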

The trade-offs between time, cost and effort estimates on the one hand and quality (functionality) on the other must be realistic. Over-optimism bias should not lead estimators to understate the risks and probable outcomes associated with unrealistic estimates. Empirical evidence based on historical data can provide a useful present and future perspective. Because the relationship between effort, time and cost estimates and the quality of the software product is non-linear, even a small difference in any one estimate element can make a significant difference at different points on the project’s size spectrum. Various studies have demonstrated the trade-offs between team size and schedule, effort and quality: large teams achieve only a slight schedule compression at the cost of a significant rise in effort and defect levels. Another study showed that productivity rises with project size, whereas, holding size constant, productivity diminishes as team size increases. This trade-off held regardless of the productivity measure used (SLOC per person-month or function points per person-month). Optimal team size is thus affected by project scope and must be accounted for during the estimation process.[3]
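The non-linearity is easiest to see in a concrete parametric model. The sketch below uses the basic COCOMO ’81 equations for organic-mode projects (effort = 2.4·KLOC^1.05 person-months, schedule = 2.5·effort^0.38 months) purely as an example; the studies cited above do not commit to this particular model:

```python
# Basic COCOMO '81, organic mode -- one classic parametric model, used
# here only to illustrate the non-linear size/effort/schedule relationship.

def effort_pm(kloc: float) -> float:
    return 2.4 * kloc ** 1.05      # effort in person-months

def schedule_months(effort: float) -> float:
    return 2.5 * effort ** 0.38    # nominal schedule in calendar months

for kloc in (10, 20, 40, 80):
    e = effort_pm(kloc)
    d = schedule_months(e)
    print(f"{kloc:3d} KLOC -> effort {e:6.1f} pm, schedule {d:5.1f} mo, "
          f"avg team {e / d:4.1f} people")
```

Doubling the size slightly more than doubles the effort while stretching the schedule far less, so the implied average team grows: adding people is not a linear substitute for calendar time, which is exactly the team-size trade-off described above.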

A historical project database, accessed through estimation software, can alleviate this uncertainty by supplying analogies on which to base new estimates. Parametric modeling equations can further help estimation tools minimize uncertainty, using solid past data points to guard against unrealistic estimates, over-optimism and costly surprises. Such tools help project managers use empirical evidence to negotiate achievable timelines, make realistic commitments to clients, set the optimal team size and skill mix, and define productivity parameters within teams. Past experience makes it possible to scope for growth in requirements, track success, quantify the extent of deviation and make mid-way corrections if required. A historical project database can thus lower the uncertainty of estimation to a great extent – history is powerful enough to diminish future uncertainties!
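A minimal sketch of the analogy step described above: find the past projects closest in size to the new one, average their delivery rate, and apply it. The record format and numbers are assumptions for illustration; commercial tools keep far richer project attributes:

```python
# Estimation by analogy against a small historical database.
# (size in function points, actual effort in person-months) -- made-up data.

HISTORY = [(120, 10.0), (300, 30.0), (450, 52.0), (900, 130.0)]

def estimate_by_analogy(new_size_fp: float, k: int = 2) -> float:
    """Average the productivity (FP per person-month) of the k past
    projects closest in size, then apply it to the new project's size."""
    nearest = sorted(HISTORY, key=lambda p: abs(p[0] - new_size_fp))[:k]
    productivity = sum(fp / pm for fp, pm in nearest) / k
    return new_size_fp / productivity

print(f"400 FP project: ~{estimate_by_analogy(400):.1f} person-months")
```

The same database that feeds the analogy also records how far each past estimate drifted, which is what makes the mid-way corrections mentioned above possible.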

References

Armel, Kate. “History is the Key to Estimation Success.” Journal of Software Technology (2012): 15-22.

Dekkers, Carol. Software Estimation: How Misperceptions Mean We Almost Always Get It Wrong. 08 April 2014. 27 October 2015.

[1] http://www.qsm.com/sites/default/files/qsm/DACS_Article_Feb2012.pdf
[2] http://www.drdobbs.com/architecture-and-design/software-estimation-how-misperceptions-m/240166474
[3] http://www.qsm.com/sites/default/files/qsm/DACS_Article_Feb2012.pdf