The software development industry has witnessed major failures and project cancellations due to poor software estimation. For many years, estimation has been a long-debated topic among developers, researchers, and the wider development community, and it remains the focus of our mission to improve software estimation today.
Effort estimation begins with scope definition. Code volume, feature counts, and function points are all different ways of defining a project's scope. Methodologies for effort estimation – expert analysis, the Delphi technique, top-down and bottom-up approaches, and parametric and algorithmic models – have long been used by development firms, yet the risk and uncertainty surrounding scope creep, budget overruns, and delivery schedules have always remained.
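To make "parametric" concrete, here is a minimal sketch using the Basic COCOMO effort equation, a published parametric model; the 32 KLOC project size below is purely hypothetical:

```python
def cocomo_basic_effort(kloc, a=2.4, b=1.05):
    """Basic COCOMO (organic mode) effort in person-months.

    a and b are the published organic-mode constants; in practice an
    organization calibrates them against its own historical data.
    """
    return a * kloc ** b

# A hypothetical 32 KLOC project: roughly 91 person-months of effort.
effort = cocomo_basic_effort(32)
```

The point of a parametric model is not the textbook coefficients but the calibration step: without historical data to tune a and b, the output is only a rough baseline.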
Risk management has been a primary concern for all software companies. Yet despite continuing failures, the industry seems to learn little. Various tools and models have been developed, tested, and assessed, with little success in transforming how development risk is managed.
Below I discuss the major shortcomings of prevailing estimation processes that ignore risk management and thereby lead to inaccurate estimates.
1. Being Over-Optimistic in Estimation Ignores Risk Assessment
Most software failures arise from the inherent tendency of development firms to win business at all costs (read: over-optimism in estimation). This makes them estimate competitively, ignore critical project activities, and report fewer hours for more work. The result is immense pressure on the team, since those critical development activities cannot simply be skipped. To finish on time, the team works overtime and quality suffers, even if the project somehow limps to completion. Over-optimistic estimation processes do not include even the mildest risk assessment for successful project delivery. Despite past negative experiences, very few organizations have actually curbed their propensity to focus on winning the bid rather than on winning it for a successful delivery.
2. Absence of a Well-Organized Historical Data Management System Prevents Risk Calculation
Ideally, every organization should learn from its past experience, and that is true for estimation too. Assessing the deviation between estimated and actual effort gives some insight into the probable causes of failure. Productivity generally gets the blame when a software project fails; the real causes, however, may be an over-generalized estimation calculation, the lack of a defined effort model, poor team allocation, or macro factors leading to changes in demand or scope creep.
Whatever the case, how do you analyze it? Through historical data and assessment of past results. But does your organization have the capability to manage a large pool of historical data, or employ a tool to carry out risk analysis separately? Small organizations would deem both the maintenance and the analysis costly. Yet an organization-specific historical data source and a checklist of past actions enable future contingency planning and risk assessment. The truth is that most companies do not use a tool that supports such contingency planning.
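As a sketch of the deviation analysis described above, the magnitude of relative error (MRE) and its mean (MMRE) – standard accuracy metrics in estimation research – can be computed from a small pool of historical records; the hour figures below are invented for illustration:

```python
def mre(estimated, actual):
    """Magnitude of relative error for one completed project."""
    return abs(actual - estimated) / actual

def mmre(history):
    """Mean MRE across (estimated_hours, actual_hours) pairs
    drawn from an organization's historical records."""
    return sum(mre(e, a) for e, a in history) / len(history)

# Hypothetical past projects: (estimated hours, actual hours).
history = [(400, 520), (250, 240), (800, 1100)]
error_rate = mmre(history)  # about 18% mean deviation
```

Even this crude number answers the question the section raises: a consistently high MMRE points at the estimation process itself, not at team productivity.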
3. Neither the Expert Nor the Group Estimation Technique, But a Structured Estimation Tool Can Manage Risk
Organizations can go either with individual estimators or with an expert estimating the project. Typical individual estimation processes have different analysts producing different estimates, which then require review. An expert estimator, on the other hand, tends to generalize, producing estimates based on his own efficiency while ignoring the individual skill levels of the developers. Neither process provides for risk assessment and management. The negative effects and risks associated with a group-based estimation model do not get analyzed or recorded for future reference and risk management.
A structured tool with granular estimation capabilities – based on skill level, development activities, and testing capabilities for the chosen technology platform – can, however, handle the risk associated with human judgement errors far more reliably.
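One way such granular, skill-aware estimation might look in code; the skill multipliers and task figures are assumptions for illustration, not values from any particular tool:

```python
# Assumed skill multipliers: a junior developer takes longer than the
# baseline, a senior developer less. Real tools would calibrate these
# per technology platform and activity type.
SKILL_FACTOR = {"senior": 0.8, "mid": 1.0, "junior": 1.4}

def task_effort(base_hours, skill):
    """Adjust a baseline task estimate by the assigned developer's skill level."""
    return base_hours * SKILL_FACTOR[skill]

def project_effort(tasks):
    """Sum granular per-task estimates; tasks are (base_hours, skill) pairs."""
    return sum(task_effort(hours, skill) for hours, skill in tasks)

# Hypothetical task breakdown for a small feature.
tasks = [(16, "senior"), (24, "mid"), (8, "junior")]
total = project_effort(tasks)
```

Because each task carries its own multiplier, the tool can flag exactly where an expert's generalized estimate would have diverged from the team actually assigned.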
4. Controlling Changes in Requirements Manages Risk Related to Scope Expansion
A very common phenomenon in software project development is scope creep, which in turn is a function of changing requirements. Is your requirements analysis a robust procedure? Detailed planning and discussion with your client ensure that the requirements analysis is sound and capable of delivering the envisioned system as the users need it.
If there is a mismatch between user needs and the developers' understanding of functionality, scope creep is inevitable. Likewise, poor communication of user goals by the client, owing to unfamiliarity with technical language, leads to a defective understanding of requirements. Risk arising from inadequate requirements analysis can be managed at the conceptual phase through robust communication.
It is essential that communication and discussion at the requirements analysis stage be thorough. Once this is achieved, a tool can be employed to break the requirements into implementation types for estimation.
5. Project Management Capabilities Affect Risk Definition – Objective Measurement of Risk Critical for Analysis
Risk definition varies with project management capability: the same software project is less risky in the hands of an experienced, trained project manager than in those of a new, less experienced one. This inherently variable nature of risk prevents objective risk assessment.
However, instituting a benchmarking system that uses project objectives and goal attainment as the basis for measuring risk probability can lend objectivity to risk calculation. If the system is defined in terms of implementation types, each estimated for effort and cost on the chosen technology platform, subjectivity due to individual project management capability is limited. Here an automated tool performs the project management, taking all contextual factors into consideration. Failure to attain defined goals then supports objective risk assessment, identifying internal and external factors without human error or bias coming into play.
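Goal-attainment benchmarking of this kind can be sketched as a simple threshold check; the 15% tolerance below is an assumed figure for illustration, not an industry standard:

```python
def risk_level(benchmark_hours, actual_hours, tolerance=0.15):
    """Classify a project objectively by comparing actual effort
    against its benchmark, independent of who managed it.

    The tolerance bands (15% / 30%) are assumed thresholds an
    organization would set from its own historical data.
    """
    deviation = (actual_hours - benchmark_hours) / benchmark_hours
    if deviation <= tolerance:
        return "on-track"
    if deviation <= 2 * tolerance:
        return "at-risk"
    return "critical"
```

Because the classification depends only on the benchmark and the measured deviation, two projects with identical numbers get identical risk labels regardless of the project manager's seniority, which is exactly the objectivity the section argues for.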
A Risk Limiting Strategy Should Focus on 6 Typical Factors:
1. Robust communication with the client to define client expectations and user needs.
2. Clearly defined goals and project objectives, communicated succinctly to developers for sound requirements analysis at the conceptual phase.
3. Internal focus of the development team on building a system that is meaningful to the end user, not just a technologically 'fantastic' product – meeting the client's end objectives.
4. Deploy a system/tool to conduct estimation to overcome optimistic bias and produce realistic and achievable estimates.
5. Employ automation to enable granular and technologically robust estimation.
6. Employ an automated tool that allows setting benchmark standards, enables charting and applying historical data for analysis, and has robust project management capabilities for objective estimation.
These features would not only limit risk but enable risk assessment and management in the long run for future projects.
Quick FPA, a recent invention in the area of software estimation, is one such tool: it seeks to limit risk and uncertainty in development through the standardization, speed, and objectivity of its estimates.
Register and receive an invite for free trial to boost your software project estimation capabilities!