“The project is late because testing is taking too long.” Sound familiar? Testing is one of the final tasks completed before proposed changes are introduced to clients (Lewis, 2009). When errors are discovered, delays may occur; the testing team is accountable for detecting potential problems in the product before the market sees it, but suggesting that all responsibility for delays belongs to one group is counterproductive to its role. This team, like any other discipline, can only be accountable for the testing effort on the project, not for the effort of others. The project manager, or scrum master, has to ensure that all parties work effectively and efficiently toward continuous improvement in project outcomes (King, 1996). Many tools are available that can help ensure success; however, sometimes the application of tools and data is not enough to get the job done. Frequently, it seems we are using a hammer to drive a screw. How can the organization prevent finger-pointing, which is destructive to team cohesiveness, and instead encourage positive reflection toward problem resolution?
In my almost 30 years of experience, the opening statement comes up frequently. Most post-implementation reviews, or retrospectives, identify and reflect on positive and negative outcomes to inform future improvement. During these meetings, it is usual to discuss an issue; however, if no root cause is identified, or there is no time to work through it properly, a blaming session occurs between team members, each refusing responsibility for the issue. The information produced in these discussions does little to provide solutions for future development. In a Waterfall organization, there is ample guidance on the benefits of writing precise requirements that define the organization’s needs at the start of the project. For Agile delivery organizations, it is the quality of the stories during sprint planning and the collaboration with all stakeholders, including clients. Although this guidance is available, it is not widely applied (Emam & Koru, 2008). Despite the issues Emam and Koru identify, the point of contention will continue to lie with testing, because testers are involved in the final component of the project cycle and, for that reason, are the most visible.
One tool used on Waterfall projects is a comparison of estimated vs. actual costs during each phase of development. Most often, in the early part of the project cycle, the upfront work goes according to plan, and expected costs match actual values, which are based on gathering requirements and planning the project. As errors are discovered and delays occur, costs continue to rise, leading some to assume that the fault lies with testing rather than, possibly, with the earlier work. In an Agile environment, velocity, throughput, and time spent (clocking extra time in the sprint) are some metrics that can help. Exploring the root cause allows the actual problem to be resolved and future projects to be developed more efficiently; frequently, the issue lies in a process fault (Alshawi & Al-Karaghouli, 2003).
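For illustration, both kinds of metric can be computed from a handful of tracked figures. A minimal sketch in Python, using entirely hypothetical phase costs and sprint data (the figures and phase names are assumptions, not prescribed values):

```python
# Waterfall view: per-phase cost variance (hypothetical figures).
phases = {
    "Requirements": {"estimated": 10_000, "actual": 10_200},
    "Design":       {"estimated": 15_000, "actual": 15_500},
    "Development":  {"estimated": 40_000, "actual": 48_000},
    "Testing":      {"estimated": 20_000, "actual": 31_000},
}

for name, cost in phases.items():
    variance = cost["actual"] - cost["estimated"]
    pct = variance / cost["estimated"] * 100
    print(f"{name}: variance {variance:+,} ({pct:+.1f}%)")

# Agile view: velocity as average story points completed per sprint
# (hypothetical sprint history).
completed_points = [21, 18, 24, 19]
velocity = sum(completed_points) / len(completed_points)
print(f"Average velocity: {velocity:.1f} points/sprint")
```

Note that a large variance in the Testing phase alone does not say where the defect was introduced; the point of pairing these numbers with RCA is to trace the overrun back to its source.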
So how can the issue of laying blame be resolved when the presumed fault lies with the testing group, or with any team participant, for issues over which they have no control? Gray and Williams (2011) suggest that quality improvement through questioning and analysis (QIQA) is a tool in an RCA program that may assist management in these situations. This process provides an environment where everybody can learn from the project experience through an appropriate discussion about procedures. These discussions promote an open atmosphere in which improvements are identified and team-member buy-in can occur. A review of other tools and available data would help develop the questions needed to get to the source of the problem. This supporting inquiry will help build a structure that prevents negative feedback toward any individual or team and encourages positive discussion toward solutions.
Root Cause Analysis (RCA), when combined with a robust metrics and measurement program, can identify issues within a project and provide data for continuous improvement (Black, 2002; Flemming & Koppelman, 2008). Some research presents data indicating that RCA does provide benefits throughout an organization (Lehtinen, n.d.). Transforming this information into a cost figure and adding it to the quantitatively measured values can decrease costs over a specific time frame. Re-work adds no value, because it is avoidable with early detection and mitigation (Lewis, 2009). Based on Toor’s (2009) findings, the RCA process would create a capable team by allowing the facilitator to use people-management skills to achieve the following:
– Create ownership
– Create best practices
– Hone analysis skills
– Generate awareness
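To make the avoidable cost of re-work visible, as discussed above, tracked re-work effort can be converted into a cost figure and added to the measured project costs. A minimal sketch, with hypothetical hours and a hypothetical blended hourly rate:

```python
# Hypothetical re-work effort logged per phase, in hours.
rework_hours = {"Development": 120, "Testing": 45}
hourly_rate = 85  # assumed blended rate, dollars per hour

# Convert effort into a cost figure per phase, then total it.
rework_cost = {phase: hours * hourly_rate for phase, hours in rework_hours.items()}
total_rework = sum(rework_cost.values())
print(f"Total rework cost: ${total_rework:,}")
```

Once this figure is tracked alongside the standard estimated-vs-actual numbers, the portion of an overrun attributable to avoidable re-work can be separated from the testing effort that merely detected it.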
As noted above, people management, as identified by Toor (2009), contributes to the RCA process and to continuous improvement. Without these features, the data gathered would have no value, owing to possible corruption by unsound analysis and probable bias. Although many factors can make a metrics program less helpful as a tool and produce invalid data, successful use is still possible (Lukas, 2008). Marshall, Ruiz and Bredillet (2008) confirmed through previous studies that metrics can be a valuable tool for determining project success. From a testing group’s point of view, a proper metrics program can identify other explanations for delays, which might have occurred early in the development cycle. This information would benefit the project steering committee and department management in future growth.
From a governance perspective, Sharma, Stone and Ekinci (2009) suggest that these two tools could produce positive effects because they ensure that all stakeholders are involved. RCA focuses on the group responsible for the identified concern and provides evidence about how and when the issue occurred and affected work effort. The solution will support capacity planning for future work as well as improvement and efficiency within the process cycle.
Testing groups would benefit from the introduction of RCA combined with a useful metrics program. Applying the two together involves a transformation from a testing environment to a Quality Assurance environment. Current best practice for the RCA process delivers results to stakeholders within the organization, leading to resolved issues and successful future projects. Although RCA provides some data on what can be improved, it does not provide the costs involved in the rework that may be needed to resolve the problem. As long as that value goes unmeasured, the initial biased statement, “The project is late because testing is taking too long,” will continue. Equally, identifying a root cause means nothing if the metrics are not tracked, because the reason for the delay would never be established.
As with the introduction of any new set of processes and reporting, training all stakeholders within the organization is essential. Keeping this information and all metrics from each discipline (i.e., business analysts, product managers, testers, and developers) in a database, together with the Root Cause Analysis data collected over time, will provide valuable information to stakeholders looking for recurring issues. With this data, they may be able to pinpoint the juncture at which these concerns began to affect the teams’ value. This process can then create a collaborative environment where learning and improved production can occur (Gray & Williams, 2011). These tools will help post-cycle discussion groups engage in meaningful dialogue supporting successful change, productive conflict resolution and team cohesiveness.
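As a sketch of how such a store could surface recurring issues, the snippet below aggregates RCA records across projects. The record fields, discipline labels, and root-cause categories are all assumptions for illustration, not a prescribed schema:

```python
# Hypothetical RCA records collected across projects and disciplines.
from collections import Counter

rca_records = [
    {"project": "A", "discipline": "Business Analyst", "root_cause": "ambiguous requirement"},
    {"project": "A", "discipline": "Developer",        "root_cause": "missing unit tests"},
    {"project": "B", "discipline": "Business Analyst", "root_cause": "ambiguous requirement"},
    {"project": "C", "discipline": "Tester",           "root_cause": "late environment setup"},
    {"project": "C", "discipline": "Business Analyst", "root_cause": "ambiguous requirement"},
]

# Count how often each root cause recurs; anything seen more than once
# is a candidate systemic issue rather than a one-off.
recurring = Counter(record["root_cause"] for record in rca_records)
for cause, count in recurring.most_common():
    if count > 1:
        print(f"Recurring issue: {cause} ({count} occurrences)")
```

Even this toy aggregation shows the point of the database: the recurring issue originates upstream of testing, which is exactly the kind of evidence a retrospective needs to replace blame with a process fix.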
Want to learn more about building an RCA and Metrics program within your organization? Or, enhance your current one? Contact us.
Alshawi, S., & Al-Karaghouli, W. (2003). Managing knowledge in business requirements identification. Journal of Enterprise Information Management, 16(5).
Black, R. (2002). Managing the testing process (2nd ed.). Wiley Publishing, Inc.
Emam, K. E., & Koru, A. G. (2008). A Replicated Survey of IT Software Project Failures. IEEE Software, 25(5), 84-90.
Flemming, Q. W., & Koppelman, J. M. (2008). If it walks and talks like EVM… it must be earned value management. Contract Management, 48(3), 46.
Gray, D., & Williams, S. (2011). From blaming to learning: re-framing organisational learning from adverse incidents. The Learning Organization, 18(6), 438-453.
King, I. (1996). The road to continuous improvement: BPR and project management. IIE Solutions, 28(10).
Lehtinen, T. O. (n.d.). Perceived Feasibility of Using Root Cause Analysis in Post Project Reviews: an Empirical Investigation. Retrieved December 09, 2012, from Lund University: http://esem.cs.lth.se/esem2012/idoese/pdf/137_IDoESE_Lehtinen.pdf
Lewis, W. E. (2009). Software testing and continuous quality improvement (3rd ed.). Boca Raton: Auerbach Publications.
Lukas, J. A. (2008). Earned value analysis – why it doesn’t work. 2008 AACE International Transactions.
Marshall, R. A., Ruiz, P., & Bredillet, C. N. (2008). Earned value management insights using inferential statistics. International Journal of Managing Projects in Business, 1(2), 288-294.
Toor, T. P. S. (2009). People management: an imperative to effective project management. Business Strategy Series, 10(1), 40-54.