The release of my book Solving for Project Risk Management: Understanding the Critical Role of Uncertainty in Project Management is only five days away! In advance of its release I have been providing individual blog posts on each chapter. Previous blog posts focused on cost and schedule growth and the need for quantitative risk analysis for cost and schedule. In this post, I focus on Chapter 4, titled “Covered with Oil: Incorporating Realism in Cost and Schedule Risk Analysis.”
The title of the chapter is inspired by a lyric in Jimmy Buffett's song Margaritaville. One of the lines refers to beachgoers covered in tanning oil. Jimmy Buffett probably never imagined that the line could apply to crude oil instead. However, the 2010 Deepwater Horizon oil spill in the Gulf of Mexico left oily beaches along the US Gulf Coast. Both the cost and the environmental impacts were far worse than anyone had predicted. It was the largest marine oil spill in history, releasing more than five million barrels of oil into the Gulf.
It is reasonable not to plan for rare events like this one that lie outside a project's control. But project managers do have some control over their destiny. They can meet budget, schedule, and scope targets by cutting content, and in cases of extreme overruns, senior leaders can cancel projects. However, project budgets typically include only small risk reserves, and most risk analyses fail to account even for the modest design changes and relatively mild external forces that should be part of the initial plan.
Even risk analysis is subject to underestimating the actual ranges for cost and schedule. One of the ways to get better at something is to measure performance. By doing so, you can identify your mistakes and learn from them, and you can see what you did right and learn to do it again. This feedback is critical for improving performance. Projects would have better risk analyses if the profession as a whole had been doing this systematically for the past 50 years. However, it is rarely done. The practice to date is like throwing darts but never looking at the dartboard: without examining the results, it is impossible to tell whether the methods used hit the bull's-eye or missed the board entirely. There is only a small amount of data on the performance of cost risk analysis, and even less for schedule risk analysis. Not surprisingly, what is available indicates that risk analyses are far from realistic. For most analyses I have examined, the actual cost exceeds the 90% confidence level. If the analysis is done right, the 90% confidence level should be exceeded by only one out of every 10 projects. I have found the opposite to be true. See the table below for a summary.
In the table, only two of the missions had actual costs below the 90% confidence level. Even though this is a small sample, the chance of this occurring is one in 2.7 million if you assume these 90% confidence levels are accurate. You are more likely to be struck by lightning. This indicates that what cost analyses report as 90% confidence levels are in reality much lower percentiles.
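As a sanity check, that one-in-2.7-million figure can be reproduced with a simple binomial calculation. A sample size of 10 missions is an assumption here, inferred from the stated odds (the table itself is not reproduced in this post). If each 90% confidence level were accurate, each mission would independently have a 90% chance of coming in below it, and the probability that only 2 of the 10 actually do so is:

```python
from math import comb

# Assumed inputs (the underlying table is not shown in this post):
n = 10   # missions in the sample (assumption inferred from the stated odds)
k = 2    # missions whose actual cost came in below the 90% confidence level
p = 0.9  # chance of coming in below a well-calibrated 90% confidence level

# Binomial probability of exactly k of n missions falling below the level:
# C(n, k) * p^k * (1 - p)^(n - k)
prob = comb(n, k) * p**k * (1 - p) ** (n - k)

print(f"Probability: {prob:.3g} (about 1 in {1 / prob:,.0f})")
# roughly 3.6e-07, i.e. about 1 in 2.7 million
```

If the confidence levels were honest, an outcome this lopsided would be essentially impossible, which is exactly the point: the stated 90% levels cannot actually be 90% levels.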
The cure for this problem is to calibrate cost and schedule risk analyses to historical cost growth and schedule delay data. To learn more about this, check out my book – you can pre-order from Amazon or Barnes & Noble.