Several years ago, when I was the Cost Director at the Missile Defense Agency, I had a conversation about quantitative cost risk analysis with a senior executive in the Department of Defense's office of Cost Assessment and Program Evaluation (CAPE). I asked him why his organization did not conduct quantitative risk analysis for its independent cost estimates. He explained that he thought it was difficult to do well, and that many of the analyses he had seen had unrealistically tight ranges. While I don't believe we should abandon risk analysis just because it is hard to do right, he had a point: many risk analyses significantly underestimate the range of uncertainty. One reason for this is that, as a profession, we do not measure our performance. That is, we do not compare our risk analyses with actual costs. In my forthcoming book, I discuss this issue and compare actual costs to risk analyses that were conducted for 10 projects. In my analysis, I compared the actual cost to the 90th percentile. If the 90th percentile for a project cost risk analysis is $100 million, there is a 90% probability that the actual cost will be equal to or less than $100 million, and only a 10% chance that it will be greater.
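To make the definition concrete, here is a minimal sketch of how a 90th percentile is read off a Monte Carlo cost risk simulation. The lognormal distribution and its parameters are purely illustrative assumptions, not drawn from any real analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cost risk analysis: 100,000 simulated project costs ($M)
# drawn from an assumed lognormal distribution (illustrative parameters only)
simulated_costs = rng.lognormal(mean=np.log(80), sigma=0.25, size=100_000)

# The 90th percentile: 90% of simulated outcomes fall at or below this value
p90 = np.percentile(simulated_costs, 90)
print(f"90th percentile cost: ${p90:.1f} million")
```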
I looked at 10 projects from a variety of industries and found that for 8 of the 10, the actual cost was higher than the 90th percentile. If these 90th percentiles were accurate, we would expect only about one of the ten to exceed that threshold. See the table below for a summary.
This is a small sample of projects. What is the probability that, if the 90th percentiles of these risk analyses were really 90th percentiles, we would find that at least 8 of 10 had actual costs exceeding this threshold?
We can model this as a binomial distribution, which gives the probability of k successes in n independent experiments as:

$$P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}$$
In the formula for the binomial, p is the probability of success in each experiment, and we assume mutual independence among all 10 projects. In our question, n = 10; k = 0, 1, or 2; and p = 0.90, the probability that the actual cost is less than or equal to the 90th percentile of the project's risk analysis. The probabilities for these outcomes, along with their sum, are listed below.

P(k = 0) = 1.000×10^-10
P(k = 1) = 9.000×10^-9
P(k = 2) = 3.645×10^-7
Sum = 3.736×10^-7
The sum of these, which is 3.736×10^-7, is the chance of seeing this outcome if the 10 risk analyses produced accurate 90th percentiles. This is approximately 1 in 2.7 million. You are more likely to be struck by lightning (1 in 700,000). This means these 90th percentiles are not realistic, which calls into question the credibility of the state of practice in project cost risk analysis.
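For readers who want to check the arithmetic, here is a short Python snippet that reproduces these probabilities using only the standard library:

```python
from math import comb

n, p = 10, 0.90  # 10 projects; 90% chance each actual cost is <= its 90th percentile

def binom_pmf(k: int, n: int, p: float) -> float:
    """P(X = k) for a binomial distribution: C(n, k) * p^k * (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability that 2 or fewer of the 10 actuals fall at or below the 90th
# percentile (equivalently, that at least 8 of 10 exceed it)
for k in range(3):
    print(f"P(k = {k}) = {binom_pmf(k, n, p):.3e}")

total = sum(binom_pmf(k, n, p) for k in range(3))
print(f"Sum       = {total:.3e}")  # about 3.736e-07, roughly 1 in 2.7 million
```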
Fortunately, there are ways to fix this. One option is to calibrate point estimates of cost to historical cost growth data. For more information on how to do this, see the ICEAA website page that collects many of my conference papers related to risk, including my 2018 paper on risk calibration.
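As a rough illustration of the idea (a sketch of one way calibration might work, not the specific method in the 2018 paper), one could resample historical cost growth factors to widen a point estimate into a calibrated distribution. The growth factors below are made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical cost growth factors (actual cost / initial estimate) from
# analogous completed projects -- illustrative values, not real data
growth_factors = np.array([0.95, 1.05, 1.10, 1.15, 1.20,
                           1.25, 1.35, 1.45, 1.60, 1.80])

point_estimate = 100.0  # $M, uncalibrated point estimate for the new project

# Bootstrap-resample the historical growth factors to build a cost distribution
calibrated_costs = point_estimate * rng.choice(growth_factors, size=100_000)

print(f"Calibrated median:          ${np.percentile(calibrated_costs, 50):.1f}M")
print(f"Calibrated 90th percentile: ${np.percentile(calibrated_costs, 90):.1f}M")
```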
You can read Chapter 1 of my book for free here.
When I collaborated on the Air Force Cost Risk and Uncertainty Analysis handbook and the GAO counterpart, we were very careful to show the difference between portfolio risk and individual program risk. In talking with folks at the CAPE whose charter was to estimate portfolios at DoD (Dr. Burke, Miller, Janeki, et al.), they used portfolio probabilities to optimize portfolio cost effectiveness. I am glad to see you are recognizing this body of work. I would not have expected otherwise. Even in retirement, I appreciate your contributions. Please keep sharing. John Cargill
Thanks for the kind words, John.