‘Radical Uncertainty’ Resurrects An Old Distinction and Overinflates Its Importance

As a writer, practitioner, and student of risk management, I was intrigued to discover Radical Uncertainty: Decision-Making Beyond the Numbers, written by two economics professors and published last March. I approach any work on risk management by academic economists with considerable skepticism. In my experience, academic economists do not seem to understand risk. Years ago I took a course titled The Economics of Uncertainty, where I learned about maximizing expected utility. I later learned there is much more to uncertainty than averages, and I have yet to encounter an application where risk or return is measured in ‘utils.’ More recently, to the horror of medical experts, economics professor Emily Oster advised pregnant women that low levels of alcohol consumption are safe, again proving that economists do not understand risk (even if the likelihood of fetal damage is low, the consequence is severe and lasts a lifetime – you can read more here).

So I opened Radical Uncertainty with a healthy skepticism, expecting that the authors, economists John Kay and Mervyn King, would get risk wrong, but I hoped to gain some new insights. The book failed to meet even my low expectations. It is a 500-page tome that would have been better as a 20-page paper. The authors expend way too many words making their point, which is that most uncertainty cannot be quantified, so attempting to model it in the real world is a fool’s game. They try to resurrect, and overinflate the importance of, a distinction between measurable and unmeasurable uncertainty proposed roughly 100 years ago by economists John Maynard Keynes and Frank Knight, one that has survived in various forms since, including in some of Nassim Nicholas Taleb’s work, such as The Black Swan.

The authors provide example after example of events they claim could not have been foreseen. They are right about events such as the rise of the smartphone, but many times they are flat-out wrong. For example, they claim the recent technical problems with the Boeing 737 MAX were an unknown unknown. However, these were predictable – with a constrained budget, a rushed schedule, and a lack of testing, it would have been a miracle for the new aircraft not to have encountered serious technical problems. The authors are too quick to claim that real-world issues cannot be effectively modeled with quantitative methods.

Even when quantitative models of uncertainty are not accurate, that does not mean they are not useful. The authors deride the practice of using quantitative analysis of future stock market returns in retirement planning, but never provide a good alternative beyond the vaguely defined notion of “narratives.” Simply because we cannot know the level of the S&P 500 thirty years from now with any confidence does not mean we should not use quantitative analysis now to inform our investment decisions for retirement.
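As an illustration of what that quantitative analysis can look like, here is a minimal Monte Carlo sketch in Python. The numbers are made up for this post (a 7% mean annual return, 16% volatility, a $10,000 annual contribution, a 30-year horizon) and the model is deliberately simple; the point is that the output is a range of plausible outcomes, not a false promise of knowing the index level thirty years from now.

    import numpy as np

    # Hypothetical assumptions (not from the book or any real plan):
    # 7% mean annual return, 16% annual volatility, $10,000 contributed
    # per year for 30 years, returns drawn independently each year.
    rng = np.random.default_rng(seed=42)
    n_sims, n_years = 10_000, 30
    mean_return, volatility, annual_contribution = 0.07, 0.16, 10_000

    balances = np.zeros(n_sims)
    for year in range(n_years):
        annual_returns = rng.normal(mean_return, volatility, n_sims)
        balances = (balances + annual_contribution) * (1 + annual_returns)

    # Report a range of outcomes, not a single point forecast.
    p10, p50, p90 = np.percentile(balances, [10, 50, 90])
    print(f"10th percentile: ${p10:,.0f}")
    print(f"Median:          ${p50:,.0f}")
    print(f"90th percentile: ${p90:,.0f}")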

Despite the authors’ statement that probability theory is “not very difficult,” they seem to know little about the subject. For example, they appear to be completely ignorant of extreme-value theory, which has been used for several decades to manage extreme risks effectively.
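For readers who have not encountered it, extreme-value theory models the tail of a loss distribution directly rather than assuming the whole distribution is well behaved. The sketch below is purely illustrative – simulated losses and an arbitrary 95% threshold, nothing from the book – and uses the standard peaks-over-threshold approach: fit a generalized Pareto distribution to exceedances above a high threshold, then extrapolate to far-tail quantiles.

    import numpy as np
    from scipy import stats

    # Simulated heavy-tailed daily losses stand in for real loss data.
    losses = stats.t.rvs(df=3, size=5_000, random_state=0)

    # Peaks-over-threshold: keep exceedances above a high threshold and
    # fit a generalized Pareto distribution (GPD) to the excesses.
    threshold = np.quantile(losses, 0.95)
    excesses = losses[losses > threshold] - threshold
    shape, _, scale = stats.genpareto.fit(excesses, floc=0)

    # Extrapolate to the 99.9% loss quantile using the fitted tail model.
    p_exceed = (losses > threshold).mean()
    q = 0.999
    tail_quantile = threshold + stats.genpareto.ppf(
        1 - (1 - q) / p_exceed, shape, loc=0, scale=scale)
    print(f"Estimated 99.9% loss quantile: {tail_quantile:.2f}")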

The book is aggravating to read because the authors play fast and loose with facts. While it is well written, it reads more like informal lecture notes than an edited book. The authors get many little things wrong and show a disregard for details. For example, they state that the golden spike completing the first transcontinental railroad in the United States was driven at a point in Utah where the line “surmounts” the Rocky Mountains. I’ve been there, and that spot is west of the Rocky Mountains; the transcontinental railroad crossed the Rockies in Wyoming. Many statements like this one set my teeth on edge – they are either wrong or just not quite right. The one statement in the book that is spot on is that “risk is the product of a portfolio as a whole and is not the sum of the risks associated with the individual investments within it.”
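That last point is easy to demonstrate with a small, entirely hypothetical example: two assets held in equal weights, with 20% and 30% volatility and a correlation of 0.3, give a portfolio volatility well below the weighted sum of the individual volatilities.

    import numpy as np

    # Hypothetical two-asset portfolio: equal weights, 20% and 30% annual
    # volatility, correlation 0.3.
    weights = np.array([0.5, 0.5])
    vols = np.array([0.20, 0.30])
    corr = np.array([[1.0, 0.3],
                     [0.3, 1.0]])
    cov = np.outer(vols, vols) * corr

    portfolio_vol = np.sqrt(weights @ cov @ weights)   # ~20.4%
    naive_sum = weights @ vols                         # 25.0%

    print(f"Portfolio volatility:         {portfolio_vol:.1%}")
    print(f"Weighted sum of volatilities: {naive_sum:.1%}")

Diversification is exactly why portfolio risk has to be computed from the covariance of the holdings rather than added up position by position.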

The practice of risk management has issues, but this book does not provide meaningful solutions. For an alternative that provides practical advice on the proper application of quantitative methods for the analysis and management of risk for projects, check out my book, Solving for Project Risk Management: Understanding the Critical Role of Uncertainty in Project Management, now available from Amazon, Barnes and Noble, and other outlets. You can read Chapter 1 for free and watch a 10-minute video overview on YouTube.

3 thoughts on “‘Radical Uncertainty’ Resurrects An Old Distinction and Overinflates Its Importance”

  1. This is the same argument against quantitative risk analysis that I run up against in the acquisition profession. Whenever I have briefed risk results to a Program Manager, the sharp ones usually reply with some form of: “But you are using Program XYZ in your data, and we are not going to make the same mistakes that they did.” True, they will not make the exact same mistake, but they will make a similar mistake whose result will roughly mirror the cost growth of the other program. Statistics is not about predicting an exact outcome, but about looking at the range of likely results based on historical trends. PMs don’t get that.
