Planning for the Unexpected?

Yesterday, I got access to a fairly new blog, stirrisk, dedicated to contributing original ideas and thinking to risk management. The post Business Planning for Turbulent Times refers to the book of that title and to a particular chapter that I read.

The paragraph on plausibility and probability is the one I find essential: scenarios need to come in a set, never alone; it is the set as a whole that aims to represent the degree of irreducible uncertainty in the environment.

This reminds me of Stephen Wolfram's A New Kind of Science, which fundamentally challenges the intuition that more sophisticated behavior requires more sophisticated underlying rules. It discusses the different mechanisms of randomness derived from the (four) classes of system behavior, related to how initial conditions lead to final states: randomness from the environment, from the initial conditions, and from intrinsic generation. The Principle of Computational Equivalence says that all processes can be viewed as computations; computational irreducibility is reached when a computation cannot be reduced to another with less "effort". This reasoning leads to the key point for our discussion here: the whole idea of using mathematical approaches to describe behavior makes sense only when the behavior is computationally reducible. Beyond that, other approaches are required, like scenario analysis with systematic scenario setting (although, IMO, the irreducibility of uncertainty might not be so easy to assess?).
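Wolfram's standard illustration of intrinsic randomness is the Rule 30 cellular automaton, and a tiny sketch (my own illustration, not from the post or the book) makes the point concrete: a trivial deterministic update rule whose center column nevertheless looks statistically random, i.e. computationally irreducible behavior from a simple rule.

```python
def rule30_center_column(steps):
    """Evolve Rule 30 from a single black cell; return the center column."""
    width = 2 * steps + 1
    cells = [0] * width
    cells[steps] = 1  # single black cell in the middle
    column = [cells[steps]]
    for _ in range(steps - 1):
        new = [0] * width
        for i in range(1, width - 1):
            left, center, right = cells[i - 1], cells[i], cells[i + 1]
            # Rule 30 update: new cell = left XOR (center OR right)
            new[i] = left ^ (center | right)
        cells = new
        column.append(cells[steps])
    return column

# The center column (1101110011...) passes many statistical randomness
# tests, even though the rule above is as simple as it gets.
print("".join(map(str, rule30_center_column(32))))
```

Nothing about the rule hints at the irregularity of its output, which is exactly why "more sophisticated rules" are not needed for random-looking behavior.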

From the conclusion of the book chapter: A ‘quant-modelling’-dominated culture of risk management has serious problems with creating and working with meaningful plausibility. However, modellers and regulators must now explore the plausible conditions under which the key assumptions of their models might no longer hold. Scenario planning, if used appropriately, offers an effective way to carry out plausibility analysis because it puts probability models in different plausible future contexts.
This challenges the belief that effective risk management can be based entirely on historical data and the probabilities derived from those data.
We cannot expect that the future is going to be like the past.

Taking all this into account, I come back to VaR in the Jungle and extend it to Risk in the Jungle: use clever computational reduction as the "secure" core of your operational semantics for certain scenario sets (we might call them scenario groups) in the relatively safe places. Don't forget model scenarios: if the price already differs when applying different models, something is wrong with the instrument, and then go from there ...
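A minimal sketch of such a model scenario, under my own assumptions (a vanilla European call, with the Black–Scholes closed form and a Cox–Ross–Rubinstein tree as the two models, and a hypothetical tolerance): if the two prices disagree beyond the tolerance, the instrument is flagged for a closer look.

```python
import math

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call (closed form)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def crr_call(S, K, T, r, sigma, steps=500):
    """Cox-Ross-Rubinstein binomial price of the same European call."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    # terminal payoffs, then roll back through the tree
    values = [max(S * u**j * d**(steps - j) - K, 0.0) for j in range(steps + 1)]
    for _ in range(steps):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]

def model_scenario_check(S, K, T, r, sigma, tol=0.05):
    """Flag the instrument if the two models disagree beyond the tolerance."""
    a, b = bs_call(S, K, T, r, sigma), crr_call(S, K, T, r, sigma)
    return abs(a - b) <= tol, a, b

ok, p_bs, p_crr = model_scenario_check(100, 100, 1.0, 0.03, 0.2)
print(f"BS={p_bs:.4f}  CRR={p_crr:.4f}  agree={ok}")
```

For a plain vanilla instrument the two models should agree closely; a persistent gap is the signal that something about the instrument (or its modelling) deserves attention.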
And yes, you cannot predict by looking into the past, but you can look a bit into the future by inverting and backtesting your scenario system. But the truly unexpected will remain unexpected.
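Backtesting a scenario system can be sketched very simply (a hypothetical toy, with assumed numbers, not the post's method): treat the scenario set as an envelope of daily returns and count how often realized history escaped it. Breaches tell you where the set was too narrow.

```python
def backtest_scenarios(scenario_returns, realized_returns):
    """Check which realized returns fell outside the scenario envelope."""
    lo, hi = min(scenario_returns), max(scenario_returns)
    breaches = [r for r in realized_returns if not (lo <= r <= hi)]
    coverage = 1.0 - len(breaches) / len(realized_returns)
    return coverage, breaches

scenarios = [-0.05, -0.02, 0.0, 0.02, 0.05]    # assumed scenario set
realized  = [0.01, -0.03, 0.08, 0.004, -0.06]  # assumed realized history
cov, br = backtest_scenarios(scenarios, realized)
print(f"coverage={cov:.0%}  breaches={br}")
```

Even a perfect coverage score in this sense only vindicates the set against the past that happened; the truly unexpected, by definition, leaves no breach to count until it arrives.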