Is It Enough To Be Good At Not Being Bad?
When I took my first stumbling steps in computing, it was already agreed: "No one ever got fired for buying IBM". But at that time, the 70s, only a few understood that decisions are about optimizing risk.
The fear of regret
It is a wonderful example of loss aversion at work: defensive decision making. Traditional brand marketers codify growth in pairs: value & benefit, standard & importance, recognition & programming, identity & promotion, or emotion & love.
And we know it, but often suppress it: value is the weakest and love the strongest growth code - but most people do not love risk management systems, nor do they find them ideological. Yet they could be mainstream. Risk is often confused with danger - at all levels, from the managerial to the operational.
So, in enterprise-level technologies the "IBM example" goes on and on. Standard & importance wins the game.
Please do not misunderstand me, I am not complaining - I want to point out that this is our power source: do things that matter for those who care. Is this arrogant? It is not meant to …
Better or less likely to be bad?
My experience as a seller and a buyer: people quite often pay a fairly large premium for brands, not because they are objectively better, but because they do not expect them to be bad.
There is no objective metric for assessing a technology.
Stochastic tinkering?
That is what N. N. Taleb calls gaining insight by trial and error. Yes, people make progress using things without knowing how they work. And this is great - if somebody comes along and explains them (a little later), especially when traps and side effects need to be avoided. I call this the black box - white box principle of using (in contrast to the white box - black box principle of learning). This is the risk engineering part of risk management.
The tightly coupled complex systems trap
We have pointed out many detailed model-method traps here, and uncovered principles that were common sense for a long time but do not work as expected.
Here is a really dangerous one: a complex system may have unintended consequences, and tight coupling means there is not enough time to react to them. This trap is paired with the sunk cost bias: if a system was expensive, you do not change it.
Serving customers individually with an integrated but loosely coupled system
UnRisk offers highly automated, adaptable risk management systems for decision support that are development systems in one. They work on a generic portfolio-across-scenarios valuation principle; their database concept makes them integrated, and their orthogonal organization makes them loosely coupled.
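To make the portfolio-across-scenarios idea concrete, here is a minimal sketch in Python - with purely hypothetical names, not UnRisk's actual API. Instruments and scenarios are kept orthogonal: a valuer combines any instrument with any scenario, and neither axis needs to know about the other.

```python
# Minimal sketch of portfolio-across-scenarios valuation.
# All names are hypothetical illustrations, not UnRisk's API.
from dataclasses import dataclass
from typing import Callable, Dict, List

# A scenario is just a named set of market data (here: a flat rate).
Scenario = Dict[str, float]

@dataclass
class Bond:
    face: float
    coupon: float    # annual coupon rate
    maturity: int    # years

def value_bond(bond: Bond, scenario: Scenario) -> float:
    """Discount the bond's cash flows with the scenario's flat rate."""
    r = scenario["flat_rate"]
    pv = sum(bond.face * bond.coupon / (1 + r) ** t
             for t in range(1, bond.maturity + 1))
    return pv + bond.face / (1 + r) ** bond.maturity

def value_portfolio(portfolio: List[Bond],
                    valuer: Callable[[Bond, Scenario], float],
                    scenarios: Dict[str, Scenario]) -> Dict[str, float]:
    """The valuation grid: portfolio x scenarios, each cell independent."""
    return {name: sum(valuer(pos, scn) for pos in portfolio)
            for name, scn in scenarios.items()}

portfolio = [Bond(face=100.0, coupon=0.04, maturity=5),
             Bond(face=100.0, coupon=0.02, maturity=10)]
scenarios = {"base": {"flat_rate": 0.03},
             "rates_up": {"flat_rate": 0.04}}

print(value_portfolio(portfolio, value_bond, scenarios))
```

Because the two axes are independent, adding a new scenario or a new instrument type never forces a change on the other side - that independence is the loose coupling.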
Our growth code is individualization & innovation.
This post is inspired by a Rory Sutherland contribution at Edge.
Picture from sehfelder