This article in this week's The Economist discusses at the microchip level what we have been thinking about at the algorithm level.
In 2009 I wrote here When Good ENUF is Great, after we had presented the Risky Horror Show at Wilmott Finance focus.
The suggestion in the Economist article: to make chips much faster and more energy efficient, let them make a few mistakes here and there. This reminds me of the problem of inline quality assurance in a complex manufacturing process. It depends on whether the little mistakes are made at the beginning of a sequence, where they mess up everything downstream, or whether they are patterns of mistakes that do not influence the result much.
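As a toy illustration of that point (pure Python, made-up numbers, nothing to do with real chip design or our engines): the same tiny mistake is devastating when it is injected early into a sequential computation whose later stages amplify it, and almost harmless when it is injected at the end.

```python
def pipeline(x, n_stages=20, error_at=None, error=1e-6):
    """A toy sequential computation: each stage feeds the next and amplifies its input."""
    for stage in range(n_stages):
        if stage == error_at:
            x += error            # inject a tiny "hardware" mistake at this stage
        x = 1.5 * x - 0.2         # every later stage amplifies whatever error is present
    return x

exact = pipeline(1.0)
early = pipeline(1.0, error_at=0)    # mistake at the very first stage
late  = pipeline(1.0, error_at=19)   # mistake at the very last stage

print("impact of an early mistake:", abs(early - exact))   # roughly 1e-6 * 1.5**20
print("impact of a late mistake:  ", abs(late - exact))    # roughly 1e-6 * 1.5
```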
Back to the chip. If it produces sloppy results but does not completely mess up the program, there might be applications where this even goes unnoticed. Take music-genre-oriented sound qualities, for example.
What about a "sloppy" valuation and risk analytics engine? Or trading decision support with "relaxed correctness"?
For enterprise-wide pricing and analytics strategies, all UnRisk products access the same accurate valuation engines, avoiding cross-model and cross-method inconsistencies that usually become horrible in interplay.
But with principal component techniques we can reduce the number of, say, VaR calculations to 8% without losing any decision support quality - white paper: A clever handful is enough.
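The white paper has the details; as a rough sketch only (synthetic data, a toy linear "engine", numpy - not the UnRisk implementation), the shape of the idea is: call the expensive, accurate valuation on a clever handful of principal-component scenarios and reconstruct the P&L of all the other scenarios from those few calls.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic risk-factor scenarios: 1000 yield-curve shifts on 30 buckets,
# generated so that a few hidden drivers explain almost all of the movement.
n_scen, n_buckets, n_drivers = 1000, 30, 3
loadings = rng.normal(size=(n_drivers, n_buckets))
scenarios = (rng.normal(size=(n_scen, n_drivers)) @ loadings
             + 0.01 * rng.normal(size=(n_scen, n_buckets)))

# Stand-in for the expensive, accurate revaluation of a portfolio under one shift.
weights = rng.normal(size=n_buckets)
def full_revaluation(shift):
    return weights @ shift        # toy linear P&L, only for the illustration

# PCA of the scenarios: a handful of components captures nearly all the variance.
centered = scenarios - scenarios.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
k = 3
components = vt[:k]                       # principal directions in risk-factor space
scores = centered @ components.T          # coordinates of every scenario
print("variance explained by", k, "components:", (s[:k]**2).sum() / (s**2).sum())

# The expensive engine is called only k + 1 times ...
base_shift = scenarios.mean(axis=0)
base_pnl = full_revaluation(base_shift)
pnl_per_component = np.array(
    [full_revaluation(base_shift + c) - base_pnl for c in components])

# ... and the P&L of all 1000 scenarios is reconstructed from those few calls.
approx_pnl = base_pnl + scores @ pnl_per_component

# Brute-force loop only to check the quality of the approximation in this toy.
exact_pnl = np.array([full_revaluation(shift) for shift in scenarios])
print("99% VaR, full revaluation of all scenarios:", np.percentile(exact_pnl, 1))
print("99% VaR, reconstructed from", k + 1, "revaluations:", np.percentile(approx_pnl, 1))
```

With a real, nonlinear valuation the reconstruction is of course less trivial than in this toy, but the pattern is the same: a clever handful of accurate revaluations, cheap reconstruction for everything else.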
This and other "deterministic" simplifications - like near-to-the-optimum calculations, ... - enable us to apply the most accurate methods to valuations that need to be performed many, many times. Multiplied by the speed-up from utilizing new hybrid computer muscles, the throughput is amazing. And time-to-insight is so fast, without losing any quality.