You might have noticed that we had a spirited discussion in the Wilmott Forums (Book and Numerical Methods) about the correct application of solvers and how to test them - you can follow it via Andreas' recent "Vasicek examples" posts in our Mathematics thread.
Some of the early claims were based on small misunderstandings of the test objectives, but they pushed us to test the robustness of high-end techniques against even more extreme cases.
This motivated me to think a little more about project management and project failure. I often read that project failures are caused by fuzzy user input, moving targets and weak management support. This might be true for information and communication projects, but in projects for quantitative systems you need to take a system view: their behavior is determined by their intrinsic structure and time dependence.
As mentioned earlier, it is important to organize things orthogonally - for usage as well as for testing. No, we do not price a fixed-rate bond by FEM with upwinding. But there are analytic solutions (that we use to price them) against which you can compare your numerical results, as sketched below.
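To make this concrete, here is a minimal Python sketch of such a verified solution: the closed-form zero-coupon bond price under the Vasicek model from the forum examples. The function name and parameter values are illustrative, not taken from any particular library.

```python
import numpy as np

def vasicek_zcb_price(r0, T, a, b, sigma):
    """Closed-form zero-coupon bond price P(0, T) under the Vasicek model
    dr = a*(b - r) dt + sigma dW  (illustrative parameter names).
    P(0, T) = A(0, T) * exp(-B(0, T) * r0)."""
    B = (1.0 - np.exp(-a * T)) / a
    A = np.exp((b - sigma**2 / (2.0 * a**2)) * (B - T)
               - sigma**2 * B**2 / (4.0 * a))
    return A * np.exp(-B * r0)

# A reference value any numerical solver for the same problem must reproduce:
print(vasicek_zcb_price(r0=0.03, T=5.0, a=0.5, b=0.04, sigma=0.01))
```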
We call this "umbrella-testing" - test your solvers along "ribs" where verified solutions are available, apply multi-method approaches for the entire "surface" in between, play with conditions that simulate "winds" (deform them, ...) and so on. Extreme cases might never have happened in banking practice, but if they ever do, you have covered them already.
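As an illustration of walking from a rib out into the winds, the following sketch checks a second, independent method - a crude Euler Monte Carlo estimate - against the analytic price, then deforms the parameters toward extreme cases. It assumes vasicek_zcb_price from the previous sketch is in scope, and all parameter values are made up for illustration.

```python
import numpy as np

def vasicek_zcb_mc(r0, T, a, b, sigma, n_paths=50_000, n_steps=250, seed=0):
    """Monte Carlo estimate of E[exp(-integral of r dt)] via Euler stepping -
    a second, independent method for the same quantity as the analytic rib."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    r = np.full(n_paths, r0)
    integral = np.zeros(n_paths)
    for _ in range(n_steps):
        integral += r * dt
        r = r + a * (b - r) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return np.exp(-integral).mean()

# "Winds": deform the parameters, including extreme cases (very strong or
# nearly vanishing mean reversion, large volatility) that may never have
# occurred in practice, and check that the two methods still agree.
for a, sigma in [(0.5, 0.01), (5.0, 0.01), (0.5, 0.2), (0.01, 0.001)]:
    analytic = vasicek_zcb_price(r0=0.03, T=5.0, a=a, b=0.04, sigma=sigma)
    mc = vasicek_zcb_mc(r0=0.03, T=5.0, a=a, b=0.04, sigma=sigma)
    print(f"a={a:5.2f}  sigma={sigma:5.3f}  analytic={analytic:.5f}  mc={mc:.5f}")
```

Where the two methods drift apart under deformation, you have found a spot on the umbrella surface that needs a closer look.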
So, the hope of addressing the roots of project failures lies in building and testing your system for a wider scope, trying to detect the systemic dysfunctions that cause them.
This makes your system more robust with respect to fuzzy user input and moving targets.
And we, the management, motivate our teams to do exactly this. The extra cost is paid back quickly.