On the Importance of Unit Tests

Driven by the continuous development of new features and functionalities, the complexity of the UnRisk software package has grown rapidly over the last years. In many situations, additional features needed by our customers are integrated by modifying existing, tested source code.
The main problem is that one has to be very careful to guarantee that changes to the application do not break something that used to work before. To avoid these undesirable side effects of new implementations, a large portion of the UnRisk functionality is tested on different platforms (Windows, Linux and Mac OS X) via automatically scheduled unit tests. Within these tests, valuation and calibration results are compared to reference numbers; if the deviations are not within a given tolerance level, the corresponding test is flagged as failed.
The developers who have modified the sources since the last build are notified by email about the status of the test results.
Here is an example of the test summary for the case in which all unit tests succeeded:
Since the UnRisk package is built and tested on a daily basis, problems that occur in the software development process are detected immediately.
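A minimal sketch of such a tolerance-based regression check might look as follows (the function, reference numbers and tolerances are purely illustrative, not the actual UnRisk test harness):

```python
import math

def check_against_reference(computed, reference, rel_tol=1e-8, abs_tol=1e-12):
    """Return True if the computed value matches the reference value
    within the given relative/absolute tolerance."""
    return math.isclose(computed, reference, rel_tol=rel_tol, abs_tol=abs_tol)

# Reference value recorded from a previous, validated build (illustrative)
reference_pv = 102.345678

# Value produced by the current build (here simply hard-coded)
computed_pv = 102.345678

status = "PASSED" if check_against_reference(computed_pv, reference_pv) else "FAILED"
print(f"Valuation test: {status}")
```

If the deviation exceeds the tolerance, the test is flagged as failed and the notification email is triggered.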

Agenda 2015 - A Portfolio of Products

Our compass for 2012 was the all-new UnRisk, for 2013 it was "accelerate", and for 2014 "package and disseminate know-how".

2014 - we released UnRisk FACTORY with a Bloomberg and an Excel Link, and (yesterday) UnRisk 8, the new pricing and calibration engines providing a multi-curve framework and eminently practical functions transforming lognormally distributed into normally distributed data spaces for interest-rate-model calibration and valuation. New UnRisk kernels are used for regime changes as required by the xVA project.

2015 - tie technologies intelligently together

With our mission to deliver on the promise of serving individual requirements whilst driving generic technologies, we offer an expanding portfolio of products. They all have in common that they are solutions and development systems in one.

For years, we have built a technology stack that enables us and quant developers to create individual products swiftly, carefully choosing the mathematics and mapping every practical detail.

Our technology stack combines the UnRisk Financial Language implemented in UnRisk gridEngines for pricing and calibration, a portfolio-across-scenario FACTORY, a VaR Universe, the UnRisk FACTORY Data Framework, UnRisk web and deployment services and an Excel Link...At the end of 2014, an xVA engine with emphasis on central counterparty risk valuation will be available.

The product portfolio will include UnRisk Quant, UnRisk Capital Manager, UnRisk Bank, UnRisk Auditor...and focus on the technologies that are required for the purpose, in deployment environments adequate to functionality, performance and usage.

What will drive us in 2015? Meet the individual requirements even more precisely by configuring our technology stack intelligently.

UnRisk 8 Rolls Out

20-Nov-14 - we have released UnRisk PRICING ENGINE and UnRisk-Q version 8, introduced as UnRisk 8. This release is free for all UnRisk Premium Service customers and will be shipped to all new customers immediately. UnRisk was introduced in 2001; UnRisk 8 is now the 21st release.

What's new in UnRisk 8 has been compiled in Andreas' pre-announcement yesterday.

There's one thing to emphasize: UnRisk-Q is the core of our technology stack. UnRisk PRICING ENGINE is a solution, but it remains a technology, because our proprietary Excel Link provides a second front-end, Excel, while the UnRisk Financial Language front-end remains available.

It's perfect for quants who want to build validation and test books in Excel but develop new functionality atop UnRisk, or, say, front-office practitioners who want to run dynamic workbooks but develop post-processors aggregating results in a beyond-Excel way. Even better if both collaborate closely.

Tomorrow: UnRisk 8 is released

Tomorrow, we will release version 8 of UnRisk. UnRisk 8 includes, as key features, the valuation of moderately structured fixed income instruments under a multi-curve model, and the Bachelier model.

The multi-curve model allows the use of different interest rate curves (in the same currency) for discounting, e.g., the EONIA curve, and for determining variable cashflows, e.g., Libor 3m or Libor 6m.
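The essence of the multi-curve setup can be sketched with two flat curves (all rates, dates and the notional below are invented for illustration; a real implementation would bootstrap both curves from market quotes):

```python
import math

def df(rate, t):
    """Discount factor on a flat, continuously compounded curve."""
    return math.exp(-rate * t)

ois_rate = 0.005      # flat EONIA curve, used for discounting
libor_rate = 0.012    # flat Libor 3m curve, used for forwarding

t_start, t_end = 0.25, 0.50   # accrual period of the Libor fixing
tau = t_end - t_start         # year fraction

# The forward rate is implied by the *forwarding* (Libor) curve ...
fwd = (df(libor_rate, t_start) / df(libor_rate, t_end) - 1.0) / tau

# ... while the cashflow is discounted on the *discounting* (EONIA) curve.
notional = 1_000_000.0
pv = notional * tau * fwd * df(ois_rate, t_end)
print(f"forward = {fwd:.6f}, PV = {pv:.2f}")
```

In a single-curve world the same curve would play both roles; separating the two is exactly what the multi-curve framework formalizes.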

The Bachelier model for caps, floors and swaptions can replace the Black76 model when interest rates are low. In Black vs Bachelier revisited, I pointed out the difficulties with Black76 when interest rates approach zero. In such cases, (Black) volatilities explode, and Black volatilities of several thousand percent are quite common. With the Bachelier model and its data, which may be used as calibration input, negative interest rates may occur without nasty instabilities.
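The difference between the two models can be seen in a few lines of code (a textbook-style sketch with invented numbers, not the UnRisk implementation): at a near-zero forward, Black76 needs a lognormal volatility of hundreds of percent, while the Bachelier (normal) volatility stays at a sensible size, and negative forwards or strikes remain well-defined.

```python
import math
from statistics import NormalDist

N = NormalDist().cdf   # standard normal CDF
n = NormalDist().pdf   # standard normal PDF

def black76_call(F, K, sigma, T, disc=1.0):
    """Black76 price of a call on a forward rate (lognormal vol).
    Requires F > 0 and K > 0."""
    d1 = (math.log(F / K) + 0.5 * sigma ** 2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return disc * (F * N(d1) - K * N(d2))

def bachelier_call(F, K, sigma_n, T, disc=1.0):
    """Bachelier price of the same option (normal vol).
    Remains well-defined for F <= 0 or K <= 0."""
    d = (F - K) / (sigma_n * math.sqrt(T))
    return disc * ((F - K) * N(d) + sigma_n * math.sqrt(T) * n(d))

# Near-zero forward: illustrative volatilities of very different magnitude
F, K, T = 0.001, 0.001, 1.0
print(black76_call(F, K, sigma=5.0, T=T))        # 500% lognormal vol
print(bachelier_call(F, K, sigma_n=0.005, T=T))  # 50 bp normal vol
```

The Bachelier formula involves only the difference F - K, which is why it handles zero and negative rates without the instabilities of the lognormal parametrization.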

Is UnManaging the Modern Management?

In No CEQ on board? I pointedly suggested the promotion of quantitative managers to the C level. But this was the provocation phase. My strong belief is that the emergence of quantitative theories and methods will kill the tradition of strictly boss-driven organizations.

Traditional companies are "incremental". Strangely, only a few C-level members tackle the challenge of innovation. They're trained for operational efficiency. Even in a crisis, few organize a bottom-up renewal.


I grew up in organizations where strategies were built at the top, big leaders controlled little leaders, and team members competed for promotion…Tasks were assigned, rules defined actions. It was the perfect form of "plan-and-control": a pyramid. It left only little space for change.

In an organizational pyramid, yesterday outweighs tomorrow. In a pyramid you can't enhance innovation, agility or engagement.

It is indispensable to reshape the organizational form.


Traditional managers want conformance to specifications, rules, deadlines, budgets, standards and principles. They declare "controlism" the driving force of the organization. They hate failures and would never agree to "gain from disorder".

Make no mistake: control is important, but freedom is important as well.

Management needs to deal with the known and unknown, ruled and chaotic, (little) losses for (bigger) gains…


Bureaucracy is the formal representation of the pyramid and the regime of conformance.

Bureaucracy must die.

This part is inspired by Gary Hamel's blog post in MIXMASHUP.

Change the organization

If we want to change the underlying form-and-ideology of management that causes the major problems, we may want to learn a little from the paradigms of modern risk management.

Duality - how to deal with the known and unknown
Boundaries - try to find the boundaries between the known and unknown
Optimization - optimization only works within the boundaries
Evolution - business in a networked world is of the co-evolution type
Game theory - a mathematical study of uncertainty caused by actions of others

All of this needs quantitative skills. And as quantitative skills spread, traditional management fades.

The program grid

IMO, quants who are self-assured become stronger and contribute more to a better life if they drive a co-evolution in what I call a "program grid": a grid of individuals sharing programs, information and skills, without giving away the very innovation that makes their solutions different. Program grids may be intra- or inter-organizational.

Technology stacks, know-how packages, workouts…do they destroy cold-blooded bureaucracy? If quants do not strive to get picked but choose themselves, they will contribute to the (indispensable) change.

Five-Year-Old Passes Microsoft Exam

Link from Marginal Revolution. Article from BBC NT.

IMO, another example of why kids should learn programming early. It's fun and it builds "nowists"…creating things quickly and improving constantly, without needing the permission of the preachers of ideology and rules…driving bottom-up innovation.

Electron Kaleidoscope

You are probably aware that Michael and I are doing some work on artificial graphene, a man-made material that mimics the electronic properties of real graphene - the material and our research project are explained in more detail in this blog post. In a nutshell, the system confines electrons to a hexagonally shaped "flake" with a lattice of so-called scatterers, that is, a lattice of small circular areas that are "forbidden" for the electrons.

I recently made plots of the electronic density (that is, the probability of finding an electron at a certain point in the flake) for different eigenstates of the electronic wave functions. I found those plots so nice - from an artistic viewpoint as well as a scientific one - that I thought I'd share them with you.

A short explanation for the scientifically minded readers: white means very high electron density; the color scale for decreasing density goes via orange and bluish colors to black, which means no electrons. The color scale is logarithmic, because I was not so much interested in the density as such, but in the areas where the density is zero - these areas are called the "nodes" of the wave functions.

The symmetry of these nodes is dictated by a competition between the hexagonal symmetry of the outer confinement and the symmetry of the lattice of scatterers (the wave function is forced to be zero there). This competition (physicists call such a system a "frustrated system") results in the kaleidoscope-like structure of the electron density in that material.
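A toy one-dimensional example (deliberately simplified demo code, not our research code) shows why the logarithmic scale is the right choice for hunting nodes: near a node the density drops over many orders of magnitude, which a linear color scale would compress into an indistinguishable "black" region.

```python
import math

def density(x):
    """|psi|^2 of a simple standing wave with nodes at x = 0, 0.5 and 1."""
    return math.sin(2 * math.pi * x) ** 2

for i in range(11):
    x = i / 10
    rho = density(x)
    # Logarithmic scale: clamp tiny densities instead of taking log(0),
    # mimicking the lower cutoff of a logarithmic color map.
    log_rho = math.log10(max(rho, 1e-12))
    print(f"x = {x:.1f}  density = {rho:.3e}  log10(density) = {log_rho:6.1f}")
```

On the logarithmic axis, the node at x = 0.5 stands out as a deep, sharp dip, whereas on a linear axis the values near the node would all look like "approximately zero".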

Sunday Thought - Optimal Intelligence?

Yesterday, I reread chapters of Aaron Brown's great book Red-Blooded Risk. Optimizing risk means arranging things so that both opportunities and dangers make a positive contribution.

Optimal intelligence?

And a question came to my mind: is there optimal intelligence?

Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought.

My simplified definition of intelligence: the capacity of knowledge and the ability to change…the intelligence of knowns and unknowns.

This suggests a two-sidedness and, consequently, a subject for optimization. If you have no knowledge, everything is change - if you know everything, why would you change?

Intelligent people want to change the underlying systems that are causing the major problems of our life. Some call this integral intelligence.

What makes such radical innovation more systemic?

Know the system you want to change - but not too much
Prototype - expect the unknown
Organize a feedback cycle - learn

IMO, an approach of optimal intelligence.

Artificial Intelligence

In The myth of AI, Edge, Jaron Lanier challenges the idea that computers are people. There's no doubt computers are bursting with knowledge - it's even computational…but...

I like the example of (Google) translation. Back in the 50s, because of Chomsky's work, there was a notion of a compact and elegant core to language, and for three decades the AI community tried to create ideal translators. It was a reasonable hypothesis, but nobody could do it. The breakthrough came with the idea of statistical translation - drawing on a huge set of examples provided by millions of human translators adding to and improving the example stack daily. It's not perfect, not artful…but readable. Great.

We've invented zillions of tests (the Turing test…) for deciding whether we want to call the computer that runs an algorithm a person. With this view we consequently love it, fear its misbehavior…

My simple question: what are the mechanisms for making them partners in an optimal intelligence - changing the underlying systems that are causing the major problems of our human life?

UnRisk goes to Tampere

I have been invited as a speaker to the Tampere node of the National Doctoral Training Network in Condensed Matter and Materials Physics (CMMP), which organized a three-day school on electronic structure methods with recognized speakers from both Finland and abroad. The school was targeted mainly at postgraduate students in related fields, but postdocs as well as motivated undergraduate students were also encouraged to participate.

I have been asked to give an overview of the numerical methods the students can use not only in electronic structure theory but also in the (financial) industry. So I tried to cover many different topics, from inverse problems and Monte Carlo methods to PDEs. It was a nice experience to speak there and to motivate young people: the methods they learn for their master's or PhD thesis will also be valuable in their life after university.

Anticonformists - Why Do They All Look Alike?

Marginal Revolution linked to this Washington Post Storyline article: The mathematician who proved why hipsters all look alike. Jonathan Touboul's paper is here.

From the abstract:
In such different domains as statistical physics and spin glasses, neurosciences, social science, economics and finance, large ensemble of interacting individuals taking their decisions either in accordance (mainstream) or against (hipsters) the majority are ubiquitous. Yet, trying hard to be different often ends up in hipsters consistently taking the same decisions, in other words all looking alike
In this case, I am not sure whether mathematics is required to predict the emergent dynamics.

It seems quite obvious to me: if you only listen to the mainstream, you create mainstream. To create trends, mainstreams usually act focused and simple. To fight the mainstream, hipsters need to align and synchronize. To strengthen their non-conformity, they act in conformity within their own system.
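The flavor of this emergent synchronization can be reproduced in a toy simulation (a drastic caricature of Touboul's delayed-interaction model, with all parameters invented for illustration): every agent observes the majority opinion with a delay and then switches to the opposite state. Because all anticonformists react to the same delayed signal, they flip in unison.

```python
import random

random.seed(42)
n_agents, n_steps, delay = 200, 30, 3

# Random initial opinions: +1 or -1
states = [random.choice([-1, 1]) for _ in range(n_agents)]
history = [sum(states) / n_agents]   # mean opinion over time

for t in range(1, n_steps):
    # Every agent sees the mean opinion from `delay` steps ago
    perceived = history[max(0, t - delay - 1)]
    majority = 1 if perceived >= 0 else -1
    # Hipster rule: take the opposite of the (delayed) majority
    states = [-majority for _ in range(n_agents)]
    history.append(sum(states) / n_agents)

# After one step, the mean opinion is always exactly +1 or -1:
# the anticonformists are perfectly synchronized and flip together.
print(history[-5:])
```

Despite each agent trying to be different from the crowd, the shared (delayed) information makes them all identical at every instant, which is the qualitative content of the paper's result.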