A 2014 Retrospective

The year 2014 was, from the computational finance view, quite challenging. With interest rates at historically low levels, lognormal models became impossible to apply.

See Black versus Bachelier for how we at UnRisk handle the difficulties with Black 76 models in that case.

Another 2014 story is credit / debt valuation adjustment. When regulators go to the limit (and sometimes beyond) of reasonability, when computational requirements get higher and higher, and when millions of scenario values have to be calculated, then the UnRisk option is worth a closer look.

In UnRisk's CVA project, co-funded by the Austrian Research Promotion Agency, we have been working (and work is still ongoing) on bringing the xVA challenges down to earth.

UnRisk is not only about clever math, but also about stable and up-to-date realisations in modern software environments. Being a stone-age Fortran programmer myself, I enjoyed Sascha's post on the goats, wolves and lions problem very much.

There were more targets achieved by the UnRisk team in 2014: the releases of the UnRisk Engine version 8 and of the UnRisk FACTORY versions 5.1 and 5.2, the implementation of an HDF5 file format as a basis for the CVA calculations, and more things to come.

A comparison of MC and QMC simulation for the valuation of interest rate derivatives

For interest rate models in which the evolution of the short rate is given by a stochastic differential equation, i.e.

  dr(t) = a(r(t), t) dt + b(r(t), t) dW(t),

the valuation process can easily be performed using the Monte Carlo technique.
The k-th sample path of the interest rate process can be simulated using an Euler discretization:

  r_k(t_{i+1}) = r_k(t_i) + a(r_k(t_i), t_i) Δt_i + b(r_k(t_i), t_i) √Δt_i z,

where z is a standard normal random variable.
The valuation of interest rate derivatives using Monte Carlo (MC) Simulation can be performed using the following steps:
  1. Generate a time discretization from 0 to the maturity T of the financial derivative, which includes all relevant cash flow dates.
  2. Generate M×N standard normal random numbers (M = number of paths, N = number of time steps per path).
  3. Starting from r(0), simulate the paths according to the formula above for k = 1, …, M.
  4. Calculate the cash flows CF of the instrument at the corresponding cash flow dates.
  5. Using the generated paths, calculate the discount factors DF to the cash flow dates and discount the cash flows to time t0.
  6. Calculate the fair value of the interest rate derivative in each path by summing the discounted cash flows from step 5:
     FV_k = Σ_j DF_k(t_j) · CF_k(t_j)
  7. Calculate the fair value of the interest rate derivative as the arithmetic mean of the simulated fair values of each path, i.e.
     FV = (1/M) Σ_{k=1..M} FV_k
The only difference in QMC simulation is the use of deterministic low-discrepancy points instead of the random points used in step 2 of the Monte Carlo algorithm. These points are chosen to be better equidistributed in a given domain by avoiding large gaps between the points. The advantage of QMC simulation is that it can result in better accuracy and faster convergence compared to the Monte Carlo simulation technique.
The following picture shows the dependence of the MC/QMC valuation result on the number of chosen paths for a vanilla floater which matures in 30 years and pays the Euribor 12M reference rate annually. The time step in the simulation is chosen to be one day. One can see that with QMC, a much lower number of paths is needed to achieve an accurate price.
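The steps above can be sketched in a few lines of Python. This is a minimal illustration only: it assumes a Vasicek short-rate model with made-up parameters and prices a single zero-coupon bond (for which a closed-form price exists as a sanity check), not the 30-year floater from the experiment; scrambled Sobol points stand in for the low-discrepancy sequence of step 2.

```python
import numpy as np
from scipy.stats import norm, qmc

# Illustrative (uncalibrated) Vasicek parameters: dr = kappa*(theta-r)dt + sigma*dW
kappa, theta, sigma, r0 = 0.5, 0.03, 0.01, 0.02
T, N, M = 1.0, 50, 4096          # maturity, time steps per path, number of paths
dt = T / N

def discount_factors(z):
    """Steps 3-6: Euler paths r_{i+1} = r_i + kappa*(theta-r_i)*dt + sigma*sqrt(dt)*z_i,
    then per-path discount factors exp(-integral of r dt)."""
    r = np.full(z.shape[0], r0)
    integral = np.zeros(z.shape[0])
    for i in range(N):
        integral += r * dt                                   # left-point quadrature
        r += kappa * (theta - r) * dt + sigma * np.sqrt(dt) * z[:, i]
    return np.exp(-integral)

# Step 2: MC uses pseudorandom normals; QMC maps scrambled Sobol points to normals
z_mc = np.random.default_rng(42).standard_normal((M, N))
z_qmc = norm.ppf(qmc.Sobol(d=N, scramble=True, seed=42).random(M))

# Step 7: arithmetic mean of the simulated per-path values
p_mc = discount_factors(z_mc).mean()
p_qmc = discount_factors(z_qmc).mean()

# Closed-form Vasicek zero-coupon bond price for comparison
B = (1 - np.exp(-kappa * T)) / kappa
A = np.exp((theta - sigma**2 / (2 * kappa**2)) * (B - T) - sigma**2 * B**2 / (4 * kappa))
p_exact = A * np.exp(-B * r0)
print(p_mc, p_qmc, p_exact)
```

Repeating the comparison over a range of path counts M reproduces the qualitative picture above: the QMC estimate typically settles near the exact price with far fewer paths than plain MC.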

Extremely Large Telescope to be Built

Image Source: ESO

At a recent meeting, ESO's main governing body, the Council, gave the green light for the construction of the European Extremely Large Telescope (E-ELT) in two phases.

For details, see ESO's press release.

Mathematics for Industry network

In Berlin, one of the agenda points was to formulate the next steps for the Mathematics for Industry network which will be funded by the European Union within the COST framework.

To be more specific, the trans-domain action 1409 (TD1409) is named Mathematics for Industry Network (MI-NET), with the objective of encouraging interaction between mathematicians and industrialists, in particular through
(1) industry-driven problem-solving workshops, and
(2) academia-driven training and secondments.

I was nominated by the national COST coordinator to become a member of the management committee of this COST action, and I am looking forward to the interactions with my colleagues.

For more information, click here.

SQL Databases - You Know Where The Door Is?

I backed the team's decisions, but when I read Michael's post on HDF5 yesterday, it got me brooding again. A correct and timely decision, made with a careful view into the most probable future of data management. But what about the regulatory wishes and the capabilities of technology providers?

No data silos, banks!

The regulatory bodies do not like fluid data - they want it solid…they want evidence of every transaction. And we created the UnRisk FACTORY database that stores all information about every detail of each valuation transaction, forever. Every one! And clearly, it is strictly SQL compliant, and far beyond that we provide functions in our UnRisk Financial Language (UnRisk-Q) that enable manipulating its objects and data programmatically.

The UnRisk engines are blazingly fast, so, obviously, database management became a nasty bottleneck.

The data space "explodes" with the valuation space

xVA - and the related regime of centralization - introduces immense complexity to the valuation space.

In xVA - fairer pricing or accounting VOODOO, I wrote sixteen months ago:
….selecting momentary technologies blindly may make it impossible to achieve the ambitious goals. Data and valuation management need to be integrated carefully, and an exposure modeling engine needs to work event-driven. In this respect we are in the middle of the VA project. Manage the valuation side first - and do it the UnRisk way: build a sound foundation for a really tall building.
And this is what we did.

The new regime needs trust

Of course, we'll make inputs, results and important (meta)information available. But what was still possible with our VaR Universe...store every detail...like VaR deltas…in SQL-retrievable form...may be impossible under the new regime.

But, UnRisk Financial Language users will have the required access and much more…functions to aggregate and evaluate risk, margin...data and what have you.

So, ironically, regulatory bodies may have undermined a part of their own transparency requests?

However, IMO, it needs more trust from all parties from the beginning…and the view behind the curtain will become even more important. You can't keep millions of valuations just to get a single price…evident? But we can explain what we do and how our interim data are calculated.

With our pioneer clients we already go through the programs…and courses and workouts will become part of our know-how packages.

The options of future data management?

The world of data management is changing. Analytical data platforms, NoSQL databases…are hot topics. But what I see at the core: new computing muscles do not only crunch numbers lightning fast, they will come with very large RAM.

This affects software architectures, functionality and scalability. Those RAM capacities may become the basis for NoSQL databases…perhaps ending up as disk-less databases.

There may be many avenues to pursue…but it's no mistake to think of a NoSQL world.

It's unprecedentedly fast again

Many years ago we turned UnRisk into gridUnRisk, performing single valuations on computational kernels in parallel. Then we started making things inherently parallel. Now we are accelerating the data management immensely.

Prepared for any future. Luckily we've chosen the right architectures and technologies from the beginning.

New UnRisk Dialect: HDF5

Right now the UnRisk team is working on the development of a powerful xVA Engine (the corresponding new UnRisk products will be released in 2015). In order to be able to handle the huge amounts of generated data
  • positive exposures
  • negative exposures
  • realizations of all underlying risk factors in all considered Monte Carlo paths
we chose HDF5 as the best "language" to connect the User Interface with the underlying Numeric Engines.
To be honest, HDF5 is not a language, it is a file format.
An HDF5 file has the advantage that it may become very big without loss of speed when accessing the data within it. Using special programs (e.g. HDFView), one can easily take a look at the contents of such a file.
The UnRisk xVA calculation workflow consists of the following steps:
  1. Read in the User Input containing the usual UnRisk objects
  2. Transform this Input into the HDF5 Dialect and create the HDF5 file
  3. Call the xVA Engine, which
    1. reads in the contents of the file
    2. calls the numeric engine
    3. writes the output of the numeric engine into the HDF5 file
  4. Transform the output of the xVA Engine back into the UnRisk Language
  5. Return the results to the User
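Steps 2 and 3c of this workflow can be sketched with Python's h5py library. The group and dataset names below are illustrative assumptions, not the actual UnRisk HDF5 dictionary, and the "exposure" numbers are random placeholders rather than real engine output:

```python
import numpy as np
import h5py

# Illustrative layout only -- the real "UnRisk HDF5 dictionary" is not public.
with h5py.File("xva_portfolio.h5", "w") as f:
    # Step 2: write the transformed user input for one netting set / instrument
    inst = f.create_group("NettingSet1/Swap1")
    inst.attrs["InstrumentType"] = "VanillaSwap"
    inst.create_dataset("CashFlowDates", data=np.arange(1.0, 6.0))  # in years

    # Step 3c: the numeric engine writes its output back into the same file
    rng = np.random.default_rng(0)
    exposure = np.maximum(rng.normal(0.0, 1.0, size=(1000, 5)), 0.0)
    inst.create_dataset("PositiveExposure", data=exposure)          # paths x dates

# Steps 4/5: any later process (possibly on another machine) reopens the file
with h5py.File("xva_portfolio.h5", "r") as f:
    pe = f["NettingSet1/Swap1/PositiveExposure"][:]
    epe = pe.mean(axis=0)   # expected positive exposure per cash-flow date
```

Because the file on disk is the complete interface between the steps, the writing and reading processes need not run at the same time or on the same machine - which is exactly what enables the asynchronous workflow described below.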
Here is a screenshot of the contents of such an HDF5 file (containing the xVA calculations for a portfolio consisting of several netting sets and instruments):

But why did we choose this workflow and not a simple function call?
The reasons are the following:
Reason 1: We are able to split our development team into two groups: the first one is responsible for steps 1, 2, 4 and 5, the second one for step 3. Both groups simply have to use the same "UnRisk HDF5 Dictionary".
Reason 2: The different steps may be performed asynchronously - meaning that the workflow could look as follows:
  • Create many HDF5 files on machine 1 (i.e. perform steps 1 and 2 from above for a list of user inputs)
  • Call the xVA Engine for each of these files on machine 2 at any time afterwards
  • Extract the calculation output on machine 3 at any time afterwards
Reason 3: Users may only want to use our xVA Engine - i.e. they want to perform steps 1, 2, 4 and 5 themselves. The only thing they have to learn is the UnRisk HDF5 dialect (we may support such customers with our Java functionality for writing to and reading from HDF5 files).
Reason 4: For debugging purposes it is very convenient that a customer only has to send his / her HDF5 file to us - we can immediately use this file to debug our xVA Engine.
Reason 5: If the user wants to add a new instrument to his / her portfolio, he / she simply has to add the instrument description to the HDF5 file of this portfolio (this may also be done with our user interfaces). By reusing the already existing results (of the calculations for the portfolio without this instrument), the performance of the calculations may be increased immensely.
We keep you informed on our xVA developments and let you know as soon as our corresponding products are available. If you have any questions beforehand, feel free to contact us.

Wolfram Technology Seminar Vienna 2014

This week we had the Wolfram Technology Seminar Vienna 2014 in the Kuppelsaal at the Technical University.

Topics covered included:

  • Presentations on the latest Wolfram products and technologies, including the Wolfram Language, Mathematica 10, SystemModeler 4, Wolfram Programming Cloud, and Mathematica Online
  • A problem-solving desk where our experts answered questions
  • Q&A and networking opportunity
  • An introduction to the new Mathematica online courses that uni software plus GmbH provides free of charge for its customers
Around 80 people followed the invitation and got an impression of the new technologies Wolfram provides. These new technologies will also help us to further improve UnRisk and to define new ways of deployment.

The "Kuppelsaal" of the Technical University

Sunday Thought - Regulatory Capture

I am really surprised how hastily regulatory bodies push centralization. Researchers in politics and economics say: it's normal.

It may be innovative, but innovation needs decelerators. Acceleration is great for many systems, but if you are in a fog of possibilities, you need to think a little more. Insight comes from inquiry and radical experimentation.

It's Sunday, so I think of cooking. There is fast cooking - ingredients cooked in the flame - and slow cooking - cooking in a way that allows flavors to mix in complex ways. Great chefs are good at slow cooking. They test creative new dishes thoroughly. And they promote the results to get eaters hooked on their innovations and give them time to adapt...

Why all the haste? It's a dramatic regime switch - why not implement a test phase?

Regulators get inevitably captured?

Is that the problem? The fear of being blamed for another great recession?

Some say it's normal for regulators to get captured…it's a natural logic...
Academics call it "regulatory capture": the process by which regulators who are put in place to tame the wild beasts of business instead become tools of the corporations they should regulate, especially large incumbents.
Models and reasons are reviewed here. A few selected: regulators need information from the regulated, hence interaction and cooperation…but there is also lobbying, and there are career issues...

Only a scene in a big picture?

Big Business Capture Economists?

Beyond regulation…what if big business has also managed to bend the thinking of economists? This idea is examined in Mark Buchanan's article has big business captured the economists?
Are they [economists] free authors of their ideas, or are they, like regulators, significantly influenced in their thinking by their interaction with business interests?
There is empirical evidence that this happens…

Beware strict centralization?

I have only limited knowledge of the social and economic sciences, but I understand: capture is not a risk, but a danger (it can't be optimized).

And my system view tells me: centralization feeds accumulation that feeds capture.

This was one of the reasons, why I posted don't ride the waves of centralization blind.

I know that the bigger mistakes are often fixed late, and the only thing we can do is help the small and medium-sized financial market participants to not only meet the regulatory requirements, but stabilize the core of their businesses...in competition with the big players who were selected to "save" it.

Cognitize - The Future Of AI

I've worked in the field that is called Artificial Intelligence for nearly 30 years now. At the frog level first, then at the bird level. From 1990 on, I emphasized machine learning...before that it was a kind of expert systems…

What we strived for were models and systems that were understandable and computational. This led us to multi-strategy and multi-model approaches, implemented in our machine learning framework, enabling us to do complex projects more swiftly. It has all types of statistics, fuzzy-logic-based machine learning, kernel methods (SVMs), ANNs and more.

The future of AI?

Recently, I read more about AI. I want to mention two articles: The Myth of AI on Edge.org (I wrote about it here) and The Future of AI in the Nov-14 issue of WIRED Magazine.

I dare to compile them and cook them together with my own thoughts.

Computerized Systems are People?

The idea that computerized systems are people has a long tradition. Programs were tested (the Turing test…) as to whether they behave like a person. The ideas were promoted that there is a strong relation between algorithms and life, and that computerized systems need all of our knowledge and expertise… to become intelligent…it was the expert system thinking.

It's easier to automate a university professor than a caterpillar driver…we said in the 80s.

Artificial Life

The expert system thinking was strictly top down. And it "died" because of its false promises.

Christopher Langton, of the Santa Fe Institute, named the discipline that examines systems related to life, its processes and evolution, Artificial Life. The AL community applied genetic programming (a great technique for optimization and other uses), cellular automata...But the "creatures" that were created were not very intelligent.

(Later the field was extended to the logic of living systems in artificial environments - understanding complex information processing, implemented as agent-based systems.)

We can create many sufficiently intelligent, collaborating systems by fast evolution…we said in the 90s.

Thinking like humans?

Now, companies such as Google and Amazon…want to create a channel between people and algorithms. Rather than applying AI to improve search, they use better search to improve their AI.

Our brain has an enormous capacity - so do we just need to rebuild it? Do three breakthroughs unleash the long-awaited arrival of AI?

  • Massive inherent parallelism - the new hybrid CPU/GPU muscles able to replicate powerful ANNs
  • Massive data - learning from examples
  • Better algorithms - ANNs have an enormous combinatorial complexity, so they need to be structured

Make AI consciousness-free

AI that is driven by these technologies in large nets will cognitize things, as things were once electrified. It will transform the internet. Our thinking will be extended with some extra intelligence. As in freestyle chess, where players use chess programs, people and systems will do tasks together.

AI will think differently about food, clothes, arts, materials…Even derivatives?

I have written about the Good Use of Computers, starting with Polanyi's paradox and advocating the use of computers in difficult situations. IMO, this should be true for AI.

We can learn how to manage those difficulties and even learn more about intelligence. But in such a kind of co-evolution AI must be consciousness-free.

Make knowledge computational and behavior quantifiable

I talk about AI as a set of techniques from mathematics, engineering, science…not as a post-human species. And I believe in the intelligent combination of modeling, calibration, simulation…with an intelligent identification of parameters - on the individual as well as on the systemic level. The storm of parallelism, bigger data and deeper ANNs alone will not be able to replicate complex real behavior.

We need to continue making knowledge computational and behavior quantifiable.

Not only in finance…

But yes, quants should learn more about deep learning.

Industrial Mathematics in Berlin

Today, I travelled to Berlin, where the ECMI council (ECMI = European Consortium for Mathematics in Industry) will meet.

This is my first time ever in Berlin. What I really enjoy:
- the clear directions given at the underground exits
- Dussmann das Kulturkaufhaus