No Chief Executive Quant on Board?

RBS has an excellent quant team. They introduce(d) innovative solutions, explain(ed) how they work at conferences and open workshops...and describe(d) them in books. What they recommend(ed) is widely copied in quant finance circles…

But the bosses?

Before the 2008 collapse, RBS was briefly the largest bank in the world…subsequently, RBS fell sharply in the rankings, lost confidence and needed significant support from the UK government. The bank and its bosses were in the news, for all the wrong reasons...

We know it: the Lehman earthquake grew into a financial tsunami…

But still, did the management not listen to the quant team? Ignore them? Were the new products inside their investment banking just a mystery to them? Some say they micromanaged the wrong things; see Braveheart banking: the fall of RBS.

I'm not a management expert, but I had a lot of bosses myself before I started up my own business.

We all hate our bosses, right?

They always let us do the hard stuff? They just harvest our great output?

I never felt that way…and I did a few hard jobs: I cleaned the slag gutter of a blast furnace, sorted bulky scrap for input into a steel mill, harvested tobacco…hard but above-average-paid summer jobs.

The bosses enabled my full-time studies. No reason to get mad at them.

Work hard to become a managing mathematician?

Later, I became a boss myself...responsible for a hundred people developing factory automation software. I reported to the C-level…It was an exciting time. Our own CNC machine and robot programming systems with operation and tool lifecycle management, shop floor control…intelligently combined with in-licensed CAE, resource and production management systems…ran in automated factories of renowned discrete manufacturers.

They were the work of talented engineers, mathematicians, physicists, computer scientists…obviously, we applied quantitative theories.

But in the late 80s, the C-level presided over a dramatic downturn of the entire enterprise and was fired completely. I got a new boss: a bureaucrat managing in the spirit of "nobody ever got fired for buying IBM"...

I quit…but I left without any bad feelings. It was a great time. I felt like an entrepreneur in a large, old firm (actually, I started with a group of three)…and started up my own business.

I don't know any CEQ

in a financial institution who is not part of the entrepreneurship.

In My Life as a Quant, Emanuel Derman relives his exciting journey from high-energy physicist to managing director...he worked with Fischer Black at Goldman Sachs...they were celebrated for their models and methodologies, and they enjoyed working in a collaborative environment. They were kind of entrepreneurs in a large, old firm.

This was in the infancy of quant finance; there were not so many proven and affordable technologies available.

But, following discussions in quant finance forums, I'm surprised that

Reinventing the wheel

still seems to be attractive.
I want to create a FinancialEngineering library with generic financial engineering functions throughout the various models and asset classes. Create a parent class in C++...?
was a question at a home page Serving the Quant Finance Community >> Programming and SW Forum.

"Really?" was the first reply, but then the discussion went into details of C++11, modern C++ design…

But, there's great technology available

Obviously, it's not only us who can say: we spend years developing. Carefully choosing the mathematics. Mapping every practical detail. For pricing and calibration. For derivative and risk analytics. For structuring. For portfolio-across-scenario simulation…thousands of practitioners test our technologies on a vast variety of deal types, valued at hundreds of billions of USD on a daily basis…

Quantitative managers optimize market risk?

They empower quants to become a new generation of quantitative managers. To do, on a much higher level, what renowned quants did in the earlier days of quant finance?

You can't manage and do the plumbing. But you can't be a quantitative manager without knowing the theories, methodologies and technologies either. To do one thing for your financial institution: optimize risk.

This would qualify for a C-level position?!

Back to factory automation. There's a vast variety of great technology available now. It's not economically feasible at all to build your own computer-aided manufacturing system...from scratch, now.

It was never so easy…

RBS seems to have an insight selling strategy…RBS Insight…but do they have a CEQ?

Description of term structure movements using PCA continued

In my last blog entry How good is the description of term structure movements using PCA, a lot of open questions remained. Today I want to give first answers...
 - How good is the description of interest rate movements using only a few factors?
We assume that we have a time series of yield curves, each of them given on 16 curve points (1W, 3M, 6M, 9M, 1Y, 2Y, 3Y, 4Y, 5Y, 7Y, 10Y, 15Y, 20Y, 25Y, 30Y, 50Y). Calculating the principal components e_i based on daily interest rate movements, the increments of the yield curve dr = (dr_1, …, dr_16) can be described exactly using the formula

$$ dr = \sum_{i=1}^{16} (dr, e_i)\, e_i, $$

where (·,·) is defined to be the inner product of two vectors. The following pictures show how well an arbitrarily chosen interest rate increment (blue curve) can be approximated using only 4 (left), 5 (middle), 6 (right) factors, i.e.

$$ dr \approx \sum_{i=1}^{n} (dr, e_i)\, e_i, \qquad n = 4, 5, 6. $$

[Figure: an arbitrarily chosen daily increment (blue) and its PCA approximation with 4, 5 and 6 factors]

The table below shows how many percent of the variance of daily, weekly and monthly historical interest rate movements can be described using only a few PCA factors:

[Table: explained variance (in percent) of daily, weekly and monthly interest rate movements per number of PCA factors]

Using a time series of daily EUR interest rate movements, the following picture shows the variation of the original data (left) and the remaining variation (right) after filtering out the first four principal components. One can see that, on average, about 1 basis point of the interest rate movements remains unexplained.

[Figure: variation of the original EUR data (left) and remaining variation after removing the first four principal components (right)]

So, using a few principal components to describe interest rate movements leads to a good approximation of the original data. Furthermore, combinations of principal components produce realistic yield curve scenarios, which can be used to calculate interest rate risk measures for instruments and portfolios.
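
To make the mechanics concrete, here is a minimal Python/NumPy sketch of the decomposition above. It runs on synthetic increments (the historical EUR data from this post is not reproduced here), so the explained-variance numbers are only illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_points = 1000, 16      # 16 curve points: 1W, 3M, ..., 50Y

# Synthetic daily yield curve increments; a real application would
# difference a historical yield curve time series instead.
level = np.linspace(1.0, 0.5, n_points) * rng.normal(scale=5e-4, size=(n_days, 1))
noise = rng.normal(scale=1e-4, size=(n_days, n_points))
dr = level + noise

# Principal components e_i: eigenvectors of the covariance of the increments.
eigvals, eigvecs = np.linalg.eigh(np.cov(dr, rowvar=False))
order = np.argsort(eigvals)[::-1]                 # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Percentage of variance explained by the first n factors.
for n in (1, 2, 3, 4, 5, 6):
    print(f"{n} factors: {100 * eigvals[:n].sum() / eigvals.sum():.2f} % of variance")

# Approximate one increment with n factors: dr ~ sum_i (dr, e_i) e_i.
x = dr[42]
for n in (4, 5, 6):
    approx = eigvecs[:, :n] @ (eigvecs[:, :n].T @ x)
    print(f"{n} factors: max abs error {np.abs(x - approx).max():.2e}")
```

With all 16 eigenvectors the reconstruction is exact, since they form an orthonormal basis; truncating the sum after a few factors is what produces the small residual shown in the pictures above.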

UnSelling UnRisk?

First There Was UnMarketing Now There Is UnSelling - this post of "Six Pixels of Separation" pointed me to this book about everything but selling.

UnSelling is about the bigger picture of sales.

Analogously, UnRisk is about the bigger picture of risk.

In both cases you need a lot of experience and in-depth knowledge to get the bigger picture.

But yes, it is sometimes indispensable to unlearn. The rules of selling - as well as the rules of risk management - have fundamentally changed.

The programmability of sales by understanding value, access and education and the programmability of risk by understanding money, duality, boundaries and optimization…come to my mind.

A Surprising Entrepreneurship Paradox

With affordable technologies and tools, better communication channels, skill sharing…it has never been easier to start your own company, but entrepreneurship is in decline.

I'd not have believed that the decline has been going on for years before I read about the Entrepreneurship Paradox at Pieria.

Concluding, it seems that the future is becoming old, like the rest of us (entrepreneurs)? Why is this so? Do the (business) failure rates of younger firms increase? Does it take (too much) time to become antifragile? Do entrepreneurs hate to manage?

OK, entrepreneurship in the high-tech sector started declining after the dot-com crash. But didn't the dot-com boom trigger the emergence of broadband, which helps entrepreneurship?

What to do? IMO, for a vibrant, growing economy, older firms should co-operate much more with the younger ones and together fight the worst enemy: being satisfied with what has been achieved.

Reading this has inspired me to write about the development of a quant entrepreneurship (dependent or independent). I will post it on Tuesday.

A Must Read for People Working on Counterparty Risk

Today's blog post will be a short review of the book

Counterparty Credit Risk and Credit Value Adjustment: A Continuing Challenge for Global Financial Markets – Second Edition by Jon Gregory

I have been working on the xVA topic now for almost two years and this book guided me for most of this time. The author Jon Gregory is the acknowledged global expert on counterparty credit risk.
The book starts by explaining the emergence of counterparty risk and how financial institutions are developing capabilities for valuing it. Aspects of portfolio management and hedging of credit value adjustment, debit value adjustment, and wrong-way counterparty risks are also covered. In addition, the book addresses the design and benefits of central clearing, a recent development in attempts to control the rapid growth of counterparty risk. The book offers many practical examples, including experiences from the recent credit crisis.

Without using too much complicated mathematics, Gregory is able to explain even complex interrelations. I can really recommend this book to everybody interested in the topic.

This Idea Must Die

Whenever I'm in London, I make time to visit the Serpentine Galleries. Their program inspired me to look playfully into the past from a future perspective as posted here, think about skills exchange as posted here, …

Serpentine and Edge announce a Marathon on "Extinctions" - among many exciting (and cool) contributions, it touches one thing that addresses muscles in my brain that I rarely use: there is no such thing as an abstract program … the principle of Constructor Theory (physics)…Extinction of abstraction?

Edge's own contributions to the conversation will be published in Feb-15: This Idea Must Die - Scientific Theories That Are Blocking Progress.

I cannot go, but it will be live-streamed here and I'll read the book.

A Prime Discussion

Last week, I met a former colleague who is now working in risk management at an Austrian bank. It was a very enjoyable evening with a few beers and memories of the times when we were undergraduate students.

This obviously (at least for mathematicians) led us to prime numbers.

So, today's question:
Does the harmonic series of prime numbers, i.e.,

$$ \sum_{p\ \mathrm{prime}} \frac{1}{p} \;=\; \frac{1}{2} + \frac{1}{3} + \frac{1}{5} + \frac{1}{7} + \frac{1}{11} + \cdots, $$

converge?

I will sketch a proof (whether it converges or not) next week.
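
In the meantime, here is a small numerical teaser - a minimal Python sketch (the sieve helper is my own, not from our discussion). It prints partial sums of the series for growing limits; whether their slow growth eventually levels off is exactly the question above:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(n + 1) if sieve[i]]

# Partial sums 1/2 + 1/3 + 1/5 + ... over all primes up to the limit.
for limit in (10**3, 10**4, 10**5, 10**6):
    partial = sum(1.0 / p for p in primes_up_to(limit))
    print(f"primes <= {limit:>7}: partial sum = {partial:.4f}")
```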

Although prime numbers seem to belong to pure mathematics, there are at least two important applications in finance: one is encrypting information via the RSA algorithm, the other one is generating low-discrepancy numbers, as sketched below.
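
To illustrate the low-discrepancy connection: Halton sequences use one prime base per coordinate, and the pairwise coprimality of the bases (guaranteed when they are primes) keeps the coordinates from falling into lockstep. A minimal sketch (the function names are mine):

```python
def van_der_corput(n, base):
    """n-th element of the van der Corput sequence in the given base:
    reflect the base-b digits of n around the radix point."""
    x, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        x += digit / denom
    return x

def halton(n, bases=(2, 3)):
    """n-th point of a Halton sequence: one van der Corput sequence per dimension."""
    return tuple(van_der_corput(n, b) for b in bases)

# The first few 2-D low-discrepancy points, filling the unit square evenly:
# (0.5, 0.333...), (0.25, 0.666...), (0.75, 0.111...), ...
print([halton(i) for i in range(1, 6)])
```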

What Is UnRisk For? - Updated

In Jan-13, I posted what is UnRisk for.

Is it still valid? Yes, nothing changed - in principle. Derivative and risk analytics serving the core business of various financial market participants and actors.

Individualization in centralized regimes?

But, the regulatory wave has brought a paradox into play. Central counterparty clearing is intended to reduce risk by standardization…but ironically, it forces certain market participants, like banks, to individualize their offerings in order to find new services and clients.

Remember: because of margin compression, OTC revenues will be / are reduced… However, central clearing also offers new revenue opportunities for banks, like execution and clearing revenues, collateral management service fees… (banks can help clients fulfill their obligations...)

Can you structure me this?

The new question, "do you support xVA?", needs fanning out.

It's not only different for pricing, valuation and risk management; actors in various roles need different ways to deal with it. Sales, front office, risk management, controlling… practitioners and quants in interplay.

Simpler instruments became the new "structured products" under the new regimes. We were always good at valuing structured products and analyzing their contribution to risk in complex portfolios. We've put our best effort into doing it right again under the new regimes. Accuracy and speed really matter if you need millions of valuations to get a "fair" price and risk spectra.

Technologies, development tools and solutions

The technologies behind them are identical, but it will be even more important in the future to configure them for special actor groups and tasks. And to enable quant developers to transform them into individual solutions for certain purposes swiftly.

You're lucky if you have a technology stack that supports this. You're even luckier if you have organized it orthogonally. And luckiest if you have created a financial language to programmatically manipulate the financial objects, contract features, frame conditions…and implemented it in engines that are bank-proof. We're lucky.

It empowers us to launch new products along a technology path in the near future. Exciting products - I'll keep you informed.

Really Big Data

This is for those of you who have wondered what this mysterious "Density Functional Theory" Michael keeps mentioning in his physics posts is about. Please don't be frightened by the somewhat unwieldy name; I'll try to give you a rough flavour and an "executive summary" in a fun way.

The key to many kingdoms


The dream behind the whole endeavour is that we would very much like to be able to solve Schrödinger's equation: it contains (almost) everything one might ever want to know about chemistry, about materials science (which ranges from the steel industry down to nanotechnology, including the semiconductor industry), molecular biology, pharmacology, and so on. The important point here is that one could simply compute all the required information in those areas, without needing any prior empirical knowledge, just from an invariable law of nature and a few constants.

Wave functions


Sounds too good to be true? It certainly is. Without going into details about Schrödinger's equation, what you would get as a result - if you could solve it - is the so-called "wave function" of the system. How complicated that wave function is depends on how many electrons there are in the system you are studying. Let's start with something simple: the good, old Ethane molecule (shown below), which consists of two carbon atoms, six hydrogen atoms, and a cloud of 30 electrons moving around them.

The wave function now depends on the positions of all 30 electrons in three-dimensional space. I hope you'll forgive me one formula - here's how it looks:

$$ \Psi = \Psi(\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_{30}), \qquad \mathbf{r}_i \in \mathbb{R}^3 $$
The wave function itself doesn't have any real-world interpretation, but its square has: if you look at all those positions (r1, r2, ..., r30) at the same time, the square of the wave function tells you the probability that you will find electron number one at position number one, electron two at position number two, ... and so on (actually, it is not possible to number electrons, even in principle, so one still needs to subject the poor wave function to what is called antisymmetrization, which makes stuff even more complicated. Too complicated for this blog post ;)

Big data


The problem now is that this innocent-looking wave function is quite a beast: consider, for a moment, that you wanted to sample it on a grid and store it in memory. With 30 electrons in three dimensions, the wave function has 90 coordinates; if you'd just use 20 grid points in each coordinate direction and double-precision numbers, this would amount to having to store 8·20^90 bytes!

That's obviously a large number, but let me briefly illustrate how large it is. The ultimate storage medium humanity could probably dream of is a medium where one could store one byte per atom - this would allow storing one billion petabytes in the volume of a standard SD card (like those you have in your digital camera). So how much volume would you need to store the wave function of Ethane? If you do the math, it turns out that you'd need about 1.6·10^39 cubic lightyears. Just in case cubic lightyears are not among the units you use on a daily basis: according to the NASA homepage, this is roughly one million times the size of our universe (a few universes more or less don't matter anymore at this stage ;)
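
If you want to redo the math, here is a rough back-of-the-envelope sketch in Python. The SD card volume and the exact "dream medium" density are my own assumptions, so the final figure is only expected to agree with the one above in order of magnitude:

```python
# Back-of-the-envelope check; card volume and storage density are assumptions.
GRID = 20              # grid points per coordinate direction
COORDS = 30 * 3        # 30 electrons, 3 spatial coordinates each
BYTES_PER_VALUE = 8    # double precision

wavefunction_bytes = BYTES_PER_VALUE * GRID ** COORDS
print(f"wave function: {wavefunction_bytes:.1e} bytes")       # ~1e118 bytes

bytes_per_card = 1e9 * 1e15                # one billion petabytes per card
card_volume_m3 = 0.032 * 0.024 * 0.0021    # standard SD card, ~1.6 cm^3
volume_m3 = wavefunction_bytes / bytes_per_card * card_volume_m3

lightyear_m = 9.46e15
print(f"volume: {volume_m3 / lightyear_m ** 3:.1e} cubic lightyears")   # ~1e40
```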



Insane data compression

I believe at this point it is clear that directly calculating the wave function is not going to be feasible, ever. Creative people have invented many different ways to get around the problem, and one of those ways is density functional theory, or "DFT" as its friends call it.

DFT is based on an astonishing theorem found by Pierre Hohenberg and Walter Kohn in 1964. This theorem has something to do with the density, so let me explain that first: the density is the probability of finding any electron (no matter which one) at a given position in space. It is a much simpler object than the wave function, because it is a function of only one (three-dimensional) position:

$$ n = n(\mathbf{r}), \qquad \mathbf{r} \in \mathbb{R}^3. $$

To give you a quick comparison: for the same 20 grid points per coordinate direction used above for the wave function, it would take about 60 kilobytes to store it in memory - contrast that to the million universes above!
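
The corresponding check for the density is almost embarrassingly short (same grid spacing and double precision as assumed above):

```python
GRID, BYTES_PER_VALUE = 20, 8                  # same assumptions as above
density_bytes = BYTES_PER_VALUE * GRID ** 3    # one value per 3-D grid point
print(density_bytes, "bytes")                  # 64000 bytes, roughly 60 kilobytes
```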

Now back to the theorem of Hohenberg and Kohn: the density could of course be calculated from the wave function (if we had that in the first place). What Hohenberg and Kohn found out is that, in principle, the reverse is also true! In principle (we don't know how to do that in practice), the wave function could be reconstructed from knowledge of the density alone.

In even plainer words: Those two things contain the same amount of information. Yes. The information contained in the million universes filled with mankind's ultimate storage medium can be compressed down to a few kilobytes.

I'll leave you with that thought for the time being - there is, of course, a slight catch, which is hidden in the phrase "in principle" above. Most of the research in DFT (and there is a lot) is about making this "in principle" happen in practice.

Sunday Thought - Know the Rules To Break Them

The summer returned today. We went to a restaurant close to the Danube. On Sundays they offer lunch from 12 to 5 pm. We sat in the little garden and enjoyed a light menu and a 2012 Riesling "Kastanienbusch", Rebholz, Pfalz, DE.

They break the rules because most Austrians like to eat between 12 and 1.30 pm on Sundays (and most of the restaurants close at 2.30 pm). We broke the rules because we drank a German Riesling and not one of the celebrated Wachau Rieslings from its grand cru vineyards.

This is what wine-searcher says about Wachau wine.
Wachau's steep, sweeping, vineyard-lined riverbanks could easily be mistaken for those of Germany's Mosel, even if the wines could not: classic Wachau Rieslings taste richer, riper and more tropical than their counterparts from the cooler, wetter Mosel. They have much more in common with the richest Rieslings of Alsace and Pfalz.
As a terroirist, I disagree. Wachau Riesling at the "Smaragd" level became too "designed by bred yeast" and alcohol-rich - in general. Wines from Muthentaler, Veyder Malberg, Pichler-Krutzler, ... break the rules. I call their wines "clear mountain spring water with subtle terroir replication".

What a Lazy Sunday Afternoon. The Small Faces broke the rules of pop when they released Ogdens' Nut Gone Flake, a concept album mixing heavy rock with a fairy tale …

BTW, at UnRisk we break the rules by making the black box white.