I've always known it. It's absurd. It's against enlightenment. It's antiscientific.
In general, if the degree of information is zero, the degree of speculation is infinite. And those stories are often enjoyable, but speculative...
Diana's post assures us.
Only a Superhero could do the monster task ;)
- No known species of reindeer can fly. But there are at least 300 000 species of living organisms yet to be classified, and while most of these are insects and germs, this does not completely rule out flying reindeer which only Santa has ever seen.
- There are 2 billion children (persons under 18) in the world. But since Santa doesn't appear to handle the Muslim, Hindu, Jewish and Buddhist children, that reduces the workload to 15% of the total - 378 million according to the Population Reference Bureau. At an average (census) rate of 3.5 children per household, that's 108 million homes. One presumes there is at least one good child in each.
- Santa has 31 hours of Christmas to work with, thanks to the different time zones and the rotation of the earth, assuming he travels east to west (which seems logical). This works out to 967.7 visits per second. This is to say that for each Christian household with good children, Santa has 1/1000th of a second to park, hop out of the sleigh, jump down the chimney, fill the stockings, distribute the remaining presents under the tree, eat whatever snacks have been left, get back up the chimney, get back into the sleigh and move on to the next house. Assuming that each of these 108 million stops is evenly distributed around the earth (which, of course, we know to be false, but for the purposes of our calculation we will accept), we are now talking about 1.2 miles per household, a total trip of 129 million miles, not counting stops to do what most of us must do at least once every 31 hours, plus feeding and so on. This means that Santa's sleigh is moving at about 1200 miles per second, 6 000 times the speed of sound. For purposes of comparison, the fastest man-made vehicle on earth, the Ulysses space probe, moves at a pokey 27.4 miles per second - a conventional reindeer can run, tops, 15 miles per hour.
- The payload on the sleigh adds another interesting element. Assuming that each child gets nothing more than a medium-sized Lego set (2 pounds), the sleigh is carrying 321 300 tons (assuming not all children are good), not counting Santa, who is invariably described as overweight. On land, conventional reindeer can pull no more than 300 pounds. Even granting that "flying reindeer" could pull ten times the normal amount, we cannot do the job with eight, or even nine. We need 107 100 reindeer. This increases the payload - not even counting the weight of the sleigh - to 329 868 tons. Again, for comparison - this is four times the weight of the Queen Elizabeth.
- 329 868 tons traveling at 1200 miles per second creates enormous air resistance - this will heat the reindeer up in the same fashion as spacecraft reentering the earth's atmosphere. The lead pair of reindeer will absorb 14.3 quintillion joules of energy. Per second. Each. In short, they will burst into flame almost instantaneously, exposing the reindeer behind them, and create deafening sonic booms in their wake. The entire reindeer team will be vaporized within 4.26 thousandths of a second. Santa, meanwhile, will be subjected to centrifugal forces 17 500 times greater than gravity. A 250 pound Santa (which seems ludicrously slim) would be pinned to the back of his sleigh by 4 315 015 pounds of force.
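The arithmetic is easy to check. A quick Python sanity check of the visit rate and the sleigh speed, using only the numbers quoted above (the household count, the 31-hour window and the 1.2 miles per stop):

```python
# Sanity check of the Santa arithmetic quoted above.
households = 378_000_000 / 3.5          # homes with children, per the text
seconds = 31 * 3600                     # 31 hours of Christmas
visits_per_second = households / seconds
print(f"{visits_per_second:.1f} visits per second")   # ~967.7

trip_miles = households * 1.2           # 1.2 miles between stops
speed = trip_miles / seconds            # miles per second
print(f"{speed:.0f} miles per second")  # roughly "about 1200"
```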
See Black versus Bachelier for how we at UnRisk handle the difficulties with the Black 76 model in that case.
Another 2014 story is credit / debt valuation adjustment. When regulators go to the limit (and sometimes beyond) of reasonability, when computational requirements get higher and higher, when millions of scenario values have to be calculated, then the UnRisk option is worth a closer look.
In UnRisk's CVA project, co-funded by the Austrian Research Promotion Agency, we have been working (and work is still ongoing) on bringing the xVA challenges to the ground.
UnRisk is not only about clever math, but also about stable and up-to-date realisations in modern software environments. Being a stone-age Fortran programmer myself, I enjoyed Sascha's post on the goats, wolves and lions problem very much.
There were more targets achieved by the UnRisk team in 2014: the releases of the UnRisk Engine version 8 and of the UnRisk FACTORY versions 5.1 and 5.2, the implementation of an HDF5 file format as a basis for the CVA calculations, and more things to come.
- Generate a time discretization from 0 to the maturity T of the financial derivative, which includes all relevant cash flow dates.
- Generate MxN standard normal random numbers (M=number of paths, N = number of time steps per path)
- Starting from r(0), simulate the paths according to the formula above for k=1..M.
- Calculate the cash flows CF of the instrument at the corresponding cash flow dates
- Using the generated paths, calculate the discount factors DF to the cash flow dates and discount the cash flows to time t0
- Calculate the fair value of the interest rate derivative as the arithmetic mean of the simulated fair values of each path, i.e.
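The recipe above can be sketched in a few lines of Python. Since the concrete short-rate formula may differ, the sketch assumes, purely for illustration, a Vasicek model dr = kappa*(theta - r)*dt + sigma*dW with an Euler discretization, and prices the simplest interest rate "derivative", a zero coupon bond, as the mean of the pathwise discount factors:

```python
import math
import random

def mc_zero_bond(r0, kappa, theta, sigma, T, n_paths=20000, n_steps=50, seed=42):
    """Monte Carlo price of a zero coupon bond under an (assumed) Vasicek model.

    Follows the recipe in the text: discretize [0, T], draw standard normals,
    simulate the short-rate paths, build pathwise discount factors and take
    the arithmetic mean over all paths.
    """
    rng = random.Random(seed)
    dt = T / n_steps
    total = 0.0
    for _ in range(n_paths):
        r = r0
        integral = 0.0                      # numerical integral of r(t) dt
        for _ in range(n_steps):
            integral += r * dt
            z = rng.gauss(0.0, 1.0)         # standard normal increment
            r += kappa * (theta - r) * dt + sigma * math.sqrt(dt) * z
        total += math.exp(-integral)        # pathwise discount factor to t0
    return total / n_paths

price = mc_zero_bond(r0=0.02, kappa=0.5, theta=0.03, sigma=0.01, T=2.0)
print(price)  # close to the analytic Vasicek bond price (~0.95)
```

For an instrument with several cash flow dates, the same paths deliver discount factors to each date, exactly as in steps 4 and 5.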
|Image Source: ESO|
At a recent meeting, ESO's main governing body, the Council, gave the green light for the construction of the European Extremely Large Telescope (E-ELT) in two phases.
For details, see ESO's press release.
To be more specific, the trans-domain action 1409 (TD1409) is named Mathematics for Industry Network (MI-NET), with the objective to encourage interaction between
mathematicians and industrialists, in particular through
(1) industry-driven problem solving workshops, and
(2) academia-driven training and secondment.
I was nominated by the national COST coordinator to become a member of the management committee of this COST action, and I am looking forward to the interactions with my colleagues.
For more information, click here.
No data silos, banks!
The regulatory bodies do not like fluid data - they want it solid…they want evidence of every transaction. And we created the UnRisk FACTORY database that stores every piece of information about every detail of each valuation transaction, forever. Every! And clearly, it is strictly SQL compliant, and far beyond that we provide functions in our UnRisk Financial Language (UnRisk-Q) that enable users to manipulate its objects and data programmatically.
The UnRisk engines are blazingly fast and, obviously, database management became a nasty bottleneck.
The data space "explodes" with the valuation space
xVA - and the related regime of centralization - introduces immense complexity to the valuation space.
In xVA - fairer pricing or accounting VOODOO, I wrote sixteen months ago:
….selecting momentary technologies blindly may make it impossible to achieve the ambitious goals. Data and valuation management need to be integrated carefully, and an exposure modeling engine needs to work event-driven. In this respect we are in the middle of the xVA project. Manage the valuation side first - and do it the UnRisk way: build a sound fundament for a really tall building. And this is what we did.
The new regime needs trust
Of course, we'll make inputs, results and important (meta)information available. But, what was still possible with our VaR Universe...store every detail...like VaR deltas…in SQL retrievable form...may be impossible under the new regime.
But, UnRisk Financial Language users will have the required access and much more…functions to aggregate and evaluate risk, margin...data and what have you.
So, ironically, regulatory bodies may have boycotted a part of their own transparency requests?
However, IMO, it needs more trust of all parties from the beginning…and the view behind the curtain will become even more important. You can't keep millions of valuations to get a single price…evident? But we can explain what we do and how our interim data are calculated.
With our pioneer clients we are already going through the programs…and courses and workouts will become part of our know-how packages.
The options of future data management?
The world of data management is changing. Analytical data platforms, NoSQL databases…are hot topics. But what I see at the core: new computing muscles do not only crunch numbers lightning fast, they will also come with very large RAM.
This affects software architectures, functionality and scalability. Those RAM memories may become the basis for NoSQL databases…perhaps ending up as disk-less databases.
There may be many avenues to pursue…but it's no mistake to think of a NoSQL world.
It's unprecedentedly fast again
Many years ago we turned UnRisk into gridUnRisk, performing single valuations on computational kernels in parallel. Then we started making things inherently parallel. Now we accelerate the data management immensely.
Prepared for any future. Luckily we've chosen the right architectures and technologies from the beginning.
- positive exposures
- negative exposures
- realizations of all underlying risk factors in all considered Monte Carlo paths
- Read in the User Input containing the usual UnRisk objects
- Transform this Input into the HDF5 Dialect and create the HDF5 file
- Call the xVA Engine, which
- reads in the contents of the file
- calls the numeric engine
- writes the output of the numeric engine into the HDF5 file
- Transform the output of the xVA Engine back into the UnRisk Language
- Return the results to the User
- Create many HDF5 files on machine 1 (i.e. perform steps 1 and 2 from above for a list of user inputs)
- Call the xVA Engine for each of these files on machine 2 at any time afterwards
- Extract the calculation output on machine 3 at any time afterwards
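This three-machine decoupling works because the file is the only interface between the steps. A toy sketch of the handoff pattern - with JSON files as a stand-in for the HDF5 dialect and a dummy function in place of the real xVA engine:

```python
import json
import tempfile
from pathlib import Path

# Toy stand-in for the file-based xVA workflow described above.
# JSON replaces the HDF5 dialect and the "engine" is a dummy; only the
# handoff pattern (write input file -> engine reads/writes -> extract) is real.

def write_input(path: Path, user_input: dict) -> None:
    """Steps 1+2: transform the user input and create the exchange file."""
    path.write_text(json.dumps({"input": user_input}))

def run_engine(path: Path) -> None:
    """Step 3: the 'engine' reads the file, computes, writes results back."""
    data = json.loads(path.read_text())
    notional = data["input"]["notional"]
    data["output"] = {"exposure": 0.01 * notional}   # dummy numeric engine
    path.write_text(json.dumps(data))

def extract_output(path: Path) -> dict:
    """Steps 4+5: read the results back out of the exchange file."""
    return json.loads(path.read_text())["output"]

with tempfile.TemporaryDirectory() as tmp:
    f = Path(tmp) / "deal1.json"
    write_input(f, {"notional": 1_000_000})   # could happen on machine 1
    run_engine(f)                             # later, on machine 2
    print(extract_output(f))                  # later again, on machine 3
```

The point is that the three functions share nothing but the file, so they can run on different machines at different times.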
Topics covered included:
- Presentations on the latest Wolfram products and technologies, including the Wolfram Language, Mathematica 10, SystemModeler 4, Wolfram Programming Cloud, and Mathematica Online
- A problem-solving desk where our experts answered questions
- Q&A and networking opportunity
- An introduction to the new Mathematica online courses that uni software plus GmbH provides for free for its customers
|The "Kuppelsaal" of the Technical University|
It's innovation? But innovation needs decelerators.
It may be innovative, but innovation needs decelerators. Acceleration is great for many systems, but if you are in a fog of possibilities, you need to think a little more. Insight comes from inquiry and radical experimentation.
It's Sunday, so I think of cooking. There is fast cooking - ingredients cooked in the flame - and slow cooking - cooking in a way that allows flavors to mix in complex ways. Great chefs are good at slow cooking. They test creative new dishes thoroughly. And they promote the results to get eaters hooked on their innovations and give them time to adapt...
Why all the haste? It's a dramatic regime switch - why not implement a test phase?
Do regulators inevitably get captured?
Is it the problem? The fear of being blamed for another great recession?
Some say, it's normal for regulators to get captured…it's a natural logic...
Academics call it "regulatory capture": the process by which regulators who are put in place to tame the wild beasts of business instead become tools of the corporations they should regulate, especially large incumbents. Models and reasons are reviewed here. A few selected: regulators need information from the regulated, consequently interaction, cooperation…but there is also lobbying, and there are career issues...
Only a scene in a big picture?
Big Business Capture Economists?
Beyond regulation…what if big business has also managed to bend the thinking of economists? This idea is discussed in Mark Buchanan's article has big business captured the economists?
Are they [economists] free authors of their ideas or are they, like regulators, significantly influenced in their thinking by their interaction with business interests? There is empirical evidence that this happens…
Beware strict centralization?
I have only poor knowledge of the social and economic sciences, but I understand this much: capture is not a risk, but a danger (it can't be optimized).
And my system view tells me: centralization feeds accumulation that feeds capture.
This was one of the reasons, why I posted don't ride the waves of centralization blind.
I know that the bigger mistakes are often fixed only later, and the only thing we can do is help the small and medium-sized financial market participants not only meet the regulatory requirements, but stabilize the core of their businesses...in competition with the big players who were selected to "save" it.
What we strived for were models and systems that were understandable and computational. This led us to multi-strategy and multi-model approaches implemented in our machine learning framework, enabling us to do complex projects more swiftly. It has all types of statistics, fuzzy-logic-based machine learning, kernel methods (SVM), ANNs and more.
The future of AI?
Recently, I read more about AI. I want to mention two articles: The Myth of AI on Edge.org (I wrote about it here) and The Future of AI in the Nov-14 issue of WIRED Magazine.
I dare to compile them and cook them together with my own thoughts.
Computerized Systems are People?
The idea that computerized systems are people has a long tradition. Programs were tested (the Turing test…) for whether they behave like a person. The idea was promoted that there's a strong relation between algorithms and life and that computerized systems need all of our knowledge, expertise…to become intelligent…it was the expert system thinking.
It's easier to automate a university professor than a caterpillar driver…we said in the 80s.
The expert system thinking was strictly top down. And it "died" because of its false promises.
Christopher Langton, of the Santa Fe Institute, named the discipline that examines systems related to life, its processes and evolution, Artificial Life. The AL community applied genetic programming (a great technique for optimization and other uses), cellular automata...But the "creatures" that were created were not very intelligent.
(Later the field was extended to the logic of living systems in artificial environments - understanding complex information processing. Implemented as agent based systems).
We can create many, sufficiently intelligent, collaborating systems by fast evolution…we said in the 90s.
Thinking like humans?
Now, companies such as Google, Amazon…want to create a channel between people and algorithms. Rather than applying AI to improve search, they use better search to improve their AI.
Our brain has an enormous capacity - so we just need to rebuild it? Do three breakthroughs unleash the long-awaited arrival of AI?
- Massive inherent parallelism - the new hybrid CPU/GPU muscles able to replicate powerful ANNs?
- Massive data - learning from examples
- Better algorithms - ANNs have an enormous combinatorial complexity, so they need to be structured.
Make AI consciousness-free
AI that is driven by these technologies in large nets will cognitize things, as things have been electrified. It will transform the internet. Our thinking will be extended with some extra intelligence. As in freestyle chess, where players use chess programs, people and systems will do tasks together.
AI will think differently about food, clothes, arts, materials…Even derivatives?
I have written about the Good Use of Computers, starting with Polanyi's paradox and advocating the use of computers in difficult situations. IMO, this should be true for AI.
We can learn how to manage those difficulties and even learn more about intelligence. But in such a kind of co-evolution AI must be consciousness-free.
Make knowledge computational and behavior quantifiable
I talk about AI as a set of techniques, from mathematics, engineering, science…not a post-human species. And I believe in the intelligent combination of modeling, calibration, simulation…with an intelligent identification of parameters. On the individual, as well as on the systemic level. The storm of parallelism, bigger data and deeper ANNs alone will not be able to replicate complex real behavior.
We need to continue making knowledge computational and behavior quantifiable.
Not only in finance…
But yes, quants should learn more about deep learning.
This is my first time ever in Berlin. What I really enjoy:
- the clear directions given at the underground exits
- Dussmann das Kulturkaufhaus
It will be short, because I've had similar thoughts that I put into various posts...about quant work under exogenous or endogenous influences, here, here, here, here.
Think like an entrepreneur. Think about delegating tasks. Think about whether you could grow by partnering. What about finding the correct and robust numerical scheme, programming it…? When you delegate every job somebody else can do, you'll most probably find the most profitable job only you can do...and you've got the time to do it. Validate models…use the right market data for the model calibration…create the most advanced risk management process…aggregate risk data...prepare dynamic reports that behave like a program.
We will be pleased to help leverage this important job - building the decision support for optimal risk-taking.
In 1941 they deserted the Red Army. When the Nazi regime occupied Estonia, Roland went into hiding and Edgar took on a new identity as a loyal supporter. In 1963 Estonia is again under Soviet control…Edgar is now a Soviet apparatchik…
This is an artistically written book about a dark time. It's a historic novel, a crime story, a romance, a war story...
I have read all the books of the Finnish-Estonian writer Sofi Oksanen.
Twenty-five years later, Ignacio, now a successful defense lawyer, is asked by Tere to defend Zarco…
The setting is the Catalan city of Girona in the late 70s, after Franco's death. Ignacio describes all this 30 years later in a series of interviews with an unnamed writer.
This book surveys the borders between right and wrong, respectability and crime…it is brilliantly plotted and I love the style.
It's on the favorite fiction of 2014 picks of the economist Tyler Cowen, whose blog I read frequently. I agree.
Before Outlaws, I read Soldiers of Salamis. Both books make Javier Cercas one of my favorite fiction writers.
Both books reviewed here are on my best-ever list.
What's new in UnRisk 8 has been compiled in Andreas' pre-announcement yesterday.
There's one thing: UnRisk-Q is the core of our technology stack. The UnRisk PRICING ENGINE is a solution, but it remains a technology: our proprietary Excel Link provides a second front end, Excel, while the UnRisk Financial Language front end remains available.
It's perfect for quants, who want to build validation and test books in Excel, but develop new functionality atop UnRisk or, say, front office practitioners who want to run dynamic work books, but develop post processors aggregating results in a beyond-Excel way. Even better if both collaborate closely.
The multicurve model allows the use (in the same currency) of different interest rate curves for discounting, e.g. the EONIA curve, and for determining variable cashflows, e.g. Libor 3m or Libor 6m.
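As a minimal numerical illustration of the multicurve idea (the flat curves below are made up, not market data): the forward rate of a Libor period comes from the projection curve, while the resulting cashflow is discounted on the EONIA curve:

```python
import math

# Toy multicurve pricing of one floating cashflow (illustrative curves only).
def df_eonia(t):   return math.exp(-0.001 * t)   # discount curve
def df_libor3m(t): return math.exp(-0.003 * t)   # projection (pseudo-)curve

t1, t2 = 0.25, 0.50          # accrual period of the Libor 3m fixing
tau = t2 - t1                # year fraction
notional = 1_000_000

# Forward rate from the projection curve ...
fwd = (df_libor3m(t1) / df_libor3m(t2) - 1.0) / tau
# ... but the cashflow is discounted on the EONIA curve.
pv = notional * tau * fwd * df_eonia(t2)
print(fwd, pv)   # forward close to 0.3%, PV a few hundred
```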
The Bachelier model for caps, floors and swaptions can replace the Black 76 model when interest rates are low. In Black vs Bachelier revisited, I pointed out the difficulties with Black 76 when interest rates approach zero. In such cases, (Black) volatilities explode, and orders of magnitude of several 1000 percent for Black volatilities are quite common. With the Bachelier model and its data, which may be used as calibration input, negative interest rates may occur without nasty instabilities.
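For illustration, here is the standard Bachelier (normal model) call formula in Python. The normal volatility is an absolute number, so forwards and strikes near or below zero cause no blow-up (the parameters below are made up):

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def bachelier_call(forward, strike, sigma_n, T, df=1.0):
    """Caplet-style call under the Bachelier (normal) model.

    sigma_n is an absolute (normal) volatility, so negative forwards and
    strikes are perfectly fine - no explosion as rates approach zero.
    """
    s = sigma_n * math.sqrt(T)
    d = (forward - strike) / s
    return df * ((forward - strike) * norm_cdf(d) + s * norm_pdf(d))

# At the money the formula collapses to df * sigma_n * sqrt(T) / sqrt(2*pi):
print(bachelier_call(0.001, 0.001, 0.005, 1.0))
# A negative forward is unproblematic:
print(bachelier_call(-0.002, 0.001, 0.005, 1.0))
```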
Traditional companies are "incremental". Strangely, only a few C-level members tackle the challenge of innovation. They're trained for operational efficiency. Even in a crisis, few organize a bottom-up renewal.
I grew up in organizations where strategies were built at the top, big leaders controlled little leaders, team members competed for promotion…Tasks were assigned, rules defined actions. It was the perfect form of "plan-and-control": a pyramid. Only little space for change.
In an organizational pyramid, yesterday outweighs tomorrow. In a pyramid you can't enhance innovation, agility or engagement.
It is indispensable to reshape the organizational form.
Traditional managers want conformance to specifications, rules, deadlines, budgets, standards and principles. They declare "controlism" as the driving force of the organization. They hate failures and would never agree to "gain from disorder".
Make no mistake, control is important, but freedom is important as well.
Management needs to deal with the known and unknown, ruled and chaotic, (little) losses for (bigger) gains…
Bureaucracy is the formal representation of the pyramid and the regime of conformance.
Bureaucracy must die.
This part is inspired by Gary Hamel's blog post in MIXMASHUP.
Change the organization
If we want to change the underlying form-and-ideology of management that causes the major problems, we may want to learn a little from the paradigms of modern risk management.
Duality - how to deal with the known and unknown
Boundaries - try to find the boundaries between the known and unknown
Optimization - optimization only works within the boundaries
Evolution - business in a networked world is of the co-evolution type
Game theory - a mathematical study of uncertainty caused by actions of others
This all needs quantitative skills. And if quantitative skills spread, management fades.
The program grid
IMO, quants with self-esteem become stronger and contribute more to a better life if they drive a co-evolution in what I call a "program grid": a grid of individuals sharing programs, information and skills, without giving away the very innovation that makes their solutions different. Program grids may be intra- or inter-organizational.
Technology stacks, know-how packages, workouts…destroy cold-blooded bureaucracy? If quants do not strive to get picked, but choose themselves, they will contribute to the (indispensable) change.
IMO, another example of why kids should learn programming early. It's fun and it builds "nowists"…creating things quickly and improving constantly, without needing permission from the preachers of ideology, rules…driving bottom-up innovation.
I recently made plots of the electronic density (that is, the probability to find an electron at a certain point in the flake) for different eigenstates of the electronic wave functions. I found those plots so nice - from an artistic view point as well as a scientific one - that I thought I'd want to share them with you.
A short explanation for the scientifically minded readers: white means very high electron density, the color scale for decreasing density goes via orange and blueish colors to black, which means no electrons. The color scale is logarithmic, because I was not so much interested in the density as such, but the areas where the density is zero - these areas are called the "nodes" of the wave functions.
The symmetry of these nodes is dictated by a competition between the hexagonal symmetry of the outer confinement and the symmetry of the lattice of scatterers (the wave function is forced to be zero there). This competition (physicists call such a system a "frustrated system") results in the kaleidoscope-like structure of the density of electrons in that material.
And a question came to my mind: is there optimal intelligence?
Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought.
This suggests two-sidedness and consequently a subject for optimization. If you have no knowledge, everything is change - if you know everything, why would you change?
Intelligent people want to change the underlying systems that are causing the major problems of our life. Some call this integral intelligence.
What makes such radical innovation more systemic?
Know the system you want to change - but not too much
Prototype - expect the unknown
- Organize a feedback cycle - learn
IMO, an approach of optimal intelligence.
In The Myth of AI on Edge, Jaron Lanier challenges the idea that computers are people. There's no doubt computers burst with knowledge - it's even computational…but...
I like the example of (Google) translation. Although back in the 50s, because of Chomsky's work, there was a notion of a compact and elegant core to language, for three decades the AI community tried to create ideal translators. It was a reasonable hypothesis, but nobody could do it. The breakthrough came with the idea of statistical translation - from a huge set of examples provided by millions of human translators, adding to and improving the example stack daily. It's not perfect, not artful…but readable. Great.
We've invented zillions of tests (the Turing test…) for algorithms, to decide whether we want to call the computer that runs them a person. With this view we consequently love it, fear its misbehavior…
My simple question: what are the mechanisms to make them partners in an optimal intelligence - changing the underlying systems that are causing major problems of our human life?
CMMP AUTUMN SCHOOL: BASICS OF ELECTRONIC STRUCTURE CALCULATIONS, TAMPERE UNIVERSITY OF TECHNOLOGY, NOVEMBER 12-14, 2014
The Tampere node of the National Doctoral Training Network in Condensed Matter and Material Physics (CMMP) organized a three-day school on electronic structure methods with recognized speakers from both Finland and abroad. The school was targeted mainly at postgraduate students in related fields, but postdocs as well as motivated undergraduate students were also encouraged to participate.
From the abstract
In such different domains as statistical physics and spin glasses, neurosciences, social science, economics and finance, large ensembles of interacting individuals taking their decisions either in accordance with (mainstream) or against (hipsters) the majority are ubiquitous. Yet, trying hard to be different often ends up in hipsters consistently taking the same decisions - in other words, all looking alike. In this case, I am not sure whether mathematics is required to predict the emergent dynamics.
It seems quite obvious to me: if you only listen to the mainstream, you create mainstream. To create trends, mainstreams usually act focused and simple. To fight the mainstream, hipsters need to align and synchronize. To strengthen their non-conformity, they conform within their own system.
This is a passionate plea for the proof:
Forever, in infinitely many cases
A proof is the lazy brain's best friend - it spares it the need to test a theorem, a transformation, a change, a program…in finitely many - but many! - cases. A proof says: correct in infinitely many cases. From "now" on, the semantics is functional, not necessarily operational.
A proven theorem can be pushed into the knowledge base and used as black box. It becomes a validated building block of the innovative mathematical spiral.
When we want to solve 3*x = 5, we may be aware that the nonzero rational numbers under multiplication, (Q\{0},*), form an Abelian group and each number is a basis for all the others. To solve the equation, we use an equivalence transformation to get a basis that has nice properties for finding the solution:
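In this toy case, the equivalence transformation is just multiplication by the group inverse of 3:

```latex
3x = 5
\;\Longrightarrow\;
\tfrac{1}{3}\cdot(3x) = \tfrac{1}{3}\cdot 5
\;\Longrightarrow\;
x = \tfrac{5}{3}
```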
We apply the same principle if we want to solve a system of linear equations. Again, it's a helpful view to see the "unknowns" as weights for column vectors linearly combining the "goal vector".
Provided m = n and the column vectors form a basis (i.e. they are linearly independent), a unique solution exists. Again, we use equivalence transformations to get a basis with nice properties…In the matrix language it's a triangular matrix…The core of the proof is the general principle of constructing the bases.
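The triangularization by equivalence transformations can be sketched generically (a toy example in Python with exact rational arithmetic, not UnRisk code):

```python
from fractions import Fraction

def solve_linear(A, b):
    """Solve A x = b by equivalence transformations (Gaussian elimination).

    Works with exact rationals; assumes the columns of A form a basis,
    i.e. A is square and nonsingular.
    """
    n = len(A)
    M = [[Fraction(x) for x in row] + [Fraction(bi)] for row, bi in zip(A, b)]
    for col in range(n):
        # Pivot: swap in a row with a nonzero entry in this column.
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        # Eliminate below the pivot -> triangular form.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back substitution on the triangular system.
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

print(solve_linear([[2, 1], [1, 3]], [5, 10]))  # x = 1, y = 3
```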
What about a system of multivariate polynomial equations? The Austrian mathematician Bruno Buchberger has shown that the same principle can be applied…proven by constructing Gröbner bases. In short, the method transforms the system in such a way that one basis element is univariate.
Deep knowledge in ring theory, ideals…is required.
One of Bruno Buchberger's research projects is Theorema - in short, automated theorem proving, with a system built atop Mathematica. If the Theorema software can automatically create a constructive proof for the solution of polynomial equations (by constructing Gröbner bases), it generates the solver.
This is not required here, because GBs are already constructed. But, in general an automated theorem prover can become an algorithm generator.
The original objective of this field was
- automating mathematics (not only computation)
- empowering computer science with mathematical thinking and techniques
APT stands for Automatically Programmed Tool, a high-level programming tool used to generate instructions for numerically controlled machine tools. APT was created at MIT in 1956.
It was clear that each APT program needed to run on the GA as well as on the IBM - no subset. APT compilers, interpreters and post processors (generating the control code for the concrete machine tools) were written in FORTRAN and differed from those used in the IBM APT (because of limited resources on the GA, but also to apply our own ideas of geometric modeling…).
Was it innovative?
Later, we introduced a new language that was much more feature and task oriented…however, it created the same constructor…the control programs carrying out the task.
The paradox of copying
Jorge Luis Borges wrote a great short story, "Pierre Menard, Author of the Quixote". Menard, a fictive character, did not compose another Quixote; he produced a version that is re-written word for word. In this story, irony and paradox generate ambivalence. Menard's copy is not a mechanical transcription - it coincides with Miguel de Cervantes' Quixote…
(Borges, Quixote) is different from (Menard, Quixote) because of "knowledge". I think Borges states, through the paradox of Menard: all texts are a kind of rewriting of other texts. Literature is composed of versions? The paradox of Menard pushes the limits to the absurd and impossible, but it is about the principles of writing...
However, Menard's version would become more "different" if he offered a complete thorough Quixote course, a reading tour, a blog, a magazine, "how to write Quixote alike books" workouts…
The Workout in Computational Finance
What if someone rewrote Andreas' and Michael's great book, which explains why a thorough grounding in numerics is indispensable for evaluating pricing and risk models correctly and implementing them in high quality? The rewrite would be different.
The book represents knowledge of the UnRisk Academy, which was established to disseminate this knowledge. It offers online and live seminars, workouts…and the real transformations made in response to the feedback of hundreds of practitioners who use UnRisk to carry out their tasks.
It's the knowledge and its forms of dissemination that make this innovativeness. Constructed and packaged.
In a recent blog post, Andreas asked the question whether the harmonic series of prime numbers converges. In a later blog post he sketched a proof. Do mathematicians ever move beyond the sketch stage with their proofs?
Andreas could have saved a lot of time by just typing in the following code into Mathematica. It uses the Prime function which gives the nth prime:
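In the same spirit, a hedged Python stand-in (a simple sieve instead of Mathematica's Prime function) shows what such an experiment gives: partial sums of 1/p that creep up like log log n, the hallmark of this famously slowly divergent series:

```python
def prime_reciprocal_sum(n):
    """Sum of 1/p over all primes p <= n, using a sieve of Eratosthenes."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, n + 1, i):
                is_prime[j] = False
    return sum(1.0 / p for p in range(2, n + 1) if is_prime[p])

for n in (10, 1000, 1_000_000):
    print(n, prime_reciprocal_sum(n))
# The partial sums grow roughly like log(log(n)) + 0.2615...:
# divergent, but glacially so.
```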
In that vein, all series problems can be answered with a quote by Ronald Reagan: “A tree’s a tree. How many more do you need to look at?”
There is no such thing as an abstract program - this is one of the basic insights of a new fundamental theory of physics, constructor theory, developed by David Deutsch, Chiara Marletto…at Oxford University.
I am a lousy physicist, but I dare to write a little about this theory, because I found one example that I (hopefully) understand.
You can write an offline, task-oriented robot program, but its constructor is the robot control. It's the entity that carries out the given task ("..pick a part from the box, put it on the palette...") repeatedly. The robot control is the foundational element - the constructor.
The robot control uses models that are calibrated and constantly re-calibrated to the real working space and situation. It may need sophisticated feature recognition...
In constructor theory, a transformation or change is described as a task. A constructor is a physical entity which is able to carry out the task repeatedly. A task is only possible if a constructor is capable of carrying it out.
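As a toy illustration only (the class and the robot numbers below are my own, not constructor theory's formalism), this vocabulary can be mimicked in a few lines of Python: a task transforms states, and a constructor carries the task out repeatedly while itself remaining unchanged.

```python
from typing import Callable

# A task is a transformation: input state -> output state.
Task = Callable[[str], str]

class Constructor:
    """A (toy) entity that performs a task repeatedly without being used up."""

    def __init__(self, task: Task):
        self.task = task

    def perform(self, state: str, times: int) -> str:
        for _ in range(times):
            state = self.task(state)
        return state

# The robot-control example: each execution of the task moves one part
# from the box to the palette. State is "parts_in_box,parts_on_palette".
def pick_and_place(state: str) -> str:
    box, palette = map(int, state.split(","))
    if box > 0:
        box, palette = box - 1, palette + 1
    return f"{box},{palette}"

robot_control = Constructor(pick_and_place)
print(robot_control.perform("5,0", times=5))  # -> "0,5"
```

In this picture the task "move a part from the box to the palette" is possible exactly because the robot control exists as an entity able to repeat it.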
It sounds so simple, but it goes beyond Popper's theory of falsification, because it touches information, computation and knowledge on a fundamental level. If we, for example, think of the idea of entropy in a thermodynamic system, the link to information is strong…(oh, I'm already on icy terrain)
I take the practical view: there is no such thing as an abstract program…
In mathematics, BTW, there is no theorem without a model, and a theorem comes to life only through its operational semantics - the evaluation, the computation...
However, one try: if I understand it right, knowledge can be instantiated in our physical world, and the better the instantiation, the better the knowledge. Doesn't this sound quite evolutionary?
The evolution of the option theory
In the introductory book Quantitative Methods for Financial Markets (for students!), Andreas wrote: "the principles of risk-neutral valuation transform the market into a 'fair game'". This rule had been instantiated by the Black-Scholes formula by 1987. But with the introduction of far out-of-the-money options, the smile was discovered.
Over the following years, increasingly complex option models were (and are) introduced - among them models that cannot be validated (impossible to calibrate and re-calibrate...).
In the sense of constructor theory, the task they represent is "impossible". Overly complex models are a fundamental trap.
Avoiding them is not easy, because you need to know in depth where the computational limits are. And those borders are moving...
Stories have to do with lives, and when I asked "no CEQs on board?" I had Emanuel Derman's story "My Life as a Quant" in mind, and how the situation (and stories) of quants have changed.
I learned that a story has a style, a structure and a substance, and in relation to a life the structure is the most important criterion, IMO. It is about the plot.
Archplot. Miniplot. Antiplot
They build the story triangle in R. McKee's view.
Archplot is the classic story structure. It features a single protagonist. The lead character pursues an object of desire (an advanced risk management process?), confronting external forces (a strategy, project roles, management principles…). The story ends with an irreversible change in the life of the protagonist. It's causal, real and linear...
Archplot is the human life story. As humans we may find radical change difficult, but we want the protagonist to change from the beginning to the end. We want characters taking on myriad challenges...
Miniplot characters struggle with their inner demons and move through the world avoiding external confrontations. They're passive, not active. Inside, they fight for their lives. Miniplot usually offers "open" ends.
Antiplot fights the story itself, it breaks all rules. No requirement for causality, nor a constant reality, no time constraints and the protagonists are the same at the end of the story. They never fight any forces. They just remain as they ever were.
Choose the Archplot form
Pursue the objective of becoming a CEQ, saving the life of your financial institution, managing the transformation of your knowledge into margin.
We may be able to serve you.
Projects…things to be created, financed and shipped. Sometimes they influence a life, other times, they fade. UnRisk influences my business life.
In the late 90s I helped to win a contract from the London-based trading desk of an American bank: pricing sophisticated convertible bonds.
Luckily for me, a one-time cooperation shifted into a long-term affiliation in an exciting project: UnRisk.
It's different building a consortium from conducting someone else's project - you jointly get the idea, see an outcome, share a vision, build the technology, build the tools, plant the seeds for growth, are selected or rejected, your clients shape you and your ideas, the tools build you, you identify your "dream client" and "dream partner", you refine your brand promise, you stop listening to focus groups only, you know the financial impact of your decisions, you get the cash flow right…you reinvent your technologies and tools…
UnRisk, as Andreas pointed out in his post yesterday, has many faces.
I'm proud that it matters, that it's different in many aspects, that we got out of the niche very early, that it has a bright future…I'm part of the project.
The trick is to represent the project.
"...harmonic series - that is, the pitches of the notes follow a mathematical distribution known as integer multiples." Amazing. Maths everywhere. I found the link on Marginal Revolution.
The UnRisk user community is quite heterogeneous: there are UnRisk users coming from accounting or controlling, there are quantitative analysts, risk managers, treasurers, traders.
And they all have their preferred ways to work.
With UnRisk FACTORY 5.2 and the UnRisk Excel link for the UnRisk FACTORY, we have closed the gap between two widely used interfaces and thus reduced possible sources of errors in communication.
Thanks to the development team!
Do you want to work with people like us? Our track records are characterized by achievements in mathematics and computer science and our business skill set has been developed on the job.
The selection criteria for a technology always include "who". Names, track records, skill set, provenance, financial stability, market presence…
Not with people like you, may some business professionals say…you are mathematicians, but we need to do real business…
Not with people like you, may some mathematicians say…you are transforming mathematics, a culture technique, into margins…
With people like you, say those who care…you provide know-how packages and respond to our requirements swiftly…
It has never been so easy to connect globally and find the right partners for learning, developing, marketing...but traditional thinking still lets us cling to preferences for neighborhood or major places, known cultural background…or scale?
I've maybe said it too often, but unleashing the programming power behind UnRisk is our chosen path for growth. It's the result of long-term (mathematical) thinking. It's our approach to risk optimization. Moderate growth in a constant feedback loop.
Quantsourcing empowered by UnRisk technology stack
Our technology stack combines the UnRisk Financial Language implemented in UnRisk gridEngines for pricing and calibration, a portfolio-across-scenario FACTORY, a VaR Universe, the UnRisk FACTORY Data Framework, UnRisk Web and deployment services, and since yesterday an Excel Link that links not only to the PRICING ENGINE but to the FACTORY. At the end of 2014 an engine with emphasis on counterparty risk valuation will be available.
Don't start from scratch, our technology stack and products are amazing…and working with us is not too bad.
- UnRisk Web Service: enables our users to import data from the UnRisk FACTORY database into Mathematica
- UnRisk PRICING ENGINE: enables our users to use all of the UnRisk functionality from within Excel
- Market Data
- Calibrated (automatically within the UnRisk FACTORY) interest rate models
- Valuation Results of individual instruments and portfolios
- VaR Results of Portfolios including the contribution VaRs of the underlying instruments
- Extract the valuation results of the portfolio
- Loop over all underlying instruments and extract the expected cashflows
- Aggregate the expected cashflows for the given date intervals
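The three extraction steps above can be sketched as follows. The portfolio data and the yearly bucket function are invented for illustration; in a real workbook the cashflows would come from the UnRisk FACTORY via the Excel link or web service (those calls are not shown here).

```python
from collections import defaultdict
from datetime import date

# Hypothetical extracted valuation results: expected cashflows per
# instrument as (pay date, amount) pairs.
portfolio = {
    "bond_A": [(date(2015, 3, 1), 1000.0), (date(2015, 9, 1), 1000.0)],
    "bond_B": [(date(2015, 4, 15), 500.0), (date(2016, 4, 15), 500.0)],
}

def bucket(d: date) -> int:
    """Date interval: here simply the calendar year."""
    return d.year

# Loop over all underlying instruments and aggregate the expected
# cashflows for the given date intervals.
aggregated = defaultdict(float)
for instrument, cashflows in portfolio.items():
    for pay_date, amount in cashflows:
        aggregated[bucket(pay_date)] += amount

print(dict(aggregated))  # -> {2015: 2500.0, 2016: 500.0}
```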
Immortal, unconstrainedly mobile and absolutely wise
Freakonomics likes the idea of timelessness, unconstrained mobility and absolute wisdom. But then they question the economic side…
Jim Jarmusch made a great film about vampires: Only Lovers Left Alive. A quiet and dark film with a feeling of timelessness…giving the impression that any world is important. The two immortal lovers show us the highest culture of absolute wisdom, connectedness…
But however immortal and wise vampires are, they are bound to live at night, to buy (on dark markets) or steal blood…whatever constraints they have removed, they are stuck with one rare resource…to get it they even risk transforming others into vampires and creating competition...
In the film the lovers come close to dying of hunger…not enough energy left to do what they need…at the last second they find the perfect victim…
Only the imperfect diversify…and live?
The spotlight of the Nov-14 issue of the HBR Magazine is "Internet of Everything"
We strive for understanding and knowing everything. The phrase "internet of things" has arisen to highlight new opportunities exploiting new smart, connected products transforming data into knowledge.
But isn't absolute wisdom also absolute boredom? Isn't uncertainty good? Remember, we only learn from turbulences and gain from disorder.
What are we going to do if the data tell us everything? Will data then become to us what blood is to the vampires? Will the vampires ever get a free market of real blood...will we get a free market of informative data?
Co-evolution in the programming grid
The internet of everything will help to establish a co-evolution of, say, weather forecasting and energy optimization...but for finance and economics we should not forget modeling, parameter identification, simulation…speculation and verification.
IMO, we need co-evolution at another level: co-program for new insight. Let our breakthroughs explore new problems at a higher level…let us find abstractions from applying examples…and share ideas and skills.
In my Merlot post I announced that I would write about this indigenous Friulian wine. No, sorry...this is the story of how I got another outstanding rare wine.
No, I'm not an elitist. I don't like rare wines because they are rare. But, my wine preferences include wines from autochthonous grapes...and their outstanding exemplars are often rare.
Time to out my wine preferences.
I like reading and love music (from John Adams to John Zorn). Literature and music are categorized by genres. And this inspired me to think of wine genres - without naming them. Even more, I understand a wine as a story. Genre is a difficult foundation of story to wrap my mind around. And so it is for wine.
I borrowed the concrete idea from Shawn Coyne's great blog, The Story Grid (Genre's Five-Leaf Clover).
Honestly, when I read some of the tasting notes, I have to smile about the creativity…when the wine "sings in the glass", or (a Chambertin!) "shows a nuanced smell of a wet dog's pelt" (which dog - does it have a name?)…
And especially why they fit so well with this or that dish. I do not care much about this. I eat the dish and then I drink the wine. So, is the wine concluding the last dish or preparing for the next? Food companionship is not a genre criterion for me.
Oh sorry, I'm drifting onto an intellectual side road.
The five leaves of wine genres:
Length - from flash to "infinite"
Nature - from natural to absurdly constructed
Style - from "documentary clear" to dramatic (when not theatrical)
Structure - from linear to complexly nested
Content - fruit, flowers, spices, herbs, minerals, exotics...
No, I do not distinguish nose, color, taste,…
I like multi-genre wine drinking, but my favorite wines are usually medium long, natural, documentary clear, moderate complex but dense, mineral or floral (but not baroque florid).
An example from one of my favorite regions, the Rhone: I give preference to Northern Rhone wines over Chateauneuf-du-Pape, and to the whites over the reds… This leads to the non-theatrical white Hermitage (like the affordable Ferraton Miaux) or the expensive Condrieu Chateau Grillet.
Pignolo fits perfectly into my favorite genre. Lively but dense (I agree, JR!).
How do I get a Pignolo that fits my favorite genre?
The first time I came to Cormons I had nothing but the wine books and the drinking experience of what I call the big-label wines...Jermann, with Vintage Tunina as a prototype.
But, I was lucky to select Aquila D'Oro at Castello di Trussio for a wine and dine evening at this first visit. The owner of the restaurant (and castle), Giorgio Tuti, introduced us to the indigenous wines from great vintners: Ribolla from Gravner and Radikon, Tocai (now Friulano) from Vie de Romans…
But at the beginning, Giorgio Tuti and we were mutually risk-averse and exchanged only "safe" opinions. Later, when we knew each other better, he rolled his eyes imperceptibly when I asked for a dramatic Pinot Grigio from Ronco del Gelso…and served a documentary-clear, onion-colored Pinot Grigio from Pierpaolo Pecorari instead.
My preference for Borgo del Tiglio has its roots in that time - the result of guided exploration, wine by wine.
Once, Giorgio Tuti recommended "the" Rosso from Gravner and aroused my love for Friulian reds.
The first Pignolo was from Dorigo. On a later visit he served a Pignolo magnum from another cult property: Moschioni. We already knew Moschioni's Refosco and Schioppettino and were surprised how clear and floral the Pignolo was.
2004, Jermann's Pignolo Special Edition for Giorgio Tuti
Giorgio Tuti sold some land around the Castello di Trussio to Jermann. And they came to an agreement that Jermann would plant Pignolo in the most qualified corner…and Giorgio would get a special edition of a selected year.
What I've suppressed: in less favorable years the Pignolo can be a bit rough…2004 was perfect (Pignolo's quality is volatile).
Last week, Giorgio Tuti sold me three bottles of his special edition. I will wait a few years to open them…I hope.
p.s. to share a new recommendation of Giorgio Tuti: Ronchi Ro
It's about the intersection of science and commercialization. A way to resolve the entrepreneurship paradox.
PureTech, "Giving Life to Science", is "a science and technology development and commercialization company tackling tomorrow's biggest healthcare problems". Their purpose is "radical innovation in health", and PureTech "has a thematic, problem-driven approach to starting companies, proposing non-obvious solutions rooted in academic research and developing them together with a brilliant group of cross-disciplinary experts". In short, PureTech focuses on taking science and engineering, primarily in the healthcare area, and developing innovative products and companies. Yet another incubator? No, much more…
In my factory automation time (25 years ago) I dreamed of establishing a "walk-in center" for complicated discrete manufacturing problems. With industrial-scale flexible manufacturing islands, and labs…with researchers from distinguished academic institutions and industry and practitioners from manufacturers...finding new operation plans, creating new tools, set-ups...and running concrete experiments on the most complicated parts.
It never materialized, because I failed to convince the authorities to make it happen (manufacturers and manufacturing system providers were enthusiastic)…but the idea was a kind of "PureProd".
MathConsult, Andreas Binder CEO, is a spin-off company of the Industrial Mathematics Institute of the Johannes Kepler University of Linz. They also partner with the Radon Institute for Computational and Applied Mathematics (RICAM) of the Austrian Academy of Sciences. 100+ mathematicians and physicists work in this IndMath center in Linz - 25 of them at MathConsult.
MathConsult transforms their core competencies - Numerical Simulation and Inverse Problems - into complex systems and products for concrete industrial partners in the areas of Metallurgy/Chemical Engineering, Multi-physics problems, plastic deformation, dynamical multi body systems, Adaptive Optics…
Their key technologies embrace hundreds of cross-sectoral mathematical software programs, and there is translational research going on within MathConsult to create new systems for their partners.
Those libraries were the key to add a new competency: Computational Finance. Some of the approaches are presented here.
We built UnRisk and created the UnRisk consortium for technology development and commercialization. UnRisk concentrates on derivative and risk analytics, and we've decided to unleash our technologies and provide the corresponding know-how packages.
What we haven't done: intensive fund raising (one of the PureTech strengths). But we offer options, like project-for-product cost arrangements…
However, we think we are a new kind of quant finance company. But this is "in the small".
In the large, quant finance lacks radical innovation. Consequently, regulators decided to enforce a kind of bureaucratic regime through standardization and centralization. Nothing to get mad about…but what should they do instead?
Quantifying behavioral risk?
There's a big discussion about state risk and behavioral risk. Fama got the Nobel Prize for showing that there's only state risk (EMH), and Shiller for emphasizing behavioral risk.
But behavioral risk will never be a topic that goes beyond intellectual discussions at market risk cocktail parties, if it doesn't become computational.
And this is really hard work. A Herculean mathematical task. It can't be done by single groups alone. It needs collaboration and antidisciplinarity.
The mathematics may be influenced by game theory, evolution theory…and probably the approach of cellular automata…but "pure mathematics" will also play an important role, in the sense that its models may express a behavior (not only a state transition…).
Don't do it alone - cooperate. cooperate. cooperate.
But this will only happen when financial circles recognize that strict competition is an innovation killer. If we do not cooperate more, financial markets will be increasingly conducted by regulation and run into the next crisis based on regulatory arbitrage...
PureTech is a great company.
p.s. As Andreas posted yesterday, we've received a research grant for doing counterparty risk valuation "the UnRisk way". If you want to partner, we will be happy to...