MIT professor David Autor has written a new paper about labor-market polarization that discusses Michael Polanyi's paradox: "We can know more than we can tell."
No, I will not contribute to the "end of labor" discussion, because I lack the knowledge and the data, and, on the other hand, I think the problem is too complex for predictive modeling. I would rather ask: provided it happens, how can we get used to it?
But the paradox inspired a thought:
As a car driver, can I be copied by a machine?
I am a good car driver, but I have no theory of good car driving. I cannot tell you why I am able to drive smoothly through curves without fidgeting ... It's implicit knowledge that cannot be replaced by a computer program ... but hold on: why do car drivers cause accidents that in the worst cases cost lives?
How do I react to a moose suddenly trampling out of the bushes? As usual: swerve, and collide with another car ...?
But self-driving cars will save lives
Why? It's less about cars and their controls than about information and communication, connected local intelligence, learning and adaptation.
It's about sensors that capture much more information than a driver ever could, about reactions blazingly faster than those of humans, about imaging technologies that see much further and deeper and anticipate danger.
And if the danger cannot be avoided: machines do not have a "social brain". Consequently, the self-driving car may decide to collide with the moose in a controlled way, as the best of all possible outcomes.
The polarity of computer use
It's common sense that computers are great at doing routine jobs faster and cheaper. But computers can also do things we cannot do properly: solve extremely difficult problems in time.
So it might be wise to flip the human-machine interaction: let computers take over when situations become really difficult and unusual behavior is needed. Build and use computerized systems that can overcome Polanyi's paradox.
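The flip can be sketched in a few lines. This is a minimal, hypothetical illustration (all names and thresholds are my assumptions, not anyone's real autopilot logic): the human stays in the loop while the situation looks normal, and control passes to the machine once the observed situation drifts far from the usual range.

```python
# Hypothetical sketch of the "flipped" human-machine interaction:
# routine situations stay with the human, unusual ones go to the machine.

def anomaly_score(observation, normal_mean, normal_std):
    """How many standard deviations the observation lies from normal conditions."""
    return abs(observation - normal_mean) / normal_std

def who_controls(observation, normal_mean=0.0, normal_std=1.0, threshold=3.0):
    """Return 'machine' when the situation is unusual, else 'human'."""
    if anomaly_score(observation, normal_mean, normal_std) > threshold:
        return "machine"  # difficult, unusual situation: the computer takes over
    return "human"        # well-known territory: the human stays in control

print(who_controls(0.5))  # a routine reading
print(who_controls(9.0))  # an extreme reading
```

Of course, the hard part hidden in this toy is exactly Polanyi's point: building an anomaly score and a takeover policy that work in situations nobody could fully describe in advance.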
IMO, (financial) risk management is a field where this flip should happen. Currently, quite a few market participants are spending tons of money to install systems that guide them through situations where the conditions are well known and the dangers are cleared out. But what about the situations where the dangers are greater and less well known? The situations that keep risk managers awake at night?
Putting computers to good use must include the risky horror scenarios!