I've worked in the field called Artificial Intelligence for nearly 30 years now. First at the frog level, then at the bird level. From 1990 on, I focused on machine learning… before that, it was mostly expert systems…
What we strove for were models and systems that were understandable and computational. This led us to multi-strategy and multi-model approaches, implemented in our machine learning framework, which enabled us to complete complex projects faster. It offers all kinds of statistics, fuzzy-logic-based machine learning, kernel methods (SVMs), ANNs and more.
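The framework itself is not shown here; as a hypothetical toy sketch of the multi-strategy idea, the snippet below combines a statistical strategy (nearest class mean), a fuzzy-rule strategy and a plain threshold rule by majority vote. All names, data and thresholds are invented for illustration.

```python
# Toy multi-strategy classifier: three simple strategies vote.
# All thresholds and class means are illustrative assumptions.

def nearest_mean(x, mean_pos, mean_neg):
    """Statistical strategy: assign to the closer class mean."""
    return 1 if abs(x - mean_pos) < abs(x - mean_neg) else 0

def fuzzy_rule(x):
    """Fuzzy strategy: membership in 'high' rises linearly on [0.3, 0.7]."""
    membership_high = max(0.0, min(1.0, (x - 0.3) / 0.4))
    return 1 if membership_high > 0.5 else 0

def multi_strategy(x, mean_pos=0.8, mean_neg=0.2):
    """Multi-model decision: majority vote of the three strategies."""
    votes = [nearest_mean(x, mean_pos, mean_neg),
             fuzzy_rule(x),
             1 if x > 0.5 else 0]
    return 1 if sum(votes) >= 2 else 0

print(multi_strategy(0.9), multi_strategy(0.1))  # → 1 0
```

The point is not the toy rules themselves but the combination: each strategy can be inspected and explained on its own, and the vote keeps the overall decision understandable.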
The future of AI?
Recently, I read more about AI. I want to mention two articles: The Myth of AI, of Edge.com (I wrote about it here) and the Future Of AI, Nov-14 issue of WIRED Magazine.
I dare to compile them and cook them together with my own thoughts.
Computerized Systems are People?
The idea that computerized systems are people has a long tradition. Programs were tested (the Turing test…) on whether they behave like a person. The idea was promoted that there's a strong relation between algorithms and life, and that computerized systems need all of our knowledge and expertise… to become intelligent… this was the expert system thinking.
It's easier to automate a university professor than a caterpillar driver…we said in the 80s.
Expert system thinking was strictly top-down. And it "died" because of its false promises.
Christopher Langton of the Santa Fe Institute named the discipline that examines systems related to life, its processes and evolution, Artificial Life. The AL community applied genetic programming (a great technique for optimization and other uses), cellular automata… But the "creatures" that were created were not very intelligent.
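Genetic programming evolves whole programs; as a minimal illustration of the underlying evolutionary idea, here is its simpler cousin, a plain genetic algorithm, maximizing the number of 1-bits in a string. Population size, rates and the fitness function are all toy assumptions.

```python
import random

random.seed(42)  # deterministic toy run

def fitness(bits):
    """OneMax: fitness is simply the number of 1-bits."""
    return sum(bits)

def evolve(pop_size=20, length=16, generations=30):
    """Minimal genetic algorithm: tournament selection,
    one-point crossover, occasional bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # tournament selection of two parents
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            # one-point crossover
            cut = random.randrange(1, length)
            child = p1[:cut] + p2[cut:]
            # bit-flip mutation with small probability
            if random.random() < 0.2:
                child[random.randrange(length)] ^= 1
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Selection plus variation quickly drives the population toward high fitness, which is exactly what made such "fast evolution" attractive, and also why it optimizes well without producing anything one would call intelligent.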
(Later, the field was extended to the logic of living systems in artificial environments, i.e. understanding complex information processing, implemented as agent-based systems.)
We can create many, sufficiently intelligent, collaborating systems by fast evolution… we said in the 90s.
Thinking like humans?
Now, companies like Google, Amazon… want to create a channel between people and algorithms. Rather than applying AI to improve search, they use better search to improve their AI.
Our brain has an enormous capacity, so we just need to rebuild it? Will three breakthroughs unleash the long-awaited arrival of AI?
Massive inherent parallelism - new hybrid CPU/GPU muscles able to replicate powerful ANNs
Massive data - learning from examples
Better algorithms - ANNs have enormous combinatorial complexity, so they need to be structured
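To make the third point concrete, a back-of-the-envelope comparison (the layer sizes are arbitrary assumptions): a fully connected layer's weight count explodes with input size, while a structured, weight-sharing (convolution-style) layer stays small because one kernel is reused across the whole input.

```python
# Why large ANNs need structure: a parameter-count sketch.
# Layer sizes are illustrative assumptions, not a real architecture.

def dense_params(n_in, n_out):
    """Fully connected layer: every input connects to every output."""
    return n_in * n_out

def conv_params(kernel_h, kernel_w, n_filters):
    """Convolution-style layer: each small kernel is shared
    across all positions of the input."""
    return kernel_h * kernel_w * n_filters

# A 28x28 input mapped to 100 units, fully connected:
full = dense_params(28 * 28, 100)   # 78,400 weights
# The same input scanned by 100 shared 5x5 kernels:
shared = conv_params(5, 5, 100)     # 2,500 weights
print(full, shared)  # → 78400 2500
```

Structuring the net cuts the free parameters by more than an order of magnitude here, which is what makes training on massive data with parallel hardware tractable.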
Make AI consciousness-free
AI driven by these technologies in large nets will cognitize things, just as things were once electrified. It will transform the internet. Our thinking will be extended with some extra intelligence. As in freestyle chess, where players use chess programs, people and systems will do tasks together.
AI will think differently about food, clothes, arts, materials…Even derivatives?
I have written about the Good Use of Computers, starting with Polanyi's paradox and advocating the use of computers in difficult situations. IMO, this should be true for AI.
We can learn how to manage those difficulties and even learn more about intelligence. But in such a co-evolution, AI must be consciousness-free.
Make knowledge computational and behavior quantifiable
I talk about AI as a set of techniques from mathematics, engineering, science… not a post-human species. And I believe in the intelligent combination of modeling, calibration, simulation… with intelligent identification of parameters, on the individual as well as the systemic level. The storm of parallelism, bigger data and deeper ANNs alone will not replicate complex real-world behavior.
We need to continue making knowledge computational and behavior quantifiable.
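As a minimal sketch of what "making behavior quantifiable" can mean in practice, here is a toy calibration: identifying the parameter of a simple decay model from observed behavior by minimizing squared error over a grid. The model, the data and the grid are invented for illustration.

```python
import math

# Toy calibration: identify the decay parameter k of the model
# y(t) = exp(-k * t) from observed behavior. All values are synthetic.

def model(t, k):
    return math.exp(-k * t)

# synthetic "observed" behavior, generated with k = 0.5
observations = [(t, model(t, 0.5)) for t in range(10)]

def squared_error(k):
    """How far the candidate parameter's predictions are from the data."""
    return sum((model(t, k) - y) ** 2 for t, y in observations)

# parameter identification by simple grid search
candidates = [i / 100 for i in range(1, 101)]
k_hat = min(candidates, key=squared_error)
print(k_hat)  # → 0.5
```

Real calibration problems use noisy data, richer models and better optimizers, but the shape is the same: knowledge goes into the model, behavior goes into the data, and the parameters connect the two.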
Not only in finance…
But yes, quants should learn more about deep learning.