What Do You Think About Machines That Think? is the 2015 Edge question, posed (perhaps) in reply to Stephen Hawking's warning: "The development of full artificial intelligence could spell the end of the human race."
The first contributors and responses can be found here.
The following view is shaped by 15 years of practical project experience with AI tools:
I think the real questions behind it are:
"…About Machines That Think Like Us?"
"Is building a thinking machine possible at all?" And if so: "How far from thinking are the machines we can build in the near future?"
"Should thinking machines be built at all?"
I have no doubt that thinking machines are possible (if a combination of chemicals can do it, why not silicon?).
AI - the future that still hasn't happened.
The idea has a long tradition: that computerized systems are people, and that there is a strong relation between algorithms and life…
First…the top-down, expert-system approach to AI…"died".
Then…"Artificial Life" promised to create intelligent creatures by genetic programming…it works well for less ambitious objectives.
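What a "less ambitious objective" looks like can be illustrated with a minimal evolutionary algorithm. The sketch below is my own illustrative example, not from the original text: it evolves bit strings toward the toy "OneMax" goal (all bits set to 1), a task genetic methods handle easily, in contrast to the grand promise of evolving intelligence.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

GENOME_LEN = 32    # bits per individual
POP_SIZE = 50      # individuals per generation
GENERATIONS = 60
MUTATION_RATE = 0.02  # per-bit flip probability

def fitness(genome):
    # "OneMax": count the 1-bits; a deliberately modest objective
    return sum(genome)

def mutate(genome):
    # flip each bit independently with small probability
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def crossover(a, b):
    # single-point crossover of two parents
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def select(population):
    # tournament selection: best of 3 random individuals
    return max(random.sample(population, 3), key=fitness)

# random initial population
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

# evolve: select parents, recombine, mutate
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(fitness(best))  # typically at or near the optimum of 32
```

A few dozen generations suffice for this toy goal; the gap between solving OneMax and evolving an "intelligent creature" is exactly the gap the essay points at.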
Now…because we have neurons, intelligent machines supposedly need them too…our brain has an enormous capacity…so to build AIs we only need to combine massive inherent parallelism, massive data management, and deep neural nets…?
However, our objects of desire - universal machines that think like us - are, IMO, far away.
Summarizing: I'm in the camp of people who believe that machines that think can complement us, doing things better for a better society. But it depends on what they are supposed to be thinking about.
What I fear is that we will try to teach people to behave like machines - if we think like machines, it becomes easier for machines to think like us?!
IMO, AI is a set of techniques from mathematics, engineering, and science…not a post-human species. Not only in finance and economics…behavior must be quantified and knowledge made computational…