Approaching Singularity?

In Will Technology Outsmart Us? I used a title that could also have been Approaching Singularity, borrowing the term popularized by the futurist Ray Kurzweil, who predicts that once technology becomes more intelligent, it will become self-driven by positive feedback loops.

The Smart Revolution.

After posting Will Technology Outsmart Us?, I searched a little and found Not Quite Smart Enough in The NY Times. Smart appliances, it says, are part of a larger trend toward smart electronics.
So, if I have a freshly caught trout and beetroot in my refrigerator, it will suggest recipes ...
One by Heston Blumenthal or Alain Ducasse, one tailored to my dietary needs?

Will Technology Outsmart Us?

Does Complexity Economics need complex simulation systems to gain insight, and beyond that, complex control systems in order to be "managed"? A global controller is in contradiction to complexity.
I see a system as complex if it contains co-evolving subsystems.

Traditional economic models see an interplay between countless market participants who, acting more or less rationally, drive the markets towards a balance, and external impacts based mostly on events as well as changes in rules and technology.
In short, financial instruments would take their realistic values and "freeze" - the market would settle down - in the absence of external "shocks".
Do they? Complex, constructive learning plays an important role in such complex systems, leading to internal dynamics even in the absence of external shocks.

M. Buchanan went into detail in his post Minority Games. Really exciting.
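To make that concrete, here is a minimal, illustrative sketch of a minority game in Python (not taken from Buchanan's post; the agent count, memory length and scoring rule are assumptions chosen for illustration). Agents repeatedly choose one of two sides, the minority side wins, and each agent plays whichever of its lookup-table strategies has scored best so far. The aggregate attendance keeps fluctuating although no external shock is ever applied - the dynamics are generated by the co-evolving strategies themselves.

```python
import random

# Minimal minority game sketch (illustrative only).
# N agents choose side 0 or 1 each round; agents on the minority side win.
# Each agent holds S lookup-table strategies keyed by the last M winning sides
# and plays its currently best-scoring one - a simple form of co-evolving,
# constructive learning with no external shocks.

N, S, M, ROUNDS = 101, 2, 3, 200   # agents, strategies per agent, memory, rounds

def random_strategy():
    # Map every possible history of the last M winning sides to a choice.
    return {h: random.randint(0, 1) for h in range(2 ** M)}

agents = [[{"table": random_strategy(), "score": 0} for _ in range(S)]
          for _ in range(N)]
history = random.randint(0, 2 ** M - 1)     # encoded last M outcomes

for t in range(ROUNDS):
    choices = []
    for strategies in agents:
        best = max(strategies, key=lambda s: s["score"])
        choices.append(best["table"][history])
    attendance = sum(choices)                # number of agents choosing side 1
    minority_side = 0 if attendance > N / 2 else 1

    # Reward every strategy that would have picked the minority side.
    for strategies in agents:
        for s in strategies:
            if s["table"][history] == minority_side:
                s["score"] += 1

    history = ((history << 1) | minority_side) % (2 ** M)
    if t % 20 == 0:
        print(f"round {t:3d}: attendance {attendance}, minority side {minority_side}")
```

Run it and the attendance never settles to a fixed value; the "market" keeps moving purely through the agents' mutual adaptation.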

But let me step back a little. Will there be a co-evolution of human and artificial intelligence, or will technology just take over? In other words, will we be able to, and will we need to, stay competitive?

I have a background in factory automation, and in this field feedback loops are still used to control machine tools, robots, transportation and storage systems. There will be better ways of interaction based on simple local intelligence.
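For readers outside the field, the classic control loop is simple: measure, compare with a setpoint, correct. Here is a minimal sketch of a proportional controller driving a single, idealized axis towards its target position; the gain, cycle time and axis model are made up for illustration.

```python
# Minimal sketch of a classic feedback (closed-loop) controller, as still used
# in factory automation; setpoint, gain and axis model are hypothetical.

def proportional_control(setpoint, position, kp=0.5):
    """Return a velocity command proportional to the position error."""
    error = setpoint - position
    return kp * error

position = 0.0          # current axis position in mm (assumed starting point)
setpoint = 100.0        # target position in mm
dt = 0.1                # control cycle time in seconds

for step in range(50):
    velocity = proportional_control(setpoint, position)
    position += velocity * dt      # idealized axis: velocity integrates to position
    if step % 10 == 0:
        print(f"t={step * dt:4.1f}s  position={position:7.2f} mm")
```

The point is the contrast: such a loop corrects against one fixed target, whereas "simple local intelligence" would let each machine adapt its own behaviour in interaction with its neighbours.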

What about an economic agent system?

Let me set the feasibility question aside.
Can competition in principle mean "fewer" losses?
Humans build networks to enhance knowledge, interact across cultures, ask great questions, engage opposing views ... they co-operate.

Therefore I do not see the future as depicted, say, in the Matrix films.
What interacting with technology teaches us will be important, and consequently we need to design systems strictly bottom-up.