Yu et al. propose a new training algorithm for feed-forward neural networks, and they show that this algorithm is faster and better able to avoid being trapped in local minima than conventional back-propagation. They also show an application of their algorithm to a stock market problem. New and promising connectionist algorithms are always welcome, and the neural network community will certainly appreciate this one.
It is difficult to criticize such a well-written paper. I do, however, have a concern about the references: they are, in most cases, inadequate. In the introduction, when the authors mention traditional back-propagation, they do not cite its seminal reference [1], the work responsible for bringing it to life. As for the references throughout the text, four of the 22 are at least 20 years old, and 15 are at least ten years old; only three were published within the last decade. Of course, there are classical papers that cannot be left out, but the authors should have provided more up-to-date references to strengthen the credibility of their research.