Volume 14, Number 3
PATTERN RECOGNITION LETTERS
greedy algorithm; often the solutions are exact, as verified against solutions obtained using a fast parallel enumerative algorithm. The underlying mechanism of the general method is the decomposition of arbitrary problem energy gradients into piecewise-linear functions, which can be modeled as the outputs of small groups of hidden units. In Hellstrom and Kanal (1992b), we derived thermodynamic mean-field neural networks for multiprocessor scheduling. Simulations of networks of up to 2400 units gave very good and often exact solutions. Our general method for treating non-Hamiltonian energy functions also has broad applicability to problems in pattern recognition.
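To make the mean-field idea concrete, the following is a minimal sketch of mean-field annealing applied to a toy max-cut instance, not the scheduling networks of Hellstrom and Kanal (1992b); the function name, cooling schedule, and parameter values are illustrative assumptions. Continuous "spins" replace binary units, each relaxing toward the thermal average tanh(h/T) of its local field as the temperature is lowered.

```python
import math
import random

def mean_field_anneal(W, t_start=2.0, t_end=0.05, cool=0.9, sweeps=20):
    """Mean-field annealing for minimizing E(s) = sum_{i<j} W[i][j]*s_i*s_j.

    W is a symmetric coupling matrix.  The continuous spins s_i in (-1, 1)
    start near zero and, as the temperature T drops, saturate toward a
    low-energy +/-1 configuration (illustrative schedule, not from the paper).
    """
    n = len(W)
    rng = random.Random(0)
    s = [rng.uniform(-0.1, 0.1) for _ in range(n)]  # small random symmetry break
    t = t_start
    while t > t_end:
        for _ in range(sweeps):
            for i in range(n):
                # Local mean field h_i = -dE/ds_i; relax s_i to its thermal average.
                h = -sum(W[i][j] * s[j] for j in range(n) if j != i)
                s[i] = math.tanh(h / t)
        t *= cool
    return s

# Toy instance: max-cut on a 4-cycle; the ground state alternates +1/-1.
W = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
spins = mean_field_anneal(W)
```

At high temperature the spins stay near zero; the alternating mode grows first as T falls, which is how the deterministic dynamics find the cut without stochastic sampling.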
A related optimization method which we and others have found very useful is the Graduated Non-Convexity (GNC) technique developed by Blake and Zisserman (1987). In addition to using it for edge-preserved smoothing, we have used it to develop algorithms for discontinuity-preserved motion field computation in an application involving multiple-object tracking and motion parameter estimation from passive imagery (Raghavan et al. (1992a, 1992b), Gupta and Kanal (1992)). It is interesting to note the essential equivalence among Mean-Field Annealing, GNC, and a feature extraction technique known as Variable Conductance Diffusion, which has been demonstrated in some recent papers (Bilbro et al. (1992), Snyder et al. (1992)).
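GNC minimizes a non-convex energy by tracking the minimum through a family of approximations that begins with a convex one. The sketch below illustrates the idea on 1-D edge-preserved smoothing using a simplified linear homotopy from a quadratic penalty to the truncated quadratic of the weak-membrane model; it is not Blake and Zisserman's exact g* construction, and all names and parameter values are illustrative assumptions.

```python
def gnc_smooth(d, lam=1.0, alpha=0.1, steps=300, lr=0.1):
    """Edge-preserving smoothing of a 1-D signal d via a GNC-style continuation.

    Energy: E_p(u) = sum_i (u_i - d_i)^2 + sum_i g_p(u_{i+1} - u_i),
    where g_p blends the convex quadratic lam*t^2 (p = 0) into the
    non-convex truncated quadratic min(lam*t^2, alpha) (p = 1).
    """
    n = len(d)
    u = list(d)  # warm start from the data

    def dg(t, p):
        # Derivative of the blended penalty g_p at difference t.
        quad = 2.0 * lam * t
        trunc = quad if lam * t * t < alpha else 0.0  # broken past the threshold
        return (1.0 - p) * quad + p * trunc

    for p in (0.0, 0.25, 0.5, 0.75, 1.0):  # graduate the non-convexity
        for _ in range(steps):             # gradient descent at this level
            grad = [2.0 * (u[i] - d[i]) for i in range(n)]
            for i in range(n - 1):
                g = dg(u[i + 1] - u[i], p)
                grad[i] -= g
                grad[i + 1] += g
            for i in range(n):
                u[i] -= lr * grad[i]
    return u

# Noisy step edge: smoothing flattens the noise but preserves the jump.
d = [0.05 * (-1) ** i for i in range(10)] + [1 + 0.05 * (-1) ** i for i in range(10)]
u = gnc_smooth(d)
```

Small within-segment differences stay in the quadratic regime and are diffused away, while the large difference at the step exceeds the truncation threshold and attracts no smoothing force at p = 1, so the discontinuity survives.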
While the new generation of artificial neural networks excites us, we should keep in mind that: (1) as has been shown by us (1989) and others, fairly simple statistical decision tree methods often give equivalent or better results; (2) the various neural network paradigms for pattern classification introduced in recent years have close connections with stochastic approximation, estimation, and classification procedures known in statistical pattern recognition; and (3) rather good algorithms have been developed in recent years for fairly large combinatorial optimization problems, whereas neural networks have so far been demonstrated only on much smaller problems. It remains to be shown that combinatorial optimization is a good arena for artificial neural networks. However, there is no denying the positive results already achieved in recent years with neural networks, genetic algorithms, and other such newer techniques, which I think are very intriguing and fertile areas for theoretical and experimental inquiry.
For the solution of complex problems in pattern recognition and more generally in machine intelligence, involving heterogeneous data sources of both numeric and symbolic information, the use of hybrid methodologies integrating multiple paradigms is becoming increasingly popular. The following statement appeared in Kanal (1972):
"It is now recognized that the key to pattern recognition problems does not lie wholly in learning machines, statistical approaches, spatial filtering, heuristic programming, formal linguistic approaches, or in any other particular solution which has been vigorously advocated by one or another group during the last one and a half decades as the solution to the pattern recognition problem. No single model exists for all pattern recognition problems and no single technique is applicable to all problems. Rather what we have in pattern recognition is a bag of tools and a bag of problems."
Twenty years later I was pleased to see the following comments in a recent article by Marvin Minsky (1991):
"In the 1960's and 1970's students frequently asked, 'Which kind of representation is best?', and I usually replied that we'd need more research before answering. But now I would give a different reply: 'To solve really hard problems, we'll have to use several different representations.'"
Later in the same article, Minsky continues:
"It is time to stop arguing over which type of pattern classification technique is best because that depends on our context and goal. Instead we should work at a higher level of organization and discover how to build managerial systems to exploit the different virtues and evade the different limitations of each of these ways of comparing things."
I hope that with Minsky joining in what I had long thought should be apparent, fewer researchers amongst us will yield to the temptation to boost their current favorite technique or theory by knocking what they view as competing methodologies and areas of inquiry. In recent years we have heard such comments praising Neural Networks and criticizing AI while ignoring the many seminal contributions of AI, or denigrating neural networks and praising case-based reasoning, or comments about Neural Nets versus Decision Trees or Genetic Algorithms, etc.