puting capacity, will solve many if not all the remaining problems with face recognition technology.
If artificial beings are to “read” people, that is, read their emotions through their facial expressions, further advances are needed. The human face has more muscles than the visage of any other living creature. These muscles can wrest the face into thousands of expressions, some differing only subtly but carrying serious differences in meaning. Since the early studies of the nineteenth-century anatomist Guillaume-Benjamin-Amand Duchenne, for instance, it has been known that the difference between a false smile of seeming happiness and a true smile of real joy is that in a true smile the corners of the mouth are raised and the skin crinkles at the corners of the eyes.
Machine vision can already distinguish among emotions that produce widely different expressions. In one example, Gwen Littlewort and her colleagues at the Machine Perception Laboratory of the University of California, San Diego, have developed a system that automatically detects a face in a video image and decides in which of seven categories its expression belongs: anger, disgust, fear, joy, sadness, surprise, or neutrality. Although relatively crude, this level of emotional identification is sufficient to enhance rapport between humans and artificial beings, allowing the latter to respond differently to an angry person, say, than to a surprised one.
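The core of such a system, once a face has been detected and reduced to a feature vector, is a classifier that scores the face against each of the seven categories and picks the best match. The following is a minimal sketch of that final step only, using a simple linear scorer; the actual UCSD system used far more sophisticated machine learning, and the function and weight names here are illustrative, not taken from their software.

```python
import numpy as np

# The seven categories named in the text.
CATEGORIES = ["anger", "disgust", "fear", "joy",
              "sadness", "surprise", "neutrality"]

def classify_expression(features, weights, bias):
    """Score a face-feature vector against each category and return
    the best-matching label. A toy linear classifier: one row of
    `weights` (plus one entry of `bias`) per category."""
    scores = weights @ features + bias
    return CATEGORIES[int(np.argmax(scores))]
```

In practice the weights would be learned from a database of labeled facial images; here they simply stand in for whatever trained model produces the per-category scores.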
But a digital being that cannot tell a false smile from a real one might remain naïve about humans, like the android Commander Data in Star Trek. Fortunately, in 1978, Paul Ekman, a psychologist at the University of California, San Francisco, who specializes in facial expressions, developed with his colleague Wallace Friesen a method to classify everything a face can do. Their Facial Action Coding System uses anatomical knowledge to define more than 30 action units (AUs) corresponding to contractions of specific muscles in the upper and lower face. These AUs are sufficient to fully describe the thousands of possible facial expressions.
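The power of this coding is that an expression becomes simply a set of active AUs, which a machine can compare against known combinations. A small sketch, using two standard FACS codes: AU6 is the cheek raiser, which produces the crinkling at the eye corners, and AU12 is the lip-corner puller. Duchenne's observation then reduces to a set test: a true smile shows both AU6 and AU12, while a posed smile shows AU12 alone.

```python
# Two standard FACS action units (the full system defines dozens more).
AU_NAMES = {6: "cheek raiser", 12: "lip corner puller"}

def is_duchenne_smile(active_aus):
    """A true (Duchenne) smile raises the mouth corners (AU12) AND
    crinkles the eye corners (AU6); a false smile lacks AU6."""
    return {6, 12} <= set(active_aus)
```

This is only an illustration of the representation, not a working detector: the hard part, recognizing which AUs are active in an image, is exactly what the neural-network work described next addresses.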
In 2001, Takeo Kanade’s group at Carnegie Mellon drew on this work to develop a neural network that breaks down any facial expression it sees into discrete AUs, with a recognition rate exceeding 96