It takes further mental effort to interpret the information these actions bring into the brain. The brain must learn to see, a complex process that begins early in life. How difficult this is even for the powerful visual cortex is illustrated by a real-life case related by the neurologist and writer Oliver Sacks—the tale of a middle-aged man who miraculously regained his sight after decades of blindness, but who found that eyesight alone was not enough; he also needed a brain that had learned to understand visual information. Although he struggled hard to comprehend the world visually, it was too late for him to master this ability.
Given the enormous demands vision places on the brain, it is not surprising that it takes massive computing capacity for a machine to match human vision. Hans Moravec notes that early AI researchers were ready to believe that given the right software, machine minds could be made fully intelligent. “Computer vision convinced me otherwise,” he now writes, adding,
Each robot’s-eye glimpse results in a million-point mosaic. Touching every point took our computer seconds, finding a few extended patterns consumed minutes, and full stereoscopic matching of the view from two eyes needed hours. Human vision does vastly more every tenth of a second.
Typically, to perform the equivalent of human vision in real time requires a computer executing billions of instructions per second. Early computers were incapable of handling streams of visual data and interpreting them on reasonable time scales; in the late 1960s and early 1970s, it took hours for the pioneering robot Shakey to calculate its actions as it scanned its surroundings.
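A rough back-of-envelope calculation, using Moravec's own figures above, suggests why the count runs to billions. The per-pixel operation cost below is an illustrative assumption, not a measured value:

```python
# Back-of-envelope estimate of the computing demand of real-time vision,
# using Moravec's numbers: a "million-point mosaic" refreshed
# "every tenth of a second." The operations-per-pixel figure is
# an assumed cost for filtering and matching, chosen for illustration.
pixels_per_frame = 1_000_000   # Moravec's million-point mosaic
frames_per_second = 10         # one glimpse every tenth of a second
ops_per_pixel = 300            # assumed instructions per pixel (hypothetical)

ops_per_second = pixels_per_frame * frames_per_second * ops_per_pixel
print(f"{ops_per_second:,} instructions per second")
```

Even with a modest assumed per-pixel cost, the total lands in the billions of instructions per second, which is consistent with the scale the text describes.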
Now cheap, readily available microprocessors can handle visual information at high speeds, and a laptop computer can perform aspects of visual cognition in real time. Larry Matthies, who runs the Machine Vision group at the Jet Propulsion Laboratory, says that computers are now so fast that even complex programs for machine vision can be rapidly executed. Philosophical differences about top-down versus bottom-up or other approaches to artificial vision, he adds, have “very quickly become outdated. Because we’ve got fast enough machines you can do better vision, more reasoning—and that’s the solution.”