On-Road Vehicle Detection: A Review

IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, VOL. 28, NO. 5, MAY 2006

Fig. 8. Obstacle detection: (a) left and (b) right stereo images, (c) and (d) the remapped images, (e) the difference image, and (f) corresponding polar histogram (from [56]).

Although only two cameras are required to find the range and elevated pixels in an image, there are several advantages to using more than two cameras [58]: 1) repeating texture can confuse a two-camera system by causing matching ambiguities, which can be eliminated when additional cameras are present, and 2) shorter-baseline systems are less prone to matching errors, while longer-baseline systems are more accurate; the combination is better than either one alone. Williamson and Thorpe [59] investigated a trinocular system. The trinocular rig was mounted on top of a vehicle with the longest baseline being 1.2 meters; the third camera was displaced 50 cm horizontally and 30 cm vertically to provide a short baseline. The system was reported to detect objects as small as 14 cm at ranges in excess of 100 m. Due to the additional computational cost, however, binocular systems are generally preferred in driver assistance systems.
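The baseline trade-off above follows from the pinhole stereo geometry Z = f·B/d (focal length f in pixels, baseline B, disparity d in pixels): for a fixed matching error in pixels, a longer baseline yields a larger disparity and hence a smaller depth error. A minimal sketch, with illustrative numbers that are not taken from the cited system:

```python
# Sketch: depth from stereo disparity and the effect of baseline length.
# Z = f * B / d. A fixed 0.5-pixel matching error perturbs the recovered
# depth more for a short baseline than for a long one. The focal length
# and range below are assumptions for illustration only.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

f_px = 800.0    # assumed focal length in pixels
z_true = 50.0   # assumed true range in meters

for baseline in (0.3, 1.2):                  # short vs. long baseline (m)
    d = f_px * baseline / z_true             # ideal disparity at this range
    z_err = depth_from_disparity(f_px, baseline, d - 0.5)  # 0.5 px error
    print(f"B={baseline} m: disparity={d:.1f} px, "
          f"depth under 0.5 px matching error={z_err:.1f} m")
```

With these numbers, the 0.3 m baseline misestimates the 50 m range by roughly 6 m, while the 1.2 m baseline is off by little more than 1 m, which is the accuracy argument for the long baseline; the short baseline, in turn, keeps disparities small and matching easier.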

5.3 Motion-Based Methods

All the cues discussed so far use spatial features to distinguish between vehicles and background. Another cue that can be employed is relative motion, obtained via the calculation of optical flow. Let us represent the image intensity at location (x, y) at time t by E(x, y, t). Pixels in the images appear to move due to the relative motion between the sensor and the scene. The vector field o(x, y) of this motion is referred to as optical flow.
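Differential methods recover the flow from the brightness-constancy relation: for small motion, E(x + u, y + v, t + 1) ≈ E(x, y, t), which linearizes to Ex·u + Ey·v + Et = 0 at every pixel. Stacking these constraints over a window and solving in the least-squares sense yields (u, v). The following is a hedged sketch of that idea on a synthetic translating pattern, not a reproduction of any method from the surveyed papers:

```python
import numpy as np

# Sketch: recover a known subpixel translation from the linearized
# brightness-constancy constraint Ex*u + Ey*v + Et = 0, solved in the
# least-squares sense over a window. The test pattern and shift are
# synthetic assumptions for illustration.
y, x = np.mgrid[0:64, 0:64].astype(float)
u_true, v_true = 0.5, 1.0                       # known translation (px/frame)
frame0 = np.sin(0.3 * x) + np.sin(0.3 * y)      # smooth, textured pattern
frame1 = np.sin(0.3 * (x - u_true)) + np.sin(0.3 * (y - v_true))

Ey, Ex = np.gradient(frame0)                    # spatial derivatives
Et = frame1 - frame0                            # temporal derivative

win = (slice(4, 60), slice(4, 60))              # skip one-sided boundary gradients
A = np.stack([Ex[win].ravel(), Ey[win].ravel()], axis=1)
b = -Et[win].ravel()
(u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"estimated flow ({u:.2f}, {v:.2f}) vs true ({u_true}, {v_true})")
```

The single global solve works here because the whole pattern translates rigidly; real methods solve the same system per window or regularize, precisely because the linearization only holds for small, locally uniform motion.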

Fig. 9. Comparison of optical flows computed with different algorithms: (a) a frame of the image sequence, (b) the theoretical optical flow expected from a pure translation over a flat surface, (c) optical flow from a first-order derivative method, (d) optical flow using second-order derivatives, (e) optical flow using a multiscale differential technique, and (f) optical flow computed with a correlation technique (from [60]).

Optical flow can provide strong information for HG. Vehicles approaching from the opposite direction produce a diverging flow, which can be quantitatively distinguished from the flow caused by the ego-motion of the car [60]. Departing or overtaking vehicles, on the other hand, produce a converging flow. To exploit these observations for obstacle detection, the image is first subdivided into small subimages, and an average speed is estimated in each subimage. Subimages whose speed differs greatly from the global speed estimate are labeled as possible obstacles.
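The subimage labeling step above can be sketched in a few lines. The flow speeds, grid size, and threshold below are synthetic stand-ins, not values from the cited work:

```python
# Sketch of the obstacle-labeling heuristic: estimate an average flow speed
# per subimage, compare it with a global estimate (here the median), and
# flag subimages that deviate by more than a threshold.
from statistics import median

def label_obstacles(subimage_speeds, threshold):
    """Return indices of subimages whose average flow speed deviates from
    the global (median) speed estimate by more than `threshold`."""
    global_speed = median(subimage_speeds)
    return [i for i, s in enumerate(subimage_speeds)
            if abs(s - global_speed) > threshold]

# Mostly background flow (~2 px/frame from ego-motion); subimages 3 and 7
# carry the fast flow typical of an approaching or overtaking vehicle.
speeds = [2.1, 1.9, 2.0, 6.5, 2.2, 1.8, 2.0, 5.9, 2.1]
print(label_obstacles(speeds, threshold=1.5))  # -> [3, 7]
```

The median is used for the global estimate so that the obstacle subimages themselves do not drag the reference speed toward their own values.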

The performance of several methods for recovering the optical flow o(x, y) from the intensity E(x, y, t) has been compared in [61] using selected image sequences from (mostly fixed) cameras (see Fig. 9). Most of these methods compute temporal and spatial derivatives of the intensity profiles and are therefore referred to as differential techniques. Obtaining a reliable dense optical flow estimate with a moving camera is not an easy task. Giachetti et al. [60] implemented some of the best first-order and second-order differential methods in the literature and applied them to a typical image sequence taken from a vehicle moving along a flat, straight road. In particular, they matched corresponding points between two consecutive frames by minimizing the following distance measure:
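The specific distance measure of [60] is not reproduced in this excerpt. A common choice in correlation-based matching is the sum of squared differences (SSD) between a patch in one frame and candidate displaced patches in the next; the sketch below illustrates that family of measures under this assumption, not the exact measure of the paper:

```python
import numpy as np

# Sketch: correlation-style matching by brute-force minimization of the
# sum of squared differences (SSD) over integer displacements. The frames,
# patch location, and search radius are synthetic assumptions.

def ssd_displacement(frame0, frame1, top, left, size, search):
    """Return the (dy, dx) in [-search, search]^2 minimizing the SSD
    between a patch of frame0 and shifted patches of frame1."""
    patch = frame0[top:top + size, left:left + size]
    best_cost, best_d = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame1[top + dy:top + dy + size,
                          left + dx:left + dx + size]
            cost = float(((patch - cand) ** 2).sum())
            if best_cost is None or cost < best_cost:
                best_cost, best_d = cost, (dy, dx)
    return best_d

rng = np.random.default_rng(1)
frame0 = rng.random((40, 40))
frame1 = np.roll(frame0, (2, -1), axis=(0, 1))   # shift down 2 px, left 1 px
print(ssd_displacement(frame0, frame1, 10, 10, 8, 3))  # -> (2, -1)
```

Unlike differential techniques, such correlation measures tolerate larger displacements at the cost of an exhaustive search and integer-resolution estimates (subpixel refinement requires interpolating the cost surface).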
