
Fig. 3. Computing the symmetry: (a) gray-level symmetry, (b) edge symmetry, (c) horizontal edges symmetry, (d) vertical edges symmetry, and (e) total symmetry (from [70]).

distribution was fit to this color space and each pixel was classified as either a road or a nonroad pixel.

Buluswar and Draper [36] used a nonparametric learning-based approach for object segmentation and recognition. A multivariate decision tree was utilized to model the object in the RGB color space from a number of training examples. Among the various color spaces, RGB has the advantage that the original color information is not distorted; however, its channels are highly correlated, which makes it difficult to judge the difference between two colors from their distance in RGB space. In [37], Guo et al. chose the L*a*b color space instead. The L*a*b color space has the property that equally distinct color differences map to equal Euclidean distances. An incremental region-fitting method was investigated in the L*a*b color space for road segmentation [37].
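To make the contrast between the two color spaces concrete, the following Python sketch (using OpenCV and NumPy) labels pixels as road when their Euclidean distance to a reference road color in L*a*b space is small; the function name, reference color, and distance threshold are illustrative assumptions, not details taken from [36] or [37].

```python
import cv2
import numpy as np

def segment_road_lab(bgr_image, road_sample_bgr, max_dist=25.0):
    """Label pixels as road (True) when their L*a*b distance to a
    reference road color falls below max_dist (an illustrative value)."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB).astype(np.float32)
    road_lab = cv2.cvtColor(
        np.uint8([[road_sample_bgr]]), cv2.COLOR_BGR2LAB
    ).astype(np.float32)[0, 0]
    # Euclidean distance in L*a*b roughly matches perceived color difference,
    # which is not true of distances taken directly in RGB.
    dist = np.linalg.norm(lab - road_lab, axis=2)
    return dist < max_dist
```

The same distance computation in RGB would mix correlated channels, which is precisely the drawback noted above.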

5.1.3 Shadow

Using shadow information as a sign pattern for vehicle detection was initially discussed in [38]. By investigating image intensity, it was found that the area underneath a vehicle is distinctly darker than any other area on an asphalt-paved road. A first attempt to exploit this observation can be found in [39], although it offered no systematic way to choose appropriate threshold values. The intensity of the shadow depends on the illumination of the image, which in turn depends on the weather conditions; the thresholds therefore cannot be fixed. Segmenting the shadow area requires both a low and a high threshold, and a reliable low threshold for a shadow area is hard to establish. The high threshold can be estimated by analyzing the gray levels of the "free driving space", that is, the road directly in front of the prototype vehicle.

Fig. 4. Free driving spaces, the corresponding gray-value histograms, and the thresholded images (from [40]).

Tzomakas and Seelen [40] followed the same idea and proposed a method to determine the threshold values. Specifically, a normal distribution was assumed for the intensity of the free driving space. The mean and variance of the distribution were estimated using Maximum Likelihood (ML) estimation. The high threshold of the shadow area was defined as the point where the distribution of the road gray values declines to zero on the left of the mean, which was approximated by m − 3σ, where m is the mean and σ is the standard deviation. This algorithm is depicted in Fig. 4. It should be noted that the assumption about the distribution of the road pixels might not always hold true.

5.1.4 Corners

Exploiting the fact that vehicles in general have a rectangular shape with four corners (upper-left, upper-right, lower-left, and lower-right), Bertozzi et al. proposed a corner-based method to hypothesize vehicle locations [41]. Four templates, each of them corresponding to one of the four corners, were used to detect all the corners in an image, followed by a search method to find the matching corners (i.e., a valid upper-left corner should have a matched lower-right corner).
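A rough sketch of corner-based hypothesis generation in this spirit is given below; the template images, matching-score threshold, and function names are illustrative assumptions rather than details from [41].

```python
import cv2
import numpy as np

def detect_corners(gray, template, score_thresh=0.7):
    """Return (y, x) locations where the template response exceeds score_thresh."""
    resp = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(resp > score_thresh)
    return list(zip(ys, xs))

def hypothesize_boxes(gray, upper_left_tpl, lower_right_tpl):
    """Pair upper-left corner responses with lower-right responses that lie
    below and to the right of them, yielding candidate bounding boxes."""
    ul = detect_corners(gray, upper_left_tpl)
    lr = detect_corners(gray, lower_right_tpl)
    boxes = []
    for (y0, x0) in ul:
        for (y1, x1) in lr:
            # A valid upper-left corner must have a matching lower-right corner.
            if y1 > y0 and x1 > x0:
                boxes.append((x0, y0, x1, y1))
    return boxes
```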

5.1.5 Vertical/Horizontal Edges

Different views of a vehicle, especially rear/frontal views, contain many horizontal and vertical structures, such as the rear window and bumper. Constellations of vertical and horizontal edges have been shown to be a strong cue for hypothesizing vehicle presence. In an effort to find pronounced vertical structures in an image, Matthews et al. [42] used edge detection to find strong vertical edges. To localize the left and right positions of a vehicle, they computed the vertical profile of the edge image (i.e., by summing the pixels in each column), followed by smoothing with a triangular filter. By finding the local maximum peaks of the vertical profile, they claimed that they could find the left and right positions of a vehicle. A shadow method, similar to that in [40], was used to find the bottom of the vehicle. Because there were no consistent cues associated with the top of a vehicle, they detected it by assuming that the aspect ratio of any vehicle was one.
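The column-profile computation described above can be sketched as follows; the filter width and the use of a Sobel operator for vertical-edge detection are assumptions made for illustration, not the exact choices of [42].

```python
import cv2
import numpy as np

def vertical_edge_profile(gray, filter_width=15):
    """Sum vertical-edge strength per column and smooth the resulting
    profile with a triangular filter."""
    # Vertical structures respond to the horizontal intensity derivative.
    edges = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))
    profile = edges.sum(axis=0)          # one value per image column
    tri = np.bartlett(filter_width)      # triangular smoothing window
    tri /= tri.sum()
    return np.convolve(profile, tri, mode="same")

def candidate_columns(profile):
    """Local maxima of the smoothed profile mark candidate vehicle sides."""
    left = profile[1:-1] > profile[:-2]
    right = profile[1:-1] > profile[2:]
    return np.where(left & right)[0] + 1
```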

Goerick et al. [43] proposed a method called Local Orientation Coding (LOC) to extract edge information. An image obtained by this method consists of strings of binary code representing the directional gray-level variation in each pixel's neighborhood. These codes essentially carry edge information. Handmann et al. [44] also used LOC, together with shadow information, for vehicle detection. Parodi and Piccioli [45] proposed to extract the general structure of a traffic scene by first segmenting an image, using edge grouping, into four regions: pavement, sky, and two lateral regions.
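A toy illustration of the coding idea, greatly simplified from the actual LOC scheme in [43], assigns each pixel a binary code with one bit per direction in which the gray level drops by more than a threshold; the neighborhood, bit layout, and threshold are hypothetical.

```python
import numpy as np

def local_orientation_code(gray, thresh=10):
    """Assign each pixel a 4-bit code: one bit per principal direction in
    which the gray-level difference to the neighbor exceeds thresh."""
    g = gray.astype(np.int32)
    code = np.zeros(g.shape, dtype=np.uint8)
    # (dy, dx, bit) for up, down, left, and right neighbors.
    neighbors = [(-1, 0, 1), (1, 0, 2), (0, -1, 4), (0, 1, 8)]
    for dy, dx, bit in neighbors:
        shifted = np.roll(np.roll(g, dy, axis=0), dx, axis=1)
        code |= np.where(g - shifted > thresh, bit, 0).astype(np.uint8)
    return code  # nonzero codes roughly trace edge structures
```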
