IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE,
Fig. 6. Image, free driving space, image segmentation based on local image entropy, and co-occurrence-based image segmentation (from ).
Fig. 5. Multiscale hypothesis generation—size of the images: 90 × 62 (first row), 180 × 124 (second row), and 360 × 248 (third row). The images in the first column have been obtained by applying low pass filtering at different scales; second column: vertical edge maps; third column: horizontal edge maps; fourth column: vertical and horizontal profiles. All images have been scaled back to 360 × 248 for illustration purposes (from ).
Groups of horizontal edges on the detected pavement were then considered for hypothesizing the presence of vehicles.
Betke et al.  utilized edge information to detect distant cars. They proposed a coarse-to-fine search method that looks for rectangular objects. The coarse search scanned the whole edge map for prominent features, such as long uninterrupted edges, to decide whether a refined search was necessary; whenever such edges were found, the refined search was activated only in the small image region suggested by the coarse search.
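The coarse-to-fine idea can be sketched as follows. This is a minimal illustration, not the authors' implementation; the block size and minimum edge length are hypothetical thresholds chosen for the example:

```python
import numpy as np

def longest_run(row):
    """Length of the longest uninterrupted run of nonzero entries in a row."""
    best = run = 0
    for v in row:
        run = run + 1 if v else 0
        best = max(best, run)
    return best

def coarse_search(edge_map, block=32, min_edge_len=10):
    """Scan the whole edge map in coarse blocks and flag those containing a
    long, uninterrupted horizontal edge (illustrative thresholds). Only the
    flagged blocks would then be handed to the refined search."""
    h, w = edge_map.shape
    candidates = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            patch = edge_map[r:r + block, c:c + block]
            if patch.any() and max(longest_run(row) for row in patch) >= min_edge_len:
                candidates.append((r, c))
    return candidates

# Synthetic edge map with one prominent horizontal edge
edges = np.zeros((64, 64), dtype=np.uint8)
edges[40, 5:25] = 1          # a 20-pixel uninterrupted edge
regions = coarse_search(edges)
```

Only the single block containing the long edge survives the coarse pass, so the expensive refined search touches a small fraction of the image.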
In , vertical and horizontal edges were extracted separately using the Sobel operator. Two edge-based constraint filters (i.e., a rank filter and an attached line edge filter), derived from prior knowledge about vehicles, were then applied to those edges to segment vehicles from the background. Assuming that lanes have been successfully detected, Bucher et al.  hypothesized vehicle presence by scanning each lane from the bottom of the image up to a certain vertical position, corresponding to a predefined maximum distance in the real world. Potential candidates were obtained whenever a strong horizontal segment delimited by the lane borders was found. To address the above problems, a multiscale approach that combines subsampling with smoothing to hypothesize possible vehicle locations more robustly was proposed in .
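Extracting vertical and horizontal edge maps separately with the Sobel operator can be illustrated as follows. This is a minimal NumPy sketch; a real system would use an optimized routine (e.g., cv2.Sobel), and the step image is a toy input:

```python
import numpy as np

def sobel_edges(img):
    """Separate vertical- and horizontal-edge maps via 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # responds to vertical edges
    ky = kx.T                                   # responds to horizontal edges
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for r in range(h):
        for c in range(w):
            win = pad[r:r + 3, c:c + 3]
            gx[r, c] = (win * kx).sum()
            gy[r, c] = (win * ky).sum()
    return np.abs(gx), np.abs(gy)

# A step image: bright right half -> a strong vertical edge at the boundary
img = np.zeros((8, 8))
img[:, 4:] = 100
v_edges, h_edges = sobel_edges(img)
```

The vertical-edge map responds strongly at the intensity step, while the horizontal-edge map stays flat, which is exactly the separation the constraint filters operate on.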
Three levels of detail were used: (360 × 248), (180 × 124), and (90 × 62). At each level, the image was processed by applying the following steps:
low pass filtering (e.g., first column of Fig. 5);
vertical edge detection (e.g., second column of Fig. 5), vertical profile computation of the edge image (e.g., last column of Fig. 5), and profile filtering using a low pass filter;
horizontal edge detection (e.g., third column of Fig. 5), horizontal profile computation of the edge image (e.g., last column of Fig. 5), and profile filtering using a low pass filter; and
local maxima and minima detection (e.g., peaks and valleys) of the two profiles.
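A single level of the profile computation above can be sketched as follows; this is a simplified illustration assuming a 3-tap box filter for profile smoothing and toy edge maps, not the parameters of the cited system:

```python
import numpy as np

def profiles_and_peaks(v_edges, h_edges, k=3):
    """Column/row sums of the edge maps, smoothed with a k-tap box filter,
    plus the local maxima of each profile (one level of the multiscale scheme)."""
    v_prof = v_edges.sum(axis=0).astype(float)   # vertical profile: one value per column
    h_prof = h_edges.sum(axis=1).astype(float)   # horizontal profile: one value per row
    box = np.ones(k) / k
    v_prof = np.convolve(v_prof, box, mode="same")
    h_prof = np.convolve(h_prof, box, mode="same")
    # a peak: higher than its left neighbor, no lower than its right one
    # (so a flat plateau is counted once)
    peaks = lambda p: [i for i in range(1, len(p) - 1)
                       if p[i] > p[i - 1] and p[i] >= p[i + 1]]
    return peaks(v_prof), peaks(h_prof)

# Toy edge maps: two vertical edges (vehicle sides) and one horizontal edge
v = np.zeros((20, 30)); v[5:15, 8] = 1; v[5:15, 22] = 1
h = np.zeros((20, 30)); h[14, 8:22] = 1
v_peaks, h_peaks = profiles_and_peaks(v, h)
# The two peak columns (near 8 and 22) and the peak row (near 14)
# bound a candidate vehicle rectangle.
```

At a coarser level the profiles contain fewer spurious maxima, which is why the hypotheses are formed there first and then traced down.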
The peaks and valleys of the profiles provide strong information about the presence of a vehicle in the image. Starting from the coarsest level of detail, all the local maxima at that level are found first. Although the resulting low-resolution images have lost fine details, important vertical and horizontal structures are mostly preserved (e.g., first row of Fig. 5). Once the maxima at the coarsest level have been found, they are traced down to the next finer level. The results from this level are finally traced down to the finest level, where the final hypotheses are generated.
The proposed multiscale approach improves system robustness by making the hypothesis generation step less sensitive to the choice of parameters. Forming the first hypotheses at the lowest level of detail is very useful since this level contains only the most salient structural features. Besides improving robustness, the multiscale scheme speeds up the whole process since the low-resolution images have a much simpler structure, as illustrated in Fig. 5 (i.e., candidate vehicle locations can be found faster and more easily). Several examples are provided in Fig. 5 (left column).
The presence of vehicles in an image causes local intensity changes. Due to general similarities among all vehicles, the intensity changes follow a certain texture pattern . This texture information can be used as a cue to narrow down the search area for vehicle detection. Entropy was first used as a measure for texture detection. For each image pixel, a small window was chosen around it, and the entropy of that window was considered as the entropy of the pixel. Only regions with high entropy were considered for further processing.
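The entropy cue described above can be sketched as follows. This is a minimal illustration assuming a 5 × 5 window and a hypothetical threshold of 1.0 bit; the cited work's window size and threshold are not given here:

```python
import numpy as np

def local_entropy(img, win=5):
    """Shannon entropy (bits) of the gray-level histogram in a win x win
    window around each pixel; the window's entropy is taken as the
    pixel's entropy, as described in the text."""
    h, w = img.shape
    r = win // 2
    out = np.zeros((h, w))
    for i in range(r, h - r):
        for j in range(r, w - r):
            patch = img[i - r:i + r + 1, j - r:j + r + 1]
            _, counts = np.unique(patch, return_counts=True)
            p = counts / counts.sum()
            out[i, j] = -(p * np.log2(p)).sum()
    return out

# Flat background (zero entropy) vs. a textured patch (high entropy)
rng = np.random.default_rng(0)
img = np.zeros((20, 20), dtype=np.uint8)
img[5:15, 5:15] = rng.integers(0, 256, (10, 10))
ent = local_entropy(img)
mask = ent > 1.0   # keep only high-entropy regions for further processing
```

Uniform road and sky regions score near zero and are discarded, so later stages only examine the textured regions where vehicles can plausibly appear.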
Another texture-based segmentation method suggested in  uses co-occurrence matrices introduced in . The co-occurrence matrix contains estimates of the probabilities of co-occurrences of pixel pairs under predefined geometrical and intensity constraints. Fourteen statistical features were computed from the co-occurrence matrices . For typical textures of geometrical structures, like trucks and cars, four of the 14 measurements were found to be critical for object detection (i.e., energy, contrast, entropy, and correlation) . Using co-occurrence matrices for texture detection is generally more accurate than the entropy method mentioned earlier, since co-occurrence matrices employ second-order statistics as opposed to the histogram information employed by the entropy method (see Fig. 6). However, computing the co-occurrence matrices is expensive.
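A co-occurrence matrix and the four measurements named above can be sketched as follows, using the standard Haralick-style definitions; the displacement (horizontal neighbor) and the 4-level quantization are illustrative choices for the example:

```python
import numpy as np

def glcm(img, dr=0, dc=1, levels=4):
    """Co-occurrence matrix: joint probabilities of gray-level pairs at a
    fixed displacement (dr, dc) - the 'predefined geometrical constraint'."""
    h, w = img.shape
    M = np.zeros((levels, levels))
    for r in range(h - dr):
        for c in range(w - dc):
            M[img[r, c], img[r + dr, c + dc]] += 1
    return M / M.sum()

def haralick4(P):
    """The four measures singled out in the text: energy, contrast,
    entropy, and correlation, computed from the co-occurrence matrix P."""
    i, j = np.indices(P.shape)
    energy = (P ** 2).sum()
    contrast = ((i - j) ** 2 * P).sum()
    entropy = -(P[P > 0] * np.log2(P[P > 0])).sum()
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    corr = (((i - mu_i) * (j - mu_j) * P).sum()) / (sd_i * sd_j)
    return energy, contrast, entropy, corr

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
P = glcm(img)                                    # horizontal neighbor pairs
energy, contrast, entropy, corr = haralick4(P)
```

Because each feature is a sum over all level pairs of the matrix, the method captures second-order statistics; the cost of filling one matrix per displacement and window is also why the text calls it expensive.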