
3 ACTIVE VERSUS PASSIVE SENSORS

The most common approach to vehicle detection uses active sensors [25], such as radar-based (i.e., millimeter-wave) [26], laser-based (i.e., lidar) [27], [28], and acoustic-based [29] sensors. In radar, radio waves are transmitted into the atmosphere, and objects in their path scatter some of the power back to the radar’s receiver. Lidar (i.e., “Light Detection and Ranging”) also transmits and receives electromagnetic radiation, but at a higher frequency: it operates in the ultraviolet, visible, and infrared regions of the electromagnetic spectrum.

These sensors are called active because they detect the distance of objects by measuring the travel time of a signal emitted by the sensor and reflected back by the object. Their main advantage is that they can measure certain quantities (e.g., distance) directly, without requiring powerful computing resources. Radar-based systems can “see” at least 150 meters ahead in fog or rain, where an average driver can see only 10 meters or less. Lidar is less expensive to produce and easier to package than radar; with the exception of more recent systems, however, lidar does not perform as well as radar in rain and snow. Laser-based systems are more accurate than radar, but their applications are limited by their relatively higher cost. Prototype vehicles employing active sensors have shown promising results. However, when a large number of vehicles move simultaneously in the same direction, interference among sensors of the same type poses a serious problem. Active sensors also have, in general, several drawbacks, such as low spatial resolution and slow scanning speed. This is not the case with more recent laser scanners, such as SICK [27], which can gather high-spatial-resolution data at high scanning speeds.
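To make the ranging principle concrete, here is a minimal sketch (in Python, with illustrative numbers only) of how a round-trip echo time translates into distance for a radar or lidar pulse; real sensor pipelines add filtering, calibration, and multi-target processing on top of this:

```python
# Minimal sketch of active-sensor ranging: distance from round-trip time.
# Illustrative only, not modeled on any particular sensor.

SPEED_OF_LIGHT = 299_792_458.0  # m/s, for radar and lidar pulses alike

def range_from_round_trip(t_seconds: float) -> float:
    """Return target distance in meters for a measured echo delay.

    The pulse travels to the target and back, so the one-way
    distance is half the total path: d = c * t / 2.
    """
    return SPEED_OF_LIGHT * t_seconds / 2.0

# Example: an echo arriving 1 microsecond after emission corresponds
# to a target roughly 150 m ahead -- about the fog/rain range quoted
# above for radar-based systems.
print(range_from_round_trip(1e-6))  # ~149.9 m
```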

Optical sensors, such as ordinary cameras, are usually referred to as passive sensors [25] because they acquire data in a nonintrusive way. One advantage of passive sensors over active sensors is cost: with inexpensive cameras, a vehicle can carry both forward- and rearward-facing cameras, enabling a nearly 360-degree field of view. Optical sensors can also track cars more effectively as they enter a curve or move from one side of the road to the other. Moreover, visual information can be very important in a number of related applications, such as lane detection, traffic sign recognition, and object identification (e.g., pedestrians and obstacles), without requiring any modifications to the road infrastructure. Several systems presented in [5] demonstrate the principal feasibility of vision-based driver assistance systems.

4 THE TWO STEPS OF VEHICLE DETECTION

On-board vehicle detection systems have high computational requirements: they need to process the acquired images in real time or near real time to leave time for the driver to react. Searching the whole image to locate potential vehicle locations is prohibitive for real-time applications. The majority of methods reported in the literature therefore follow two basic steps: 1) hypothesis generation (HG), where the locations of possible vehicles in an image are hypothesized, and 2) hypothesis verification (HV), where tests are performed to verify the presence of vehicles in an image (see Fig. 2). Although there is some overlap in the methods employed for each step, this taxonomy provides a good framework for discussion throughout this survey.

Fig. 2. Illustration of the two-step vehicle detection strategy.
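As a purely structural sketch of this two-step strategy, the skeleton below separates the steps into placeholder functions; every name in it is hypothetical, standing in for whichever concrete HG and HV methods a given system adopts:

```python
# Skeleton of the two-step detection strategy (Fig. 2).
# All function names are hypothetical placeholders for the concrete
# HG and HV methods surveyed in the following sections.

from typing import List, Tuple

BBox = Tuple[int, int, int, int]  # (x, y, width, height) candidate region

def generate_hypotheses(frame) -> List[BBox]:
    """HG step: quickly propose candidate vehicle locations,
    e.g., via knowledge-based cues, stereo/IPM, or optical flow."""
    raise NotImplementedError  # placeholder for a concrete HG method

def verify_hypothesis(frame, box: BBox) -> bool:
    """HV step: run a (usually costlier) test on each candidate,
    e.g., template correlation or a trained appearance classifier."""
    raise NotImplementedError  # placeholder for a concrete HV method

def detect_vehicles(frame) -> List[BBox]:
    """Full pipeline: hypothesize first, then verify each candidate."""
    candidates = generate_hypotheses(frame)
    return [box for box in candidates if verify_hypothesis(frame, box)]
```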

5 HG METHODS

Various HG approaches have been proposed in the literature; they can be classified into one of the following three categories: 1) knowledge-based, 2) stereo-based, and 3) motion-based. The objective of the HG step is to find candidate vehicle locations in an image quickly for further exploration. Knowledge-based methods employ a priori knowledge to hypothesize vehicle locations in an image. Stereo-based approaches take advantage of Inverse Perspective Mapping (IPM) [30] to estimate the locations of vehicles and obstacles in images. Motion-based methods detect vehicles and obstacles using optical flow. The hypothesized locations from the HG step form the input to the HV step, where tests are performed to verify the correctness of the hypotheses.
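As a rough illustration of the IPM idea behind stereo-based HG, the sketch below warps an image to a bird's-eye view under a flat-ground assumption; the four point correspondences are hypothetical stand-ins for values a real system would derive from camera calibration (height, tilt, focal length):

```python
# Hedged sketch of Inverse Perspective Mapping (IPM): warp the camera
# image into a bird's-eye view of the assumed flat ground plane.
# The point correspondences below are hypothetical, for illustration.

import cv2
import numpy as np

def inverse_perspective_map(image: np.ndarray) -> np.ndarray:
    h, w = image.shape[:2]
    # Hypothetical image-plane quadrilateral covering the road ahead...
    src = np.float32([[w * 0.35, h * 0.65], [w * 0.65, h * 0.65],
                      [w * 0.95, h * 0.95], [w * 0.05, h * 0.95]])
    # ...mapped to a rectangle in the top-down (bird's-eye) view.
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, H, (w, h))

# Stereo-based HG compares the IPMs of the left and right images:
# true ground-plane pixels agree, while obstacles such as vehicles
# produce large differences that flag candidate locations.
```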

5.1 Knowledge-Based Methods

Knowledge-based methods employ a priori knowledge to hypothesize vehicle locations in an image. We review below some representative approaches using information about symmetry, color, shadow, geometrical features (e.g., corners, horizontal/vertical edges), texture, and vehicle lights.

5.1.1 Symmetry

As one of the main signatures of man-made objects, symmetry has been used often for object detection and recognition in computer vision [31]. Images of vehicles observed from rear or frontal views are in general symmetrical in the horizontal and vertical directions. This observation has been used as a cue for vehicle detection in several studies [32], [33]. An important issue that arises when computing symmetry from intensity, however, is the presence of homogeneous areas. In these areas, symmetry estimations are sensitive to noise. In [4], information about edges was included in the symmetry estimation to filter out homogeneous areas (see Fig. 3). In a different study, Seelen et al. [34] formulated symmetry detection as an optimization problem which was solved using Neural Networks (NNs).
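The sketch below illustrates one simple way to compute an edge-weighted left-right symmetry score so that homogeneous areas contribute little; it is an illustration in the spirit of [4], not the exact measure used there:

```python
# Hedged sketch of an edge-weighted horizontal-symmetry score.
# Flat (homogeneous) regions get low edge weight, so they barely
# affect the score -- the motivation behind the edge filtering in [4].

import numpy as np

def symmetry_score(window: np.ndarray) -> float:
    """Score in [0, 1]: 1 means a perfectly left-right symmetric window."""
    g = window.astype(np.float64)
    mirrored = g[:, ::-1]
    # Horizontal-gradient magnitude as a crude edge weight.
    edges = np.abs(np.diff(g, axis=1, prepend=g[:, :1]))
    weight = edges + edges[:, ::-1]          # symmetric weighting
    diff = np.abs(g - mirrored)              # asymmetry per pixel
    denom = (weight * (np.abs(g) + np.abs(mirrored))).sum() + 1e-9
    return 1.0 - (weight * diff).sum() / denom

# Scanning symmetry_score over candidate windows/axes and keeping the
# maxima yields hypothesized vehicle centerlines for the HG step.
```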


5.1.2 Color

Although few existing systems use color information to its full extent for HG, it is a very useful cue for obstacle detection, lane/road following, etc. Several prototype systems have investigated the use of color as a cue to follow lanes/roads [35] or to segment vehicles from the background [36], [37]. Crisman et al. [35] used two closely positioned cameras to extend the dynamic range of a single camera: one camera was set to capture the shadowed area by opening its iris, and the other the sunny area by using a closed iris. Combining the color information (i.e., red, green, and blue) from the two images, they formed a six-dimensional color space. A Gaussian
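Assuming the Gaussian mentioned here is used to model colors in the combined six-dimensional space (an assumption on our part, since the passage is truncated), a minimal sketch of such a model might look as follows; all names and the classification rule are hypothetical, not necessarily the method of [35]:

```python
# Hedged illustration only: a single Gaussian fit to training pixels in
# a combined color space, with classification by Mahalanobis distance.
# This is our assumption about the truncated passage, not a confirmed
# description of the model in [35].

import numpy as np

def fit_gaussian(samples: np.ndarray):
    """samples: (N, D) pixel vectors, e.g., D = 6 for two RGB cameras."""
    mean = samples.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(samples, rowvar=False))
    return mean, cov_inv

def mahalanobis_sq(pixels: np.ndarray, mean, cov_inv) -> np.ndarray:
    """Squared Mahalanobis distance of each (N, D) pixel to the model."""
    d = pixels - mean
    return np.einsum('ij,jk,ik->i', d, cov_inv, d)

# Pixels close to the road-color Gaussian are labeled road/background;
# the remaining pixels become vehicle/obstacle candidates for HG.
```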
