
In HV, most research efforts have focused on feature extraction and classification based on learning and statistical models. Efforts in this direction should continue while capitalizing on recent advances in the statistical and machine learning areas. For example, one issue that has not received enough attention in the vehicle detection literature is the selection of a good set of features. In most cases, a large number of features are employed to compensate for the fact that the relevant features are unknown a priori. However, without some kind of feature selection strategy, many of them may be redundant or even irrelevant, which can seriously hurt classification accuracy. In general, it is highly desirable to use only those features with high separability power while ignoring or down-weighting the rest. For instance, to allow a vehicle detector to generalize well, it would be desirable to exclude features encoding fine details that might be present in only some vehicles. Determining which features to use for classification/recognition is referred to as feature selection. Recently, a few efforts have addressed this issue in the context of vehicle detection [76], [96], [97], [98]. Efforts have also been reported to improve tracking through feature selection [99]. We believe that more work is required in this direction, along with efforts to develop more powerful feature extraction and classification schemes. Recent advances in machine learning and statistics (e.g., kernel methods [100]) should be leveraged in this respect.
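As a minimal sketch of the idea (not the scheme used in any of the cited works), the example below ranks candidate features by a simple Fisher-style separability score computed from labeled vehicle/non-vehicle training vectors and retains only the top-scoring ones; the feature matrix, labels, and the number of retained features are hypothetical placeholders.

```python
import numpy as np

def fisher_scores(X, y):
    """Rank features by a simple Fisher separability criterion.

    X : (n_samples, n_features) array of feature vectors
        (e.g., wavelet or Gabor responses from candidate windows).
    y : (n_samples,) array of labels, 1 = vehicle, 0 = non-vehicle.
    Returns one score per feature; higher means better class separation.
    """
    pos, neg = X[y == 1], X[y == 0]
    mu_p, mu_n = pos.mean(axis=0), neg.mean(axis=0)
    var_p, var_n = pos.var(axis=0), neg.var(axis=0)
    return (mu_p - mu_n) ** 2 / (var_p + var_n + 1e-12)

def select_features(X, y, k=50):
    """Keep the k features with the highest separability scores."""
    scores = fisher_scores(X, y)
    keep = np.argsort(scores)[::-1][:k]
    return keep, X[:, keep]

if __name__ == "__main__":
    # Synthetic data standing in for real training vectors.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 400))      # 400 candidate features per sample
    y = rng.integers(0, 2, size=200)     # vehicle / non-vehicle labels
    keep, X_reduced = select_features(X, y, k=50)
    print("Selected feature indices:", keep[:10])
```

In practice, the reduced feature set would then be passed to the classifier used for hypothesis verification; more sophisticated selection criteria (e.g., wrapper or kernel-based methods) follow the same pattern of scoring and pruning features.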

Fig. 11. Low-light camera versus normal camera. (a) Low-light camera daytime image. (b) Same scene captured using a normal camera. (c) Low-light camera nighttime scene. (d) Same nighttime scene captured using a normal camera.

Combining multiple cues should also be explored more actively as a viable means of developing more reliable and robust systems. The main motivation is that no single cue is likely to be suitable for all conceivable scenarios. Combining different cues has produced promising results (e.g., combining LOC, entropy, and shadow [44]; shape, symmetry, and shadow [101]; color and shape [102]; and motion with appearance [103]). Effective fusion mechanisms, as well as cues that are fast and easy to compute, are important research issues.
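One simple fusion mechanism, shown below purely as an illustration and not as the approach of the cited systems, is a weighted sum of normalized per-cue scores for each candidate window; the cue names, weights, and decision threshold are assumptions chosen for the sketch.

```python
def fuse_cues(cue_scores, weights, threshold=0.5):
    """Combine several per-window cue scores into one vehicle confidence.

    cue_scores : dict mapping cue name -> score in [0, 1]
                 (e.g., 'shadow', 'symmetry', 'shape').
    weights    : dict with the same keys giving each cue's relative weight.
    Returns (confidence, is_vehicle).
    """
    total_w = sum(weights[c] for c in cue_scores)
    confidence = sum(weights[c] * cue_scores[c] for c in cue_scores) / total_w
    return confidence, confidence >= threshold

# Example: hypothetical scores for one candidate window.
scores = {"shadow": 0.8, "symmetry": 0.6, "shape": 0.7}
weights = {"shadow": 1.0, "symmetry": 0.5, "shape": 0.8}
print(fuse_cues(scores, weights))
```

More elaborate fusion schemes (e.g., probabilistic or learned combiners) replace the fixed weights with parameters estimated from training data, but the basic structure of scoring each cue and combining the scores remains the same.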




Employing more powerful sensors in vehicle detection applications can influence system performance considerably. Specific objectives include improving dynamic range, spectral sensitivity, and spatial resolution, and incorporating on-sensor computational capabilities.

Traditional CCD cameras lack the dynamic range necessary to operate in traffic under adverse lighting conditions. Cameras with enhanced dynamic range are needed to enable daytime and nighttime operation without blooming. An example is Ford's proprietary low-light camera, developed jointly by Ford Research Laboratory and SENTECH. It uses a Sony x-view CCD array with specifically designed electronic profiles to enhance the camera's dynamic range. Figs. 11a and 11c illustrate the dynamic range of the low-light camera, while Figs. 11b and 11d show the same scenes captured under the same illumination conditions using a normal camera. The low-light camera has been employed in a number of studies, including [77], [76], [104], [49], [96], [97]. Recently, several efforts have focused on using CMOS technology to design cameras with improved dynamic range.

Low-light cameras do not extend visual capabilities beyond the visible spectrum. In contrast, infrared (IR) sensors allow us to sense important information in the nonvisible spectrum. IR-based systems are less sensitive to adverse weather and illumination changes; day and night snapshots of the same scene are more similar to each other. Several studies have evaluated the feasibility and advantages of using IR for driver assistance systems [105], [106], [107], [108]. An interesting example is the miniaturized optical range camera developed in the MINORA project [109], [110]. It works in the near-IR and is cheap, fast, and capable of providing 3D information with high accuracy. However, it has certain limitations, such as low resolution and a narrow field of view. Fusing several sensors together could offer considerable performance improvements (see Section 9.3).

Improving camera resolution can offer significant benefits too. Over the last few years, the resolution of sensors has been enhanced drastically. A critical issue in this case is decreasing acquisition and transfer time. CMOS technology holds some potential in this direction (i.e., pixels can be addressed independently, as in conventional memories).

In conventional vision systems, data processing takes place on a host computer. Building cameras with internal processing power (i.e., vision chips) is an important goal. The main idea is to integrate photodetectors with processors using very large scale integration (VLSI) [111]. Vision chips have many advantages over conventional vision systems, such as high speed, small size, low power consumption, and a wide brightness range. Several cameras available today allow some basic problems to be addressed and solved directly at the sensor level (e.g., image stabilization can now be performed during image acquisition).


9.3 Sensor Fusion

Developing driver assistance systems suitable for urban areas, where traffic signs, crossings, traffic jams, and other participants (motorbikes, bicycles, pedestrians, or even livestock) may exist, poses extra challenges. Exclusively
