On-Road Vehicle Detection: A Review - page 14 / 18







Fig. 12. An example of sensor fusion.

vision-based systems and algorithms are not yet powerful enough to deal with complex traffic situations. To extend the applicability of driver assistance systems, substantial research effort is required to develop systems that effectively employ information from multiple sensors, both active and passive (see Fig. 12).

Sensor characteristics reveal that each sensor can perceive only certain aspects of the environment; therefore, a single sensor is not sufficient to comprehensively represent the driving environment [67], [37]. A multisensor approach has the potential to yield a higher level of reliability and security. Methods for sensor fusion and integration aim to improve sensing capacity by exploiting redundant and complementary information from multiple sensors. Together, such sensors can capture environment features more accurately than, or that are impossible to perceive with, any single sensor.

For example, acoustic sensors were fused with video sensors in [29] for both detection and tracking, in order to take advantage of the complementary information available in the two sensors. In another study, a multisensor approach was adopted using sensor technologies with widely overlapping fields of view between different sensors [112]. Depending on the relevance of the area covered, the degree of sensor redundancy varies: the rear of the vehicle is surveyed by means of a single laser sensor, the sides are each covered by two independent laser scanners and several overlapping short-range radar sensors, and the front of the car is covered by three powerful long-range sensors (i.e., stereo vision, laser, and radar). The sensor signals are combined by sensor fusion into a joint obstacle map. By considering confidence and reliability measures for each sensor, the obstacle map computed by sensor fusion was shown to be more precise and reliable than any of the individual sensor outputs themselves.
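The idea of weighting each sensor by a confidence or reliability measure can be made concrete with a small sketch. The following is a hypothetical illustration (not taken from [29] or [112]): each sensor reports an obstacle position together with a variance reflecting its reliability, and inverse-variance weighting produces a fused estimate whose variance is no worse than that of the best individual sensor.

```python
# Hypothetical sketch of confidence-weighted sensor fusion.
# Each measurement is a (position, variance) pair; the variance
# encodes the sensor's reliability for this quantity.

def fuse_measurements(measurements):
    """Fuse independent estimates by inverse-variance weighting."""
    total_weight = sum(1.0 / var for _, var in measurements)
    fused_pos = sum(pos / var for pos, var in measurements) / total_weight
    fused_var = 1.0 / total_weight   # never larger than the smallest input variance
    return fused_pos, fused_var

# Illustrative numbers: radar and laser are precise in range,
# stereo vision less so at distance (all values assumed).
radar = (42.3, 0.25)   # (metres, variance)
vision = (41.8, 1.0)
laser = (42.1, 0.16)

pos, var = fuse_measurements([radar, vision, laser])
```

The fused variance here is smaller than the laser's 0.16, which is the quantitative sense in which the joint obstacle map can beat every individual sensor output.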

Although sensor fusion has great potential to improve driver assistance systems, developing an actual multisensor platform requires dealing with a series of problems, including not only the conventional issues of sensor fusion and integration but also some issues specific to driver assistance system design. Given common geometric and time reference frames, sensor fusion needs to be implemented at several levels:



- Registration level. To fuse data from different sensors effectively, the sensor data must first be registered.
- Encapsulation level. Registered data from different sensors can be fused to yield more accurate information about the detected vehicles, based on the reliability/confidence levels of the attributes associated with different sensors. For instance, a more accurate position-velocity estimate could be obtained by analyzing the registered radar and stereo-vision data. In other words, at this level, the same type of information is encapsulated into a more accurate and concise representation.
- Perception-map level. Complementary information can be fused to infer new knowledge about the driving environment. The position-velocity information of detected vehicles and the road geometry information (from vision) can be fused to produce a primary perception map, where vehicles can be characterized as being either stationary/moving or inside/outside the lane.
- Threat-quantification level. Vehicle type, shape, distance, and speed information can be fused to quantify the threat level that a vehicle in the perception map poses to the host vehicle.
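The first two levels can be sketched in a few lines of code. This is a hypothetical illustration under assumed geometry: the function names, the radar mounting offset, and the confidence weights are all invented for the example; real systems would use calibrated transforms and a proper filter (e.g., Kalman) rather than fixed weights.

```python
import math

def register_radar(range_m, bearing_rad, mount_offset=(0.0, 1.2)):
    """Registration level: map a radar (range, bearing) return into
    the common x/y vehicle frame, given an assumed mounting offset."""
    x = mount_offset[0] + range_m * math.sin(bearing_rad)
    y = mount_offset[1] + range_m * math.cos(bearing_rad)
    return (x, y)

def encapsulate(p_radar, p_vision, w_radar=0.8, w_vision=0.2):
    """Encapsulation level: combine two registered position estimates
    of the same vehicle, weighted by (assumed) sensor confidence."""
    return tuple(w_radar * r + w_vision * v for r, v in zip(p_radar, p_vision))

radar_xy = register_radar(30.0, 0.05)   # ~30 m ahead, slightly to the right
vision_xy = (1.6, 31.5)                 # stereo-vision estimate of the same vehicle
fused_xy = encapsulate(radar_xy, vision_xy)
```

The perception-map and threat-quantification levels would then operate on such fused position-velocity tuples together with road geometry, rather than on raw sensor returns.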

Most vehicle detection approaches have been implemented as “autonomous systems” with all instrumentation and intelligence on-board the vehicle. Significant performance improvements can be expected, however, by implementing vehicle detection as “co-operative systems,” where assistance is provided from external sources (i.e., the roadway, other vehicles, or both). Examples of roadway assistance include passive reference markers in the infrastructure and GPS-based localization. Vehicle-to-vehicle co-operation works by transmitting key vehicle parameters and intentions to close-by vehicles. Having this information available, as well as knowing the type of surrounding environment through GPS, might reduce the complexity of the problem and make vehicle detection more reliable and robust.




Vision-based vehicle detection systems should be modular, reconfigurable, and extensible to be able to deal with a wide variety of image processing tasks. The functionality of most vehicle detection systems today is achieved by a few algorithms that are hard-wired together. This is quite inefficient and cannot satisfactorily handle the complexities involved. Recently, there have been some efforts to develop a software architecture that can deal with different levels of abstraction, including sensor fusion, integration of various algorithms, economical use of resources, scalability, and distributed computing. For example, a multiagent-system approach was proposed in [13] (i.e., ANTS, or Agent NeTwork System) to address these issues. In another study [85], a hard real-time operating system called “Maruti” was used to guarantee that the timing constraints on the various vision processes were satisfied. The dynamic creation and termination of tracking processes optimized the amount of computational resources spent and allowed fast detection and tracking of multiple cars. Clearly, more effort in this area is essential.
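The dynamic creation and termination of tracking processes mentioned above can be illustrated with a minimal sketch. This is not the mechanism of [85] (which relied on Maruti's real-time scheduling); it is a hypothetical single-process analogue in which a tracker is spawned per detected vehicle and retired after a few unmatched frames, so computation is spent only on vehicles actually in view.

```python
# Hypothetical sketch: per-vehicle trackers created on detection and
# terminated after MAX_MISSES consecutive frames without a match.

class TrackerPool:
    MAX_MISSES = 3

    def __init__(self):
        self.trackers = {}   # track_id -> consecutive missed frames
        self.next_id = 0

    def update(self, matched_ids, new_detections):
        for tid in list(self.trackers):
            # Refresh matched trackers; age unmatched ones.
            self.trackers[tid] = 0 if tid in matched_ids else self.trackers[tid] + 1
            if self.trackers[tid] > self.MAX_MISSES:
                del self.trackers[tid]          # terminate a stale tracker
        for _ in range(new_detections):
            self.trackers[self.next_id] = 0     # spawn a tracker for a new vehicle
            self.next_id += 1

pool = TrackerPool()
pool.update(matched_ids=set(), new_detections=2)    # two vehicles appear
for _ in range(4):
    pool.update(matched_ids={0}, new_detections=0)  # vehicle 1 goes unmatched
```

After four unmatched frames, the second tracker has been terminated while the first is still alive, mirroring the resource-optimization behaviour described in [85].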




On-board vehicle detection systems have high computational requirements, as they need to process the acquired images in real time or near real time to leave time for driver reaction. For nontrivial vehicle velocities, processing latency should be small (i.e., typically no larger than 100 ms), while processing frequency should be high (i.e., typically in excess of 15 frames per second). Due to the
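A back-of-the-envelope calculation shows why these timing figures matter. The highway speed used below is an assumed example value; the 100 ms latency and 15 fps figures come from the text.

```python
# How far does the host vehicle travel during the processing budget?
speed_mps = 30.0    # assumed highway speed, ~108 km/h
latency_s = 0.100   # 100 ms end-to-end processing latency (from the text)
fps = 15.0          # processing frequency (from the text)

distance_during_latency = speed_mps * latency_s   # metres covered before a result is available
inter_frame_gap_m = speed_mps / fps               # metres between consecutive processed frames
```

At 30 m/s, a 100 ms latency means the vehicle has already moved 3 m by the time a detection is reported, and at 15 fps successive frames are 2 m apart along the road, which is why latency and frequency both need tight bounds.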
