IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE,
constraints of low latency and the difficulty of transmitting and receiving video data reliably, most image processing must be done on site, on board the vehicle.
Computer vision algorithms are generally very computationally intensive and require powerful resources in order to comply with real-time performance constraints. With the increasing computing power of standard PCs, several systems have been demonstrated using general-purpose hardware. For example, our group has developed a vehicle detection system that works at a frame rate of approximately 10 frames per second (NTSC: processing on average every third frame) on a standard PC (Pentium III, 1133 MHz). Although we expect the development of more powerful, low-cost, general-purpose processors in the near future, specialized hardware solutions using off-the-shelf components (e.g., a cluster of PCs) seem to be the way to go at present.
Vehicle detection for precrash sensing requires a sufficiently high sampling rate in order to provide a satisfactory solution. If the vehicle's speed is about 70 mph, then 10 Hz corresponds to roughly a 3 m interval between processed frames. The most time-consuming step in our system is the computation of the vertical/horizontal edges. Most low-level image-processing algorithms employed for vehicle detection perform similar computations for all the pixels of an image and require only local information. Therefore, substantial speed-ups can be achieved by implementing them on appropriate hardware. Specialized hardware solutions are possible using low-cost general-purpose processors and Field Programmable Gate Arrays (FPGAs).
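The sampling-rate figure above follows from simple kinematics; a small sketch makes the relationship between speed, processing rate, and inter-frame travel distance explicit (the function name is ours, for illustration only):

```python
# Distance traveled between processed frames, as a function of vehicle
# speed and effective processing rate. This checks the figure quoted
# above: at about 70 mph, a 10 Hz processing rate means the vehicle
# covers roughly 3 m between consecutive processed frames.

MPH_TO_MPS = 1609.344 / 3600.0  # miles per hour -> meters per second

def interframe_distance_m(speed_mph: float, rate_hz: float) -> float:
    """Meters traveled by the vehicle between consecutive processed frames."""
    return speed_mph * MPH_TO_MPS / rate_hz

print(round(interframe_distance_m(70.0, 10.0), 2))  # ~3.13 m
```

Halving the processing rate doubles this interval, which is why the sampling rate directly bounds how finely a precrash system can track an approaching vehicle.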
To move things forward, exemplary strategies developed in related fields (e.g., face recognition and surveillance) should be adapted in order to develop, and make available to the broader scientific community, benchmarks and carefully designed evaluation procedures that enable performance evaluations in a consistent way. Relating the level of performance to the complexity of the driving scene is also of critical importance. Ideas developed in related fields (e.g., object recognition) should be adapted to allow more effective designs and meaningful evaluations.
An on-board vision sensor will face adverse operating conditions, and it may reach a point where it can no longer provide data of sufficient quality to meet minimum system performance requirements. In these cases, the driver assistance system may not be able to fulfill its responsibilities correctly (e.g., it may issue severe false alerts). A reliable driver assistance system should be able to evaluate its own performance and disable its operation when it can no longer provide reliable traffic information. We refer to this function as "failure detection." One possible option for failure detection is to use another sensor exclusively for this purpose, at the expense of additional cost. A better method might be to extract information for failure detection from the vision sensor itself. Some preliminary experiments have been reported in the scenario of distance detection using stereo vision, where the host vehicle and subject were both stationary. Further exploration of this issue remains to be carried out.
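One way to extract failure-detection information from the vision sensor itself is to monitor simple image statistics as health indicators. The sketch below is a minimal illustration of this idea; the metrics, thresholds, and function names are our own assumptions, not taken from any reported system:

```python
import numpy as np

# Hypothetical self-diagnosis for a vision sensor: if an incoming frame
# lacks contrast or edge content (e.g., lens obscured, heavy fog, sensor
# fault), flag it as unreliable so the assistance system can disable
# itself. Metrics and thresholds here are illustrative only.

CONTRAST_MIN = 10.0      # minimum acceptable gray-level std. deviation
EDGE_DENSITY_MIN = 0.01  # minimum fraction of strong-gradient pixels
GRAD_THRESHOLD = 20.0    # gradient magnitude counted as a "strong" edge

def sensor_healthy(frame: np.ndarray) -> bool:
    """Return False when the frame is too degraded to trust."""
    contrast = float(frame.std())
    gy, gx = np.gradient(frame.astype(float))
    edge_density = float((np.hypot(gx, gy) > GRAD_THRESHOLD).mean())
    return contrast >= CONTRAST_MIN and edge_density >= EDGE_DENSITY_MIN

# A flat (featureless) frame, as from an obscured lens, should be flagged:
flat = np.full((120, 160), 128, dtype=np.uint8)
print(sensor_healthy(flat))  # False
```

In practice such checks would be accumulated over many frames before disabling the system, to avoid reacting to a single degraded image.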
Recent advances in computation hardware allow us to build systems that can deliver high computational power, with fast networking facilities, at an affordable price. Several studies have taken advantage of hardware implementations to speed up computations, including edge-based motion detection, hardware-based optical flow estimation, object tracking, as well as feature detection and point tracking. Sarnoff has also developed a powerful image processing platform called VFE-200. VFE-200 can perform several front-end vision functions in hardware simultaneously at video rates (e.g., pyramids, registration, and optical flow). It is worth mentioning that most of the hardware implementations that have appeared in the literature address relatively small subproblems (e.g., motion detection, edge detection). Integrating all of these hardware components, as well as integrating hardware and software implementations seamlessly, requires more effort.
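To make the notion of a "front-end vision function" concrete, the following is a minimal software sketch of one of them, an image pyramid built by repeated blur-and-decimate. It is our own simplified illustration (a 3x3 box filter instead of the Gaussian kernel typically used), meant only to show the regular, local per-pixel computation that platforms such as VFE-200 move into hardware:

```python
import numpy as np

# Minimal blur-and-decimate image pyramid: each level smooths the
# previous one with a 3x3 box filter and halves its resolution.
# (Real pyramids typically use a 5-tap Gaussian kernel; a box filter
# is used here for brevity.)

def box_blur(img: np.ndarray) -> np.ndarray:
    """3x3 box filter with edge replication at the borders."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge").astype(float)
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def pyramid(img: np.ndarray, levels: int) -> list:
    """Return `levels` images, each half the resolution of the previous."""
    out = [img.astype(float)]
    for _ in range(levels - 1):
        out.append(box_blur(out[-1])[::2, ::2])
    return out

shapes = [level.shape for level in pyramid(np.zeros((64, 64)), 3)]
print(shapes)  # [(64, 64), (32, 32), (16, 16)]
```

Because every output pixel depends only on a small fixed neighborhood, each level parallelizes trivially, which is exactly what makes such functions attractive for FPGA or dedicated-hardware implementation.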
The majority of vehicle detection systems reported in the literature have not been tested under realistic conditions (e.g., different traffic scenarios including simply structured highway, complex urban street, and varying weather conditions). Moreover, evaluations are based on different data sets and performance measures, making comparisons between systems very difficult. Future efforts should focus on assessing system performance along a real collision timeline, taking into account driver perception-response times, braking rates, and various collision scenarios.
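Evaluating along a collision timeline reduces, in its simplest form, to standard stopping-distance kinematics: the distance at which a warning is useful must cover the ground traveled during the driver's perception-response time plus the braking distance. A brief sketch, with illustrative parameter values of our own choosing:

```python
# Standard stopping-distance kinematics for reasoning about a collision
# timeline. The warning distance is the reaction-time travel plus the
# braking distance v^2 / (2a). Parameter values below are illustrative.

def warning_distance_m(speed_mps: float, reaction_s: float,
                       decel_mps2: float) -> float:
    """Distance needed to stop: reaction-time travel + braking distance."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

# e.g., 30 m/s (~67 mph), 1.5 s perception-response, 7 m/s^2 braking:
print(round(warning_distance_m(30.0, 1.5, 7.0), 1))  # ~109.3 m
```

A detection system whose reliable range falls short of this distance for the speeds and braking rates of interest cannot support timely warnings, which is why range and sampling rate should be evaluated jointly against such scenarios.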
The field lacks representative data sets (i.e., benchmarks) and specific procedures that would allow comprehensive system evaluations and fair comparisons between different systems.
We have presented a survey of vision-based on-road vehicle detection systems—one of the most important components of any driver assistance system. On-road vehicle detection using optical sensors is very challenging, and many practical issues must be considered. Depending on the range of interest, different methods seem to be more appropriate. In HG, stereo-based methods have gained popularity, but they suffer from a number of practical issues not found in typical applications. Edge-based methods, although much simpler, are quite effective, but they are not appropriate for distant vehicles. In HV, appearance-based methods are more promising, but recent advances in machine and statistical learning need to be leveraged. Fusing data from multiple cues and sensors should be explored more actively in order to improve robustness and reliability. A great deal of work should also be directed toward the enhancement of sensor capabilities and performance, including the improvement of gain control and sensitivity in extreme illumination conditions. Hardware-based solutions using off-the-shelf components should also be explored to meet real-time constraints while keeping cost low.
Although we have witnessed the introduction of the first vision products on board vehicles in the automobile industry (e.g., the Lane Departure Warning System available in Mercedes and Freightliner's trucks), we believe that the widespread introduction of vision-based systems in the automobile industry is still several years away. In our perspective, the future holds promise for driver assistance systems that can be tailored to solve well-defined tasks that attempt to support, not replace, the driver. Even though several orders of improvement in sensor performance and