EE Times-Asia > Controls/MCUs

Improve car safety with smart sensors

Posted: 02 Apr 2007

Keywords: automotive safety design, intelligent sensors for car safety, improve automotive safety with ADAS

Automotive OEMs, their supplier partners and governments around the world have been working diligently to develop and promote active safety and advanced driver assistance systems (ADAS). Designed to either increase accident avoidance or reduce crash severity, ADAS is touted by automotive analysts as the top new technology for 2010. It increases driver awareness of possible hazards, potentially improving reaction times with features such as lane departure warning systems (LDWS), drowsiness detection and night vision.

As consumers become aware of the greater safety that ADAS provides, high commercial acceptance is anticipated to drive market demand for these features. ADAS has been implemented in luxury cars, and as the technology matures and trickles down to mass-market vehicles, higher volumes will bring cost economies. For automotive OEMs, active safety and ADAS provide the added benefit of offering important product differentiation, given that today's passive safety systems are quickly becoming standard features.

ADAS applications use various sensors to collect physical data about the vehicle and its surroundings. After collecting data, an ADAS will then use object detection, recognition and tracking processing techniques to evaluate threats. Two example applications are LDWS, which alerts the driver when an unintentional lane change is detected, and traffic-sign recognition. When implementing LDWS, the system detects and tracks road lanes relative to vehicle position to notify an inattentive driver that the vehicle is crossing over into the adjacent lane. For traffic-sign recognition, the system reminds the driver of the last posted speed limit or notifies the driver when the vehicle is traveling in a particular zone.
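The LDWS decision step described above can be reduced to a toy illustration. All names and thresholds here are assumptions for the sketch, not part of any real LDWS implementation: the system warns only when the lane crossing appears unintentional, i.e. when no turn signal is active.

```python
# Toy sketch of an LDWS warning decision (illustrative names/thresholds).
LANE_WIDTH_M = 3.5  # assumed lane width in meters

def lane_departure_warning(lateral_offset_m, turn_signal_on):
    # Warn when the lateral offset from lane center exceeds half the
    # lane width and the driver has not signaled a lane change.
    unintentional = not turn_signal_on
    crossing = abs(lateral_offset_m) > LANE_WIDTH_M / 2
    return crossing and unintentional
```

A deliberate lane change (signal on) suppresses the warning even at the same offset, which is what makes the detected departure "unintentional."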

Systems usually require various sensors to collect environmental information. LDWS uses CMOS camera sensors; night vision uses an infrared sensor; adaptive cruise control typically uses radar; and ultrasound aids in parking assist. Although the details of each application vary, the processing usually consists of three stages: data capture, pre-processing and post-processing. The pre-processing stage involves functions that apply to the full image, and are thus data-intensive and regular in structure. These include transformations of the image, stabilization, feature and signal enhancements, noise reduction, color conversion and motion analysis. The post-processing stage involves feature tracking, scene interpretation, system control and decision-making.
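The three-stage flow above can be sketched in miniature. This is a hypothetical illustration, not a real pipeline: the function names are invented, a 4x4 list of pixel values stands in for a camera frame, a horizontal smoothing pass stands in for the regular, data-intensive pre-processing, and a per-row brightness decision stands in for the irregular post-processing stage.

```python
# Illustrative three-stage ADAS flow: capture -> pre-process -> post-process.

def capture_frame():
    # Stand-in for a CMOS sensor read: a tiny 4x4 grayscale frame.
    return [[10, 12, 11, 13],
            [90, 95, 92, 94],
            [91, 93, 96, 90],
            [12, 11, 13, 10]]

def pre_process(frame):
    # Regular, full-image work: a simple horizontal smoothing window
    # stands in for noise reduction / feature enhancement.
    out = []
    for row in frame:
        smoothed = []
        for i in range(len(row)):
            window = row[max(0, i - 1):i + 2]
            smoothed.append(sum(window) // len(window))
        out.append(smoothed)
    return out

def post_process(frame):
    # Decision-oriented work: flag rows whose mean brightness exceeds
    # a threshold, standing in for tracking / scene interpretation.
    return [sum(row) / len(row) > 50 for row in frame]

decisions = post_process(pre_process(capture_frame()))
```

The split matters for hardware mapping: the pre-processing loop touches every pixel uniformly (a good fit for a DSP's parallel datapaths), while the post-processing stage is branch-heavy and data-dependent.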

Sensors used to collect environmental information produce data sets that are essentially images.

Recognizing, tracking and evaluating driving-related objects is a complex process. Driving styles and conditions affect the quality of raw data collected by sensors and can obscure important details necessary for recognizing and tracking objects. Drivers operate their vehicles in a highly dynamic and unpredictable fashion under different weather conditions, including bright sunlight, rain, fog and snow. To further complicate matters, all processing must be done in real-time with processing latency no greater than 30ms. Even a half-second of warning latency may be the difference between an accident and a driver responding in time to an alert.

Each step from data capture to action requires substantial signal-processing capabilities, making high performance essential for implementing active safety and ADAS accurately and in a timely fashion. DSPs specifically designed and optimized for automotive safety applications provide the needed performance, enabling OEMs to bring active safety and ADAS to market.

Dynamic flexibility
Besides high performance, ADAS applications require a flexible architecture to address many functions. For example, traffic signs differ from country to country across language, text font, shape and color. Flexibility is also needed to maximize IP reuse across product lines and to cost-effectively manage the quick pace of innovation typical in any emerging market.

Consider the pre-processing algorithms used to filter out effects of driving conditions. Some automotive suppliers use a single algorithm, while others use one algorithm for day processing and another for night driving. In reality, many pre- and post-processing algorithms may be necessary to cover various driving conditions. The system must also be able to adapt quickly, such as when a vehicle enters a tunnel, effectively changing from day to night driving in an instant.
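The tunnel scenario above amounts to selecting a pre-processing path from frame statistics. The sketch below is an assumption-laden illustration (the threshold, function names and placeholder pipelines are all invented): it picks a day or night path from mean frame brightness, which is one simple way such a switch could be keyed.

```python
# Illustrative day/night pipeline selection keyed on frame brightness.
DAY_THRESHOLD = 60  # assumed mean pixel value separating day from night

def mean_brightness(frame):
    pixels = [p for row in frame for p in row]
    return sum(pixels) / len(pixels)

def day_pipeline(frame):
    return "day"    # placeholder for daylight filtering

def night_pipeline(frame):
    return "night"  # placeholder for low-light filtering

def select_pipeline(frame):
    # Re-evaluated every frame, so entering a tunnel flips the path
    # within one frame time.
    if mean_brightness(frame) >= DAY_THRESHOLD:
        return day_pipeline
    return night_pipeline

sunny = [[200] * 4] * 4   # bright frame
tunnel = [[15] * 4] * 4   # dark frame
```

Because the selection runs per frame, no mode state needs to persist; a production system would typically add hysteresis so the path does not flicker near the threshold.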

Note that multiple sensors around a vehicle must perform different functions. Laterally-facing sensors handle blind-spot detection; front-facing sensors manage vehicle, lane, traffic-sign and pedestrian recognition; and sensors inside the vehicle perform occupancy sensing, and detect driver drowsiness and intent.

Additionally, different sensors process different kinds of data. Some traffic-sign recognition algorithms rely on sign color. In those cases, forward-facing sensors need to support a wide color scale. On the other hand, grayscale sensors are much more sensitive to variations in brightness, and they offer nearly double the spatial resolution of a color sensor. Most ADAS functions rely on high sensor sensitivity, so a grayscale camera is a better fit. It is also important to note that imaging sensors for ADAS applications usually have a high dynamic range, normally exceeding 8bits/pixel.
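To make the bit-depth point concrete: a sensor exceeding 8bits/pixel must be compressed before any 8-bit processing path. The sketch below assumes a 12-bit sensor and uses a plain linear scale; real systems typically apply nonlinear companding to preserve detail in dark regions, so this is only an illustration of the range reduction.

```python
# Compress an assumed 12-bit HDR pixel (0..4095) to 8 bits (0..255)
# with a linear scale; real pipelines usually use nonlinear companding.

def compress_12bit_to_8bit(pixel12):
    # Dropping the 4 low-order bits maps 0..4095 onto 0..255.
    return pixel12 >> 4
```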

Multiple sensors around a vehicle perform specific functions including pedestrian detection, night vision, lane-departure warning and parking assist.

The most efficient way to handle these challenges is to execute multiple algorithms on a single DSP. Ideally, a single DSP can perform all driving-condition pre-processing and multiple recognition tasks, such as lane-departure warning and traffic-sign recognition. This reduces chip count, leading to fewer points of failure, increased system reliability and lower system cost, all key drivers for automotive applications.

Today's SoC architectures enable further efficiencies by integrating all of the peripherals necessary for a complete video/image processing system within a single chip. With a wide range of peripheral support, today's highly integrated devices also make it easy to connect to the rest of the vehicle's systems.

SoC architectures also provide application-specific specialization without the cost of an ASIC implementation. Done right, an SoC can also maintain the flexibility of a programmable software architecture vs. an ASIC's rigidity.

Data transfer
The SoC architecture should also be designed to move data efficiently. As with any video application, the more often data must be moved, the greater the latency of processing. To increase system performance and maximize the use of level-one memory resources, developers usually limit processing to areas of interest. By focusing processing on specific areas of interest, the image block that needs to be processed is significantly smaller than when the entire image is processed and evaluated.

To support this type of data movement, the SoC needs a multichannel, multithreaded DMA engine. The DMA controller should support various transfer geometries and sequences. Transfers on previous-generation DMA controllers were limited to two dimensions and shared the same index parameters for source and destination. In contrast, the EDMA3 controller on the DaVinci processors supports independent source and destination indexes, and 3D transfers.
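A software model makes the independent-index point concrete. The sketch below is not a register-level model of EDMA3; it simply mimics a 2D transfer over flat lists standing in for memory, where source and destination advance by their own line pitches. That independence is what lets a DMA engine gather a sub-block from a wide frame buffer into a tightly packed on-chip region in one transfer.

```python
# Software model of a 2D DMA transfer with independent source and
# destination line pitches (flat lists stand in for memory regions).

def dma_2d(src, src_addr, src_pitch, dst, dst_addr, dst_pitch,
           line_len, num_lines):
    # Copy num_lines rows of line_len elements; each side advances
    # by its own pitch between rows.
    for line in range(num_lines):
        s = src_addr + line * src_pitch
        d = dst_addr + line * dst_pitch
        dst[d:d + line_len] = src[s:s + line_len]

wide = list(range(32))   # source: 4 rows x 8 cols, flattened
tight = [0] * 8          # destination: 4 rows x 2 cols, packed
dma_2d(wide, 3, 8, tight, 0, 2, 2, 4)  # gather a 4x2 sub-block
```

With a shared index, the destination would have to keep the source's wide pitch, wasting on-chip memory; separate pitches let the sub-block land contiguously.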

In addition to video input and processing, applications such as parking assistance also require video output. Video output capability can also be very useful during R&D and system debug stages, even if video out is not planned for production.

ADAS applications are cutting-edge and rapidly evolving, so developers need tools that simplify development and aid rapid prototyping. Thus, ADAS algorithm development works best when using C or modeling software, such as Simulink or MATLAB. Of course, a working system needs more than just algorithms. Hence, it is essential to have off-the-shelf software components such as the real-time kernel and peripheral drivers. It is also helpful to have off-the-shelf application-specific development tools and algorithm libraries.

- Brooke Williams
Marketing Manager

- Zoran Nikolic
Principal Architect

- Gaganjot Maur
Applications Engineer
Automotive & Machine Vision, Texas Instruments Inc.
