EE Times-Asia > Sensors/MEMS

Exploring ADAS stereo vision apps (Part 1)

Posted: 01 Dec 2015

Keywords: advanced driver assistance systems, ADAS, stereo vision, camera sensors, 3D

Cameras are the most precise mechanism available for capturing accurate, high-resolution data. Like human eyes, cameras record the detail, texture, and vividness of a scene with a fidelity that no other sensor, including radar, ultrasonic, and laser, can match. Prehistoric paintings discovered in caves across the world are testament that pictures and paintings, coupled with the visual sense, have long been the preferred way to convey accurate information.

The next engineering frontier, and some say the most challenging for the technology community, is real-time machine vision and intelligence. Applications range from real-time, analytics-based surgical robots to cars that drive themselves autonomously. In this article, we will focus on advanced driver assistance system (ADAS) applications and how cameras in general, and stereo vision in particular, form the basis for safe autonomous cars that can "see and drive" themselves.

Key ADAS applications are shown below (figure 1). Many of these can be implemented with a vision system of forward-, rear-, and side-mounted cameras for pedestrian detection, traffic-sign recognition, blind-spot monitoring, and lane detection. Others, such as intelligent adaptive cruise control, are implemented more robustly by fusing radar data with camera sensors, especially in complex scenarios such as city traffic, curved roads, or higher speeds.

Figure 1: Applications of camera sensors for ADAS in a modern vehicle. The forward-facing camera is used for lane detection, pedestrian detection, traffic sign recognition, and emergency braking, while side- and rear-facing cameras are for parking assistance, blind spot detection, and cross traffic alerts.

Selecting the camera for the job
All real-world scenes that a camera encounters are three-dimensional. Objects at different depths in the real world may appear adjacent to each other in the two-dimensional mapped world of a camera sensor. Figure 2 shows a photo from the Middlebury image dataset. Clearly, the motorbike in the foreground is approximately two meters closer to the camera than the storage shelf in the background. To see this, look at points 1 and 2 annotated in the figure. The red box (point 1) in the background appears adjacent to the forks (point 2) of the bike in the captured image, even though it is at least two meters farther from the camera. The human brain's sense of perspective lets us judge depth from a 2D scene; for a forward-mounted camera in a car, analysing perspective is not so easy.

Figure 2: This image from the Middlebury database shows a motorbike in the foreground is much closer to the camera than the storage shelf, though all objects appear adjacent in a 2D mapped view.
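Stereo vision recovers the depth that a single 2D image discards: in a rectified stereo pair, a point's horizontal shift (disparity) between the left and right images is inversely proportional to its distance. As a minimal sketch of that relationship, the snippet below applies the standard pinhole-stereo formula Z = f·B/d; the focal length and baseline values are illustrative assumptions, not figures from this article.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth Z (metres) of a point from its stereo disparity.

    Pinhole stereo model for a rectified pair:
        Z = focal_px * baseline_m / disparity_px
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point visible in both views)")
    return focal_px * baseline_m / disparity_px


# Hypothetical rig: focal length 1000 px, cameras 0.12 m apart.
# A feature shifted 60 px between the two images lies at
# 1000 * 0.12 / 60 = 2.0 m from the cameras.
print(depth_from_disparity(60, 1000.0, 0.12))  # -> 2.0
```

Note the inverse relationship: halving the disparity doubles the estimated depth, which is why distant objects (small disparities) have coarser depth resolution than nearby ones.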



