Exploring ADAS stereo vision apps (Part 1)

Posted: 01 Dec 2015

Keywords: advanced driver assistance systems, ADAS, stereo vision, camera sensors, 3D

If a single camera sensor is mounted and captures video that needs to be processed and analysed, the system is called a monocular (single-eyed) system. A system with two cameras separated from each other is called a stereo-vision system. The table compares the basic attributes of a monocular camera ADAS with a stereo camera system.

The monocular camera video system can do many things reasonably well. The system and the analytics behind it identify lanes, pedestrians, many traffic signs, and other vehicles in the path of the car, all with good accuracy. Where the monocular system is less robust and reliable is in calculating a 3D view of the world from the planar 2D frame it receives from the single camera sensor. That is not so surprising if we consider that humans and most advanced animals are born with two eyes. Figure 3 describes at a high level the process and algorithms used to analyse the video (image) frame received from a camera sensor.

Table: Here is a high-level comparison of system attributes for a mono vs. stereo camera ADAS system.

Figure 3: This high-level algorithm flow is used to analyse an image in an ADAS system.
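To make the stereo advantage concrete, here is a minimal sketch in Python (using only NumPy) of how two horizontally separated, rectified cameras turn pixel disparity into metric depth via the standard pinhole relation Z = f·B/d. The focal length, baseline, and disparity values are illustrative assumptions for the example, not figures from the article.

```python
import numpy as np

# Illustrative (assumed) parameters for a rectified stereo camera pair.
focal_length_px = 800.0   # focal length expressed in pixels
baseline_m = 0.30         # horizontal separation between the two cameras, in metres

# Disparity: horizontal shift (in pixels) of the same scene point between
# the left and right images. Larger disparity means a closer object.
disparities_px = np.array([2.0, 8.0, 32.0, 64.0])

# Pinhole-model depth recovery: Z = f * B / d
depths_m = focal_length_px * baseline_m / disparities_px

for d, z in zip(disparities_px, depths_m):
    print(f"disparity {d:5.1f} px  ->  depth {z:6.2f} m")
```

A monocular camera observes only the pixel position, not the disparity between two views, which is why it must infer depth indirectly (for example from object size or motion) rather than measuring it geometrically.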

The first stage in figure 3 is the image pre-processing step. In this stage, various filters are run on the image, typically on every pixel, to remove sensor noise and other unwanted information. This stage also converts the RAW data received from the camera sensor into a YUV or RGB format that can be analysed by subsequent steps. On the basis of the preliminary feature extraction (edges, Gabor filters, Histogram of Oriented Gradients, etc.) performed in this first stage, the second and third stages further analyse the image to identify regions of interest by running algorithms such as segmentation, optical flow, block matching, and pattern recognition.
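As a rough illustration of these first stages, the sketch below (Python with OpenCV and NumPy) demosaics a RAW frame, denoises it, and extracts simple edge features that later stages could use to propose regions of interest. The synthetic Bayer frame, filter sizes, and threshold are assumptions made only for this example and do not come from the article.

```python
import numpy as np
import cv2

# Synthetic stand-in for a RAW Bayer frame from the camera sensor
# (8-bit, single channel). A real ADAS pipeline reads this from the imager.
raw_bayer = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

# Stage 1a: convert RAW sensor data into an RGB representation (demosaic),
# then derive a grayscale plane for feature extraction.
rgb = cv2.cvtColor(raw_bayer, cv2.COLOR_BayerBG2BGR)
gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)

# Stage 1b: per-pixel filtering to suppress sensor noise.
denoised = cv2.GaussianBlur(gray, (5, 5), sigmaX=1.0)

# Stage 1c: preliminary feature extraction (edge gradients; Gabor or HOG
# responses would be computed in a similar per-pixel fashion).
grad_x = cv2.Sobel(denoised, cv2.CV_32F, 1, 0, ksize=3)
grad_y = cv2.Sobel(denoised, cv2.CV_32F, 0, 1, ksize=3)
edge_magnitude = cv2.magnitude(grad_x, grad_y)

# Stages 2-3 (sketched): threshold the edge map to flag candidate
# region-of-interest pixels for segmentation / pattern recognition.
_, roi_mask = cv2.threshold(edge_magnitude, 100.0, 255.0, cv2.THRESH_BINARY)
print("candidate ROI pixels:", int(np.count_nonzero(roi_mask)))
```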

The final stage uses the region information and feature data generated by the prior stages to make decisions about the class of object in each region of interest. Admittedly, this brief explanation does not do full justice to the field of ADAS image-processing algorithms. However, since the primary objective of this article is to highlight the additional challenges and the robustness that a stereo-vision system provides, this block-level algorithmic overview is sufficient background for us to delve deeper into the topic in Part 2.
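As a deliberately over-simplified sketch of this last stage, the snippet below assigns a class to each candidate region from its bounding-box geometry alone. The aspect-ratio thresholds and region sizes are assumptions made for the example, not rules from the article; a production ADAS system would feed much richer features into trained pattern-recognition models.

```python
# Toy final-stage classifier: decide an object class for each region of
# interest from simple geometric features (width and height in pixels).
def classify_region(width_px: int, height_px: int) -> str:
    aspect = height_px / max(width_px, 1)
    if aspect > 1.8:
        return "pedestrian-like (tall, narrow)"
    if aspect < 0.9:
        return "vehicle-like (wide, low)"
    return "unknown / needs further analysis"

# Hypothetical regions of interest proposed by the earlier stages.
regions = [(40, 110), (180, 120), (60, 65)]
for w, h in regions:
    print((w, h), "->", classify_region(w, h))
```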

About the author
Aish Dubey is a Safety Architect.

