EE Times-Asia > Embedded

Enable depth discernment in embedded vision apps

Posted: 27 Sep 2013

Keywords: embedded vision, image sensors, 2D, 3D, depth sensing

The term "embedded vision" refers to the utilisation of computer vision in embedded systems, mobile devices, PCs and the cloud. Stated another way, "embedded vision" refers to systems that extract meaning from visual inputs. Historically, such image analysis technology has only been found in complex, expensive systems such as military equipment, industrial robots, and quality-control inspection systems for manufacturing.

However, cost, performance, and power consumption advances in digital integrated circuits such as processors, memory devices, and image sensors are now paving the way for the proliferation of embedded vision into high-volume applications.

With a few notable exceptions, such as Microsoft's Kinect, a depth-sensing peripheral for game consoles and computers, the bulk of today's embedded vision system designs employ 2D image sensors. 2D sensors enable a tremendous breadth and depth of vision capabilities. However, the inability of 2D image sensors to discern an object's distance from the sensor can make it difficult or impossible to implement some vision functions. And clever workarounds, such as supplementing 2D sensed representations with already-known 3D models of identified objects (human hands, bodies, or faces, for example), can be too constraining.

In what kinds of applications would full 3D sensing be of notable value versus the more limited 2D alternative? Consider, for example, a gesture interface implementation.

The ability to discern motion not only up-and-down and side-to-side but also front-to-back greatly expands the variety, richness, and precision of the suite of gestures that a system can decode. Or consider face recognition, a biometrics application.

Depth sensing is valuable in determining that the object being sensed is an actual person's face, versus a photograph of that person's face. Alternative means of accomplishing this objective, such as requiring the biometric subject to blink during the sensing cycle, are inelegant in comparison.

Automotive advanced driver assistance system (ADAS) applications that benefit from 3D sensors are abundant. You can easily imagine, for example, the added value of being able to determine not only that another vehicle or object is in the roadway ahead of or behind you, but also to accurately discern its distance from you. Precisely determining the distance between your vehicle and a speed-limit-change sign is equally valuable in ascertaining how much time you have to slow down in order to avoid getting a ticket.

The need for accurate three-dimensional, no-contact scanning of real-life objects, whether for a medical instrument, in conjunction with increasingly popular 3D printers, or for some other purpose, is also obvious. And plenty of other compelling applications exist, such as 3D videoconferencing, manufacturing line "binning," and defect screening.

Stereoscopic vision
Stereoscopic vision (combining two 2D image sensors) is currently the most common 3D sensor approach. Passive (i.e., relying solely on ambient light) range determination via stereoscopic vision utilises the disparity in viewpoints between a pair of near-identical cameras to measure the distance to a subject of interest. In this approach, the centres of perspective of the two cameras are separated by a baseline or IPD (inter-pupillary distance) to generate the parallax necessary for depth measurement (figure 1). Typically, the cameras' optical axes are parallel to each other and orthogonal to the plane containing their centres of perspective.

Figure 1: Relative parallax shift as a function of distance. Subject A (nearby) induces a greater parallax than subject B (farther out), against a common background.

For a given subject distance, the IPD determines the angular separation θ of the subject as seen by the camera pair, and thus plays an important role in parallax detection. It dictates the operating range within which effective depth discrimination is possible, and it also influences depth resolution limits at various subject distances.
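For a rectified, parallel-axis camera pair like the one described above, the parallax relationship reduces to the standard triangulation formula Z = f·b/d, where f is the focal length in pixels, b is the baseline (IPD), and d is the measured disparity in pixels. The sketch below illustrates this with hypothetical focal-length and baseline values; note how the nearby subject (A) exhibits a much larger disparity than the distant one (B), matching figure 1.

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Return subject distance Z = f * b / d for a rectified stereo pair.

    focal_px    : focal length expressed in pixels
    baseline_m  : camera separation (IPD) in metres
    disparity_px: horizontal shift of the subject between the two images
    """
    if disparity_px <= 0:
        # Zero disparity corresponds to a subject at infinity.
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 800 px focal length, 6 cm baseline.
z_near = depth_from_disparity(800, 0.06, 32.0)  # nearby subject A: 1.5 m
z_far = depth_from_disparity(800, 0.06, 4.0)    # distant subject B: 12.0 m
```

The same relationship shows why depth resolution degrades with distance: because Z varies as 1/d, a one-pixel disparity error at 12 m produces a far larger range error than the same pixel error at 1.5 m, and why a wider IPD extends the usable operating range.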




