Equip robotic systems with spatial sensing

Posted: 30 Aug 2013

Keywords: robotic systems, algorithms, Adaboost classifiers, DSPs, Embedded Vision Alliance

Each pixel in each frame is processed in this stage, so the number of operations per second is tremendous. In the case of stereo image processing, the two image planes must be simultaneously processed. For these kinds of operations, one option is a dedicated hardware block, sometimes referred to as an IPU (image processing unit). Recently introduced vision processors containing IPUs are able to handle two simultaneous image planes, each with 2048x1536 pixel (3+ million pixel) resolution, at robust frame rates.
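
To give a sense of scale, the back-of-the-envelope Python sketch below estimates the pixel throughput implied by the resolutions quoted above; the 30 frames/s rate and the 50 operations-per-pixel cost are illustrative assumptions, not figures from the article.

    # Rough throughput estimate for the pixel-level processing stage.
    # The frame rate and per-pixel operation count are assumed values.
    width, height = 2048, 1536    # per-plane resolution cited above
    planes = 2                    # stereo pair, processed simultaneously
    fps = 30                      # assumed frame rate
    ops_per_pixel = 50            # assumed cost (debayering, rectification, ...)

    pixels_per_second = width * height * planes * fps
    ops_per_second = pixels_per_second * ops_per_pixel
    print(f"{pixels_per_second / 1e6:.0f} Mpixel/s -> {ops_per_second / 1e9:.1f} GOPS")

Even with these modest assumptions, the pixel stage already approaches ten billion operations per second, which is why a dedicated hardware block is attractive here.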

The second processing stage handles feature detection, where corners, edges, and other significant image regions are extracted. This step still works on a pixel-by-pixel basis, so it again suits highly parallel architectures, this time ones capable of handling more complex mathematical functions such as first- and second-order derivatives.
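
As an illustration of this stage, the sketch below uses OpenCV's Python bindings to compute first-order (Sobel) derivatives and a second-order-derivative-based Harris corner response. The input filename is a placeholder, and this is only one of many possible feature detectors.

    import cv2
    import numpy as np

    # Load one camera frame as greyscale ("frame.png" is a placeholder path).
    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    # First-order derivatives: Sobel gradients give per-pixel edge strength.
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_32F, 0, 1, ksize=3)
    edge_strength = cv2.magnitude(gx, gy)

    # Second-order information: Harris corner response, thresholded to keep
    # only the strongest responses as candidate features.
    response = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
    corners = np.argwhere(response > 0.01 * response.max())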

DSPs (digital signal processors), FPGAs (field programmable gate arrays), GPUs (graphics processing units), IPUs, and APUs (array processor units) are all common processing options. DSPs and FPGAs are highly flexible, and therefore particularly appealing when applications (and the algorithms used to implement them) are immature and still evolving. This flexibility, however, can come with power consumption, performance, and cost trade-offs versus alternative approaches.

On the other end of the flexibility-versus-focus spectrum is the dedicated-function IPU or APU developed specifically for vision processing tasks. It can deliver several dozen billion operations per second but, being application-optimised, it cannot readily be leveraged for other functions. An intermediate point between these two extremes is the GPU, historically found in computers but now also embedded within the application processors used in smartphones, tablets, and other high-volume products.

Floating-point calculations such as the least-squares function in optical flow algorithms, descriptor calculations in SURF (the Speeded Up Robust Features algorithm used for fast significant point detection), and point cloud processing are well suited for highly parallel GPU architectures. Such algorithms can alternatively run on SIMD (single-instruction multiple-data) vector processing engines such as ARM's NEON or the AltiVec function block found within Power Architecture CPUs.
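
To make the least-squares step concrete, the NumPy sketch below solves the per-window Lucas-Kanade system, a common optical flow formulation (the article does not prescribe a specific algorithm). Each window's solve is independent of the others, which is exactly the structure that maps well onto GPUs and SIMD units.

    import numpy as np

    # Lucas-Kanade optical flow solves, for each small window, the
    # least-squares problem  A [u, v]^T ~= b, where A stacks the spatial
    # gradients [Ix Iy] over the window's pixels and b = -It is the
    # negated temporal (frame-to-frame) difference.
    def lk_flow_window(Ix, Iy, It):
        A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)    # N x 2 design matrix
        b = -It.ravel()                                    # N-vector
        (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)     # least-squares solve
        return u, v

    # Example: a 5x5 window of synthetic gradients.
    rng = np.random.default_rng(0)
    Ix, Iy, It = rng.standard_normal((3, 5, 5))
    print(lk_flow_window(Ix, Iy, It))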

In the third image processing stage, the system detects and classifies objects based on feature maps. In contrast to the pixel-based processing of previous stages, these object detection algorithms are highly non-linear in structure and in the ways they access data. However, strong processing 'muscle' is still required in order to evaluate many different features with a rich classification database.
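
One widely used detector of this kind is a boosted cascade (the Adaboost classifiers mentioned in the keywords); the OpenCV sketch below shows the general shape of such a stage. The cascade file and input image names are placeholders.

    import cv2

    # A boosted (Adaboost) cascade classifier scans the image at multiple
    # scales and returns bounding boxes for regions it accepts as objects.
    # The model and image paths are placeholders; OpenCV ships several
    # pretrained cascades (faces, pedestrians, and so on).
    cascade = cv2.CascadeClassifier("haarcascade_fullbody.xml")
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    detections = cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=3)
    for (x, y, w, h) in detections:
        print(f"object at ({x}, {y}), size {w}x{h}")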

Such requirements are ideal for single- and multi-core conventional processors, such as ARM- and Power Architecture-based RISC devices. This selection criterion is equally applicable for the fourth image processing stage, which tracks detected objects across multiple frames, implements a model of the environment, and assesses whether various situations should trigger actions.
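
A minimal sketch of the tracking part of this fourth stage is shown below: a constant-velocity Kalman filter that carries one detected object's position and velocity across frames. It is illustrative only; a complete system would add data association, track management, and the environment model described above.

    import numpy as np

    # Constant-velocity Kalman filter for one tracked object (illustrative).
    class Track:
        def __init__(self, x, y):
            self.state = np.array([x, y, 0.0, 0.0])          # [x, y, vx, vy]
            self.P = np.eye(4) * 10.0                         # state covariance
            self.F = np.array([[1, 0, 1, 0],                  # constant-velocity model
                               [0, 1, 0, 1],
                               [0, 0, 1, 0],
                               [0, 0, 0, 1]], dtype=float)
            self.H = np.array([[1, 0, 0, 0],
                               [0, 1, 0, 0]], dtype=float)    # only position is measured
            self.Q = np.eye(4) * 0.01                         # process noise
            self.R = np.eye(2) * 1.0                          # measurement noise

        def predict(self):
            self.state = self.F @ self.state
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.state[:2]

        def update(self, zx, zy):
            y = np.array([zx, zy]) - self.H @ self.state      # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)          # Kalman gain
            self.state = self.state + K @ y
            self.P = (np.eye(4) - K @ self.H) @ self.P

    # Example: predict forward one frame, then correct with a new detection.
    t = Track(10.0, 8.0)
    t.predict()
    t.update(12.0, 7.5)
    print(t.state[:2])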

Development environments, frameworks, and libraries such as OpenCL (the Open Computing Language), OpenCV (the Open Source Computer Vision Library), and MATLAB can simplify and speed software development and testing, enabling sections of an algorithm to be evaluated on different processing options and portions of a task to be allocated across multiple processing cores. Given the data-intensive nature of vision processing, when evaluating processors you should appraise not only the number of cores and the per-core speed but also each processor's data-handling capabilities, such as its external memory bus bandwidth.
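
As one concrete example of moving work between processing options, the sketch below uses OpenCV's transparent OpenCL API (cv2.UMat), which dispatches supported operations to an available OpenCL device and falls back to the CPU otherwise. The input filename is a placeholder.

    import cv2

    # OpenCV "transparent API": data wrapped in cv2.UMat is processed on an
    # OpenCL device (e.g. a GPU) when one is available, on the CPU otherwise.
    cv2.ocl.setUseOpenCL(True)

    src = cv2.UMat(cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE))
    blurred = cv2.GaussianBlur(src, (5, 5), 1.5)
    edges = cv2.Canny(blurred, 50, 150)

    result = edges.get()   # copy the result back to a host-side NumPy array
    print("OpenCL in use:", cv2.ocl.useOpenCL(), "| edge map shape:", result.shape)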

Industry alliance assistance
With the emergence of increasingly capable processors, image sensors, memories, and other semiconductor devices, along with robust algorithms, it's becoming practical to incorporate computer vision capabilities into a wide range of embedded systems. By 'embedded system', we mean any microprocessor-based system that isn't a general-purpose computer. Embedded vision, therefore, refers to the implementation of computer vision technology in embedded systems, mobile devices, special-purpose PCs, and the cloud.

Embedded vision technology has the potential to enable a wide range of electronic products (such as the robotic systems discussed in this article) that are more intelligent and responsive than before, and thus more valuable to users. It can add helpful features to existing products. And it can provide significant new markets for hardware, software, and semiconductor manufacturers.

Transforming a robotics vision processing idea into a shipping product entails careful discernment and compromise. The Embedded Vision Alliance catalyses conversations in a forum where trade-offs can be understood and resolved, and where the effort to productise advanced robotic systems can be accelerated, enabling system developers to effectively harness various vision technologies.


About the authors
Brian Dipert is Editor-In-Chief of the Embedded Vision Alliance. He is also a Senior Analyst at BDTI (Berkeley Design Technology, Inc.), which provides analysis, advice, and engineering for embedded processing technology and applications, and Editor-In-Chief of InsideDSP, BDTI's online newsletter dedicated to digital signal processing technology. Brian has a B.S. degree in Electrical Engineering from Purdue University in West Lafayette, IN. His professional career began at Magnavox Electronics Systems in Fort Wayne, IN; Brian subsequently spent eight years at Intel Corporation in Folsom, CA. He then spent 14 years at EDN Magazine.

Yves Legrand is the global vertical marketing director for Industrial Automation and Robotics at Freescale Semiconductor. He is based in France and has spent his professional career between Toulouse and the USA, where he worked for Motorola Semiconductor and Freescale in Phoenix and Chicago. His marketing expertise ranges from wireless and consumer semiconductor markets and applications to wireless charging and industrial automation systems. He has a Master's degree in Electrical Engineering from Grenoble INPG in France, as well as a Master's degree in Industrial and Systems Engineering from San Jose State University, CA.

Bruce Tannenbaum leads the technical marketing team at MathWorks for image processing and computer vision applications. Earlier in his career, he was a product manager at imaging-related semiconductor companies such as SoundVision and Pixel Magic, and developed computer vision and wavelet-based image compression algorithms as an engineer at Sarnoff Corporation (SRI). He holds a BSEE degree from Penn State University and an MSEE degree from the University of Michigan.
