Vision in wearables: Broader applications, functions

Posted: 26 Sep 2014

Keywords: wearables, processors, image sensors, vision processing, smartphone

For 3-D discernment, a depth map can be generated on the wearable device (with the processing load varying according to the specific depth camera technology chosen), with the resulting point cloud then sent to an external device for classification or (for AR) camera pose estimation. Regardless of whether post-processing occurs on a locally tethered device or in the cloud, some amount of pre-processing directly on the wearable device is still desirable in order to reduce the data transferred locally over Bluetooth or Wi-Fi (thereby saving battery life) or over a cellular wireless broadband connection to the Internet.
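
To make the bandwidth argument concrete, the Python sketch below shows one plausible form of on-device pre-processing: back-projecting a depth frame into a point cloud, voxel-downsampling it, and compressing the result before it leaves the device. The camera intrinsics, resolution, and voxel size are illustrative assumptions, not values taken from the article or from any particular depth camera.

# Minimal sketch of on-device pre-processing before offloading a point cloud.
# All parameter values (resolution, intrinsics, voxel size) are assumptions.
import numpy as np
import zlib

def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into an Nx3 point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth_m > 0                      # drop invalid/zero depth pixels
    z = depth_m[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.column_stack((x, y, z)).astype(np.float32)

def prepare_for_upload(points, voxel_size=0.02):
    """Voxel-downsample and compress so less data crosses the radio link."""
    idx = np.unique(np.floor(points / voxel_size), axis=0, return_index=True)[1]
    reduced = points[idx]                    # one representative point per voxel
    return zlib.compress(reduced.tobytes())  # payload for Bluetooth/Wi-Fi/cloud

# Example with a synthetic 240x320 depth frame and assumed intrinsics.
depth = np.random.uniform(0.5, 3.0, (240, 320)).astype(np.float32)
cloud = depth_to_point_cloud(depth, fx=300.0, fy=300.0, cx=160.0, cy=120.0)
payload = prepare_for_upload(cloud)
print(len(cloud), "points reduced to", len(payload), "compressed bytes")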

Even in cases like these, where vision processing is split between the wearable device and other devices, the computer vision algorithms running on the wearable itself still require significant computation. Feature detection and matching typically rely on algorithms such as SURF (Speeded Up Robust Features) or SIFT (the Scale-Invariant Feature Transform), which are notably challenging to execute in real time on conventional processor architectures.
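
As a rough illustration of the per-frame work involved, the short Python/OpenCV sketch below detects and matches SIFT features between two frames. The image file names are placeholders, and SIFT is used here simply because it ships in mainline OpenCV (4.4 and later), whereas SURF requires the contrib build.

# Hedged sketch of SIFT feature detection and matching with OpenCV;
# the image paths are placeholders, not files referenced by the article.
import cv2

img1 = cv2.imread("frame_wearable.png", cv2.IMREAD_GRAYSCALE)   # assumed input
img2 = cv2.imread("frame_reference.png", cv2.IMREAD_GRAYSCALE)  # assumed input

sift = cv2.SIFT_create()                     # SIFT is in mainline OpenCV >= 4.4
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to reject ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = []
for pair in matcher.knnMatch(des1, des2, k=2):
    if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
        good.append(pair[0])
print(f"{len(kp1)} and {len(kp2)} keypoints, {len(good)} good matches")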

While some feature matching algorithms, such as BRIEF (Binary Robust Independent Elementary Features) combined with a lightweight feature detector, deliver lighter processing loads while still matching reliably, providing real-time performance at the required power consumption levels remains a significant challenge. Disparity mapping for stereo matching to produce a 3-D depth map is also compute-intensive, particularly when high-quality results are needed. The vision processing requirements of various wearable applications will therefore continue to stimulate demand for optimised vision processor architectures.
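
For comparison, the sketch below pairs ORB, whose FAST detector and BRIEF-style binary descriptor are matched with cheap Hamming distances, with OpenCV's block-matching disparity computation on a rectified stereo pair. File names and parameter values are assumptions for illustration only.

# Lighter-weight pipeline: ORB (FAST detector + BRIEF-style binary descriptor)
# for matching, and block matching for a stereo disparity map.
import cv2

left = cv2.imread("stereo_left.png", cv2.IMREAD_GRAYSCALE)    # assumed input
right = cv2.imread("stereo_right.png", cv2.IMREAD_GRAYSCALE)  # assumed input

# Binary descriptors are compared with Hamming distance, far cheaper than the
# floating-point L2 distance used for SIFT/SURF descriptors.
orb = cv2.ORB_create(nfeatures=500)
kp_l, des_l = orb.detectAndCompute(left, None)
kp_r, des_r = orb.detectAndCompute(right, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_l, des_r)

# Dense disparity from a rectified pair; numDisparities must be a multiple of 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)
print(len(matches), "ORB matches; disparity map shape:", disparity.shape)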

Industry assistance
The opportunity for vision technology to expand the capabilities of wearable devices is part of a much larger trend. From consumer electronics to automotive safety systems, vision technology is enabling a wide range of products that are more intelligent and responsive than before, and thus more valuable to users. The Embedded Vision Alliance uses the term 'embedded vision' to refer to this growing use of practical computer vision technology in embedded systems, mobile devices, special-purpose PCs, and the cloud, with wearable devices being one showcase application.

Vision processing can add valuable capabilities to existing products, such as the vision-enhanced wearables discussed in this article. And it can provide significant new markets for hardware, software, and semiconductor suppliers. The Embedded Vision Alliance, a worldwide organisation of technology developers and providers, is working to empower product creators to transform this potential into reality. CEVA, CogniVue, and SoftKinetic, the co-authors of this article, are members of the Embedded Vision Alliance.

About the authors
Brian Dipert is Editor-In-Chief of the Embedded Vision Alliance. He is also a Senior Analyst at BDTI (Berkeley Design Technology, Inc.), which provides analysis, advice, and engineering for embedded processing technology and applications, and Editor-In-Chief of InsideDSP, the company's online newsletter dedicated to digital signal processing technology. Brian has a B.S. degree in Electrical Engineering from Purdue University in West Lafayette, IN. His professional career began at Magnavox Electronics Systems in Fort Wayne, IN; Brian subsequently spent eight years at Intel Corporation in Folsom, CA. He then spent 14 years at EDN Magazine.

Ron Shalom is the Marketing Manager for Multimedia Applications at CEVA DSP. He holds an MBA from Tel Aviv University's Recanati Business School. Ron has over 15 years of experience in the embedded world: 9 years in software development and R&D management roles, and 6 years as a marketing manager. He has worked at CEVA for 10 years: 4 years as a team leader in software codecs, and 6 years as a product marketing manager.

Tom Wilson is Vice President of Business Development at CogniVue Corporation, with more than 20 years of experience in application areas such as consumer, automotive, and telecommunications. He has held leadership roles in engineering, sales, and product management, and holds Bachelor of Science and PhD degrees from Carleton University in Ottawa, Canada.

Tim Droz is Senior Vice President and General Manager of SoftKinetic North America, delivering 3D time-of-flight (TOF) image sensors, 3D cameras, and gesture recognition and other depth-based software solutions. Prior to SoftKinetic, he was Vice President of Platform Engineering and head of the Entertainment Solutions Business Unit at Canesta, acquired by Microsoft. Tim earned a BSEE from the University of Virginia and an M.S. degree in Electrical and Computer Engineering from North Carolina State University.

