
Vision in wearables: Broader applications, functions

Posted: 26 Sep 2014

Keywords: wearables, processors, image sensors, vision processing, smartphone

Consider that object recognition enables you to comparison-shop by displaying a list of prices offered by online and brick-and-mortar merchants for the product you're currently looking at. Consider that this same object recognition capability, combined via 'sensor fusion' with GPS, compass, barometer/altimeter, accelerometer, gyroscope, and other facilities, enables those same smart glasses to overlay augmented reality information on your holiday sightseeing scenes. And consider that facial recognition will someday provide augmented data about the person standing in front of you, whose name you may or may not already recall.

Trendsetting current products suggest that these concepts will all become mainstream in the near future. Amazon's Fire Phone, for example, offers Firefly vision processing technology, which enables a user to "quickly identify printed web and email addresses, phone numbers, QR and bar codes, plus over 100 million items, including movies, TV episodes, songs, and products."

OrCam's smart camera accessory for glasses operates similarly; intended for the visually impaired, it recognises text and products, and speaks to the user via a bone-conduction earpiece. And although real-time individual identification via facial analysis may not yet be feasible in a wearable device, a system developed by the Fraunhofer Institute already enables accurate discernment of the age, gender, and emotional state of the person your Google Glass set is looking at.

While a single camera is capable of implementing such features, speed and accuracy improve when depth sensing is added. Smart glasses' dual-lens arrangement is a natural fit for a stereoscopic, depth-discerning dual-camera setup; other 3D sensing technologies, such as time-of-flight and structured light, are also possibilities.
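
To make the stereoscopic approach concrete, the sketch below converts a disparity map (the per-pixel horizontal shift of a feature between the left and right camera views) into metric depth using the standard relation Z = f * B / d. The focal length and baseline used here are hypothetical illustration values, not the specifications of any particular smart-glasses design.

import numpy as np

# Minimal sketch: recover depth from stereo disparity via Z = f * B / d.
# FOCAL_LENGTH_PX and BASELINE_M are hypothetical placeholder values.
FOCAL_LENGTH_PX = 700.0   # focal length, expressed in pixels
BASELINE_M = 0.06         # ~6 cm separation between the two lenses

def disparity_to_depth(disparity_px):
    """Convert a per-pixel disparity map (pixels) to depth (metres)."""
    d = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full(d.shape, np.inf)   # zero disparity = infinitely far
    valid = d > 0
    depth[valid] = FOCAL_LENGTH_PX * BASELINE_M / d[valid]
    return depth

# A 70-pixel shift corresponds to roughly 0.6 m, while a 1-pixel shift
# maps to 42 m, illustrating why distant objects are hard to resolve.
print(disparity_to_depth(np.array([70.0, 10.0, 1.0])))

One reason time-of-flight and structured-light sensors are attractive alternatives is that they measure depth directly rather than searching for pixel correspondences, so they cope better with low-texture scenes where stereo matching struggles.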

And, compared with a smartphone or tablet, smart glasses' thicker form factor can accommodate deeper, higher-quality optics. 3D sensors are also beneficial in accurately discerning the finely detailed gestures used to control the glasses' various functions, in addition to (or instead of) button presses, voice commands, and Tourette Syndrome-reminiscent head twitches.

Point of view (POV) cameras are another wearable product category that can benefit from vision processing-enabled capabilities (figure 2). Currently, they're most commonly used to capture the wearer's experiences during challenging activities such as bicycling, motorcycling, snowboarding, surfing, and the like. In such cases, a gesture-based interface to start and stop recording may be preferable to button presses, which can be difficult or impossible with thick gloves on, or when fingers are otherwise occupied.

Figure 2: The point of view (POV) camera is increasingly "hot", as GoPro's recent initial public offering and subsequent stock-price doubling exemplify (top). With both the POV camera and the related (and more embryonic) 'life camera', which has experienced rapid product evolution (bottom), intelligent image post-processing to cull uninteresting portions of the content is a valuable capability.

