Vision in wearables: Broader applications, functions

Posted: 26 Sep 2014

Keywords: wearables, processors, image sensors, vision processing, smartphone

Object detection and recognition will also be a key component of augmented reality (AR) across a number of applications for wearables, ranging from gaming to social media, advertising, and navigation. Natural feature recognition for AR applications uses a feature matching approach, recognising objects by matching features in a query image to a source image. The result is a flexible ability to train applications on images and use natural feature recognition for AR, employing wirelessly delivered, augmented information coming from social media, encyclopaedias, or other online sources, and displayed using graphic overlays.
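
As a rough illustration of this feature-matching approach, the sketch below checks whether a trained source image appears in a query frame using OpenCV's ORB features and a brute-force Hamming matcher. The file names and the match-count threshold are illustrative assumptions, not anything prescribed by the AR toolsets discussed here; a full AR pipeline would also estimate a homography from the matched keypoints so the graphic overlay tracks the object's pose.

```python
# Minimal sketch of natural-feature recognition by feature matching,
# using OpenCV's ORB detector and a brute-force Hamming matcher.
import cv2

def recognise(query_path, source_path, min_matches=25):
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    source = cv2.imread(source_path, cv2.IMREAD_GRAYSCALE)

    # Detect ORB keypoints and compute binary descriptors in both images.
    orb = cv2.ORB_create(nfeatures=1000)
    _, q_desc = orb.detectAndCompute(query, None)
    _, s_desc = orb.detectAndCompute(source, None)

    # Brute-force Hamming matching with a ratio test to drop
    # ambiguous correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(q_desc, s_desc, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < 0.75 * n.distance]

    # Enough consistent matches means the trained object appears in the
    # query image and can anchor the AR overlay.
    return len(good) >= min_matches

print("recognised:", recognise("query.jpg", "trained_object.jpg"))
```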

Natural feature detection and tracking avoids the need to use more restrictive marker-based approaches, wherein pre-defined fiducial markers are required to trigger the AR experience, although it's more challenging than marker-based AR from a processing standpoint. Trendsetting feature-tracking applications can be built today using toolsets such as Catchoom's CraftAR, and as the approach becomes more pervasive, it will allow for real-time recognition of objects in users' surroundings, with an associated AR experience.

Adding depth sensing to the AR experience brings surfaces and rooms to life in retail and other markets. The IKEA Catalogue AR application, for example, gives you the ability to place virtual furniture in your own home using a mobile electronics device containing a conventional camera. You start by scanning a piece of furniture in an IKEA catalogue page, and then "use the catalogue itself to judge the approximate scale of the furnishings - measuring the size of the catalogue itself (laid on the floor) in the camera and creating an augmented reality image of the furnishings so it appears correctly in the room."
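
A minimal sketch of that scale trick, assuming a known printed catalogue height and a simplified straight-on view of the floor; all numbers below are illustrative assumptions, and a real implementation would account for perspective.

```python
CATALOGUE_HEIGHT_M = 0.28   # assumed physical height of the printed catalogue

def metres_per_pixel(catalogue_height_px):
    # The catalogue of known size, laid on the floor and measured in the
    # camera image, gives the scene's approximate scale factor.
    return CATALOGUE_HEIGHT_M / catalogue_height_px

def render_height_px(furniture_height_m, catalogue_height_px):
    # Size the virtual furnishing so it appears correctly in the room.
    return furniture_height_m / metres_per_pixel(catalogue_height_px)

# A 0.75 m-tall table when the catalogue spans 140 pixels in the frame:
print(render_height_px(0.75, 140))   # ~375 pixels
```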

With the addition of a depth sensor in a tablet or cell phone, such as one of Google's prototype Project Tango devices, the need for the physical catalogue as a measuring device is eliminated as the physical dimensions of the room are measured directly, and the furnishings in the catalogue can be accurately placed to scale in the scene.

Not just hand waving
Wearable devices can include various types of human/machine interfaces (HMIs). These interfaces can be classified into two main categories: behaviour analysis and intention analysis. Behaviour analysis uses the vision-enabled wearable device for functions such as sign language translation and lip reading, along with behaviour interpretation for various security and surveillance applications. Intention analysis for device control includes such vision-based functions as gesture recognition, gaze tracking, and emotion detection, along with voice commands. By means of intention analysis, a user can control the wearable device and transfer relevant information to it for various activities such as games and AR applications.

Intention analysis use cases can also involve wake-up mechanisms for the wearable. For example, a smart watch with a camera that is otherwise in sleep mode may keep a small amount of power allocated to the image sensor and a vision-processing core to enable a vision-based wake-up system. The implementation might involve a simple gesture (like a hand wave) in combination with face detection (to confirm that the discerned object motion was human-sourced) to activate the device. Such vision processing needs to occur at ~1 mA current draw levels in order not to adversely impact battery life.
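
One possible shape for such a wake-up path is sketched below: a cheap frame-differencing motion check gates a comparatively costly face detector, in the spirit of the hand-wave-plus-face-detection scheme described above. The thresholds, the downscaled greyscale frames, and the use of OpenCV's Haar cascade are assumptions for illustration, not a recipe for hitting the ~1 mA budget.

```python
# Minimal sketch of a two-stage vision wake-up on low-resolution
# greyscale frames from an always-on sensor. Thresholds are illustrative.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def should_wake(prev_frame, frame, motion_threshold=20, motion_pixels=500):
    # Stage 1: low-cost motion check (e.g. a hand wave) via frame differencing.
    diff = cv2.absdiff(prev_frame, frame)
    _, mask = cv2.threshold(diff, motion_threshold, 255, cv2.THRESH_BINARY)
    if cv2.countNonZero(mask) < motion_pixels:
        return False

    # Stage 2: only if motion is seen, run the costlier face detector to
    # confirm the motion was human-sourced before waking the device.
    faces = face_cascade.detectMultiScale(frame, scaleFactor=1.2, minNeighbors=5)
    return len(faces) > 0
```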

Photographic intelligence
Wearable devices will drive computational photography forward by enabling more advanced camera sub-systems and in general presenting new opportunities for image capture and vision processing. For example, smart glasses' deeper form factor compared to smartphones allows for a thicker camera module, which enables the use of a higher quality optical zoom function along with (or instead of) pixel-interpolating digital zoom capabilities. The ~6" baseline distance between glasses' temples also inherently enables wider stereoscopic camera-to-camera spacing than is possible in a smartphone or tablet form factor, thereby allowing for accurate use over a wider depth range.
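
The benefit of the wider baseline can be seen from the standard pinhole stereo relationship Z = f·B/d: for a one-pixel disparity error, depth uncertainty grows roughly as Z²/(f·B), so a temple-to-temple baseline of around 15 cm holds accuracy at distances where a centimetre-scale baseline has already degraded. The snippet below works through illustrative numbers; the focal length and baselines are assumptions, not measured values for any particular device.

```python
# Rough depth-error estimate for a stereo pair under the pinhole model:
# delta_Z ~= Z**2 * delta_d / (f * B), with f in pixels and B, Z in metres.
def depth_error(z_m, focal_px, baseline_m, disparity_err_px=1.0):
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

for baseline in (0.02, 0.15):   # ~2 cm (phone-like) vs ~15 cm (glasses temples)
    err = depth_error(z_m=3.0, focal_px=1000.0, baseline_m=baseline)
    print(f"baseline {baseline * 100:.0f} cm -> ~{err:.2f} m depth error at 3 m")
```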

One important function needed for a wearable device is stabilisation for both still and video images. While the human body (especially the head) naturally provides some stabilisation, wearable devices will still experience significant oscillation and will therefore require robust digital stabilisation facilities. Furthermore, wearable devices will frequently be used outdoors and will therefore benefit from algorithms that compensate for environmental variables such as changing light and weather conditions.
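
One common way to realise such digital stabilisation is to track features between consecutive frames, estimate the inter-frame motion, and warp each new frame to cancel it. The sketch below shows that core step with OpenCV, leaving out the trajectory smoothing a production pipeline would add; the function and variable names are illustrative.

```python
# Minimal sketch of per-frame digital stabilisation: estimate the motion
# between the previous and current frames, then warp the current frame
# to compensate. A real pipeline would smooth the accumulated trajectory.
import cv2

def stabilise(prev_gray, curr_gray, curr_frame):
    # Track corner features from the previous frame into the current one.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                       qualityLevel=0.01, minDistance=20)
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None)
    good_prev = pts_prev[status.flatten() == 1]
    good_curr = pts_curr[status.flatten() == 1]

    # Estimate translation/rotation/scale and invert it to cancel the shake
    # (a robust implementation would check the estimate for failure).
    m, _ = cv2.estimateAffinePartial2D(good_curr, good_prev)
    h, w = curr_frame.shape[:2]
    return cv2.warpAffine(curr_frame, m, (w, h))
```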

These challenges to image quality will require strong image enhancement filters for noise removal, night-shot capabilities, dust handling, and more. Image quality becomes even more important with applications such as image mosaic, which builds up a panoramic view by capturing multiple frames of a scene. Precise computational photography to even out frame-to-frame exposure and stabilisation differences is critical to generating a high quality mosaic.
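
For the mosaic case, OpenCV's high-level Stitcher gives a feel for what is involved: it performs the frame-to-frame registration, exposure compensation, and seam blending discussed above. The file names below are placeholders.

```python
# Minimal sketch of building an image mosaic from several captured frames
# with OpenCV's Stitcher, which internally evens out exposure and blends seams.
import cv2

frames = [cv2.imread(name) for name in ("frame0.jpg", "frame1.jpg", "frame2.jpg")]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)
else:
    print("stitching failed, status", status)
```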

Depth-discerning sensors have already been mentioned as beneficial in object recognition and gesture interface applications. They're applicable to computational photography as well, supporting capabilities such as high dynamic range (HDR) and super-resolution (an advanced implementation of pixel interpolation), along with plenoptic camera features that allow for post-capture selective refocus on a portion of a scene. All of these functions are compute-intensive, and wearable devices are especially challenged in this regard by their constraints on size, weight, cost, power consumption, and heat dissipation.

Processing locations and allocations
One key advantage of using smart glasses for image capture and processing is ease of use: the user just records what he or she is looking at, hands-free. In combination with the ability to use higher quality cameras with smart glasses, vision processing in wearable devices makes a lot of sense. However, the batteries in today's wearable devices are much smaller than those in other mobile electronics devices (570 mAh with Google Glass, for example, vs ~2000 mAh for high-end smartphones).

Hence, it is currently difficult to do all of the necessary vision processing in a wearable device, due to power consumption limitations. Evolutions and revolutions in vision processors will make a completely resident processing scenario increasingly likely in the future. Meanwhile, in the near term, a portion of the processing may instead be done on a locally tethered device such as a smartphone or tablet, and/or at cloud-based servers. Note that the decision to do local vs. remote processing doesn't involve battery life exclusively; thermal issues are also at play. The heat generated by compute-intensive processing can produce discomfort, as has been noted with existing smart glasses even during prolonged video recording sessions where no post-processing occurs.

When doing video analysis, feature detection and extraction can today be performed directly on the wearable device, with the generated metadata transmitted to a locally tethered device for object matching either there or, via the local device, in the cloud. Similarly, when using the wearable device for video recording with associated image tagging, vision processing to generate the image tag metadata can currently be done on the wearable device, with post-processing then continuing on an external device for power savings.
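
A hedged sketch of that split: compact feature metadata is computed on the wearable and handed off for matching elsewhere. The ORB features and the send_to_host transport below are illustrative assumptions, not a description of any particular device's pipeline.

```python
# Minimal sketch of on-device feature extraction with off-device matching:
# only compact metadata (keypoints plus binary descriptors) leaves the wearable.
import json
import cv2

def extract_metadata(frame_gray):
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(frame_gray, None)
    return {
        "keypoints": [(float(kp.pt[0]), float(kp.pt[1])) for kp in keypoints],
        "descriptors": descriptors.tolist(),   # 32-byte binary ORB descriptors
    }

def send_to_host(metadata):
    # Placeholder transport: a real device would push this over BLE or Wi-Fi
    # to the tethered smartphone, or onward to a cloud matching service.
    payload = json.dumps(metadata).encode("utf-8")
    print(f"sending {len(payload)} bytes of feature metadata")
```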
