Grasp the significance of gesture interfaces
Keywords: gesture recognition, sensors, smartphones, tablets
Gesture recognition, one key example of these sensor-enabled technologies, is achieving rapid market adoption as it evolves and matures. Although various gesture implementations exist in the market, a notable percentage of them are based on embedded vision algorithms that use cameras to detect and interpret finger, hand and body movements. Gestures have been part of humans' native interaction language for eons. Adding support for various types of gestures to electronic devices lets us use this natural "language" to operate them, an approach far more intuitive and effortless than touching a screen, manipulating a mouse or remote control, tweaking a knob, or pressing a switch.
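To make the embedded-vision approach concrete, here is a minimal sketch of camera-based swipe detection, written in Python and assuming OpenCV is available. It classifies left and right swipes by tracking the horizontal centroid of frame-to-frame motion; the threshold values are illustrative, not tuned production parameters.

```python
# Minimal swipe detector: classifies left/right hand swipes from a webcam
# by tracking the horizontal centroid of frame-to-frame motion.
# Threshold values below are illustrative, not production-tuned.
import cv2

SWIPE_PIXELS = 120      # min horizontal centroid travel to count as a swipe
MOTION_THRESH = 25      # per-pixel difference threshold (0-255)

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
if not ok:
    raise SystemExit("camera not available")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
start_x = None          # centroid x when motion first appeared

while True:             # Ctrl+C to stop this sketch
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, MOTION_THRESH, 255, cv2.THRESH_BINARY)
    prev_gray = gray

    m = cv2.moments(mask)
    if m["m00"] > 1e4:                    # enough moving pixels to track
        cx = m["m10"] / m["m00"]          # horizontal centroid of motion
        if start_x is None:
            start_x = cx
        elif cx - start_x > SWIPE_PIXELS:
            print("swipe right")
            start_x = None
        elif start_x - cx > SWIPE_PIXELS:
            print("swipe left")
            start_x = None
    else:
        start_x = None                    # motion stopped; reset gesture

cap.release()
```

Production systems replace this crude motion cue with trained hand detectors and temporal models, but the overall pipeline shape, capture, segment, track, classify, stays the same.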
Gesture controls will notably ease our interaction with devices, reducing (and in some cases replacing) the need for a mouse, keys, a remote control, or buttons. When combined with other advanced user interface technologies such as voice commands and face recognition, gestures can create a richer user experience that strives to understand the human "language," thereby fueling the next wave of electronic innovation.
Not just consumer electronics
When most people think of gesture recognition, they often imagine someone waving their hands, arms or body in an effort to control a game or other application on a large-screen display. Case studies of this trend include Microsoft's Kinect peripheral for the Xbox 360, along with a range of gesture solutions that augment traditional remote controls for televisions, as well as keyboards, mice, touchscreens and trackpads for computers. At the recent Consumer Electronics Show, for example, multiple TV manufacturers showcased camera-inclusive models that implemented not only gesture control but also various face recognition-enabled features. Similarly, Intel trumpeted a diversity of imaging-enabled capabilities for its Ultrabook designs.
However, gesture recognition as a user interface scheme also applies to a wide range of applications beyond consumer electronics. In the automotive market, for example, gesture is seen as a convenience-driven add-on feature for controlling the rear hatch and sliding side doors. Cameras already installed in the rear of the vehicle for reversing, and in the side mirrors for blind-spot warning, can also be employed for these additional capabilities. As the driver approaches the car, a proximity sensor detects the ignition key in a pocket or handbag and turns on the cameras. An appropriate subsequent wave of the driver's hand or foot can then initiate opening of the rear hatch or side door.
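The hands-free hatch flow just described is essentially a small state machine. The sketch below models it under stated assumptions: the sensor names (key_fob_nearby, wave_detected) are hypothetical placeholders for the vehicle's actual proximity, vision and body-control interfaces, not any real automotive API.

```python
# Sketch of the hands-free hatch flow as a small state machine.
# The sensor inputs (key_fob_nearby, wave_detected) are hypothetical
# placeholders for the vehicle's real proximity and vision interfaces.
from enum import Enum, auto

class HatchState(Enum):
    IDLE = auto()       # cameras off, waiting for the key fob
    ARMED = auto()      # fob detected nearby, cameras on, watching for a wave
    OPENING = auto()    # gesture confirmed, hatch actuator engaged

def step(state, key_fob_nearby, wave_detected):
    """Advance the state machine one tick from the current sensor readings."""
    if state is HatchState.IDLE and key_fob_nearby:
        return HatchState.ARMED         # wake the rear camera
    if state is HatchState.ARMED:
        if not key_fob_nearby:
            return HatchState.IDLE      # driver walked away; power down
        if wave_detected:
            return HatchState.OPENING   # hand/foot wave seen near the hatch
    return state

# Example tick sequence: driver approaches, then waves.
s = HatchState.IDLE
s = step(s, key_fob_nearby=True, wave_detected=False)   # -> ARMED
s = step(s, key_fob_nearby=True, wave_detected=True)    # -> OPENING
print(s)
```

Keeping the cameras powered only in the ARMED state mirrors the article's point: the proximity sensor gates the vision system, so the cameras are not running continuously.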
Another potential automotive use case is inside the cabin, where a driver cannot (or at least should not) reach for a particular button or knob while driving but still wants to answer an incoming cell phone call or change menus on the console or infotainment unit. A simple hand gesture may be a safer, quicker and otherwise more convenient means of accomplishing such a task. Many automotive manufacturers are currently experimenting with (and in some cases already publicly demonstrating) gesture as a means of user control in the car, motivated in part by its potential as an incremental safety capability.
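One simple way to wire such in-cabin gestures to infotainment functions is a dispatch table mapping recognized gesture labels to actions. The sketch below illustrates the idea; the gesture labels and handler functions are illustrative assumptions, not any manufacturer's actual interface.

```python
# Sketch: map recognized in-cabin gestures to infotainment actions via a
# dispatch table. Gesture labels and handlers are illustrative assumptions.
def answer_call():
    print("answering incoming call")

def next_menu():
    print("advancing to next console menu")

GESTURE_ACTIONS = {
    "swipe_right": next_menu,
    "palm_toward_camera": answer_call,
}

def on_gesture(label):
    """Invoke the action bound to a recognized gesture; ignore unknowns."""
    action = GESTURE_ACTIONS.get(label)
    if action:
        action()

on_gesture("palm_toward_camera")   # -> answering incoming call
```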
Additional gesture recognition opportunities exist in medical applications where, for health and safety reasons, a nurse or doctor may not be able to touch a display or trackpad but still needs to control a system. In other cases, the medical professional may not be within reach of the display yet still needs to manipulate the content being shown on it. Appropriate gestures, such as a hand swipe or the use of a finger as a virtual mouse, offer a safer and faster way to control the device.
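The "finger as a virtual mouse" idea reduces to mapping a tracked fingertip position onto screen coordinates. A minimal sketch follows, assuming the fingertip arrives as normalized camera coordinates from whatever hand tracker is in use; the exponential smoothing factor is an illustrative choice to damp jitter.

```python
# Sketch of a "finger as virtual mouse": map a tracked fingertip position
# (normalized 0..1 camera coordinates, from an assumed hand tracker) to
# screen coordinates, with exponential smoothing to damp jitter.
SCREEN_W, SCREEN_H = 1920, 1080
ALPHA = 0.3                      # smoothing factor: lower = steadier cursor

class VirtualMouse:
    def __init__(self):
        self.x = SCREEN_W / 2
        self.y = SCREEN_H / 2

    def update(self, tip_x, tip_y):
        """Blend the new fingertip reading into the cursor position."""
        # Mirror horizontally so moving the hand right moves the cursor right.
        target_x = (1.0 - tip_x) * SCREEN_W
        target_y = tip_y * SCREEN_H
        self.x += ALPHA * (target_x - self.x)
        self.y += ALPHA * (target_y - self.y)
        return int(self.x), int(self.y)

mouse = VirtualMouse()
print(mouse.update(0.25, 0.4))   # fingertip left of center, upper area
```

The smoothing matters in this setting: a surgeon-facing cursor that shakes with every tracking error would be worse than no gesture control at all.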