EE Times-Asia

Motion engine leaves gesture control in the dust

Posted: 08 Jun 2015

Keywords: Quantum Interface, motion engine, predictive navigation software, smartwatch

Quantum Interface (Qi) has recently unveiled its first predictive navigation software and technology. Around the end of May, the Austin-based startup began offering an Android smartwatch launcher called QiLaunch Ware for private beta testing. The motion-based interface is blazing fast and lets users navigate through any app in one continuous motion, entirely eliminating the "point and click" interactions we have grown accustomed to with touch screens.

EE Times Europe caught up with the company's founder and CTO, Jonathan Josephson, to learn more about the underlying technology.

Key to the Qi interface is the motion engine developed by Josephson well before touch screens and smartphones became commonplace, let alone the apps they power.

"We have global patents and IP dating back to 2002 and we've been working on that since before then," Josephson said, admitting that when he first thought of using natural motion instead of coded gestures to interact with interfaces, his idea was to control light switches across a room, for easy light selection and dimming.

"It struck me how we could use simple geometry and the principles of motion and time to control objects, control programs and data, seamlessly."

The motion engine software is sensor agnostic: it is architected to take any sensor data (capacitive finger touch, IR or time-of-flight, eye tracking, you name it) and do the maths to convert the direction, angle, speed and even acceleration of the user's hand or finger into dynamic control attributes.

"For us, it doesn't matter which sensors you use, we convert user motion into dynamic attributes that can reach threshold events to trigger selection, all in real time. Compare this to gesture-based interfaces where you have to finish the action (gesture) before the processor can go to a look up table and decide what to do with it," explained Josephson.
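The article does not disclose Qi's implementation, but the idea Josephson describes can be sketched roughly: derive dynamic attributes (direction, speed, acceleration) from successive sensor samples, and fire a selection the moment an attribute crosses a threshold, rather than waiting for a complete gesture to be matched against a lookup table. The function names and the threshold value below are illustrative assumptions, not Qi's API.

```python
import math

def motion_attributes(samples):
    """Derive direction, speed and acceleration from (t, x, y) samples.

    `samples` are successive (timestamp, x, y) tuples from any pointing
    sensor -- the maths itself is sensor agnostic. Hypothetical helper.
    """
    (t0, x0, y0), (t1, x1, y1), (t2, x2, y2) = samples[-3:]
    dx, dy = x2 - x1, y2 - y1
    dt = t2 - t1
    speed = math.hypot(dx, dy) / dt
    prev_speed = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
    return {
        "angle": math.atan2(dy, dx),        # direction of travel (radians)
        "speed": speed,                     # pixels per second
        "accel": (speed - prev_speed) / dt, # change in speed over time
    }

# Arbitrary example threshold: px/s needed to count as deliberate motion.
SPEED_THRESHOLD = 300.0

def crosses_threshold(attrs):
    """Trigger a selection event as soon as the attribute crosses the
    threshold -- no need for the gesture to finish first."""
    return attrs["speed"] >= SPEED_THRESHOLD

# Three samples of a finger accelerating to the right:
samples = [(0.00, 100, 100), (0.02, 104, 100), (0.04, 112, 100)]
attrs = motion_attributes(samples)
```

The contrast with gesture recognition is in `crosses_threshold`: it evaluates continuously on each new sample, so the interface can react mid-motion.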

In fact, as soon as the user moves in a direction, the predictive algorithm starts unfolding menus and pre-selecting the icons the user is likely to be looking for. Better still, those icons come to the user, at a speed proportional to the user's own motion.
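One simple way to realise this kind of prediction, sketched below purely as an assumption (the article does not describe Qi's actual algorithm), is to score each candidate icon by how well the current direction of travel points at it, using the cosine of the angle between the motion vector and the bearing to the icon, then surface the best-aligned targets before the pointer arrives.

```python
import math

def predict_targets(pos, angle, icons):
    """Rank icons by alignment between the user's direction of travel
    and the bearing from the current position to each icon.

    Illustrative sketch only; `icons` maps name -> (x, y) position.
    """
    vx, vy = math.cos(angle), math.sin(angle)
    scored = []
    for name, (ix, iy) in icons.items():
        bx, by = ix - pos[0], iy - pos[1]
        norm = math.hypot(bx, by) or 1.0
        alignment = (vx * bx + vy * by) / norm  # cosine of angle to icon
        scored.append((alignment, name))
    # Keep only icons ahead of the motion, best-aligned first.
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

icons = {"mail": (200, 100), "maps": (100, 200), "music": (0, 100)}
# Moving right (angle 0) from (100, 100): "mail" lies dead ahead.
likely = predict_targets((100, 100), 0.0, icons)
```

Because the score is recomputed on every sample, the ranked list refines continuously as the user's trajectory sharpens toward one target.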

The benefits are manifold. First, the user interface is much more intuitive: you no longer have to drive a cursor across the screen to reach a fixed icon at its XY coordinates; start moving in the icon's direction and you trigger a new layout of options.

"By getting away from the click-mentality, we are bringing back the missing analogue into digital interfaces," claimed the CTO.




