Touchless interaction

Based on my recent experience developing the Intuiface accessibility sandbox, I found that there are three (or, more precisely, five) navigation interactions that need to be programmed for a good experience:

- Up / Down: to move among interactive elements
- Previous / Next: to browse content within a collection
- Select: to execute an action

Let’s simplify things and use just three commands. You could have three motion sensors from Nexmosphere positioned at a suitable distance from each other. The sensors detect the presence of your hand as it gets close (no touch required) and trigger some scripts. I checked the Nexmosphere site, and the hardware components alone would cost roughly $200.
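As a rough illustration, here is a minimal Python sketch of how three sensor triggers could be mapped to Previous / Next / Select. Everything device-specific in it is an assumption: the serial port name, baud rate, sensor addresses and message format are placeholders, so check Nexmosphere's own documentation for the real protocol details.

```python
# Minimal sketch, assuming a Nexmosphere controller connected over USB/serial.
# The port name, baud rate, sensor addresses and message format below are
# placeholders: replace them with the values from your controller's manual.
import serial  # pip install pyserial

SENSOR_TO_COMMAND = {
    "001": "PREVIOUS",  # left-most sensor
    "002": "SELECT",    # middle sensor
    "003": "NEXT",      # right-most sensor
}

def handle(command: str) -> None:
    # Hook this up to your experience, e.g. via an HTTP call or keyboard emulation.
    print(f"Triggered: {command}")

with serial.Serial("COM3", baudrate=115200, timeout=1) as port:
    while True:
        message = port.readline().decode("ascii", errors="ignore").strip()
        if not message:
            continue
        # Assumption: the sensor's address appears somewhere in the message.
        for address, command in SENSOR_TO_COMMAND.items():
            if address in message:
                handle(command)
                break
```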

An alternative would be to use a webcam plus a Machine Learning (ML) solution along the lines of OpenVINO. While OpenVINO comes with models trained to recognize faces, imagine an ML model that recognizes a few basic gestures. Users would need to be shown what those gestures look like (e.g. raising the left hand means Go Back, and a fist means Select).
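To make that concrete, here is a hedged sketch of a webcam loop where a hypothetical classify_gesture() function stands in for whatever gesture model you end up training; the gesture labels and command names are just examples.

```python
# Sketch only: classify_gesture() is a placeholder for your trained model,
# whichever toolkit you use to build and run it.
import cv2  # pip install opencv-python

GESTURE_TO_COMMAND = {
    "left_hand_raised": "BACK",
    "fist": "SELECT",
    "right_hand_raised": "NEXT",
}

def classify_gesture(frame):
    """Placeholder: run the trained model on the frame and return a label (or None)."""
    return None

capture = cv2.VideoCapture(0)  # default webcam
try:
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        label = classify_gesture(frame)
        if label in GESTURE_TO_COMMAND:
            print("Command:", GESTURE_TO_COMMAND[label])
finally:
    capture.release()
```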

The integration with Intuiface will require some coding, but it should work. You can experiment with this idea by downloading this Machine Learning + Intuiface demo. Once you train the model to recognize up to three different objects / gestures / images, you can associate a specific action with each of them.
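If the demo's built-in wiring doesn't fit your setup, one generic pattern is to have the recognition script notify the experience over HTTP and map each recognized class to an action. The endpoint, payload, and class names below are purely illustrative placeholders, not the actual Intuiface API; see the linked demo for the real integration.

```python
# Purely illustrative: the URL and payload are placeholders, not the real
# Intuiface integration. See the linked demo for the actual wiring.
import requests  # pip install requests

ENDPOINT = "http://localhost:8000/gesture"  # placeholder local bridge

CLASS_TO_ACTION = {
    "class_1": "previous",
    "class_2": "next",
    "class_3": "select",
}

def notify(recognized_class: str) -> None:
    action = CLASS_TO_ACTION.get(recognized_class)
    if action is not None:
        requests.post(ENDPOINT, json={"action": action}, timeout=5)
```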
