Touchless interaction

With the current COVID-19 crisis, a lot of people are interested in exploring touchless interactivity. This is where the Leap Motion and Kinect projects that were shut down can be of good use.

Since these sensors are hard to find, Intuiface should start looking at newer sensors like the Xbox One or the Orbbec range of 3D cameras - https://orbbec3d.com/

Hi Melvyn! I’m guessing you’re talking about gesture tracking, and we’ve never seen anything that worked well with random, off-the-street, untrained people. Alternatives like speech and use of mobile phones as a remote control are much more intuitive and far less prone to misunderstanding intent. Even head tracking with OpenVINO works better although the “vocabulary” is quite limited.

I’m not a 3D guy so I didn’t know Orbbec. Taking a look, I couldn’t find any interactive demonstrations that didn’t involve touch. (Those cameras do lots of cool things. I’m just saying that when interactivity is involved, the examples still use touch.) As for Xbox, I don’t think Microsoft has done anything there since Kinect, which is EOL. Windows Central says forget gesture with the Xbox, use voice. :slight_smile:

There’s lots of continued effort with hand gestures: Google Soli, Microsoft HoloLens, Ultraleap, Oculus Quest, and others - but all have serious limitations. Soli is for just one smartphone. HoloLens and Quest require expensive headsets that you’d never want to be publicly accessible. And Ultraleap is struggling because 1) the devices are fairly prominent in an installation, and 2) they’re still picky about gesture quality.

Bottom line: We’re not against working with gesture control options, we’ve just yet to see one that could work broadly, isn’t cost-prohibitive, doesn’t have high breakage risk in public, and has gained some traction.

Melvyn, if you didn’t mean gesture control, then I hope the above was a good diversion. :slight_smile:

Oh, just a reminder that @Seb and team put together an excellent multi-mode experience showing the use of touch, speech, and personal mobile phones to control onscreen content. The idea is that visitors can choose their preferred method of interactivity, and touch doesn’t have to be one of them.

And it’s session-based, a really good example of interaction beyond touch.

Hi @Ryan,

I definitely reused some ideas we discussed together in your question Web triggers remote - Controlling kiosk from a mobile. Thanks for your input!

For those who want to see the video of the new sample before reading the article, here it is:

Hi @geoff, thank you for the in-depth explanation. I had indeed looked at the video Seb posted before putting up this post. Talking about interaction with Intuiface for “random, off-the-street, untrained people”, the methods in the video can really hit a roadblock in some areas: dialects and accents differ, so voice recognition may not work. To use a phone to remotely control the experience via local triggers or QR codes, you need an internet connection; no internet means you’re stuck. That’s where a gesture controller adapts best, as you don’t need the internet or speech; it’s a universal thing.

I’ve seen websites that use the above cameras as a touch interface, which means you don’t actually touch the surface: hovering a few centimeters above the surface is treated as a touch. So whether you use gestures as touch interaction or a virtual touch plane, either case is fine.

Anyway, going into lengthy discussions about the pros and cons of this would only be a waste of time. Given the current crisis and demand, and keeping in mind that in the future people will see this technology as safer than touching something physical, I brought up this post in the hope that it might be a value-add for Intuiface.

If you feel it cannot be done, no worries, we will then just have to look at alternatives.

Hi @melvyn_br,

“Gestures”, or I’d rather say “touch in the air”, is definitely another good approach to the current questions about touch screens, as discussed in this other thread: AIRxTOUCH™.

The big “pro” of this type of solution is that it works out of the box with Intuiface, since these devices are recognized as a touch screen at the OS level.

True “gesture recognition”, such as swipe left / right, still has a few cons in my opinion for such scenarios:

  • Learning curve: you’re not in your living room with a tutorial on how to interact.
  • Reliability / sensitivity to light: swipe gestures are really tough to detect in a “non-controlled environment”, meaning again not playing with Kinect in your living room.

If anyone finds a good, reliable system that can be used in a public environment, you can build your own Custom Interface Asset, as we did with OpenVINO, and create your own Intuiface integration there.
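To illustrate what such a custom integration could look like, here is a minimal sketch of one possible bridging pattern, not an actual Intuiface or sensor API: the sensor-side code pushes whatever gesture it recognizes into a tiny local web service, and the experience reads it back. The endpoint name, port, and gesture labels are my own assumptions for illustration.

```python
# Hypothetical bridge between a gesture sensor SDK and an experience:
# the sensor code POSTs the latest recognized gesture here, and the
# experience side (e.g. a custom Interface Asset) polls the same endpoint.
from flask import Flask, jsonify, request

app = Flask(__name__)
latest = {"gesture": "none"}  # most recent gesture reported by the sensor

@app.route("/gesture", methods=["POST"])
def push_gesture():
    # The sensor side sends e.g. {"gesture": "swipe_left"}
    latest["gesture"] = request.get_json(force=True).get("gesture", "none")
    return "", 204

@app.route("/gesture", methods=["GET"])
def read_gesture():
    # The experience polls this and binds its triggers to the returned value
    return jsonify(latest)

if __name__ == "__main__":
    app.run(port=5005)
```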

Agreed that gesture can be universal. It’s the ability of technology to treat it as universal that has been the problem. If there’s a device out there making this possible - with respect to the requirements I’d mentioned earlier - we’d be excited to work with it.

As for the proximity-touch tech, my guess is this approach, under the covers, would generate traditional HMI events on Windows PCs, meaning it should just work with Intuiface, no integration required. Could just be plug-and-play for us. With other Player-supported platforms, if this exists, it’s just something we’d have to try.

Hi guys, I just finished reading this. I have also been asked to find a touchless interactive display. Has anyone found something that works? I bought a Leap Motion, however I am not getting good results.

David

Hi @Seb, this is about a video you posted years ago: Gesture recognition with Leap Motion using IntuiFace - YouTube. Can you help me with the XP file for it? We have got access to a Leap Motion controller and want to try it out.

We added the Leap Motion IA, installed the V2 software, and ran Intuiface. We were able to add the IA hand postures to the scene, but when we play it nothing happens. Is there some other software that needs to run for Intuiface to detect it?

Thanks

Hi Melvyn,

I saw you got some answers directly from our support team. I confirm that this sample was discontinued several years ago.

I am also looking for a touchless interface, and I believe that everyone has their own device in their pocket all the time. For this, I believe the best solution, and the easiest to implement, is to use the IntuiPad program, but only with the touchpad interaction, without the coloring or the control of the presentation. Everyone with a device on a closed Wi-Fi network or the internet can play with the presentation without touching any public device. That is ready to be used today; no major modifications to hardware or software are needed.

Based on my recent experience developing the Intuiface accessibility sandbox, I found that there are three (or rather five) navigation interactions that need to be programmed for a good experience:

  • Up / Down - to move among interactive elements
  • Previous / Next - to browse content in a collection
  • Select - to execute an action

Let’s simplify things and just use three commands. You could have three motion sensors from Nexmosphere positioned at a suitable distance from each other. The sensors will detect the presence of your hand as you get close (without the need to touch them) and trigger some scripts. I checked the Nexmosphere site, and in hardware components alone this will be roughly $200.
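As a rough sketch of how those three sensors could drive the experience, assuming the controller is reachable over a serial port and sends one short message per sensor activation (the port name, baud rate, and message prefixes below are placeholders, not the real X-Talk protocol; check the Nexmosphere documentation for the actual format):

```python
# Rough sketch: listen to a sensor controller on a serial port and map each
# of the three proximity sensors to one navigation command.
import serial

COMMANDS = {
    "X001": "MOVE",    # sensor 1 -> move among interactive elements
    "X002": "BROWSE",  # sensor 2 -> browse content in a collection
    "X003": "SELECT",  # sensor 3 -> execute an action
}

def dispatch(command: str) -> None:
    # Replace with whatever triggers the action in the experience,
    # e.g. a call to a local web-trigger bridge.
    print(f"Trigger: {command}")

with serial.Serial("COM3", baudrate=115200, timeout=1) as port:
    while True:
        message = port.readline().decode(errors="ignore").strip()
        for prefix, command in COMMANDS.items():
            if message.startswith(prefix):
                dispatch(command)
```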

An alternative would be to use a webcam + Machine Learning (ML) solution along the lines of OpenVINO. While OpenVINO is trained to recognize faces, imagine if there were an ML model that recognizes some basic gestures. Users would need to be shown what those gestures look like (e.g. raising the left hand means Go Back, and a fist means Select).

The integration with Intuiface will require some coding, but it may work. You can experiment with this idea by downloading this Machine Learning + Intuiface demo. Once you train the model to recognize up to three different objects / gestures / images, you can associate specific actions with each of them.
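As a minimal sketch of that webcam idea, assuming OpenCV and MediaPipe are installed: an open palm vs. a closed fist is detected with MediaPipe Hands and mapped to two commands. The finger-counting heuristic and the action names are illustrative only, not what the linked demo does.

```python
# Minimal sketch: classify an open palm vs. a closed fist from a webcam
# with MediaPipe Hands and map each to a navigation command.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
cap = cv2.VideoCapture(0)

def extended_fingers(landmarks) -> int:
    # Count fingers whose tip is above its middle joint (crude openness test).
    tips, pips = [8, 12, 16, 20], [6, 10, 14, 18]
    return sum(landmarks[t].y < landmarks[p].y for t, p in zip(tips, pips))

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        action = "SELECT" if extended_fingers(lm) == 0 else "GO_BACK"
        print(action)  # here you would trigger the matching action
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```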

Interesting thoughts, Paolo, although in my experience and opinion, gesture recognition for a public device has, today, more cons than pros:

  • Sensors are sensitive to the environment: too many people in the field of view, light changes including sunlight.
  • Muscle fatigue: doing 1 or 2 gestures can be fun, but doing more can quickly become tiresome for a user trying to reach the information they’re looking for.
  • Learning curve: as you said, people need to be “trained”, and this needs to be quick.

Regarding the latter, it might only be a question of time. Who would have thought of doing a two-finger pinch gesture to zoom on something 15 years ago, and who wouldn’t think of it today?

I just released a new version of the Intuiface Accessibility Sandbox with support for touch-free interactions. Thanks @seb for inspiring me with all your demos.

If you are curious about using Leap Motion with Intuiface without resurrecting the now defunct Leap Motion IA, I outlined an alternative way in this thread.

Hey Melvyn. We are also using OptiTUIO for touchless interaction in combination with Intuiface. Just check out the movie below:

Touchless Interaction with OptiTUIO and Intuiface

Hello. I’m pleased to share with you a technical video of our latest AIR TOUCH® technology: https://youtu.be/Ht0vomP3W0M

Robust (works up to 120,000 lux => direct summer sunlight), accurate, responsive, and SAFE. Simply better than a touch device.

It’s a native Windows 10 driver. No specific code. Works with Intuiface. You can click, double-click, right-click, drag & drop, and zoom. More info: www.airxtouch.com

BR.
