[New Feature] Face Detection with OpenVINO

It’s been a busy day so I haven’t had time to write this up until now. And come to think of it, because I waited so long, there probably isn’t much I can say that you don’t already know. You know what? Forget it… is what I would say if I didn’t care so much about Intuiface!

If we’ve done our job, you’ve surely seen by now that yes, we’ve just launched face detection as a no-cost, built-in feature, made possible through our integration with the Intel Distribution of the OpenVINO toolkit. This toolkit enables applications to emulate human vision by combining a deep learning inference engine with pre-trained models for real-world objects - like faces. This field, broadly known as “computer vision”, is a fascinating example of what’s happening in the world of artificial intelligence, and Intuiface is a beneficiary.
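
To make the “inference engine + pre-trained model” idea concrete, here is a minimal sketch using the toolkit’s pre-2022 Python API (IECore) and the face-detection-retail-0004 model from Intel’s Open Model Zoo. The file paths and confidence threshold are placeholders, and this is not the code Intuiface ships; it just illustrates what the toolkit does under the hood:

```python
import cv2
from openvino.inference_engine import IECore  # pre-2022 OpenVINO Python API

# Placeholder paths: download the model files from the Open Model Zoo first.
ie = IECore()
net = ie.read_network(model="face-detection-retail-0004.xml",
                      weights="face-detection-retail-0004.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
input_blob = next(iter(net.input_info))

frame = cv2.imread("visitor.jpg")  # stand-in for a live camera frame
n, c, h, w = net.input_info[input_blob].input_data.shape
blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1).reshape(n, c, h, w)

# Each detection row: [image_id, label, confidence, x_min, y_min, x_max, y_max]
result = exec_net.infer(inputs={input_blob: blob})
for detection in next(iter(result.values()))[0][0]:
    if detection[2] > 0.5:                 # illustrative confidence threshold
        print("face at", detection[3:7])   # normalized box corners
```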

The OpenVINO toolkit is open source and freely distributable, enabling us not only to deliver the integration but also to ship the actual face detection server that does all the hard work. Using any camera on any Windows PC, this face detection server can identify age range, gender, head pose, dwell time, and emotion. (Yes, even emotion!) Each of these properties can be used as a condition for any trigger, making it possible to react to a user’s demographics or state, such as displaying targeted content or responding to specific movements.
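
As a mental model for the trigger side, imagine code like the following reacting to each detected face. The attribute names and helper function here are invented for illustration; the actual properties exposed by the server are listed in the Help Center article mentioned below:

```python
# Hypothetical sketch only: the attribute names are made up for illustration.
def on_face_detected(face: dict) -> None:
    """React to one detected face, the way a trigger + condition would."""
    if face.get("gender") == "female" and face.get("emotion") == "happy":
        show_scene("promo-smiling")        # e.g., display targeted content
    if face.get("dwell_time_s", 0) > 5:
        show_scene("detailed-info")        # long attention => deeper content

def show_scene(name: str) -> None:
    print(f"navigating to scene: {name}")  # stand-in for a real action
```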

For example, check out this use of head position and pose to make selections and manipulate environments:

Everything you need to know can be found in our Help Center article, including a link to a sample experience in our Marketplace that comes prebuilt with some pretty cool interactions triggered by camera-captured information.

Intuiface is the first platform on the market enabling the no-code adoption of computer vision via the OpenVINO toolkit. We thank Intel for their assistance and applaud you for being so lucky to have chosen Intuiface for your interactive content creation needs. :wink:

Stay healthy!


Looking forward to testing it! Sounds like a wonderful feature.


In the Marketplace sample, we demonstrate several uses of face detection, such as:

  • Detecting someone => display a message, navigate to a new scene
  • Using face position
    • X position at the edges of the screen => next or previous navigation action
    • X & Y position compared to the target objects’ positions + a margin (the circles in the home scene) + a countdown timer to validate the action => call an action (navigation, press a button, …) (see the sketch after this list)
    • Y position + thresholds (Global Variables) + a Simple Counter => the squat counter
  • Using face pose (orientation, angles) => control a 3D model
  • Main detected emotion + a confidence threshold => change the smiley character’s style
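
For the countdown-validated selection, the underlying logic looks roughly like the sketch below. The dwell duration, margin, and class name are illustrative, not values taken from the sample:

```python
import time

DWELL_SECONDS = 2.0  # illustrative countdown duration before validating

class DwellTarget:
    """Fires once the face has hovered inside a circular zone long enough."""

    def __init__(self, center_x, center_y, radius, margin=20):
        self.center = (center_x, center_y)
        self.reach = radius + margin   # target radius plus tolerance margin
        self.entered_at = None         # when the face first entered the zone

    def update(self, face_x, face_y):
        """Feed the latest face position; return True when validated."""
        dx, dy = face_x - self.center[0], face_y - self.center[1]
        if dx * dx + dy * dy > self.reach * self.reach:
            self.entered_at = None     # left the zone: reset the countdown
            return False
        if self.entered_at is None:
            self.entered_at = time.monotonic()  # just entered: start countdown
        return time.monotonic() - self.entered_at >= DWELL_SECONDS
```

Each time `update(...)` returns True, you would fire the navigation or button-press action.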

We have (many) other ideas in the pipeline, such as:

  • using the face pitch & yaw angles to detect a nod or shake gesture (answering yes / no to a question), but that requires more than just a couple of triggers & conditions (see the sketch after this list for the general idea)
  • using the detected gender to change the text-to-speech voice (there are many studies on brain responses to opposite-sex voices)
  • using the main emotion to create a “Simon Says” game (our demo at ISE 2020) and generate audience engagement
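
To show why nod / shake detection needs more than a couple of triggers & conditions: you have to watch the pitch & yaw streams over time and count direction reversals. A rough sketch, with all thresholds as illustrative placeholders:

```python
from collections import deque

AMPLITUDE_DEG = 10.0  # minimum peak-to-peak swing to count as deliberate
WINDOW = 20           # samples kept per axis (~0.7 s at 30 fps)

pitch_hist = deque(maxlen=WINDOW)
yaw_hist = deque(maxlen=WINDOW)

def _reversals(samples):
    """Count direction reversals, ignoring sub-threshold jitter."""
    samples = list(samples)
    if len(samples) < 3 or max(samples) - min(samples) < AMPLITUDE_DEG:
        return 0
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    return sum(1 for prev, cur in zip(deltas, deltas[1:]) if prev * cur < 0)

def on_head_pose(pitch_deg, yaw_deg):
    """Feed one head-pose sample; return 'nod', 'shake', or None."""
    pitch_hist.append(pitch_deg)
    yaw_hist.append(yaw_deg)
    if _reversals(pitch_hist) >= 2 and _reversals(yaw_hist) == 0:
        pitch_hist.clear()    # reset so one nod fires only once
        return "nod"          # up-down oscillation only => yes
    if _reversals(yaw_hist) >= 2 and _reversals(pitch_hist) == 0:
        yaw_hist.clear()
        return "shake"        # left-right oscillation only => no
    return None
```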

Tell us what you think about these use cases and share your feedback with us.
We’d love to hear your ideas for using face detection in Intuiface.


@geoff that is fantastic. I’m just reading your article on Touchscreen Guidance For A (Post) Confinement World, something I’m working on myself in my spare time. This is crucial moving forward.

@tosolini, we’re all looking forward to you testing this out =)

@Seb awesome as always, thanks for the info and for sharing the work. I’ll download and try it over the weekend.

Update: that works awesome! Trying to add pitch roll (y-axis), using the room as the base example; I swapped it out for a Stormtrooper helmet… sort of working, but I need to wrap my head around the controls… Going to keep playing with this. I can see real benefit in using face detection with a scroll menu, rotating 360° images, etc… Thanks again for pushing the boundaries!


Thanks @Ryan for the feedback!

For the 3D object control, we tried two methods:

  • binding the model angle directly to the face pitch, possibly through a converter
    • more reactive than what you see in the sample
    • but a bit jittery, since the face angle is not stabilized and depends on computer vision precision (see the smoothing sketch after this list)
  • raising a trigger once the angle goes below / over a certain threshold, which calls a camera animation action
    • a bit less reactive, since it only fires after the face reaches the threshold
    • but the animation is more fluid, moving between predefined A and B angles
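
If you want the direct binding without the jitter, a simple low-pass filter acting as the converter can help. A minimal sketch, with an illustrative smoothing factor (not a value from the sample):

```python
ALPHA = 0.15  # 0 < ALPHA <= 1; smaller = smoother but laggier

class AngleSmoother:
    """Exponential moving average to tame a jittery face-angle stream."""

    def __init__(self, alpha=ALPHA):
        self.alpha = alpha
        self.value = None

    def update(self, raw_angle):
        """Blend the new reading into the running estimate."""
        if self.value is None:
            self.value = raw_angle          # first sample: no history yet
        else:
            self.value += self.alpha * (raw_angle - self.value)
        return self.value

# Usage: feed each face-pitch reading through the smoother, then bind
# the smoothed value to the 3D model's rotation.
smoother = AngleSmoother()
model_pitch = smoother.update(12.4)  # e.g., raw pitch from face detection
```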

I’ll let you experiment, and don’t hesitate to share a short clip of your tests :wink:
