Designing an Intuiface spatial computing XP with Apple Vision Pro


I recently acquired an Apple Vision Pro (AVP) and decided to explore its capabilities by creating an Intuiface spatial computing experience (XP) that allowed me to manipulate a 3D model in Mixed Reality (MR). Intriguingly, I performed the entire design process while wearing the headset. Let me share that journey with you.

Hardware Setup
My setup includes an Apple MacBook Pro M3 running Parallels to host a Windows 11 virtual machine. This setup is required for Windows-only applications such as Intuiface.

The AVP acts as a spatial computer that integrates digital content into your physical environment, allowing for interaction through eye movements, hand gestures, and voice commands. It can mirror the MacBook’s display onto a high-resolution virtual screen within the headset.

For this experiment, I wore the AVP throughout the entire creation process to fully immerse myself in an MR authoring environment.

The Intuiface XP
The proof of concept was a simple XP that displayed a 360-degree photo inside a 3D sphere. The photo was captured with a 360 camera; I then processed it with Polycam and exported the result in USDZ format, which is optimized for 3D and augmented reality on Apple devices. Finally, I uploaded the file to my Azure cloud storage and linked it to the 3D model using Intuiface’s OpenWindow IA (thanks @Seb!).
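As an aside, the upload step can be sketched with the Azure CLI. This is just a sketch: the account, container, and file names below are placeholders, and the detail that matters is serving the blob with the registered USDZ MIME type so Safari hands it to Quick Look instead of treating it as a generic download:

```shell
# Hypothetical account/container/file names -- replace with your own.
# "model/vnd.usdz+zip" is the registered MIME type for USDZ files.
az storage blob upload \
  --account-name mystorageaccount \
  --container-name models \
  --name sphere.usdz \
  --file ./sphere.usdz \
  --content-type "model/vnd.usdz+zip"
```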

No-Code Spatial Computing
Launching the application was straightforward using the AVP’s Safari browser. A tap on the image downloaded the 3D sphere, which I could then manipulate directly with my hands in a truly immersive MR setting.


  • The experience of authoring with the AVP on a large virtual display was revelatory. Within minutes, the headset’s presence faded from my awareness, allowing me to interact with my usual keyboard and mouse setup naturally. This capability is particularly advantageous for digital nomads who require a larger display while traveling.
  • A limitation I encountered was the lack of parameter support in the OpenWindow IA, which necessitated additional steps to open the 3D sphere.
  • Although spatial computing is still in its early stages, I was impressed by Intuiface’s capability to enable the creation of no-code applications, simplifying the process of building MR experiences.

What are your thoughts on this emerging platform? Are you considering exploring this technology?


I have to say, I tried using a virtual desktop in my Quest 2 and found a few limiting factors:

  • No access, or awkward access, to the physical keyboard and mouse
  • Weight of the headset and the heat of the foam against the face
  • Fatigue from keeping the arms in the air for long periods of time

I’m sure Vision Pro fixes the first issue.

For the second one, I haven’t tried it, but since the battery is external and tethered, I would have assumed the Vision Pro was lighter than a Quest 2/3. Apparently it isn’t, so it comes down to weight balance and comfort; I’m sure Apple did a great job there.

For the fatigue aspect, that brings me back 18 years, to when I was working on 3D hand-gesture recognition and co-wrote this research paper (sorry, it’s in French): Gestaction3D-AFRV2006.pdf (1.7 MB)
Side note: our research was a follow-up to an MIT project from the early 2000s that was actually used in a movie in 2002; we just had less budget and fewer visual effects :smiley:

We made good progress on the computer-vision and technical side, but when we ran user studies with a human-factors specialist, the main conclusion was that the human body is not meant to keep the arms in the air for that long unless you’re at a David Guetta concert. For daily, repetitive work tasks, we were almost going backwards in efficiency and accuracy compared to a keyboard and mouse.
For fun and entertainment, I absolutely believe in VR / XR.
For work, I’m waiting for Brain-Computer Interfaces to become a reality (cf. Neuralink / Sword Art Online)


Thanks Seb for sharing some of your past R&D in this field. I agree that all these devices are still a bit heavy and that display resolution matters; the Vision Pro is quite crisp in that regard. Also, the virtual display scenario doesn’t require any air tapping or pinching, since everything happens with the mouse and keyboard. I’m not sure the Quest provides such seamless support.

Regarding the OpenWindow IA, I’d suggest adding a field for parameters. In my demo, if I had used the equivalent of:

```html
<a href="path_to_your_file.usdz" rel="ar">View in AR</a>
```

I could have opened the 3D model in AR without showing the thumbnail. Thanks!
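For reference, Apple’s documented AR Quick Look markup wraps a preview image inside the link; the file names below are placeholders, and this is a sketch of the web convention rather than anything the OpenWindow IA generates:

```html
<!-- Safari opens the USDZ in AR Quick Look when an anchor
     with rel="ar" contains a single child <img> element. -->
<a href="path_to_your_file.usdz" rel="ar">
  <img src="sphere-thumbnail.jpg" alt="Preview of the 360 sphere">
</a>
```

A text-only link, as in my demo, instead relies on Safari recognizing the .usdz file directly, which is why the MIME type on the hosted file matters.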

Check your email / XPs shared with you @tosolini, you have an updated OpenWindow IA :wink:
