This may be worded a bit funny, but here’s the general idea I thought of while building parallax XPs.
There’s already a gesture trigger for “swipe” in IF, but I don’t believe the swipe gesture measures the distance, in pixels, that the person drags their finger. For example, in the slot machine game in one of your sample XPs, the swipe gesture triggers the wheel to spin. However, the trigger only detects that ‘a swipe happened’, not how far (or how fast) the user swiped.
I’m wondering if this feature would actually open up a can of worms, rather than do good…but here we go:
Is it possible to create an IA of some kind that measures the speed and the X/Y distance of someone’s finger swipe?
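To make the math concrete, here’s a minimal sketch of the kind of measurement I mean, written against standard browser Pointer Events rather than Intuiface’s actual IA API (the "xp-surface" element id and the event wiring are just placeholders):

```ts
// Sketch: measure X/Y distance and speed of a swipe using
// standard Pointer Events (not Intuiface's real IA API).
let startX = 0, startY = 0, startTime = 0;

const surface = document.getElementById("xp-surface")!; // hypothetical element

surface.addEventListener("pointerdown", (e: PointerEvent) => {
  startX = e.clientX;
  startY = e.clientY;
  startTime = performance.now();
});

surface.addEventListener("pointerup", (e: PointerEvent) => {
  const dx = e.clientX - startX;                      // horizontal distance in px
  const dy = e.clientY - startY;                      // vertical distance in px
  const dt = (performance.now() - startTime) / 1000;  // swipe duration in seconds
  const speed = Math.hypot(dx, dy) / dt;              // px per second
  console.log({ dx, dy, speed });  // the values an IA could expose as bindings
});
```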
This isn’t a fully hatched idea, but I’m wondering if having bindings for the coordinates of someone’s finger could deliver more value. My end goal is to find easier ways to create parallax presentations without covering up assets with the input/scrolling object. I’m not sure how smartphones do it, but it’s interesting that you can swipe through menus yet still tap buttons for the apps in the foreground.
Thanks for sharing your thoughts/ideas to build on this!
Can you please put a link here for that XP you talked about (the slot machine game)? I searched the Marketplace but couldn’t find it, and I think it may help me with a project I’m working on.
Thanks Chloe!
Also, to revise this a bit: I think the IA would only need to expose the real-time x,y coordinates of the touch. For multi-touch, there could be 10 sets of x,y coordinates (in the case of a 10-point touch display).
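As a rough illustration of those 10 coordinate pairs, here’s how they could be tracked, again using standard Pointer Events as a stand-in for whatever Intuiface would do internally (the "xp-surface" id is made up):

```ts
// Sketch: track up to 10 simultaneous touches as pointerId -> {x, y}.
// These are the coordinate pairs the IA could expose as bindable properties.
const touches = new Map<number, { x: number; y: number }>();

const surface = document.getElementById("xp-surface")!; // placeholder element

surface.addEventListener("pointerdown", (e: PointerEvent) => {
  if (touches.size < 10) {
    touches.set(e.pointerId, { x: e.clientX, y: e.clientY });
  }
});

surface.addEventListener("pointermove", (e: PointerEvent) => {
  if (touches.has(e.pointerId)) {
    touches.set(e.pointerId, { x: e.clientX, y: e.clientY }); // live update
  }
});

const release = (e: PointerEvent) => touches.delete(e.pointerId);
surface.addEventListener("pointerup", release);
surface.addEventListener("pointercancel", release);
```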
When binding to these coordinates, we could have assets/objects appear and follow someone’s finger(s). It would also allow different ways to do parallax without a transparent rectangle covering up so many elements. Feel free to build on this idea if there’s a better solution!
Just checking back into this topic with an additional thought:
If multi-touch coordinates are too hard to track, then maybe this feature could be limited to just the first touch on the screen. Wherever the user touches, the x,y coordinates of that touch are exposed by the IA and update dynamically as the touch is held and moved.
I think this would still accomplish a lot. A designer could bind objects to “follow” the user’s touch around, as well as appear wherever a new touch is registered.
It would also open up new ways of doing parallax: with this IA as the input, designers could bind many other objects/assets to move in proportion to the coordinates of the user’s finger.
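Here’s a rough sketch of that kind of proportional binding, with each layer moving at some fraction of the first touch’s position. The layer ids and parallax ratios are entirely hypothetical, and Pointer Events again stand in for whatever the IA would expose:

```ts
// Sketch: move several layers in proportion to the first touch's coordinates.
// Layer ids and parallax ratios are made up for illustration.
const layers = [
  { el: document.getElementById("background")!, ratio: 0.2 },
  { el: document.getElementById("midground")!,  ratio: 0.5 },
  { el: document.getElementById("foreground")!, ratio: 1.0 },
];

let firstTouchId: number | null = null;

document.addEventListener("pointerdown", (e: PointerEvent) => {
  if (firstTouchId === null) firstTouchId = e.pointerId; // only track first touch
});

document.addEventListener("pointermove", (e: PointerEvent) => {
  if (e.pointerId !== firstTouchId) return;
  for (const { el, ratio } of layers) {
    // Each layer translates by a fraction of the finger's position,
    // giving a depth effect without an overlay covering the assets.
    el.style.transform = `translate(${e.clientX * ratio}px, ${e.clientY * ratio}px)`;
  }
});

document.addEventListener("pointerup", (e: PointerEvent) => {
  if (e.pointerId === firstTouchId) firstTouchId = null;
});
```

Because the background moves less than the foreground, the layers separate visually as the finger moves, and nothing sits on top of the tappable assets.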
If anyone else has any ideas to build on this, I’m curious how other designers could benefit from this function!