UIST 2019 is coming up soon. I went through this year’s accepted papers quickly and would like to discuss some of the interesting ones.
First of all, I did not see many papers close to my area, which is good news because it means my literature review workload should be much lighter this time.
I split the papers into several categories: single-user gesture-based interaction, multi-user (especially two-person) interaction for guidance, context-aware UI design, and headset design. I probably won’t cover all of them in one post, so there may be a “Part II”, or not.
Single-user gesture-based interaction. It seems the Leap Motion is back: quite a few works used it. I agree that it is small and lightweight, and it is probably still the most accurate option for now, though I need to do more research on that.
Paper — Portal-ble: Intuitive Free-hand Manipulation in Unbounded Smartphone-based Augmented Reality — implemented a system that attaches a Leap Motion to a smartphone. It covered the hardware design, including where to place the Leap Motion; the UI design, including feedback for different gestures; and multi-modal design such as sound and haptic feedback. The work is useful and points toward new designs for phones. As far as I know, the Pixel 4 already supports some mid-air gestures. However, the work seems too simple to me for 2019.

Paper — MagicalHands: Mid-Air Hand Gestures for Animating in VR — focused on gesture taxonomy. It first conducted a study to identify and categorize the gestural interactions that can be used to depict complex dynamic effects in VR. That reminds me of MixFab, which also invited participants and collected their gestures. MagicalHands is designed for direct manipulation and abstract demonstrations for animating in VR. The system could be really informative if the study is well designed. It is open-sourced and I’d love to give it a try.

Paper — Opisthenar: Hand Poses and Finger Tapping Recognition by Observing Back of Hand Using Embedded Wrist Camera — works on gesture recognition for headset scenarios. The hardware design includes a Leap Motion too. Although they cannot use the Leap Motion’s recognition algorithms directly, its wide FOV and light weight still make it very attractive as a camera device. They applied a modified two-stream convolutional network with weighted motion images to detect static number gestures and dynamic tap gestures (a rough sketch of the two-stream idea is below). The clear strong point is that the hand will never be out of view, since our wrists stay with our hands. I am also curious how this scales to both hands: would it be as simple as adding a second Leap Motion? I feel the accuracy for bimanual interactions could improve that way.
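
To make the two-stream part concrete, here is a minimal PyTorch sketch of the general two-stream idea. This is my own toy illustration, not the authors’ architecture: the layer sizes, the 64×64 grayscale inputs, and how the motion image is produced are all my assumptions. One branch sees the appearance of the back of the hand, the other sees a motion image, and the two feature vectors are fused before classification.

```python
import torch
import torch.nn as nn

class TinyStream(nn.Module):
    """One small CNN branch producing a 32-dim feature vector."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.features(x).flatten(1)  # (batch, 32)

class TwoStreamClassifier(nn.Module):
    """Late fusion of an appearance stream and a motion stream."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.spatial = TinyStream()   # raw grayscale view of the back of the hand
        self.temporal = TinyStream()  # motion image, e.g. weighted frame differences
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, frame, motion):
        fused = torch.cat([self.spatial(frame), self.temporal(motion)], dim=1)
        return self.head(fused)

# Toy usage with dummy 64x64 single-channel inputs (hypothetical sizes).
model = TwoStreamClassifier(num_classes=10)
logits = model(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 10])
```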

To be continued…