A Warm Welcome
I am Keanan Pucci, a rising senior Tisch Game Design major who just so happened to be a PA for Parallux one day and decided to stick around. As a new face at the Future Reality Lab, I’ve spent the past month exploring the different facets of this space and settling into my own work. My place among this collection of innovators is with Parallux, the XR startup focused on large-scale, networked audiences in a shared virtual space. In Cave, audience members could be distinguished by randomly assigned helmets and had agency only over the positions of their hands and bodies. I was brought on with the goal of expanding the audience’s interaction with the content and with each other.
To accomplish this, we wanted to let audience members better explore the space without distracting from the story being told. From the mind of Kris Lang came the idea of letting the player zoom in and effectively create their own closeups of the characters using a pair of opera glasses. This not only gave the audience a new exploration tool, but also reinforced the theatrical nature of the piece. However, implementing this feature took many iterations to get right.
The biggest issue with zooming in a VR user’s display is the motion sickness caused by exaggerated movement. Imagine having binoculars taped to your face: any motion results in an amplified change in the viewing window. Will Monahan and I first attempted to tackle this issue by creating a vignette with a plane that scaled based on the field of view’s zoom factor. However, the non-uniform scaling gave us trouble, so we moved on to writing a custom overlay shader. Even with the vignette, users found the prototype dizzying. So we got rid of fluid zooming in and out, instead choosing fixed zoom levels that the user toggles between. Additionally, we enlarge the vignette the more the user is moving, blocking out more of the screen. We eventually settled on a pair of opera glasses that vignettes the player’s view, with two levels of zoom and dynamic tunnelling to reduce motion sickness.
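The logic behind the fixed zoom levels and dynamic tunnelling can be sketched roughly as follows. This is an illustrative Python sketch, not the actual Parallux implementation; the specific zoom magnifications, gain constants, and function names are my own assumptions for the example.

```python
# Hypothetical sketch of the behavior described above: discrete zoom
# levels instead of fluid zooming, plus a vignette aperture that
# tightens as head motion increases (dynamic tunnelling).

ZOOM_LEVELS = (2.0, 4.0)  # assumed fixed magnifications, not the real values

def toggle_zoom(current_index: int) -> int:
    """Cycle between the fixed zoom levels rather than zooming fluidly."""
    return (current_index + 1) % len(ZOOM_LEVELS)

def vignette_aperture(zoom: float, head_speed: float,
                      base: float = 0.6, motion_gain: float = 0.5,
                      min_aperture: float = 0.15) -> float:
    """Vignette aperture radius in normalized screen units.

    Higher zoom narrows the base aperture, and faster head motion
    narrows it further, blocking out more of the peripheral view
    to reduce motion sickness.
    """
    aperture = base / zoom - motion_gain * head_speed
    return max(min_aperture, aperture)
```

For instance, a stationary user at 2x zoom would see a wider aperture than the same user turning their head quickly, whose view would tunnel down toward the minimum aperture.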
Aside from this feature, I have been given the chance to stretch my skillset while learning from the Mary and the Monster team and the rest of the lab. I worked on developing the AR version of Mary and the Monster for the Magic Leap and learned about its image tracking capabilities. I tried my hand at joining the animation pipeline by denoising motion capture data, which gave me a whole new level of appreciation for the work Pasan and Thomas do. These experiences, along with the well-received opera glasses, have made for an exciting first month at the Future Reality Lab.