One of the first major decisions we had to make in the early stages of CAVE can be summed up as follows: how much agency are we going to give our audience members?
At first, we thought of having all of the audience members standing, free to move around the virtual set in an almost Sleep No More fashion. We would design the stage and points of action to surround the users, guiding them to look around the whole set. However, we soon realized that this freeform structure would come with some problems: Will the audience members know where to look? Will they be more interested in the emergent behavior of interacting with each other than in the story itself? How much risk is involved in having such a large group of people with “the blinders on” in VR (field of view and awareness of others in VR are still unsatisfactory)? We realized that having every audience member stand would only add to the already immense challenge we’ve set for ourselves in this production.
So, with the idea of a standing audience benched, we started toying with seated audience arrangements. We landed on our current arrangement, two “wings” of half the audience each, relatively early, and moved on to the next challenge: what will the audience members be able to do? The first feature I experimented with was an immediate success: where you look in VR is where your representative audience avatar looks.
I used inverse kinematics (IK) to bind the avatar’s arms to its lap, while allowing the chest and head to follow where the user is looking. This quickly generated a sense of presence: the audience (live or AI-driven) could now follow points of action and actually look around the set.
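The exact setup is engine-specific, but the core idea is easy to sketch. Here’s a rough, engine-agnostic version in Python; the chest/head split ratio and joint limits are illustrative assumptions, not our production values, and the real avatar is driven through the IK rig rather than raw angles:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    yaw: float    # radians, positive = turn left
    pitch: float  # radians, positive = look up

def clamp(value: float, lo: float, hi: float) -> float:
    return max(lo, min(hi, value))

def solve_look(hmd_yaw: float, hmd_pitch: float) -> tuple[Pose, Pose]:
    """Split the user's gaze between the avatar's chest and head.

    The chest absorbs a fraction of the yaw so the avatar turns
    naturally at the torso; the head takes the remainder. Both are
    clamped to plausible joint limits. The arms are untouched here;
    the IK rig keeps them bound to the lap regardless of gaze.
    """
    CHEST_SHARE = 0.3                 # fraction of yaw taken by the torso (assumed)
    CHEST_YAW_LIMIT = math.radians(30)
    HEAD_YAW_LIMIT = math.radians(70)
    HEAD_PITCH_LIMIT = math.radians(55)

    chest_yaw = clamp(hmd_yaw * CHEST_SHARE, -CHEST_YAW_LIMIT, CHEST_YAW_LIMIT)
    head_yaw = clamp(hmd_yaw - chest_yaw, -HEAD_YAW_LIMIT, HEAD_YAW_LIMIT)
    head_pitch = clamp(hmd_pitch, -HEAD_PITCH_LIMIT, HEAD_PITCH_LIMIT)
    return Pose(chest_yaw, 0.0), Pose(head_yaw, head_pitch)
```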
While testing this feature, I immediately found I could make an avatar head-bob wildly back and forth. That would definitely distract other users during the show, so I defensively limited how far the avatar can move within any given period of time.
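That dampening is essentially a per-frame cap on angular velocity. A minimal sketch of the idea follows; the 120°/s cap and 60 fps timestep are illustrative assumptions, not our tuned values:

```python
import math

def rate_limited(current: float, target: float,
                 max_speed: float, dt: float) -> float:
    """Move `current` toward `target` by at most max_speed * dt radians.

    Applied every frame to the avatar's displayed head angles, this caps
    the angular velocity other viewers see: someone whipping their head
    around reads as a smooth turn instead of a distraction.
    """
    limit = max_speed * dt
    step = max(-limit, min(limit, target - current))
    return current + step

# Illustrative per-frame update: the user snaps their head 90 degrees,
# but the avatar turns at no more than 120 deg/s.
displayed_yaw = 0.0
hmd_yaw = math.radians(90)
for _ in range(60):  # one second at 60 fps
    displayed_yaw = rate_limited(displayed_yaw, hmd_yaw,
                                 max_speed=math.radians(120), dt=1 / 60)
```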
This opened up a larger conversation for us: how much control will the audience have over their avatars? Will they be able to move the avatars’ hands? What about voice, eyes, and so on?

We ended up harkening back to one of the core tenets of why we are producing CAVE: we want an unprecedented number of people to see the show, with a seamless throughput of viewers. This led us to a more conservative, “lean back” implementation of the audience – no hand waving, head bobbing, or other discrete interactions. The audience will control where their avatar looks, and we’re experimenting with some social interactions, like emotes. Overall, we’re really aiming for CAVE to be a fully “lean back” experience – we want viewers to be fully immersed in the story and able to feel and see the presence of their fellow audience members, but not to interact with the story directly (yet).
That being said, I’ve designed all of the technology being used for CAVE to fully support user input and interaction – our future productions will have all of the scaffolding necessary to add many more levels of immersion and interaction. We hope CAVE will be the first of many immersive, large-scale productions, each one more wild and adventurous than the last!