We have been working on multi-user experiences in mixed reality for a while, because we believe that people want to spend time together.
However, I have mixed feelings about both developing social interaction and enjoying it. Building multi-user experiences, across both hardware and software, has always felt tedious and complicated to me. Co-located experiences are a bit easier because we don't need to run between rooms. For remote social experiences, though, I felt like I was flying, especially when we were making the video for MR Chalktalk. When a co-worker and I reviewed the footage, we found that I kept running among three different locations to trigger both the virtual and physical cameras, check the status of the behavior server, and give hints to the people in the virtual world.
As a user of shared-space experiences, I face a trade-off. I have tried devices ranging from full-body tracking to head-only tracking. If I want the experience to be more realistic, more setup work is required; if I want a ready-to-go experience, I have to give up some feedback. Meanwhile, nearsighted friends keep complaining that they can't fit their glasses inside the headset. (And all I can tell them is to buy a new pair of glasses ASAP 😉)
Thus, I have to think about it. Although we have worked on shared-space experiences for many years, the foundation of the development environment is still a work in progress. People in the lab keep losing time to it because it is not yet well designed and established, and researchers cannot help but lose patience when they fail to devote most of their time to the core work. (As for me, I need to go to Washington Sq to absorb some energy from the sunshine.)
By the way, thanks to Diwakar and Karl, we recorded almost everything we needed for the video.