Living at the Edge
Now that the Quest, together with Connor DeFanti’s calibration scheme, gives us room-scale co-located tracking with no external sensors, we can begin to explore other exciting possibilities. One of these is mounting a small embedded board, such as a Raspberry Pi, with a camera on the front of the Quest, and using it to stream compressed image features (and other data) to a dedicated PC running heavier algorithms: SLAM / SfM, pose estimation, YOLO for object detection, and so on.
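To make the "compressed features over the wire" idea concrete, here is a minimal sketch of a packet format the Pi might use. The wire format, function names, and 32-byte descriptor size (ORB's default) are illustrative assumptions, not something specified in this post; on the Pi itself you would fill the keypoints and descriptors from a real detector (e.g. OpenCV's ORB) and push the blob over a socket to the PC.

```python
# Hypothetical packet format for shipping per-frame features from the
# Pi to the SLAM server: a small header, keypoint coordinates, then the
# raw binary descriptors, all zlib-compressed before hitting the network.
import struct
import zlib

def pack_features(frame_id, keypoints, descriptors):
    """keypoints: list of (x, y) floats; descriptors: bytes
    (e.g. 32 bytes per keypoint for ORB)."""
    payload = struct.pack("<II", frame_id, len(keypoints))
    for x, y in keypoints:
        payload += struct.pack("<ff", x, y)
    payload += descriptors
    return zlib.compress(payload)  # shrink before sending

def unpack_features(blob, desc_size=32):
    """Inverse of pack_features, run on the PC side."""
    payload = zlib.decompress(blob)
    frame_id, n = struct.unpack_from("<II", payload, 0)
    off = 8
    keypoints = []
    for _ in range(n):
        x, y = struct.unpack_from("<ff", payload, off)
        keypoints.append((x, y))
        off += 8
    descriptors = payload[off:off + n * desc_size]
    return frame_id, keypoints, descriptors
```

Sending features rather than video keeps the Pi's uplink small, since a few hundred 32-byte descriptors per frame is far cheaper than even a compressed video stream.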
By using one or more of these Pi / camera units, we can achieve stereo vision without video pass-through on the HMD, while also offloading computation from the headset. We can also double dip into our room-mapping research with a 360° camera fixed on top of our friend Bracey Smith’s Loomo, the friendly vision robot. Loomo lets us remotely control, or automate, the process of roaming a space to collect 360° views for photogrammetric reconstruction of larger spaces, in a more controlled manner than manual collection allows.
Perhaps you’d use the same local server that runs SLAM / SfM at the beginning of a shared experience to communicate with the Pi / camera boards and the Quests, constructing a shared vision of the group’s space in a distributed fashion. Feature extraction is independent of the rest of the pipeline, so even if the pipeline isn’t quite homogeneous end-to-end, it’s possible to reduce unnecessary computation and use the external cameras to fill in the blanks, so to speak.
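The server-side merge step can be sketched as matching binary descriptors between two cameras' feature sets, so overlapping views can be stitched into one shared map. The function names and distance threshold below are illustrative assumptions; a real pipeline would use a proper matcher (e.g. OpenCV's BFMatcher with cross-checking) feeding the SLAM / SfM back end.

```python
# Minimal sketch of matching binary descriptors (e.g. ORB) from two
# Pi cameras by Hamming distance, so the server can find overlap
# between views and fuse them into a shared map.

def hamming(a: bytes, b: bytes) -> int:
    # Number of differing bits between two equal-length descriptors.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def match_descriptors(set_a, set_b, max_dist=40):
    """Greedy nearest-neighbour match: for each descriptor in set_a,
    keep its closest partner in set_b if under the distance threshold.
    Returns (index_a, index_b, distance) triples."""
    matches = []
    for i, da in enumerate(set_a):
        best_j, best_d = -1, max_dist + 1
        for j, db in enumerate(set_b):
            d = hamming(da, db)
            if d < best_d:
                best_j, best_d = j, d
        if best_j >= 0:
            matches.append((i, best_j, best_d))
    return matches
```

This brute-force loop is O(n·m) and only meant to show the idea; at real frame rates you would hand the matching to an optimized library on the PC, which is exactly the point of offloading it from the headset.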
Once you can begin to understand the space you’re in, you can interact with the environment in many different ways: physical manipulables, interactive procedural characters walking around on the floor and room furniture (more to come on this), dynamic environment sensing, and more. The possibilities are boundless with Edge Computing.