A Quick and Easy Calibration Method

For several years now, our lab has been taking people in the same physical space and placing them in a shared virtual space. One of the tricky things about this is that most virtual reality systems have some sort of built-in tracking, but no shared ground truth. That is, if we put two users in the same virtual world, each user might see the objects in the room in totally different positions! For sharing spaces, that is no good.

Much of our past research and development has revolved around calibrating systems so that people can share spaces. Sometimes we have relied on external systems, like an OptiTrack motion capture setup, to provide that ground truth. However, those are quite expensive and not very portable, so we wanted to find a more flexible solution.

Recently, we have been working with VR devices that have full positional tracking: you can look around, walk around, and the device will track you locally. However, multiple people in the same space still don’t share a coordinated virtual world! To synchronize them, the users must agree on two things: where the center of the room is (position), and where the front of the room is (rotation).

There are several ways to address this. The first idea we had, and possibly the simplest, is to have each user stand at a fixed point, looking in an agreed direction. That way, everyone can set that point to be the “center” of the room, and their facing direction can set where the “front” of the room is. If everyone does this exactly right, they will all share the same virtual space.
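As a rough sketch of that single-point approach (in Python; the function and parameter names here are ours for illustration, not from any particular VR SDK), calibration amounts to building a transform from the user’s tracked position and heading:

```python
import math

def one_point_calibration(stand_xz, facing_rad):
    """Single-point calibration sketch: the user stands on the agreed spot,
    facing the agreed 'front' of the room. Their tracked position becomes
    the shared origin, and their heading becomes the shared forward axis."""
    cos_t = math.cos(-facing_rad)  # rotate by -heading to undo the user's yaw
    sin_t = math.sin(-facing_rad)

    def to_shared(pt_xz):
        # Translate so the agreed spot is the origin, then rotate so the
        # user's facing direction becomes the shared "front" of the room.
        x = pt_xz[0] - stand_xz[0]
        z = pt_xz[1] - stand_xz[1]
        return (cos_t * x - sin_t * z, sin_t * x + cos_t * z)

    return to_shared
```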

One small problem with the above: standing on the same spot is easy, but facing exactly the same direction is a little bit trickier. Very slight differences in rotation cause large errors once you start to move around, because the error grows with your distance from the calibration point. Even being off by 5 degrees will create several inches of error at the boundaries of your room!
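To put a rough number on that (a back-of-the-envelope estimate, not a measurement from our system): a point at distance d from the calibration spot is displaced by about 2·d·sin(θ/2) when your heading is off by angle θ.

```python
import math

d = 2.0                    # distance from calibration point to room boundary (m)
theta = math.radians(5)    # a 5-degree heading error
error = 2 * d * math.sin(theta / 2)   # chord length; ~ d * theta for small angles
print(f"{error:.3f} m, or about {error / 0.0254:.1f} inches")  # ~0.175 m, ~6.9 in
```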

So, given that positioning yourself accurately is relatively easy compared to facing a direction accurately, how can we take advantage of that to quickly and accurately calibrate our room?

Suppose we mark two points on the floor of the actual room, and each user walks to each one, pressing a button upon arrival. The program records where each user’s tracking system thinks those two points are in its own virtual space. Since the two points define a line segment, comparing that segment across coordinate systems gives us both the difference in position and the difference in rotation between the virtual and physical room.
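Here is a minimal sketch of that computation in Python, assuming flat-floor (x, z) coordinates and that the only rotation between the two coordinate systems is yaw. The names are ours for illustration, not from any particular SDK:

```python
import math

def two_point_calibration(p1, p2, q1, q2):
    """Return a function mapping tracked (x, z) points onto the shared room
    coordinates, given the two agreed markers p1, p2 (shared coordinates)
    and the positions q1, q2 the tracking system recorded at button press."""
    # The two markers define a line; comparing its heading in each
    # coordinate system gives the yaw correction.
    shared_heading = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    local_heading = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    theta = shared_heading - local_heading

    cos_t, sin_t = math.cos(theta), math.sin(theta)

    def rotate(pt):
        return (cos_t * pt[0] - sin_t * pt[1],
                sin_t * pt[0] + cos_t * pt[1])

    # Whatever offset remains after rotating the first marker is the
    # translation between the two coordinate systems.
    r1 = rotate(q1)
    tx, tz = p1[0] - r1[0], p1[1] - r1[1]

    def to_shared(pt):
        x, z = rotate(pt)
        return (x + tx, z + tz)

    return to_shared

# Example: markers 3 m apart along the shared x-axis; this user's tracking
# origin happens to be offset and rotated 90 degrees.
to_shared = two_point_calibration((0, 0), (3, 0), (1, 2), (1, 5))
print(to_shared((1, 5)))  # ~(3, 0): the second marker lands where it should
```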

This is a very quick process, and quite accurate too! If you place the two markers at the boundaries of your space, then the error throughout your space will be bounded by your positional accuracy. In other words, if you can place yourself within one inch of each marker, then your virtual position anywhere in the room will be no more than about an inch away from your real position.
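As a quick numerical sanity check of that claim (reusing two_point_calibration from the sketch above, so this is our illustration rather than a measurement), we can jitter each simulated button press by up to an inch and watch how far points in the room drift:

```python
import math, random

random.seed(0)
p1, p2 = (-2.0, 0.0), (2.0, 0.0)   # markers at opposite boundaries of the room

def press(p, r=0.0254):
    # Simulate standing within ~1 inch of the marker when pressing the button.
    ang = random.uniform(0, 2 * math.pi)
    rad = random.uniform(0, r)
    return (p[0] + rad * math.cos(ang), p[1] + rad * math.sin(ang))

worst = 0.0
for _ in range(1000):
    # Tracking and room coordinates already agree here, except for press
    # error, so any drift after recalibration comes purely from that error.
    to_shared = two_point_calibration(p1, p2, press(p1), press(p2))
    for pt in [(-2, 0), (0, 0), (2, 0), (1.0, 0.5)]:
        x, z = to_shared(pt)
        worst = max(worst, math.hypot(x - pt[0], z - pt[1]))
print(f"worst drift over 1000 trials: {worst * 100:.1f} cm")
```

The drift stays on the same order as the press accuracy, which matches the intuition above.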

We tried applying this to a model of the lab we created a couple of years back, and the results were great! It was a very strange experience to stand in our lab space, put on a headset, and still be exactly where I was… just in a lower-polygon version of it. As a next step, we’ll be able to get people interacting together in the same space!
