Re-imagining the MoCap workflow in light of emerging technologies

The following recaps my term project to create a motion capture suite geared toward VR. Links to the source code can be found at the end of the post. Credit to S. Herscher for the problem statement and ideas.

– N. G. Vitovitch

Large-scale motion capture environments are prohibitively expensive; at the time of this writing, the premier optical tracking solution produced by OptiTrack costs in excess of $5,999.99 per camera, with recommended configurations utilizing many cameras! Small-scale configurations (featuring fewer, lower-quality cameras) incur a smaller cost upfront, but inherently produce a greater volume of shorter captures, which incurs additional post-production costs (this lab is familiar with these costs; “Cave” was produced in-house using a 10′×10′×10′ capture volume and dozens of hours of animation editing).

Typically it is the responsibility of the production crew to monitor motion capture actors and mitigate continuity errors between captures (e.g., matching foot position, body posture, and head direction). Not all errors can be fixed with inverse kinematics! It can be particularly difficult to preserve the stage’s layout between captures when an actor’s movements collectively exceed the size of the capture volume. Suddenly someone needs to track and re-orient props on the stage whenever the actor’s starting position changes. Things get tedious very quickly!
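To make the idea of a continuity error concrete, here's a rough sketch (in Python, purely illustrative; nothing here is StageDive's actual code, and the joint names are made up) of one way to score the discontinuity between two takes: compare the actor's final pose in one clip against the starting pose of the next, as an RMS distance over corresponding joints.

```python
import math

def pose_discontinuity(end_pose, start_pose):
    """Root-mean-square distance between corresponding joints of two
    poses, each given as a dict mapping joint name -> (x, y, z) in
    meters. A large value flags a continuity error between takes."""
    joints = end_pose.keys() & start_pose.keys()
    if not joints:
        raise ValueError("poses share no joints")
    total = 0.0
    for j in joints:
        ax, ay, az = end_pose[j]
        bx, by, bz = start_pose[j]
        total += (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
    return math.sqrt(total / len(joints))

# Example: the feet match between takes, but the head drifted 0.3 m.
take1_end   = {"foot_l": (0.0, 0.0, 0.0), "head": (0.0, 1.7, 0.0)}
take2_start = {"foot_l": (0.0, 0.0, 0.0), "head": (0.3, 1.7, 0.0)}
score = pose_discontinuity(take1_end, take2_start)
```

A tool can surface this score to the crew between takes, so the actor re-positions before the camera rolls rather than an animator patching it up afterward.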

The challenge of maintaining a virtual stage is well situated for a software solution. And that’s what got us thinking about StageDive. StageDive is a motion capture application, built on top of the Unity 3D game engine and OptiTrack’s Motive capture suite, designed with the goal of mitigating these post-production costs by providing an infinitely large virtual capture stage and tools to measure and minimize capture-to-capture discontinuity errors.

The software is still in its infancy, but out of the box we have the tools to capture, edit, and compose raw motion capture data. I’m currently working on more sophisticated tools for manipulating and combining animations. Lastly (and as teased in the title) I’m planning on integrating our VR SDK; I want to allow users to build up their virtual stage, play with lighting, and get up-close and personal while compositing animations.
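The “infinitely large virtual stage” boils down to a simple idea: a finite physical volume yields finite takes, but each new take can be translated so it picks up where the last one left off. Here's a minimal sketch of that re-anchoring step (illustrative Python, not StageDive's implementation; the `root` joint name and frame format are assumptions):

```python
def reanchor(clip, target_start):
    """Translate every frame of a capture clip so the root joint's
    first-frame position lands on target_start.
    clip: list of frames; each frame maps joint name -> (x, y, z)."""
    ox, oy, oz = clip[0]["root"]
    tx, ty, tz = target_start
    dx, dy, dz = tx - ox, ty - oy, tz - oz
    return [
        {j: (x + dx, y + dy, z + dz) for j, (x, y, z) in frame.items()}
        for frame in clip
    ]

# Stitch take B onto the end of take A on the virtual stage.
# Both takes start at the physical volume's origin, but on the
# virtual stage take B continues from where take A ended.
take_a = [{"root": (0.0, 0.0, 0.0)}, {"root": (2.0, 0.0, 0.0)}]
take_b = [{"root": (0.0, 0.0, 0.0)}, {"root": (1.0, 0.0, 0.0)}]
take_b_shifted = reanchor(take_b, take_a[-1]["root"])
stitched = take_a + take_b_shifted
```

A real version would also rotate the clip to match heading and re-place props relative to the new origin, which is exactly the bookkeeping the post argues shouldn't fall on a human.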

I will continue to mature StageDive this spring. If you feel inspired, share your thoughts in the comments below or throw an issue up on GitHub!

https://github.com/nussetorten/StageDive

Happy 2019!

Nick Vitovitch