Recently I have been doing a lot of 3D modeling for CAVE – creating assets and texturing them. But for a while, my role in the production was to clean up the mocap performance data of our main character. I enjoyed the work; it was rewarding to put a horribly contorted body back to normal. But it wasn’t without its fair share of grievances, and I found myself thinking a lot about how to solve some of the major problems I ran into.
Motion capture (mocap) is a way to digitally record human movement. That movement data is then mapped to a 3D character so that it moves like the actor you recorded. The mocap software we use in the lab, Optitrack Motive, is connected to 12 infrared cameras that track markers placed on specific points on a mocap suit. For the production of CAVE, we were able to connect 12 more cameras to the system to make our performance captures more accurate and lessen the chances of losing markers to occlusion. Occlusion, in this context, is when a marker is blocked from the view of all or most of the cameras – so the system has no idea how that marker is moving.

Having more eyes on the markers helped reduce this problem and usually gave us 90% clean animation to work with. But those last few percent of missing mocap data are the most difficult to fill, either because the software confused a few different markers with each other, because a marker was lost for an entire performance, or because a marker became occluded. The parts of a capture that are most often lost are the hands and feet. These markers are very easily occluded – the feet when a person sits on their knees, and the hands when the actor is holding an object, for example. At that point, it becomes almost impossible to fix.
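To make that a bit more concrete: when a marker drops out for a handful of frames, cleanup tools can fill the hole from the frames on either side, but the longer the gap, the less the surrounding data tells you about what actually happened. Below is a rough sketch of the simplest possible gap fill – nothing to do with Motive's actual algorithms, and the array layout and gap threshold are just assumptions for illustration.

```python
import numpy as np

def fill_short_gaps(marker_xyz, max_gap=10):
    """Linearly interpolate short runs of missing marker positions.

    marker_xyz: (n_frames, 3) array of one marker's positions, with NaN
    rows wherever the marker was occluded. Gaps longer than max_gap
    frames are left untouched, because a straight line between distant
    endpoints is rarely a believable motion for a hand or a foot.
    """
    filled = marker_xyz.copy()
    missing = np.isnan(filled).any(axis=1)

    n = len(filled)
    frame = 0
    while frame < n:
        if not missing[frame]:
            frame += 1
            continue

        # Measure the extent of this gap.
        start = frame
        while frame < n and missing[frame]:
            frame += 1
        end = frame  # first tracked frame after the gap (or n)

        # Only fill gaps that are bracketed by tracked frames
        # and short enough to trust a simple interpolation.
        if start > 0 and end < n and (end - start) <= max_gap:
            for axis in range(3):
                filled[start:end, axis] = np.linspace(
                    filled[start - 1, axis], filled[end, axis],
                    end - start + 2,
                )[1:-1]
    return filled
```

Real cleanup tools do something smarter than a straight line (splines, or constraining the marker against its rigid neighbors), but the limitation is the same: once the hand or foot markers stay hidden for long enough, no amount of interpolation can recover what the fingers or toes were really doing.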
I mention all this because I realized that there might be a way to solve the problem of losing the hands. Zhu Wang recently spoke about his work with the Rokoko Smartsuit and how he added LeapMotion to it in order to track his hands. I asked myself: could we add the LeapMotion sensors, and the data they capture, to the Optitrack system? While I don’t know how it would be done or whether it’s even possible, having a dedicated way to track and record the hands’ movements would be very helpful in mocap cleanup. It could cut down on the time it takes to complete the overall cleanup and, perhaps, give us more time to iterate on a performance. We also wouldn’t have to spend a large amount of time redoing animation in other software such as Autodesk Maya. Most motion capture systems use a minimal number of markers on the hands because of the occlusion problem, so productions have to rely on video reference in order to properly animate fingers. Even if there were no way to connect the LeapMotion data with Optitrack, simply having the recorded movements from the LeapMotion would be invaluable to an animator. The only problem that comes to mind with this solution is possible interference between the LeapMotion sensor and the IR cameras used in the Optitrack system.
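To be clear about what I'm imagining: take the wrist position and orientation from the Optitrack skeleton, take the finger joints from the LeapMotion expressed relative to that wrist, and compose the two every frame. I have no idea what either SDK actually exposes, so the data structure and function below are entirely made up – a sketch of the idea, assuming the two streams could somehow be time-synchronized and calibrated into a common space.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class HandFrame:
    """One time-synchronized frame of data (a hypothetical structure)."""
    wrist_pos: np.ndarray            # (3,) wrist position from the Optitrack skeleton
    wrist_rot: np.ndarray            # (3, 3) wrist rotation from Optitrack
    finger_joints_local: np.ndarray  # (n_joints, 3) LeapMotion joints,
                                     # expressed relative to the wrist


def merge_hand_onto_body(frame: HandFrame) -> np.ndarray:
    """Return LeapMotion finger joints in Optitrack world space.

    The body capture tells us where the wrist is; the LeapMotion tells
    us where each finger joint is relative to that wrist. Composing the
    two gives world-space finger positions that follow the body.
    """
    # Rotate each local joint by the wrist orientation, then translate.
    return frame.finger_joints_local @ frame.wrist_rot.T + frame.wrist_pos
```

The hard parts are exactly the things this sketch assumes away: synchronizing the two devices' clocks, calibrating the LeapMotion's coordinate frame against the Optitrack wrist, and the possible IR interference mentioned above.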
Another possible way to get around the occlusion issue is to use something like a Rokoko suit along with Optitrack. This option would mean tracking the performer in two ways at once: the inertial sensors inside the suit would tell the software roughly where the limbs are, while the external markers would work the same way they do now and provide more detailed information. On one hand, this could solve the occlusion problem and improve the overall quality of the tracking; on the other, it would be expensive, and it wouldn’t necessarily give us more detailed hand animation.
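I don't know whether Motive or Rokoko Studio actually supports combining their data, so here is just a back-of-the-envelope sketch of the fusion idea under a big assumption: that we somehow have both an optical and an inertial estimate of the same point on the body, frame by frame, in the same coordinate space.

```python
import numpy as np


def fuse_positions(optical_pos, optical_tracked, inertial_pos):
    """Frame-by-frame fallback from optical to inertial tracking (a sketch).

    optical_pos:     (n_frames, 3) positions of a point from the optical system
    optical_tracked: (n_frames,) booleans, False where the marker was occluded
    inertial_pos:    (n_frames, 3) the same point as estimated from the
                     suit's inertial sensors

    While the marker is visible we trust the optical data. When it is
    occluded we fall back to the inertial estimate, shifted so the two
    agree at the moment the marker was lost – inertial sensors drift
    over time, but they never lose sight of anything.
    """
    fused = optical_pos.copy()
    offset = np.zeros(3)
    for f in range(len(fused)):
        if optical_tracked[f]:
            # Re-anchor the inertial estimate whenever the optical data is
            # good, cancelling whatever drift has accumulated so far.
            offset = optical_pos[f] - inertial_pos[f]
        else:
            fused[f] = inertial_pos[f] + offset
    return fused
```

Real optical-inertial fusion is far more sophisticated than this (it filters orientations as well as positions), but even a crude version shows why the combination is attractive: the inertial data never goes missing, and the optical data keeps it from drifting.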
In the end, these are just a couple of thoughts I’ve had while trying to figure out how to fill in missing mocap data. But I think they’re worth considering, especially when creating something like CAVE.
I wonder: could the live tracker use the motion capture not like a master puppeteer controlling every movement of its doll on every frame, but rather like a ghost that inhabits a living(-like) creature and directs its motion? In other words, a kind of AI-driven animation tool that takes heavy suggestions from the actor but quietly keeps control, in order to bypass these signal errors.
I don’t know, maybe I’m rambling.