Over the last two weeks, Thomas and I have been figuring out how to get good facial capture results with Faceware. I mentioned in my previous post that I was really looking forward to working with the tech and that it seemed pretty straightforward. And I was right.
The software is very easy to use and can help create an entire performance in the course of a few hours. All it requires is a decent facial rig to place the motion capture data onto – which we had, thanks to the changes Thomas made to Ayara’s face. The process involves recording an actor using the headset with a mounted GoPro camera provided by Faceware (or any other clear footage of a person’s face, shot on any camera), using the Analyzer software to train Faceware to find the person’s eyes, nose, brows, and mouth, and then retargeting that data onto a face rig.
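If you’re wondering what retargeting actually boils down to, here’s a rough conceptual sketch in plain Python – this is not Faceware’s API, and the parameter names, control names, and weights are all made up for illustration – of the core idea: tracked facial values drive weighted blends of artist-tuned rig poses.

```python
# Conceptual sketch only -- not Faceware's actual API. All names and
# weights below are hypothetical.

# Tracked parameters for one frame, normalized 0..1 by the analysis step
# (e.g. how open the jaw is, how pursed the lips are).
tracked_frame = {"jaw_open": 0.7, "lip_pucker": 0.1, "brow_raise": 0.4}

# Retargeting poses: for each tracked parameter, which rig controls it
# drives and by how much. An artist tunes these per character.
pose_library = {
    "jaw_open":   {"jawCtrl.ty": -1.2, "mouthCornerL.ty": -0.3},
    "lip_pucker": {"lipsCtrl.pucker": 1.0},
    "brow_raise": {"browCtrlL.ty": 0.8, "browCtrlR.ty": 0.8},
}

def retarget(frame, poses):
    """Blend pose deltas, weighted by the tracked parameter values."""
    rig_values = {}
    for param, weight in frame.items():
        for control, delta in poses.get(param, {}).items():
            rig_values[control] = rig_values.get(control, 0.0) + weight * delta
    return rig_values

print(retarget(tracked_frame, pose_library))
# {'jawCtrl.ty': -0.84, 'mouthCornerL.ty': -0.21, 'lipsCtrl.pucker': 0.1, ...}
```

Run this per frame and you get animation curves on the rig’s controls – which is why the quality of the rig’s poses matters so much to the final result.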
For our testing, Susan Darvishi graciously agreed to be our actor (thank you Susan!), and we quickly figured out how to create an ideal recording. The camera must be placed slightly below the center of the face, with the nostrils barely visible. This gives a clear view of the mouth and eyes, which are harder to record and analyze than the eyebrows. The actor’s performance must also be somewhat exaggerated when making specific sounds and mouth shapes such as MBP, O, and EE. These shapes are not only very important in traditional facial animation – to sell the idea that a person is speaking – but are also the ones the software has the most trouble picking up. Beyond those directions, the actor’s performance doesn’t need to be constrained. Retargeting the performance onto a facial rig means some of the smallest nuances get lost (due to technical limitations of the facial rigs we can use), but most of it survives, and it’s actually very easy to tell whose performance it is.
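As a side note, when those shapes still come through a little soft even with an exaggerated performance, the retargeted curves can simply be scaled up in cleanup. Here’s a minimal sketch of what that might look like in Maya – the control and attribute names are hypothetical, since every rig names its MBP/O/EE controls differently.

```python
# Maya-side cleanup sketch (hypothetical control/attribute names --
# your rig's viseme controls will be named differently).
import maya.cmds as cmds

# Controls that drive the MBP, O, and EE mouth shapes on the rig.
viseme_attrs = [
    "mouthCtrl.lipsTogether",   # MBP
    "mouthCtrl.pucker",         # O
    "mouthCtrl.wide",           # EE
]

for attr in viseme_attrs:
    # Scale each retargeted curve 20% hotter around its zero pose
    # so the mouth shapes read more clearly.
    cmds.scaleKey(attr, valueScale=1.2, valuePivot=0.0)
```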
I’m really happy with the software and I hope to use it more soon.