Time to share a new paper 😉
(I have to say it is quite annoying that I cannot just share what I have been doing in the meantime, since the idea is still a work in progress.)
Today I wanna talk about the paper “Online Optical Marker-based Hand Tracking with Deep Labels”, which describes a hand tracking system based on OptiTrack markers attached to a customized glove. It reminds me of the many different approaches I have tried during my research across different labs.
The hand is a classic and difficult problem. If I remember correctly, it has 27 degrees of freedom in total, which is crazy. What I tried at the beginning was bare-hand detection. At that time I was against the idea of wearing a data glove (and I still am), simply because it feels unnatural. But I cannot deny that sensors and markers have pushed hand tracking forward a lot. And now we have so many CNN approaches that need ground-truth training data, i.e. sensor/marker information. It feels like centuries ago that I used traditional computer vision methods for segmentation, detection and tracking.
This paper proposes a technique that frames the marker labeling problem as a keypoint regression problem, which makes it amenable to a CNN-based solution. When some markers are occluded because of the viewpoint or the pose, which is quite common for hands, keypoint regression can help recover their positions. I am still thinking about how to push this marker-based idea toward a marker-less approach: how to make use of the marker-based captures to construct a hand model that is useful later on. That is where the future lies.
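To make the keypoint-regression framing concrete, here is a minimal sketch (not the paper’s actual architecture): assume the unlabeled markers are rasterized into a small depth-like image, and a tiny CNN regresses one (x, y) location per marker label, occluded or not. The marker count, image size, and layer sizes are illustrative assumptions of mine.

```python
# Sketch only: a small CNN that maps a rasterized marker image to one (x, y)
# keypoint per marker label. All sizes/names below are illustrative, not from the paper.
import torch
import torch.nn as nn

NUM_MARKERS = 19          # assumed number of labeled glove markers
IMG_SIZE = 64             # assumed raster resolution of the marker image

class MarkerKeypointNet(nn.Module):
    def __init__(self, num_markers=NUM_MARKERS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        )
        # Regress an (x, y) coordinate for every marker label, even occluded ones.
        self.head = nn.Linear(64 * 8 * 8, num_markers * 2)

    def forward(self, x):                  # x: (B, 1, 64, 64) rasterized markers
        h = self.features(x).flatten(1)
        return self.head(h).view(-1, NUM_MARKERS, 2)

# One training step: L2 loss between predicted and ground-truth keypoints
# (here on random tensors just to show the shapes).
model = MarkerKeypointNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(8, 1, IMG_SIZE, IMG_SIZE)     # fake batch of marker images
targets = torch.rand(8, NUM_MARKERS, 2)           # fake ground-truth (x, y) per label
loss = nn.functional.mse_loss(model(images), targets)
opt.zero_grad(); loss.backward(); opt.step()
```

Because the network always outputs a coordinate for every label, an occluded marker still gets a predicted position, which is exactly what makes the regression formulation attractive here.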
By the way, last time I talked a lot about deep learning; however, after taking a closer look at that paper, it is actually an SVM 🙁