First look at OpenPose
Today, I got the OpenPose library up and running on two of my machines.
OpenPose uses deep learning, in the form of convolutional neural networks, to estimate human poses from single-camera footage.
My first attempt – implementing OpenPose on my laptop
For this first attempt, I compiled OpenPose on my laptop, which runs Ubuntu 18.04 and has an integrated Intel graphics card.
It is possible to run OpenPose on a system without an NVIDIA GPU, although it is very slow when compiled to run in CPU mode.
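For reference, a CPU-only build is selected through OpenPose's CMake configuration. This is a sketch based on the standard build steps; exact flag names can vary between OpenPose versions, so check the installation docs for your release:

```shell
# Clone OpenPose and configure a CPU-only build (no CUDA required).
git clone https://github.com/CMU-Perceptual-Computing-Lab/openpose.git
cd openpose
mkdir build && cd build
# GPU_MODE=CPU_ONLY disables the CUDA code paths; slow, but runs anywhere.
cmake -DGPU_MODE=CPU_ONLY ..
make -j"$(nproc)"
```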
The stats in the image below show that it's processing the footage from my webcam at 0.0 frames per second. In reality, OpenPose is generating a new frame approximately every 30 seconds, so the reported 0.0 fps is closer to 0.03 fps.
Even though it’s working slowly, OpenPose is able to identify Sebastian’s pose when I am standing close to him.
It’s also able to identify Sebastian’s pose and Susan’s pose when they are partially occluded.
Running OpenPose in CPU mode is resource intensive: two of my laptop's CPU cores are at 100% utilization, while the other two still have some spare capacity.
My second attempt – implementing OpenPose on my desktop
For my second attempt, I compiled OpenPose on an Ubuntu 16.04 desktop with an NVIDIA GeForce 1060 graphics card using the CUDA libraries.
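The CUDA build follows the same steps, just with a different GPU mode. Again a sketch, assuming the NVIDIA driver, CUDA, and cuDNN are already installed; the demo binary path below is the standard location after a successful build:

```shell
# Configure a CUDA-accelerated build of OpenPose.
cd openpose
mkdir -p build && cd build
cmake -DGPU_MODE=CUDA ..   # CUDA is also the default GPU mode
make -j"$(nproc)"
# Run the demo against the default webcam (from the openpose root).
cd .. && ./build/examples/openpose/openpose.bin
```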
Here it’s running at 8.6 FPS – 287 times faster than on my laptop.
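The 287× figure follows from the rounded CPU-mode rate: one frame every ~30 seconds is about 0.03 fps, and 8.6 fps divided by 0.03 fps comes out to roughly 287. A quick check of the arithmetic:

```python
# One frame every ~30 seconds, rounded to two decimal places.
cpu_fps = round(1 / 30, 2)   # 0.03 fps
gpu_fps = 8.6                # rate reported on the desktop GPU build
speedup = gpu_fps / cpu_fps
print(cpu_fps, round(speedup))  # 0.03 287
```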