Since we got a pair of Intel RealSense depth cameras, I have been working on a pipeline for volumetric capture at the lab. Capturing footage from multiple depth cameras, however, requires a way to stitch the output point clouds together, and that is where my investigation began.
First, I created a way to decode the bag files exported from the RealSense depth cameras. These files contain all the information needed to combine point clouds: the RGB texture and the xyz coordinates derived from the depth map values. I initially attempted to use ICP registration to stitch multiple point clouds together, but quickly found that it fails unless the point clouds are already roughly aligned and share significant (50%+) overlap. That's when I turned to camera calibration.
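The core of the decoding step is back-projecting each depth pixel into a 3D point. Here is a minimal numpy sketch of that step, assuming a standard pinhole model with hypothetical intrinsics (`fx`, `fy`, `cx`, `cy`); the real pipeline would pull these values from the bag file's stream metadata:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into camera-space XYZ points
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Toy example: a 2x2 depth map with every pixel 1 m away
depth = np.ones((2, 2))
pts = depth_to_pointcloud(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

With real footage the same function runs over the full depth frame, producing one XYZ point per valid pixel.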
Zhenyi and I then worked together on streamlining a pipeline for camera calibration. Once we had the stereo calibration between the two cameras, we had the matrix transformations needed to bring one point cloud into the world space of the other!
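Applying that transformation is a single matrix operation: stereo calibration yields a rotation `R` and translation `t` from one camera's frame to the other's, and each point is mapped as `p' = R p + t`. A small sketch, with a made-up extrinsic (90° rotation about Z plus a 1 m shift) standing in for the calibrated values:

```python
import numpy as np

def to_world(points, R, t):
    """Map Nx3 camera-B points into camera A's frame via p' = R @ p + t,
    vectorized over all points."""
    return points @ R.T + t

# Hypothetical extrinsics: rotate 90 degrees about Z, translate 1 m along X
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.0, 0.0, 0.0])

p = np.array([[1.0, 0.0, 0.0]])
p_world = to_world(p, R, t)  # the point seen by camera B, in camera A's frame
```

After this step the two clouds live in one coordinate system and can simply be concatenated.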
I was also able to map the RGB textures onto the merged point cloud.
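The texture-mapping step boils down to projecting each 3D point back into the color image and sampling a pixel. A minimal sketch, assuming the points are already in the color camera's frame and using nearest-pixel sampling with hypothetical intrinsics:

```python
import numpy as np

def sample_colors(points, rgb, fx, fy, cx, cy):
    """Project camera-space XYZ points into the color image and sample
    a per-point RGB value (nearest pixel). Out-of-frame points get black."""
    u = np.round(points[:, 0] * fx / points[:, 2] + cx).astype(int)
    v = np.round(points[:, 1] * fy / points[:, 2] + cy).astype(int)
    h, w, _ = rgb.shape
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((len(points), 3), dtype=rgb.dtype)
    colors[valid] = rgb[v[valid], u[valid]]
    return colors
```

In practice the depth and color streams have different intrinsics and a depth-to-color extrinsic, so the points must first be transformed into the color camera's frame before projecting.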