1. This week I found a paper, SDM-NET: Deep Generative Network for Structured Deformable Mesh. Its main idea is to decompose a 3D object into parts, each of which can be obtained by deforming a simple object homeomorphic to it. The network also learns support and symmetry information from the input. The part structures and geometries are jointly encoded into a latent space by an autoencoder, and the decoder can reconstruct a high-quality model from the latent code.
In my opinion, its main idea is somewhat similar to AtlasNet, and the deformation idea resembles Pixel2Mesh: the former separates the model into many parts, while the latter deforms a homeomorphic object. It is not a paper about 3D reconstruction; it is a VAE for 3D models. But I think it may still be helpful.
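To make the encode/decode idea above concrete, here is a minimal numpy sketch of that structure, not the paper's actual network: the dimensions, the per-part feature layout (geometry features plus support/symmetry flags), and the linear encoder/decoder are all assumptions chosen only to show how per-part structure and geometry can be jointly mapped to one latent code and back.

```python
import numpy as np

rng = np.random.default_rng(0)

n_parts = 4      # parts the object is decomposed into (assumed)
geom_dim = 32    # per-part geometry feature size (assumed)
struct_dim = 8   # per-part structure flags, e.g. support/symmetry (assumed)
latent_dim = 16  # size of the joint latent code (assumed)

in_dim = n_parts * (geom_dim + struct_dim)

# Random linear maps standing in for the learned encoder/decoder networks.
W_enc = rng.standard_normal((latent_dim, in_dim)) * 0.01
W_dec = rng.standard_normal((in_dim, latent_dim)) * 0.01

def encode(parts):
    # parts: (n_parts, geom_dim + struct_dim) -> one latent code (latent_dim,)
    return np.tanh(W_enc @ parts.ravel())

def decode(z):
    # latent code -> reconstructed per-part geometry + structure features
    return (W_dec @ z).reshape(n_parts, geom_dim + struct_dim)

parts = rng.standard_normal((n_parts, geom_dim + struct_dim))
z = encode(parts)
recon = decode(z)
print(z.shape, recon.shape)  # (16,) (4, 40)
```

The real model would train these maps with a reconstruction loss (plus the VAE terms), but the shape bookkeeping is the same.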
2. NVIDIA just released a new tool for 3D deep learning called Kaolin, which is based on PyTorch. It may help a lot. Their GitHub repo link is: https://github.com/NVIDIAGameWorks/kaolin .