
3D Registration (Semester Project @ TUM)

Project overview

Our project focuses on finding point correspondences between rigid 3D objects and a canonical model. To this end, we conducted a thorough review of related work and explored several approaches. We experimented with template matching but found it inferior to deep-learning methods, and after careful consideration we chose SurfEmb's architecture for the task.

To enable dense correspondence, we applied artificial textures to our models, creating our own datasets from five patterns plus one non-textured variant. We then ran inference on the new datasets to qualitatively demonstrate the effect, and evaluated the results both quantitatively and qualitatively to show the performance gain from applying textures. Our work shows that deep-learning methods such as SurfEmb's architecture can significantly improve the accuracy of point-correspondence estimation for 3D objects, especially when combined with artificial texture.

Install requirements

Download surfemb:

$ git clone https://github.com/WenzhaoTang/3D-Registration.git
$ cd surfemb

Install conda, create a new environment called surfbase, and activate it before installing the requirements:

$ conda create --name surfbase python=3.8
$ conda activate surfbase
$ pip install -r requirements.txt
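After installing, a quick sanity check can confirm that the main dependencies resolve inside the environment. This is a minimal sketch; the package names below are assumptions about what requirements.txt pulls in, so adjust the list to match yours:

```python
import importlib.util

def check_packages(names):
    """Return a dict mapping each package name to whether it can be imported."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

# torch, trimesh, and numpy are assumed entries from requirements.txt
status = check_packages(["torch", "trimesh", "numpy"])
for name, found in status.items():
    print(f"{name}: {'ok' if found else 'MISSING'}")
```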

Prepare Datasets

Texture 1 Texture 2 Texture 3 Texture 4 Texture 5

Here's a list of download links for the patterns displayed above, in the order shown:

Pattern 1, Pattern 2, Pattern 3, Pattern 4, Pattern 5, No Texture

Original datasets can be downloaded through the following link in accordance with the BOP's format: Original Bop.

Extract the datasets under data/bop (or make a symbolic link).
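A small helper can verify that a dataset was extracted to the expected place. This is a sketch: it assumes the BOP convention of per-scene folders containing a scene_gt.json at dataset/split/scene depth, so your split names and depth may differ:

```python
from pathlib import Path

def find_bop_scenes(root):
    """Yield scene directories under the BOP root that contain a scene_gt.json."""
    root = Path(root)
    for scene_gt in sorted(root.glob("*/*/*/scene_gt.json")):
        yield scene_gt.parent

if __name__ == "__main__":
    scenes = list(find_bop_scenes("data/bop"))
    print(f"found {len(scenes)} scene folder(s)")
```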

The following images show the rendered objects with the selected patterns applied:

Training

To observe the differences, train models with varying numbers of epochs. Configure the following logging in the training script:

import wandb
wandb.log({'epoch': num})  # num = 5, 10, or 20

| number of epochs | convergence      | perceptibility of differences |
|------------------|------------------|-------------------------------|
| 20               | converged        | imperceptible                 |
| 10               | near convergence | barely noticeable             |
| 5                | no convergence   | obvious                       |

The following figure illustrates this concept:

Train a model with:

$ python -m surfemb.scripts.train [dataset] --gpus [gpu ids]

For example, to train a model on the non-textured T-LESS dataset on cuda:0:

$ python -m surfemb.scripts.train tlessnonetextured --gpus 0

Inference data

We use the detections from CosyPose's MaskRCNN models, and sample surface points evenly for inference.
For ease of use, this data can be downloaded and extracted as follows:

$ wget https://github.com/rasmushaugaard/surfemb/releases/download/v0.0.1/inference_data.zip
$ unzip inference_data.zip

OR

Extract detections and sample surface points

Surface samples

First, flip the normals of ITODD object 18, which is inside out.
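Flipping the normals amounts to reversing each face's vertex winding. A minimal numpy sketch of the idea (the actual script operates on the ITODD mesh files, which are not shown here):

```python
import numpy as np

def flip_normals(faces):
    """Reverse the vertex winding of each triangle, which flips its normal."""
    faces = np.asarray(faces)
    return faces[:, ::-1].copy()

# A single triangle: reversing [0, 1, 2] to [2, 1, 0] flips its orientation
faces = np.array([[0, 1, 2]])
print(flip_normals(faces))  # [[2 1 0]]
```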

Then remove the invisible parts of the objects:

$ python -m surfemb.scripts.misc.surface_samples_remesh_visible [dataset] 

Next, sample points evenly from the mesh surface:

$ python -m surfemb.scripts.misc.surface_samples_sample_even [dataset] 
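Conceptually, this step draws points uniformly with respect to surface area. A minimal sketch using area-weighted face selection and uniform barycentric coordinates (the project script may use a more sophisticated scheme, e.g. rejection sampling for even spacing):

```python
import numpy as np

def sample_surface(vertices, faces, n, rng=None):
    """Sample n points uniformly by area from a triangle mesh."""
    rng = np.random.default_rng(rng)
    v = np.asarray(vertices, float)
    tri = v[np.asarray(faces)]                        # (F, 3, 3)
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    area = 0.5 * np.linalg.norm(cross, axis=1)
    # Pick faces proportionally to their area
    face_idx = rng.choice(len(tri), size=n, p=area / area.sum())
    # Uniform barycentric coordinates via the square-root trick
    r1, r2 = rng.random(n), rng.random(n)
    u = 1.0 - np.sqrt(r1)
    w = np.sqrt(r1) * r2
    v_bar = 1.0 - u - w
    t = tri[face_idx]
    points = u[:, None] * t[:, 0] + v_bar[:, None] * t[:, 1] + w[:, None] * t[:, 2]
    return points, face_idx
```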

Finally, recover the normals for the sampled points:

$ python -m surfemb.scripts.misc.surface_samples_recover_normals [dataset] 
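One way to recover a normal per sampled point is to copy the unit normal of the triangle the point was sampled from. A sketch assuming per-point face indices are available (the project script may recover normals differently, e.g. by nearest-neighbour lookup):

```python
import numpy as np

def face_normals(vertices, faces):
    """Unit normal of each triangle, from the cross product of two edges."""
    tri = np.asarray(vertices, float)[np.asarray(faces)]
    n = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    return n / np.linalg.norm(n, axis=1, keepdims=True)

def recover_normals(vertices, faces, face_idx):
    """Assign each sampled point the normal of the face it came from."""
    return face_normals(vertices, faces)[np.asarray(face_idx)]
```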

Detection results

Download CosyPose into the same directory as SurfEmb, install it, and follow their guide to download their BOP-trained detection results. Then:

$ python -m surfemb.scripts.misc.load_detection_results [dataset]

Inference inspection

To see pose estimation examples on the training images, run:

$ python -m surfemb.scripts.infer_debug [model_path] --device [device]

[device] could for example be cuda:0 or cpu. Here is an example of inference on a training image:

By performing inference inspection, we can visually observe how applying different textures results in varied correspondence accuracy.

Add --real to use the test images with simulated crops based on the ground truth poses, or further add --detections to use the CosyPose detections.

Inference for BOP evaluation

Notice

If you would like to create your own inference data, please adjust the bop_challenge file accordingly for inference/evaluation purposes.

Inference is run on the (real) test images with CosyPose detections:

$ python -m surfemb.scripts.infer [model_path] --device [device]

Pose estimation results are saved to data/results.
To obtain results with depth (requires running normal inference first), run

$ python -m surfemb.scripts.infer_refine_depth [model_path] --device [device]

The results can be formatted for BOP evaluation using

$ python -m surfemb.scripts.misc.format_results_for_eval [poses_path]

Either upload the formatted results to the BOP Challenge website or evaluate using the BOP toolkit.
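For reference, BOP results are a CSV with one row per pose estimate. A sketch of writing the file by hand, following the BOP toolkit convention (R is row-major and space-separated, t is in millimetres); double-check the exact field layout against the toolkit before submitting:

```python
import csv

def write_bop_csv(path, estimates):
    """estimates: iterable of (scene_id, im_id, obj_id, score, R(3x3), t(3,), time)."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["scene_id", "im_id", "obj_id", "score", "R", "t", "time"])
        for scene_id, im_id, obj_id, score, R, t, time in estimates:
            r_flat = " ".join(str(x) for row in R for x in row)
            t_flat = " ".join(str(x) for x in t)
            writer.writerow([scene_id, im_id, obj_id, score, r_flat, t_flat, time])
```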

Extra

Custom dataset: Format the dataset as a BOP dataset and put it in data/bop.
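A custom dataset's directory skeleton can be laid out like this. This is a sketch of the commonly used BOP structure (per-scene folders with rgb/depth images and scene_*.json metadata); consult the BOP format specification for the full set of required fields:

```python
import json
from pathlib import Path

def make_bop_skeleton(root, dataset, split="test", scene_id=1):
    """Create the minimal directory skeleton of one BOP dataset scene."""
    scene = Path(root) / dataset / split / f"{scene_id:06d}"
    for sub in ("rgb", "depth", "mask_visib"):
        (scene / sub).mkdir(parents=True, exist_ok=True)
    (Path(root) / dataset / "models").mkdir(parents=True, exist_ok=True)
    # Per-image metadata lives in these JSON files (left empty here)
    for name in ("scene_camera.json", "scene_gt.json"):
        (scene / name).write_text(json.dumps({}))
    return scene
```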

Credits

This is a project assigned in the 3D Computer Vision practical course at the Technical University of Munich. The team consists of the following members:

  • Emre Demir
  • Wenzhao Tang
  • Julian Dauth
