SAIL-Recon Evaluation

Tanks and Temples

Image set

  1. Data Preparation

    Download the image data from here (for the intermediate and advanced sets, please download from here), and the COLMAP results from [here](https://storage.googleapis.com/niantic-lon-static/research/acezero/colmap_raw.tar.gz). We thank ACE0 again for providing the COLMAP results.

  2. Adjust the parameters in run_tnt.sh

    Specify the dataset_root, colmap_dir, model_path and save_dir in the file (see the sketch after this list).

  3. Get the inference results.

    sh run_tnt.sh
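
For reference, the assignments in run_tnt.sh might look like the sketch below; the variable names come from step 2, while every path is a placeholder for your local setup.

    # Hypothetical values for run_tnt.sh -- adjust the paths to your machine.
    dataset_root=/data/tanks_and_temples/images    # extracted Tanks and Temples image set
    colmap_dir=/data/tanks_and_temples/colmap_raw  # COLMAP results provided by ACE0
    model_path=/checkpoints/sail_recon.pth         # pretrained model weights (placeholder name)
    save_dir=./outputs/tnt                         # where the inference results are written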
    

Video set

  1. Data Preparation

    Download the video sequences from here and extract images from the videos via this.

  2. Run Inference

    Replace docs/demo_image in ../demo.py with the path storing the images extracted from the video (see the sketch below for one way to extract frames).
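
If you prefer a plain command line over the linked tool, frames can also be extracted with ffmpeg; the frame rate, file names and output folder below are illustrative only.

    # Extract frames from a downloaded video (paths and fps are placeholders).
    mkdir -p ./tnt_video_frames
    ffmpeg -i scene_video.mp4 -vf fps=2 ./tnt_video_frames/%06d.jpg
    # Then point docs/demo_image in ../demo.py to ./tnt_video_frames and run the demo.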

TUM-RGBD

  1. Data Preparation

    Download the corresponding sequence from here.

  2. Adjust the parameters in run_tum.sh

    Specify the dataset_root, recon_img_num, model_path and save_dir in the file (see the sketch after this list).

  3. Evaluate the results.

    sh run_tum.sh
    

    Note that we set recon_img_num to 50 or 100 according to the length of the sequence. Please refer to the supplementary material of the paper for details.

  4. Use evo to evaluate the results

    evo_ape tum gt_pose.txt pred_tum.txt -vas
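
As a concrete, purely illustrative example, the variables edited inside run_tum.sh could look like this; the file names passed to evo_ape are assumed to be written under save_dir.

    # Hypothetical values for run_tum.sh -- adjust to your setup.
    dataset_root=/data/tum_rgbd/rgbd_dataset_freiburg1_desk  # one TUM-RGBD sequence
    recon_img_num=50                        # 50 or 100 depending on sequence length (see the note in step 3)
    model_path=/checkpoints/sail_recon.pth
    save_dir=./outputs/tum                  # gt_pose.txt and pred_tum.txt are assumed to end up here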
    

7 Scenes

  1. Download the dataset from here and the Pseudo Ground Truth (PGT) (see the ICCV 2021 paper and the associated code for details).

  2. Adjust the parameters in run_7scenes.sh

    Specify the dataset_root, recon_img_num, model_path and save_dir in the file (see the sketch after this list).

  3. Evaluate the results.

    sh run_7scenes.sh
    

    You will see a result.txt file reporting the evaluation results.
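
A minimal sketch of the variables to edit inside run_7scenes.sh; all values are placeholders, and the location of result.txt under save_dir is an assumption.

    # Inside run_7scenes.sh -- hypothetical values, adjust to your setup.
    dataset_root=/data/7scenes              # dataset plus the PGT poses from step 1
    recon_img_num=100                       # placeholder; use the setting from the paper
    model_path=/checkpoints/sail_recon.pth
    save_dir=./outputs/7scenes

    # After `sh run_7scenes.sh`, inspect the summary (path assumed):
    cat ./outputs/7scenes/result.txt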

Mip-NeRF 360

  1. Data Preparation

    Download the data from here.

  2. Adjust the parameters in run_mip.sh

    Specify the dataset_root, model_path and save_dir in the file (see the sketch after this list).

  3. Get the inference results.

    sh run_mip.sh
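
For completeness, a hypothetical run_mip.sh configuration, assuming the dataset is unpacked so that each scene (e.g. bicycle, garden, room) sits in its own subfolder of dataset_root; all paths are placeholders.

    # Hypothetical values for run_mip.sh.
    dataset_root=/data/mipnerf360           # contains scene folders such as bicycle/, garden/, room/
    model_path=/checkpoints/sail_recon.pth
    save_dir=./outputs/mipnerf360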
    

Co3D-V2

  1. We thank VGGT for providing the evaluation code for the Co3D-V2 dataset. Please see the link here for data preparation and processing.

  2. Adjust the parameters in run_co3d.sh

    Specify the co3d_dir, co3d_anno_dir, recon_img_num, model_path, recon, reloc and fixed_rank in the file (see the sketch after this list).

  3. Evaluate the results.

    sh run_co3d.sh
    

    You will see the evaluation results in the terminal.
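
Finally, a hypothetical configuration for run_co3d.sh; the variable names follow step 2, every value is a placeholder, and the meaning of recon, reloc and fixed_rank should be taken from the script itself.

    # Hypothetical values for run_co3d.sh -- consult the script for the expected settings.
    co3d_dir=/data/co3d_v2                  # processed Co3D-V2 data (VGGT-style preparation)
    co3d_anno_dir=/data/co3d_v2/annotations # annotation files produced during preprocessing
    recon_img_num=10                        # placeholder
    model_path=/checkpoints/sail_recon.pth
    recon=1                                 # placeholder; see run_co3d.sh
    reloc=1                                 # placeholder; see run_co3d.sh
    fixed_rank=1                            # placeholder; see run_co3d.sh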