Super-Resolution Datasets¶
It is recommended to symlink the dataset root to $MMEDITING/data. If your folder structure is different, you may need to change the corresponding paths in the config files.
Super-resolution datasets supported in MMEditing:
Image Super-Resolution
Video Super-Resolution
DIV2K Dataset¶
[DATASET]
@InProceedings{Agustsson_2017_CVPR_Workshops,
author = {Agustsson, Eirikur and Timofte, Radu},
title = {NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}
Training dataset: DIV2K dataset.
Validation dataset: Set5 and Set14.
mmediting
├── mmedit
├── tools
├── configs
├── data
│ ├── DIV2K
│ │ ├── DIV2K_train_HR
│ │ ├── DIV2K_train_LR_bicubic
│ │ │ ├── X2
│ │ │ ├── X3
│ │ │ ├── X4
│ │ ├── DIV2K_valid_HR
│ │ ├── DIV2K_valid_LR_bicubic
│ │ │ ├── X2
│ │ │ ├── X3
│ │ │ ├── X4
│ ├── val_set5
│ │ ├── Set5_bicLRx2
│ │ ├── Set5_bicLRx3
│ │ ├── Set5_bicLRx4
│ ├── val_set14
│ │ ├── Set14_bicLRx2
│ │ ├── Set14_bicLRx3
│ │ ├── Set14_bicLRx4
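Config files point at these folders directly. Below is a hypothetical excerpt of a DIV2K x4 training-data entry; the dataset type and key names follow MMEditing's folder-dataset style, but the exact values depend on the config you actually train with:

```python
# Hypothetical training-data config fragment for DIV2K x4.
# Paths match the directory tree above; 'SRFolderDataset' and its keys
# follow MMEditing's conventions, but verify against your config file.
train = dict(
    type='SRFolderDataset',
    lq_folder='data/DIV2K/DIV2K_train_LR_bicubic/X4',
    gt_folder='data/DIV2K/DIV2K_train_HR',
    pipeline=[],  # the real config defines a loading/augmentation pipeline here
    scale=4)
```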
Crop sub-images¶
For faster IO, we recommend cropping the DIV2K images into sub-images. We provide such a script:
python tools/data/super-resolution/div2k/preprocess_div2k_dataset.py --data-root ./data/DIV2K
The generated data is stored under DIV2K, and the data structure is as follows, where _sub indicates the sub-images.
mmediting
├── mmedit
├── tools
├── configs
├── data
│ ├── DIV2K
│ │ ├── DIV2K_train_HR
│ │ ├── DIV2K_train_HR_sub
│ │ ├── DIV2K_train_LR_bicubic
│ │ │ ├── X2
│ │ │ ├── X3
│ │ │ ├── X4
│ │ │ ├── X2_sub
│ │ │ ├── X3_sub
│ │ │ ├── X4_sub
│ │ ├── DIV2K_valid_HR
│ │ ├── ...
...
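The tiling performed by the script above can be sketched as follows. The defaults (480x480 crops with step 240 and a border threshold) are assumptions; check the script's --crop-size and --step options for the actual values:

```python
# Sketch of how a preprocessing script can tile one image dimension into
# overlapping sub-images. Defaults here are assumptions, not the script's
# guaranteed behaviour.
def patch_starts(length, crop_size=480, step=240, thresh_size=0):
    """Return the start offsets of crops along one image dimension."""
    starts = list(range(0, length - crop_size + 1, step))
    # Keep a final border-aligned crop if the leftover strip is large enough.
    if length - (starts[-1] + crop_size) > thresh_size:
        starts.append(length - crop_size)
    return starts

print(patch_starts(1000))  # -> [0, 240, 480, 520]
```

For a 1000-pixel edge, the regular grid covers offsets 0, 240, and 480, and a last crop at 520 keeps the 40 border pixels that would otherwise be dropped.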
Prepare annotation list¶
If you use the annotation mode for the dataset, you first need to prepare a specific txt file. Each line in the annotation file contains the image name and image shape (usually for the ground-truth images), separated by a white space.
Example of an annotation file:
0001_s001.png (480,480,3)
0001_s002.png (480,480,3)
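A minimal sketch of parsing one such line, assuming exactly the `name (h,w,c)` format shown above:

```python
# Parse one annotation line of the form "0001_s001.png (480,480,3)".
def parse_annotation_line(line):
    name, shape = line.split()
    h, w, c = (int(v) for v in shape.strip('()').split(','))
    return name, (h, w, c)

print(parse_annotation_line('0001_s001.png (480,480,3)'))
# -> ('0001_s001.png', (480, 480, 3))
```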
Prepare LMDB dataset for DIV2K¶
If you want to use LMDB datasets for faster IO, you can make LMDB files by:
python tools/data/super-resolution/div2k/preprocess_div2k_dataset.py --data-root ./data/DIV2K --make-lmdb
REDS Dataset¶
[DATASET]
@InProceedings{Nah_2019_CVPR_Workshops_REDS,
author = {Nah, Seungjun and Baik, Sungyong and Hong, Seokil and Moon, Gyeongsik and Son, Sanghyun and Timofte, Radu and Lee, Kyoung Mu},
title = {NTIRE 2019 Challenge on Video Deblurring and Super-Resolution: Dataset and Study},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2019}
}
Training dataset: REDS dataset.
Validation dataset: REDS dataset and Vid4.
Note that we merge the train and val datasets of REDS for easy switching between the REDS4 partition (used in EDVR) and the official validation partition. The clips of the original val dataset (named 000 to 029) are renamed to avoid conflicts with the training dataset (240 clips in total). Specifically, the clip names are changed to 240, 241, …, 269.
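The renaming amounts to a fixed offset on the clip index (a sketch only; the preprocessing script handles the actual file moves):

```python
# Map an original REDS val clip name ('000'..'029') to its merged name,
# offset past the 240 training clips.
def merged_clip_name(val_clip):
    return f'{int(val_clip) + 240:03d}'

print(merged_clip_name('000'))  # -> '240'
print(merged_clip_name('029'))  # -> '269'
```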
You can prepare the REDS dataset by running:
python tools/data/super-resolution/reds/preprocess_reds_dataset.py ./data/REDS
mmediting
├── mmedit
├── tools
├── configs
├── data
│ ├── REDS
│ │ ├── train_sharp
│ │ │ ├── 000
│ │ │ ├── 001
│ │ │ ├── ...
│ │ ├── train_sharp_bicubic
│ │ │ ├── 000
│ │ │ ├── 001
│ │ │ ├── ...
│ ├── REDS4
│ │ ├── GT
│ │ ├── sharp_bicubic
Prepare LMDB dataset for REDS¶
If you want to use LMDB datasets for faster IO, you can make LMDB files by:
python tools/data/super-resolution/reds/preprocess_reds_dataset.py --root-path ./data/REDS --make-lmdb
Vimeo90K Dataset¶
[DATASET]
@article{xue2019video,
title={Video Enhancement with Task-Oriented Flow},
author={Xue, Tianfan and Chen, Baian and Wu, Jiajun and Wei, Donglai and Freeman, William T},
journal={International Journal of Computer Vision (IJCV)},
volume={127},
number={8},
pages={1106--1125},
year={2019},
publisher={Springer}
}
Download from here