
This may be the best image super-resolution algorithm yet, and it has just been open-sourced



Wide Activation for Efficient and Accurate Image Super-Resolution

Tech Report | Approach | Results | TensorFlow | Other Implementations | Bibtex

Update (Oct 2018): We have re-implemented WDSR in TensorFlow for end-to-end training and testing. Pre-trained models have been released, and the runtime speed of weight normalization on TensorFlow has been optimized.

Run

  1. Requirements:

    • Install PyTorch (tested on releases 0.4.0 and 0.4.1).

    • Clone EDSR-PyTorch as the backbone training framework.

  2. Training and Validation:

    • Copy wdsr_a.py, wdsr_b.py into EDSR-PyTorch/src/model/.

    • Modify EDSR-PyTorch/src/option.py and EDSR-PyTorch/src/demo.sh to support the --n_feats, --block_feats, and --[r,g,b]_mean options (see issues #7 and #8 for reference, and the sketch after this list).

    • Launch training with the EDSR-PyTorch framework.

  3. Still have questions?

    • If you still have questions, please first search the closed issues. If that does not solve your problem, please open a new issue.
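For orientation, here is a minimal sketch of the kind of argparse additions step 2 describes. The flag names (--n_feats, --block_feats, --[r,g,b]_mean) come from the README; the defaults and help strings are illustrative assumptions, not the repository's actual values.

# Hypothetical sketch of the option.py additions from step 2 above.
# Flag names follow the README; defaults are illustrative assumptions.
import argparse

parser = argparse.ArgumentParser(description='WDSR training options (sketch)')
parser.add_argument('--n_feats', type=int, default=32,
                    help='width of the identity path entering each residual block')
parser.add_argument('--block_feats', type=int, default=128,
                    help='width inside each wide-activation residual block')
parser.add_argument('--r_mean', type=float, default=0.4488,
                    help='R-channel mean subtracted from inputs')
parser.add_argument('--g_mean', type=float, default=0.4371,
                    help='G-channel mean subtracted from inputs')
parser.add_argument('--b_mean', type=float, default=0.4040,
                    help='B-channel mean subtracted from inputs')
args = parser.parse_args()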

Overall Performance

Network          | Parameters | DIV2K (val) PSNR
EDSR Baseline    | 1,372,318  | 34.61
WDSR Baseline    | 1,190,100  | 34.77

We measured PSNR on DIV2K images 0801 ~ 0900 (validation; models trained on 0001 ~ 0800) over RGB channels, without self-ensemble. Both baseline models have 16 residual blocks.
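Since all results here are reported this way, the following is a minimal sketch of PSNR over RGB channels; evaluation scripts often also crop image borders before scoring, which this sketch omits.

# Minimal RGB-channel PSNR; border cropping, which evaluation scripts
# commonly apply before scoring, is intentionally omitted here.
import torch

def psnr_rgb(sr: torch.Tensor, hr: torch.Tensor, max_val: float = 255.0) -> float:
    """sr, hr: float tensors of identical shape (C, H, W), values in [0, max_val]."""
    mse = torch.mean((sr - hr) ** 2)
    return (10.0 * torch.log10(max_val ** 2 / mse)).item()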

More results:

1 residual block:

SR Network       | EDSR   | WDSR-A | WDSR-B
Parameters       | 0.26M  | 0.08M  | 0.08M
DIV2K (val) PSNR | 33.210 | 33.323 | 33.434

3 residual blocks:

SR Network       | EDSR   | WDSR-A | WDSR-B
Parameters       | 0.41M  | 0.23M  | 0.23M
DIV2K (val) PSNR | 34.043 | 34.163 | 34.205

5 residual blocks:

SR Network       | EDSR   | WDSR-A | WDSR-B
Parameters       | 0.56M  | 0.37M  | 0.37M
DIV2K (val) PSNR | 34.284 | 34.388 | 34.409

8 residual blocks:

SR Network       | EDSR   | WDSR-A | WDSR-B
Parameters       | 0.78M  | 0.60M  | 0.60M
DIV2K (val) PSNR | 34.457 | 34.541 | 34.536

Comparisons of EDSR and our proposed WDSR-A and WDSR-B, using settings identical to the EDSR baseline model, for bicubic ×2 image super-resolution on the DIV2K dataset.

WDSR Network Architecture

[Figure: WDSR network architecture]

Left: the vanilla residual block in EDSR. Middle: wide activation. Right: wider activation with a linear low-rank convolution. The proposed wide-activation blocks in WDSR-A and WDSR-B share merits with MobileNetV2 but use different architectures and achieve much better PSNR.
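To make the diagram concrete, here is a minimal PyTorch sketch of the two block types under the structure described above. The released wdsr_a.py and wdsr_b.py additionally apply weight normalization (see the next section), and the expansion width and low-rank ratio below are illustrative assumptions rather than the repository's exact values.

import torch
import torch.nn as nn

class WDSRABlock(nn.Module):
    """Wide activation: widen the channels before the ReLU, contract after."""
    def __init__(self, n_feats: int, block_feats: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n_feats, block_feats, 3, padding=1),  # widen before activation
            nn.ReLU(inplace=True),
            nn.Conv2d(block_feats, n_feats, 3, padding=1),  # back to identity width
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)

class WDSRBBlock(nn.Module):
    """Wider activation with a linear low-rank convolution after the ReLU."""
    def __init__(self, n_feats: int, block_feats: int, low_rank_ratio: float = 0.8):
        super().__init__()
        low_rank = max(1, int(n_feats * low_rank_ratio))  # ratio is an assumption
        self.body = nn.Sequential(
            nn.Conv2d(n_feats, block_feats, 1),          # cheap 1x1 expansion
            nn.ReLU(inplace=True),
            nn.Conv2d(block_feats, low_rank, 1),         # linear low-rank 1x1
            nn.Conv2d(low_rank, n_feats, 3, padding=1),  # 3x3 back to identity width
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)

Because the ReLU sits where the channel count is widest, more information survives the nonlinearity without increasing the width of the identity path.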

Weight Normalization vs. Batch Normalization and No Normalization

[Figures: training loss and validation PSNR curves]

Training loss and validation PSNR with weight normalization, batch normalization, or no normalization. Training with weight normalization converges faster and reaches better accuracy.
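In PyTorch, weight normalization is applied by wrapping a layer: it reparameterizes each weight tensor as g * v / ||v||, learning the magnitude g and direction v separately, and unlike batch normalization it computes no batch statistics. A minimal sketch (the layer widths are illustrative):

# Weight normalization reparameterizes the weight as g * v / ||v||;
# the layer widths below are illustrative, not the repository's values.
import torch.nn as nn
from torch.nn.utils import weight_norm

conv = weight_norm(nn.Conv2d(32, 128, 3, padding=1))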

Other Implementations

Citing

Please consider citing WDSR for image super-resolution and compression if you find it helpful.

@article{yu2018wide,
  title={Wide Activation for Efficient and Accurate Image Super-Resolution},
  author={Yu, Jiahui and Fan, Yuchen and Yang, Jianchao and Xu, Ning and Wang, Xinchao and Huang, Thomas S},
  journal={arXiv preprint arXiv:1808.08718},
  year={2018}
}

@inproceedings{fan2018wide,
  title={Wide-activated Deep Residual Networks based Restoration for BPG-compressed Images},
  author={Fan, Yuchen and Yu, Jiahui and Huang, Thomas S},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops},
  pages={2621--2624},
  year={2018}
}

The winning solution for image super-resolution from the NTIRE 2018 workshop at CVPR 2018 has been open-sourced!
The algorithm took first place in all three realistic tracks of NTIRE 2018!
Yesterday the authors uploaded a technical report to arXiv describing the method and released the PyTorch code.
The paper is titled "Wide Activation for Efficient and Accurate Image Super-Resolution".

Code:

https://github.com/JiahuiYu/wdsr_ntire2018

Paper:

https://arxiv.org/abs/1808.08718v1

