Waifu2x
This article covers installing, using, and training waifu2x, an image super-resolution tool for anime-style art that uses deep convolutional neural networks.
Installation
To use waifu2x directly, install the waifu2x-gitAUR package. Other alternatives exist; search for waifu2x in the AUR.
Usage
waifu2x is available as the waifu2x command. For detailed options, run waifu2x --help
Upscaling
Use the --scale_ratio parameter to specify the desired scale ratio, -i for the input file name, and -o for the output file name:
waifu2x --scale_ratio 2 -i my_waifu.png -o 2x_my_waifu.png
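Both dimensions are multiplied by the ratio; for example, a hypothetical 640×480 input at --scale_ratio 2 comes out at 1280×960:

```shell
# Output dimensions for a 640x480 input at scale ratio 2 (example numbers).
w=640; h=480; ratio=2
echo "$((w * ratio))x$((h * ratio))"   # prints 1280x960
```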
Noise Reduction
Use the --noise_level parameter (1 or 2) to specify the noise reduction level:
waifu2x --noise_level 1 -i my_waifu.png -o lucid_my_waifu.png
You can also use --jobs to specify the number of threads launched at the same time, which benefits multi-core CPUs:
waifu2x --jobs 4 --noise_level 1 -i my_waifu.png -o lucid_my_waifu.png
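To process many files at once, a simple shell loop works. The sketch below only prints each command (remove the leading echo to actually run it), and the file names are hypothetical:

```shell
# Print a waifu2x denoising command for each file in the list.
# Drop "echo" to execute the commands for real.
for f in my_waifu.png another_waifu.png; do
  echo waifu2x --jobs 4 --noise_level 1 -i "$f" -o "lucid_$f"
done
```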
Upscaling & Noise Reduction
--scale_ratio
and --noise_level
can be combined, so you can:
waifu2x --scale_ratio 2 --noise_level 1 -i my_waifu.png -o 2x_lucid_my_waifu.png
Training
To train custom models, an NVIDIA graphics card is required, because waifu2x uses CUDA for computation. You also need to prepare the following development dependencies and the waifu2x source.
Dependencies
Install:
- lua51
- cuda
- snappy
- graphicsmagick
- torch7-gitAUR
- torch7-trepl-gitAUR
- torch7-sys-gitAUR
- torch7-cutorch-gitAUR
- torch7-nn-gitAUR
- torch7-cunn-gitAUR
- torch7-image-gitAUR
- torch7-xlua-gitAUR
- torch7-dok-gitAUR
- torch7-optim-gitAUR
- lua51-graphicsmagick-gitAUR
- lua51-cjsonAUR
- lua51-csvigo-gitAUR
- lua51-snappy-gitAUR
It is recommended to also install the optional cuDNN library and bindings package below. They enable the cuDNN backend for training, which provides a significant speed-up.
Note that you need to manually download a cuDNN binary pack from the NVIDIA cuDNN site when installing cudnn.
- (optional) cudnn
- (optional) torch7-cudnn-gitAUR
waifu2x source
Fetch waifu2x source code from GitHub:
git clone --depth 1 https://github.com/nagadomi/waifu2x.git
Enter the source directory (cd waifu2x). You can now test the waifu2x command-line tool:
th waifu2x.lua
Command line tools
If the cuDNN backend is installed, all command line tools can take the -force_cudnn 1 option; cuDNN is much faster than the default kernel.
Noise Reduction
th waifu2x.lua -m noise -noise_level 1 -i input_image.png -o output_image.png
th waifu2x.lua -m noise -noise_level 0 -i input_image.png -o output_image.png
th waifu2x.lua -m noise -noise_level 2 -i input_image.png -o output_image.png
th waifu2x.lua -m noise -noise_level 3 -i input_image.png -o output_image.png
2x Upscaling
th waifu2x.lua -m scale -i input_image.png -o output_image.png
Noise Reduction + 2x Upscaling
th waifu2x.lua -m noise_scale -noise_level 1 -i input_image.png -o output_image.png
th waifu2x.lua -m noise_scale -noise_level 0 -i input_image.png -o output_image.png
th waifu2x.lua -m noise_scale -noise_level 2 -i input_image.png -o output_image.png
th waifu2x.lua -m noise_scale -noise_level 3 -i input_image.png -o output_image.png
For more, see waifu2x#command-line-tools.
Train your own models
If the cuDNN backend is installed, you can train with it by adding the -backend cudnn option. A trained cudnn model can be converted to a cunn model with tools/rebuild.lua.
The scripts appendix/train_upconv_7_art.sh and appendix/train_upconv_7_photo.sh show complete training runs and may be helpful references.
Data Preparation
Generate a file list:
find /path/to/image/dir -name "*.png" > data/image_list.txt
Convert the training data:
th convert_data.lua
Train a Noise Reduction (level 1) model
mkdir models/my_model
th train.lua -model_dir models/my_model -method noise -noise_level 1 -test images/miku_noisy.png
# usage
th waifu2x.lua -model_dir models/my_model -m noise -noise_level 1 -i images/miku_noisy.png -o output.png
You can check the performance of the model with models/my_model/noise1_best.png.
Train a Noise Reduction (level 2) model
th train.lua -model_dir models/my_model -method noise -noise_level 2 -test images/miku_noisy.png
# usage
th waifu2x.lua -model_dir models/my_model -m noise -noise_level 2 -i images/miku_noisy.png -o output.png
You can check the performance of the model with models/my_model/noise2_best.png.
Train a 2x Upscaling model
th train.lua -model upconv_7 -model_dir models/my_model -method scale -scale 2 -test images/miku_small.png
# usage
th waifu2x.lua -model_dir models/my_model -m scale -scale 2 -i images/miku_small.png -o output.png
You can check the performance of the model with models/my_model/scale2.0x_best.png.
Train a fused 2x upscaling and noise reduction model
th train.lua -model upconv_7 -model_dir models/my_model -method noise_scale -scale 2 -noise_level 1 -test images/miku_small.png
# usage
th waifu2x.lua -model_dir models/my_model -m noise_scale -scale 2 -noise_level 1 -i images/miku_small.png -o output.png
You can check the performance of the model with models/my_model/noise1_scale2.0x_best.png.
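Putting the steps above together, the whole training pipeline can be sketched as a script. The run helper below only prints each step (replace it with direct execution once the paths match your setup), and all paths are illustrative:

```shell
# Dry-run sketch of the full training pipeline described above.
# "run" only prints each command; swap in direct execution when ready.
run() { printf '%s\n' "$*"; }

run 'find /path/to/image/dir -name "*.png" > data/image_list.txt'
run 'th convert_data.lua'
run 'mkdir -p models/my_model'
run 'th train.lua -model upconv_7 -model_dir models/my_model -method noise_scale -scale 2 -noise_level 1 -test images/miku_small.png'
```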
For latest information, see waifu2x#train-your-own-model.
Docker
See waifu2x#docker.