This is a reimplementation of the unconditional waveform synthesizer in DIFFWAVE: A VERSATILE DIFFUSION MODEL FOR AUDIO SYNTHESIS.
- To continue training the model, run `python distributed_train.py -c config.json`.
- To retrain the model from scratch, change the parameter `ckpt_iter` in the corresponding `json` file to `-1` and use the above command.
- To generate audio, run `python inference.py -c config.json -n 16` to generate 16 utterances.
- Note: you may need to carefully adjust some parameters in the `json` file, such as `data_path` and `batch_size_per_gpu`, to match your environment.
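As a rough illustration, a `config.json` for this setup might contain fields like the following. Only `ckpt_iter`, `data_path`, and `batch_size_per_gpu` are mentioned above; the other keys and all values here are hypothetical placeholders, so consult the actual `config.json` shipped with the repository for the real schema.

```json
{
  "train_config": {
    "ckpt_iter": -1,
    "batch_size_per_gpu": 4,
    "data_path": "./data/your_dataset"
  }
}
```

Setting `ckpt_iter` to `-1` restarts training from scratch instead of resuming from the latest checkpoint.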