Optimizer
Wrapper classes for the supported optimizers.
- class ecgan.utils.optimizer.BaseOptimizer(module_config, lr=0.0001, weight_decay=0.0)[source]
Bases:
ecgan.utils.configurable.Configurable
Base optimizer class for custom optimizers.
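As a rough sketch of the wrapper pattern (not the actual ecgan interface; the names `build`, `self.optim`, and the meaning of the constructor arguments are assumptions for illustration), a base class like this typically stores the hyperparameters and delegates `step`/`zero_grad` to an underlying `torch.optim` optimizer constructed by each subclass:

```python
from abc import ABC, abstractmethod

import torch


class OptimizerSketch(ABC):
    """Illustrative base wrapper; names and interface are assumptions, not the ecgan API."""

    def __init__(self, params, lr=0.0001, weight_decay=0.0):
        # Subclasses decide which torch.optim optimizer to construct.
        self.optim = self.build(params, lr, weight_decay)

    @abstractmethod
    def build(self, params, lr, weight_decay) -> torch.optim.Optimizer:
        """Construct the wrapped torch.optim optimizer."""

    def zero_grad(self) -> None:
        self.optim.zero_grad()

    def step(self) -> None:
        self.optim.step()
```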
- class ecgan.utils.optimizer.Adam(module_config, lr=0.0001, weight_decay=0, betas=None, eps=1e-08)[source]
Bases:
ecgan.utils.optimizer.BaseOptimizer
Adam optimizer wrapper around the PyTorch implementation.
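The arguments mirror those of PyTorch's own Adam. As an illustrative sketch of the underlying call (the fallback of `betas=None` to PyTorch's default `(0.9, 0.999)` and the use of `model.parameters()` are assumptions, not ecgan specifics):

```python
import torch

model = torch.nn.Linear(16, 1)
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-4,              # matches the wrapper default lr=0.0001
    betas=(0.9, 0.999),   # assumed fallback when betas=None is passed
    eps=1e-08,
    weight_decay=0,
)
```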
- class ecgan.utils.optimizer.StochasticGradientDescent(module_config, lr=0.0001, weight_decay=0)[source]
Bases:
ecgan.utils.optimizer.BaseOptimizer
Stochastic gradient descent (SGD) optimizer without momentum. For a variant with momentum, see Momentum.
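For reference, a minimal equivalent call to PyTorch's SGD (the `model.parameters()` handle is an illustrative assumption):

```python
import torch

model = torch.nn.Linear(16, 1)
# Plain SGD, i.e. momentum=0 (the PyTorch default).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, weight_decay=0)
```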
- class ecgan.utils.optimizer.Momentum(module_config, lr=0.0001, weight_decay=0, momentum=0.9, dampening=0.0)[source]
Bases:
ecgan.utils.optimizer.BaseOptimizer
Momentum optimizer wrapper around the PyTorch implementation.
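PyTorch implements momentum as an option of `torch.optim.SGD`; an illustrative equivalent call with this wrapper's defaults (parameter handling in ecgan may differ):

```python
import torch

model = torch.nn.Linear(16, 1)
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=1e-4,
    weight_decay=0,
    momentum=0.9,    # wrapper default
    dampening=0.0,   # wrapper default
)
```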
- class ecgan.utils.optimizer.RMSprop(module_config, lr=0.0001, weight_decay=0.0, momentum=0.0, alpha=0.99, eps=1e-08, centered=False)[source]
Bases:
ecgan.utils.optimizer.BaseOptimizer
Wrapper for the PyTorch RMSprop implementation.
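An illustrative `torch.optim.RMSprop` call with the wrapper's default hyperparameters (again assuming the wrapper receives the module's parameters):

```python
import torch

model = torch.nn.Linear(16, 1)
optimizer = torch.optim.RMSprop(
    model.parameters(),
    lr=1e-4,
    weight_decay=0.0,
    momentum=0.0,
    alpha=0.99,       # smoothing constant for the squared-gradient average
    eps=1e-08,
    centered=False,   # if True, normalises by an estimate of the gradient variance
)
```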
- class ecgan.utils.optimizer.AdaBelief(module_config, lr=0.001, betas=None, eps=1e-16, weight_decay=0)[source]
Bases:
ecgan.utils.optimizer.BaseOptimizer
Wrapper for the AdaBelief implementation.
AdaBelief is not currently part of PyTorch itself; until it is, the implementation is taken from the official adabelief-pytorch repository. More information can be found at [Zhuang, GitHub Pages](https://juntang-zhuang.github.io/adabelief/).
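As a hedged sketch, the reference implementation can also be used directly via the `adabelief-pytorch` package; additional keyword arguments (e.g. `rectify`, `weight_decouple`) exist and their defaults vary between package versions:

```python
import torch
from adabelief_pytorch import AdaBelief  # pip install adabelief-pytorch

model = torch.nn.Linear(16, 1)
optimizer = AdaBelief(
    model.parameters(),
    lr=1e-3,
    betas=(0.9, 0.999),
    eps=1e-16,        # AdaBelief typically uses a much smaller eps than Adam
    weight_decay=0,
)
```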