Optimizer

Wrapper classes for the supported optimizers.

class ecgan.utils.optimizer.BaseOptimizer(module_config, lr=0.0001, weight_decay=0.0)[source]

Bases: ecgan.utils.configurable.Configurable

Base optimizer class for custom optimizers.

optimize(losses)[source]

Perform an optimization step given zero, one or several losses.

state_dict()[source]

Return the state dict of the PyTorch optimizer.

Return type

Dict

zero_grad()[source]

Zero the gradient of the optimizer.

Return type

None

step()[source]

Perform an optimizer step.

Return type

None

set_param_group(updated_lr)[source]

Set the optimizer's parameter groups to an updated learning rate (used for adaptive LR schedules).

load_existing_optim(state_dict)[source]

Load an already trained optimizer from an existing state_dict.

Return type

None

static configure()[source]

Return the default configuration for an optimizer.

Return type

Dict
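
The methods above form the interface shared by all wrappers below. The following is a minimal sketch of a training step using that interface with the concrete Adam subclass; it assumes that module_config accepts the parameters of the module being optimized, and the model is a hypothetical placeholder:

    import torch
    from ecgan.utils.optimizer import Adam

    model = torch.nn.Linear(12, 2)  # hypothetical module to optimize
    # Assumption: module_config is given the module's parameters here.
    optim = Adam(model.parameters(), lr=1e-4, weight_decay=0.0)

    loss = model(torch.randn(8, 12)).pow(2).mean()
    # optimize() is documented to take zero, one or several losses; it is
    # assumed to wrap zero_grad(), backward() and step() in a single call.
    optim.optimize([loss])

    # The individual steps are also exposed directly:
    optim.zero_grad()
    loss = model(torch.randn(8, 12)).pow(2).mean()
    loss.backward()
    optim.step()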

class ecgan.utils.optimizer.Adam(module_config, lr=0.0001, weight_decay=0, betas=None, eps=1e-08)[source]

Bases: ecgan.utils.optimizer.BaseOptimizer

Adam optimizer wrapper around the PyTorch implementation.

static configure()[source]

Return the default configuration for the Adam optimizer.

Return type

Dict
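
A short, hedged sketch of retrieving the defaults via configure() and constructing the wrapper with the documented keyword arguments; the betas shown are the conventional Adam values, and module_config is again assumed to take the module's parameters:

    import torch
    from ecgan.utils.optimizer import Adam

    # configure() returns the default configuration as a Dict.
    default_cfg = Adam.configure()

    model = torch.nn.Linear(12, 2)  # hypothetical module
    adam = Adam(
        model.parameters(),   # assumption: parameters passed as module_config
        lr=1e-4,
        weight_decay=0.0,
        betas=(0.9, 0.999),   # conventional Adam betas; None falls back to the default
        eps=1e-8,
    )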

class ecgan.utils.optimizer.StochasticGradientDescent(module_config, lr=0.0001, weight_decay=0)[source]

Bases: ecgan.utils.optimizer.BaseOptimizer

Stochastic gradient descent (SGD) optimizer without momentum. For the momentum variant, see Momentum.

static configure()[source]

Return the default configuration for the SGD optimizer.

Return type

Dict
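
A hedged sketch of plain SGD combined with the set_param_group() hook from the base class to decay the learning rate between epochs; updated_lr is assumed to be the new learning rate as a float:

    import torch
    from ecgan.utils.optimizer import StochasticGradientDescent

    model = torch.nn.Linear(12, 2)  # hypothetical module
    sgd = StochasticGradientDescent(model.parameters(), lr=1e-1, weight_decay=0.0)

    lr = 1e-1
    for epoch in range(10):
        loss = model(torch.randn(8, 12)).pow(2).mean()
        sgd.optimize([loss])
        lr *= 0.9                # simple exponential decay
        sgd.set_param_group(lr)  # assumption: updated_lr is the new LR as a float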

class ecgan.utils.optimizer.Momentum(module_config, lr=0.0001, weight_decay=0, momentum=0.9, dampening=0.0)[source]

Bases: ecgan.utils.optimizer.BaseOptimizer

Momentum optimizer wrapper around the PyTorch implementation.

static configure()[source]

Return the default configuration for the Momentum optimizer.

Return type

Dict
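
A minimal sketch constructing the momentum variant with its two additional keyword arguments; the values mirror the documented defaults, and module_config is assumed to take the module's parameters:

    import torch
    from ecgan.utils.optimizer import Momentum

    model = torch.nn.Linear(12, 2)  # hypothetical module
    momentum = Momentum(
        model.parameters(), lr=1e-4, weight_decay=0.0, momentum=0.9, dampening=0.0
    )
    loss = model(torch.randn(8, 12)).pow(2).mean()
    momentum.optimize([loss])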

class ecgan.utils.optimizer.RMSprop(module_config, lr=0.0001, weight_decay=0.0, momentum=0.0, alpha=0.99, eps=1e-08, centered=False)[source]

Bases: ecgan.utils.optimizer.BaseOptimizer

Wrapper for the PyTorch RMSprop implementation.

static configure()[source]

Return the default configuration for the RMSprop optimizer.

Return type

Dict
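
A hedged sketch of the RMSprop wrapper; centered=True switches to the variant that normalizes by an estimate of the gradient variance rather than the uncentered second moment:

    import torch
    from ecgan.utils.optimizer import RMSprop

    model = torch.nn.Linear(12, 2)  # hypothetical module
    rmsprop = RMSprop(
        model.parameters(), lr=1e-4, weight_decay=0.0,
        momentum=0.0, alpha=0.99, eps=1e-8, centered=True,
    )
    loss = model(torch.randn(8, 12)).pow(2).mean()
    rmsprop.optimize([loss])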

class ecgan.utils.optimizer.AdaBelief(module_config, lr=0.001, betas=None, eps=1e-16, weight_decay=0)[source]

Bases: ecgan.utils.optimizer.BaseOptimizer

Wrapper for the AdaBelief implementation.

AdaBelief is not currently shipped with PyTorch itself; the implementation is taken from the official adabelief-pytorch repository until it is. More information can be found at [Zhuang, GitHub Pages](https://juntang-zhuang.github.io/adabelief/).

static configure()[source]

Return the default configuration for the AdaBelief optimizer.

Return type

Dict
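
A hedged sketch combining AdaBelief with the state_dict()/load_existing_optim() pair from the base class, e.g. for checkpointing; module_config is again assumed to take the module's parameters:

    import torch
    from ecgan.utils.optimizer import AdaBelief

    model = torch.nn.Linear(12, 2)  # hypothetical module
    adabelief = AdaBelief(model.parameters(), lr=1e-3, eps=1e-16, weight_decay=0)
    loss = model(torch.randn(8, 12)).pow(2).mean()
    adabelief.optimize([loss])

    # Save and restore the optimizer state, e.g. across training runs.
    checkpoint = adabelief.state_dict()
    restored = AdaBelief(model.parameters(), lr=1e-3, eps=1e-16, weight_decay=0)
    restored.load_existing_optim(checkpoint)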

class ecgan.utils.optimizer.OptimizerFactory[source]

Bases: object

Meta module for creating an optimizer instance.
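
The factory's entry point is not listed above, so the following is only a hypothetical sketch: it assumes the factory maps an identifier such as "adam" to the matching BaseOptimizer subclass and forwards the remaining arguments to it. The actual call signature should be taken from ecgan.utils.optimizer:

    import torch
    from ecgan.utils.optimizer import OptimizerFactory

    model = torch.nn.Linear(12, 2)  # hypothetical module
    factory = OptimizerFactory()
    # Hypothetical invocation; verify the real method and signature in the source.
    optim = factory("adam", model.parameters(), lr=1e-4)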