Networks

Networks and helper functions for the supported architectures.

Helper functions and networks.

ecgan.networks.helpers.apply_input_normalization(channel_size, normalization, **kwargs)[source]

Apply input normalization to a layer of size channel_size.

Parameters
  • channel_size (int) -- Size of the channel/layer.

  • normalization (Optional[InputNormalization]) -- Selected normalization method.

  • kwargs -- Optional parameters which might be required for normalizations.

Return type

Optional[Module]
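
A minimal usage sketch. Passing None presumably skips input normalization, matching the optional parameter and return types above; a concrete InputNormalization member selects the corresponding layer.

    from ecgan.networks.helpers import apply_input_normalization

    # With an InputNormalization member this returns the matching normalization module
    # for a channel size of 64; with None it presumably returns None (Optional return type).
    norm_layer = apply_input_normalization(channel_size=64, normalization=None)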

ecgan.networks.helpers.conv1d_block(in_channels, out_channels, k=4, s=2, p=1, bias=False)[source]

Abbreviate the creation of a Conv1d block.

Parameters
  • in_channels (int) -- Number of input channels.

  • out_channels (int) -- Number of output channels.

  • k (int) -- Kernel size.

  • s (int) -- Stride.

  • p (int) -- Padding.

  • bias (bool) -- Whether to add a learnable bias.

Return type

Conv1d

Returns

A Conv1d block.
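
For illustration: with the defaults k=4, s=2, p=1, the returned Conv1d halves the temporal length.

    import torch
    from ecgan.networks.helpers import conv1d_block

    block = conv1d_block(in_channels=1, out_channels=16)   # defaults: k=4, s=2, p=1
    x = torch.randn(8, 1, 128)                             # (batch, channels, seq_len)
    print(block(x).shape)                                  # torch.Size([8, 16, 64])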

ecgan.networks.helpers.conv1d_trans_block(in_channels, out_channels, k=4, s=2, p=1, bias=False)[source]

Abbreviate the creation of a transposed Conv1d block.

Parameters
  • in_channels (int) -- Number of input channels.

  • out_channels (int) -- Number of output channels.

  • k (int) -- Kernel size.

  • s (int) -- Stride.

  • p (int) -- Padding.

  • bias (bool) -- Whether to add a learnable bias.

Return type

ConvTranspose1d

Returns

A ConvTranspose1d block.
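
Analogously, with the defaults k=4, s=2, p=1 the transposed block doubles the temporal length.

    import torch
    from ecgan.networks.helpers import conv1d_trans_block

    block = conv1d_trans_block(in_channels=16, out_channels=1)   # defaults: k=4, s=2, p=1
    x = torch.randn(8, 16, 64)                                   # (batch, channels, seq_len)
    print(block(x).shape)                                        # torch.Size([8, 1, 128])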

ecgan.networks.helpers.create_5_hidden_layer_convnet(input_channels, hidden_channels, output_channels, seq_len, input_norm, spectral_norm=False, track_running_stats=True)[source]

Generate a downsampling CNN architecture with LeakyReLU activations and optional weight/input normalization.

Note

seq_len has to be divisible by 32 for the pooling kernel.

Parameters
  • input_channels (int) -- Number of input channels.

  • hidden_channels (List[int]) -- List of hidden channel sizes. Should be of length 5.

  • output_channels (int) -- Number of output channels.

  • seq_len (int) -- Sequence length of the data.

  • input_norm (InputNormalization) -- Type of input normalization.

  • spectral_norm (bool) -- Flag to indicate if spectral weight normalization should be performed.

  • track_running_stats (bool) -- Flag to indicate if a BatchNorm layer should track the running statistics.

Return type

Sequential

Returns

A CNN with five hidden layers as an nn.Module.
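
A construction sketch. Only the signature and the divisibility constraint come from the documentation above; the import location of InputNormalization, its member name, and the concrete channel sizes are assumptions.

    import torch
    from ecgan.networks.helpers import create_5_hidden_layer_convnet
    from ecgan.utils.custom_types import InputNormalization   # assumed import location

    net = create_5_hidden_layer_convnet(
        input_channels=1,
        hidden_channels=[16, 32, 64, 128, 256],   # must contain exactly 5 sizes
        output_channels=50,
        seq_len=128,                               # must be divisible by 32
        input_norm=InputNormalization.BATCH,       # assumed member name
    )
    out = net(torch.randn(8, 1, 128))              # nn.Sequential downsampling the 128-step input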

ecgan.networks.helpers.create_transpose_conv_net(input_channels, hidden_channels, output_channels, seq_len, input_norm, spectral_norm=False, track_running_stats=True)[source]

Create a transposed convolutional network with five hidden layers.

Parameters
  • input_channels (int) -- Number of input channels.

  • hidden_channels (List[int]) -- List of hidden channel sizes. Should be of length 5.

  • output_channels (int) -- Number of output channels.

  • seq_len (int) -- Sequence length of the data.

  • input_norm (InputNormalization) -- Type of input normalization.

  • spectral_norm (bool) -- Flag to indicate if spectral weight normalization should be performed.

  • track_running_stats (bool) -- Flag to indicate if a BatchNorm layer should track the running statistics.

Return type

Sequential

Returns

A transposed CNN with five hidden layers as an nn.Module.
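
The transposed counterpart can be sketched analogously (again, the enum import and all concrete values are assumptions):

    from ecgan.networks.helpers import create_transpose_conv_net
    from ecgan.utils.custom_types import InputNormalization   # assumed import location

    net = create_transpose_conv_net(
        input_channels=50,
        hidden_channels=[256, 128, 64, 32, 16],   # must contain exactly 5 sizes
        output_channels=1,
        seq_len=128,
        input_norm=InputNormalization.BATCH,       # assumed member name
    )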

ecgan.networks.helpers.conv_norm_relu(input_channels, output_channels, kernel_size, stride=1, padding=0, bias=False)[source]

Chain convolutional layers with ReLU activations and batch norm.

Return type

Sequential
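
A short construction sketch:

    from ecgan.networks.helpers import conv_norm_relu

    # Returns an nn.Sequential combining convolution, batch normalization and a ReLU activation.
    block = conv_norm_relu(input_channels=16, output_channels=32, kernel_size=3, padding=1)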

Generic CNNs.

class ecgan.networks.cnn.ConvolutionalNeuralNetwork(input_channels, hidden_channels, out_channels, n_classes, seq_len, input_norm)[source]

Bases: ecgan.utils.configurable.ConfigurableTorchModule

Generic CNN which can be used for classification.

forward(x)[source]

Forward pass of the CNN.

Return type

Tensor

static configure()[source]

Return a default configuration of a CNN.

Return type

Dict

class ecgan.networks.cnn.DownsampleCNN(kernel_sizes, pooling_kernel_size, input_channels, output_channels, seq_len, sampling_seq_len)[source]

Bases: ecgan.utils.configurable.ConfigurableTorchModule

A CNN used for downsampling.

forward(x)[source]

Forward pass of the downsample CNN.

static configure()[source]

Return the default configuration for the DownsampleCNN.

Return type

Dict
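
Both classes inherit from ConfigurableTorchModule, so their default hyperparameters can be inspected through the static configure() hook; the keys of the returned dictionaries are not documented in this section.

    from ecgan.networks.cnn import ConvolutionalNeuralNetwork, DownsampleCNN

    cnn_defaults = ConvolutionalNeuralNetwork.configure()   # default CNN configuration (Dict)
    downsample_defaults = DownsampleCNN.configure()         # default DownsampleCNN configuration (Dict)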

Simple RNN (LSTM) which can be used for classification.

class ecgan.networks.rnn.RecurrentNeuralNetwork(num_channels, hidden_dim, hidden_size, n_classes)[source]

Bases: ecgan.utils.configurable.ConfigurableTorchModule

Generic Recurrent Neural Network classifier with LSTM blocks followed by a fully connected layer.

forward(x)[source]

Forward pass of the RNN.

Return type

Tensor

static configure()[source]

Return the default configuration of the RNN.

Return type

Dict
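
A hedged instantiation sketch: the argument names come from the signature above, but the values and the (batch, seq_len, channels) input layout are illustrative assumptions, and the distinction between hidden_dim and hidden_size is not documented here.

    import torch
    from ecgan.networks.rnn import RecurrentNeuralNetwork

    rnn = RecurrentNeuralNetwork(num_channels=1, hidden_dim=64, hidden_size=32, n_classes=2)
    out = rnn(torch.randn(8, 128, 1))   # assumed (batch, seq_len, channels) layout; returns a Tensor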

RGAN architectures for the discriminator and generator.

class ecgan.networks.rgan.RGANGenerator(input_size, output_size, params)[source]

Bases: ecgan.utils.configurable.ConfigurableTorchModule

Generator with the RGAN architecture.

forward(x)[source]

Forward pass of the generator.

Return type

Tensor

static configure()[source]

Return the default configuration for the generator of the RGAN module.

Return type

Dict

class ecgan.networks.rgan.RGANDiscriminator(input_size, params)[source]

Bases: ecgan.utils.configurable.ConfigurableTorchModule

Discriminator with the RGAN architecture, extended with spectral normalization.

forward(x)[source]

Forward pass of the discriminator.

Return type

Tensor

static configure()[source]

Return the default configuration for the discriminator of the RGAN module.

Return type

Dict
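
The structure of the params dictionary expected by both constructors is not documented in this section; the defaults can at least be retrieved through the static configure() hooks.

    from ecgan.networks.rgan import RGANGenerator, RGANDiscriminator

    gen_defaults = RGANGenerator.configure()        # default generator settings (Dict)
    disc_defaults = RGANDiscriminator.configure()   # default discriminator settings (Dict)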

Adapted DCGAN generator and discriminator.

class ecgan.networks.dcgan.DCGANGenerator(input_channels, output_channels, params, seq_len=128)[source]

Bases: ecgan.utils.configurable.ConfigurableTorchModule

A generator using an architecture similar to Radford et al. 2015.

forward(x)[source]

Forward pass of the generator.

Return type

Tensor

static configure()[source]

Return the default configuration for the generator of the DCGAN module.

Return type

Dict

class ecgan.networks.dcgan.DCGANDiscriminator(input_channels, params)[source]

Bases: ecgan.utils.configurable.ConfigurableTorchModule

Slightly modified discriminator from Radford et al. 2015.

forward(x)[source]

Forward pass of the discriminator.

Return type

Tensor

static configure()[source]

Return the default configuration for the discriminator of the DCGAN module.

Return type

Dict
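
As with the RGAN networks, params is presumably supplied from the module configuration; the defaults are exposed through configure(). Note that the generator additionally accepts seq_len (default 128).

    from ecgan.networks.dcgan import DCGANGenerator, DCGANDiscriminator

    gen_defaults = DCGANGenerator.configure()       # default generator settings (Dict)
    disc_defaults = DCGANDiscriminator.configure()  # default discriminator settings (Dict)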

BeatGAN encoder, generator and discriminator from Zhou et al. 2019.

class ecgan.networks.beatgan.BeatganInverseEncoder(input_channels, hidden_channels, output_channels, seq_len, input_norm, spectral_norm)[source]

Bases: ecgan.utils.configurable.ConfigurableTorchModule

Encoder of the BeatGAN model.

forward(x)[source]

Perform a forward pass.

static configure()[source]

Return the default configuration for the encoder of the BeatGAN module.

Return type

Dict

class ecgan.networks.beatgan.BeatganDiscriminator(input_channels, hidden_channels, output_channels, seq_len, input_norm, spectral_norm)[source]

Bases: ecgan.utils.configurable.ConfigurableTorchModule

Discriminator of the BeatGAN model.

forward(x)[source]

Perform a forward pass.

static configure()[source]

Return the default configuration for the discriminator of the BeatGAN model.

Return type

Dict

class ecgan.networks.beatgan.BeatganGenerator(input_channels, hidden_channels, latent_size, seq_len, input_norm, spectral_norm, tanh_out)[source]

Bases: ecgan.utils.configurable.ConfigurableTorchModule

Generator of the BeatGAN model.

forward(x)[source]

Perform a forward pass.

static configure()[source]

Return the default configuration for the generator of the BeatGAN module.

Return type

Dict
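
Since the constructor arguments are listed explicitly, a hedged construction sketch for the generator might look as follows; the enum import location, its member name and the concrete sizes are assumptions.

    from ecgan.networks.beatgan import BeatganGenerator
    from ecgan.utils.custom_types import InputNormalization   # assumed import location

    generator = BeatganGenerator(
        input_channels=1,
        hidden_channels=[16, 32, 64, 128, 256],   # assumed to mirror the 5-layer helpers above
        latent_size=32,
        seq_len=128,                               # kept divisible by 32, as for the helpers
        input_norm=InputNormalization.BATCH,       # assumed member name
        spectral_norm=False,
        tanh_out=True,                             # presumably applies a final Tanh to the output
    )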

VAEGAN encoder.

class ecgan.networks.vaegan.VAEEncoder(input_channels, latent_size, hidden_channels, seq_len, spectral_norm=False, input_norm=None, track_running_stats=True)[source]

Bases: ecgan.utils.configurable.ConfigurableTorchModule

Variational Convolutional Encoder Module.

forward(x)[source]

Forward pass of the VAEGAN encoder.

Parameters

x (Tensor) -- Input data.

Return type

Tuple[Tensor, Tensor]

Returns

Tuple of (mu, log_var).

static configure()[source]

Return the default configuration for the encoder of the VAEGAN module.

Return type

Dict
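
A sketch of the encoder's forward pass, assuming hidden channel sizes mirroring the 5-layer helper, a (batch, channels, seq_len) input layout, and the documented defaults (no spectral or input normalization). The final line is the standard VAE reparameterization, shown only for illustration.

    import torch
    from ecgan.networks.vaegan import VAEEncoder

    encoder = VAEEncoder(
        input_channels=1,
        latent_size=32,
        hidden_channels=[16, 32, 64, 128, 256],   # assumed channel layout
        seq_len=128,
    )
    mu, log_var = encoder(torch.randn(8, 1, 128))             # documented return: (mu, log_var)
    z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # standard reparameterization trick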