addPix2PixHDLocalEnhancer

Add local enhancer network to pix2pixHD generator network

Description

netWithEnhancer = addPix2PixHDLocalEnhancer(net) adds a local enhancer network to a pix2pixHD generator network, net. For more information about the network architecture, see pix2pixHD Local Enhancer Network.

This function requires Deep Learning Toolbox™.

netWithEnhancer = addPix2PixHDLocalEnhancer(net,Name,Value) controls aspects of the local enhancer network creation using name-value arguments.

Examples

Specify the network input size for 32-channel data of size 512-by-1024.

inputSize = [512 1024 32];

Create a pix2pixHD global generator network.

pix2pixHD = pix2pixHDGlobalGenerator(inputSize)
pix2pixHD = 
  dlnetwork with properties:

         Layers: [84x1 nnet.cnn.layer.Layer]
    Connections: [92x2 table]
     Learnables: [110x3 table]
          State: [0x3 table]
     InputNames: {'GlobalGenerator_inputLayer'}
    OutputNames: {'GlobalGenerator_fActivation'}
    Initialized: 1

Add a local enhancer network to the pix2pixHD network.

pix2pixHDEnhanced = addPix2PixHDLocalEnhancer(pix2pixHD)
pix2pixHDEnhanced = 
  dlnetwork with properties:

         Layers: [113x1 nnet.cnn.layer.Layer]
    Connections: [124x2 table]
     Learnables: [146x3 table]
          State: [0x3 table]
     InputNames: {'LocalEnhancer_inputLayer'  'GlobalGenerator_inputLayer'}
    OutputNames: {'LocalEnhancer_fActivation'}
    Initialized: 1

Display the network with the local enhancer.

analyzeNetwork(pix2pixHDEnhanced)
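The enhanced network has two inputs, listed in InputNames: the full-resolution image for the local enhancer and a half-resolution image for the global generator. A forward pass can be sketched as follows. This is a sketch for illustration only: the random input data, the batch size of 1, and supplying the inputs in InputNames order are assumptions, not part of the documented example.

```matlab
% Full-resolution input for the local enhancer: twice the
% global generator input size of [512 1024 32].
X = dlarray(rand(1024,2048,32,1,"single"),"SSCB");

% Half-resolution input for the global generator.
Xlow = dlarray(rand(512,1024,32,1,"single"),"SSCB");

% Supply the inputs in the order given by InputNames.
Y = predict(pix2pixHDEnhanced,X,Xlow);
size(Y)
```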

Input Arguments

Pix2pixHD generator network, specified as a dlnetwork (Deep Learning Toolbox) object. You can create a pix2pixHD generator network using the pix2pixHDGlobalGenerator function.

Name-Value Pair Arguments

Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.

Example: 'FilterSizeInFirstAndLastBlocks',[5 7] adds a local enhancer whose first and last convolution layers have filters of size 5-by-7.

Filter size in the first and last convolution layers of the local enhancer network, specified as a positive odd integer or 2-element vector of positive odd integers of the form [height width]. When you specify the filter size as a scalar, the filter has equal height and width.

Filter size in intermediate convolution layers in the local enhancer network, specified as a positive odd integer or 2-element vector of positive odd integers of the form [height width]. The intermediate convolution layers are the convolution layers excluding the first and last convolution layer. When you specify the filter size as a scalar, the filter has identical height and width. Typical values are between 3 and 7.

Number of residual blocks in the local enhancer network, specified as a positive integer. Each residual block consists of a set of convolution, normalization, and nonlinear layers, with a skip connection that adds the block input to the block output.
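For example, the number of residual blocks can be specified when adding the enhancer. This is a minimal sketch; the input size [512 1024 32] and the value 5 are illustrative.

```matlab
% Create a global generator, then add a local enhancer
% with 5 residual blocks instead of the default.
net = pix2pixHDGlobalGenerator([512 1024 32]);
netEnhanced = addPix2PixHDLocalEnhancer(net,"NumResidualBlocks",5);
```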

Style of padding used in the local enhancer network, specified as one of these values. Each example shows the result of padding the 3-by-3 matrix [3 1 4; 1 5 9; 2 6 5] by two elements on each side.

Numeric scalar — Pad with the specified numeric value. For example, padding with the value 2:

    2 2 2 2 2 2 2
    2 2 2 2 2 2 2
    2 2 3 1 4 2 2
    2 2 1 5 9 2 2
    2 2 2 6 5 2 2
    2 2 2 2 2 2 2
    2 2 2 2 2 2 2

'symmetric-include-edge' — Pad using mirrored values of the input, including the edge values:

    5 1 1 5 9 9 5
    1 3 3 1 4 4 1
    1 3 3 1 4 4 1
    5 1 1 5 9 9 5
    6 2 2 6 5 5 6
    6 2 2 6 5 5 6
    5 1 1 5 9 9 5

'symmetric-exclude-edge' — Pad using mirrored values of the input, excluding the edge values:

    5 6 2 6 5 6 2
    9 5 1 5 9 5 1
    4 1 3 1 4 1 3
    9 5 1 5 9 5 1
    5 6 2 6 5 6 2
    9 5 1 5 9 5 1
    4 1 3 1 4 1 3

'replicate' — Pad using repeated border elements of the input:

    3 3 3 1 4 4 4
    3 3 3 1 4 4 4
    3 3 3 1 4 4 4
    1 1 1 5 9 9 9
    2 2 2 6 5 5 5
    2 2 2 6 5 5 5
    2 2 2 6 5 5 5
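Most of these padding styles can be reproduced on a small matrix with the padarray function, as a way to see the behavior directly. This is an illustrative sketch: padarray's 'symmetric' option corresponds to include-edge mirroring, and padarray has no exclude-edge option.

```matlab
A = [3 1 4; 1 5 9; 2 6 5];

padarray(A,[2 2],2)            % pad with the numeric value 2
padarray(A,[2 2],"symmetric")  % mirror, including edge values
padarray(A,[2 2],"replicate")  % repeat border elements
```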

Method used to upsample activations in the local enhancer network, specified as a string or character vector. The default upsampling block uses a transposed 2-D convolution layer.

Data Types: char | string

Weight initialization used in convolution layers of the local enhancer network, specified as "glorot", "he", "narrow-normal", or a function handle. For more information, see Specify Custom Weight Initialization Function (Deep Learning Toolbox).

Activation function to use in the local enhancer network, specified as one of these values. For more information and a list of available layers, see Activation Layers (Deep Learning Toolbox).

  • "relu" — Use a reluLayer (Deep Learning Toolbox)

  • "leakyRelu" — Use a leakyReluLayer (Deep Learning Toolbox) with a scale factor of 0.2

  • "elu" — Use an eluLayer (Deep Learning Toolbox)

  • A layer object
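For example, the activation layers in the local enhancer can be switched to leaky ReLU. This is a sketch; the input size is illustrative.

```matlab
% Use leaky ReLU activations (scale factor 0.2) in the
% local enhancer instead of the default ReLU.
net = pix2pixHDGlobalGenerator([512 1024 32]);
netEnhanced = addPix2PixHDLocalEnhancer(net,"ActivationLayer","leakyRelu");
```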

Normalization operation to use after each convolution in the local enhancer network. By default, the network uses instance normalization layers. For more information and a list of available layers, see Normalization, Dropout, and Cropping Layers (Deep Learning Toolbox).

Probability of dropout in the local enhancer network, specified as a number in the range [0, 1]. If you specify a value of 0, then the network does not include dropout layers. If you specify a value greater than 0, then the network includes a dropoutLayer (Deep Learning Toolbox) in each residual block.
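For example, dropout layers can be added to the residual blocks as follows. This is a sketch; the probability 0.5 is illustrative.

```matlab
% Include a dropoutLayer with probability 0.5 in each
% residual block. A Dropout value of 0 (the default)
% omits the dropout layers.
net = pix2pixHDGlobalGenerator([512 1024 32]);
netEnhanced = addPix2PixHDLocalEnhancer(net,"Dropout",0.5);
```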

Prefix to all layer names in the local enhancer network, specified as a string or character vector.

Data Types: char | string

Output Arguments

Pix2pixHD generator network with local enhancer, returned as a dlnetwork (Deep Learning Toolbox) object.

More About

pix2pixHD Local Enhancer Network

The addPix2PixHDLocalEnhancer function performs these operations to add a local enhancer network to a pix2pixHD global generator network. The default enhanced network follows the architecture proposed by Wang et al. [1].

  1. The local enhancer network has an initial block of layers that accepts images of size [2*H 2*W C], where H is the height, W is the width, and C is the number of channels of the input to the global generator network, net. When net has multiple image input layers, the input image size of the local enhancer network is twice the input size with the maximum resolution.

  2. After the initial block, the local enhancer network has a single downsampling block that downsamples the data by a factor of two. Therefore, the output after downsampling has size [H W 2*C].

  3. The addPix2PixHDLocalEnhancer function trims the final block from the global generator network. The function then adds the output of the last upsampling block in the global generator network to the output of the downsampled data from the enhancer network using an additionLayer (Deep Learning Toolbox).

  4. The output of the addition then passes through NumResidualBlocks residual blocks from the local enhancer.

  5. The residual blocks are followed by a single upsampling block that upsamples data to size [2*H 2*W C].

  6. The addPix2PixHDLocalEnhancer function adds a final block to the enhanced network. The convolution layer has properties specified by the arguments of addPix2PixHDLocalEnhancer. If the global generator network has a final activation layer, then the function adds the same type of activation layer to the enhanced network.
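The size relationship in steps 1 and 2 can be checked by inspecting the image input layers of an enhanced network. This is a sketch; the input size [256 512 3] is illustrative, and the input layers are located by type because their positions in the Layers array are not documented.

```matlab
net = pix2pixHDGlobalGenerator([256 512 3]);
netEnhanced = addPix2PixHDLocalEnhancer(net);

% Locate the image input layers by type and display their
% sizes. The enhancer input is twice the spatial size of
% the global generator input.
isInput = arrayfun(@(l) isa(l,"nnet.cnn.layer.ImageInputLayer"), ...
    netEnhanced.Layers);
for layer = netEnhanced.Layers(isInput)'
    disp(layer.Name)
    disp(layer.InputSize)
end
```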

The table describes the blocks of layers that comprise the local enhancer network.

Block Type | Layers | Default Block
Initial block
  • An imageInputLayer (Deep Learning Toolbox) that accepts images of twice the size of the input to the pix2pixHD global generator network, net.

  • A convolution2dLayer (Deep Learning Toolbox) with a stride of [1 1] and a filter size of FilterSizeInFirstAndLastBlocks.

  • An optional normalization layer, specified by the NormalizationLayer name-value argument.

  • An activation layer specified by the ActivationLayer name-value argument.

Default block: image input layer, 2-D convolution layer, instance normalization layer, ReLU layer

Downsampling block
  • A convolution2dLayer (Deep Learning Toolbox) with a stride of [2 2] to perform downsampling. The convolution layer has a filter size of FilterSizeInIntermediateBlocks.

  • An optional normalization layer, specified by the NormalizationLayer name-value argument.

  • An activation layer specified by the ActivationLayer name-value argument.

Default block: 2-D convolution layer, instance normalization layer, ReLU layer

Residual block
  • A convolution2dLayer (Deep Learning Toolbox) with a stride of [1 1] and a filter size of FilterSizeInIntermediateBlocks.

  • An optional normalization layer, specified by the NormalizationLayer name-value argument.

  • An activation layer specified by the ActivationLayer name-value argument.

  • An optional dropoutLayer (Deep Learning Toolbox). By default, residual blocks omit a dropout layer. Include a dropout layer by specifying the Dropout name-value argument as a value in the range (0, 1].

  • A second convolution2dLayer (Deep Learning Toolbox).

  • An optional second normalization layer.

  • An additionLayer (Deep Learning Toolbox) that adds the block input to the block output, forming the skip connection.

Default block: 2-D convolution layer, instance normalization layer, ReLU layer, 2-D convolution layer, instance normalization layer, addition layer

Upsampling block
  • An upsampling layer that upsamples by a factor of 2 using the method specified by the UpsampleMethod name-value argument. The layer has a filter size of FilterSizeInIntermediateBlocks.

  • An optional normalization layer, specified by the NormalizationLayer name-value argument.

  • An activation layer specified by the ActivationLayer name-value argument.

Default block: transposed 2-D convolution layer, instance normalization layer, ReLU layer

Final block
  • A convolution2dLayer (Deep Learning Toolbox) with a stride of [1 1] and a filter size of FilterSizeInFirstAndLastBlocks.

  • An optional activation layer according to the global generator network, net.

Default block: 2-D convolution layer, tanh layer

References

[1] Wang, Ting-Chun, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. "High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs." In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 8798–8807. Salt Lake City, UT, USA: IEEE, 2018. https://doi.org/10.1109/CVPR.2018.00917.

Introduced in R2021a