List of Deep Learning Layer Blocks and Subsystems

This page provides a list of deep learning layer blocks and subsystems in Simulink®. To export a MATLAB® object-based network to a Simulink model that uses deep learning layer blocks and subsystems, use the exportNetworkToSimulink function. Use layer blocks for networks that have a small number of learnable parameters and that you intend to deploy to embedded hardware.
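
As a quick orientation, the export workflow looks like the following sketch. This is a minimal illustrative example, assuming Deep Learning Toolbox and Simulink are installed; the network architecture and the model name myNet are arbitrary, and the ModelName argument is assumed from the exportNetworkToSimulink function reference (check your release).

```matlab
% Build a small network using only layers that have corresponding
% blocks, then export it to a Simulink model made of layer blocks.
layers = [
    featureInputLayer(4)        % Normalization "none" maps to an Inport block
    fullyConnectedLayer(8)      % maps to a Fully Connected Layer block
    reluLayer                   % maps to a ReLU Layer block
    fullyConnectedLayer(3)
    softmaxLayer];              % maps to a Softmax Layer block
net = dlnetwork(layers);        % dlnetwork initializes the learnables

% Export the network to a new Simulink model.
exportNetworkToSimulink(net,ModelName="myNet");
```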

Deep Learning Layer Blocks

The exportNetworkToSimulink function generates these blocks and subsystems to represent the layers of a network: for each layer, the function generates the block or subsystem that corresponds to that layer object in MATLAB.

The function supports only a limited set of layer objects, and for some supported layer objects it does not support certain property values.

  • If the input network contains a layer object that does not have a corresponding layer block or subsystem, then the function generates a placeholder subsystem for you to replace with your own implementation of the layer. For more information, see Implement Unsupported Deep Learning Layer Blocks.

  • If the input network contains a layer object that has a corresponding layer block but the object uses a property value that the block does not support, then the function either generates a placeholder subsystem (since R2026a), substitutes a different value, or throws an error.

    Before R2026a: The function throws an error for some networks with unsupported property values.

The Limitations column in the tables in this section lists conditions where the blocks and subsystems do not have parity with the corresponding layer objects.

For a list of deep learning layer objects in MATLAB, see List of Deep Learning Layers.

Activation Layers

| Block | Corresponding Layer Object | Description | Limitations |
| --- | --- | --- | --- |
| Clipped ReLU Layer | clippedReluLayer | A clipped ReLU layer performs a threshold operation, where any input value less than zero is set to zero and any value above the clipping ceiling is set to that ceiling. | |
| Leaky ReLU Layer | leakyReluLayer | A leaky ReLU layer performs a threshold operation, where any input value less than zero is multiplied by a fixed scalar. | |
| PReLU Layer (since R2026a) | preluLayer | A PReLU layer performs a threshold operation, where for each channel, any input value less than zero is multiplied by a scalar learned at training time. | |
| ReLU Layer | reluLayer | A ReLU layer performs a threshold operation on each element of the input, where any value less than zero is set to zero. | |
| Sigmoid Layer | sigmoidLayer | A sigmoid layer applies a sigmoid function to the input such that the output is bounded in the interval (0,1). | |
| Softmax Layer | softmaxLayer | A softmax layer applies a softmax function to the input. | Before R2026a: If you specify a data format that contains multiple spatial (S) dimensions, the spatial dimensions of the input data must be singleton. |
| Swish Layer (since R2026a) | swishLayer | A swish activation layer applies the swish function to the layer inputs. | |
| Tanh Layer | tanhLayer | A hyperbolic tangent (tanh) activation layer applies the tanh function to the layer inputs. | |
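
The activation blocks implement simple element-wise functions. As an illustrative sketch in plain MATLAB (not the blocks themselves; the ceiling of 10 and scale of 0.01 stand in for values you might pass to clippedReluLayer and leakyReluLayer):

```matlab
x = [-2 -0.5 0 3 12];

relu        = max(x,0);                   % ReLU: negatives set to zero
clippedRelu = min(max(x,0),10);           % Clipped ReLU, ceiling 10: [0 0 0 3 10]
leakyRelu   = max(x,0) + 0.01*min(x,0);   % Leaky ReLU, scale 0.01
sig         = 1./(1 + exp(-x));           % Sigmoid: output in (0,1)
sm          = exp(x)./sum(exp(x));        % Softmax over the vector, sums to 1
```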

Combination Layers

| Block | Corresponding Layer Object | Description | Limitations |
| --- | --- | --- | --- |
| Addition Layer | additionLayer | An addition layer adds inputs from multiple neural network layers element-wise. | The additionLayer object accepts scalar and vector inputs and expands those inputs to have the same dimensions as the matrix inputs, but the Addition Layer block supports expanding only scalar inputs. |
| Concatenation Layer | concatenationLayer | A concatenation layer takes inputs and concatenates them along a specified dimension. The inputs must have the same size in all dimensions except the concatenation dimension. | |
| Depth Concatenation Layer | depthConcatenationLayer | A depth concatenation layer takes inputs that have the same height and width and concatenates them along the channel dimension. | |
| Multiplication Layer | multiplicationLayer | A multiplication layer multiplies inputs from multiple neural network layers element-wise. | The multiplicationLayer object accepts scalar and vector inputs and expands those inputs to have the same dimensions as the matrix inputs, but the Multiplication Layer block supports expanding only scalar inputs. |

Convolution and Fully Connected Layers

| Block | Corresponding Layer Object | Description | Limitations |
| --- | --- | --- | --- |
| Convolution 1D Layer | convolution1dLayer | A 1-D convolutional layer applies sliding convolutional filters to 1-D input. | • The Layer parameter does not support convolution layer objects that have the PaddingValue property set to "symmetric-exclude-edge". If you specify an object that uses that padding value, the block produces a warning and uses the value "symmetric-include-edge" instead. • The Layer parameter does not support convolution layer objects that have the DilationFactor property set to a value other than 1. • Before R2026a: The Layer parameter supports the manual padding mode for only some padding sizes. • Before R2026a: The Layer parameter for the Convolution 1D Layer block does not support the causal padding mode. |
| Convolution 2D Layer | convolution2dLayer | A 2-D convolutional layer applies sliding convolutional filters to 2-D input. | |
| Convolution 3D Layer | convolution3dLayer | A 3-D convolutional layer applies sliding cuboidal convolution filters to 3-D input. | |
| Fully Connected Layer | fullyConnectedLayer | A fully connected layer multiplies input vectors by a weight matrix and then adds a bias vector. | • The Layer parameter does not support fully connected layer objects that have the InputLearnables and OutputLearnables properties set to nonempty values. For an example that shows how to untie shared learnables, see Neural Network Weight Tying. • The Layer parameter supports only fully connected layer objects that have the OperationDimension property set to "spatial-channel". |
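
When preparing a network for export, it can help to construct convolution layer objects with property values the blocks support. A hedged sketch (the filter size and filter count are arbitrary):

```matlab
% 1-D convolution with block-supported settings: the default
% DilationFactor of 1 and a padding value the block accepts.
convLayer = convolution1dLayer(3,16,Padding="same", ...
    PaddingValue="symmetric-include-edge");

disp(convLayer.DilationFactor)   % 1, as the block requires
```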

Input Layers

For input layer objects that have the Normalization property set to "none", the exportNetworkToSimulink function generates an Inport (Simulink) block.

| Block | Corresponding Layer Object | Description | Limitations |
| --- | --- | --- | --- |
| Rescale-Symmetric 1D | featureInputLayer or sequenceInputLayer that has the Normalization property set to "rescale-symmetric" | The Rescale-Symmetric 1D block inputs 1-dimensional data to a neural network and rescales the input to be in the range [-1, 1]. | • The Layer parameter does not support objects that have the SplitComplexInputs property set to 1 (true). • The 2D and 3D blocks support only input data that has 1 or 3 channels, corresponding to grayscale or RGB image data, respectively. |
| Rescale-Symmetric 2D | imageInputLayer that has the Normalization property set to "rescale-symmetric" | The Rescale-Symmetric 2D block inputs 2-dimensional image data to a neural network and rescales the input to be in the range [-1, 1]. | |
| Rescale-Symmetric 3D | image3dInputLayer that has the Normalization property set to "rescale-symmetric" | The Rescale-Symmetric 3D block inputs 3-dimensional image data to a neural network and rescales the input to be in the range [-1, 1]. | |
| Rescale-Zero-One 1D | featureInputLayer or sequenceInputLayer that has the Normalization property set to "rescale-zero-one" | The Rescale-Zero-One 1D block inputs 1-dimensional data to a neural network and rescales the input to be in the range [0, 1]. | |
| Rescale-Zero-One 2D | imageInputLayer that has the Normalization property set to "rescale-zero-one" | The Rescale-Zero-One 2D block inputs 2-dimensional image data to a neural network and rescales the input to be in the range [0, 1]. | |
| Rescale-Zero-One 3D | image3dInputLayer that has the Normalization property set to "rescale-zero-one" | The Rescale-Zero-One 3D block inputs 3-dimensional image data to a neural network and rescales the input to be in the range [0, 1]. | |
| Zerocenter 1D | featureInputLayer or sequenceInputLayer that has the Normalization property set to "zerocenter" | The Zerocenter 1D block inputs 1-dimensional data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object that you pass into the block. | |
| Zerocenter 2D | imageInputLayer that has the Normalization property set to "zerocenter" | The Zerocenter 2D block inputs 2-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object that you pass into the block. | |
| Zerocenter 3D | image3dInputLayer that has the Normalization property set to "zerocenter" | The Zerocenter 3D block inputs 3-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object that you pass into the block. | |
| Zscore 1D | featureInputLayer or sequenceInputLayer that has the Normalization property set to "zscore" | The Zscore 1D block inputs 1-dimensional data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object that you pass into the block, then dividing by the value of the StandardDeviation property. | |
| Zscore 2D | imageInputLayer that has the Normalization property set to "zscore" | The Zscore 2D block inputs 2-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object that you pass into the block, then dividing by the value of the StandardDeviation property. | |
| Zscore 3D | image3dInputLayer that has the Normalization property set to "zscore" | The Zscore 3D block inputs 3-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object that you pass into the block, then dividing by the value of the StandardDeviation property. | |

Exporting networks with input layer objects that have the SplitComplexInputs property set to 1 (true) is not supported.
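
The rescale and Z-score blocks implement element-wise normalizations driven by the Min, Max, Mean, and StandardDeviation properties of the layer object. A plain-MATLAB sketch of the operations (the numeric values are illustrative only):

```matlab
x = [10 12 9 11];
mn = 9;    mx = 12;    % stand in for the layer's Min and Max
mu = 10.5; sd = 1.2;   % stand in for the layer's Mean and StandardDeviation

rescaleZeroOne   = (x - mn)./(mx - mn);         % output in [0, 1]
rescaleSymmetric = 2*(x - mn)./(mx - mn) - 1;   % output in [-1, 1]
zerocenter       = x - mu;                      % subtract the mean
zscore           = (x - mu)./sd;                % subtract mean, divide by std
```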

Normalization Layers

| Block | Corresponding Layer Object | Description | Limitations |
| --- | --- | --- | --- |
| Batch Normalization Layer | batchNormalizationLayer | A batch normalization layer block normalizes a mini-batch of data for each channel independently. | |
| Instance Normalization Layer (since R2026a) | instanceNormalizationLayer | An instance normalization layer block normalizes a mini-batch of data across each channel for each observation independently. | |
| Layer Normalization Layer | layerNormalizationLayer | A layer normalization layer block normalizes a mini-batch of data across all channels. | If you set the Data format parameter to SSC or SSSC, the Layer parameter does not support layerNormalizationLayer objects that have the OperationDimension property set to "channel-only". |
| Inverse Zscore (since R2026a) | inverseNormalizationLayer that has the Normalization property set to "zscore" | An inverse Z-score block applies the inverse of the Z-score normalization operation. | • The layer object specified by the Layer parameter must have the Normalization property set to "zscore". • The layer object specified by the Layer parameter must have the OperationDimension property set to "channel". |
| Inverse Zerocenter (since R2026a) | inverseNormalizationLayer that has the Normalization property set to "zerocenter" | An inverse zero-center block applies the inverse of the zero-center normalization operation. | • The layer object specified by the Layer parameter must have the Normalization property set to "zerocenter". • The layer object specified by the Layer parameter must have the OperationDimension property set to "channel". |
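
The inverse normalization blocks undo the corresponding input normalizations, which is useful when a network predicts in normalized units. Conceptually, in plain MATLAB (illustrative values):

```matlab
z  = [-0.5 1.25];        % network output in normalized units
mu = 10.5; sd = 1.2;     % Mean and StandardDeviation of the layer object

xFromZscore     = z.*sd + mu;   % inverse of (x - mu)./sd
xFromZerocenter = z + mu;       % inverse of x - mu
```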

Pooling Layers

| Block | Corresponding Layer Object | Description | Limitations |
| --- | --- | --- | --- |
| Average Pooling 1D Layer | averagePooling1dLayer | A 1-D average pooling layer performs downsampling by dividing the input into 1-D pooling regions, then computing the average of each region. | • The Layer parameter does not support average pooling layer objects that have the PaddingValue property set to "mean". If you specify an object that uses that padding value, the block produces a warning and uses the value 0 instead. • Before R2026a: The Layer parameter supports the manual padding mode for only some padding sizes. |
| Average Pooling 2D Layer | averagePooling2dLayer | A 2-D average pooling layer performs downsampling by dividing the input into rectangular pooling regions, then computing the average of each region. | |
| Average Pooling 3D Layer | averagePooling3dLayer | A 3-D average pooling layer performs downsampling by dividing three-dimensional input into cuboidal pooling regions, then computing the average of each region. | |
| Global Average Pooling 1D Layer | globalAveragePooling1dLayer | A 1-D global average pooling layer performs downsampling by outputting the average of the time or spatial dimensions of the input. | |
| Global Average Pooling 2D Layer | globalAveragePooling2dLayer | A 2-D global average pooling layer performs downsampling by computing the mean of the height and width dimensions of the input. | |
| Global Average Pooling 3D Layer | globalAveragePooling3dLayer | A 3-D global average pooling layer performs downsampling by computing the mean of the height, width, and depth dimensions of the input. | |
| Global Max Pooling 1D Layer | globalMaxPooling1dLayer | A 1-D global max pooling layer performs downsampling by outputting the maximum of the time or spatial dimensions of the input. | |
| Global Max Pooling 2D Layer | globalMaxPooling2dLayer | A 2-D global max pooling layer performs downsampling by computing the maximum of the height and width dimensions of the input. | |
| Global Max Pooling 3D Layer | globalMaxPooling3dLayer | A 3-D global max pooling layer performs downsampling by computing the maximum of the height, width, and depth dimensions of the input. | |
| Max Pooling 1D Layer | maxPooling1dLayer | A 1-D max pooling layer performs downsampling by dividing the input into 1-D pooling regions, then computing the maximum of each region. | Before R2026a: The Layer parameter supports the manual padding mode for only some padding sizes. |
| Max Pooling 2D Layer | maxPooling2dLayer | A 2-D max pooling layer performs downsampling by dividing the input into rectangular pooling regions, then computing the maximum of each region. | |
| Max Pooling 3D Layer | maxPooling3dLayer | A 3-D max pooling layer performs downsampling by dividing three-dimensional input into cuboidal pooling regions, then computing the maximum of each region. | |
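
The pooling operations divide the input into regions and reduce each region. For a 1-D case with pool size 2 and stride 2, the computations can be sketched in plain MATLAB as:

```matlab
x = [1 2 3 4 5 6];
poolSize = 2; stride = 2;
starts = 1:stride:(numel(x) - poolSize + 1);

yAvg = arrayfun(@(i) mean(x(i:i+poolSize-1)), starts);   % [1.5 3.5 5.5]
yMax = arrayfun(@(i) max(x(i:i+poolSize-1)),  starts);   % [2 4 6]
globalAvg = mean(x);   % global pooling reduces the whole dimension: 3.5
```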

Sequence Layers

| Block | Corresponding Layer Object | Description | Limitations |
| --- | --- | --- | --- |
| Flatten Layer | flattenLayer | A flatten layer collapses the spatial dimensions of the input into the channel dimension. | |
| GRU Layer (since R2025a) | gruLayer | A GRU layer is an RNN layer that learns dependencies between time steps in time-series and sequence data. | The Layer parameter does not accept gruLayer objects that have the HasStateInputs or HasStateOutputs properties set to 1 (true). |
| GRU Projected Layer (since R2025a) | gruProjectedLayer | A GRU projected layer is an RNN layer that learns dependencies between time steps in time-series and sequence data using projected learnable weights. | The Layer parameter does not accept gruProjectedLayer objects that have the HasStateInputs or HasStateOutputs properties set to 1 (true). |
| LSTM Layer | lstmLayer | An LSTM layer is an RNN layer that learns long-term dependencies between time steps in time-series and sequence data. The layer performs additive interactions, which can help improve gradient flow over long sequences during training. | The Layer parameter does not accept lstmLayer or lstmProjectedLayer objects that have the HasStateInputs or HasStateOutputs properties set to 1 (true). |
| LSTM Projected Layer | lstmProjectedLayer | An LSTM projected layer is an RNN layer that learns long-term dependencies between time steps in time-series and sequence data using projected learnable weights. | |
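
The GRU and LSTM blocks manage hidden and cell state internally, so the layer objects must use the default state configuration. A sketch (64 hidden units is arbitrary):

```matlab
% Supported: default state configuration, with
% HasStateInputs and HasStateOutputs both 0 (false).
lstm = lstmLayer(64);
gru  = gruLayer(64);

% Not accepted by the blocks: explicit state ports.
% lstmWithStates = lstmLayer(64,HasStateInputs=true,HasStateOutputs=true);
```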

Utility Layers

| Block or Subsystem | Corresponding Layer Object | Description | Limitations |
| --- | --- | --- | --- |
| Dropout Layer | dropoutLayer | At training time, a dropout layer randomly sets input elements to zero with a given probability. At prediction time, the output of a dropout layer is equal to its input. | |
| Identity Layer (since R2026a) | identityLayer | An identity layer is a layer whose output is identical to its input. | |
| Scaling Layer | scalingLayer | A scaling layer linearly scales and offsets the input data. | |
| Spatial Dropout Layer (since R2026a) | spatialDropoutLayer | At training time, a spatial dropout layer randomly selects input channels with a given probability and sets all elements of those channels to zero. At prediction time, the output of a spatial dropout layer is equal to its input. | |
| Subsystem representing nested neural network (since R2026a) | networkLayer | A network layer contains a nested network. Use network layers to simplify building large networks that contain repeating components. | |
| Subsystem representing projected layer (since R2026a) | ProjectedLayer | A projected layer is a compressed neural network layer resulting from projection. | |

Neural ODE Layers

| Subsystem | Corresponding Layer Object | Description | Limitations |
| --- | --- | --- | --- |
| Integrator block as ODE solver and ODE network represented as layer blocks (since R2025a) | neuralODELayer | A neural ODE layer learns to represent dynamic behavior as a system of ODEs. | The subsystem supports continuous-time integration only. For discrete-time integration (for example, for fixed-point conversion applications), replace the integrator block in the subsystem with a discrete-time integrator block. |
