For most tasks, you can use built-in layers. If there is not a built-in layer that you need for your task, then you can define your own custom layer. You can specify a custom loss function using a custom output layer and define custom layers with learnable and state parameters. After defining a custom layer, you can check that the layer is valid, GPU compatible, and outputs correctly defined gradients. To learn more, see Define Custom Deep Learning Layers. For a list of supported layers, see List of Deep Learning Layers.
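As a minimal illustration of the custom layer pattern described above, the sketch below defines a layer with one learnable parameter by subclassing nnet.layer.Layer. The class name, property name, and scaling behavior are illustrative choices, not from this page.

```matlab
% Sketch of a custom layer with a learnable parameter.
% The layer scales each channel of its input by a learnable weight.
classdef channelScalingLayer < nnet.layer.Layer
    properties (Learnable)
        Weights  % learnable per-channel scale factors
    end
    methods
        function layer = channelScalingLayer(numChannels, name)
            layer.Name = name;
            layer.Description = "Per-channel scaling";
            layer.Weights = rand(1, 1, numChannels);
        end
        function Z = predict(layer, X)
            % Scale each channel by its learnable weight.
            Z = X .* layer.Weights;
        end
    end
end
```

After defining such a layer, you can validate it with checkLayer before using it in a network.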
If the trainingOptions function does not provide the training options that you need for your task, or custom output layers do not support the loss functions that you need, then you can define a custom training loop. For models that layer graphs do not support, you can define a custom model as a function. To learn more, see Define Custom Training Loops, Loss Functions, and Networks.
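The core of a custom training loop is a model loss function evaluated with dlfeval so that dlgradient can compute gradients via automatic differentiation. This is a sketch; the model function and parameters struct are placeholders for your own model.

```matlab
% Sketch: model loss function for a custom training loop.
% model(parameters, X) is assumed to be a user-defined forward pass.
function [loss, gradients] = modelLoss(parameters, X, T)
    Y = model(parameters, X);            % forward pass (user-defined)
    loss = crossentropy(Y, T);           % built-in classification loss
    gradients = dlgradient(loss, parameters);
end

% Inside the training loop, evaluate the loss and update the parameters:
% [loss, gradients] = dlfeval(@modelLoss, parameters, X, T);
% [parameters, avg, avgSq] = adamupdate(parameters, gradients, avg, avgSq, iteration);
```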
Use deep learning operations to develop MATLAB® code for custom layers, training loops, and model functions.
| dlarray | Deep learning array for customization (Since R2019b) |
| dims | Dimension labels of dlarray |
| finddim | Find dimensions with specified label (Since R2019b) |
| extractdata | Extract data from dlarray |
| isdlarray | Check if object is dlarray |
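A short sketch of the dlarray workflow: create a labeled array, query its dimension labels, and extract the underlying numeric data. The dimensions chosen here are illustrative.

```matlab
% Sketch: create a formatted dlarray and inspect it.
% "SSCB" labels the dimensions spatial, spatial, channel, batch.
X = dlarray(rand(28, 28, 3, 16), "SSCB");
dims(X)                 % dimension labels: 'SSCB'
finddim(X, "C")         % index of the channel dimension: 3
data = extractdata(X);  % plain numeric array
isdlarray(X)            % true
```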
Deep Learning Operations
| dlconv | Deep learning convolution (Since R2019b) |
| dltranspconv | Deep learning transposed convolution (Since R2019b) |
| lstm | Long short-term memory (Since R2019b) |
| gru | Gated recurrent unit (Since R2020a) |
| attention | Dot-product attention (Since R2022b) |
| embed | Embed discrete data (Since R2020b) |
| fullyconnect | Sum all weighted input data and apply a bias (Since R2019b) |
| dlode45 | Deep learning solution of nonstiff ordinary differential equation (ODE) (Since R2021b) |
| batchnorm | Normalize data across all observations for each channel independently (Since R2019b) |
| crosschannelnorm | Cross channel square-normalize using local responses (Since R2020a) |
| groupnorm | Normalize data across grouped subsets of channels for each observation independently (Since R2020b) |
| instancenorm | Normalize across each channel for each observation independently (Since R2021a) |
| layernorm | Normalize data across all channels for each observation independently (Since R2021a) |
| avgpool | Pool data to average values over spatial dimensions (Since R2019b) |
| maxpool | Pool data to maximum value (Since R2019b) |
| maxunpool | Unpool the output of a maximum pooling operation (Since R2019b) |
| relu | Apply rectified linear unit activation (Since R2019b) |
| leakyrelu | Apply leaky rectified linear unit activation (Since R2019b) |
| gelu | Apply Gaussian error linear unit (GELU) activation (Since R2022b) |
| softmax | Apply softmax activation to channel dimension (Since R2019b) |
| sigmoid | Apply sigmoid activation (Since R2019b) |
| crossentropy | Cross-entropy loss for classification tasks (Since R2019b) |
| l1loss | L1 loss for regression tasks (Since R2021b) |
| l2loss | L2 loss for regression tasks (Since R2021b) |
| huber | Huber loss for regression tasks (Since R2021a) |
| mse | Half mean squared error (Since R2019b) |
| ctc | Connectionist temporal classification (CTC) loss for unaligned sequence classification (Since R2021a) |
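These operations compose into model functions that replace layer graphs. The sketch below chains a few of them into a small classifier; the parameter struct fields are illustrative names, not part of any toolbox API.

```matlab
% Sketch: a small model function built from deep learning operations.
% parameters is a struct of dlarray weights (user-defined, illustrative fields).
function Y = model(parameters, X)
    % Fully connected layer, ReLU activation, then a second
    % fully connected layer with softmax for classification.
    Y = fullyconnect(X, parameters.fc1.Weights, parameters.fc1.Bias);
    Y = relu(Y);
    Y = fullyconnect(Y, parameters.fc2.Weights, parameters.fc2.Bias);
    Y = softmax(Y);
end
```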
- List of Functions with dlarray Support
View the list of functions that support dlarray objects.
- Automatic Differentiation Background
Learn how automatic differentiation works.
- Use Automatic Differentiation In Deep Learning Toolbox
How to use automatic differentiation in deep learning.
- Train Network Using Model Function
This example shows how to create and train a deep learning network by using functions rather than a layer graph or a dlnetwork object.
- Update Batch Normalization Statistics Using Model Function
This example shows how to update the network state in a network defined as a function.
- Make Predictions Using Model Function
This example shows how to make predictions using a model function by splitting data into mini-batches.
- Initialize Learnable Parameters for Model Function
Learn how to initialize learnable parameters for custom training loops using a model function.
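For the parameter-initialization topic above, a common pattern is to store learnable parameters as dlarray values in a struct, with weights drawn from a Glorot (uniform) distribution and biases set to zero. The sizes and field names below are illustrative.

```matlab
% Sketch: initialize learnable parameters for a model function.
% Glorot uniform initialization for the weights, zeros for the bias.
numIn = 784; numHidden = 128;            % illustrative layer sizes
bound = sqrt(6 / (numIn + numHidden));   % Glorot uniform bound
parameters.fc1.Weights = dlarray(bound * (2*rand(numHidden, numIn) - 1));
parameters.fc1.Bias = dlarray(zeros(numHidden, 1));
```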
Deep Learning Function Acceleration
- Deep Learning Function Acceleration for Custom Training Loops
Accelerate model functions and model loss functions for custom training loops by caching and reusing traces.
- Accelerate Custom Training Loop Functions
This example shows how to accelerate deep learning custom training loop and prediction functions.
- Check Accelerated Deep Learning Function Outputs
This example shows how to check that the outputs of accelerated functions match the outputs of the underlying function.
- Evaluate Performance of Accelerated Deep Learning Function
This example shows how to evaluate the performance gains of using an accelerated function.
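The acceleration workflow described in the topics above can be sketched in a few lines: wrap the model loss function with dlaccelerate, then evaluate the returned AcceleratedFunction with dlfeval exactly as you would the original. The modelLoss name is a placeholder for your own function.

```matlab
% Sketch: accelerate a model loss function for a custom training loop.
accfun = dlaccelerate(@modelLoss);   % returns an AcceleratedFunction
clearCache(accfun)                   % discard any stale cached traces
% Use the accelerated function in place of the original:
% [loss, gradients] = dlfeval(accfun, parameters, X, T);
```

Caching and reusing traces pays off when the same function is evaluated repeatedly with inputs of the same size and format, as in a training loop.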