crossChannelNormalizationLayer
Channel-wise local response normalization layer
Description
A channel-wise local response (cross-channel) normalization layer carries out channel-wise normalization.
Creation
Syntax
Description
layer = crossChannelNormalizationLayer(windowChannelSize) creates a channel-wise local response normalization layer and sets the WindowChannelSize property.

layer = crossChannelNormalizationLayer(windowChannelSize,Name,Value) sets the optional properties WindowChannelSize, Alpha, Beta, K, and Name using name-value pairs. For example, crossChannelNormalizationLayer(5,'K',1) creates a local response normalization layer for channel-wise normalization with a window size of 5 and K hyperparameter 1. You can specify multiple name-value pairs. Enclose each property name in single quotes.
Properties
Cross-Channel Normalization
WindowChannelSize
— Size of the channel window
positive integer less than or equal to 16
Size of the channel window, which controls the number of channels that are used for the normalization of each element, specified as a positive integer less than or equal to 16.
If WindowChannelSize is even, then the window is asymmetric. The software looks at the previous floor((w-1)/2) channels and the following floor(w/2) channels. For example, if WindowChannelSize is 4, then the layer normalizes each element by its neighbor in the previous channel and by its neighbors in the next two channels.
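The neighbor-count rule above can be sketched in a few lines of plain Python (an illustration only, not part of the MATLAB API):

```python
import math

def window_bounds(window_channel_size):
    """Return (previous, following) neighbor counts for a
    cross-channel normalization window of size w, per the rule
    floor((w-1)/2) previous and floor(w/2) following channels."""
    w = window_channel_size
    return math.floor((w - 1) / 2), math.floor(w / 2)

print(window_bounds(5))  # (2, 2): odd window sizes are symmetric
print(window_bounds(4))  # (1, 2): even window sizes are asymmetric
```

Together with the element's own channel, the two counts always sum to the window size.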
Example: 5
Alpha
— α hyperparameter in normalization
0.0001 (default) | numeric scalar
α hyperparameter in the normalization (the multiplier term), specified as a numeric scalar.
Example: 0.0002
Beta
— β hyperparameter in normalization
0.75 (default) | numeric scalar
β hyperparameter in the normalization, specified as a numeric scalar. The value of Beta must be greater than or equal to 0.01.
Example: 0.8
K
— K hyperparameter in the normalization
2 (default) | numeric scalar
K hyperparameter in the normalization, specified as a numeric scalar. The value of K must be greater than or equal to 1e-05.
Example: 2.5
Layer
Name
— Layer name
""
(default) | character vector | string scalar
NumInputs
— Number of inputs
1
(default)
This property is read-only.
Number of inputs to the layer, returned as 1. This layer accepts a single input only.
Data Types: double
InputNames
— Input names
{'in'}
(default)
This property is read-only.
Input names, returned as {'in'}. This layer accepts a single input only.
Data Types: cell
NumOutputs
— Number of outputs
1
(default)
This property is read-only.
Number of outputs from the layer, returned as 1. This layer has a single output only.
Data Types: double
OutputNames
— Output names
{'out'}
(default)
This property is read-only.
Output names, returned as {'out'}. This layer has a single output only.
Data Types: cell
Examples
Create Local Response Normalization Layer
Create a local response normalization layer for channel-wise normalization, where a window of five channels normalizes each element, and the additive constant for the normalizer is 1.
layer = crossChannelNormalizationLayer(5,K=1)
layer = 
  CrossChannelNormalizationLayer with properties:

                     Name: ''

   Hyperparameters
        WindowChannelSize: 5
                    Alpha: 1.0000e-04
                     Beta: 0.7500
                        K: 1
Include a local response normalization layer in a Layer array.
layers = [ ...
imageInputLayer([28 28 1])
convolution2dLayer(5,20)
reluLayer
crossChannelNormalizationLayer(3)
fullyConnectedLayer(10)
softmaxLayer]
layers = 
  6x1 Layer array with layers:

     1   ''   Image Input                   28x28x1 images with 'zerocenter' normalization
     2   ''   2-D Convolution               20 5x5 convolutions with stride [1 1] and padding [0 0 0 0]
     3   ''   ReLU                          ReLU
     4   ''   Cross Channel Normalization   cross channel normalization with 3 channels per element
     5   ''   Fully Connected               10 fully connected layer
     6   ''   Softmax                       softmax
Algorithms
Cross Channel Normalization Layer
A channel-wise local response (cross-channel) normalization layer carries out channel-wise normalization.
This layer performs a channel-wise local response normalization. It usually follows the ReLU
activation layer. This layer replaces each element with a normalized value it obtains using
the elements from a certain number of neighboring channels (elements in the normalization
window). That is, for each element x in the input, the trainnet function computes a normalized value x̄ using

    x̄ = x / (K + α·ss/windowChannelSize)^β

where K, α, and β are the hyperparameters in the normalization, and ss is the sum of squares of the elements in the normalization window [1]. You must specify the size of the normalization window using the
windowChannelSize
argument of the
crossChannelNormalizationLayer
function. You can also specify the
hyperparameters using the Alpha
, Beta
, and
K
name-value pair arguments.
The previous normalization formula is slightly different from the one presented in [1]. You can obtain the equivalent formula by multiplying the alpha value by the windowChannelSize.
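The normalization above can be sketched numerically in NumPy. This is an illustration of the formula only, not the MATLAB implementation; in particular, it assumes height-by-width-by-channel input and assumes the window is truncated at the channel boundaries, which the reference text does not specify:

```python
import numpy as np

def cross_channel_normalize(x, window_channel_size=5,
                            alpha=1e-4, beta=0.75, k=2.0):
    """Sketch of channel-wise local response normalization.

    x is assumed to have shape (height, width, channels). Each element
    is divided by (K + alpha*ss/windowChannelSize)^beta, where ss is
    the sum of squares over the channel window: floor((w-1)/2) previous
    and floor(w/2) following channels, plus the element's own channel.
    Boundary handling (truncation) is an assumption of this sketch.
    """
    h, w_, c = x.shape
    prev = (window_channel_size - 1) // 2
    foll = window_channel_size // 2
    out = np.empty_like(x, dtype=float)
    for ch in range(c):
        lo, hi = max(0, ch - prev), min(c, ch + foll + 1)
        ss = np.sum(x[:, :, lo:hi] ** 2, axis=2)  # sum of squares in window
        out[:, :, ch] = x[:, :, ch] / (k + alpha * ss / window_channel_size) ** beta
    return out
```

As a sanity check, setting alpha to 0 and beta to 1 reduces the formula to x/K, independent of the window contents.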
Layer Input and Output Formats
Layers in a layer array or layer graph pass data to subsequent layers as formatted dlarray
objects.
The format of a dlarray
object is a string of characters in which each
character describes the corresponding dimension of the data. The formats consist of one or
more of these characters:
"S"
— Spatial"C"
— Channel"B"
— Batch"T"
— Time"U"
— Unspecified
For example, you can describe 2-D image data that is represented as a 4-D array, where the
first two dimensions correspond to the spatial dimensions of the images, the third
dimension corresponds to the channels of the images, and the fourth dimension
corresponds to the batch dimension, as having the format "SSCB"
(spatial, spatial, channel, batch).
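The mapping from format characters to array dimensions can be illustrated with a small plain-Python sketch (not part of the MATLAB API; the shape used here is just an example):

```python
def describe_format(fmt, shape):
    """Pair each dimension of an array shape with the meaning of the
    corresponding dlarray-style format character."""
    labels = {"S": "spatial", "C": "channel", "B": "batch",
              "T": "time", "U": "unspecified"}
    return [(dim, labels[ch]) for ch, dim in zip(fmt, shape)]

# A batch of 128 single-channel 28-by-28 images in "SSCB" format:
print(describe_format("SSCB", (28, 28, 1, 128)))
# [(28, 'spatial'), (28, 'spatial'), (1, 'channel'), (128, 'batch')]
```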
You can interact with these dlarray
objects in automatic differentiation
workflows, such as those for developing a custom layer, using a functionLayer
object, or using the forward
and predict
functions with
dlnetwork
objects.
This table shows the supported input formats of CrossChannelNormalizationLayer
objects and the
corresponding output format. If the software passes the output of the layer to a custom
layer that does not inherit from the nnet.layer.Formattable
class, or a
FunctionLayer
object with the Formattable
property
set to 0
(false
), then the layer receives an
unformatted dlarray
object with dimensions ordered according to the formats
in this table. The formats listed here are only a subset. The layer may support additional
formats such as formats with additional "S"
(spatial) or
"U"
(unspecified) dimensions.
Input Format | Output Format |
---|---|
In dlnetwork
objects, CrossChannelNormalizationLayer
objects also
support these input and output format combinations.
Input Format | Output Format |
---|---|
References
[1] Krizhevsky, A., I. Sutskever, and G. E. Hinton. "ImageNet Classification with Deep Convolutional Neural Networks." Advances in Neural Information Processing Systems. Vol 25, 2012.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.
Version History
Introduced in R2016a