I created a new deep learning network with two inputs, one an image input and the other a sequence input, but the Deep Learning Network Analyzer reports an "Incompatible layer types" error.
lgraph = layerGraph();
tempLayers = [
sequenceInputLayer(1,"Name","sequence")
lstmLayer(128,"Name","lstm")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
imageInputLayer([32 32 1],"Name","imageinput")
convolution2dLayer([3 3],32,"Name","conv_1","Padding","same","Stride",[2 2])
reluLayer("Name","relu_1")
convolution2dLayer([3 3],32,"Name","conv_2","Padding","same","Stride",[2 2])
reluLayer("Name","relu_2")
globalAveragePooling2dLayer("Name","gapool")
flattenLayer("Name","flatten")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
concatenationLayer(2,2,"Name","concat")
softmaxLayer("Name","softmax")
classificationLayer("Name","classoutput")];
lgraph = addLayers(lgraph,tempLayers);
% Clear temporary variables
clear tempLayers;
lgraph = connectLayers(lgraph,"flatten","concat/in2");
lgraph = connectLayers(lgraph,"lstm","concat/in1");
Answers (1)
Malay Agarwal on 14 Feb 2024 (edited 14 Feb 2024)
Hi,
I am assuming that you're using MATLAB R2021b or earlier.
It is not possible to combine a recurrent neural network and a convolutional neural network in MATLAB R2021b or earlier; the "Incompatible layer types" error is due to this limitation.
Starting from MATLAB R2022a, this combination is supported, and the following code works as expected:
lgraph = layerGraph();
tempLayers = [
sequenceInputLayer(1,"Name","sequence")
lstmLayer(128,"Name","lstm","OutputMode","last")]; % output only the last time step
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
imageInputLayer([32 32 1],"Name","imageinput")
convolution2dLayer([3 3],32,"Name","conv_1","Padding","same","Stride",[2 2])
reluLayer("Name","relu_1")
convolution2dLayer([3 3],32,"Name","conv_2","Padding","same","Stride",[2 2])
reluLayer("Name","relu_2")
globalAveragePooling2dLayer("Name","gapool")
flattenLayer("Name","fc")];
lgraph = addLayers(lgraph,tempLayers);
tempLayers = [
concatenationLayer(1,2,"Name","concat") % concatenate along dimension 1 (channel)
softmaxLayer("Name","softmax")
classificationLayer("Name","classoutput")];
lgraph = addLayers(lgraph,tempLayers);
clear tempLayers;
lgraph = connectLayers(lgraph,"fc","concat/in2");
lgraph = connectLayers(lgraph,"lstm","concat/in1");
The original code and the code above differ in the following ways:
- The concatenation layer expects all of its inputs to be in the same format. For example, if one input is height-by-width-by-channel-by-batch, the other inputs must be height-by-width-by-channel-by-batch as well. By default, the LSTM layer outputs the format channel-by-batch-by-time, while the flatten layer outputs channel-by-batch. To make the LSTM layer also output channel-by-batch, I've used the "OutputMode" option and set it to output activations of only the last time step.
- The concatenation layer supports concatenation only along the height, width, or channel dimension. Assuming you want to concatenate along the channel dimension: after changing the "OutputMode" of the LSTM, the channel dimension is the first dimension for both the flatten and concatenation layers. As such, I've changed the concatenation dimension from 2 to 1.
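The effect of "OutputMode","last" on the output format can also be checked in isolation with a small dlnetwork (a minimal sketch, assuming MATLAB R2022a or later; the layer sizes and input shape are illustrative):

```matlab
% Sketch: inspect the LSTM output format when "OutputMode" is "last".
layers = [
    sequenceInputLayer(1)
    lstmLayer(128,"OutputMode","last")];
net = dlnetwork(layers);

% One-channel input, 4 observations, 10 time steps ("CBT" format).
X = dlarray(rand(1,4,10),"CBT");
Y = forward(net,X);

dims(Y) % "CB": the time dimension is gone, matching the flatten layer's format
```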
Please find the output of “analyzeNetwork” attached. Notice how there are no errors.
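To reproduce the check yourself, run the analyzer on the assembled layer graph (a minimal sketch, assuming the corrected code above has already been run so that lgraph exists in the workspace):

```matlab
% Open the Deep Learning Network Analyzer on the layer graph built above.
% With the two changes in place, it should report no errors.
analyzeNetwork(lgraph);
```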
More information regarding the input/output behaviour of the concatenation layer can be found here: https://www.mathworks.com/help/deeplearning/ref/nnet.cnn.layer.concatenationlayer.html#mw_12612bec-4413-46d8-a7bb-655b57081707.