LSTM Layer Architecture: LSTM units and sequence length
I am a researcher in human motion analysis. In my work, I want to build an LSTM neural network. After some experimenting, my network works, but I want to check a few things about the function lstmLayer(outputSize).
My data is a sequence of 1000 frames. I think an LSTM is not appropriate for sequences longer than 500 steps, so I set the outputSize to 200-400. But I am not sure whether "outputSize" is the same as the "time step" in MATLAB. I have searched many pages but could not find any information about it.
Also, if the "time step" is shorter than the length of the input sequence, does the LSTM architecture act like a window that slides through the data? Or is the whole sequence fed into the LSTM units (one LSTM unit receiving one frame each)?
Thanks for any answers or discussion!!
Answers (2)
Maria Duarte Rosa
on 3 Jan 2018
Hi Otto,
The outputSize of an LSTM layer is not directly related to a time window that slides through the data. The entire sequence runs through the LSTM unit. The outputSize is more of a complexity parameter: a larger outputSize allows the network to learn more complex recurrent patterns from the data, while being more prone to overfitting, whereas a smaller outputSize will not be able to learn very complex patterns but is less prone to overfit. There is no hard rule for setting the outputSize parameter, as this is highly problem specific. Please see the MATLAB documentation for more information on how to use LSTM networks.
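To make the distinction concrete, here is a minimal sketch (the layer sizes are hypothetical, chosen only for illustration): the full 1000-frame sequence is passed to the network as one sequence, and outputSize only sets the number of hidden units.

numFeatures    = 3;    % channels per frame (assumption for illustration)
numHiddenUnits = 200;  % the "outputSize" being discussed
numClasses     = 5;    % hypothetical number of motion classes

layers = [
    sequenceInputLayer(numFeatures)                % accepts sequences of any length
    lstmLayer(numHiddenUnits,'OutputMode','last')  % the whole sequence flows through
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];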
Y Q
on 5 Oct 2018
Edited: Y Q
on 5 Oct 2018
I have the same confusion. My understanding is that the outputSize is the dimension of the output unit and the cell state. For example, if the input sequences have dimension 12*50 (50 being the number of time steps) and outputSize is set to 10, then the hidden unit and the cell state each have dimension 10*1, which has nothing to do with the dimension of the input sequence. I have studied all the LiveScript examples and arrived at the above conclusion. Is it correct?
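For what it is worth, the learnable sizes documented for lstmLayer seem consistent with this reading; here is a quick way to inspect them for the 12*50 example:

% For lstmLayer(10) fed 12-channel input, the learnables are:
%   InputWeights:     40 x 12   (4*outputSize x numFeatures)
%   RecurrentWeights: 40 x 10   (4*outputSize x outputSize)
%   Bias:             40 x 1
% None of these depend on the 50 time steps, so the hidden and cell
% states are 10 x 1 per observation, matching the conclusion above.
layers = [
    sequenceInputLayer(12)   % 12 features per time step
    lstmLayer(10)];          % outputSize = 10
analyzeNetwork(layers)       % inspect the layer sizes interactively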
What I am still confused about is the parameter 'OutputMode: sequence/last'. If the LSTM layer is followed by a fully connected (FC) layer, the number of input neurons in the FC layer equals the outputSize set in the LSTM layer. Changing 'OutputMode' between 'sequence' and 'last' changes neither the dimensions of the hidden unit and cell state nor the number of weights in the FC layer. My question is: when 'OutputMode' is set to 'sequence', what exactly is passed from the LSTM layer to the FC layer? The whole sequence of hidden units, with dimension 10*50 (from the example above)? If so, how can the FC layer handle this 10*50 data with only 10 input neurons?
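For concreteness, this is the setup I am asking about (sizes match the example above and are otherwise hypothetical). Per the fullyConnectedLayer documentation, for sequence input the FC layer acts independently on each time step, which would mean the 10 x 50 LSTM output becomes a numClasses x 50 FC output with the same 10-input weights reused at every step:

numFeatures = 12; numHiddenUnits = 10; numClasses = 3;   % hypothetical sizes
layers = [
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,'OutputMode','sequence')  % emits 10 x 50 per sequence
    fullyConnectedLayer(numClasses)  % same 10-input weights applied at each of the 50 steps
    softmaxLayer
    classificationLayer];            % sequence-to-sequence classification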
2 Comments
Ning Wang
on 1 Aug 2019
I have exactly the same question. Would you please share some insight if you have solved it?