
Multivariate LSTM data preparation

Julia Kolaszynska on 17 Jun 2024
Answered: Taylor on 18 Jun 2024
Hello,
I am training an LSTM network to predict the future position of an object. I have 3000 simulations of objects in X, Y, Z and time. The training data passed to the network is a 3000x1 cell array, where each cell holds a different number of time steps (from 2000 to 5000) and 4 dimensions. I'm using mu/sigma (z-score) normalisation.
My input and output training sequences are, respectively:
Xtrain{n}=Xtrain_sequence(1:end-1,:);
Ytrain{n}=Xtrain_sequence(2:end,:);
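For context, my preparation loop looks roughly like this (here `simulations` is just a stand-in name for my raw cell array):
numSim = numel(simulations);        % 3000 simulations, each T-by-4 (X,Y,Z,t)
Xtrain = cell(numSim,1);
Ytrain = cell(numSim,1);
for n = 1:numSim
    seq = simulations{n};           % T-by-4 matrix, T between 2000 and 5000
    mu = mean(seq,1);
    sigma = std(seq,0,1);
    seq = (seq - mu) ./ sigma;      % per-sequence z-score normalisation
    Xtrain{n} = seq(1:end-1,:);     % inputs: steps 1..T-1
    Ytrain{n} = seq(2:end,:);       % targets: steps 2..T (next-step prediction)
end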
The network has a standard two-LSTM-layer design:
layers = [
    sequenceInputLayer(4)       % 4 channels: X, Y, Z, time
    lstmLayer(64)
    dropoutLayer(0.5)
    lstmLayer(128)
    dropoutLayer(0.5)
    fullyConnectedLayer(4)];    % predict the next 4-channel step
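I train with trainnet roughly as follows (the solver settings here are placeholders, not tuned values; trainnet expects the T-by-4 sequences sliced above):
options = trainingOptions("adam", ...
    MaxEpochs=50, ...
    InitialLearnRate=1e-3, ...
    Shuffle="every-epoch", ...
    Plots="training-progress");
net = trainnet(Xtrain, Ytrain, layers, "mse", options);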
While I get good results for open-loop forecasting, my goal is to obtain the whole sequence while passing only a small number of time steps. For example, I want to pass 32 time steps of 4 dimensions (32x4) and obtain the rest of the sequence, so (1968x4). To do that I use closed-loop forecasting, but the results I get are not the results I am looking for.
My question is: how should I be passing the data to be able to predict a whole sequence from 32 time steps?
I have been trying to apply a sliding-window technique to my data, but then I am not sure how to pass that data into the network, as every one of the 3000 sequences would be divided into subsequences of length 32.
Is there another way I can structure the prediction to get the results I'm looking for?
Thank you for your help,
Julia
  1 Comment
Umar on 18 Jun 2024
Hi Julia,
To predict a whole sequence from a short input sequence in an LSTM network, you can employ techniques like sequence-to-sequence models or teacher forcing. One approach is to use a sliding window to divide each sequence into subsequences of length 32 and pass them sequentially to the network. This way, the network can learn dependencies within the shorter sequences to predict the entire sequence accurately.
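A minimal sketch of that split, assuming the T-by-4 normalised cell arrays from your post (winLen and stride are illustrative values to tune):
winLen = 32;
stride = 16;                          % 16-step overlap between windows; adjust as needed
XWin = {}; YWin = {};
for n = 1:numel(Xtrain)
    seq = Xtrain{n};                  % T-by-4, already normalised
    T = size(seq,1);
    for s = 1:stride:(T - winLen)
        XWin{end+1,1} = seq(s:s+winLen-1, :);   % 32-by-4 input window
        YWin{end+1,1} = seq(s+1:s+winLen, :);   % next-step targets for that window
    end
end
% XWin/YWin can then be passed to training in place of the full sequences.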
Another strategy is to implement a sequence-to-sequence model where the encoder processes the input sequence (32 time steps) and the decoder generates the output sequence (remaining time steps). This architecture allows the model to capture long-term dependencies and predict the entire sequence effectively.
Additionally, consider adjusting the network architecture, training parameters, and data preprocessing techniques to enhance the model's performance. Experiment with different hyperparameters, layer configurations, and normalization methods to optimize the network for sequence prediction tasks.
By exploring these strategies and fine-tuning your LSTM network, you can improve the accuracy of whole sequence predictions from a limited number of time steps.


Answers (1)

Taylor on 18 Jun 2024
I think this example on time series forecasting is what you're looking to replicate.
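The closed-loop section of that example reduces to roughly the pattern below, adapted to your 32-step seed. Names like Xtest and the step counts are placeholders for your own data, and this assumes a dlnetwork trained with trainnet:
net = resetState(net);
seed = Xtest{1}(1:32,:);                  % 32-by-4 warm-up window
[Z, state] = predict(net, seed);          % open-loop pass over the seed
net.State = state;
numPredict = 1968;                        % remaining steps to generate
Y = zeros(numPredict, 4);
Y(1,:) = Z(end,:);                        % last seed output = first forecast step
for t = 2:numPredict
    [Z, state] = predict(net, Y(t-1,:));  % feed the previous prediction back in
    net.State = state;
    Y(t,:) = Z;
end
The key point is carrying the network State forward between calls, so each prediction conditions on everything generated so far.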
