
How to merge two networks trained on different datasets?

28 views (last 30 days)
Hi
I am training an LSTM network for time series prediction.
My training data size is 160 and the number of channels is 19.
Training this data on a single machine takes a lot of time, so I divided the dataset to train on different machines.
This way I can optimize the training hyperparameters quickly rather than waiting a long time.
My question is: when I train the LSTM network on different machines with different datasets, is there a way to merge these trained networks?
If not, what is the ideal method to optimize the training time and process?
-Chetan
  3 Comments
Chetan Thakur on 29 Feb 2024
Thanks for the comment.
I do not wish to tune the hyperparameters. My question was how I can train the LSTM network on two different machines on two datasets (e.g. the data divided in half) and then combine the two trained networks into one, as if the network had been trained on the entire dataset.
I asked this because my machine is not powerful enough and I want to take advantage of other machines with GPUs to train my network.
Each machine has only one GPU.
As you said, I do use Parallel Computing Toolbox and Experiment Manager, but this does not speed up the training process.
I hope you understand my requirement. If you need additional information, please feel free to ask.
Looking forward to hearing from you.
-Chetan
Hiro Yoshino on 29 Feb 2024
A neural network is just a big chunk of network parameters on which linear and non-linear calculations take place. So if you want to, you can add the parameter values together and divide them by the number of models, but I wonder whether this "network" would work as you expect.
This is an example of how to read the parameter values:
net.Layers(2).Bias
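For illustration only, here is a rough sketch of that parameter averaging, assuming both models are dlnetwork objects with exactly the same architecture (net1 and net2 are hypothetical variable names, not from the original post):
% Rough sketch: average every learnable parameter of two identically
% structured dlnetwork objects. dlupdate pairs each learnable
% parameter of net1 with the matching entry in net2.Learnables and
% applies the averaging function.
avgNet = dlupdate(@(a,b) (a + b)/2, net1, net2.Learnables);
As noted above, there is no guarantee that the averaged network behaves like a network trained on the entire dataset.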


Answers (1)

Jayanti on 19 Sep 2024 at 3:59
Edited: Jayanti on 19 Sep 2024 at 4:59
There is a technique called ensemble learning which allows you to combine multiple models. You can train several models on the same objective with similar datasets and then combine their outputs using an ensemble technique.
The idea here is to train the two models and then use strategies like voting (for classification) or averaging predictions (for regression).
In this case, you can apply ensemble learning to the two separately trained LSTM models; a minimal sketch is shown below.
You can also use the concepts of bagging and boosting to leverage the benefits of ensemble learning.
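For illustration, a minimal averaging-ensemble sketch for a forecasting/regression task, assuming net1 and net2 are the two trained LSTM networks (e.g. returned by trainNetwork) and XTest is a single 19-by-numTimeSteps test sequence (these variable names are illustrative, not from the original question):
% Minimal averaging-ensemble sketch: run both models on the same
% input and average their predictions.
YPred1 = predict(net1, XTest);           % predictions from model 1
YPred2 = predict(net2, XTest);           % predictions from model 2
YPredEnsemble = (YPred1 + YPred2) / 2;   % averaged (ensemble) prediction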
You can use the ensemble learning framework available in MATLAB. I have attached the MathWorks documentation link for your reference.
Let me know if you have any further queries.
