train neural net with prior solution

Dilip Madan on 28 Nov 2024
Edited: Matt J on 28 Nov 2024
Net training finished after 10000 epochs. I need to restart training from where it finished.

Answers (1)

Matt J on 28 Nov 2024
Edited: Matt J on 28 Nov 2024
Your post is under-detailed and does not tell us how the network and training are implemented. If I assume you are using trainnet, you can simply run the training again, giving your pre-existing, partially trained network as the second input argument.
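A minimal sketch of that idea, assuming a training datastore dsTrain, an "mse" loss, and adam options; these names and settings are placeholders rather than anything taken from your post:
% First run, starting from the freshly initialized network.
options = trainingOptions("adam", MaxEpochs=10000, Plots="training-progress");
netTrained = trainnet(dsTrain, net, "mse", options);
% Resume: call trainnet again, passing the partially trained network as the
% second input argument so its current weights are the starting point.
netResumed = trainnet(dsTrain, netTrained, "mse", options);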
  4 Comments
Dilip Madan on 28 Nov 2024
When I do this, the initial Performance is much larger than the final Performance of the previous net. Is this a concern, or should one not worry about it? Many thanks.
Matt J on 28 Nov 2024
Edited: Matt J on 28 Nov 2024
This method of resuming training is not optimal. The optimal method is to use checkpoint saves, as explained at the link I gave you. But since you did not set checkpoints, the training algorithm does not have everything it needs to resume gracefully.
Even though you have the network weights and biases, there is no record of prior algorithm state variables such as the learning rate schedule and momentum. The algorithm will therefore need some time to reconverge. You may still save iterations compared to starting from scratch, but next time you should use checkpoints. Or, consider moving to the Deep Learning Toolbox, which gives you finer control of these algorithm variables.
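For the next run, a hedged sketch of the checkpoint approach with trainnet; the folder name, frequency, and the assumption that each checkpoint .mat file contains a variable named net are placeholders, not details from the original answer:
% Save a checkpoint network periodically so a later session can resume.
options = trainingOptions("adam", MaxEpochs=10000, ...
    CheckpointPath="checkpoints", CheckpointFrequency=100);
netTrained = trainnet(dsTrain, net, "mse", options);
% Later: load the most recent checkpoint and continue training from it.
files = dir(fullfile("checkpoints", "*.mat"));
[~, idx] = max([files.datenum]);
ckpt = load(fullfile(files(idx).folder, files(idx).name));  % assumed to hold "net"
netResumed = trainnet(dsTrain, ckpt.net, "mse", options);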
