QTable reset when using train

Hi,
I am using the MATLAB Reinforcement Learning Toolbox to train an rlQAgent.
The issue I am facing is that the corresponding Q-table, i.e., the output of the command getLearnableParameters(getCritic(qAgent)), is reset each time the train command is used.
Is it possible to avoid this reset, so that a previously trained agent can be trained further?
Thank you
Corrado

Accepted Answer

Emmanouil Tzorakoleftherakis
Edited: Emmanouil Tzorakoleftherakis on 20 May 2020
If you stop training, you should be able to continue from where you left off. I called 'train' on the basic grid world example a couple of times in a row, and the output of 'getLearnableParameters(getCritic(qAgent))' was different after each call, which indicates the agent kept learning rather than restarting from scratch. You can also save the trained agent and reload it later, to make sure you don't accidentally delete it.
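For example, something along these lines (the file name is just illustrative):
save('trainedQAgent.mat', 'qAgent');   % write the trained agent to disk
% ... later, reload it and keep training (assumes env and trainOpts from your session)
load('trainedQAgent.mat', 'qAgent');
train(qAgent, env, trainOpts);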
Update:
There is an L2 regularization term added to the loss, which causes Q-table entries to change slightly even when they are not directly updated during training. To avoid this, set the regularization factor of the critic representation to zero:
qRepresentation.Options.L2RegularizationFactor = 0;
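For context, here is a minimal sketch of the whole workflow on the predefined basic grid world (the episode count is arbitrary, and the representation setup follows the R2020a-era API):
% Build a table-based critic for the basic grid world environment
env = rlPredefinedEnv('BasicGridWorld');
qTable = rlTable(getObservationInfo(env), getActionInfo(env));
qRepresentation = rlQValueRepresentation(qTable, ...
    getObservationInfo(env), getActionInfo(env));
qRepresentation.Options.L2RegularizationFactor = 0;  % keep unvisited entries from drifting
qAgent = rlQAgent(qRepresentation);

% Train once, inspect the Q-table, then resume training
trainOpts = rlTrainingOptions('MaxEpisodes', 50);
train(qAgent, env, trainOpts);
before = getLearnableParameters(getCritic(qAgent));
train(qAgent, env, trainOpts);   % second call continues from the previous Q-table
after = getLearnableParameters(getCritic(qAgent));
Comparing 'before' and 'after' should show the entries evolving across calls instead of being reset.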
  5 Comments
Emmanouil Tzorakoleftherakis
Updated my answer above with a solution - hope that helps.
Corrado Possieri on 20 May 2020
Thank you Emmanouil, this solved the issue.
