Why does the SAC training stop at the first episode? What can trigger it?
I am training an SAC agent for a path-following mobile robot in MATLAB with two different PI controllers, one for linear velocity control and the other for angular velocity control. I connected the Kp and Ki parameters of both controllers to the SAC agent. I defined the reward as Reward = -0.1*(abs(Error_Linear)+abs(Error_Angular)) and the stopping condition as Is_done = (abs(Error_Linear)+abs(Error_Angular)) < 1. I do not understand what triggers the training process to stop at the first episode.
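For context, the reward and IsDone signals are computed roughly like the sketch below (the function name and wrapper are only illustrative; the formulas are the ones from my model):

```matlab
function [Reward, Is_done] = computeRewardIsDone(Error_Linear, Error_Angular)
% Combined tracking error from the linear- and angular-velocity loops
totalError = abs(Error_Linear) + abs(Error_Angular);

Reward  = -0.1 * totalError;   % penalize the combined tracking error
Is_done = totalError < 1;      % episode terminates when the combined error drops below 1
end
```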
Answers (1)
Ayush Aniket
on 14 Nov 2024
Edited: Ayush Aniket
on 14 Nov 2024
Hi Renaldo,
The reason the agent training stops after the first episode could be the training termination condition specified via the StopTrainingCriteria argument of the rlTrainingOptions function. Refer to the documentation for rlTrainingOptions to read about this argument and the associated StopTrainingValue setting.
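For reference, here is a minimal sketch of how that criterion is usually configured (the numeric values are placeholders, not a recommendation). Training stops as soon as the chosen criterion reaches StopTrainingValue, which can happen right after the first episode if the threshold is already satisfied:

```matlab
% Sketch of training options: adjust StopTrainingCriteria/StopTrainingValue
% so the first episode cannot already satisfy the stopping condition.
trainOpts = rlTrainingOptions( ...
    MaxEpisodes=1000, ...
    MaxStepsPerEpisode=500, ...
    StopTrainingCriteria="AverageReward", ...   % criterion checked after every episode
    StopTrainingValue=-5, ...                   % training stops once this value is reached
    ScoreAveragingWindowLength=20);

trainingStats = train(agent, env, trainOpts);   % agent and env created earlier in your script
```

Note that the default criterion ("AverageSteps" with a default StopTrainingValue) can also be met very early if your episodes are long, so it is worth checking what these two options are set to in your script.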
One similar example can be found here: https://www.mathworks.com/matlabcentral/answers/1779640-reinforcement-learning-agent-stops-training-unexpectedly
If this is not the issue, please share the script you are using.