Reinforcement Learning -- Rocket Lander
Averill Law
on 19 May 2020
Commented: Emmanouil Tzorakoleftherakis
on 3 Jun 2020
Th "Rocket Lander" example does not converge with the stated hyperparameters. Someone was helpful enough to give me the following values:
learning rate = 1e-4
clip factor = 0.1
mini-batch size = 128
Although these values work better, the algorithm still does not converge. After about 14,000 episodes there are many successful landings, but they are interspersed with violent crash landings. Does anybody at MathWorks, or anyone else, have any suggestions? Thank you.
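In case it helps reproduce the setup, here is a minimal sketch of how these three values might be plugged into the PPO agent construction (assuming R2020a Reinforcement Learning Toolbox syntax; actorNet, criticNet, obsInfo, and actInfo stand in for the networks and environment specs defined in the Rocket Lander example, and 'observation' is an assumed input layer name):
% Sketch only: apply the suggested learning rate, clip factor, and
% mini-batch size when building the PPO agent.
repOpts = rlRepresentationOptions('LearnRate',1e-4);

critic = rlValueRepresentation(criticNet,obsInfo, ...
    'Observation',{'observation'},repOpts);
actor  = rlStochasticActorRepresentation(actorNet,obsInfo,actInfo, ...
    'Observation',{'observation'},repOpts);

agentOpts = rlPPOAgentOptions('ClipFactor',0.1,'MiniBatchSize',128);
agent = rlPPOAgent(actor,critic,agentOpts);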
Averill M. Law
7 Comments
Emmanouil Tzorakoleftherakis
on 27 May 2020
Hi Averill,
I am not sure why you are not getting convergence, but comparing the screenshot you sent with the one in the live script I sent, you can clearly see that the episode rewards are on a different scale (~7000 vs. ~300-400). I would suggest starting fresh: delete the temp files, then download and run the example I attached below. You shouldn't need to change the clip factor or any other hyperparameter in that example.
We made changes to the example because some of the latest underlying optimizations changed the numerical behavior of training, which is why it stopped converging; the changes give a more robust result. The reward is typically the most important thing to get right in order to obtain the desired behavior, and it is usually the first thing to retune when you don't get it.
In terms of epsilon, I think you may be confusing epsilon-greedy exploration, which is used e.g. in DQN and Q-learning, with the clip factor epsilon in PPO (please correct me if I am wrong). The former does indeed change with time in the current implementation, but the latter is fixed. They share the same letter, which can be confusing, but the two hyperparameters serve very different purposes. PPO does not use an "exploration epsilon" because it handles exploration through the stochastic nature of the actor as well as through an additional entropy term in the objective. PPO uses the clip factor epsilon to limit how much the policy, and hence the network weights, can change in a single update.
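To make the role of the clip factor concrete, here is an illustrative snippet of the standard PPO clipped surrogate objective (this is not the toolbox's internal code, and the ratio and advantage values below are made up for the illustration):
% Illustration only: the clipped surrogate objective that the clip
% factor epsilon controls in PPO.
r = [0.8; 1.05; 1.3];      % example probability ratios piNew(a|s)/piOld(a|s)
A = [1.0; -0.5; 2.0];      % example advantage estimates
epsilon  = 0.1;            % the clip factor
rClipped = min(max(r, 1 - epsilon), 1 + epsilon);  % clip r to [1-eps, 1+eps]
objective = mean(min(r .* A, rClipped .* A));      % surrogate PPO maximizes
With epsilon = 0.1 the ratio is confined to [0.9, 1.1], so a single mini-batch update can only move the policy a small amount, no matter how large the raw gradient step would have been.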
Hope that helps.
Accepted Answer
Emmanouil Tzorakoleftherakis
on 20 May 2020
Hi Averill,
Here is a version that converges in ~18-20k episodes; thank you for pointing out that this example was not converging properly. This version will also be included in the next R2020a update in a few weeks. We changed some of the hyperparameters as well as the reward signal.
For the stochastic grid world, I don't think we have a published example (if I recall correctly). If you used the basic grid world example as a reference, you will likely need to make some changes there as well.
Hope that helps.