MATLAB Answers

Reinforcement Learning -- Rocket Lander

Th "Rocket Lander" example does not converge with the stated hyperparameters. Someone was helpful enough to give me the following values:
learning rate = 1e-4
clip factor = 0.1
mini-batch size = 128
Although these values work better, the algorithm still does not converge. After about 14,000 episodes there are many successful landings, but they are interspersed with violent crash landings. For reference, the way I applied these values is sketched below. Does anybody at MathWorks, or elsewhere, have any suggestions? Thank you.
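Here is roughly how I applied these values (a sketch assuming the documented rlRepresentationOptions and rlPPOAgentOptions name-value pairs; the variable names are mine):

criticOpts = rlRepresentationOptions('LearnRate',1e-4);  % learning rate = 1e-4
actorOpts  = rlRepresentationOptions('LearnRate',1e-4);
agentOpts  = rlPPOAgentOptions('ClipFactor',0.1, ...     % clip factor = 0.1
    'MiniBatchSize',128);                                % mini-batch size = 128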
Averill M. Law

  7 Comments

Averill Law on 26 May 2020
Hi Emmanouil,
I could not get your model to work, which is probably the result of me doing something wrong. I have instead included results for the Rocket Lander model in R2020a. With a clip rate of 0.125 it converged in 13,065 episodes, which was the best case. It did not converge for a clip rate of 0.02, and I have enclosed the corresponding plot. Why are you changing the reward structure and hyperparameters for this model?
You can see that convergence is very sensitive to the value of the clip rate; I have been sweeping it by hand, roughly as sketched below. Do you have any plans to add a "formal" mechanism for performing hyperparameter tuning in the next release? Thank you.
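In case it is useful, this is approximately the manual sweep I have been running (the variable names are illustrative; actor, critic, env, and trainOpts are set up as in the shipped example):

clipRates = [0.02 0.1 0.125 0.2];
stats = cell(size(clipRates));
for k = 1:numel(clipRates)
    agentOpts.ClipFactor = clipRates(k);          % vary only the clip factor
    agent = rlPPOAgent(actor,critic,agentOpts);   % fresh agent per setting
    stats{k} = train(agent,env,trainOpts);        % inspect stats{k}.EpisodeReward
end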
Averill M. Law
Averill Law on 26 May 2020
Hi Emmanouil,
Presumably, the value of epsilon is decayed during an episode according to the formula in the documentation. Is it decayed after each state transition? What happens to epsilon at the beginning of a new episode? Thank you very much for your assistance.
Best regards,
Averill M. Law
Emmanouil Tzorakoleftherakis
Hi Averill,
I am not sure why you are not getting convergence, but comparing the screenshot you sent with the one in the live script I sent, you can clearly see that the episode rewards are on a different scale (~7000 vs. ~300-400). I would suggest starting fresh: delete the temp files, then download and run the example I sent below. You shouldn't need to change the clip factor or any other hyperparameter in that example.
The reason we made changes to the example is that some recent underlying optimizations changed the numerical behavior of training (which is why the example stopped converging), so we made these changes to get a more robust result. The reward is typically the most important thing to get right in order to obtain the desired behavior, and it is usually the first thing that needs retuning when you don't see it.
In terms of epsilon, I think you may be confusing epsilon-greedy exploration, which is used e.g. in DQN and Q-learning, with the clip-factor epsilon in PPO (please correct me if I am wrong). The former does indeed change over time in the current implementation, but the latter is fixed. They share the same letter, so it can be confusing, but the two hyperparameters serve very different purposes. PPO does not use the "exploration epsilon" because it handles exploration through the stochastic nature of the actor, as well as through an additional entropy term in the objective. PPO uses the "clip factor epsilon" to limit how much the neural network weights can change on each update.
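To make the distinction concrete, here is a sketch of where each epsilon lives in the toolbox API (property names per the R2020a documentation; the numeric values are placeholders):

% PPO: the clip factor is fixed for the entire training run.
ppoOpts = rlPPOAgentOptions('ClipFactor',0.2);

% DQN/Q-learning: epsilon-greedy exploration decays over time.
qOpts = rlQAgentOptions;
qOpts.EpsilonGreedyExploration.Epsilon      = 1.0;    % initial value
qOpts.EpsilonGreedyExploration.EpsilonMin   = 0.01;   % lower bound
qOpts.EpsilonGreedyExploration.EpsilonDecay = 0.005;  % per-step decay rate
% After each agent step: Epsilon = Epsilon*(1 - EpsilonDecay) until EpsilonMin
% is reached; as I understand the current implementation, the decay is not
% reset at episode boundaries.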
Hope that helps.


Accepted Answer

Emmanouil Tzorakoleftherakis
Hi Averill,
Here is a version that converges in ~18-20k episodes; thank you for pointing out that this example was not converging properly. This version will also be included in the next R2020a update in a few weeks. We changed some of the hyperparameters as well as the reward signal.
For the stochastic grid world, I don't think we have a published example (if I recall correctly). If you used the basic grid world example as a reference, you will likely need to make some changes there as well; a rough starting point is sketched below.
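If it helps, here is a rough starting point (createGridWorld, rlMDPEnv, and the transition matrix T are documented toolbox features; the grid size and probabilities below are illustrative only):

GW = createGridWorld(5,5);       % deterministic grid world model
GW.TerminalStates = '[5,5]';
% For a stochastic version, split the probability mass in the transition
% matrix T(s,s',a) between the intended move and a "slip", for example:
%   GW.T(s,sIntended,a) = 0.8;  GW.T(s,sSlip,a) = 0.2;
env = rlMDPEnv(GW);              % wrap the model as an RL environment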
Hope that helps.

  0 Comments


More Answers (2)

Averill Law on 22 May 2020
Hi Emmanouil,
You are absolutely right about the "Waterfall Grid World" examples: there is a basic discussion but no complete programs. I got the deterministic version to converge, but not the stochastic version.
I look forward to hearing from you further on the "Rocket Lander" example, as per my comments earlier today. Using a clip rate of 0.125, I got convergence in 13,065 episodes. Thank you.
Averill M. Law

  0 Comments



Averill Law on 1 Jun 2020
Hi Emmanouil,
Your new version of the "Rocket Lander" example does not work on my computer. At line 25 I get the error message:
Unrecognized function or variable 'RLValueRepresentation'
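For what it is worth, the documented constructor appears to be lower-case (rlValueRepresentation, introduced in R2020a), so perhaps a casing or release mismatch is the cause. A minimal call would look something like this (the network and option names are my guesses, not the actual line 25):

critic = rlValueRepresentation(criticNetwork,obsInfo, ...
    'Observation',{'observation'},criticOpts);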
I do not know how to delete temp files.
I was, in fact, asking about the epsilon of an epsilon-greedy policy when using Q-learning. As episode 1 ends and episode 2 begins, does the value of epsilon continue to decay? And is it correct that the state S is NOT reinitialized at this point?
Thank you very much for your assistance.
Averill M. Law

  7 Comments

Emmanouil Tzorakoleftherakis
Glad to help. I sent an invite for tomorrow to the email address found on your webpage.
Thank you,
Emmanouil
Averill Law on 3 Jun 2020
Hi Emmanouil,
If I understand correctly, you will call me at 520-795-6265 on Thursday at 11 AM Arizona time.
I'm totally amazed that you want your whole team to discuss the convergence of the Rocket Lander example with me. Thank you.
Averill M. Law
520-795-6265
Emmanouil Tzorakoleftherakis
Of course. Talk to you tomorrow,
Emmanouil

