2 out of 7 Observations Defined in MATLAB DDPG Reinforcement Learning Environment. Are the rest given random values?

After reading up on Deep Deterministic Policy Gradients, I found this example in MATLAB:
My question is the following: in DDPG, we feed the observations into the actor to get our actions. The MATLAB environment has seven observations: x, y, dx, dy, sin(theta), cos(theta), and dtheta. However, only x and y are assigned at the beginning. Does that mean the rest are given random values before being passed to the critic network? If my understanding is wrong, could someone please explain what is occurring in this model? Thank you.

Accepted Answer

Emmanouil Tzorakoleftherakis
Hello,
I am assuming you are referring to the initialization of x and y inside the "flyingRobotResetFcn" function. If you are using a Simulink model as your environment (as in this case), there is no need to initialize any of the observations yourself: the initial conditions are determined directly by the values in your Simulink blocks. However, it is good practice to change the initial conditions at every episode so that the agent gets exposed to different scenarios. Reinforcement Learning Toolbox lets you do that through the reset function mechanism. So what is happening here is that we use the reset function to change x0 and y0, and leave the remaining observations at the values determined in the Simulink model.
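For reference, here is a minimal sketch of how such a reset function can be wired up. The names mdl, agentBlk, obsInfo, and actInfo are assumed to be defined as in the example, and the specific randomization of x0 and y0 (a point on a circle) is purely illustrative, not the shipped values:

% Attach a reset function to the Simulink environment so that every
% episode starts from a different initial position.
env = rlSimulinkEnv(mdl, agentBlk, obsInfo, actInfo);
env.ResetFcn = @flyingRobotResetFcn;

function in = flyingRobotResetFcn(in)
% "in" is a Simulink.SimulationInput object for the upcoming episode.
% Only x0 and y0 are randomized here; dx, dy, theta, and dtheta keep
% the initial conditions set inside the Simulink integrator blocks.
    t  = 2*pi*rand;                           % random angle (illustrative)
    in = setVariable(in, 'x0', 15*cos(t));    % 15 is an illustrative radius
    in = setVariable(in, 'y0', 15*sin(t));
end

Every call to train or sim then routes each episode through this function first, so the agent sees a new (x0, y0) each time while the other five observations start from the model's defaults.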
Hope that helps.


Release

R2020a
