I saved DDPG agents during training using the save-agent option:
trainOpts = rlTrainingOptions('SaveAgentCriteria','EpisodeReward','SaveAgentValue',3000);
During training, a number of agents were saved whose episode reward exceeded 3000. However, when I simulate the exact same saved agent with:
simOptions = rlSimulationOptions('MaxSteps',maxSteps);
experience = sim(env,saved_agent,simOptions);
I do not get the same response that I got during training. My exploration noise variance is 0.5 and my variance decay rate is 1e-4. How can I replicate the behavior that I observed during training using the same agent?
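For reference, the exploration noise was configured roughly as follows (a minimal sketch, assuming the default Ornstein-Uhlenbeck noise model; the variable names agentOpts, actor, and critic are placeholders for my actual setup):

% Sketch of the DDPG agent options used during training
agentOpts = rlDDPGAgentOptions;
agentOpts.NoiseOptions.Variance = 0.5;           % exploration noise variance
agentOpts.NoiseOptions.VarianceDecayRate = 1e-4; % per-step variance decay
agent = rlDDPGAgent(actor,critic,agentOpts);     % actor and critic defined elsewhere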