PPO agent applied to ACC model
I'm applying the PPO algorithm to the ACC model from this DDPG example: Train DDPG Agent for Adaptive Cruise Control.
However, training failed at the 50th episode with this error:
An error occurred while simulating "rlACCMdl" with the agent "agent".
[varargout{1},varargout{2}] = simWithPolicy(this.Env,this.Agent,simOpts);
[varargout{1:nargout}] = runImpl(this);
[varargout{1:nargout}] = run(task);
[this.Outputs{1:getNumOutputs(this)}] = internal_run(this);
runDirect(this);
runScalarTask(task);
run(seriestaskspec);
run(trainer);
train(this);
TrainingStatistics = run(trainMgr);
Caused by:
Invalid input argument type or size such as observation, reward, isdone or loggedSignals.
Standard deviation must be nonnegative. Ensure your representation always outputs nonnegative values for outputs that correspond to the standard deviation.
Can anybody help me solve this problem? Thanks for your help!
Answers (1)
Emmanouil Tzorakoleftherakis
on 3 Sep 2020
Hello,
Can you make sure that you set up your actor following a structure similar to this one? It seems that your variance path is not set up properly and outputs negative values.
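As a rough illustration, the usual fix is to end the standard-deviation path of the actor network with a softplusLayer, which maps any real input to a nonnegative value. The sketch below follows the pattern from the Reinforcement Learning Toolbox documentation for continuous Gaussian actors; the layer sizes and the observation/action specs (obsInfo, actInfo) are placeholders and must be replaced with the ones from your ACC environment.

```matlab
% Placeholder specs for illustration only; use the specs returned by
% your ACC environment instead.
obsInfo = rlNumericSpec([3 1]);
actInfo = rlNumericSpec([1 1]);
numObs = obsInfo.Dimension(1);
numAct = actInfo.Dimension(1);

% Common input path shared by the mean and standard-deviation paths
commonPath = [
    featureInputLayer(numObs,'Normalization','none','Name','obs')
    fullyConnectedLayer(64,'Name','fcCommon')
    reluLayer('Name','relu')];

% Mean path: unconstrained output is fine
meanPath = fullyConnectedLayer(numAct,'Name','mean');

% Standard-deviation path: softplusLayer guarantees a nonnegative
% output, which avoids the "Standard deviation must be nonnegative"
% error during training.
stdPath = [
    fullyConnectedLayer(numAct,'Name','fcStd')
    softplusLayer('Name','std')];

% Assemble the network: mean and std are concatenated into one output
net = layerGraph(commonPath);
net = addLayers(net,meanPath);
net = addLayers(net,stdPath);
net = addLayers(net,concatenationLayer(1,2,'Name','meanStd'));
net = connectLayers(net,'relu','mean');
net = connectLayers(net,'relu','fcStd');
net = connectLayers(net,'mean','meanStd/in1');
net = connectLayers(net,'std','meanStd/in2');

% Stochastic (Gaussian) actor for a continuous action space
actor = rlStochasticActorRepresentation(net,obsInfo,actInfo, ...
    'Observation',{'obs'});
```

Without the softplusLayer (or an equivalent nonnegativity constraint such as an exponential), a plain fullyConnectedLayer on the variance path can drift negative as the weights are updated, which is consistent with the training running for ~50 episodes before erroring out.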