Why is my RL agent's action still exceeding the upper and lower limits?
I am using a Policy Gradient (PG) agent and I want my action to stay in the range 0-100. I have already set UpperLimit to 100 and LowerLimit to 0, but as you can see (scope display 3), the action still exceeds the limits. How can I fix that?
2 Comments
Emmanouil Tzorakoleftherakis
on 9 Jun 2021
Which one is the action here? What does your actor network look like?
denny
on 7 Dec 2021
I have solved a similar problem.
actInfo = rlNumericSpec([1],'UpperLimit',0.0771,'LowerLimit',-0.0405)
In my case, the minimum value came out as -0.0405, but the maximum value came out as -0.0405 + 0.0771*2.
But your output ranges from -1000 to 1000, which I also don't understand.
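To make the setup concrete, here is a minimal sketch of how the action spec is usually declared (the limit values are taken from the comment above; the variable names are illustrative). Note that per the MATLAB documentation, only DDPG, TD3, and SAC agents enforce these limits; a PG agent's stochastic actor can still sample actions outside them:

```matlab
% Sketch: declare a 1-D continuous action with bounds.
% LowerLimit/UpperLimit describe the intended range, but a PG agent
% does NOT clip its sampled actions to this range -- only DDPG, TD3,
% and SAC agents enforce it. Other agents need the environment to
% constrain the action itself.
actInfo = rlNumericSpec([1 1], ...
    'LowerLimit', -0.0405, ...
    'UpperLimit',  0.0771);
```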
Answers (2)
Azmi Yagli
on 5 Sep 2023
Edited: Azmi Yagli
on 5 Sep 2023
If you look at rlNumericSpec, you can see this in the LowerLimit or UpperLimit section:
DDPG, TD3 and SAC agents use this property to enforce lower limits on the action. When using other agents, if you need to enforce constraints on the action, you must do so within the environment.
So if you use other algorithms you can saturate the action inside the environment, although that didn't work for me.
You can try discretizing your agent's actions so that they have boundaries.
Or you can give a negative reward when your agent exceeds the action limits.
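The two workarounds above (saturation inside the environment, and a penalty for out-of-range actions) might be sketched like this in a custom step function. This is only an illustration; `myStep`, the [0, 100] range, and the placeholder dynamics are assumptions, not the original poster's code:

```matlab
% Sketch of a custom environment step function (hypothetical name 'myStep')
% that both saturates the action and penalizes limit violations.
function [nextObs, reward, isDone, loggedSignals] = myStep(action, loggedSignals)
    rawAction = action;
    % Option 1: hard saturation -- clamp the action to [0, 100] before
    % the environment dynamics ever see it.
    action = min(max(action, 0), 100);
    % Option 2: negative reward -- penalize the agent in proportion to
    % how far its requested action fell outside the valid range.
    penalty = -abs(rawAction - action);
    % Placeholder dynamics and reward; replace with the real environment.
    nextObs = loggedSignals.State;
    isDone  = false;
    reward  = 0 + penalty;
end
```

With saturation alone the agent never observes a consequence for requesting out-of-range actions, so combining it with the penalty term can help the policy learn to stay inside the bounds.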
0 Comments