Why does my RL agent's action still exceed the upper and lower limits?

I am using a Policy Gradient (PG) agent and I want my action to stay in the range 0-100. I have already set UpperLimit to 100 and LowerLimit to 0, but as you can see in the third scope display, the action still exceeds the limits. How can I fix that?
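For reference, a minimal sketch of the action specification described above (the scalar action dimension is an assumption, not taken from the question):

% Action specification with the limits described in the question
% (a single scalar action is assumed)
actInfo = rlNumericSpec([1 1],'LowerLimit',0,'UpperLimit',100);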
  2 Comments
denny on 7 Dec 2021
I have solved a similar problem.
actInfo = rlNumericSpec([1],'UpperLimit',0.0771,'LowerLimit',-0.0405)
It means the minimum value is -0.0405 and the maximum value is -0.0405 + 0.0771*2.
But your output is -1000 to 1000; I don't know why that happens either.


Answers (2)

Azmi Yagli on 5 Sep 2023
Edited: Azmi Yagli on 5 Sep 2023
If you look at rlNumericSpec, you can see this in the LowerLimit and UpperLimit sections:
"DDPG, TD3 and SAC agents use this property to enforce lower limits on the action. When using other agents, if you need to enforce constraints on the action, you must do so within the environment."
So if you use other algorithms you can apply saturation inside the environment, although that did not work for me.
You can also try discretizing your agent's actions so that they have built-in boundaries.
Or you can give the agent a negative reward whenever it exceeds the action limits.
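A minimal sketch of the three suggestions above, assuming a custom environment created with rlFunctionEnv and a scalar action limited to 0-100; myStepFcn, the State field of loggedSignals, the penalty weight, and the 10-unit discretization step are all hypothetical choices, not the answerer's code:

% Option 2: discretize the action space so the agent can only choose allowed values
actInfo = rlFiniteSetSpec(0:10:100);    % 10-unit steps are an arbitrary choice

% Options 1 and 3: saturate and penalize inside the environment step function
% (pass this function to rlFunctionEnv together with a matching reset function)
function [nextObs,reward,isDone,loggedSignals] = myStepFcn(action,loggedSignals)
    rawAction = action;
    action    = min(max(action,0),100);    % Option 1: clip the action to [0, 100]

    % Placeholder dynamics: replace with your own plant model
    nextObs = loggedSignals.State + action;
    isDone  = false;

    % Option 3: negative reward when the raw action leaves the limits
    penalty = 0.1*abs(rawAction - action); % zero while the action is within limits
    reward  = -penalty;                    % combine with your task reward
    loggedSignals.State = nextObs;
end

The penalty weight and the discretization grid are tuning choices; adjust them for your problem.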

denny on 18 Nov 2021
I have the same problem. How can it be solved?
