Options for AC agent
Use an rlACAgentOptions object to specify options for
creating actor-critic (AC) agents. To create an actor-critic agent, use rlACAgent.
For more information, see Actor-Critic Agents.
For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
opt = rlACAgentOptions

opt = rlACAgentOptions creates a default option set for an AC agent. You can modify the object properties using dot notation.
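For example, the following minimal sketch creates a default option set and passes it to rlACAgent. It assumes you have already created actor and critic representations for your environment; those variable names are placeholders here.

opt = rlACAgentOptions;
agent = rlACAgent(actor,critic,opt);   % actor and critic created beforehand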
NumStepsToLookAhead — Number of steps ahead
1 (default) | positive integer
Number of steps to look ahead in model training, specified as a positive integer. For AC agents, the number of steps to look ahead corresponds to the training episode length.
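For instance, you can set the look-ahead horizon as a name-value pair when constructing the option set; the value 32 below is an arbitrary choice for illustration.

opt = rlACAgentOptions('NumStepsToLookAhead',32);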
EntropyLossWeight — Entropy loss weight
0 (default) | scalar value between 0 and 1

Entropy loss weight, specified as a scalar value between 0 and 1, inclusive. A higher loss weight value promotes agent exploration
by applying a penalty for being too certain about which action to take. Doing so can
help the agent move out of local optima.
The entropy loss function for episode step t is:

$$H_t = E \sum_{k=1}^{M} \mu_k(S_t) \ln \mu_k(S_t)$$

Here:

$E$ is the entropy loss weight.
$M$ is the number of possible actions.
$\mu_k(S_t)$ is the probability of taking action $A_k$ when in state $S_t$ following the current policy.
When gradients are computed during training, an additional gradient component is computed for minimizing this loss function.
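As a rough numerical illustration (the weight and probability values below are made up for this sketch), a policy that is nearly certain about its action incurs a higher loss value than a more uniform one:

E = 0.1;                          % entropy loss weight (illustrative)
muCertain = [0.98 0.01 0.01];     % nearly deterministic policy
muUniform = [1/3 1/3 1/3];        % maximally exploratory policy
E*sum(muCertain.*log(muCertain))  % approximately -0.011
E*sum(muUniform.*log(muUniform))  % approximately -0.110

Minimizing this loss therefore pulls the policy toward the more uniform distribution, which is how the penalty promotes exploration.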
SampleTime — Sample time of agent
1 (default) | positive scalar
Sample time of agent, specified as a positive scalar.
DiscountFactor — Discount factor
0.99 (default) | positive scalar less than or equal to 1
Discount factor applied to future rewards during training, specified as a positive scalar less than or equal to 1.
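As a quick illustration of what this factor controls (the 100-step horizon below is arbitrary), compare the total discounted value of a constant reward of 1 under two different factors:

k = 0:99;        % 100 future steps
sum(0.99.^k)     % approximately 63.4, far-sighted agent
sum(0.50.^k)     % approximately 2.0, near-sighted agent

A factor close to 1 makes the agent weigh long-term rewards heavily, while smaller values make it focus on immediate reward.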
Create an AC agent options object, specifying the discount factor.
opt = rlACAgentOptions('DiscountFactor',0.95)
opt = 
  rlACAgentOptions with properties:

    NumStepsToLookAhead: 1
      EntropyLossWeight: 0
             SampleTime: 1
         DiscountFactor: 0.9500
You can modify options using dot notation. For example, set the agent sample time to 0.5.
opt.SampleTime = 0.5;