Create options for AC agent
opt = rlACAgentOptions
opt = rlACAgentOptions(Name,Value)

opt = rlACAgentOptions creates an rlACAgentOptions object for use as an argument when creating an AC agent using all default settings. You can modify the object properties using dot notation.
Create an AC agent options object, specifying the discount factor.
opt = rlACAgentOptions('DiscountFactor',0.95)
opt = 

  rlACAgentOptions with properties:

    NumStepsToLookAhead: 1
      EntropyLossWeight: 0
             SampleTime: 1
         DiscountFactor: 0.9500
You can modify options using dot notation. For example, set the agent sample time to 0.5.
opt.SampleTime = 0.5;
To train an agent using the asynchronous advantage actor-critic (A3C) method, you must set the agent and parallel training options appropriately.
When creating the AC agent, set the NumStepsToLookAhead value to be greater than 1. A common value is 64, as used here.
agentOpts = rlACAgentOptions('NumStepsToLookAhead',64);
Use agentOpts when creating your agent.
Configure the training algorithm to use asynchronous parallel training.
trainOpts = rlTrainingOptions('UseParallel',true);
trainOpts.ParallelizationOptions.Mode = "async";
Configure the workers to return gradient data to the host. Also, set the number of steps before the workers send data back to the host to match the number of steps to look ahead.
trainOpts.ParallelizationOptions.DataToSendFromWorkers = "gradients";
trainOpts.ParallelizationOptions.StepsUntilDataIsSent = agentOpts.NumStepsToLookAhead;
Use trainOpts when training your agent.
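Putting these pieces together, the following is a minimal sketch of the complete A3C setup. It assumes that an environment env and actor and critic representations (actor and critic) have already been created; those names are placeholders rather than part of this example.

% Minimal A3C setup sketch; env, actor, and critic are assumed to exist.
agentOpts = rlACAgentOptions('NumStepsToLookAhead',64);
agent = rlACAgent(actor,critic,agentOpts);

trainOpts = rlTrainingOptions('UseParallel',true);
trainOpts.ParallelizationOptions.Mode = "async";
trainOpts.ParallelizationOptions.DataToSendFromWorkers = "gradients";
trainOpts.ParallelizationOptions.StepsUntilDataIsSent = agentOpts.NumStepsToLookAhead;

% Train the agent using the asynchronous parallel configuration.
trainStats = train(agent,env,trainOpts);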
Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside quotes. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN.
'NumStepsToLookAhead' — Number of steps ahead
1 (default) | positive integer

Number of steps to look ahead in model training, specified as the comma-separated pair consisting of 'NumStepsToLookAhead' and a positive integer. For AC agents, the number of steps to look ahead corresponds to the training episode length.
'EntropyLossWeight' — Entropy loss weight
0 (default) | scalar value between 0 and 1

Entropy loss weight, specified as the comma-separated pair consisting of 'EntropyLossWeight' and a scalar value between 0 and 1, inclusive. A higher loss weight value promotes agent exploration by applying a penalty for being too certain about which action to take. Doing so can help the agent move out of local optima.
The entropy loss function for episode step t is:

H_t = E \sum_{k=1}^{M} \mu_k(S_t) \ln \mu_k(S_t)

Here:
E is the entropy loss weight.
M is the number of possible actions.
μ_k(S_t) is the probability of taking action A_k when in state S_t following the current policy.
When gradients are computed during training, an additional gradient component is computed for minimizing this loss function.
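To make the formula concrete, the following sketch evaluates the entropy loss for a single episode step by hand. The weight E and the action probabilities mu are arbitrary illustrative values; in practice, μ_k(S_t) comes from the current policy.

% Hand-evaluate the entropy loss for one step over M = 3 actions.
E = 0.1;                    % entropy loss weight (illustrative value)
mu = [0.7 0.2 0.1];         % mu_k(S_t): action probabilities from the policy
Ht = E*sum(mu.*log(mu));    % entropy loss; minimizing it pushes mu toward
                            % a more uniform (exploratory) distribution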
'SampleTime' — Sample time of agent
1 (default) | numeric value
Sample time of agent, specified as the comma-separated pair consisting of
'SampleTime' and a numeric value.
'DiscountFactor' — Discount factor
0.99 (default) | numeric value
Discount factor applied to future rewards during training, specified as the
comma-separated pair consisting of
'DiscountFactor' and a positive
numeric value less than or equal to 1.
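As a quick illustration of what the discount factor does, this sketch computes a discounted return for a hypothetical 10-step reward sequence; the numbers are arbitrary and chosen only to show that rewards further in the future contribute less to the return.

% Illustrative only: discounted return with DiscountFactor = 0.99.
gamma = 0.99;
rewards = ones(1,10);              % hypothetical reward of 1 at every step
G = sum(gamma.^(0:9).*rewards);    % the step k reward is weighted by gamma^k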