Options for DQN agent
Use an rlDQNAgentOptions object to specify options for deep Q-network (DQN) agents. To create a DQN agent, use rlDQNAgent.
For more information, see Deep Q-Network Agents.
For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.
opt = rlDQNAgentOptions creates an options object for use as an argument when creating a DQN agent, using all default settings. You can modify the object properties using dot notation.
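For example, the following sketch creates a default options object, adjusts a few properties with dot notation, and passes the object to rlDQNAgent (the critic variable is assumed to have been created beforehand; the property values are illustrative).

% Create default options and modify properties with dot notation.
opt = rlDQNAgentOptions;
opt.UseDoubleDQN = true;          % use double DQN updates
opt.MiniBatchSize = 128;          % larger mini-batch for lower-variance gradients
opt.ExperienceBufferLength = 1e6; % larger replay buffer
% Pass the options as the last argument when creating the agent:
% agent = rlDQNAgent(critic,opt); % critic: a previously created critic representation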
UseDoubleDQN — Flag for using double DQN
true (default) | false
Flag for using double DQN for value function target updates, specified as a logical value. For most applications, set UseDoubleDQN to true. For more information, see Deep Q-Network Agents.
EpsilonGreedyExploration — Options for epsilon-greedy exploration
Options for epsilon-greedy exploration, specified as an EpsilonGreedyExploration object with the following properties.

| Property | Description | Default Value |
| Epsilon | Probability threshold to either randomly select an action or select the action that maximizes the state-action value function. A larger value of Epsilon means that the agent randomly explores the action space at a higher rate. | 1 |
| EpsilonMin | Minimum value of Epsilon | 0.01 |
| EpsilonDecay | Decay rate | 0.005 |
At the end of each training time step, if Epsilon is greater than EpsilonMin, then it is updated using the following formula.

Epsilon = Epsilon*(1-EpsilonDecay)
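The following sketch traces this schedule over 1000 steps so you can see how Epsilon decays toward EpsilonMin; the parameter values used are the defaults listed in the table above.

% Trace the epsilon-greedy decay schedule
Epsilon = 1; EpsilonMin = 0.01; EpsilonDecay = 0.005;
history = zeros(1,1000);
for k = 1:1000
    history(k) = Epsilon;
    if Epsilon > EpsilonMin
        Epsilon = Epsilon*(1 - EpsilonDecay);
    end
end
plot(history), xlabel("Training step"), ylabel("Epsilon")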
To specify exploration options, use dot notation after creating the rlDQNAgentOptions object. For example, set the epsilon value to 0.9.

opt = rlDQNAgentOptions;
opt.EpsilonGreedyExploration.Epsilon = 0.9;
If your agent converges on local optima too quickly, promote agent exploration by increasing Epsilon.
SequenceLength — Maximum batch-training trajectory length when using RNN
1 (default) | positive integer
Maximum batch-training trajectory length when using a recurrent neural network for the critic, specified as a positive integer. This value must be greater than 1 when using a recurrent neural network for the critic and 1 otherwise.
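For example, a configuration along these lines (the values are illustrative) trains a recurrent critic on 20-step trajectories:

% Batch-training settings for a recurrent critic (illustrative values)
opt = rlDQNAgentOptions;
opt.SequenceLength = 20;   % each sampled trajectory is 20 steps long
opt.MiniBatchSize  = 32;   % each mini-batch holds 32 such trajectories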
TargetSmoothFactor— Smoothing factor for target critic updates
1e-3 (default) | positive scalar less than or equal to 1
Smoothing factor for target critic updates, specified as a positive scalar less than or equal to 1. For more information, see Target Update Methods.
TargetUpdateFrequency— Number of steps between target critic updates
1 (default) | positive integer
Number of steps between target critic updates, specified as a positive integer. For more information, see Target Update Methods.
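Conceptually (this is a sketch, not toolbox code), these two properties control the following update of the target critic parameters:

% Smoothing update, applied every TargetUpdateFrequency learning steps:
%   targetParams = TargetSmoothFactor*criticParams + (1 - TargetSmoothFactor)*targetParams
% With TargetSmoothFactor = 1, this reduces to a periodic (hard) copy.
criticParams = [0.5 -1.2 0.8];   % hypothetical critic parameters
targetParams = [0.4 -1.0 0.7];   % hypothetical target parameters
tau = 1e-3;                      % TargetSmoothFactor
targetParams = tau*criticParams + (1 - tau)*targetParams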
ResetExperienceBufferBeforeTraining — Flag for clearing the experience buffer
true (default) | false
Flag for clearing the experience buffer before training, specified as a logical value.
SaveExperienceBufferWithAgent — Flag for saving the experience buffer
false (default) | true
Flag for saving the experience buffer data when saving the agent, specified as a logical value. This option applies both when saving candidate agents during training and when saving agents using the save function.
For some agents, such as those with a large experience buffer and image-based observations, the memory required for saving the experience buffer is large. In such cases, to not save the experience buffer data, set SaveExperienceBufferWithAgent to false.
If you plan to further train your saved agent, you can start training with the previous experience buffer as a starting point. In this case, set SaveExperienceBufferWithAgent to true.
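For example, the following settings (an assumed resume-training pattern) keep the buffer with the saved agent and reuse it when training restarts:

opt = rlDQNAgentOptions;
opt.SaveExperienceBufferWithAgent = true;          % keep buffer data with the saved agent
opt.ResetExperienceBufferBeforeTraining = false;   % resume from the saved buffer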
MiniBatchSize— Size of random experience mini-batch
64 (default) | positive integer
Size of random experience mini-batch, specified as a positive integer. During each training episode, the agent randomly samples experiences from the experience buffer when computing gradients for updating the critic properties. Large mini-batches reduce the variance when computing gradients but increase the computational effort.
When using a recurrent neural network for the critic, MiniBatchSize is the number of experience trajectories in a batch, where each trajectory has length equal to the SequenceLength option.
NumStepsToLookAhead — Number of steps ahead
1 (default) | positive integer
Number of steps to look ahead during training, specified as a positive integer.
N-step Q learning is not supported when using a recurrent neural network for the critic. In this case, NumStepsToLookAhead must be 1.
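As a worked illustration (not toolbox code), with NumStepsToLookAhead = N the update target sums N discounted rewards before bootstrapping from the target critic:

% N-step target: y = r1 + gamma*r2 + ... + gamma^(N-1)*rN + gamma^N*max_a Qtarget(s_{N+1},a)
rewards   = [1 0 2];   % hypothetical rewards r1..r3, so N = 3
gamma     = 0.99;      % DiscountFactor
bootstrap = 4;         % hypothetical max_a Qtarget(s4,a)
N = numel(rewards);
y = sum(gamma.^(0:N-1).*rewards) + gamma^N*bootstrap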
ExperienceBufferLength— Experience buffer size
10000 (default) | positive integer
Experience buffer size, specified as a positive integer. During training, the agent updates the critic using a mini-batch of experiences randomly sampled from the buffer.
SampleTime— Sample time of agent
1 (default) | positive scalar
Sample time of agent, specified as a positive scalar.
DiscountFactor— Discount factor
0.99 (default) | positive scalar less than or equal to 1
Discount factor applied to future rewards during training, specified as a positive scalar less than or equal to 1.
rlDQNAgent | Deep Q-network reinforcement learning agent
This example shows how to create a DQN agent options object.
Create an rlDQNAgentOptions object that specifies the agent mini-batch size.

opt = rlDQNAgentOptions('MiniBatchSize',48)
opt = 

  rlDQNAgentOptions with properties:

                           UseDoubleDQN: 1
               EpsilonGreedyExploration: [1×1 rl.option.EpsilonGreedyExploration]
                         SequenceLength: 1
                     TargetSmoothFactor: 1.0000e-03
                  TargetUpdateFrequency: 1
    ResetExperienceBufferBeforeTraining: 1
          SaveExperienceBufferWithAgent: 0
                          MiniBatchSize: 48
                    NumStepsToLookAhead: 1
                 ExperienceBufferLength: 10000
                             SampleTime: 1
                         DiscountFactor: 0.9900
You can modify options using dot notation. For example, set the agent sample time to 0.5.

opt.SampleTime = 0.5;
Behavior changed in R2020a
Target update method settings for DQN agents have changed. The following changes require updates to your code:
The TargetUpdateMethod option has been removed. Now, DQN agents determine the target update method based on the TargetUpdateFrequency and TargetSmoothFactor option values.
The default value of TargetUpdateFrequency has changed from 4 to 1.
To use one of the following target update methods, set the TargetUpdateFrequency and TargetSmoothFactor properties as indicated.

| Update Method | TargetUpdateFrequency | TargetSmoothFactor |
| Smoothing | 1 | Less than 1 |
| Periodic | Greater than 1 | 1 |
| Periodic smoothing (new method in R2020a) | Greater than 1 | Less than 1 |
The default target update configuration, which is a smoothing update with a TargetSmoothFactor value of 0.001, remains the same.
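For example, a configuration such as the following (the values are illustrative) selects the periodic smoothing method:

opt = rlDQNAgentOptions;
opt.TargetUpdateFrequency = 4;   % greater than 1
opt.TargetSmoothFactor = 0.05;   % less than 1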
This table shows some typical uses of TargetUpdateMethod and how to update your code to use the new option configuration.
| Not Recommended | Recommended |
| opt = rlDQNAgentOptions('TargetUpdateMethod',"smoothing"); | opt = rlDQNAgentOptions; |
| opt = rlDQNAgentOptions('TargetUpdateMethod',"periodic"); | opt = rlDQNAgentOptions; opt.TargetUpdateFrequency = 4; opt.TargetSmoothFactor = 1; |
| opt = rlDQNAgentOptions; opt.TargetUpdateMethod = "periodic"; opt.TargetUpdateFrequency = 5; | opt = rlDQNAgentOptions; opt.TargetUpdateFrequency = 5; opt.TargetSmoothFactor = 1; |