Set up reinforcement learning environment to run multiple simulations
When you define a custom training loop for reinforcement learning, you can simulate an agent or policy against an environment using the runEpisode function. Use the setup function to configure the environment for running simulations using multiple calls to runEpisode.
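At a high level, the workflow looks like the following sketch. Here, numEpisodes and maxSteps are placeholder values; the example below fills in the details.

setup(env)                      % configure the environment once, before simulating
for i = 1:numEpisodes
    output = runEpisode(env,policy,MaxSteps=maxSteps);
    % process the episode output ...
end
cleanup(env)                    % release the environment when you are done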
Simulate Environment and Agent
Create a reinforcement learning environment and extract its observation and action specifications.
env = rlPredefinedEnv("CartPole-Discrete");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
Create an actor function approximator that outputs the probability of each discrete action.
actorNetwork = [
    featureInputLayer(obsInfo.Dimension(1), ...
        Normalization="none",Name="state")
    fullyConnectedLayer(24,Name="fc1")
    reluLayer(Name="relu1")
    fullyConnectedLayer(24,Name="fc2")
    reluLayer(Name="relu2")
    fullyConnectedLayer(2,Name="output")
    softmaxLayer(Name="actionProb")];
actorNetwork = dlnetwork(actorNetwork);
actor = rlDiscreteCategoricalActor(actorNetwork,obsInfo,actInfo);
Create a policy object using the function approximator.
policy = rlStochasticActorPolicy(actor);
Create an experience buffer.
buffer = rlReplayMemory(obsInfo,actInfo);
Set up the environment for running multiple simulations. For this example, configure the simulations to log any errors rather than send them to the command window.
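setup(env,StopOnError="off")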
Simulate multiple episodes using the environment and policy. After each episode, append the experiences to the buffer. For this example, run 100 episodes.
for i = 1:100
    output = runEpisode(env,policy,MaxSteps=300);
    append(buffer,output.AgentData.Experiences)
end
Clean up the environment.
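cleanup(env)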
Sample a mini-batch of experiences from the buffer. For this example, sample 10 experiences.
batch = sample(buffer,10);
You can then learn from the sampled experiences and update the policy and actor.
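For example, the following sketch applies one generic gradient step to the actor network using Deep Learning Toolbox functions. It assumes the sampled experiences are returned as a structure array with Observation and Reward fields, and the return-weighted loss is only a placeholder; adapt the data handling and the loss to your algorithm and to the actual format that sample returns.

net = getModel(actor);                     % underlying dlnetwork of the actor
avgGrad = []; avgSqGrad = [];              % Adam optimizer state

obs = cat(2,batch.Observation);            % gather observation cells (assumed format)
obsBatch = dlarray(cat(2,obs{:}),"CB");    % channel-by-batch dlarray
returns = dlarray([batch.Reward]);         % illustrative: raw rewards as returns

[loss,grads] = dlfeval(@actorLoss,net,obsBatch,returns);
[net,avgGrad,avgSqGrad] = adamupdate(net,grads,avgGrad,avgSqGrad,1);
actor = setModel(actor,net);               % write the updated weights back
policy = rlStochasticActorPolicy(actor);   % refresh the policy with the new actor

function [loss,grads] = actorLoss(net,obsBatch,returns)
    prob = forward(net,obsBatch);          % action probabilities, 2-by-B
    logProb = log(max(prob,eps));          % guard against log(0)
    loss = -mean(sum(logProb,1).*returns); % placeholder objective
    grads = dlgradient(loss,net.Learnables);
end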
env — Reinforcement learning environment
rlFunctionEnv object | SimulinkEnvWithAgent object | rlNeuralNetworkEnvironment object | rlMDPEnv object | ...
Reinforcement learning environment, specified as one of the following objects.
rlFunctionEnv — Environment defined using custom functions
rlMDPEnv — Markov decision process environment
rlNeuralNetworkEnvironment — Environment with deep neural network transition models
SimulinkEnvWithAgent — Simulink® environment created using rlSimulinkEnv or createIntegratedEnv
Predefined environment created using rlPredefinedEnv
Custom environment created from a template (rlCreateEnvTemplate)
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
StopOnError — Option to stop episode when error occurs
"on" (default) |
Option to stop an episode when an error occurs, specified as one of the following:
"on"— Stop the episode when an error occurs and generate an error message in the MATLAB® command window.
"off"— Log errors in the
UseParallel — Option for using parallel simulations
false (default) | true
Option for using parallel simulations, specified as a logical value. Using parallel computing lets you use multiple cores, processors, computer clusters, or cloud resources to speed up simulation.
When you set UseParallel to true, the output of a subsequent call to runEpisode is an rl.env.Future object, which supports deferred evaluation of the simulation output.
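For example, the following sketch runs four episodes on parallel workers and then retrieves the deferred outputs. It assumes a parallel pool is available and that fetchOutputs (a retrieval function for rl.env.Future arrays) blocks until the simulations finish.

setup(env,UseParallel=true)
for i = 1:4
    futures(i) = runEpisode(env,policy,MaxSteps=300); % returns rl.env.Future objects
end
outputs = fetchOutputs(futures);  % wait for the deferred simulations to complete
cleanup(env)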
SetupFcn — Function to run on each worker before running an episode
[] (default) | function handle
Function to run on each worker before running an episode, specified as a handle to a function with no input arguments. Use this function to perform any preprocessing required before running an episode.
CleanupFcn — Function to run on each worker when cleaning up the environment
[] (default) | function handle
Function to run on each worker when cleaning up the environment, specified as a handle to a function with no input arguments. Use this function to clean up the workspace or perform other processing after calling cleanup.
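For instance, the following sketch pairs SetupFcn with CleanupFcn. Here, myWorkerSetup and myWorkerCleanup are hypothetical helper functions that stand in for whatever per-worker preprocessing and teardown your simulations need.

setup(env, ...
    UseParallel=true, ...
    SetupFcn=@myWorkerSetup, ...    % hypothetical: for example, load data on each worker
    CleanupFcn=@myWorkerCleanup)    % hypothetical: for example, close files on each worker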
TransferBaseWorkspaceVariables — Option to send model and workspace variables to parallel workers
"on" (default) |
Option to send model and workspace variables to parallel workers, specified as
"off". When the option is
"on", the client sends variables used in models and defined in
the base MATLAB workspace to the workers.
AttachedFiles — Additional files to attach to the parallel pool
string | string array
Additional files to attach to the parallel pool before running an episode, specified as a string or string array.
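For example, to send a helper file to the workers before simulating (the file name here is illustrative):

setup(env,UseParallel=true,AttachedFiles="myHelperFunctions.m")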
WorkerRandomSeeds — Worker random seeds
-1 (default) | vector
Worker random seeds, specified as one of the following:
-1 — Set the random seed of each worker to the worker ID.
Vector with length equal to the number of workers — Specify the random seed for each worker.
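For example, the following sketch assigns an explicit seed to each worker of a hypothetical four-worker pool (the seed values are illustrative):

setup(env,UseParallel=true,WorkerRandomSeeds=[10 20 30 40])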