newFcn = setLearnableParameters(oldFcn,pars)
returns a new actor or critic function approximator object, newFcn,
with the same structure as the original function object, oldFcn, and
the learnable parameter values specified in pars.
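For example, the following sketch builds a simple deterministic actor and replaces its parameters (the network architecture here is illustrative, not from the original documentation):
% Create a minimal actor for a 2-D observation and a scalar action.
obsInfo = rlNumericSpec([2 1]);
actInfo = rlNumericSpec([1 1]);
net = dlnetwork([featureInputLayer(2); fullyConnectedLayer(1)]);
oldFcn = rlContinuousDeterministicActor(net,obsInfo,actInfo);
% Get the current parameters, scale them, and create a new actor.
pars = getLearnableParameters(oldFcn);
pars = cellfun(@(p) 2*p,pars,"UniformOutput",false);
newFcn = setLearnableParameters(oldFcn,pars);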
Policy
newPol = setLearnableParameters(oldPol,pars)
returns a new policy object, newPol, with the same structure as the
original policy object, oldPol, and the learnable parameter values
specified in pars.
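A minimal sketch of this syntax, assuming a trained agent is available in the workspace (getGreedyPolicy extracts a deployable policy object from an agent):
oldPol = getGreedyPolicy(agent);
pars = getLearnableParameters(oldPol);
pars = cellfun(@(p) 2*p,pars,"UniformOutput",false);
newPol = setLearnableParameters(oldPol,pars);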
Modify Critic Parameters
Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Compare DDPG Agent to LQR Controller.
load("DoubleIntegDDPG.mat","agent")
Obtain the critic function approximator from the agent.
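The code for this example is not shown here; a sketch of the critic workflow, mirroring the actor example that follows (getCritic and setCritic are the critic counterparts of getActor and setActor):
critic = getCritic(agent);
% Obtain the learnable parameters, modify them, and set them back.
params = getLearnableParameters(critic);
modifiedParams = cellfun(@(x) x*2,params,"UniformOutput",false);
critic = setLearnableParameters(critic,modifiedParams);
agent = setCritic(agent,critic);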
Modify Actor Parameters
Assume that you have an existing trained reinforcement learning agent. For this example, load the trained agent from Compare DDPG Agent to LQR Controller.
load("DoubleIntegDDPG.mat","agent")
Obtain the actor function approximator from the agent.
actor = getActor(agent);
Obtain the learnable parameters from the actor.
params = getLearnableParameters(actor)
params = 2×1 cell array
    {[-15.5717 -7.1444]}
    {[              0]}
Modify the parameter values. For this example, simply multiply all of the parameters by 2.
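A sketch of this step, followed by setting the new values into the actor and the modified actor back into the agent (setActor returns the updated agent):
modifiedParams = cellfun(@(x) x*2,params,"UniformOutput",false);
actor = setLearnableParameters(actor,modifiedParams);
agent = setActor(agent,actor);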
agent is a handle object. Therefore, its parameters are
updated by setLearnableParameters whether
agent is returned as an output argument or not. For more
information about handle objects, see Handle Object Behavior.
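For example, given parameter values pars compatible with the agent, both of the following calls update the agent (a sketch illustrating the handle behavior):
setLearnableParameters(agent,pars);             % updates agent in place
newAgent = setLearnableParameters(agent,pars);  % equivalent; also returns the agent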
oldFcn — Original actor or critic function object
reinforcement learning function approximator object
Original function approximator object, specified as one of the following:
rlValueFunction object
rlQValueFunction object
rlVectorQValueFunction object
rlContinuousDeterministicActor object
rlDiscreteCategoricalActor object
rlContinuousGaussianActor object
Learnable parameter values for the agent, function approximator, or policy object, specified as a cell array.
The parameters in pars must be compatible with the structure and
parameterization of the agent, function approximator, or policy object passed as a first
argument.
To obtain a cell array of learnable parameter values from an existing agent,
function approximator, or policy object, which you can then modify, use the getLearnableParameters function.
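For example, a sketch assuming fcn is an existing function approximator object (the scaling factor is illustrative):
pars = getLearnableParameters(fcn);                    % cell array of parameter values
pars = cellfun(@(p) 0.5*p,pars,"UniformOutput",false); % sizes and layout are preserved
fcn = setLearnableParameters(fcn,pars);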
newFcn — New actor or critic object
rlValueFunction object | rlQValueFunction object | rlVectorQValueFunction object | rlContinuousDeterministicActor object | rlDiscreteCategoricalActor object | rlContinuousGaussianActor object
New actor or critic object, returned as a function object of the same type as
oldFcn. Apart from its new learnable parameter values,
newFcn is the same as oldFcn.
New reinforcement learning policy, returned as a policy object of the same type as
oldPol. Apart from the learnable parameter values,
newPol is the same as oldPol.
Updated agent, returned as an agent object. Note that agent is
a handle object. Therefore, its parameters are updated by
setLearnableParameters whether agent is
returned as an output argument or not. For more information about handle objects, see
Handle Object Behavior.
R2022a: setLearnableParameters now uses approximator objects instead of representation objects
Using representation objects to create actors and critics for reinforcement learning
agents is no longer recommended. Therefore, setLearnableParameters now
uses function approximator objects instead.
R2020a: setLearnableParameterValues is now setLearnableParameters
setLearnableParameterValues is now
setLearnableParameters. To update your code, change the function name
from setLearnableParameterValues to
setLearnableParameters. The syntaxes are equivalent.