Reinforcement Learning SAC Agent Critics initialise with identical weights
I am following the basic outline from https://uk.mathworks.com/help/reinforcement-learning/ref/rl.agent.rlsacagent.html#mw_1dfe5ced-ed0e-4ce9-a25c-38c4f163ffcb to create an RL SAC agent, but I'm getting an error that the two critics I generate are identical. The help page says to use the "initialize" function to initialize the two networks separately, so that their initial weights differ.
% Initialise the two critics separately
criticNetwork1 = initialize(criticNetwork);
criticNetwork2 = initialize(criticNetwork);
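One thing I considered (my own assumption, not something stated on the help page): if `criticNetwork` is a `dlnetwork` whose weights are already set, `initialize` may leave them unchanged, so both calls would return identical copies. A sketch of what I mean, where `criticLayers` stands in for my layer array:

```matlab
% Hedged sketch: build the network uninitialized first, so each
% initialize call assigns its own fresh random weights.
% criticLayers is a placeholder for the actual layer array.
criticNetwork = dlnetwork(criticLayers, Initialize=false);
criticNetwork1 = initialize(criticNetwork);
criticNetwork2 = initialize(criticNetwork);
```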
When I do this and pass the same test values for the observation and action inputs, I get the same value from both critic networks, implying that the initial weights are identical.
critic1 = rlQValueFunction(criticNetwork1,observationInfo,actionInfo,'ActionInputNames','action','ObservationInputNames','observation');
critic2 = rlQValueFunction(criticNetwork2,observationInfo,actionInfo,'ActionInputNames','action','ObservationInputNames','observation');
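To check whether the weights themselves match (rather than just the outputs), I also compared the learnable parameters directly. A minimal sketch, assuming both networks are `dlnetwork` objects:

```matlab
% Compare the learnable parameter tables of the two critic networks;
% true here would confirm the initial weights really are identical.
sameWeights = isequal(criticNetwork1.Learnables.Value, ...
                      criticNetwork2.Learnables.Value)
```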
I assume this is a problem on my end rather than with the "initialize" function, but I would welcome any suggestions as to what might be causing it!