MATLAB crashes when using Reinforcement Learning Toolbox to train an agent using Parallel Computing.
MathWorks Support Team
on 30 Jul 2020
I am using the Reinforcement Learning Toolbox to train an agent with parallel computing.
When I use 20 cores (plus 4 x 16 GB GPUs) training runs well, but when 32, 36, or 40 cores are used, MATLAB R2020a crashes.
Why is the crash happening?
Accepted Answer
MathWorks Support Team
on 30 Jul 2020
MATLAB might crash while attempting to train a reinforcement learning agent in parallel with ten or more workers. The crash is due to a communication race condition between the client and worker processes.
You can avoid this crash by updating MATLAB to R2020a Update 3.
Alternatively, for PG, DQN, DDPG, TD3, and PPO agents, you can work around the race condition by using synchronous parallel training and configuring the workers to wait until the end of each episode before sending data to the host. To do so, configure your rlTrainingOptions object as shown in the following code:
>> trainOptions = rlTrainingOptions;
>> trainOptions.UseParallel = true;
>> trainOptions.ParallelizationOptions.Mode = "sync";              % synchronous parallel training
>> trainOptions.ParallelizationOptions.StepsUntilDataIsSent = -1;  % send data only at episode end
Using StepsUntilDataIsSent = -1 is not supported for AC agents. To avoid a communication race condition for these agents, consider using a PPO agent with experience-based parallel training or a PG agent with gradient-based parallel training.
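For the AC case, one way to set up the suggested alternatives might look like the sketch below. It assumes the DataToSendFromWorkers property of ParallelizationOptions, which selects between experience-based and gradient-based parallel training in R2020a; check the rlTrainingOptions documentation for your release before relying on it.

```matlab
% Sketch: parallel training options instead of an AC agent (assumes the
% DataToSendFromWorkers property from the R2020a rlTrainingOptions docs)
trainOptions = rlTrainingOptions;
trainOptions.UseParallel = true;
trainOptions.ParallelizationOptions.Mode = "sync";

% Option 1: PPO agent with experience-based parallel training
trainOptions.ParallelizationOptions.DataToSendFromWorkers = "experiences";

% Option 2: PG agent with gradient-based parallel training
% trainOptions.ParallelizationOptions.DataToSendFromWorkers = "gradients";
```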