MATLAB Answers

MATLAB crashes when using Reinforcement Learning Toolbox to train an agent using Parallel Computing.

I am using the Reinforcement Learning Toolbox to train an agent with parallel computing.
When I use 20 cores (plus 4 x 16 GB GPUs) training runs well, but when 32, 36, or 40 cores are used, MATLAB R2020a crashes.
Why is the crash happening?

Accepted Answer

MathWorks Support Team on 30 Jul 2020
MATLAB might crash while attempting to train a reinforcement learning agent in parallel with ten or more workers. The crash is due to a communication race condition between the client and worker processes.
You can avoid this error by updating MATLAB to R2020a Update 3.
As a workaround for PG, DQN, DDPG, TD3, and PPO agents, you can bypass the communication race condition by using synchronous parallel training and configuring the workers to wait until the end of the episode before sending data to the host. To do so, configure your rlTrainingOptions object as shown in the following code:
>> trainOptions = rlTrainingOptions;
>> trainOptions.UseParallel = true;
>> trainOptions.ParallelizationOptions.Mode = "sync";
>> trainOptions.ParallelizationOptions.StepsUntilDataIsSent = -1;
Using StepsUntilDataIsSent = -1 is not supported for AC agents. To avoid a communication race condition for these agents, consider using a PPO agent with experience-based parallel training or a PG agent with gradient-based parallel training.
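For the PG alternative, a minimal sketch of gradient-based parallel training is shown below. The DataToSendFromWorkers property name follows the R2020a rlTrainingOptions documentation; verify it against your release, since parallelization options have changed in later versions.

```matlab
% Sketch: gradient-based synchronous parallel training for a PG agent.
% Assumes you have already created your environment and agent.
trainOptions = rlTrainingOptions;
trainOptions.UseParallel = true;
trainOptions.ParallelizationOptions.Mode = "sync";
% Workers send computed gradients (rather than experiences) to the host:
trainOptions.ParallelizationOptions.DataToSendFromWorkers = "gradients";
```

Pass this options object to train as usual, for example train(agent, env, trainOptions).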



More Answers (0)
