Parallel CPU computing for recurrent neural networks (LSTMs)

Hello,
The documentation states that parallel CPU computing for LSTMs is possible using the trainNetwork function and setting the execution environment to 'parallel' in trainingOptions. It also states that the Parallel Computing Toolbox is required.
I do have the Parallel Computing Toolbox installed; running pool = parpool reports 23 workers (the number of cores my CPU has).
I also added 'ExecutionEnvironment','parallel' to my trainingOptions() call. However, I get the error "Parallel training of recurrent networks is not supported. 'ExecutionEnvironment' value in trainingOptions function must be 'auto', 'gpu' or 'cpu'."
...why?
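For reference, the setup that produces the error looks roughly like this (a minimal sketch; the solver, layer sizes, and dummy data are placeholders, not values from the original question):
% Minimal sketch of the setup described above; layer sizes, solver, and the
% dummy data are placeholders, not taken from the original question.
pool = parpool;                                  % opens a pool with 23 workers here
XTrain = arrayfun(@(k) rand(1,20), (1:50)', 'UniformOutput', false);  % 50 dummy sequences
YTrain = rand(50,1);                             % dummy sequence-to-one targets
layers = [ ...
    sequenceInputLayer(1)
    lstmLayer(100,'OutputMode','last')
    fullyConnectedLayer(1)
    regressionLayer];
options = trainingOptions('adam', ...
    'ExecutionEnvironment','parallel', ...       % this is what triggers the error
    'MaxEpochs',10);
net = trainNetwork(XTrain, YTrain, layers, options);   % errors in R2021b for LSTM layers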

Answers (2)

Raymond Norris on 4 Feb 2022
I'm assuming you're only running this on your local machine (with 23 cores)? And I'm assuming you don't have a GPU? If so, set ExecutionEnvironment to "cpu" (or even "auto", which uses a GPU if one exists and the CPU otherwise).
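In code, that is just a change to the ExecutionEnvironment value (the solver and epoch count are placeholders):
% Same training call as before; only the execution environment changes.
options = trainingOptions('adam', ...
    'ExecutionEnvironment','cpu', ...   % or 'auto'
    'MaxEpochs',10);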
  2 Comments
ThomasP on 4 Feb 2022
Thanks for your answer. Yes, I'm running it on my local machine with 23 cores and I don't have a GPU. However, if I set ExecutionEnvironment to "cpu", it only runs on a single core.
Raymond Norris on 4 Feb 2022
Right, fair point. One option is to download the R2022a prerelease to see if that resolves your issue.
Keep in mind, "parallel" will default to (any) GPU MATLAB finds. Therefore, you'll want MATLAB to ignore it by first calling
setenv CUDA_VISIBLE_DEVICES -1
and then train your model.
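In function syntax, the same sequence looks like this (a sketch; the training options, layers, and data variables are placeholders):
% Function-syntax equivalent of the command above, then parallel training on
% the CPU pool (options, layers, and data variables are placeholders).
setenv('CUDA_VISIBLE_DEVICES','-1');             % hide any GPUs from MATLAB
options = trainingOptions('adam', ...
    'ExecutionEnvironment','parallel', ...
    'MaxEpochs',10);
net = trainNetwork(XTrain, YTrain, layers, options);  % needs the R2022a prerelease or later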



Joss Knight on 7 Feb 2022
That doc page is about shallow networks (using train) rather than deep networks (using trainNetwork). Parallel training in trainNetwork for sequence networks is supported from the next release.
How are you confirming that ExecutionEnvironment 'cpu' is only using a single core? It should be using all your cores.
Parallel training on the CPU is only really useful when you have a multi-node cluster of machines. Generally speaking, all CPU deep learning code is multithreaded and makes full use of your hardware, so there is no advantage to parallel training or inference on a single machine; in fact, it should make things slower.
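One quick way to see the thread count that MATLAB's built-in multithreading uses (a sketch; overall CPU utilization during training is best checked in your OS task manager):
% Number of computational threads MATLAB uses for built-in multithreading;
% 'ExecutionEnvironment','cpu' training runs on these threads, not on a pool.
nThreads = maxNumCompThreads
During training, watching total CPU utilization in the task manager is a better check than counting MATLAB processes, since the multithreading happens inside a single process.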

Release

R2021b
