Multi GPU option for LSTM/GRU Layers
I know that it is currently not possible to combine LSTM layers with the multi-GPU option when training deep learning networks. Is this functionality planned for a near-future release? I would really like to use MATLAB for my current research, but with the size of my data and the restriction to a single GeForce 1080 Ti, the calculations simply take too long.
Bhargavi Maganuru on 10 Jul 2020
Parallel training is not currently supported for networks with LSTM layers. I have forwarded this request to the relevant development team; it may be considered in a future release.
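In the meantime, single-GPU training still works for LSTM networks. A minimal sketch (assuming placeholder training data XTrain/YTrain and a layer array layers, which are not from this thread): set 'ExecutionEnvironment' to 'gpu' in trainingOptions, and consider a larger 'MiniBatchSize' to better utilize the one GPU.

```matlab
% Sketch only: XTrain, YTrain, and layers are hypothetical placeholders.
% 'multi-gpu' is the option that fails for LSTM networks; 'gpu' uses one device.
options = trainingOptions('adam', ...
    'ExecutionEnvironment','gpu', ...  % single GPU
    'MaxEpochs',30, ...
    'MiniBatchSize',128, ...           % larger batches can improve GPU utilization
    'Plots','training-progress');

net = trainNetwork(XTrain, YTrain, layers, options);
```

This does not remove the single-GPU restriction, but tuning the mini-batch size is often the most effective lever for throughput on one card.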