Significant accuracy difference between CPU and GPU computation for training a feed-forward ANN

Dear Matlab Community,
I am experiencing a significant performance difference between CPU and GPU calculations for a feed-forward ANN used for regression, with accuracy differences of up to 20%. There is no randomness in the results, so the problem is not run-to-run variation but an absolute performance gap.
First, a short explanation of what I am doing. I try to identify the power consumption of individual devices by looking only at the aggregated power consumption (the sum of all device consumptions). A feed-forward net is trained for regression in a one-vs-all configuration for each device, so the device consumptions are iteratively extracted from the aggregated signal.
To visualize my problem I created a very simple dataset:
  • Three devices with only on/off behaviour and their power consumption, with T=20000 samples
  • The aggregated signal with T=20000 samples
  • The ANN is trained and tested for each device 1,2 and 3 separately
  • Train data and test data: (definitions were attached as images and are not reproduced here)
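The toy dataset described above can be sketched as follows. The per-device power levels here are hypothetical; only the on/off structure and T = 20000 come from the description:

```matlab
% Sketch of the toy dataset (power levels P are made up for illustration).
T = 20000;                  % number of samples
P = [100; 60; 40];          % hypothetical per-device power in watts
S = rand(3,T) > 0.5;        % random on/off state per device and sample
X = P .* S;                 % individual device consumptions (3 x T)
Xagg = sum(X,1);            % aggregated signal (1 x T)
% One-vs-all regression: for device k, the input is Xagg and the
% target is X(k,:).
```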
The training was done with the following architecture; the only difference between the two runs is whether the computation happens on the CPU or the GPU. In both cases a parpool is opened and the Parallel Computing Toolbox is used.
%% CPU
net_init = feedforwardnet([32 32 32]);
net = train(net_init,Xtrain,Ytrain,'useParallel','yes','useGPU','no','showResources','yes');
Ytest = net(Xtest);
%% GPU
net_init = feedforwardnet([32 32 32]);
net = train(net_init,Xtrain,Ytrain,'useParallel','yes','useGPU','only','showResources','yes');
Ytest = net(Xtest);
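One possible explanation (an assumption on my part, not verified here): feedforwardnet defaults to the Levenberg-Marquardt training function trainlm, which, as far as I recall from the documentation, is not supported for GPU calculations, so the GPU run may end up using a different algorithm than the CPU run. Forcing the same GPU-capable training function and the same RNG seed on both runs makes the comparison like-for-like:

```matlab
% Assumption: trainlm (the feedforwardnet default) may not run on the GPU,
% so CPU and GPU runs could use different training algorithms. Pin the
% training function and the seed so both runs start identically.
net_init = feedforwardnet([32 32 32]);
net_init.trainFcn = 'trainscg';    % scaled conjugate gradient, GPU-capable

rng('default');                    % identical initial weights for the CPU run
net_cpu = train(net_init,Xtrain,Ytrain,'useParallel','yes','useGPU','no');

rng('default');                    % identical initial weights for the GPU run
net_gpu = train(net_init,Xtrain,Ytrain,'useParallel','yes','useGPU','only');
```

If the accuracy gap disappears with a common trainFcn, the difference was the algorithm, not the hardware.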
The regression results (blue shows the ground truth) for both GPU (yellow) and CPU (red) are attached as an image; the GPU results are significantly worse.
The hardware used is the following:
CPU: Intel i7 4700k
GPU: 2x Nvidia Gtx 1080Ti in SLI
RAM: 64GB DDR4
OS: Linux Ubuntu 17.0 on SSD
Furthermore, the number of neurons was varied, which obviously affects the overall accuracy but not the performance difference between CPU and GPU.
rng('default') was set in various combinations to initialize the network weights. None of these attempts improved the accuracy on the GPU.
If anyone has had a similar problem and could share their ideas, it would be much appreciated.
Cheers
Pascal

1 Comment

I have noticed the same kind of behaviour on my small two-layer network. I am using scaled conjugate gradient as the training function for both GPU and CPU. Did you solve the problem? Best, Mikael


Answers (0)

Release: R2017a

Asked: 1 Mar 2019
Commented: 9 Jul 2019
