Using gpuArrays to speed up a simulation (utilizing an NVIDIA GPU)
I have a MATLAB simulation which updates an array:

Array = zeros(1,1000);

as follows:

for j = 1:100000
    Array = Array + rand(1,1000);
end
My question is the following: the loop is sequential in j, so the iterations themselves cannot be parallelized, but the different slots of the array are updated independently. So, naturally, MATLAB performs elementwise array operations such as this in parallel across all the cores of the CPU.
I wish to have the same calculation performed on my NVIDIA GPU in order to speed it up, taking advantage of the much larger number of cores there.
The problem is that naively doing

tic
Array = gpuArray(zeros(1,1000));
for j = 1:100000
    Array = Array + gpuArray(rand(1,1000));
end
toc

makes the calculation roughly 8 times slower!
What am I doing wrong?
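(For context: a likely culprit in code of this shape is that each call to gpuArray(rand(1,1000)) generates the random numbers on the CPU and then copies them to the GPU, so the loop performs 100000 small host-to-device transfers, and transfer overhead swamps the arithmetic. A sketch of the standard remedy, generating the random numbers directly on the device, is below; it assumes the Parallel Computing Toolbox and a supported NVIDIA GPU, and even then an array of only 1000 elements may be too small to show a GPU speedup.)

```matlab
tic
Array = zeros(1,1000,'gpuArray');   % allocate directly on the GPU
for j = 1:100000
    % rand(...,'gpuArray') generates the numbers on the device,
    % so no per-iteration CPU-to-GPU transfer takes place
    Array = Array + rand(1,1000,'gpuArray');
end
wait(gpuDevice);                    % ensure GPU work has finished before timing
toc
Result = gather(Array);             % copy the result back to the CPU when needed
```

Note that tic/toc can be misleading for GPU code because kernel launches are asynchronous; MATLAB's gputimeit is the more reliable way to time gpuArray computations.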