Can this GPU code snippet be redone without nested loops?
Hello, I have two matrices: matrix1 is a logical array of 1s and 0s (1000 x 800); matrix2 is a different logical array (2000 x 800).
I am essentially taking each row of matrix1 and, for each row of matrix2, calculating the number of common elements divided by the total number of elements in either row. Both of these arrays are gpuArrays. What I have so far:
for j = gpuArray.colon(1, x)
    for k = gpuArray.colon(1, y)
        output(j,k) = sum(matrix1(j,:) & matrix2(k,:)) / sum(matrix1(j,:) | matrix2(k,:));
    end
end
This runs very fast for small values of x and y, but once x and y are large it takes dramatically longer to run on the GPU.
I am investigating the use of repmat here, but I am not sure how to implement it. Any ideas? Or is there another way to get rid of the nested for loops?
Thanks
Accepted Answer
More Answers (1)
Sean de Wolski
on 11 Nov 2013
Edited: Sean de Wolski
on 11 Nov 2013
Is output preallocated?
Before the loops:
output = gpuArray.zeros(x,y);
This should speed it up dramatically.
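Beyond preallocation, the loops can be removed entirely: the numerator is an intersection count, which for logical rows is a single matrix multiply, and the union count follows from |A or B| = |A| + |B| - |A and B|. A sketch of this vectorized approach, assuming matrix1 (x-by-n) and matrix2 (y-by-n) are logical gpuArrays (the cast to single and bsxfun are my choices, not from the thread):

```matlab
A = single(matrix1);            % cast so MTIMES runs on the GPU
B = single(matrix2);
inter = A * B.';                % inter(j,k) = sum(matrix1(j,:) & matrix2(k,:))
r1 = sum(A, 2);                 % row counts of 1s in matrix1 (x-by-1)
r2 = sum(B, 2);                 % row counts of 1s in matrix2 (y-by-1)
uni = bsxfun(@plus, r1, r2.') - inter;   % |A| + |B| - |A and B|
output = inter ./ uni;          % ratio for every row pair at once
```

This does one x-by-n times n-by-y multiply instead of x*y kernel launches, which is where GPUs do well; bsxfun handles the row/column expansion on releases of that era.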
3 Comments
Amr Ragab
on 11 Nov 2013
Sean de Wolski
on 11 Nov 2013
Edited: Sean de Wolski
on 11 Nov 2013
Do matrix1 and matrix2 already live on the gpu, i.e. are they gpuArrays?
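To answer that question programmatically, a quick check (a sketch; the variable names follow the thread) is:

```matlab
% Move the inputs to the GPU only if they are not already there
if ~isa(matrix1, 'gpuArray'), matrix1 = gpuArray(matrix1); end
if ~isa(matrix2, 'gpuArray'), matrix2 = gpuArray(matrix2); end
```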
Amr Ragab
on 11 Nov 2013