How to convert this vectorized code into GPU code for maximum speedup?

Answers (2)

I was able to get a marginal speedup with additional vectorization of the mask:
x = sum(I < cat(3, 120, 155, 160), 3) == true;
but otherwise you've done pretty well. You have to wonder, however, why you need to replicate the output on every channel. Why not discard the colour channels if you're working in grayscale anyway?
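To actually run this on the GPU, the same expression can be applied to a gpuArray. This is only a sketch under assumptions: it assumes the image variable is called `I`, is uint8 RGB, and that the mask is applied to all three channels as the question describes; the variable names `Ig` and `out` are illustrative.

```matlab
% Hypothetical GPU version of the vectorized mask (Parallel Computing Toolbox).
Ig  = gpuArray(I);                                 % transfer image to GPU memory
x   = sum(Ig < cat(3, 120, 155, 160), 3) == true;  % same mask, evaluated on the GPU
out = uint8(x) .* Ig;                              % replicate mask across channels (assumption)
out = gather(out);                                 % copy the result back to host memory
```

Note that `gpuArray` and `gather` each involve a host-device transfer, so they should sit outside any region you are timing if you want to measure only the computation.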

2 Comments

Actually, I want to show the speedup, or let's say the difference between normal CPU time and GPU time. If I first convert the picture to grayscale and use imtool, the CPU time is also very small, so I am not able to show a speedup. Hence I decided not to discard the R, G, and B channels.
Right, but then you're including the cost of replicating data in GPU memory and doing indexing, which is memory-bound and doesn't necessarily show the GPU in a great light.
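One way to see this effect is to time the kernel with and without the transfers. The sketch below is illustrative, not from the thread: the array size and threshold are made up, and it uses `timeit`/`gputimeit`, which are the standard MATLAB timing helpers.

```matlab
% Sketch: compare compute-only GPU time against GPU time including transfers.
I    = rand(2000, 2000, 3, 'single');              % synthetic image (assumption)
tCpu = timeit(@() sum(I < 0.5, 3));                % CPU baseline

Ig   = gpuArray(I);                                % data already resident on GPU
tGpu = gputimeit(@() sum(Ig < 0.5, 3));            % kernel time only

% Including the host-to-device copy and gather often dominates for
% memory-bound operations like this one:
tGpuWithCopy = gputimeit(@() gather(sum(gpuArray(I) < 0.5, 3)));
```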


The bottlenecks of the code are the darn "clear all" and the disk access in imwrite. Moving the computation to the GPU will not help with either of those.
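A sketch of how to keep those bottlenecks out of the measured region follows. The file names are placeholders (peppers.png is a demo image that ships with MATLAB), and the mask expression is the one from the answer above.

```matlab
% Sketch: time only the computation; keep 'clear all' and disk I/O outside.
% 'clear all' also wipes compiled functions from memory and is slow; avoid it.
I = imread('peppers.png');                        % read once, before timing

tic
x   = sum(I < cat(3, 120, 155, 160), 3) == true;  % vectorized mask from the answer
out = uint8(x) .* I;                              % apply mask to all channels
tElapsed = toc;                                   % measures only the computation

imwrite(out, 'result.png');                       % disk access stays untimed
```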

Asked: on 13 Apr 2017
Commented: Jan, on 26 Apr 2017
