Direct GPU-to-GPU Communication with Parallel Computing Toolbox / SPMD
I am using spmd to enable parallel computing with multiple GPUs on one workstation. Basically, the GPUs do some calculation, broadcast their results, update their parameters, and iterate. The problem is that using labSend (actually, gplus in my case) to aggregate and broadcast the results is pretty slow: it first pulls the results off the GPU into system memory, sends them to the other workers, and then uploads them to the other GPUs.
I understand that CUDA now has peer-to-peer memory access capability, so that multiple GPUs can directly access each other's memory: http://www.nvidia.com/docs/IO/116711/sc11-multi-gpu.pdf This is accomplished with a function such as cudaMemcpyPeerAsync().
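For reference, the single-process version of such a copy (plain CUDA, outside MATLAB) looks roughly like this; the device indices and buffer size are just placeholders:

```cuda
#include <cuda_runtime.h>

int main(void)
{
    const size_t n = 1 << 20;              /* illustrative buffer size */
    float *src = NULL, *dst = NULL;

    /* One buffer on each of two devices. */
    cudaSetDevice(0);
    cudaMalloc((void **)&src, n * sizeof(float));
    cudaSetDevice(1);
    cudaMalloc((void **)&dst, n * sizeof(float));

    /* Enable direct peer access where the hardware supports it; the copy
       below works either way, but without P2P it is staged through the host. */
    int canAccess = 0;
    cudaSetDevice(0);
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (canAccess) {
        cudaDeviceEnablePeerAccess(1, 0);
    }

    /* Asynchronous copy: src on device 0 -> dst on device 1. */
    cudaMemcpyPeerAsync(dst, 1, src, 0, n * sizeof(float), 0);
    cudaDeviceSynchronize();

    cudaFree(src);
    cudaSetDevice(1);
    cudaFree(dst);
    return 0;
}
```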
Thus, I would like to have a gplus() or labSend() that copies a gpuArray directly to the memory of another GPU on another worker.
Is this possible today? If not, is it something you are working on?
Thanks, Jon
Answers (1)
Edric Ellis on 27 Apr 2015 (edited 27 Apr 2015)
Unfortunately, as you observe, Parallel Computing Toolbox currently has no built-in means of achieving this. I believe peer-to-peer memory copying can be used across multiple processes within a single node, which means you could use the GPU MEX interface to implement the copy yourself.
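For example, one rough (untested) shape such an approach could take is sketched below: a CUDA MEX-file that uses the mxGPUArray interface to get at the device pointer behind a gpuArray and exports a CUDA IPC handle, which your MATLAB code could then pass to a peer worker (e.g. with labSend); the receiving worker would open it with cudaIpcOpenMemHandle in its own MEX file and copy with cudaMemcpyPeer / cudaMemcpyPeerAsync. The file name is made up, and whether MATLAB's pooled gpuArray allocations can be exported this way is something you would need to verify.

```cuda
/* ipcExportGpuArray.cu -- hypothetical helper, not part of PCT.
 * Takes one gpuArray input and returns its cudaIpcMemHandle_t as a
 * uint8 vector that MATLAB code could ship to a peer worker on the
 * same node (e.g. with labSend).
 */
#include "mex.h"
#include "gpu/mxGPUArray.h"
#include <cuda_runtime.h>
#include <string.h>

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    if (nrhs != 1) {
        mexErrMsgIdAndTxt("ipcExportGpuArray:nrhs", "One gpuArray input required.");
    }
    if (mxInitGPU() != MX_GPU_SUCCESS) {
        mexErrMsgIdAndTxt("ipcExportGpuArray:init", "Could not initialize the GPU API.");
    }

    mxGPUArray const *A = mxGPUCreateFromMxArray(prhs[0]);
    void const *devPtr  = mxGPUGetDataReadOnly(A);

    /* Ask the CUDA runtime for an inter-process handle to this allocation.
     * This assumes the underlying buffer is IPC-exportable; MATLAB's pooled
     * gpuArray allocations may not be, so treat this as an experiment. */
    cudaIpcMemHandle_t handle;
    cudaError_t err = cudaIpcGetMemHandle(&handle, (void *)devPtr);
    if (err != cudaSuccess) {
        mxGPUDestroyGPUArray(A);
        mexErrMsgIdAndTxt("ipcExportGpuArray:ipc", "%s", cudaGetErrorString(err));
    }

    /* Return the opaque handle bytes; the peer worker opens them with
     * cudaIpcOpenMemHandle in its own MEX file and then copies with
     * cudaMemcpyPeer / cudaMemcpyPeerAsync. */
    plhs[0] = mxCreateNumericMatrix(1, sizeof(handle), mxUINT8_CLASS, mxREAL);
    memcpy(mxGetData(plhs[0]), &handle, sizeof(handle));

    mxGPUDestroyGPUArray(A);
}
```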