Answered
Complex numbers in ifft(fft(x))
I'm not sure but I assume it's because the GPU FFT computation is non-deterministic. The order of operations is not preserved be...

8 years ago | 1

| accepted

Answered
GPU Out of memory on device.
Reduce the |'MiniBatchSize'| option to |classify|.
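A minimal sketch of that call (the network |net| and datastore |imds| are placeholder names):

```matlab
% Smaller mini-batches lower the peak GPU memory classify needs,
% at the cost of some throughput. Halve the value until it fits.
YPred = classify(net, imds, 'MiniBatchSize', 32);
```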

8 years ago | 5

| accepted

Answered
Does matlab support parallelized loops on GPU
Scrap |A = repmat(A,N,1)| and instead repeat along dim 3. Then use |pagefun|. A = repmat(A,1,1,N); inv_A = pagefun(@mldi...
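The truncated snippet above might be completed like this (the matrix sizes are illustrative):

```matlab
% Batch-solve N linear systems on the GPU with one pagefun call.
N = 10;
A = rand(4, 'gpuArray');         % 4x4 coefficient matrix
b = rand(4, 1, N, 'gpuArray');   % N right-hand sides along dim 3
A = repmat(A, 1, 1, N);          % replicate A along the page dimension
x = pagefun(@mldivide, A, b);    % one 4x1 solution per page, in parallel
```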

8 years ago | 1

| accepted

Answered
How to solve following CUDA error "matconvnet"?
Unknown CUDA errors like this are nearly always due to a kernel timeout. If you have a graphics card that is driving your displa...

8 years ago | 0

Answered
When using multiple GPUs get out of memory error at less than max single GPU memory x number of GPUs
Make sure your default parallel pool only opens one worker per GPU, otherwise many workers will all try to share the same GPU, h...

8 years ago | 0

| accepted

Answered
Coding a deep learning network
There is no current CPU support for codegen for deep networks. You could use MATLAB Compiler instead, which can generate DLLs.

8 years ago | 0

Answered
Solving multiple systems of equations using GPU and iterative methods
Sparse |gpuArrays| have been supported since R2015b and |bicg| since R2016b, so you should just call |bicg| with the original sp...

8 years ago | 0

| accepted

Answered
How to run "Train a Convolutional Neural Network for Regression" example in single precision on laptop with NVIDIA GPU?
Laptop GPUs are severely limited in power and performance and are often no better for compute than the CPU, even in single preci...

8 years ago | 0

Answered
What are the names of the fully connected layers in googlenet?
net = googlenet; lg = getLayerGraph(net); for i = 1:numel(lg.Layers) name = sprintf('%s_%d', class(lg.Layers{i}),...
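A simpler route is to find fully connected layers by class rather than by name (a sketch; in MATLAB's GoogLeNet the final classifier is typically the only such layer):

```matlab
% List every fully connected layer in the network by name.
net  = googlenet;
isFC = arrayfun(@(l) isa(l, 'nnet.cnn.layer.FullyConnectedLayer'), ...
                net.Layers);
disp({net.Layers(isFC).Name})
```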

8 years ago | 1

Answered
Select a group of GPUs that are discontinuous in gpuDevice
There are two ways: The best way is to set environment variable CUDA_VISIBLE_DEVICES to 0,2,3 before you start MATLAB, or as ...
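If the truncated alternative is the in-MATLAB route via |setenv| (an assumption on my part), it might look like this; note it only takes effect if it runs before the session's first GPU call:

```matlab
% Expose only physical devices 0, 2 and 3 (nvidia-smi numbering).
% Must run before the first gpuDevice/gpuArray call of the session.
setenv('CUDA_VISIBLE_DEVICES', '0,2,3');
gpuDeviceCount   % MATLAB now sees three devices, numbered 1 to 3
```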

8 years ago | 0

| accepted

Answered
Titan V using huge amount of memory
To avoid this cost, do not use the same GPU on multiple workers. Although the GPU is a data parallel system, it is not task para...

8 years ago | 1

Answered
Out of memory error in MATLAB
Clearly you can't store the activations for your entire dataset in memory. fc7 of AlexNet has 4096 single precision outputs. Tha...
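A back-of-the-envelope check of that claim (the 50,000-image dataset size is a made-up figure for illustration):

```matlab
% Memory needed to hold fc7 activations for an entire dataset.
featuresPerImage = 4096;                         % width of AlexNet's fc7
bytesPerSingle   = 4;                            % single precision
numImages        = 50000;                        % hypothetical dataset
perImage   = featuresPerImage * bytesPerSingle;  % 16384 bytes per image
totalBytes = perImage * numImages;
fprintf('%.2f GiB of activations\n', totalBytes / 2^30);   % about 0.76 GiB
```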

8 years ago | 0

Answered
Titan V using huge amount of memory
The CUDA runtime in the latest drivers appears to be allocating a huge amount of main memory when it is initialized on a Volta c...

8 years ago | 0

Answered
Matlab limits GPU utilization?
I'm not sure I understand why it was recommended to you to move to Windows - can you explain? The way the Task Manager define...

8 years ago | 0

Answered
GTX1060 for deep learning semantic image segmentation
Yes, 3GB isn't enough for this example, sorry. SegNet is just too high resolution a network. You could try training on the CPU. ...

8 years ago | 0

Answered
How to apply a function to each column of a 3D array?
There isn't anything supported for |gpuArray| that can take any generic user function in this way. If |test_f| contains operatio...

8 years ago | 0

Answered
griddedInterpolant not a built-in function on a GPU
I'm not sure what you're asking, but if you're asking "am I right that I have to rewrite my code?" then yes, you are right.

8 years ago | 0

| accepted

Answered
Why is Titan V training performance so poor?
I am posting here the same information with which I responded to your tech support request. Perhaps others will find this useful...

8 years ago | 1

| accepted

Answered
Using Arrays inside arrayfun()
You can't do any array or matrix operations in a GPU |arrayfun| kernel. You can access the contents of an array that is present ...

8 years ago | 0

Answered
Debugging CUDA in Matlab without having to restart
If you crash your card, you often need to reload the driver, in much the same way that if a program segfaults, it has to be rest...

8 years ago | 0

Answered
Qt GUI wrapper for MATLAB-based DLLs to run on GPU
MATLAB Compiler will work correctly for |gpuArray| code. You don't need to write your own CUDA kernels, you just write MATLAB co...

8 years ago | 0

| accepted

Answered
GPU memory usage using Matlab deep learning
While you are training a network it needs a lot of working memory. The deeper the network, the more memory it needs. Which netwo...

8 years ago | 0

Answered
What happen to the CUDA cache mem?
The result is stored as the variable |ans|, which means you have less memory the second time round.
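A minimal illustration at the command prompt:

```matlab
fft(gpuArray.rand(4096));   % unassigned result is silently kept in ans
clear ans                   % release that GPU memory before the next run
```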

8 years ago | 1

Answered
GPU problem with Matlab 2017b
To use a Fermi card with R2017b you will first need to install Update 2. Follow the link in this bug report: <https://www.mathwo...

8 years ago | 1

Answered
How are gpuArrays handled inside parfor?
Yes, they can all use the same GPU. By default, anything you run on the same GPU from different processes will run in serial. Ho...

8 years ago | 1

Answered
Why do I receive the following error?
In <http://www.vlfeat.org/matconvnet/install/ MatConvNet's own installation documentation>, which I found by typing "MatConvNet ...

8 years ago | 1

| accepted

Answered
Choose a graphic card to train SegNet for deep learning
The GeForce 10 series will all work fine, they just have different capabilities and constraints. On a cheaper GPU you may need t...

8 years ago | 1

Answered
Geforce 1080ti vs Quadro P4000 for neural networks and deep learning
On the face of it the GTX 1080Ti is an all-round better performing card than the Quadro P4000 for deep learning applications, wi...

8 years ago | 3

| accepted

Answered
Issue CUDA_ERROR_LAUNCH_FAILED and reset(gpuDevice) doesn't work
In the last few days MATLAB R2017b was updated with a bug fix for the Volta architecture. This requires all users of Deep Networ...

8 years ago | 0

Answered
Trying to make a MEX file of CUDA code which copies elements of one variable into another but getting these errors.
I think you meant to declare |W| as |double * const W| rather than |double const * W|. You can't modify the data being pointed t...

8 years ago | 0
