Setting Up the Prerequisite Products
To use GPU Coder™ for CUDA® code generation, set up the prerequisite software. To compile generated code, set up a compiler. To deploy standalone code, you must additionally set environment variables to locate the libraries the generated code uses. For more information about the software and versions required for GPU Coder, see Installing Prerequisite Products.
Set Up Compiler
To compile generated CUDA code, you must install a supported compiler on the host machine.
Windows
For Windows®, GPU Coder only supports Microsoft® Visual C++® compilers. If MATLAB® detects multiple compilers on your Windows system, it selects one as the default compiler. If the selected compiler is not compatible with GPU Coder, change the selection.
To change the default C++ compiler, use the mex -setup C++ command to display a message with links that you can use to set up a different compiler. The compiler that you choose remains the default until you call mex -setup C++. For more information, see Change Default Compiler. To change the default compiler for C, use mex -setup C.
Linux
On Linux® platforms, GPU Coder only supports the GCC/G++ compiler for the C/C++ language.
Set Environment Variables
To build standalone code on the host machine with MATLAB, you must set environment variables to locate the tools, compilers, and libraries required for code generation. To build code for NVIDIA® GPUs, set the environment variables to locate the CUDA Toolkit. To use third-party libraries in generated code, you must also set environment variables to locate these libraries.
To build CUDA MEX functions and accelerate Simulink® simulations on a GPU, GPU Coder uses the host compiler and NVIDIA libraries installed with MATLAB. You do not need to set the environment variables to generate MEX functions or accelerate simulations.
Note
GPU Coder does not support standalone deployment of generated CUDA MEX files using MATLAB Runtime.
In R2025a, the NVIDIA TensorRT™ library is not installed by default in MATLAB for generating MEX functions or accelerating Simulink simulations. To use the TensorRT library, you must install it by using gpucoder.installTensorRT.
Windows
To generate and deploy standalone code on Windows, set these environment variables.
On Windows, a space or special character in the path to the tools, compilers, and libraries can create issues during the build process. You must install third-party software in locations that do not contain spaces or change your Windows settings to enable the creation of short names for files, folders, and paths. For more information, see the Using Windows short names solution in MATLAB Answers.
| Variable Name | Description | Example Path |
|---|---|---|
| CUDA_PATH | Path to the CUDA Toolkit installation. | C:\Program Files\NVIDIA\CUDA\v12.2\ |
| NVIDIA_CUDNN | Path to the root folder of the cuDNN installation. | C:\Program Files\NVIDIA\CUDNN\v8.9\ |
| NVIDIA_TENSORRT | Path to the root folder of the NVIDIA TensorRT installation. | C:\Program Files\NVIDIA\CUDA\v12.2\TensorRT\ |
| PATH | Path to the CUDA executables. The CUDA Toolkit installer typically sets this value automatically. | C:\Program Files\NVIDIA\CUDA\v12.2\bin |
| | Path to the cuDNN binaries. | C:\Program Files\NVIDIA\CUDNN\v8.9\bin |
| | Path to the TensorRT libraries. | C:\Program Files\NVIDIA\CUDA\v12.2\TensorRT\lib |
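As a sketch, these variables can be set from a Windows Command Prompt by using setx. The paths below are the example paths from the table, not necessarily your install locations; note the caution above about spaces in paths, and note that setx truncates values longer than 1024 characters:

```
:: Sketch only — substitute the actual paths of your installation.
setx CUDA_PATH "C:\Program Files\NVIDIA\CUDA\v12.2"
setx NVIDIA_CUDNN "C:\Program Files\NVIDIA\CUDNN\v8.9"
setx NVIDIA_TENSORRT "C:\Program Files\NVIDIA\CUDA\v12.2\TensorRT"
:: Append the tool folders to the user PATH.
:: Caution: setx truncates values longer than 1024 characters.
setx PATH "%PATH%;C:\Program Files\NVIDIA\CUDA\v12.2\bin;C:\Program Files\NVIDIA\CUDNN\v8.9\bin;C:\Program Files\NVIDIA\CUDA\v12.2\TensorRT\lib"
```

Alternatively, set the same variables interactively through System Properties > Environment Variables, which avoids the setx length limit.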
Linux
To deploy standalone code on a Linux machine, set these environment variables.
| Variable | Description | Example Path |
|---|---|---|
| PATH | Path to the CUDA Toolkit executables. | /usr/local/cuda/bin |
| LD_LIBRARY_PATH | Path to the CUDA library folder. | /usr/local/cuda/lib64 |
| | Path to the cuDNN library folder. | /usr/local/cuda/lib64/ |
| | Path to the NVIDIA TensorRT library folder. | /usr/local/cuda/TensorRT/lib/ |
| | Path to the ARM® Compute Library folder on the target hardware. Set this value on the ARM target hardware. | /usr/local/arm_compute/lib/ |
| NVIDIA_CUDNN | Path to the root folder of the cuDNN library installation. | /usr/local/cuda/ |
| NVIDIA_TENSORRT | Path to the root folder of the NVIDIA TensorRT library installation. | /usr/local/cuda/TensorRT/ |
| ARM_COMPUTELIB | Path to the root folder of the ARM Compute Library installation on the ARM target hardware. Set this value on the ARM target hardware. | /usr/local/arm_compute |
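For example, assuming the example install locations from the table (a sketch; adjust the paths to match your installation), you can export the variables in a shell startup file such as ~/.bashrc:

```shell
# Sketch: append to ~/.bashrc, assuming the example install locations above.
export PATH="/usr/local/cuda/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/local/cuda/TensorRT/lib:${LD_LIBRARY_PATH}"
export NVIDIA_CUDNN="/usr/local/cuda"
export NVIDIA_TENSORRT="/usr/local/cuda/TensorRT"
# On the ARM target hardware only:
# export ARM_COMPUTELIB="/usr/local/arm_compute"
# export LD_LIBRARY_PATH="/usr/local/arm_compute/lib:${LD_LIBRARY_PATH}"
```

Run `source ~/.bashrc` or open a new shell so that MATLAB, when started from that shell, inherits the variables.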
To deploy to an NVIDIA Jetson™ board, set environment variables by using the Hardware Setup tool. Alternatively, set environment variables in the .bashrc file on the board. For more information, see Install Required Libraries on NVIDIA Boards Using the Terminal.
Verify Setup
To verify that your development computer has all the tools and configuration required for GPU code generation, use the coder.checkGpuInstall function. This function performs checks to verify that your environment has all the third-party tools and libraries required for GPU code generation. Create a coder.gpuEnvConfig object to specify which checks coder.checkGpuInstall runs.
In the MATLAB Command Window, enter:
gpuEnvObj = coder.gpuEnvConfig;
gpuEnvObj.BasicCodegen = 1;
gpuEnvObj.BasicCodeexec = 1;
gpuEnvObj.DeepLibTarget = 'tensorrt';
gpuEnvObj.DeepCodeexec = 1;
gpuEnvObj.DeepCodegen = 1;
results = coder.checkGpuInstall(gpuEnvObj)
The output shown here is representative. Your results might differ.
Compatible GPU : PASSED
CUDA Environment : PASSED
Runtime : PASSED
cuFFT : PASSED
cuSOLVER : PASSED
cuBLAS : PASSED
cuDNN Environment : PASSED
TensorRT Environment : PASSED
Basic Code Generation : PASSED
Basic Code Execution : PASSED
Deep Learning (TensorRT) Code Generation : PASSED
Deep Learning (TensorRT) Code Execution : PASSED
results =
struct with fields:
gpu: 1
cuda: 1
cudnn: 1
tensorrt: 1
basiccodegen: 1
basiccodeexec: 1
deepcodegen: 1
deepcodeexec: 1
tensorrtdatatype: 1
Alternatively, use the GPU Environment Check app to check the GPU environment. To open the app, enter gpucoderSetup at the MATLAB command prompt.
See Also
Topics
- Installing Prerequisite Products
- Prerequisites for Generating Code for NVIDIA Boards
- The GPU Environment Check and Setup App
- Generate Code by Using the GPU Coder App
- Generate Code Using the Command Line Interface
- Code Generation for Deep Learning Networks by Using cuDNN
- Code Generation for Deep Learning Networks by Using TensorRT