Install and Setup Prerequisites for NVIDIA Boards

Target Requirements

Hardware

MATLAB® Coder™ Support Package for NVIDIA® Jetson® and NVIDIA DRIVE® Platforms supports the following development boards:

  • NVIDIA Jetson Xavier™ NX.

  • NVIDIA Jetson AGX Xavier.

  • NVIDIA Jetson Nano.

  • NVIDIA Jetson TX2.

  • NVIDIA Jetson TX1.

  • NVIDIA DRIVE PX2.

The support package uses an SSH connection over TCP/IP to execute commands while building and running the generated code on the DRIVE or Jetson platforms. Connect the target platform to the same network as the host computer. Alternatively, use an Ethernet crossover cable to connect the board directly to the host computer.
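Once the board is reachable over the network, you can verify the connection from MATLAB by creating a hardware connection object. The IP address and credentials below are placeholders; substitute the values for your board:

```matlab
% Create a connection to the Jetson board over SSH.
% '192.168.1.15', 'ubuntu', and 'ubuntu' are example values --
% replace them with your board's address, user name, and password.
hwobj = jetson('192.168.1.15','ubuntu','ubuntu');

% For a DRIVE platform, use the drive function instead:
% hwobj = drive('192.168.1.16','nvidia','nvidia');
```

Creating the object succeeds only if the SSH connection can be established, so this also serves as a quick connectivity check.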

Note

On the Windows® platform, open port 18735 in the Windows Firewall settings. This port is needed to establish a connection to the MATLAB server running on the embedded platforms.
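One way to open the port (the rule name here is an arbitrary placeholder; any equivalent inbound Windows Firewall rule works) is to run the following from an elevated Command Prompt:

```
netsh advfirewall firewall add rule name="MATLAB NVIDIA target" dir=in action=allow protocol=TCP localport=18735
```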

Software

  • Use the JetPack (NVIDIA) or the DriveInstall (NVIDIA) software to install the OS image, developer tools, and the libraries required for developing applications on the Jetson or DRIVE platforms. You can use the Component Manager in the JetPack or the DriveInstall software to select the components to be installed on the target hardware. For installation instructions, refer to the NVIDIA board documentation. At a minimum, you must install:

    • CUDA® toolkit.

    • cuDNN library.

    • TensorRT library.

    • OpenCV library.

    • GStreamer library (v1.0 or higher) for deployment of the VideoReader function.

    The MATLAB Coder Support Package for NVIDIA Jetson and NVIDIA DRIVE Platforms has been tested with the following JetPack and DRIVE SDK versions:

    Hardware Platform                              Software Version
    Jetson Xavier NX, AGX Xavier, TX2/TX1, Nano    JetPack 4.4.1
    DRIVE                                          DRIVE SDK 5.0.10.3-12606092

  • Install the Simple DirectMedia Layer (SDL v1.2) library, V4L2 library, and V4L2 utilities for running the webcam examples. You must also install the development packages for these libraries.

  • For deploying the Audio File Read Simulink® block, install the Sound eXchange (SoX) utility and its development and format libraries.

    For example, on Ubuntu®, use the apt-get command to install these libraries.

    sudo apt-get install libsdl1.2-dev v4l-utils sox libsox-fmt-all libsox-dev
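After installation, you can spot-check (this is a suggested sanity check, not an exhaustive validation) that the core components are present by running commands such as these on the target:

```
nvcc --version                 # CUDA toolkit version
dpkg -l | grep -i libcudnn     # installed cuDNN packages
dpkg -l | grep -i tensorrt     # installed TensorRT packages
gst-inspect-1.0 --version      # GStreamer version
```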
    

Environment Variable on the Target

The support package uses environment variables to locate the necessary tools, compilers, and libraries required for code generation. Ensure that the following environment variables are set.

Variable Name      Default Value            Description
PATH               /usr/local/cuda/bin      Path to the CUDA toolkit executables on the Jetson or DRIVE platform.
LD_LIBRARY_PATH    /usr/local/cuda/lib64    Path to the CUDA library folder on the Jetson or DRIVE platform.

Ensure that the required environment variables are accessible from non-interactive SSH logins. For example, you can add export commands at the beginning of the $HOME/.bashrc shell configuration file. Place them before any statement that returns early for non-interactive shells (the default Ubuntu .bashrc contains such a check), so that the variables are set for non-interactive sessions as well.

 Example .bashrc File
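For example, the relevant lines in $HOME/.bashrc could look like this (assuming CUDA is installed in the default /usr/local/cuda location; adjust the paths if your installation differs):

```shell
# Add the CUDA toolkit executables and libraries to the search paths.
# Adjust /usr/local/cuda if CUDA is installed elsewhere on your board.
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
```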

Alternatively, you can set system-wide environment variables in the /etc/environment file. You must have sudo privileges to edit this file.

 Example /etc/environment File
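As a sketch, the /etc/environment file on the target could contain entries like the following. Note that /etc/environment is not a shell script: values are plain KEY="value" assignments with no export keyword, and variable expansion is not supported, so PATH must be written out in full. The PATH value shown assumes the Ubuntu defaults plus the default CUDA folder:

```
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/cuda/bin"
LD_LIBRARY_PATH="/usr/local/cuda/lib64"
```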

Input Devices

  • Camera connected to the USB or CSI port of the target hardware.

  • USB audio device for recording and playback of audio signals.

Development Host Requirements

MathWorks Products

  • MATLAB (required).

  • MATLAB Coder (required).

  • GPU Coder™ (required for GPU targeting).

  • Parallel Computing Toolbox™ (required for GPU targeting).

  • Simulink (required for generating code from Simulink models).

  • Computer Vision Toolbox™ (recommended).

  • Deep Learning Toolbox™ (required for deep learning).

  • Embedded Coder® (recommended).

  • Image Processing Toolbox™ (recommended).

  • Simulink Coder (required for generating code from Simulink models).

  • GPU Coder Interface for Deep Learning Libraries support package (required for deep learning).

Third-Party Products

  • NVIDIA GPU enabled for CUDA.

  • CUDA toolkit and driver.

  • C/C++ Compiler.

  • CUDA Deep Neural Network library (cuDNN).

  • NVIDIA TensorRT – high performance deep learning inference optimizer and run-time library.

For information on the version numbers for the compiler tools and libraries, see Installing Prerequisite Products (GPU Coder). For information on setting up the environment variables on the host development computer, see Setting Up the Prerequisite Products (GPU Coder).