MATLAB and Simulink for Edge AI

Deploy machine learning and deep learning applications to embedded systems

Simulate, test, and deploy machine learning and deep learning models to edge devices and embedded systems. Generate code for complete AI applications, including pre-processing and post-processing algorithms.

With MATLAB® and Simulink®, you can:

  • Generate optimized C/C++ and CUDA code for deployment to CPUs and GPUs
  • Generate synthesizable Verilog and VHDL code for deployment to FPGAs and SoCs
  • Accelerate inference with hardware-optimized deep learning libraries, including oneDNN, Arm Compute Library, and TensorRT
  • Incorporate pre-trained TensorFlow Lite (TFLite) models into applications deployed to hardware (see the sketch after this list)
  • Compress AI models for inference on resource-constrained hardware with tools for hyperparameter tuning, quantization, and network pruning
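
For example, a minimal sketch of running a pre-trained TFLite model from MATLAB, assuming the Deep Learning Toolbox Interface for TensorFlow Lite Models support package is installed; the model file name is a placeholder:

    % Load a pre-trained TFLite model and run inference (placeholder file name).
    net = loadTFLiteModel('mobilenet_v1.tflite');
    in  = single(rand(net.InputSize{1}));   % dummy input matching the model's input size
    out = predict(net, in);                 % this call also supports C/C++ code generation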

“From data annotation to choosing, training, testing, and fine-tuning our deep learning model, MATLAB had all the tools we needed—and GPU Coder enabled us to rapidly deploy to our NVIDIA GPUs even though we had limited GPU experience.”

Valerio Imbriolo, Drass Group

CPUs and Microcontrollers

Generate portable, optimized C/C++ code from trained machine learning and deep learning models with MATLAB Coder™ and Simulink Coder™. Optionally, include calls to vendor-specific deep learning inference libraries, such as oneDNN and the Arm® Compute Library, in the generated code.
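
A minimal sketch of this workflow, assuming a trained network saved in myNet.mat and a hypothetical entry-point function myPredict; passing 'arm-compute' instead of 'mkldnn' targets the Arm Compute Library:

    function out = myPredict(in) %#codegen
    % Hypothetical entry point: load the trained network once, then predict.
    persistent net
    if isempty(net)
        net = coder.loadDeepLearningNetwork('myNet.mat');
    end
    out = predict(net, in);
    end

    % Generate C++ library code that calls oneDNN for inference.
    cfg = coder.config('lib');
    cfg.TargetLang = 'C++';
    cfg.DeepLearningConfig = coder.DeepLearningConfig('mkldnn');   % oneDNN
    codegen -config cfg myPredict -args {ones(224,224,3,'single')} -report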


GPUs

Generate optimized CUDA® code for trained deep learning networks with GPU Coder™. Include pre-processing and post-processing along with your networks to deploy complete algorithms to desktops, servers, and embedded GPUs. Use NVIDIA® CUDA libraries, such as TensorRT™ and cuDNN, to maximize performance.
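
A minimal sketch, reusing the hypothetical myPredict entry point from above; passing 'cudnn' instead of 'tensorrt' targets cuDNN:

    % Generate CUDA library code that calls TensorRT for inference.
    cfg = coder.gpuConfig('lib');
    cfg.DeepLearningConfig = coder.DeepLearningConfig('tensorrt');
    cfg.DeepLearningConfig.DataType = 'fp16';   % optional reduced-precision inference
    codegen -config cfg myPredict -args {ones(224,224,3,'single')} -report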


FPGAs and SoCs

Prototype and implement deep learning networks on FPGAs and SoCs with Deep Learning HDL Toolbox™. Program deep learning processors and data movement IP cores using pre-built bitstreams for popular FPGA development kits. Generate custom deep learning processor IP cores and bitstreams with HDL Coder™ for deployment to any FPGA or ASIC.
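
A minimal prototyping sketch, assuming a trained network in net, a preprocessed input image in img, and a Xilinx® ZCU102 board with one of the shipped bitstreams:

    % Connect to the board, deploy the network, and run inference from MATLAB.
    hTarget = dlhdl.Target('Xilinx', 'Interface', 'Ethernet');
    hW = dlhdl.Workflow('Network', net, 'Bitstream', 'zcu102_single', 'Target', hTarget);
    hW.compile;                 % compile the network for the deep learning processor
    hW.deploy;                  % program the FPGA and load the network weights
    score = hW.predict(img);    % run inference on the board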


AI Model Compression

Reduce memory requirements for machine learning and deep learning models with size-aware hyperparameter tuning and quantization of weights, biases, and activations. Minimize the size of a deep neural network by pruning insignificant layer connections.
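
A minimal quantization sketch with the Deep Learning Toolbox Model Quantization Library support package, assuming a trained network net and datastores calDS and valDS for calibration and validation; the 'MATLAB' environment simulates int8 behavior without target hardware ('GPU', 'FPGA', and 'CPU' are also supported):

    % Quantize weights, biases, and activations to int8 and check accuracy.
    quantObj = dlquantizer(net, 'ExecutionEnvironment', 'MATLAB');
    calibrate(quantObj, calDS);               % collect dynamic ranges on calibration data
    valResults = validate(quantObj, valDS);   % compare accuracy before and after quantization
    qNet = quantize(quantObj);                % quantized network ready for deployment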