
    Run TensorFlow Lite Models with MATLAB and Simulink

    The Deep Learning Toolbox™ Interface for TensorFlow Lite enables the use of pretrained TensorFlow Lite (TFLite) models directly within MATLAB® and Simulink® for deep learning inference. Incorporate pretrained TFLite models with the rest of your application implemented in MATLAB or Simulink for development, testing, and deployment. Inference of pretrained TFLite models is executed by the TensorFlow Lite Interpreter, while the rest of the application code is executed by MATLAB or Simulink. Data is exchanged between MATLAB or Simulink and the TensorFlow Lite Interpreter automatically.

    Use MATLAB Coder™ or Simulink Coder™ to generate C++ code from applications containing TFLite models for deployment to target hardware. In the generated code, inference of the TFLite model is executed by the TensorFlow Lite Interpreter, while C++ code is generated for the remainder of the MATLAB or Simulink application, including pre- and postprocessing. For example, use a TensorFlow Lite model pretrained for object detection in a Simulink model to perform vehicle detection on streaming video input. For more information on prerequisites and getting started with TensorFlow Lite models in MATLAB and Simulink, refer to the documentation on this site and the Related Resources section.

    Published: 11 Jul 2022

    The Deep Learning Toolbox Interface for TensorFlow Lite enables new support for co-simulation and code generation with pretrained TensorFlow Lite models in MATLAB and Simulink. This workflow allows you to incorporate pretrained TFLite models, including classification and object detection networks, into larger applications for development and testing.

    During simulation, inference of pretrained TensorFlow Lite models is executed by the TensorFlow Lite interpreter, while the rest of the application code, including pre- and post-processing, is executed by MATLAB and Simulink. The data exchange between MATLAB and TensorFlow Lite happens automatically. For code generation, the application logic runs as C++ code generated by MATLAB Coder, while the network is, again, executed by the TensorFlow Lite interpreter.
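
    In script form, the basic simulation workflow is only a few lines. The sketch below is illustrative: the model file name is an assumption, the expected input type and size depend on the particular network, and imresize requires Image Processing Toolbox.

        % Load a pretrained TFLite model and run inference on one image (sketch).
        net = loadTFLiteModel('ssd_mobilenet_v3.tflite');   % illustrative file name
        img = imresize(imread('peppers.png'), [320 320]);   % resize to the network input size
        out = predict(net, single(img));                    % input type depends on the model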

    Now, let's look at an example of adding a TensorFlow Lite network to a Simulink model for simulation and code generation. In this example, we use a TFLite model to detect cars in a highway driving scene; the video is used to simulate input from a camera. Let's take a quick look at the input video.

    The TFLite model we use can be found in the TensorFlow Model Zoo on GitHub. This model is an object detector with a MobileNetV3 backbone and is quantized to operate on 8-bit integers. We've added the model file to the current folder for our project.
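
    Before wiring the model into Simulink, it can be useful to load it in MATLAB and inspect the object returned by loadTFLiteModel. The file name below is illustrative, and the commented values are what we'd expect for this particular detector.

        % Load the model and confirm the expected input dimensions (sketch).
        net = loadTFLiteModel('ssd_mobilenet_v3_quantized.tflite');  % illustrative file name
        disp(net.InputSize)    % expect 320x320x3 for this network
        disp(net.NumOutputs)   % SSD-style detectors typically expose multiple outputs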

    Now, let's move over to Simulink. In Simulink, we'll read in the video file with a From Multimedia File block and resize it to match the input dimensions accepted by the TFLite network, in this case 320 by 320. Next, the video signal is passed to the object detection network for inference in a MATLAB Function block. Finally, the network's outputs and the original video input are sent to a second MATLAB Function block for post-processing to apply bounding boxes, classifications, and scores.
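
    If the resize step is implemented in a MATLAB Function block rather than a dedicated Resize block, it might look like the following sketch (imresize requires Image Processing Toolbox):

        function out = preprocess(frame)
        % Resize the incoming camera frame to the 320x320 size the network expects.
        out = imresize(frame, [320 320]);
        end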

    Let's take a closer look at the MATLAB Function block containing the TensorFlow Lite model. First, we load the TensorFlow Lite model in our current folder into a persistent network object with the function loadTFLiteModel. Then we perform inference by passing the network object to the predict function. After inference, we pass the outputs of the network from the TFLite interpreter back to Simulink, where we perform post-processing inside a MATLAB Function block.
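
    A minimal sketch of that function block body, assuming a single image input and an illustrative model file name:

        function outputs = tflitePredict(img)
        % Load the TFLite model once into a persistent object, then run inference.
        persistent net;
        if isempty(net)
            net = loadTFLiteModel('ssd_mobilenet_v3.tflite');  % illustrative file name
        end
        outputs = predict(net, img);
        end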

    This script applies bounding boxes, class labels, and prediction scores to the four objects with the greatest scores in each video frame. Now let's run the Simulink model to co-simulate it with the TensorFlow Lite network.
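
    As a sketch of such post-processing (the output ordering and the normalized [ymin xmin ymax xmax] box convention are assumptions that depend on the specific detector), annotating the top four detections might look like this:

        function outImg = postprocess(frame, boxes, scores)
        % Keep the four highest-scoring detections and draw annotated boxes.
        [~, idx] = sort(scores, 'descend');
        idx = idx(1:min(4, numel(idx)));
        [h, w, ~] = size(frame);
        b = boxes(idx, :);  % assumed normalized [ymin xmin ymax xmax]
        % Convert to pixel-space [x y width height] rectangles.
        rects = [b(:,2)*w, b(:,1)*h, (b(:,4)-b(:,2))*w, (b(:,3)-b(:,1))*h];
        % insertObjectAnnotation is in Computer Vision Toolbox.
        outImg = insertObjectAnnotation(frame, 'rectangle', rects, scores(idx));
        end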

    As we can see, the bounding boxes and prediction scores programmed in the MATLAB Function block are applied to four objects in each frame. We can take things a step further and generate C++ code for the complete application with the Embedded Coder app.
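
    For the MATLAB side of this workflow, an equivalent command-line sketch with MATLAB Coder might look like the following. The entry-point name and input size are assumptions, and the build may need additional configuration, such as the path to the TensorFlow Lite library on the target; see the documentation.

        % Generate C++ library code for the TFLite entry-point function (sketch).
        cfg = coder.config('lib');
        cfg.TargetLang = 'C++';
        codegen -config cfg tflitePredict -args {ones(320, 320, 3, 'single')}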

    Taking a closer look at the generated code and the deep learning predict function, we can see calls to a function called invokeinterpreter, which is used to call into the TensorFlow Lite interpreter at each time step. Additionally, we can see that code is generated for the remainder of the model, for example, the post-processing steps.

    To get started using TensorFlow Lite models with MATLAB and Simulink, download the new Deep Learning Toolbox Interface for TensorFlow Lite from the Add-On Explorer in MATLAB or the File Exchange on MATLAB Central. For more information on prerequisites, installation, and additional examples, please refer to our documentation.