Video length: 20:57

Understanding and Verifying Your AI Models

Neural networks achieve state-of-the-art performance on a wide variety of tasks, including image classification, object detection, speech recognition, and machine translation. This impressive performance has driven interest in using neural networks in safety-critical industries such as aerospace, automotive, and medical. While these industries have established processes for verifying and validating traditional software, it is often unclear how to verify the reliability of neural networks. In this talk, we explore a comprehensive workflow for verifying and validating AI models. Using an image classification example, we discuss explainability methods for understanding the inner workings of neural networks. Learn how the Deep Learning Toolbox™ Verification Library enables you to formally verify the robustness properties of a network and to determine whether the data your model sees at inference time is out of distribution. By thoroughly testing the AI component against its requirements, you can ensure that the model is fit for purpose in applications where reliability is of the utmost importance.
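
The sketch below gives a flavor of how the three capabilities mentioned above might fit together in MATLAB. It is a minimal sketch, not code from the talk: gradCAM is a Deep Learning Toolbox function, and verifyNetworkRobustness, networkDistributionDiscriminator, and isInNetworkDistribution are Deep Learning Toolbox Verification Library functions, but the variables net (a trained dlnetwork supported by the verification functions), X (a test image in the network's input range), label (the true class), XTrain (in-distribution training images), and the perturbation size epsilon are placeholder assumptions.

% Explainability: Grad-CAM heat map showing which pixels drive the class.
scoreMap = gradCAM(net, X, label);
imshow(X)
hold on
imagesc(scoreMap, "AlphaData", 0.5)
colormap jet

% Formal robustness verification: check that every image within an
% L-infinity distance epsilon of X receives the same classification.
epsilon = 0.01;
XLower = dlarray(single(X) - epsilon, "SSCB");
XUpper = dlarray(single(X) + epsilon, "SSCB");
result = verifyNetworkRobustness(net, XLower, XUpper, label)
% result is categorical: "verified", "violated", or "unproven"

% Out-of-distribution detection: calibrate a discriminator on
% in-distribution training data, then test inference-time inputs.
discriminator = networkDistributionDiscriminator(net, XTrain, [], "energy");
tf = isInNetworkDistribution(discriminator, dlarray(single(X), "SSCB"));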

Published: 5 May 2023