Alexander Schreiber, MathWorks
Learn how to use MATLAB® to design, develop, and deploy computer vision and deep learning applications on NVIDIA® Tesla® GPUs and Tegra® system-on-chips, whether on your local machine, in a cluster, or on embedded systems such as the NVIDIA Jetson™ TK1/TX1/TX2 and DRIVE™ PX platforms.

The workflow starts with algorithm design in MATLAB. The deep learning network is defined in MATLAB and trained using MATLAB's GPU and parallel computing support, on a desktop computer, a local compute cluster, or in the cloud. The trained network is then augmented with traditional computer vision techniques, and the application is verified in MATLAB. Finally, a compiler automatically generates portable, highly optimized CUDA® code from the MATLAB algorithm, which is deployed to the Tegra platform via cross-compilation. The execution speed of the auto-generated CUDA code is roughly 2.5x faster than Apache MXNet™, 5x faster than Facebook Caffe2, 7x faster than Google TensorFlow™, and comparable to an optimized TensorRT™ implementation.
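A minimal sketch of the train-then-generate workflow described above, assuming Deep Learning Toolbox and GPU Coder are installed; the data variables (`XTrain`, `YTrain`, `layers`), file name `net.mat`, and entry-point function `myDetect` are hypothetical placeholders, not from the talk:

```matlab
% Train a network in MATLAB (Deep Learning Toolbox); the layer stack and
% training data here are placeholders.
opts = trainingOptions('sgdm', ...
    'ExecutionEnvironment','gpu', ...   % train on a local GPU
    'MaxEpochs',10);
net = trainNetwork(XTrain, YTrain, layers, opts);
save net.mat net

% Entry-point function to compile (e.g. in myDetect.m):
%   function out = myDetect(in)
%       persistent mynet;
%       if isempty(mynet)
%           mynet = coder.loadDeepLearningNetwork('net.mat');
%       end
%       out = predict(mynet, in);
%   end

% Generate CUDA code with GPU Coder for cross-compilation to a Tegra target:
cfg = coder.gpuConfig('lib');            % generate a static library
cfg.GpuConfig.ComputeCapability = '6.2'; % e.g. Jetson TX2
codegen -config cfg myDetect -args {ones(224,224,3,'single')}
```

The generated CUDA library can then be linked into an application and cross-compiled for the embedded board, as the talk outlines.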
Prior to R2019a, MATLAB Parallel Server was called MATLAB Distributed Computing Server.
Recorded: 17 Apr 2018