Deploy Model with AXI4-Stream Video Interface in Zynq Workflow
This example shows how to use the AXI4-Stream Video interface to enable high-speed video streaming on the generated HDL IP core.
Before You Begin
To run this example, you must have the following software and hardware installed and set up:
HDL Coder™ Support Package for Xilinx® Zynq® Platform
Embedded Coder® Support Package for Xilinx Zynq Platform
Computer Vision System Toolbox Support Package for Xilinx Zynq-Based Hardware
Vision HDL Toolbox
Xilinx Vivado® Design Suite, with a supported version listed in HDL Language Support and Supported Third-Party Tools and Hardware
FMC HDMI I/O card (FMC-HDMI-CAM or FMC-IMAGEON)
To set up the ZedBoard, refer to the Set up Zynq hardware and tools section in the example Getting Started with Targeting Xilinx Zynq Platform.
This example shows how to:
Model a video streaming algorithm using the streaming pixel protocol.
Generate an HDL IP core with AXI4-Stream Video interface.
Integrate the generated IP core into a ZedBoard video reference design with access to HDMI interfaces.
Use the ARM® processor to tune the parameters on the FPGA fabric to change the live video output.
Create your own custom video reference design.
The picture above is a high-level architecture diagram that shows how the generated HDL DUT IP core works in a pre-defined video reference design. In this diagram, the HDL DUT IP block is the IP core that is generated from the IP core generation workflow. The rest of the diagram represents the pre-defined video reference design, which contains other IPs to handle the HDMI input and output interfaces.
The HDL DUT IP processes a video stream coming from the HDMI input IP, generates an output video stream, and sends it to the HDMI output IP. All of these video streams are transferred over the AXI4-Stream Video interface.
The HDL DUT IP can also include an AXI4-Lite interface for parameter tuning. Compared to the AXI4-Lite interface, the AXI4-Stream Video interface transfers data much faster, making it more suitable for the data path of the video algorithm.
Set up Zynq hardware and tools
1. Set up the ZedBoard and the FMC HDMI I/O card as shown in the figure below. To learn more about the ZedBoard hardware setup, please refer to the board documentation.
1.1. Connect the USB UART cable, the Ethernet cable and the power cable as shown in the figure above (marker 1 to 3).
1.2. Make sure the JP11 jumpers are set as shown in the figure above (marker 4), so you can boot Linux from the SD card.
1.3. Make sure the J18 jumper is set to 2V5 as shown in the figure above (marker 5).
1.4. Connect an HDMI video source to the FMC HDMI I/O card as shown in the figure above (marker 6). The video source must be able to provide 1080p video output; for example, a video camera, smartphone, tablet, or your computer's HDMI output.
1.5. Connect a monitor to the FMC HDMI I/O card as shown in the figure above (marker 7). The monitor must support 1080p display.
2. If you haven't already, install the HDL Coder and Embedded Coder Support Packages for Xilinx Zynq Platform, and Computer Vision System Toolbox Support Package for Xilinx Zynq-Based Hardware. To install the support package, go to the MATLAB® toolstrip and click Add-Ons > Get Hardware Support Packages.
3. Make sure you are using the SD card image provided by the Embedded Coder Support Package for Xilinx Zynq Platform. If you need to update your SD card image, run the following command at the MATLAB prompt:
4. Set up the Zynq hardware connection by entering the following command in the MATLAB command window:
h = zynq
The zynq function logs in to the hardware via the COM port and runs the ifconfig command to obtain the IP address of the board. This function also tests the Ethernet connection.
5. Set up the Xilinx Vivado synthesis tool path using the following command in the MATLAB command window. Use your own Vivado installation path when you run the command.
hdlsetuptoolpath('ToolName', 'Xilinx Vivado', 'ToolPath', 'C:\Xilinx\Vivado\2017.4\bin\vivado.bat')
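On Linux, the tool path points at the vivado executable rather than a batch file. A hedged variant, assuming a default installation location (substitute your own path):

```matlab
% Linux variant of the same setup command. The installation path below
% is an assumption; use your own Vivado installation path.
hdlsetuptoolpath('ToolName', 'Xilinx Vivado', ...
    'ToolPath', '/opt/Xilinx/Vivado/2017.4/bin/vivado');
```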
Model Video Streaming Algorithm using the Streaming Pixel Protocol
To deploy a simple Sobel edge detection algorithm on Zynq, the first step is to decide which part of the design runs on the FPGA and which part runs on the ARM processor. In this example, we implement the edge detector on the FPGA to process the incoming video stream using the AXI4-Stream Video protocol, and we use the ARM processor to tune parameters on the FPGA to change the live video output.
In the example model, the DUT subsystem, Sobel_HW, uses an Edge Detector block to implement the Sobel edge detection algorithm. The video data and control signals are modeled in the video streaming pixel protocol, which is used by all the blocks in Vision HDL Toolbox. pixelIn and pixelOut are data ports for the video streams, and ctrlIn and ctrlOut are the corresponding control ports. The control ports are modeled using a bus data type (Pixel Control Bus) that contains the following signals: hStart, hEnd, vStart, vEnd, and valid.
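The same control signals can be constructed in MATLAB with the Vision HDL Toolbox pixelcontrolstruct function; a minimal sketch with illustrative argument values:

```matlab
% Build a pixel-stream control structure for the first pixel of a frame:
% hStart and vStart are high, hEnd and vEnd are low, and the pixel is valid.
ctrl = pixelcontrolstruct(true, false, true, false, true);
% ctrl is a structure with fields hStart, hEnd, vStart, vEnd, and valid,
% matching the signals carried by the ctrlIn and ctrlOut bus ports.
```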
Four input ports, Threshold, Sobel_Enable, Background_Color, and Show_Gradient, are control ports that adjust the parameters of the Sobel edge detection algorithm. You can use the Slider Gain or Manual Switch blocks to adjust the input values of these ports. After you map these ports to the AXI4-Lite interface, the ARM processor can control the generated IP core by writing to the generated AXI interface-accessible registers.
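This port-to-interface mapping can also be saved in the model programmatically with hdlset_param, as described in the Save Target Hardware Settings in Model example. A sketch, assuming the example model's port names (the register offset x"100" is illustrative):

```matlab
% Map a parameter port to the AXI4-Lite interface. The offset below is an
% illustrative register address within the IP core's address space.
hdlset_param('hdlcoder_sobel_video_stream/Sobel_HW/Threshold', ...
    'IOInterface', 'AXI4-Lite', ...
    'IOInterfaceMapping', 'x"100"');
```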
modelname = 'hdlcoder_sobel_video_stream';
open_system(modelname);
sim(modelname);
Generate HDL IP core with AXI4-Stream Video Interface
Next, we start the HDL Workflow Advisor and use the Zynq hardware-software co-design workflow to deploy this design on the Zynq hardware. For a more detailed step-by-step guide, you can refer to the Getting Started with Targeting Xilinx Zynq Platform example.
1. Start the HDL Workflow Advisor from the DUT subsystem, hdlcoder_sobel_video_stream/Sobel_HW. The target interface settings are already saved in this example model, so the settings in Task 1.1 to 1.3 are automatically loaded. To learn more about saving target interface settings in the model, you can refer to the Save Target Hardware Settings in Model example.
In Task 1.1, IP Core Generation is selected for Target workflow, and ZedBoard is selected for Target platform.
In Task 1.2, Default video system (requires HDMI FMC module) is selected for Reference Design.
In Task 1.3, the Target platform interface table is loaded as shown in the following picture. The video data stream ports, pixelIn, ctrlIn, pixelOut, and ctrlOut, are mapped to the AXI4-Stream Video interfaces, and the control parameter ports, such as Threshold, are mapped to the AXI4-Lite interface.
The AXI4-Stream Video interface communicates in master/slave mode, where the master device sends data to the slave device. Therefore, if a data port is an input port, assign it to an AXI4-Stream Video Slave interface, and if a data port is an output port, assign it to an AXI4-Stream Video Master interface.
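Applied to the example model, the input stream maps to a slave interface and the output stream to a master interface. A hedged programmatic equivalent of the Task 1.3 table entries, using hdlset_param:

```matlab
% Input video stream: the DUT receives data, so it is the slave.
hdlset_param('hdlcoder_sobel_video_stream/Sobel_HW/pixelIn', ...
    'IOInterface', 'AXI4-Stream Video Slave');
% Output video stream: the DUT sends data, so it is the master.
hdlset_param('hdlcoder_sobel_video_stream/Sobel_HW/pixelOut', ...
    'IOInterface', 'AXI4-Stream Video Master');
```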
2. Right-click Task 3.2, Generate RTL Code and IP Core, and select Run to Selected Task to generate the IP core. You can find the register address mapping and other documentation for the IP core in the generated IP Core Report.
Integrate IP Into AXI4-Stream Video Compatible Reference Design
Next, in the HDL Workflow Advisor, we run the Embedded System Integration tasks to deploy the generated HDL IP core on Zynq hardware.
1. Run Task 4.1, Create Project. This task inserts the generated IP core into the Default video system reference design. As shown in the first diagram, this reference design contains the IPs to handle HDMI input and output interfaces. It also contains the IPs to do color space conversion from YCbCr to RGB. The generated project is a complete Zynq design, including the algorithm part (the generated DUT algorithm IP), and the platform part (the reference design).
2. Click the link in the Result pane to open the generated Vivado project. In the Vivado tool, click Open Block Design to view the Zynq design diagram, which includes the generated HDL IP core, other video pipelining IPs and the Zynq processor.
3. In the HDL Workflow Advisor, run the rest of the tasks to generate the software interface model, and to build and download the FPGA bitstream. Choose the Download programming method in the Program Target Device task to write the FPGA bitstream to the SD card on the ZedBoard, so that your design is automatically reloaded when you power cycle the board.
Generate ARM executable to Tune Parameters on the FPGA Fabric
A software interface model is generated in Task 4.2, Generate Software Interface Model.
1. Before you generate code from the software interface model, comment out the Video Source and Video Viewer blocks in the generated model, as shown in the following picture. These blocks do not need to run on the ARM processor. The ARM processor uses the AXI4-Lite interface to control the FPGA fabric; the actual video source and display interfaces run on the FPGA fabric. The video source comes from the HDMI input, and the video output is sent to the monitor connected to the HDMI output.
2. Configure and build the software interface model for external mode:
In the generated model, open the Configuration Parameters dialog box.
Select Solver and set "Stop Time" to "inf".
From the model menu, select Simulation > Mode > External.
Click the Run button on the model toolstrip. Embedded Coder builds the model, downloads the ARM executable to the ZedBoard hardware, executes it, and connects the model to the executable running on the ZedBoard hardware.
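The steps above can also be scripted with set_param. A sketch, assuming the generated software interface model is named gm_hdlcoder_sobel_video_stream_interface (the generated name in your session may differ):

```matlab
% The model name below is an assumption; use the name that Task 4.2
% actually generates in your session.
mdl = 'gm_hdlcoder_sobel_video_stream_interface';
set_param(mdl, 'StopTime', 'inf');            % run until explicitly stopped
set_param(mdl, 'SimulationMode', 'external'); % external mode
set_param(mdl, 'SimulationCommand', 'start'); % build, download, and connect
```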
3. Now, both the hardware and software parts of the design are running on Zynq hardware. Use the Sobel_Enable switch to observe that the live video output switches between the edge detector output and the original video. Use the Threshold or Background_Color switch to see the different edge detection effects on the live video output. These parameter values are sent to the Zynq hardware via external mode and the AXI4-Lite interface.
Customize your video reference design
You may want to extend the existing Default video system reference design by adding pre-processing or post-processing camera pipelining IPs, or you may want to use different SoC hardware or a different video camera interface. The Default video system reference design is an example and a starting point for creating your own custom reference design.
For example, the Default video system reference design contains two IP cores that perform color space conversion from YCbCr to RGB, as shown in the following picture. These two IP cores were also generated by HDL Coder using the IP Core Generation workflow. You can optionally generate other pre-processing or post-processing camera pipelining IP cores and add them to a custom reference design to extend your video platform.
For more details on creating your own custom reference design, you can refer to the Define Custom Board and Reference Design for Zynq Workflow example.
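A custom reference design is registered through a plugin function built on the hdlcoder.ReferenceDesign API. A minimal sketch; the design name, Tcl file name, and interface connections below are assumptions that must match your own Vivado block design:

```matlab
function hRD = plugin_rd()
% Register a custom video reference design for the ZedBoard.
hRD = hdlcoder.ReferenceDesign('SynthesisTool', 'Xilinx Vivado');

hRD.ReferenceDesignName  = 'My custom video system';  % assumed name
hRD.BoardName            = 'ZedBoard';
hRD.SupportedToolVersion = {'2017.4'};

% Tcl file that recreates the block design (an assumed file name).
hRD.addCustomVivadoDesign('CustomBlockDesignTcl', 'system_top.tcl');

% AXI4-Lite connection for parameter tuning; the interconnect port and
% base address are illustrative and must match the block design.
hRD.addAXI4SlaveInterface( ...
    'InterfaceConnection', 'axi_interconnect_0/M00_AXI', ...
    'BaseAddress',         '0x40010000', ...
    'MasterAddressSpace',  'processing_system7_0/Data');
end
```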