Target platform 'DLXCKU5PE' is not supported for quantization.

Hi,
DLXCKU5PE is my self-generated deep learning bitstream with an int8 data type. When I try to validate the quantized deep learning network on my FPGA platform, an error occurs.
How can I solve this problem? Or do only official evaluation boards support this feature?

Accepted Answer

Anjaneyulu Bairi on 16 Oct 2025 at 4:34
Hi,
This error usually arises in Deep Learning HDL Toolbox when:
  • The target FPGA platform you selected (DLXCKU5PE) is not officially supported by the quantization workflow in MATLAB/HDL Coder/Deep Learning HDL Toolbox.
  • The platform is either custom or not included in the list of supported boards for quantized deployment.
Try the following steps, which might help:
1. Check Supported Platforms
  • Verify that your board appears in the Deep Learning HDL Toolbox list of supported boards, and that a quantized (int8) bitstream is available for it.
2. Custom Board Registration
  • For custom boards, you may need to create a custom platform registration using the dlhdl.Target and dlhdl.Board classes, but quantization support may still be limited.
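As a rough illustration of the custom-platform route, a processor configuration for a custom int8 bitstream might look something like the sketch below. The board name and device strings are placeholders, and property names can differ across releases, so check the Deep Learning HDL Toolbox documentation for your version:

```matlab
% Sketch only: build a custom deep learning processor bitstream with an
% int8 data type for a non-official board. 'MyCustomBoard' and the
% device strings below are placeholders, not real registered names.
hPC = dlhdl.ProcessorConfig;
hPC.ProcessorDataType = 'int8';        % quantized processor
hPC.TargetPlatform = 'MyCustomBoard';  % must be a registered platform
hPC.SynthesisToolChipFamily = 'Kintex UltraScale';
hPC.SynthesisToolDeviceName = 'xcku5p-ffvb676-2-e';

% Generate the bitstream from the configured processor
dlhdl.buildProcessor(hPC);
```

Even with a successfully built custom bitstream, the validation workflow may still reject a platform name it does not recognize, which is why registering the board first matters.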
3. Try Float Deployment
  • If quantized (int8) deployment is not supported, you may be able to deploy your network using single (floating point) precision instead.
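To try the floating-point route, the deployment sketch below swaps in a single-precision bitstream. It assumes `net` is your trained network and `inputImg` is a preprocessed input; `'zcu102_single'` is one of the shipping example bitstreams, so substitute the single-precision bitstream for your own platform:

```matlab
% Sketch only: deploy the network in single precision instead of int8.
% 'net' and 'inputImg' are assumed to exist in the workspace.
hTarget = dlhdl.Target('Xilinx','Interface','Ethernet');
hW = dlhdl.Workflow('Network',net, ...
                    'Bitstream','zcu102_single', ...
                    'Target',hTarget);
deploy(hW);                          % program the FPGA and load weights
prediction = predict(hW, inputImg);  % run inference on the board
```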
I hope this helps!
  1 Comment
KH on 6 Nov 2025 at 8:28
Thanks, I'm glad to receive your reply.
I solved this problem by following the guide at
MATLAB now works with my platform.
However, the accuracy drops by roughly 4%. I’m currently looking into ways to recover this loss.
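One common way to narrow an int8 accuracy gap is to re-calibrate the quantizer on a larger, more representative dataset before redeploying. A hedged sketch, assuming `net` is the trained network and `calDS` is a placeholder calibration datastore:

```matlab
% Sketch only: re-calibrate dlquantizer with representative data, which
% often recovers part of the int8 accuracy loss. 'calDS' and the
% bitstream name are placeholders for this example.
quantObj = dlquantizer(net,'ExecutionEnvironment','FPGA');
calibrate(quantObj, calDS);   % collect dynamic ranges on calibration data

% Deploy the quantized network and re-measure accuracy on the board
hTarget = dlhdl.Target('Xilinx','Interface','Ethernet');
hW = dlhdl.Workflow('Network',quantObj, ...
                    'Bitstream','zcu102_int8', ...
                    'Target',hTarget);
deploy(hW);
```

If calibration data already covers the input distribution well, excluding especially sensitive layers from quantization is another option worth investigating.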


