2025-06-28 6:36 AM
Hello, we encountered the following issues when deploying a binary classification model using the ll library. Do you have any solutions?
1. The output of the AI model deployed according to the example (without quantization) does not match the reference output computed locally on the PC (compared before softmax).
2. We compared the model's outputs on the PC against those on the STM32N6 layer by layer. The first mismatch appears at the reshape operation (in the low-level DMA-to-NPU code path), and this mismatch propagates to the final output. The outputs of the other layers, such as conv, maxpool, and relu, all match.
3. How should we resolve this so the model can be deployed correctly?
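For context on why a mismatch can appear exactly at a reshape while conv/maxpool/relu all match: one common cause (an assumption here, not confirmed for this specific case) is a memory-layout difference between the PC reference and the target, e.g. NCHW vs NHWC. A reshape reinterprets the flat buffer in order, so the same values stored in a different layout flatten to a different sequence. A minimal NumPy sketch of the effect:

```python
import numpy as np

# Toy activation map: 1 batch, 2 channels, 2x2 spatial, stored as NCHW.
nchw = np.arange(8, dtype=np.float32).reshape(1, 2, 2, 2)

# The same values rearranged into NHWC layout (as an NPU/DMA path might store them).
nhwc = np.transpose(nchw, (0, 2, 3, 1))

# Reshape (flatten) reads the underlying buffer in order, so the two
# layouts produce different flattened sequences even though the values match.
flat_from_nchw = nchw.reshape(-1)   # [0, 1, 2, 3, 4, 5, 6, 7]
flat_from_nhwc = nhwc.reshape(-1)   # [0, 4, 1, 5, 2, 6, 3, 7]

print(np.array_equal(flat_from_nchw, flat_from_nhwc))  # False: element order differs
```

If this is the cause, comparing the device's pre-reshape tensor after transposing it into the PC reference's layout should make the values line up; that would confirm the divergence is a layout/stride issue in the DMA-to-NPU transfer rather than a numerical error.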