
Data format to run the quantized neural network?

THarn
Associate II

Hello,

I used the stm32ai software (X-CUBE-AI) to quantize my MNIST model.

I can load the quantized model onto my STM32H743 Nucleo board, and with the validation software I can validate the quantized model on target.

But I would like to create my own application, so I used the Application Template.

I can already run my "normal" model with floating-point input vectors.

But if I feed my quantized net a fixed-point input vector, I get the same probability for every digit (see picture).

[Image: 0690X000009ZfDiQAK.png]

I think the problem is my input data type. The Application Template expects ai_i8, but this data type is too small for grayscale MNIST digits (0-255). So I changed it to ai_u8 and got the output shown in the picture.

But if I feed the exact same input data as a .csv file into the validation software, I get the correctly classified number.

Does anyone know the answer to my question?

1 ACCEPTED SOLUTION

THarn
Associate II

I solved the problem myself. The stm32ai quantizer creates a file with example inputs and outputs: any value above 127 is saturated to 127 (no overflow); every other value remains the same.
