STM32Cube AI fails to analyze QUANTIZE node with input type uint8

JRade
Associate II

I am using the STM32L476RG on its NUCLEO board. I want to experiment with object detection and edge ML on this device, so I am using the STM32Cube.AI package. I want to use the quantization-trained efficientdet/lite0/detection model, but when I import it into STM32CubeMX and attempt to analyze it with the TF Micro runtime, I get this error message:

[AI:efficientdet] input->type == kTfLiteFloat32 || input->type == kTfLiteInt16 || input->type == kTfLiteInt8 was not true.

[AI:efficientdet] Node QUANTIZE (number 0f) failed to prepare with status 1

[AI:efficientdet] Analyze fail on the AI model

Additionally, when analyzing with the STM32Cube AI runtime, I get this error:

INTERNAL ERROR: 'TFLite_Detection_PostProcess'

The first node in the graph is a QUANTIZE that converts the 2D uint8 image input to int8. The documentation for my version of Cube.AI (7.0.0) says it supports TFLite quantized operations with input types of float32, int8, and uint8. Why would this be happening?
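For reference, the declared input type of the model can be checked with the TensorFlow Lite Python interpreter. This is just a minimal sketch; the file name is a placeholder for wherever the downloaded .tflite model is saved.

import tensorflow as tf

# Load the downloaded TFLite model (path is a placeholder).
interpreter = tf.lite.Interpreter(model_path="efficientdet_lite0_detection_quant.tflite")
interpreter.allocate_tensors()

# Print the declared dtype and (scale, zero_point) of each input tensor.
for detail in interpreter.get_input_details():
    print(detail["name"], detail["dtype"], detail["quantization"])

This shows the uint8 image input that feeds the QUANTIZE node described above.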
