
X-CUBE-AI Object Detection Model - STM32H745I-DISCO

iker_arrizabalaga
Associate II

Hi everyone,


I'm currently working on a TinyML project using an STM32H745I-DISCO with X-CUBE-AI version 8.1.0. I'm having trouble understanding how to change the input of a MobileNet object-detection TFLite model. I would like to modify the generated code so that the input is an image instead of random numbers, and then see the predictions.

PreguntaSTM.png

How should I get the predictions when the input of the model is an image or a dataset? By the way, I tried a Validation on target, but the console output only gives me "Invalid firmware COM19:115200 ...". Has anybody had the same problem?

PreguntaSTM2.png

Thank you in advance.

3 REPLIES
Elo
Associate

Hello,

I don't know if this helps, but when you do Validation on desktop, it gives you a link to a report file. If you look in that same folder, you'll see the input files and the format your data should be in. I think you just need to transform your image to match that format (usually a CSV file containing a list, with each element representing a pixel). Then, instead of selecting Random numbers, you select the CSV file of your image.
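As a rough sketch of that transformation (the exact layout expected by X-CUBE-AI should be checked against the files in your validation output folder, and whether values stay as raw 0-255 or get scaled depends on your model's expected input), flattening an image into one CSV row might look like this. The 2x2 grayscale "image" below is a stand-in; a real 128x128x3 input would be loaded with an image library such as PIL and flattened the same way:

```python
import csv

# Toy 2x2 grayscale "image" standing in for a real decoded picture.
image = [[0, 64], [128, 255]]

# Flatten row by row and scale to [0, 1]; skip the scaling if your
# model (e.g. a uint8-quantized one) expects raw 0-255 pixel values.
flat = [px / 255.0 for row in image for px in row]

# One sample per CSV row, one element per pixel value.
with open("input_image.csv", "w", newline="") as f:
    csv.writer(f).writerow(flat)
```

The file produced this way can then be selected in the tool in place of the Random numbers option.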

For the Validation on target, have you tried setting Enabled in Automatic compilation and download?

Have a nice day!

fauvarque.daniel
ST Employee

It may be unrelated to the error you see during validation on target, but your model seems to be a Keras model rather than a TFLite model. In the UI, you should select Keras instead of TFLite if your model name ends with .h5 or .hdf5.

As for the validation on target failing, it may be due to a baud rate that doesn't match the one you selected in STM32CubeMX for the USART, or the COM port it is trying to connect to may not be the one Windows created when you connected the board. This can happen when you have multiple boards connected, or when other devices create virtual COM ports.

The best way to identify the exact COM port is to run the Windows "mode" command with and without the board connected; you will see which COM port appears upon connection. Then, in the UI, you can force the COM port to use.
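The with/without comparison is just a set difference over the two port lists. A minimal sketch of that logic, with hypothetical port names (on a real machine the two snapshots would come from running `mode`, or from a library such as pyserial's port enumeration):

```python
# Hypothetical snapshots of the COM ports reported by Windows.
before = {"COM3", "COM5"}            # board unplugged
after = {"COM3", "COM5", "COM19"}    # board plugged in

# The port that appears only in the second snapshot belongs to the board.
board_port = sorted(after - before)
print(board_port)  # ['COM19']
```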

Regards


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
iker_arrizabalaga
Associate II

Thank you both for your responses!

I've already tried enabling the automatic compilation and download, and it's still the same.

iker_arrizabalaga_0-1703148586363.png

Regarding the first response, @Elo: you mean that just by using Validation on desktop, I should be able to see the predictions in the network_c_outputs files?

On the other hand, @fauvarque.daniel, the file extension is .tflite, so it is made for TFLite and not for Keras. This is the full name of the file: mobv2_128x128x3_adam02_respin01.best_val.035.h5.uint8.tflite. Nevertheless, when I tried configuring the network as if it were a Keras model, the tool wouldn't let me. So I guess only TensorFlow Lite works here.

I've just checked what you mentioned about the baud rate, but it seems to be correct. Some other ST users seem to have the same issue.

PreguntaSTM3.png