
STM32N6570-DK loading self-trained model to example GettingStarted program

anardella
Associate II

I have been working with and familiarizing myself with the GettingStarted applications, most specifically the object detection program found on GitHub: https://github.com/STMicroelectronics/STM32N6-GettingStarted-ObjectDetection

I would like to take the next step and load my own model onto the board. I like how the current code outputs the frames and shows the inference processing on the LCD screen, and I would prefer to reuse the current sample code if possible.

 

My question is this:

Am I able to load my own model trained using the Model Zoo services, replacing the current model after I convert it to C code? Or will I have to write my own code from scratch for the Camera->Frame->NN->LCD pipeline?

And

Do I have to train the model at a certain resolution? I am currently gathering training images by running the code found here: https://github.com/STMicroelectronics/x-cube-n6-camera-capture
The output frame that is captured is 800x480 RGB888; I am then going to annotate and train using the gathered images. I ask because in app_config.h both NN_WIDTH and NN_HEIGHT are set to 480, making me believe the network is currently taking in 480x480 images. If so, would it be easier to resize/crop the images to 480x480, or to set the config differently?
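For reference, these are the lines I am looking at (an excerpt sketch of app_config.h; the two macro names and the 480 values are what I see in the project, the comment is just my own reading of them):

/* app_config.h (excerpt) -- network input resolution.
 * Presumably the capture pipeline scales/crops the camera frame
 * to this size before it is fed to the network. */
#define NN_WIDTH  480
#define NN_HEIGHT 480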

 

All in all, I just want to see if there is an easy way to load a custom model and test it on this board, preferably with code that is already available.

 

Thanks for your help!

Julian E.
ST Employee

Hello @anardella,

 

In the /Doc folder, you will find a document on how to deploy your own TFLite model.

STM32N6-GettingStarted-ObjectDetection/Doc/Deploy-your-tflite-Model-STM32N6570-DK.md at main · STMicroelectronics/STM32N6-GettingStarted-ObjectDetection · GitHub

 

It mainly consists of running a generate command with the ST Edge AI Core, signing the binary, and changing the post-processing.
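For reference, the two command-line steps look roughly like this (the generate line is the one from the deployment document; the signing line follows the pattern in the repo's README, so double-check the exact flags for your tool versions):

stedgeai generate --model your_model.tflite --target stm32n6 --st-neural-art default@user_neuralart_STM32N6570-DK.json

STM32_SigningTool_CLI -bin Project.bin -nk -t ssbl -hv 2.3 -o Project_sign.bin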

 

I would suggest you look at the supported models (hosted on the model zoo and listed in the post-processing library: STM32N6-GettingStarted-ObjectDetection/Middlewares/lib_vision_models_pp/lib_vision_models_pp/README.md at main · STMicroelectronics/STM32N6-GettingStarted-ObjectDetection · GitHub)

 

Then try with your own model and create the corresponding post processing.

 

You can also look at the model zoo services for object detection: stm32ai-modelzoo-services/object_detection at main · STMicroelectronics/stm32ai-modelzoo-services · GitHub

You can easily retrain the models we provide on the model zoo (not services) and deploy them easily with a script on the DK board (it modifies the Getting Started application).
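The deployment in the model zoo services is driven by a YAML configuration; the invocation looks something like the line below (a sketch: stm32ai_main.py is the entry script of the services repo, but the exact config path and file name depend on your setup):

python stm32ai_main.py --config-path ./src/config_file_examples/ --config-name deployment_config.yaml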

 

Have a good day,

Julian


anardella
Associate II

Thank you @Julian E. 

 

I am now getting back up to speed on this project. I am currently going through the ST Edge AI Developer Cloud process of generating C code for the custom model I have trained and exported to .onnx.

 

When I generate the network.c file from the ST Edge AI Developer Cloud, I would then like to replace the model used in the "GettingStarted-ObjectDetection" sample project. I would like to know whether my steps below are correct.

1. Generate network.c and ll_aton code using ST Edge AI Developer Cloud

2. Replace network.c and the ll_aton files with what has been generated (Please let me know if I have to replace ll_aton or if I can leave that as is) 

3. Change any post-processing to match the model architecture (YOLOv8, uint8)

 

Am I missing any steps or overlooking anything?

After this, am I able to build the project, create the necessary .hex files, and flash the code onto the board?

 

Thanks for the help!

Hello @anardella,

 

The steps to follow are described here:

STM32N6-GettingStarted-ObjectDetection/Doc/Deploy-your-tflite-Model-STM32N6570-DK.md at main · STMicroelectronics/STM32N6-GettingStarted-ObjectDetection

 

You can see that the command for the generate step is as follows:

stedgeai generate --model Model_File.tflite --target stm32n6 --st-neural-art default@user_neuralart_STM32N6570-DK.json

The user_neuralart_STM32N6570-DK.json contains compiler options, and I am not sure it corresponds to the default options in the Developer Cloud. So I would suggest doing the generate manually instead of using the Developer Cloud.

 

Then, step 3 will be the most difficult one for you, I think, as you are using a custom model.

Look at what is done for the supported models and try to create the right post-processing for your model.
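To give you an idea, the post-processing selection typically happens through a define in app_config.h, something like the sketch below (the exact macro name for a YOLOv8 model with uint8 output is from memory, so verify it against the post-processing library's README linked above):

/* app_config.h (excerpt) -- select the post-processing matching the model head */
#define POSTPROCESS_TYPE POSTPROCESS_OD_YOLO_V8_UI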

 

Have a good day

Julian

 

 


anardella
Associate II

Thanks for the response @Julian E. 

 

I have generated my network_data.hex and flashed my new model to the board. I had to convert the model to .tflite, and I was able to get the .hex file created by following the .md file.
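(For the record, the flashing command I used looks roughly like this; the external-loader file name for the DK's external flash is from memory, so check the one shipped with your CubeProgrammer install:

STM32_Programmer_CLI -c port=SWD mode=HOTPLUG -el MX66UW1G45G_STM32N6570-DK.stldr -hardRst -w network_data.hex )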

 

I am having other issues now, with STM32CubeIDE, CubeProgrammer, and the makefile.

In STM32CubeIDE, when I change app_config.h and build the program, it builds with no problems and no errors. However, when I try to run or debug the program in CubeIDE with my board in development mode, a brief "progress information" window flashes for ~0.1 seconds and then disappears with no error code or any other information. So running from CubeIDE does not work...

 

I decided to try flashing the binary file that was created when I built the program, with both CubeProgrammer and CubeProgrammer_CLI. Both attempts cause the board to boot into nothing: LED4 is red, LED3 is green, and the LCD screen that should be showing captured camera frames and the inference time stays black. The application binary, ai_fsbl, and network_data were all flashed with no errors. This led me to believe there may simply be an error in my code, so I then decided to use the makefile, just to try loading the code onto the board another way.

 

I use WSL to run make -j8. The first issue I hit is a failed build with the error "arm-none-eabi-gcc: error: unrecognized command-line option '-fcyclomatic-complexity'". OK, so I edit the makefile to remove the '-fcyclomatic-complexity' command-line parameter. Then it begins to build but gets stuck on the ll_aton_runtime.c file... The error output is pasted below. I'm guessing this is due to removing '-fcyclomatic-complexity' from the makefile.

(screenshot attached: Error.png, build error output)
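(A guess on that flag: as far as I can tell, '-fcyclomatic-complexity' is an extension of the "GNU Tools for STM32" toolchain bundled with STM32CubeIDE, which a vanilla arm-none-eabi-gcc rejects. If the makefile exposes a GCC_PATH variable, as CubeMX-generated makefiles usually do, though I have not checked this one, then something like

make -j8 GCC_PATH=<path to CubeIDE's gnu-tools-for-stm32 bin directory>

might let me build with ST's compiler instead of removing the flag.)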

 

Now I am out of ideas. I feel like I am so close, but I don't know if my code is being flashed to the board incorrectly or if there is an error in my post-processing/application code. Any help would be greatly appreciated. I don't know why my STM32CubeIDE won't run the code. It would be really nice to run the code using development mode from CubeIDE, but as I said, that doesn't work for me unfortunately...

 

Please help me

@Julian E. 

 

I have made some progress on this. I followed the steps below to get a working program on the board; unfortunately, my model itself now seems to be the issue...

1. Train a custom model and export it to TFLite, then quantize the model to uint8 input -> int8 output. I have attached the .tflite file
2. Use STEdgeAI to generate C code and create a network_data.hex file
3. Flash the ai_fsbl.hex file using CubeProgrammer
4. Flash network_data.hex using CubeProgrammer
5. Now I had to change the post-processing by changing definitions in app_config.h to work with YOLOv8; I also changed the rest of app_config.h and have attached it (see the excerpt after this list)
a. NN_WIDTH and NN_HEIGHT are still both 480
b. Configuration was changed to match my model (seen below)

(screenshot attached: anardella_0-1761243303747.png, app_config.h model configuration)

6. I built the project with no errors, then took the .bin file that was created, signed it with the signing tool, and flashed the signed .bin to the board
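For reference, the kind of YOLOv8 definitions I mean in step 5 are in the excerpt below (macro names from memory of a model-zoo-generated app_config.h, so treat them as an assumption and see the attached screenshot for my actual values):

/* app_config.h (excerpt) -- assumed YOLOv8 post-processing parameters */
#define AI_OD_YOLOV8_PP_CONF_THRESHOLD  0.5f  /* detections below this score are dropped */
#define AI_OD_YOLOV8_PP_IOU_THRESHOLD   0.5f  /* NMS overlap threshold */
#define AI_OD_YOLOV8_PP_MAX_BOXES_LIMIT 10    /* max boxes kept after NMS */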

 

When I switch the board to Boot from Flash mode and power cycle it, there is a working camera->LCD pipeline and I am getting a 480x480 image. Unfortunately, the model outputs seem to be wrong: there is always "Objects 1" at the top of the screen and a single bounding-box line at the bottom of the screen, basically indicating that the model is always finding a target.

 

Any direction would help a lot. I feel like I am very close to getting my own model loaded; I just don't know whether something went wrong when training the YOLOv8 model or during quantization. If you could help me out, that would be amazing!

 

If you can check, or point me in the right direction on how to check, that my model is correct and that my app_config.h is correct, I think I'll be that much closer to solving this issue!
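(One thing I plan to try in the meantime: as far as I understand, the ST Edge AI Core CLI also has a validate command that compares the generated network's outputs against the original model, which might tell me whether the problem is the quantized model itself or my post-processing, e.g.

stedgeai validate --model my_model.tflite --target stm32n6

I am not sure of the exact options, so this is just the direction I intend to look in.)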