
STM32N6 model accuracy benchmark

adakPal
Associate II

Dear ST Team,

I would like to ask whether you provide any tools or recommended workflow to measure the accuracy drop of a neural network when converting a .tflite model into the STM32 binary format using STM32Cube.AI.

I am looking for a simple way to compare the model’s performance before and after conversion (e.g., running the same test dataset on both versions and checking accuracy).

Do you have any built-in tools, scripts, or guidelines to benchmark this accuracy difference?

Best regards,
Adam

Julian E.
ST Employee

Hi @adakPal,

 

The AI runner is made for that: How to use the AiRunner package

 

The AI runner can be found here: \ST\STEdgeAI\2.2\scripts\ai_runner

First, you need to install the requirements: pip install -r requirements.txt. Then here is how it works.

 

Load validation application:

On the N6, you first need to load the validation firmware, which you can then interact with using the AI Runner:

  1. Go to  \ST\STEdgeAI\2.2\scripts\N6_scripts
  2. Edit the config.json (a screenshot of mine is attached for reference)
  3. Copy your model in \models
  4. Run the generate command, for example: stedgeai.exe generate --model ./models/my_model.tflite --target stm32n6 --st-neural-art
  5. Connect the board in dev mode and run: python n6_loader.py
  6. Optional: run a validate command to make sure everything is set up correctly: stedgeai validate --model ./models/my_model.tflite --target stm32n6 --mode target -d serial:921600

 

Please note that if you use options in the generate command, you need to include the same options in the validate command, for example:

  • stedgeai.exe generate --model ./models/my_model.tflite --target stm32n6 --st-neural-art --input-data-type uint8 --inputs-ch-position chlast --output-data-type int8 --outputs-ch-position chlast
  • stedgeai validate --model ./models/my_model.tflite --target stm32n6 --input-data-type uint8 --inputs-ch-position chlast --output-data-type int8 --outputs-ch-position chlast --mode target -d serial:921600

 

Use the AI runner:

Then it is time to run the AI Runner. Go back to the ai_runner folder, then:

  • Open a git bash (in ai_runner folder) and run:
    export PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
  • Then run: python ./example/checker.py -d serial:921600

If you get an error saying that st_ai_runner module is not found, edit the checker.py like this: 

(screenshot of the checker.py import fix attached)
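
In case the screenshot does not load: the usual cause of that error is running checker.py from the example folder, so the package one level up is not on the Python path. A minimal sketch of that kind of fix, added near the top of checker.py before the failing import (the relative path is an assumption; adjust it to your layout):

```python
# Make the parent ai_runner folder importable before importing st_ai_runner.
# Assumption: checker.py lives in <ai_runner>/example and the package sits
# one level up; adjust the relative path if your tree differs.
import os
import sys

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
```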

By default, the AI checker gives you information about the inference time per node (epoch). In your case, I think the example tflite_test.py is more interesting, as it provides a typical example of comparing the outputs of the generated C-model against the predictions of the tf.lite.Interpreter.
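
For the tf.lite.Interpreter side of that comparison, here is a minimal hypothetical sketch (the file names and dataset layout are assumptions; adapt them to your model):

```python
# Run the original .tflite model over a saved test set and store its outputs,
# so they can later be compared against the on-target (C-model) outputs.
# Assumptions: dataset.npy holds a batch of preprocessed samples with the
# shape/dtype the model expects, and the model has one input and one output.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="./models/my_model.tflite")
interpreter.allocate_tensors()
in_det = interpreter.get_input_details()[0]
out_det = interpreter.get_output_details()[0]

inputs = np.load("./dataset.npy")
outputs = []
for sample in inputs:
    interpreter.set_tensor(in_det["index"], sample[None, ...].astype(in_det["dtype"]))
    interpreter.invoke()
    outputs.append(interpreter.get_tensor(out_det["index"])[0])

np.save("./tflite_output.npy", np.stack(outputs))
```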

 

I'll let you look at it; here I will explain how to edit the checker.py example to:

  • Load your dataset (npz)
  • Save the outputs
  • Then compare the C model output and your tflite model output

 

Please note that this is a preliminary version of the tutorial, so it is a bit handcrafted; in the meantime, we are working on more polished / automated tutorials.

 

Back to the tutorial, first open checker.py:

(screenshot of checker.py attached)

  • Comment out line 62 and replace it to load your dataset: inputs = np.load("./dataset.npy") (see the sketch below for building this file)
  • Save 'outputs' after line 64 with something like: np.save("./output.npy", outputs)
  • Comment out line 66
  • Run checker.py again and you should get the outputs saved.
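
If you do not have a dataset.npy yet, here is a hypothetical sketch of building one from a folder of test images (the image size, color handling, and dtype below are assumptions; use exactly the preprocessing your model was trained with):

```python
# Build a dataset.npy from a folder of test images for use in checker.py.
# Assumptions: the model takes 224x224 RGB uint8 inputs; adjust the resize,
# normalization, and dtype to match your model's preprocessing.
from pathlib import Path

import numpy as np
from PIL import Image

samples = []
for path in sorted(Path("./test_images").glob("*.jpg")):
    img = Image.open(path).convert("RGB").resize((224, 224))
    samples.append(np.asarray(img, dtype=np.uint8))

np.save("./dataset.npy", np.stack(samples))
```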

 

With that, you should now be able to create another python script or notebook, where you can:
  • Load your dataset
  • Use the tf.lite.Interpreter (or the ONNX Runtime, if your original model is ONNX) to get your original model outputs (or load them if you already have them)
  • Load the N6 outputs you just created
  • Run any tests you would like to run (see the sketch below)
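
As a hypothetical sketch of that last step, assuming both output files are numpy arrays of matching shape (classification-style outputs; the file names are the ones used above):

```python
# Compare the reference (tflite) outputs against the on-target (N6) outputs.
# Assumptions: both .npy files hold per-sample output vectors of the same
# shape, e.g. (num_samples, num_classes) for a classifier.
import numpy as np

ref = np.load("./tflite_output.npy").astype(np.float32)
n6 = np.load("./output.npy").astype(np.float32)

# Element-wise error statistics between the two implementations.
err = np.abs(ref - n6)
print(f"max abs error:  {err.max():.6f}")
print(f"mean abs error: {err.mean():.6f}")

# For classifiers: how often do the two versions pick the same class?
agreement = (ref.argmax(axis=-1) == n6.argmax(axis=-1)).mean()
print(f"top-1 agreement: {100 * agreement:.2f}%")

# With ground-truth labels you can also compare absolute accuracy:
# labels = np.load("./labels.npy")
# print("tflite acc:", (ref.argmax(-1) == labels).mean())
# print("n6 acc:    ", (n6.argmax(-1) == labels).mean())
```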

 

Feel free to let us know what you think of this and how it could be improved, so that we can include it in the tutorials we are preparing.

 

Have a good day,

Julian


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.

Hi @adakPal,

 

Does it help? Is that what you are looking for?

 

Have a good day,

Julian



Hi Julian,

I had problems with your scripts and dataset generation. Do you have any examples of how to do this based on your object detection model zoo models, like MobileNet?

Hi @adakPal,

 

For object detection, you need tfs files.

For that, in object_detection/dataset, we provide a few scripts to analyze your dataset and, most importantly, to convert it to tfs. Datasets in COCO or Pascal VOC format first need to be converted to the YOLO Darknet format.
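
For reference, the YOLO Darknet format the converters expect is one .txt file per image, with one normalized "class x_center y_center width height" line per object. A hypothetical sketch of the Pascal VOC side of that conversion (the model zoo scripts linked below are the reference; this only illustrates the format, and class_names is a made-up list):

```python
# Convert one Pascal VOC annotation (.xml) into a YOLO Darknet label (.txt).
# Illustration only; the stm32ai-modelzoo-services converter scripts are the
# reference implementation. class_names is a hypothetical class list.
import xml.etree.ElementTree as ET

class_names = ["person", "car"]

def voc_to_yolo(xml_path: str, txt_path: str) -> None:
    root = ET.parse(xml_path).getroot()
    w = float(root.find("size/width").text)
    h = float(root.find("size/height").text)
    lines = []
    for obj in root.findall("object"):
        cls = class_names.index(obj.find("name").text)
        box = obj.find("bndbox")
        xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
        xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
        # YOLO Darknet line: class x_center y_center width height (normalized).
        lines.append(f"{cls} {(xmin + xmax) / 2 / w:.6f} {(ymin + ymax) / 2 / h:.6f} "
                     f"{(xmax - xmin) / w:.6f} {(ymax - ymin) / h:.6f}")
    with open(txt_path, "w") as f:
        f.write("\n".join(lines) + "\n")
```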

 

stm32ai-modelzoo-services/object_detection/docs/README_DATASETS.md at main · STMicroelectronics/stm32ai-modelzoo-services

 

Have a good day,

Julian



Thank you for your response.

I generated a tfs file from a Pascal VOC dataset, but I think I still don't understand: should I use my tfs file with your `./example/checker.py` script?