2025-11-27 12:46 PM
Dear ST Team,
I would like to ask whether you provide any tools or recommended workflow to measure the accuracy drop of a neural network when converting a .tflite model into the STM32 binary format using STM32Cube.AI.
I am looking for a simple way to compare the model’s performance before and after conversion (e.g., running the same test dataset on both versions and checking accuracy).
Do you have any built-in tools, scripts, or guidelines to benchmark this accuracy difference?
Best regards,
Adam
2025-11-28 12:42 AM - edited 2025-11-28 12:45 AM
Hi @adakPal,
The AI runner is made for that: How to use the AiRunner package
The AI runner can be found here: \ST\STEdgeAI\2.2\scripts\ai_runner
First, you need to install the requirements: `pip install -r requirements.txt`. Then here is how it works.
Load validation application:
On the N6, you first need to load the validation firmware so that you can then interact with it using the AI Runner:
Please note that if you used options in the generate command, you need to include the same options in the validate command, for example:
Use the AI runner:
Then it is time to run the AI Runner. Go back to the ai_runner directory, then:
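For reference, here is a minimal sketch of what driving the board from Python looks like with the AiRunner API (connect / invoke / disconnect, as described in the AiRunner article linked above). The module name, connection string and input shape are assumptions and may differ on your installation and model:

```python
import numpy as np
# Assumption: the package is importable as stm_ai_runner; depending on the
# release it may be named st_ai_runner instead (see the note below).
from stm_ai_runner import AiRunner

runner = AiRunner()
runner.connect('serial')          # board running the validation firmware
# One dummy input with the shape/dtype your deployed model expects
# (224x224x3 float32 is only an illustration, adjust to your model).
x = np.random.rand(1, 224, 224, 3).astype(np.float32)
outputs, profile = runner.invoke(x)   # outputs: list of arrays, profile: timing info
print([o.shape for o in outputs])
runner.disconnect()
```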
If you get an error saying that st_ai_runner module is not found, edit the checker.py like this:
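One common way to fix this (an assumption on my part, the exact edit depends on your installation) is to add the ai_runner package root to sys.path at the top of checker.py:

```python
import os
import sys
# Assumption: checker.py sits in <STEdgeAI>/scripts/ai_runner/examples, so the
# package root is one directory up; adjust the path to your installation.
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
```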
By default, the AI checker gives you information about the inference time per node (epoch). In your case, I think the "tflite_test.py" example is more interesting, as it provides a typical example of comparing the outputs of the generated C-model against the predictions from the tf.lite.Interpreter.
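The core of such a comparison looks roughly like this, using the standard tf.lite.Interpreter API; here `sample` and `board_out` are assumed to be one preprocessed input and the matching output returned by the AI Runner for that same input:

```python
import numpy as np
import tensorflow as tf

# Assumptions: 'sample' is one preprocessed input batch and 'board_out' is the
# matching output array returned by AiRunner.invoke() for that same sample.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

interpreter.set_tensor(inp['index'], sample)
interpreter.invoke()
ref_out = interpreter.get_tensor(out['index'])

diff = np.abs(ref_out.astype(np.float32) - board_out.astype(np.float32))
print('max abs diff:', diff.max(), ' mean abs diff:', diff.mean())
```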
I will let you look at it; here I will explain how to edit the checker.py example to run your own dataset on the target and save the outputs it produces.
Please note that this is a preliminary version of a tutorial, so it is a bit handcrafted; we are working on more polished / automatic tutorials in the meantime.
Back to the tutorial, first open checker.py:
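To give an idea of the kind of edit meant here, a minimal sketch, assuming a hypothetical `my_dataset` iterable of preprocessed inputs and the `runner` object that checker.py already creates (the names are illustrative, not the script's real variables):

```python
import numpy as np

# Hypothetical edit inside checker.py, after the runner is connected.
# 'my_dataset' is assumed to be an iterable of inputs already preprocessed
# (and quantized, if needed) exactly as the deployed model expects.
board_outputs = []
for sample in my_dataset:
    outputs, _profile = runner.invoke(sample[None, ...])  # add a batch dimension
    board_outputs.append(outputs[0])

np.save('n6_outputs.npy', np.asarray(board_outputs))
```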
With that, you should now be able to create another Python script or notebook (see the sketch after this list), where you can:
• Load your dataset
• Use the ONNX Runtime (or the tf.lite.Interpreter for a .tflite model) to get your original model outputs (or load them if you already have them)
• Load the N6 outputs you just created
• Run any tests you would like to run
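As a classification-style illustration of such a script, assuming hypothetical files `dataset.npz`, `model.onnx` and the `n6_outputs.npy` saved above (for object detection you would replace the argmax accuracy with your own metric):

```python
import numpy as np
import onnxruntime as ort

# Assumptions: 'dataset.npz' holds preprocessed inputs x and integer labels y,
# 'model.onnx' is your original model, and 'n6_outputs.npy' is the file saved
# from the edited checker.py above.
data = np.load('dataset.npz')
x, y = data['x'], data['y']

sess = ort.InferenceSession('model.onnx')
input_name = sess.get_inputs()[0].name
ref = np.concatenate([sess.run(None, {input_name: xi[None]})[0] for xi in x])

n6 = np.load('n6_outputs.npy').reshape(ref.shape)

print('reference accuracy :', np.mean(np.argmax(ref, axis=-1) == y))
print('on-target accuracy :', np.mean(np.argmax(n6, axis=-1) == y))
```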
Feel free to let us know what you think of this, how it could be improved so that we include it in the tutorials we are preparing.
Have a good day,
Julian
2025-12-04 8:27 AM
Hi @adakPal,
Does it help? Is that what you are looking for?
Have a good day,
Julian
2025-12-06 7:06 AM
Hi Julian,
I had problems with your scripts and dataset generation. Do you have any examples of how to do this based on your object detection model zoo models, like MobileNet?
2025-12-07 11:50 PM - edited 2025-12-09 11:52 AM
Hi @adakPal,
For object detection, you need .tfs files.
For that, in object_detection/dataset we provide a few scripts, namely to analyze your dataset and, most importantly, to convert it to .tfs. Datasets in COCO or Pascal VOC format first need to be converted to YOLO Darknet format.
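If it helps, here is a minimal, generic sketch of the Pascal VOC to YOLO Darknet label conversion, independent of our scripts (which you should prefer); `voc_to_yolo` and `class_names` are illustrative names:

```python
import xml.etree.ElementTree as ET

def voc_to_yolo(xml_path, class_names):
    """Turn one Pascal VOC annotation into YOLO Darknet lines:
    '<class_id> <x_center> <y_center> <width> <height>', all normalized to [0, 1]."""
    root = ET.parse(xml_path).getroot()
    img_w = float(root.find('size/width').text)
    img_h = float(root.find('size/height').text)
    lines = []
    for obj in root.findall('object'):
        cls_id = class_names.index(obj.find('name').text)
        box = obj.find('bndbox')
        xmin, ymin = float(box.find('xmin').text), float(box.find('ymin').text)
        xmax, ymax = float(box.find('xmax').text), float(box.find('ymax').text)
        lines.append('{} {:.6f} {:.6f} {:.6f} {:.6f}'.format(
            cls_id,
            (xmin + xmax) / 2.0 / img_w,
            (ymin + ymax) / 2.0 / img_h,
            (xmax - xmin) / img_w,
            (ymax - ymin) / img_h))
    return lines
```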
Have a good day,
Julian
2025-12-15 12:36 PM
Thank you for your response.
I generated a .tfs file from a Pascal VOC dataset, but I think I still don't understand: should I use my .tfs file with your `./example/checker.py` script?