ModuleNotFoundError: No module named 'models_utils'

croto
Associate II

Hi all, I'm playing around with STM32H747I-DISCO eval board and the stm32ai-modelzoo demo.

I've managed to train and evaluate a model with my custom dataset using the script in "image_classification\src\stm32ai_main.py"

Now I'm struggling with the deployment of the model. I tried using the deploy script in "image_classification\deployment\deploy.py" but I get the following error:

ModuleNotFoundError: No module named 'models_utils'

 

Any ideas what's going wrong here? The error isn't very descriptive (to me at least) so I'm a bit stuck.

Sorry if this is a simple question; coming from a hardware and FPGA design background using VHDL, where everything is very explicit and strongly typed, I really struggle with the layers of abstraction that Python libraries bring.

Julian E.
ST Employee

Hello @croto ,

No worries, there are no wrong questions ^^

 

Just to give some context: the idea behind the model zoo is that you configure user_config.yaml to do a lot of things:

[Screenshot: table of the operation modes available in user_config.yaml]

 

So the idea is to edit user_config.yaml and run the Python script stm32ai_main.py, not to call deploy.py directly.

If you still get errors when using stm32ai_main.py, you may consider reinstalling the model zoo from scratch:

GitHub - STMicroelectronics/stm32ai-modelzoo: AI Model Zoo for STM32 devices

 

Before deployment, you should consider quantizing the model (reducing the weights from float32 to int8). You gain a lot in memory footprint and inference time, and generally lose only a little accuracy.
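To make that trade-off concrete, here is a small self-contained sketch of the affine (scale/zero-point) int8 scheme that quantization tools commonly use. The weight values are made up for illustration; this is not the model zoo's actual code, just the arithmetic behind "float32 to int8":

```python
# Illustration only: per-tensor affine int8 quantization, as used by
# TFLite-style converters. Toy weights, not from a real model.

def quantize_params(w_min, w_max, qmin=-128, qmax=127):
    """Compute scale and zero-point mapping [w_min, w_max] onto int8."""
    w_min = min(w_min, 0.0)  # range must include 0 so that 0.0 maps exactly
    w_max = max(w_max, 0.0)
    scale = (w_max - w_min) / (qmax - qmin)
    zero_point = round(qmin - w_min / scale)
    return scale, int(zero_point)

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))  # clamp into the int8 range

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

weights = [-0.42, 0.0, 0.17, 0.91]           # toy float32 weights
scale, zp = quantize_params(min(weights), max(weights))
q = [quantize(w, scale, zp) for w in weights]
recovered = [dequantize(v, scale, zp) for v in q]
# Each recovered value differs from the original by at most scale/2.
# That rounding error is the small accuracy loss quantization trades
# for 4x smaller weights and faster integer-only inference.
```

Real tools additionally choose per-channel parameters and calibrate ranges from a representative dataset, which is why the quantization step below asks for a dataset path.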

 

Because you retrained your own model, you should have a best_model.h5 in the experiments_outputs/saved_models folder.

To quantize your model, edit user_config.yaml to set the operation mode to the quantization mode and fill in a few other fields: your model path, the quantization dataset path (using the training data is advised), etc.

By default, the quantization is done on the cloud; you will need to log in to your ST account.

The process is explained here: 

stm32ai-modelzoo/image_classification/src/quantization/README.md at main · STMicroelectronics/stm32ai-modelzoo · GitHub
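For reference, a sketch of the relevant user_config.yaml fragment for that step is below. The key names follow the linked quantization README as I understand it, but they may differ between model zoo versions, so treat this as a template to check against the README rather than a drop-in config:

```yaml
operation_mode: quantization

general:
  model_path: ./experiments_outputs/saved_models/best_model.h5  # your retrained model

dataset:
  quantization_path: ./datasets/my_dataset/train  # representative data; training set advised

quantization:
  quantizer: TFlite_converter    # produces a .tflite model
  quantization_type: PTQ         # post-training quantization
  quantization_input_type: uint8
  quantization_output_type: float
  export_dir: quantized_models
```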

 

After quantization, you will get a .tflite model in the experiments_outputs folder (.tflite is the TensorFlow Lite format, used here for the quantized model; .h5 is the Keras/TensorFlow format).

Then you can look for the documentation to deploy your quantized model to the board:

stm32ai-modelzoo/image_classification/deployment/README.md at main · STMicroelectronics/stm32ai-modelzoo · GitHub

Make sure the board's ST-LINK/V3 port is plugged into the PC before running the Python script, as it will deploy the application directly onto your board.

 

 

Here is the full documentation for the image_classification: stm32ai-modelzoo/image_classification at main · STMicroelectronics/stm32ai-modelzoo · GitHub

 

Note the other operation modes: for example, you can benchmark and evaluate your model before quantization, then again after, to compare the two. There are also chained operation modes to do multiple things in one go.

 

Do not hesitate if you have any other questions.

 

Have a good day,

Julian


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.

Hi Julian, thanks a lot for your very detailed and helpful reply. It looks like quantization is enabled by default in stm32ai_main.py, and so is benchmarking.

[Screenshot: console output from stm32ai_main.py showing the quantization and benchmarking steps]

I believe the benchmarking is done on the cloud, judging by this line in stm32ai_main.py and the credentials requirement:

benchmark(cfg=cfg, model_path_to_benchmark=quantized_model_path, credentials=credentials, custom_objects=IC_CUSTOM_OBJECTS)

I'm still a bit confused about how to deploy the code onto my dev board. Is there a binary output from stm32ai_main.py that I can use to program the kit? In which step of the script does the kit get flashed?

 

Thanks once again for the help!

Best regards,

croto

 

 

Hello @croto,

 

What is done by stm32ai_main.py is described in the yaml. In other words, you never need to touch the Python file: just edit user_config.yaml and execute the Python script to do what the yaml describes.

 

Based on your CMD screenshot, your yaml is using an operation mode that at least contains quantization and benchmarking.

By default, the config file for image classification is here: https://github.com/STMicroelectronics/stm32ai-modelzoo/blob/main/image_classification/src/user_config.yaml

 

When you open it, you see multiple parts:

  • general describes your project, including project name, directory where to save models, etc.
  • operation_mode describes the service or chained services to be used
  • dataset describes the dataset that you are using, including directory paths, class names, etc.
  • preprocessing specifies the methods that you want to use for rescaling and resizing the images.
  • training specifies your training setup, including batch size, number of epochs, optimizer, callbacks, etc.
  • mlflow specifies the folder to save MLFlow logs.
  • hydra specifies the folder to save Hydra logs.
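Put together, a stripped-down user_config.yaml covering those sections looks roughly like this (all values are illustrative placeholders, not the shipped defaults; check the linked file for the exact keys in your version):

```yaml
general:
  project_name: my_ic_project
  saved_models_dir: saved_models

operation_mode: training   # or a chained mode such as chain_qd

dataset:
  name: my_dataset
  class_names: [class_a, class_b]
  training_path: ./datasets/my_dataset/train

preprocessing:
  resizing:
    interpolation: bilinear

training:
  batch_size: 64
  epochs: 100
  optimizer: Adam

mlflow:
  uri: ./experiments_outputs/mlruns

hydra:
  run:
    dir: ./experiments_outputs/${now:%Y_%m_%d_%H_%M_%S}
```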

 

By default, the tools do indeed run on the cloud (on_cloud: True):

 

tools:
  stedgeai:
    version: 9.1.0
    optimization: balanced
    on_cloud: True
    path_to_stedgeai: C:/Users/<XXXXX>/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/<*.*.*>/Utilities/windows/stedgeai.exe
  path_to_cubeIDE: C:/ST/STM32CubeIDE_<*.*.*>/STM32CubeIDE/stm32cubeide.exe

 

If you want to run it locally, you have to edit the two paths and change True to False.
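Concretely, a local setup of that tools block could look like the following; the version numbers and paths are examples from a typical Windows install and must match what is actually on your machine:

```yaml
tools:
  stedgeai:
    version: 9.1.0
    optimization: balanced
    on_cloud: False   # run stedgeai.exe locally instead of on the ST cloud
    path_to_stedgeai: C:/Users/<your_user>/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/9.1.0/Utilities/windows/stedgeai.exe
  path_to_cubeIDE: C:/ST/STM32CubeIDE_1.16.1/STM32CubeIDE/stm32cubeide.exe
```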

 

As for deployment: if the operation mode includes deployment (deployment or chain_qd, see the table in my first post) and your board is plugged in, a basic application and your model will be deployed onto your board.

I am not familiar with it, but you can look for the code here: https://github.com/STMicroelectronics/stm32ai-modelzoo/tree/main/stm32ai_application_code/image_classification

 

Have a good day,

Julian

 



Hi Julian, thanks again for your answer, that's very helpful. I am now able to deploy the demo application onto my board.

 

Here are the steps I've taken in order to do so:

1. Install STM32CubeMX

2. Install X-CUBE-AI, following the instructions on page 7 of UM2526

3. Modify the following in stm32ai-modelzoo\image_classification\src\user_config.yaml

general:
  model_path: best_model.h5 #Path to h5 trained model saved_models/best_model.h5
...
operation_mode: 'chain_qd'
...
training: #Not sure if necessary
  trained_model_path: best_model.h5 #Path to h5 trained model saved_models/best_model.h5
...
tools:
  stedgeai:
    version: 9.1.0
    optimization: balanced
    on_cloud: False #True
    path_to_stedgeai: C:/Users/(your_windows_user)/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/9.1.0/Utilities/windows/stedgeai.exe 
  path_to_cubeIDE: C:/ST/STM32CubeIDE_1.16.1/STM32CubeIDE/stm32cubeide.exe