Error when trying to convert tflite to C code with stedgeai generate parameter --st-neural-art

lyannen
Associate II

Hello, 

I'm trying to convert a .tflite model (an encoder-decoder model using only layers supported by stedgeai) to C code for use in my STM32CubeIDE project.

When using the command:
/opt/ST/STEdgeAI/2.1/Utilities/linux/stedgeai generate --model model.tflite --target stm32n6 --st-neural-art 

I'm getting the following error output, which I can't quite get my head around:

ST Edge AI Core v2.1.0-20194 329b0e98d
WARNING: gemm_0 is not quantized                                                                                                                                    
WARNING: nl_0_nl is not quantized                                                                                                                                   
WARNING: gemm_1 is not quantized                                                                                                                                    
WARNING: nl_1_nl is not quantized                                                                                                                                   
WARNING: gemm_2 is not quantized                                                                                                                                    
WARNING: nl_2_nl is not quantized                                                                                                                                   
WARNING: reshape_3 is not quantized                                                                                                                                 
WARNING: gemm_4 is not quantized                                                                                                                                    
WARNING: nl_5 is not quantized                                                                                                                                      
WARNING: reshape_6 is not quantized                                                                                                                                 
 >>>> EXECUTING NEURAL ART COMPILER                                                                                                                                 
   /opt/ST/STEdgeAI/2.1/Utilities/linux/atonn -i "/local/projekte/Uni/Masterarbeit/EmbeddedModelsSt/gcode_prediction_dataset/gcode_prediction_model/STM_friendly_LSTM_model/st_ai_output/gcode_model_OE_3_2_0.onnx" --json-quant-file "/local/projekte/Uni/Masterarbeit/EmbeddedModelsSt/gcode_prediction_dataset/gcode_prediction_model/STM_friendly_LSTM_model/st_ai_output/gcode_model_OE_3_2_0_Q.json" -g "network.c" --load-mdesc "/opt/ST/STEdgeAI/2.1/Utilities/configs/stm32n6.mdesc" --load-mpool "/local/projekte/Uni/Masterarbeit/EmbeddedModelsSt/gcode_prediction_dataset/gcode_prediction_model/STM_friendly_LSTM_model/my_mpools/stm32n6-app2_STM32N6570-DK.mpool" --save-mpool-file "/local/projekte/Uni/Masterarbeit/EmbeddedModelsSt/gcode_prediction_dataset/gcode_prediction_model/STM_friendly_LSTM_model/st_ai_ws/neural_art__network/stm32n6-app2_STM32N6570-DK.mpool" --out-dir-prefix "/local/projekte/Uni/Masterarbeit/EmbeddedModelsSt/gcode_prediction_dataset/gcode_prediction_model/STM_friendly_LSTM_model/st_ai_ws/neural_art__network/" -O3 --all-buffers-info --mvei --cache-maintenance --Oalt-sched --native-float --enable-virtual-mem-pools --Omax-ca-pipe 4 --Ocache-opt --Os --enable-epoch-controller --output-info-file "c_info.json"
                                                                                                                                                                    
      >>> Shell execution has FAILED (returned code = 1)
      $ /opt/ST/STEdgeAI/2.1/Utilities/linux/atonn -i /local/projekte/Uni/Masterarbeit/EmbeddedModelsSt/gcode_prediction_dataset/gcode_prediction_model/STM_friendly_LSTM_model/st_ai_output/gcode_model_OE_3_2_0.onnx --json-quant-file /local/projekte/Uni/Masterarbeit/EmbeddedModelsSt/gcode_prediction_dataset/gcode_prediction_model/STM_friendly_LSTM_model/st_ai_output/gcode_model_OE_3_2_0_Q.json -g network.c --load-mdesc /opt/ST/STEdgeAI/2.1/Utilities/configs/stm32n6.mdesc --load-mpool /local/projekte/Uni/Masterarbeit/EmbeddedModelsSt/gcode_prediction_dataset/gcode_prediction_model/STM_friendly_LSTM_model/my_mpools/stm32n6-app2_STM32N6570-DK.mpool --save-mpool-file /local/projekte/Uni/Masterarbeit/EmbeddedModelsSt/gcode_prediction_dataset/gcode_prediction_model/STM_friendly_LSTM_model/st_ai_ws/neural_art__network/stm32n6-app2_STM32N6570-DK.mpool --out-dir-prefix /local/projekte/Uni/Masterarbeit/EmbeddedModelsSt/gcode_prediction_dataset/gcode_prediction_model/STM_friendly_LSTM_model/st_ai_ws/neural_art__network/ -O3 --all-buffers-info --mvei --cache-maintenance --Oalt-sched --native-float --enable-virtual-mem-pools --Omax-ca-pipe 4 --Ocache-opt --Os --enable-epoch-controller --output-info-file c_info.json
                                                                                                                                                                    
      saving memory pool description "/local/projekte/Uni/Masterarbeit/EmbeddedModelsSt/gcode_prediction_dataset/gcode_prediction_model/STM_friendly_LSTM_model/st_ai_ws/neural_art__network/stm32n6-app2_STM32N6570-DK.mpool"
      Error: optimize round /home/atonci/ci/atonci/workspace/release_STAI_21/onnx_backend/onnx_passes/transform_gemm_fc_into_conv.cc:201: runTransform: Assertion `(b_shape.size() == 1) || ((b_shape[0].dim == M) || b_shape[0].dim == N)` failed.
      Warning: Missing Quantization info for Gemm_1_weights; will consider Gemm_1_weights as a native Float                                                         
      Warning: Missing Quantization info for Gemm_1_weights; will consider Gemm_1_weights as a native Float                                                         
      Warning: Missing Quantization info for Gemm_3_weights; will consider Gemm_3_weights as a native Float                                                         
      Warning: Missing Quantization info for Gemm_3_weights; will consider Gemm_3_weights as a native Float                                                         
      Warning: Missing Quantization info for Gemm_5_weights; will consider Gemm_5_weights as a native Float                                                         
      Warning: Missing Quantization info for Gemm_5_weights; will consider Gemm_5_weights as a native Float                                                         
      NOT all epochs mapped on epoch controller: num_epochs=11 num_mapped_epochs=2 num_blobs=2                                                                      
      <<<                                                                                                                                                           
                                                                                                                                                                    
   E103(CliRuntimeError): Error calling the Neural Art compiler - ['', 'saving memory pool description "/local/projekte/Uni/Masterarbeit/EmbeddedModelsSt/gcode_prediction_dataset/gcode_prediction_model/STM_friendly_LSTM_model/st_ai_ws/neural_art__network/stm32n6-app2_STM32N6570-DK.mpool"', '', 'Error: optimize round /home/atonci/ci/atonci/workspace/release_STAI_21/onnx_backend/onnx_passes/transform_gemm_fc_into_conv.cc:201: runTransform: Assertion `(b_shape.size() == 1) || ((b_shape[0].dim == M) || b_shape[0].dim == N)` failed.', '', 'Warning: Missing Quantization info for Gemm_1_weights; will consider Gemm_1_weights as a native Float', 'Warning: Missing Quantization info for Gemm_1_weights; will consider Gemm_1_weights as a native Float', 'Warning: Missing Quantization info for Gemm_3_weights; will consider Gemm_3_weights as a native Float', 'Warning: Missing Quantization info for Gemm_3_weights; will consider Gemm_3_weights as a native Float', 'Warning: Missing Quantization info for Gemm_5_weights; will consider Gemm_5_weights as a native Float', 'Warning: Missing Quantization info for Gemm_5_weights; will consider Gemm_5_weights as a native Float', 'NOT all epochs mapped on epoch controller: num_epochs=11 num_mapped_epochs=2 num_blobs=2']

But when I use the stedgeai generate command without the --st-neural-art parameter, the conversion works and I get the network.c/h files. However, I am missing the "network_atonbuf.xSPI2.raw" file, which I think is needed to use the model within the application (correct me if I'm wrong and the network.c/h files are enough).

So my question is: can I use the converted C code of my tflite model, generated with stedgeai generate without the --st-neural-art parameter and without the "network_atonbuf.xSPI2.raw" file?

 

If any further information is needed, feel free to let me know!

Kind regards

1 ACCEPTED SOLUTION

Julian E.
ST Employee

Hello @lyannen,

 

The --st-neural-art option enables the NPU.

Without it, the model runs only on the MCU, or put differently, only with SW epochs.

But yes, it works.

 

I am not sure exactly what your error means.

All I can say is that, in any case, for the NPU to be used for the computation, the layers in your model must be quantized to int8.
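To illustrate what "quantized to int8" means on the model side, here is a minimal sketch of full-integer post-training quantization with the TensorFlow Lite converter. The SavedModel directory, input shape, and representative-dataset generator are placeholders you would need to adapt to your own model; this is a generic converter configuration, not an ST-specific recipe.

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Placeholder: yield a few samples shaped like the real model input,
    # ideally drawn from your training/validation data.
    for _ in range(100):
        yield [np.random.rand(1, 32, 8).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")  # hypothetical path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force every op onto its int8 kernel; conversion fails if an op has none.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```

A model exported this way should no longer trigger the "is not quantized" warnings during stedgeai generate.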

 

In short:

no --st-neural-art -> runs on the MCU

not quantized to int8 -> runs on the MCU

otherwise -> runs on the NPU if the layer is supported, with a potential SW fallback to the MCU
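The rules above can be restated as a toy dispatch function (a hypothetical helper for illustration only, not part of any ST tool):

```python
def execution_target(uses_st_neural_art: bool,
                     layer_is_int8: bool,
                     layer_supported_by_npu: bool) -> str:
    """Toy restatement of the NPU/MCU dispatch rules described above."""
    if not uses_st_neural_art:
        return "MCU"            # no --st-neural-art -> MCU only
    if not layer_is_int8:
        return "MCU"            # not quantized to int8 -> SW epoch on MCU
    if layer_supported_by_npu:
        return "NPU"            # quantized and supported -> accelerated
    return "MCU (SW fallback)"  # unsupported layer falls back to software

print(execution_target(False, True, True))   # MCU
print(execution_target(True, False, True))   # MCU
print(execution_target(True, True, True))    # NPU
print(execution_target(True, True, False))   # MCU (SW fallback)
```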

 

You can find all the documentation here:

https://stedgeai-dc.st.com/assets/embedded-docs/index.html

In particular, for the NPU, see the section: Neural-ART accelerator™ target

 

Have a good day,

Julian


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.

