
ST Edge AI Core model validation on STM32MP257: missing "inference time per node" information

charlie128
Associate II

Hello

I want to use ST Edge AI Core to validate a model on the STM32MP257.

When I set the parameter --mode host, the report includes an "Inference time per node" section.

command: stedgeai validate -m ./linear_model_quantized.onnx --target stm32mp2 --mode host -v 2

Part of the report:

Inference time per node
---------------------------------------------------------------------------
c_id  m_id  type           dur (ms)      %   cumul  name
---------------------------------------------------------------------------
0     0     NL (0x107)        0.001  30.8%   30.8%  ai_node_0
1     18    Dense (0x104)     0.001  30.8%   61.5%  ai_node_1
2     18    NL (0x107)        0.001  26.9%   88.5%  ai_node_2
---------------------------------------------------------------------------
n/a   n/a   Inter-nodal       0.000  11.5%  100.0%  n/a
---------------------------------------------------------------------------
            total             0.003
---------------------------------------------------------------------------

 

But I want to see the actual inference time of each layer when the model runs on the NPU.

So I ran the command: stedgeai validate -m ./linear_model_quantized.onnx --target stm32mp2 --mode target -d mpu:192.168.0.10:npu -v 2

The report is in the attached files; it does not include the "Inference time per node" section.

 

Is it because validation on the STM32MP257 does not support "Inference time per node", or did I use the wrong tool parameters or firmware version?

 

Thanks

 

 

1 ACCEPTED SOLUTION

Julian E.
ST Employee

Hello @charlie128,

 

The validate mode is designed to compare the generated NBG model against the original ONNX model.

The goal is not to benchmark the model on the board.

 

To benchmark the model, you need to send it to the target and run x-linux-ai-benchmark.

I am not sure that per-layer inference time is a supported feature.
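
Roughly, the flow looks like the sketch below. The output file name and the x-linux-ai-benchmark invocation are illustrative (the exact options depend on your X-LINUX-AI version, so check the wiki pages below for the real syntax); the board IP is taken from your command.

# 1. Generate the NBG model for the NPU (on the host)
stedgeai generate -m ./linear_model_quantized.onnx --target stm32mp2

# 2. Copy the generated model to the board (output file name is illustrative)
scp ./st_ai_output/linear_model_quantized.nb root@192.168.0.10:/usr/local/

# 3. On the board, run the benchmark tool from the X-LINUX-AI package
#    (invocation is indicative; see the benchmark wiki page for the exact syntax)
x-linux-ai-benchmark -m /usr/local/linear_model_quantized.nb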

 

Learn more here:

https://wiki.st.com/stm32mpu/wiki/How_to_deploy_your_NN_model_on_STM32MPU

https://wiki.st.com/stm32mpu/wiki/How_to_benchmark_your_NN_model_on_STM32MPU

 

Have a good day,

Julian

 


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.



Thanks a lot!