RandomForestClassifier using stedgeai

djvodasafe
Associate

I am attempting to deploy an sklearn RandomForestClassifier on an STM32H7 using ST Edge AI.

I am using the following function to export the RandomForestClassifier to ONNX:

[Screenshot: Python function that exports the RandomForestClassifier to ONNX]
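For reference, the export is along the lines of the following sketch (using skl2onnx; the exact code is in the screenshot above and may differ in details):

```python
# Minimal sketch of exporting an sklearn RandomForestClassifier to ONNX with skl2onnx.
# Function and variable names are illustrative, not the exact code from the screenshot.
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

def export_rf_to_onnx(model: RandomForestClassifier, n_features: int, path: str = "rf.onnx") -> str:
    # Declare a single float32 input of shape (batch, n_features)
    initial_types = [("input", FloatTensorType([None, n_features]))]
    onnx_model = convert_sklearn(
        model,
        initial_types=initial_types,
        # zipmap=False turns the probability output into a plain tensor instead of a
        # sequence of maps, which is easier to handle on an embedded target
        options={id(model): {"zipmap": False}},
    )
    with open(path, "wb") as f:
        f.write(onnx_model.SerializeToString())
    return path
```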

My call to the stedgeai CLI is:

stedgeai generate --model ../Python/ml_model/rf.onnx --optimization balanced --binary --address 0x91A00000 --target stm32h7 --name aq_network_rf --workspace ../Python/ml_model/workspace_rf --output ../Python/ml_model/output_rf
 
When I subsequently perform inference, the results do not match my Python tests. I then tried the validate command:
stedgeai validate -m ../Python/ml_model/rf.onnx --target stm32 -vi ../Python/ml_model/test/test_rf_input.npy --output ../Python/ml_model/output_rf
 
The output from this validate call shows a discrepancy between the C outputs and the original model outputs, found in `network_val_c_outputs_2.csv` and `network_val_m_outputs_2.csv` respectively. The second file matches the Python tests. Contents of the files are below:
`network_val_c_outputs_2.csv`
[Screenshot: contents of network_val_c_outputs_2.csv]

`network_val_m_outputs_2.csv`

[Screenshot: contents of network_val_m_outputs_2.csv]

Next, I repeated the above experiment with an sklearn DecisionTreeClassifier. In that case, the C outputs matched the original model, both in the stedgeai validation and in my Python tests.

I am looking for support to successfully convert and deploy the RandomForestClassifier to my STM32H7 device. If anyone has experience with this kind of deployment, I would appreciate the help.

 

Accepted Solution
Julian E.
ST Employee

Hello @djvodasafe,

 

The ONNX model and the C version generated by the ST Edge AI Core are not byte-for-byte identical.

When you run validate on the host (PC), you compare the execution of the ONNX model (Python) against the C version. You will probably see differences because, as far as I know, the kernels are not exactly the same, for optimization reasons.

 

If you run validate on target (your STM32), it compares the C code running on the PC (x86) with the C code running on the STM32 (Cortex-M). Here you may also see differences because of compilation flags and optimizations done by Arm.

 

The goal here is to check the COS (cosine similarity) metric reported when running validate and make sure it is close to 1 (around 0.99 is good; 0.9 is bad). If that is the case, it means the results are very close.
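If you want to double-check that metric yourself, a rough sketch of computing the cosine similarity from the two validation CSV files (assuming they are plain comma-separated numeric tables of the same shape) is:

```python
# Rough sketch: cosine similarity between the C outputs and the original-model outputs.
# Assumes both CSVs are plain comma-separated numeric tables with identical shapes.
import numpy as np

c_out = np.loadtxt("network_val_c_outputs_2.csv", delimiter=",", ndmin=2)
m_out = np.loadtxt("network_val_m_outputs_2.csv", delimiter=",", ndmin=2)

cos = np.sum(c_out * m_out, axis=1) / (
    np.linalg.norm(c_out, axis=1) * np.linalg.norm(m_out, axis=1)
)
print("per-sample COS:", cos)
print("mean COS:", cos.mean())
```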

 

The results you get depend on the model you use and how it is converted to C.

In your case, the DecisionTreeClassifier probably gives a COS of 1, since you see exactly the same results.

 

For your RandomForestClassifier, you want to test, with real data if possible, whether you get a similar accuracy / a good COS. If you do, then it is fine, even if the results are not identical. If you don't, you probably need to retrain or change the model.
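As a quick sanity check, you can also compare the predicted classes rather than the raw scores, for example (assuming each CSV row holds the per-class scores for one sample):

```python
# Rough sketch: how often the C model and the original model agree on the predicted class.
# Assumes each row of the CSVs holds the per-class scores for one sample.
import numpy as np

c_out = np.loadtxt("network_val_c_outputs_2.csv", delimiter=",", ndmin=2)
m_out = np.loadtxt("network_val_m_outputs_2.csv", delimiter=",", ndmin=2)

agreement = (c_out.argmax(axis=1) == m_out.argmax(axis=1)).mean()
print(f"predicted-class agreement: {agreement:.2%}")
```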

 

Have a good day,

Julian

