
Why does "validate on target" with my ONNX model give two different outputs?

cxf
Associate III

Hi,

I have run "validate on target" with my ONNX model on the STM32N6 board in STM32CubeMX, but I get two different outputs, like this:

m_outputs_1: (10, 3)/float64, min/max=[-282.578064, 481.370483], mean/std=[-2.619613, 298.301398], output
m_outputs_2: (10, 1)/float64, min/max=[0.000000, 0.000000], mean/std=[0.000000, 0.000000], node_139
c_outputs_1: (10, 1, 1, 3)/float32, min/max=[-4.547175, 5.813805], mean/std=[0.008044, 4.321293], output
c_outputs_2: (10, 1, 1, 1)/float32, min/max=[0.000800, 0.000800], mean/std=[0.000800, 0.000000], node_139

The m_outputs_1 result is wrong and c_outputs_1 is correct. Why? What can cause this problem?

Please help me, thank you very much!

I have uploaded report.txt and the ONNX model as attachments.

Julian E.
ST Employee

Hello @cxf,

 

Can you give me more context about what you have done exactly:

  • Which version of X-Cube-AI do you use?
  • Which board: Nucleo or DK N6?
  • Did you use custom data for validation, or the default random data, when running validate on target?
  • Are you using STM32CubeIDE as the IDE for the validation on target, or another IDE?

 

I tested with the Nucleo N6 and N6 DK boards, with versions 10.1.0 and 10.2.0 of X-Cube-AI.

I am using STM32CubeIDE on Windows, but I could not reproduce your issue.

 

Here is an example of report I get, which is correct:

Saving validation data...
 output directory: C:\ST\STEdgeAI\2.2\scripts\N6_scripts\st_ai_output
 creating C:\ST\STEdgeAI\2.2\scripts\N6_scripts\st_ai_output\network_val_io.npz
 m_outputs_1: (10, 3)/float64, min/max=[-282.578064, 481.370483], mean/std=[-2.619613, 298.301398], output
 m_outputs_2: (10, 1)/float64, min/max=[0.000000, 0.000000], mean/std=[0.000000, 0.000000], node_139
 c_outputs_1: (10, 1, 1, 3)/float32, min/max=[-282.578094, 481.370422], mean/std=[-2.619609, 298.301392], output
 c_outputs_2: (10, 1, 1, 1)/float32, min/max=[0.000000, 0.000000], mean/std=[0.000000, 0.000000], node_139

 
Computing the metrics...

 Cross accuracy report #1 (reference vs C-model)
 ----------------------------------------------------------------------------------------------------
 notes: - data type is different: r/float64 instead p/float32
        - ACC metric is not computed ("--classifier" option can be used to force it)
        - the output of the reference model is used as ground truth/reference value
        - 10 samples (3 items per sample)

  acc=n.a. rmse=0.000060476 mae=0.000043488 l2r=0.000000203 mean=-0.000004 std=0.000061 nse=1.000000 cos=1.000000 

 Cross accuracy report #2 (reference vs C-model)
 ----------------------------------------------------------------------------------------------------
 notes: - data type is different: r/float64 instead p/float32
        - the output of the reference model is used as ground truth/reference value
        - 10 samples (1 items per sample)

  acc=n.a. rmse=0.000000000 mae=0.000000000 l2r=0.000000000 mean=0.000000 std=0.000000 nse=1.000000 cos=1.000000 


Evaluation report (summary)
---------------------------------------------------------------------------------------------------------------------------------------------------
Output       acc    rmse          mae           l2r           mean        std        nse        cos        tensor                                  
---------------------------------------------------------------------------------------------------------------------------------------------------
X-cross #1   n.a.   0.000060476   0.000043488   0.000000203   -0.000004   0.000061   1.000000   1.000000   'output', 10 x f32(1x3), m_id=[110]     
X-cross #2   n.a.   0.000000000   0.000000000   0.000000000   0.000000    0.000000   1.000000   1.000000   'node_139', 10 x f32(1x1), m_id=[112]   
---------------------------------------------------------------------------------------------------------------------------------------------------
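If you want to sanity-check these metrics yourself, they can be recomputed from the network_val_io.npz file that the validation saves. Here is a minimal Python sketch; note that the m_outputs_1/c_outputs_1 key names are an assumption about the npz layout, so print data.files first to see what your version actually writes:

import numpy as np

# Load the dump written during "validate on target"
data = np.load("network_val_io.npz")
print(data.files)  # check the actual key names in your npz

# Assumed keys: reference (ONNX) output vs C-model output
ref = np.asarray(data["m_outputs_1"], dtype=np.float64)
c = np.asarray(data["c_outputs_1"], dtype=np.float64).reshape(ref.shape[0], -1)
ref = ref.reshape(ref.shape[0], -1)

err = c - ref
rmse = np.sqrt(np.mean(err ** 2))
mae = np.mean(np.abs(err))
l2r = np.linalg.norm(err) / np.linalg.norm(ref)  # relative L2 error
cos = np.dot(ref.ravel(), c.ravel()) / (np.linalg.norm(ref) * np.linalg.norm(c))
print(f"rmse={rmse:.9f} mae={mae:.9f} l2r={l2r:.9f} cos={cos:.6f}")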

 

Have a good day,

Julian

 


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.

Hello @Julian E.,

 

Thank you very much for your suggestions. Here is my development environment:

1: X-Cube-AI 10.1.0

2: STM32N6570-DK board

3: validation input: random numbers

4: STM32CubeIDE 1.18.1 on Windows 10

5: STM32CubeMX 6.15.0

 

Regarding your result, I have two questions:

1: Your m_outputs_1 and c_outputs_1 are almost the same, so the output has stabilized; that is progress. But they are both wrong, because min < -100 and max > 100, while the correct result should have min/max within [-30, 30].

 

2: In my STM32CubeMX, when I click "Check for Updates", X-Cube-AI 10.1.0 is the highest version offered. Where is 10.2.0?

 

Thanks!

hamitiya
ST Employee

Hello @cxf,

It seems that X-Cube-AI 10.2.0 is not yet available in the STM32CubeMX updater. You can install it manually by downloading the package from this link:

X-CUBE-AI - AI expansion pack for STM32CubeMX - STMicroelectronics

 

Best regards,

Yanis



Hello @cxf,

 

As of now, you need to install X-Cube-AI 10.2.0 from local files. You can download it here:

X-CUBE-AI - AI expansion pack for STM32CubeMX - STMicroelectronics

Then install it by clicking "From Local..." in the package installation manager in STM32CubeMX.

 

I didn't understand your other point:

Your m_outputs_1 and c_outputs_1 are almost the same, so the output has stabilized; that is progress. But they are both wrong, because min < -100 and max > 100, while the correct result should have min/max within [-30, 30].

Where does the [-100, 100] come from, and more importantly, why do you say that [-30, 30] is the correct range?

 

Have a good day,

Julian



Hi @Julian E.,

Thank you very much for your help!

 

The reason is that my colleague has successfully run "validate on target" in the cloud with the model service and verified that the correct results are between -30 and 30. So I want to reproduce this result with STM32CubeMX.

 

Now I have installed X-Cube-AI 10.2.0 successfully following your instructions, generated the code in STM32CubeMX, imported it into STM32CubeIDE, and added some hardware initialization functions. When I click the "validate on target" button, I get this result:

Saving validation data...
output directory: C:\Users\lenovo\.stm32cubemx\network_output
creating C:\Users\lenovo\.stm32cubemx\network_output\network_val_io.npz
m_outputs_1: (10, 3)/float64, min/max=[-282.578064, 481.370483], mean/std=[-2.619613, 298.301398], output
m_outputs_2: (10, 1)/float64, min/max=[0.000000, 0.000000], mean/std=[0.000000, 0.000000], node_139
c_outputs_1: (10, 1, 1, 3)/float32, min/max=[-4.547175, 5.813805], mean/std=[0.008044, 4.321293], output
c_outputs_2: (10, 1, 1, 1)/float32, min/max=[0.000800, 0.000800], mean/std=[0.000800, 0.000000], node_139


Computing the metrics...

Cross accuracy report #1 (reference vs C-model)
----------------------------------------------------------------------------------------------------
notes: - data type is different: r/float64 instead p/float32
- ACC metric is not computed ("--classifier" option can be used to force it)
- the output of the reference model is used as ground truth/reference value
- 10 samples (3 items per sample)

acc=n.a. rmse=294.400878906 mae=274.044097900 l2r=68.127830505 mean=-2.627660 std=299.421814 nse=0.058448 cos=0.906592

Cross accuracy report #2 (reference vs C-model)
----------------------------------------------------------------------------------------------------
notes: - data type is different: r/float64 instead p/float32
- the output of the reference model is used as ground truth/reference value
- 10 samples (1 items per sample)

acc=n.a. rmse=0.000799542 mae=0.000799542 l2r=0.999952853 mean=-0.000800 std=0.000000 nse=-4.362565 cos=1.000000


Evaluation report (summary)
-----------------------------------------------------------------------------------------------------------------------------------------------------------
Output       acc    rmse            mae             l2r            mean        std          nse         cos        tensor
-----------------------------------------------------------------------------------------------------------------------------------------------------------
X-cross #1   n.a.   294.400878906   274.044097900   68.127830505   -2.627660   299.421814   0.058448    0.906592   'output', 10 x f32(1x3), m_id=[110]
X-cross #2   n.a.   0.000799542     0.000799542     0.999952853    -0.000800   0.000000     -4.362565   1.000000   'node_139', 10 x f32(1x1), m_id=[112]
-----------------------------------------------------------------------------------------------------------------------------------------------------------

 

So my question is: why are my two results so different, while yours are almost the same?

Is there a problem somewhere?

cxf
Associate III

Hi @Julian E.,

 

By the way, I have attached my network_analyze_report.txt and network_validate_report.txt.

Thanks!

Hello @cxf,

 

When you say "So my question is: why are my two results so different, while yours are almost the same?", you are referring to this, right?

X-cross #1 n.a. 294.400878906 274.044097900 68.127830505 -2.627660 299.421814 0.058448 0.906592 'output', 10 x f32(1x3), m_id=[110]
X-cross #2 n.a. 0.000799542 0.000799542 0.999952853 -0.000800 0.000000 -4.362565 1.000000 'node_139', 10 x f32(1x1), m_id=[112]

 

In my case, I ran the validation with the model you shared. That model is not quantized.

In your case, I believe, you are using the quantized version of your model (which you may have got from the dev cloud).

 

As for the validation on the dev cloud versus in X-Cube-AI: both run the same command in the background: stedgeai.exe validate ...

 

The only things that change are:

  • By default, the inputs are random, so if you do not provide an input file (you can do this with an npy file in X-Cube-AI), you will not see the same results. I ran the validation 10 times on the dev cloud and got 10 different results.
  • The config can also differ. When you run the validate command, it uses the default user_neuralart.json.

You can find user_neuralart.json in users/<your_user>/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/10.1.0/scripts/N6_scripts or in ST/stedgeai/2.2/scripts/N6_scripts.

 

If you use the same profile (compilation options) and the same data, you will get the same results.
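To make such a comparison reproducible, you can pin the model and data explicitly on the command line. Purely as an illustration (the -vi and -vo options are the ones referenced in this thread and the documentation; treat the other flag spellings as assumptions and check stedgeai --help for your version):

stedgeai.exe validate --model network.onnx --target stm32n6 -vi inputs.npy -vo labels_1.npy labels_2.npy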

 

Also, to give more context on the different results between quantized and unquantized models: when you quantize a model, you risk losing quite a lot of precision.

In my case, the model is not quantized, so the COS (the similarity) between my .onnx model and the C model is 1.0000 (100% similar).

When the model is quantized, you may lose precision in the int8 layers of the C model, as the sketch below illustrates.
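Here is a toy Python sketch of that effect (this is not the actual X-CUBE-AI quantizer, just a symmetric per-tensor int8 round-trip) showing how the cosine similarity drops slightly below 1.0:

import numpy as np

# Symmetric per-tensor int8 quantization, then dequantization
rng = np.random.default_rng(0)
x = (rng.standard_normal(1000) * 30.0).astype(np.float32)

scale = np.abs(x).max() / 127.0               # per-tensor scale factor
xq = np.clip(np.round(x / scale), -128, 127)  # int8 codes
x_deq = (xq * scale).astype(np.float32)       # dequantized values

cos = np.dot(x, x_deq) / (np.linalg.norm(x) * np.linalg.norm(x_deq))
print(f"cosine similarity after int8 round-trip: {cos:.6f}")  # slightly below 1.0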

 

Have a good day,

Julian

 



Hi @Julian E. 

Thank you very much for your help!

 

Now I understand why my two results are different, but I still have a problem.

Since your results are almost the same, I think you must be using an npy file as the validation input instead of random input. Because I do not know the npy file structure, I cannot generate an npy file as the validation input. I tried selecting a non-npy file as the validation input, but STM32CubeMX does not recognize it and gives me an error.

Can you share your npy file with me, so that I can reproduce the same result?

Thanks!

 

I have also attached my user_neuralart.json, but I do not think the problem is there, because it is the default file.

Hello @cxf,

 

So if we take the model you shared in your first message, the non-quantized one, it looks like this:

JulianE_0-1754040105290.png

(viewed in Netron)

 

You have one input of shape (2,625) and 2 outputs of shape (3,) and (1,).

 

Here is a script that generates npy files with 10 samples:

import numpy as np

# --- Config ---
num_samples = 10
input_shape = (2, 625)  # Your model input
output1_shape = (3,)     # First output (e.g., classification with 3 classes)
output2_shape = (1,)     # Second output (e.g., binary or regression)

# --- Generate Dummy Inputs ---
inputs = np.random.rand(num_samples, *input_shape).astype(np.float32)

# --- Generate Output 1: One-hot class labels (3 classes) ---
labels_1 = np.zeros((num_samples, *output1_shape), dtype=np.float32)
for i in range(num_samples):
    class_index = np.random.randint(0, 3)
    labels_1[i, class_index] = 1.0

# --- Generate Output 2: Binary or regression values ---
labels_2 = np.random.rand(num_samples, *output2_shape).astype(np.float32)

# --- Save as .npy ---
np.save("inputs.npy", inputs)
np.save("labels_1.npy", labels_1)
np.save("labels_2.npy", labels_2)

print("Files saved: inputs.npy, labels_1.npy, labels_2.npy")

 

Here I used random values; if you have real data and labels, you can edit the code accordingly.

The outputs of this script are:

  • inputs.npy
  • labels_1.npy and labels_2.npy
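Before loading these files in STM32CubeMX, you can quickly verify that they have the shapes the script is supposed to produce:

import numpy as np

# Sanity-check the generated files: 10 samples each, float32
for name in ("inputs.npy", "labels_1.npy", "labels_2.npy"):
    arr = np.load(name)
    print(name, arr.shape, arr.dtype)

# Expected:
#   inputs.npy   (10, 2, 625) float32
#   labels_1.npy (10, 3) float32
#   labels_2.npy (10, 1) float32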

 

Then in STM32CubeMX, in the X-Cube-AI panel:

  • Select the model 
  • Select the input

JulianE_1-1754040357939.png

 

For the outputs, because you have 2 files here, you cannot just use the "validation outputs" option. You need to go into the options (click the blue wheel above "show graph" on the right) and add an "Extra command line option":

JulianE_2-1754040455124.png

The option to add is:

-vo path_to_labels_1.npy path_to_labels_2.npy

I set the paths starting from the root, e.g. C:/..../labels_1.npy

 

Then click OK. It may launch something automatically.

Then click on "Validate on target" and set the right options:

JulianE_3-1754040630074.png

and click OK.

 

For more information about validation and the -vi (inputs) / -vo (outputs) options, see this documentation:

https://stedgeai-dc.st.com/assets/embedded-docs/stneuralart_getting_started.html

 

Have a good day,

Julian

 

