
Keras model on CubeMX AI is giving accuracy = n.a.

Zzoua.1
Associate

Hello,

I have a Keras model generated on Google Colab. I have tested it on Google Colab and it works fine. I wanted to export it to CubeMX, but when I start the "validation on desktop" process I get an accuracy of n.a.

Here is the output from my CubeMX:

c_id  m_id  desc           output                        ms      %
--------------------------------------------------------------------------
0     0     Dense (0x104)  (1,1,1,1)/float32/4B          0.004   10.6%
1     1     Dense (0x104)  (1,1,1,30)/float32/120B       0.003    9.5%
2     1     NL (0x107)     (1,1,1,30)/float32/120B       0.003    8.8%
3     2     Dense (0x104)  (1,1,1,30)/float32/120B       0.007   18.7%
4     2     NL (0x107)     (1,1,1,30)/float32/120B       0.003    7.4%
5     3     Dense (0x104)  (1,1,1,10)/float32/40B        0.004   11.7%
6     3     NL (0x107)     (1,1,1,10)/float32/40B        0.002    6.4%
7     4     Dense (0x104)  (1,1,1,6)/float32/24B         0.003    7.8%
8     4     NL (0x107)     (1,1,1,6)/float32/24B         0.002    6.4%
9     5     Dense (0x104)  (1,1,1,1)/float32/4B          0.002    6.4%
10    5     NL (0x107)     (1,1,1,1)/float32/4B          0.002    6.4%
--------------------------------------------------------------------------
                                                         0.035 ms

NOTE: duration and exec time per layer is just an indication. They are dependent of the HOST-machine work-load.

Running the Keras model...
Saving validation data...
 output directory: C:\Users\Administrateur\.stm32cubemx\network_output
 creating C:\Users\Administrateur\.stm32cubemx\network_output\network_val_io.npz
 m_outputs_1: (10, 1, 1, 1)/float32, min/max=[0.053, 0.241], mean/std=[0.112, 0.066], dense_19
 c_outputs_1: (10, 1, 1, 1)/float32, min/max=[0.053, 0.241], mean/std=[0.112, 0.066], dense_19

Computing the metrics...
 Cross accuracy report #1 (reference vs C-model)
 ----------------------------------------------------------------------------------------------------
 notes: - the output of the reference model is used as ground truth/reference value
        - 10 samples (1 items per sample)
  acc=n.a., rmse=0.000000010, mae=0.000000007, l2r=0.000000074

Evaluation report (summary)
-----------------------------------------------------------------------------------------------------------------------------------
Output       acc    rmse          mae           l2r           mean           std           tensor
-----------------------------------------------------------------------------------------------------------------------------------
X-cross #1   n.a.   0.000000010   0.000000007   0.000000074   -0.000000002   0.000000010   dense_19, ai_float, (1,1,1,1), m_id=[5]
-----------------------------------------------------------------------------------------------------------------------------------
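For reference, the rmse, mae, and l2r values in a report like this can be reproduced from the saved output arrays (m_outputs = Keras reference, c_outputs = C-model). The exact formulas X-CUBE-AI uses are not stated in the post; this sketch assumes the common definitions:

```python
import numpy as np

def cross_metrics(ref, pred):
    """Error metrics between a reference model output and a C-model output.

    ref  : reference (e.g. Keras) outputs, any shape
    pred : C-model outputs, same shape
    Returns (rmse, mae, l2r) assuming the usual definitions:
      rmse = sqrt(mean(err^2)), mae = mean(|err|),
      l2r  = ||err||_2 / ||ref||_2  (relative L2 error)
    """
    ref = np.asarray(ref, dtype=np.float64).ravel()
    pred = np.asarray(pred, dtype=np.float64).ravel()
    err = pred - ref
    rmse = float(np.sqrt(np.mean(err ** 2)))
    mae = float(np.mean(np.abs(err)))
    l2r = float(np.linalg.norm(err) / np.linalg.norm(ref))
    return rmse, mae, l2r
```

With errors on the order of 1e-8 (as in the report above), the C model matches the Keras reference to float32 precision; only the accuracy column is n.a.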

1 REPLY
jean-michel.d
ST Employee

Hi, Zzoua,

Computation of the ACC metric (classification accuracy) is performed only if the model is considered a classifier. This implies that the output of the model should represent probabilities within a given tolerance; otherwise it is considered a regression model and this metric is not applicable (n.a.).

According to the min/max values of your outputs, I suppose that the output of the model has a "standard" non-linearity (not a softmax operator), which explains why the ACC is not computed. In the CLI or through the UI, you have the possibility to force the computation of the ACC ("--classifier" option) if this is relevant for your model.
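To illustrate the kind of check described above (this is a sketch of the general idea, not X-CUBE-AI's internal code; the function name and tolerance are assumptions): an output is probability-like when each sample's values lie in [0, 1] and sum to roughly 1, as a softmax produces.

```python
import numpy as np

def looks_like_classifier(outputs, tol=1e-3):
    """Heuristic: treat the model as a classifier only if every
    output vector is probability-like (values in [0, 1], sum ~1)."""
    flat = np.asarray(outputs, dtype=np.float64).reshape(len(outputs), -1)
    in_range = np.all((flat >= 0.0) & (flat <= 1.0))
    sums_to_one = np.allclose(flat.sum(axis=1), 1.0, atol=tol)
    return bool(in_range and sums_to_one)

# softmax-style output: accuracy is meaningful
print(looks_like_classifier([[0.7, 0.2, 0.1]]))  # True
# single scalar output in [0.053, 0.241], as in the report: acc = n.a.
print(looks_like_classifier([[0.112]]))          # False
```

The outputs in the report above (single float per sample, min/max = [0.053, 0.241]) fail such a check, which is consistent with acc=n.a. unless the computation is forced.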

br,

Jean-Michel