Similarity constantly at 100 using NanoEdge AI Studio.

Edzzio
Associate II

Hello, 

First off, I'd like to preface this by saying that I'm still quite new to the STM32 environment as a whole, and, being a student, my experience with embedded systems is quite limited, so please feel free to correct me.

I'm currently working on a Nucleo-F446RE board and want to implement real-time anomaly detection (using CMSIS-RTOS) on the following simulated time-series data (essentially random numbers generated with a Python script, with values separated by tabs):

Abnormal temperature data (randomly generated)

Normal data
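For reference, tab-separated data of this shape can be produced with a short Python script along these lines (the file names, buffer length, baseline temperature, and spike model are illustrative assumptions, not the original generator):

```python
import numpy as np

RNG = np.random.default_rng(seed=0)
SAMPLES_PER_SIGNAL = 256   # assumption: fixed buffer length per signal
N_SIGNALS = 100

def make_signal(anomalous: bool) -> np.ndarray:
    """One simulated temperature signal: baseline noise, plus spikes if anomalous."""
    signal = 25.0 + RNG.normal(0.0, 0.5, SAMPLES_PER_SIGNAL)  # ~25 °C baseline
    if anomalous:
        spikes = RNG.choice(SAMPLES_PER_SIGNAL, size=10, replace=False)
        signal[spikes] += RNG.uniform(10.0, 20.0, size=10)    # abnormal excursions
    return signal

def write_dataset(path: str, anomalous: bool) -> None:
    """NanoEdge input format: one signal per line, values separated by tabs."""
    with open(path, "w") as f:
        for _ in range(N_SIGNALS):
            f.write("\t".join(f"{x:.3f}" for x in make_signal(anomalous)) + "\n")

write_dataset("normal.txt", anomalous=False)
write_dataset("abnormal.txt", anomalous=True)
```

One line per signal keeps each row an independent learning example, which is how the Studio splits the file during benchmarking.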

I then benchmarked different models in NanoEdge AI Studio and selected the suggested one.

Following the documentation for implementing the detection model in STM32CubeIDE, I chose to initialize it with the knowledge obtained while benchmarking, since I don't yet have a way to collect real data:

Model initialization with knowledge

 

There were no errors up to this point. To check whether the model is functional, I used a random buffer of samples generated with the Python script (the example below is for illustration purposes only):

Abnormal buffer of samples (example)

I can then try to detect anomalies with the following code:

Anomaly detection function

Unfortunately, whether I use an abnormal buffer, a normal one, or even an array of constant values, the similarity is always 100:

Anomaly detection results

 

Could I be doing something wrong? Or are these results to be expected?

With my limited experience, I've come up with the following plausible causes:

  • The signal-processing step is not included in the libneai.a file
  • The data format is not respected (a float array in my case)
  • The knowledge is not correctly integrated during the initialization phase
  • The model's execution time (0.2 ms) might exceed the real-time deadline given to the task, so detection is not performed correctly (I tried giving the prediction task a higher priority, with no change in the results)

Thank you in advance for any advice.

 

1 ACCEPTED SOLUTION


Hello @Julian E. ,

In my case, a mix of several issues led to the constant similarity of 100:

  • The model was not correctly trained on the knowledge (knowledge_buffer → knowledge)
  • The anomaly-detection model itself was at fault: it was overfitted and had a hard time generalizing to even slightly different data

I then trained an nCC model instead, which allowed a pseudo-detection of anomalies. Although this is not the optimal way of detecting anomalies, it at least unblocked me.
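NanoEdge's nCC implementation is not public, but the idea behind n-class classification can be sketched with a toy nearest-centroid classifier in Python (the class means and data below are illustrative assumptions, not the thread's dataset):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy training sets: "normal" signals sit around 25 °C,
# "abnormal" ones drift around 32 °C (both purely illustrative).
normal_train = 25.0 + rng.normal(0.0, 0.5, size=(50, 64))
abnormal_train = 32.0 + rng.normal(0.0, 2.0, size=(50, 64))

# An n-class classifier in its simplest form: one centroid per class,
# each incoming signal gets the label of the nearest centroid.
centroids = {
    "normal": normal_train.mean(axis=0),
    "abnormal": abnormal_train.mean(axis=0),
}

def classify(signal: np.ndarray) -> str:
    """Return the class whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda c: float(np.linalg.norm(signal - centroids[c])))

print(classify(25.0 + rng.normal(0.0, 0.5, 64)))   # a fresh normal signal
print(classify(32.0 + rng.normal(0.0, 2.0, 64)))   # a fresh abnormal signal
```

This illustrates why a classifier can "pseudo-detect" anomalies: it only separates the classes it was shown, whereas a true anomaly detector must flag data unlike anything it learned.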

Thank you for your response!


3 REPLIES
Julian E.
ST Employee

Hello @Edzzio ,

 

What are the sizes of your training datasets?

Overfitting could be the cause. Can you use your board to send the signal over serial and test the libraries with the "Validation step" Serial Emulator, to see whether you get the same issue? (Try multiple libraries.)

If you see the same behavior inside the Studio, the issue comes from the library; otherwise, it may be due to the real-time application, as you pointed out.

 

Please try the Studio's serial emulator and tell me what you observe.
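For anyone following along: the Serial Emulator simply consumes lines of delimited float samples, so a PC-side sketch of that framing can look like this in Python (the tab delimiter, newline terminator, port name, and baud rate are assumptions to be matched against the settings chosen in the Studio):

```python
import random

def frame_signal(values, delimiter="\t"):
    """Format one buffer of samples as a single text line: values joined by
    the delimiter and terminated by a newline (both assumed, not guaranteed
    to match every Studio configuration)."""
    return delimiter.join(f"{v:.3f}" for v in values) + "\n"

# A simulated 256-sample temperature buffer, as in the original post.
buffer = [25.0 + random.gauss(0.0, 0.5) for _ in range(256)]
line = frame_signal(buffer)

# Streaming it to the emulator could then use pyserial (untested sketch;
# the port name and baud rate below are hypothetical):
# import serial
# with serial.Serial("/dev/ttyACM0", 115200) as port:
#     port.write(line.encode("ascii"))
```

Sending the very same buffers used on the board makes the comparison fair: if the Studio's libraries also report a constant similarity, the firmware and RTOS setup can be ruled out.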

 

Concerning your other questions: preprocessing is included in the libraries, and the data format should not be a problem. You also don't seem to have any issue with the knowledge initialization.

 

Have a good day,

Julian


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.


Julian E.
ST Employee

Hello @Edzzio ,

 

I am glad you solved your issue.

To avoid overfitting, the main thing to do is to use more data.

Then you can use the serial emulator, or experiments in the Validation step, to check multiple libraries.

The library at the top of the list is the best-performing/most-trained one, so you can also try other libraries found earlier in the benchmark, which are less likely to have overfitted.

 

Have a good day,

Julian

