
How to use NanoEdge AI Studio for anomaly detection

B.Montanari
ST Employee

Summary

This article offers a quick guide on how to implement anomaly detection using NanoEdge. It provides a step-by-step tutorial accessible to AI novices on how to use the tool. The demo implemented is based on a simple orientation detection application using an accelerometer. NanoEdge AI Studio generates the machine learning library, which runs on an STM32 device.

Introduction

NanoEdge AI Studio is a free software provided by ST to add AI into any embedded project running on any STM32. It empowers embedded engineers, even those unfamiliar with AI, to find the optimal AI model for their requirements through straightforward processes.

This article shows you how to use the anomaly detection project type in NanoEdge AI Studio, with step-by-step instructions on how to configure and deploy your project on any STM32 device. The example case uses the accelerometer to detect the board orientation.

Prerequisites

 

1. Starting a NanoEdge AI Studio project

1.1. Data logger generation

NanoEdge AI Studio has an integrated feature that allows you to collect and import data over serial (USB) to create your own dataset. Consider checking how to do so in this article:
How to use NanoEdge AI Studio to create a data logger

1.2. NanoEdge AI Studio Project Development

Now, back to the NanoEdge AI Studio Home Page, select the [Anomaly Detection] button to create a new project.

You will find a brief description of this project type, with its use case, user input, and studio output.

Click on [CREATE NEW PROJECT].

Figure 1 - New Anomaly Detection Project

The project creation is divided into seven simple steps. Note that not all of these steps are mandatory, and this tutorial skips the non-mandatory ones. This means that step 5: Validation and step 6: Emulator are not shown in this tutorial, and the article jumps from step 4: Benchmark straight to step 7: Deployment.

Step 1: Project settings. Here you can rename the project, choose your board and sensor, and set a limit on flash and RAM usage if needed.

On this page, you also see some characteristics of the chosen board: its description, available memory and sensors, and a link to the product page.

Figure 2 - Project Settings

Step 2: Regular signals. In this step, you need to add the signals, that is, your dataset, to your project. You can import an existing dataset or create one using NanoEdge AI Studio.

It is particularly important to follow the specified data format. For this project, the tool requires n buffers of 256 values × 3 axes (x, y, z), space-separated. The “Best Practices” section is also really helpful to make sure that you have the necessary data in your dataset. Since we are creating our own dataset, click on [ADD SIGNALS] and then on [FROM SERIAL (USB)].

For this step, remember to close the Tera Term terminal window so that NanoEdge AI Studio can collect the data successfully. Select the correct COM port and baud rate (115200); for this case, set the number of lines to 14. Once that is configured, make sure that the board is positioned properly in the steady state and then click on the [START/STOP] button to start the acquisition. For this demonstration project, we collect “steady state” signals, that is, data collected only with the board steady and horizontal.

Figure 3 - Import Signal

When it is all done, press the [CONTINUE] button and then [IMPORT] your dataset, with the default delimiter option.

 

Figure 4 - Import Signal

NanoEdge AI Studio shows the collected signal file characteristics and generates graphs of the input signal for each axis.

Figure 5 - Collected Signals

Step 3: Abnormal signals. This next step is very similar to Step 2, as we collect the abnormal signal of our application following the same steps as previously described. For this demonstration, we only turn the board upside down. This is our abnormal state.

You should have something similar to the following image:

Figure 6 - Abnormal Signal

In this step, you can choose to enable the NanoEdge AI filters to remove unwanted frequency components from the signals as well.

Figure 7 - Filter Settings

Step 4: Benchmark. Once the dataset is complete with both inputs (regular and abnormal signals), we can start our benchmark. Go to Benchmark, click on [RUN NEW BENCHMARK], select the signals to be used, and click [START].

Figure 8 - Benchmark

The benchmark process begins, and here is where NanoEdge AI Studio really shines! In this process, the tool selects the best machine learning algorithm for your application, based on the signal files provided.

You can see the score and balanced accuracy percentage, RAM, and flash usage. These parameters change in real time as new ML libraries are evaluated with your dataset.

The benchmark can take several minutes to complete, depending on your project's complexity. For this example, a few minutes are more than enough. When the score/balanced accuracy percentage is high enough for your application and the RAM/flash consumption is low enough, you can press the [STOP] button on the right to stop the automatic process at any time:

Figure 9 - Benchmark and Score

At the end, you will have more information in the “Result” area, such as the benchmark score, execution time, search duration, and how many libraries were evaluated with your project. Below this information, you will find the minimum recommended number of iterations (learn() function calls), which will be used later in the application.

Figure 10 - Results

For this benchmark, 49 libraries were used to evaluate the performance of the project. You can see each library’s results and its confusion matrix.

The winning model was ZSM, and its basic information, including the confusion matrix, can be observed:

Figure 11 - Accuracy and Confusion Matrix

As previously mentioned, step 5: Validation and step 6: Emulator are optional; they let you validate your project's performance and emulate it before deploying it onto the board.

Step 7: Deployment. In this final step, click on the [COMPILE LIBRARY] button and save the *.zip file in any path you wish. This zip file will be used later in the project to be created.

1.3. STM32CubeIDE project development

Now, let us work with the STM32CubeIDE to write our application code!

Firstly, you need to create a new STM32CubeIDE project for your board; we use the B-U585I-IOT02A in this case:

Figure 12 - STM32CubeIDE New Project

Click [Yes] to have the default peripherals configured as well. This is not strictly needed for this demo, as only the UART, I2C, and GPIO are used.

Once the graphic view loads, click on [Software Packs] > [Select Components], locate the [X-CUBE-MEMS1] software package, and add the [Board Part AccGyr] component for the ISM330DHCX, the sensor available on the board.

Figure 13 - X-CUBE-MEMS1 settings

 

This pack holds the necessary files and code for configuring the accelerometer. If you need more details on how to use it, consider watching this video tutorial: Getting Started with X-CUBE-MEMS1 (youtube.com)

In the *.ioc file's [Pinout & Configuration] tab, we first need to configure the I2C. Go to the [Connectivity] tab and select [I2C2]. In [Parameter Settings], set the I2C speed mode to “Fast Mode”. In the [GPIO Settings] tab, make sure the I2C pins used are PH4 and PH5, as per the schematic. A similar step is needed for the ISM330DHCX interrupt pin: make sure to set PE11 as GPIO_EXTI11. After this, go to [System Core] > [NVIC] and check the “EXTI Line11 interrupt” box.

 

The last peripheral to set up is USART1, so that the results can be printed over serial communication.

Go to [Connectivity] > [USART1] and configure it with 115200 baud, 8 data bits, no parity, and 1 stop bit. Make sure to change the USART1 pinout to use PA9 and PA10.

We can now generate the code by clicking on the gear icon or by pressing the Alt + K keyboard shortcut.

1.4. Adding NanoEdge AI ML library to STM32CubeIDE

The next step is to add the ML library generated by NanoEdge AI Studio into your STM32CubeIDE project. Unzip the compiled library file downloaded through NanoEdge AI Studio.

The unzipped folder should look like the figure below:

Figure 14 - Library and header
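For reference, the header exposes the constants and functions used later in the application code. Below is a simplified sketch of what NanoEdgeAI.h typically contains; the exact values and the full list of error codes depend on the library generated for your project, so treat this as an assumption, not the generated file itself.

```c
/* Simplified sketch of NanoEdgeAI.h (generated file; values are project-specific) */
#include <stdint.h>

#define AXIS_NUMBER 3        /* x, y, z */
#define DATA_INPUT_USER 256  /* samples per axis in each buffer */

/* Return codes (the real header defines additional error codes) */
enum neai_state {
  NEAI_OK = 0
};

/* Functions implemented in libneai.a */
enum neai_state neai_anomalydetection_init(void);
enum neai_state neai_anomalydetection_learn(float data_input[]);
enum neai_state neai_anomalydetection_detect(float data_input[], uint8_t *similarity);
```

For this project, one input buffer therefore holds DATA_INPUT_USER × AXIS_NUMBER = 768 float values, matching the dataset format collected earlier.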

We need to add the *.h and *.a files to our project, so the first step is to create a folder within your project for the NanoEdge AI Studio Library:

Figure 15 - Create a New Folder

 

Name the new folder “NEAI_Lib”, then create two additional folders within NEAI_Lib called “Inc” and “Lib”, just as the figure below shows:

Figure 16 - Folder Structure

With this done, drag and drop the “NanoEdgeAI.h” file from the unzipped library folder into the [NEAI_Lib] > [Inc] folder and the “libneai.a” file into the [NEAI_Lib] > [Lib] folder:

Figure 17 - Folder Populated

After adding the generated library to your project, right-click on your project name and click on “Properties”. Then go to [C/C++ General] > [Paths and Symbols] > [Includes], add the “NEAI_Lib/Inc” path, and click on [Apply].

Figure 18 - Adding Library Part 1

Move to the [Libraries] tab and add the “neai” library, then click on [Apply]:

Figure 19 - Adding Library Part 2

Finally, move to the [Library Paths] tab and add the “NEAI_Lib/Lib” library path, then click on [Apply and Close]:

Figure 20 - Adding Library Part 3

 

1.5. Let’s Code!

The code is implemented within the “main.c” file. Add the following code in each section described, using the USER CODE comments as guidance.

/* USER CODE BEGIN Includes */
//Accelerometer header
#include "ism330dhcx.h"
//I2C2 Bus and BSP header
#include "b_u585i_iot02a_bus.h"
//Standard IO library
#include <stdio.h>
//NanoEdgeAI Headers
#include "NanoEdgeAI.h"
/* USER CODE END Includes */
/* USER CODE BEGIN PD */
//Declare the accelerometer data struct
ISM330DHCX_Object_t MotionSensor;
//Data reception Indicator
volatile uint32_t dataRdyIntReceived;
/* Number of samples for learning: set by user ---------------------------------*/
#define LEARNING_ITERATIONS 20
/* USER CODE END PD */

 

/* USER CODE BEGIN PFP */
static void MEMS_Init(void);
/* USER CODE END PFP */

/* USER CODE BEGIN PV */
// Buffer of input values
float input_user_buffer[DATA_INPUT_USER * AXIS_NUMBER];
// Percentage of similarity
uint8_t similarity = 0;
/* USER CODE END PV */
/* USER CODE BEGIN 0 */
void fill_buffer(float input_buffer[])
{
      /* USER BEGIN */
      uint16_t i;
      ISM330DHCX_Axes_t acc_axes;
      for(i=0;i<(DATA_INPUT_USER*AXIS_NUMBER);i+=3)
      {
             while (dataRdyIntReceived == 0);
             dataRdyIntReceived = 0;
             ISM330DHCX_ACC_GetAxes(&MotionSensor, &acc_axes);
             input_buffer[i] =  (float) acc_axes.x;
             input_buffer[i+1] = (float) acc_axes.y;
             input_buffer[i+2] = (float) acc_axes.z;
      }
      /* USER END */
}

static void MEMS_Init(void)
{
      ISM330DHCX_IO_t io_ctx;
      uint8_t id;
      ISM330DHCX_AxesRaw_t axes;
      /* Link I2C functions to the ISM330DHCX driver */
      io_ctx.BusType     = ISM330DHCX_I2C_BUS;
      io_ctx.Address     = ISM330DHCX_I2C_ADD_H;
      io_ctx.Init        = BSP_I2C2_Init;
      io_ctx.DeInit      = BSP_I2C2_DeInit;
      io_ctx.ReadReg     = BSP_I2C2_ReadReg;
      io_ctx.WriteReg    = BSP_I2C2_WriteReg;
      io_ctx.GetTick     = BSP_GetTick;
      ISM330DHCX_RegisterBusIO(&MotionSensor, &io_ctx);
      /* Read the ISM330DHCX WHO_AM_I register */
      ISM330DHCX_ReadID(&MotionSensor, &id);
      if (id != ISM330DHCX_ID) {
             Error_Handler();
      }
      /* Initialize the ISM330DHCX sensor */
      ISM330DHCX_Init(&MotionSensor);
      /* Configure the ISM330DHCX accelerometer (ODR, scale and interrupt) */
      ISM330DHCX_ACC_SetOutputDataRate(&MotionSensor, 26.0f); /* 26 Hz */
      ISM330DHCX_ACC_SetFullScale(&MotionSensor, 4);     /* [-4000mg; +4000mg] */
      ISM330DHCX_Set_INT1_Drdy(&MotionSensor, ENABLE);   /* Enable DRDY */
      ISM330DHCX_ACC_GetAxesRaw(&MotionSensor, &axes);   /* Clear DRDY */
      /* Start the ISM330DHCX accelerometer */
      ISM330DHCX_ACC_Enable(&MotionSensor);
}
/* USER CODE END 0 */
/* USER CODE BEGIN 2 */
  dataRdyIntReceived = 0;
  MEMS_Init();
  /* Initialization ------------------------------------------------------------*/
  enum neai_state error_code;
  error_code = neai_anomalydetection_init();
  if (error_code != NEAI_OK) {
        /* This may happen if the library is run on an unsupported board. */
  }
  /* Learning process ----------------------------------------------------------*/
  for (uint16_t iteration = 0 ; iteration < LEARNING_ITERATIONS ; iteration++) {
        fill_buffer(input_user_buffer);
        neai_anomalydetection_learn(input_user_buffer);
  }
  /* USER CODE END 2 */
  /* USER CODE BEGIN WHILE */
  while (1)
  {
        fill_buffer(input_user_buffer);
        neai_anomalydetection_detect(input_user_buffer, &similarity);
        if (similarity >= 70)
               printf("Similarity %d = NORMAL BEHAVIOR\r\n", similarity);
        else
               printf("Similarity %d = ANOMALY DETECTED!\r\n", similarity);
   /* USER CODE END WHILE */

 

/* USER CODE BEGIN 4 */
void HAL_GPIO_EXTI_Rising_Callback(uint16_t GPIO_Pin)
{
  if (GPIO_Pin == GPIO_PIN_11)
    dataRdyIntReceived++;
}
int _write(int fd, char * ptr, int len)
{
  HAL_UART_Transmit(&huart1, (uint8_t *) ptr, len, HAL_MAX_DELAY);
  return len;
}
/* USER CODE END 4 */

 

After completing the code, you can build it and run it on your board.

Note: Before running the application, be sure to leave the board in the “normal” position. This ensures that the learn() function runs in the normal state; the similarity percentage is computed relative to this first learned state. If you want to store the learned content and restore it, refer to chapter 2.3 of the ST wiki: AI:NanoEdge AI Library for anomaly detection (AD) - stm32mcu

 

2. Code Validation and Conclusion

To verify that the code is working properly, open the Tera Term terminal with the following parameters: 115200 / 8 / N / 1.

Keep your board steady and horizontal. You should see the correct behavior being printed:

Figure 21 - Normal Behavior

Now, if you place your board upside down, or cause any disturbance, you will notice that the data collected is no longer recognized as “NORMAL BEHAVIOR”.

Figure 22 - Anomaly Detection

Note: Keep in mind that the dataset plays a huge role in AI performance. This dataset was made just to showcase the steps needed to create the application from scratch. Any movement or board position that deviates from the trained position will most likely be reported as an anomaly.

Hope this article was useful for your application! 

Related links

 

Version history
Last update:
‎2024-07-17 07:07 AM