
Trigger delay in Machine Learning Core feature detection

EC.2
Associate II

Hello,

I’ve been trying to figure out how quickly the Machine Learning Core on the LSM6DSOX can generate an output once the condition that triggers a change has occurred. I am working on a design where feature detection is needed in less than 50 ms. I am using the Unico software with an STEVAL-MKI197V1 module on the STEVAL-MKI109V3 board.

From reading the datasheet and the application note AN5259, my understanding of how the Machine Learning Core (MLC) works is the following:

- samples are taken until the sampling window is filled.

- once the window is full, statistical parameters are calculated from the samples in the window.

- at the start of each MLC period, the most recently calculated statistical parameters are used to evaluate the MLC decision tree.

- if the decision tree result changes, the tree output value is updated and an interrupt is generated.

Let’s consider the following case as an example:

- a sensor output data rate of 833 Hz

- a window length of 8 samples

- a Machine learning core output data rate (ODR) of 104 Hz

- simple identification of tilt angle, using a threshold on the mean accelerometer value on the X axis.

I am using Unico to save and visualize the data from the IMU and the decision tree output.

Now, for this configuration I would expect the tree output to change no more than around 10 ms after the threshold is crossed, given the MLC data rate of 104 Hz and the fact that the sample window gets filled at the same rate. The results are quite far from that: I am observing delays of around 100 ms for this configuration.

I have tried many configurations, with varied results. For example, with an MLC ODR of 104 Hz, sampling at 6664 Hz and a window length of 64 samples, I’ve seen triggering times of 500 ms. If the window length is reduced to 16 samples while keeping the 6664 Hz sampling rate and 104 Hz MLC ODR, the triggering delays are reduced to around 150 ms.
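To put some numbers on this, here is a small sketch (the observed delays are the approximate values mentioned above) comparing one MLC evaluation period, which is roughly the delay I would expect, with the delays I am seeing:

```c
#include <stdio.h>

/* Configurations tried so far (sensor ODR, window length, MLC ODR) together
 * with the delay observed between the threshold crossing and the tree output. */
struct mlc_cfg {
    double sensor_odr_hz;
    unsigned window_len;
    double mlc_odr_hz;
    double observed_delay_s; /* approximate, read from the logged data */
};

int main(void)
{
    const struct mlc_cfg cfg[] = {
        {  833.0,  8, 104.0, 0.100 },
        { 6664.0, 64, 104.0, 0.500 },
        { 6664.0, 16, 104.0, 0.150 },
    };

    for (unsigned i = 0; i < sizeof(cfg) / sizeof(cfg[0]); i++) {
        /* Naive expectation: roughly one MLC evaluation period once the window is full. */
        double expected_ms = 1000.0 / cfg[i].mlc_odr_hz;
        printf("ODR %5.0f Hz, WL %2u, MLC %3.0f Hz: expected ~%.1f ms, observed ~%.0f ms\n",
               cfg[i].sensor_odr_hz, cfg[i].window_len, cfg[i].mlc_odr_hz,
               expected_ms, cfg[i].observed_delay_s * 1000.0);
    }
    return 0;
}
```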

I suspect that the calculation of the statistical parameters is not done independently as soon as the sample window is filled, but is somehow linked to the MLC ODR as well as the window length. Maybe someone knows more about this and can help me understand it, as I can’t seem to find this information anywhere.

I hope I’ve explained the problem reasonably well. I have attached a number of graphs with example results.

Many thanks!


5 REPLIES
Eleon BORLINI
ST Employee

Hi @EC.2​ ,

The feature value depends upon the window length and upon the MLC_ODR, and these parameters are chosen by the user. You can refer to the LSM6DSOX: Machine Learning Core Application note AN5259, p.9:

All the features are computed within a defined time window, which is also called “window length” since it is expressed as the number of samples. The size of the window has to be determined by the user and is very important for the machine learning processing, since all the statistical parameters in the decision tree will be evaluated in this time window. It is not a moving window, features are computed just once for every WL sample (where WL is the size of the window).

The window length can have values from 1 to 255 samples. The choice of the window length value depends on the sensor data rate (ODR), which introduces a latency for the generation of the Machine Learning Core result, and on the specific application or algorithm. In an activity recognition algorithm for instance, it can be decided to compute the features every 2 or 3 seconds, which means that considering sensors running at 26 Hz, the window length should be around 50 or 75 samples respectively.

So, you are right: if you change the window / ODR settings you might get different outputs. You have to choose the parameter values that best fit your application target.
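As a minimal illustration of the sizing example in the quoted paragraph (all numbers are taken from that example, nothing else is assumed):

```c
#include <stdio.h>

/* Rough window-length sizing following the AN5259 example quoted above:
 * window length ~ desired feature period (s) * sensor data rate (Hz). */
int main(void)
{
    const double odr_hz = 26.0;               /* sensor data rate from the example  */
    const double periods_s[] = { 2.0, 3.0 };  /* desired feature computation period */

    for (unsigned i = 0; i < 2; i++)
        printf("features every %.0f s at %.0f Hz -> window length of about %.0f samples\n",
               periods_s[i], odr_hz, periods_s[i] * odr_hz);

    /* Prints ~52 and ~78 samples, i.e. the "around 50 or 75 samples" in AN5259. */
    return 0;
}
```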

-Eleon

EC.2
Associate II

Hi Eleon,

First of all thank you for your reply, I much appreciate the help.

I had read those paragraphs from AN5259, and they were pretty much the starting point for my tests, although I did not pay too much attention to the time frame of the example (26 Hz and 50/75 samples), since I am looking for much smaller time frames.

I have just tried the combination of a 26 Hz sampling rate, 75 samples in the window and an MLC ODR of 26 Hz, and the output is generated exactly as one would expect. However, if the case [26 Hz sampling frequency, 50-sample WL for a 2-second response, and 26 Hz MLC ODR] is changed to [104 Hz sampling frequency, 200-sample WL and 26 Hz MLC ODR], the lags appear again.

I think the problem I am noticing is related to how and when the statistical parameters for the data in the window are calculated. Is there any information about that? I don’t think they are ready, or immediately calculated, once the window is filled, but there is no information about that aspect in the datasheet or in AN5259 that I’m aware of.

I feel like there is some information missing about how the window length, sampling rate and MLC ODR parameters correlate. I would like to make the best choice of parameters for my application, but since I don't know how to calculate how long the generation of an output will take for a given choice of parameters, it takes many trials to estimate that value. I am pretty sure it's deterministic, so there is probably a formula to calculate the maximum time for an output to be generated.

Eleon BORLINI
ST Employee
Accepted Solution

Hi @EC.2​ ,

please have a look at this part of the application note AN5259, section 1.4:

[screenshot of AN5259, section 1.4]

For a better understanding, it might be useful to give this example:

Window length is expressed as a number of samples. The time window can be obtained by dividing the number of samples by the data rate chosen for the MLC (MLC_ODR):

    Time window = Window length / MLC_ODR

For instance, selecting 104 samples for window length and 104 Hz for MLC data rate, the obtained time window will be:

    Time window = 104 samples / 104 Hz = 1 second

So, not all window length values are the best fit for a given target response time.
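For illustration, here is a minimal sketch applying this formula to the window lengths and MLC data rates mentioned in this thread (the delays in the comments are the approximate values you reported):

```c
#include <stdio.h>

/* Time window (delay of one MLC feature/decision update) per the formula above:
 *   time window = window length / MLC_ODR
 * Window lengths and MLC data rates below are the ones discussed in this thread. */
static double mlc_time_window_s(unsigned window_length, double mlc_odr_hz)
{
    return (double)window_length / mlc_odr_hz;
}

int main(void)
{
    const struct { unsigned wl; double mlc_odr_hz; } cfg[] = {
        {   8, 104.0 },  /* ~77 ms  - vs. the ~100 ms delay observed       */
        {  16, 104.0 },  /* ~154 ms - vs. the ~150 ms delay observed       */
        {  64, 104.0 },  /* ~615 ms - vs. the ~500 ms delay observed       */
        { 104, 104.0 },  /* 1 s     - the example given above              */
    };

    for (unsigned i = 0; i < sizeof(cfg) / sizeof(cfg[0]); i++)
        printf("WL = %3u samples, MLC_ODR = %5.1f Hz -> time window = %.3f s\n",
               cfg[i].wl, cfg[i].mlc_odr_hz,
               mlc_time_window_s(cfg[i].wl, cfg[i].mlc_odr_hz));
    return 0;
}
```

Read this way, the ~100 ms, ~150 ms and ~500 ms delays you measured track window length / MLC_ODR rather than window length / sensor ODR.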

See also this other section, the highlighted part:

[screenshot of another AN5259 section, relevant part highlighted]

-Eleon

EC.2
Associate II

Hi Eleon,

I will keep working on this and try to find a good combination of parameters for my application.

Thanks

Eleon BORLINI
ST Employee

Hi @EC.2​ ,

Thank you for the update; please let me know if you are able to solve your issue.

-Eleon