LSM6DSOX: configure MLC / UNICO to generate INT1 on only ONE specific gesture?

mminhaj23
Associate II

Hello ST community,

I am working with LSM6DSOX and using the Machine Learning Core (MLC) configured through UNICO GUI. My goal is to use the sensor as a low-power wake-up source for my system.

System overview:

  • MCU: nRF52840 (goes to deep sleep / system OFF)

  • Sensor: LSM6DSOX

  • Interface: I²C

  • Interrupt: INT1 used as hardware wake-up source

  • MCU is sleeping, so no software filtering is possible before wake-up


What I want to achieve

I want the MCU to wake up ONLY when one specific gesture occurs (for example a “wave” gesture).

  • INT1 should assert only for this one gesture

  • INT1 should NOT trigger for:

    • static state

    • random shake

    • other trained gestures

    • noise or motion not matching the target gesture


What I have already done

  • I successfully trained multiple gestures in UNICO GUI using MLC.

  • I can export and load the UCF correctly.

  • INT1 triggers when any gesture is detected.

  • I can read MLC status (Get_MLC_Status) after MCU wakes, but this does not help because:

    • The MCU must already be awake to do software filtering.

    • I need hardware-level filtering, since INT1 is the wake source.
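
For reference, the post-wake status check described above boils down to two reads: MLC_STATUS_MAINPAGE (38h) reports which decision tree fired (IS_MLC1…IS_MLC8), and MLC0_SRC (70h, in the embedded-functions register page) holds the class predicted by tree 1. The sketch below runs against a simulated register file so it can be tested on a PC; on target, replace bus_read/bus_write with your I²C transfer layer, and verify the register addresses against the LSM6DSOX datasheet for your silicon revision.

```c
#include <assert.h>
#include <stdint.h>

/* LSM6DSOX register addresses (datasheet values; verify for your part). */
#define FUNC_CFG_ACCESS     0x01  /* bit 7: embedded-functions page enable */
#define MLC_STATUS_MAINPAGE 0x38  /* IS_MLC1..IS_MLC8 flags                */
#define MLC0_SRC            0x70  /* embedded page: tree-1 output class    */

/* Simulated register banks; on target these become real I2C transfers. */
uint8_t main_regs[256], emb_regs[256];

uint8_t *bank(void) {
    /* FUNC_CFG_ACCESS bit 7 selects the embedded-functions page. */
    return (main_regs[FUNC_CFG_ACCESS] & 0x80) ? emb_regs : main_regs;
}
void bus_write(uint8_t reg, uint8_t val) {
    if (reg == FUNC_CFG_ACCESS) { main_regs[reg] = val; return; }
    bank()[reg] = val;
}
uint8_t bus_read(uint8_t reg) {
    return (reg == FUNC_CFG_ACCESS) ? main_regs[reg] : bank()[reg];
}

/* Returns the class output by decision tree 1, or -1 if tree 1 did not fire. */
int mlc_tree1_class(void) {
    if (!(bus_read(MLC_STATUS_MAINPAGE) & 0x01))  /* IS_MLC1 */
        return -1;
    bus_write(FUNC_CFG_ACCESS, 0x80);  /* enter embedded-functions page */
    int cls = bus_read(MLC0_SRC);
    bus_write(FUNC_CFG_ACCESS, 0x00);  /* back to main page             */
    return cls;
}
```

As noted, this only works after the MCU is already awake; it cannot substitute for hardware-level filtering of the wake source itself.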


Problems / limitations

  1. Training only one gesture

    • Training with a single gesture often produces false positives.

    • In practice, accuracy improves when training with two or more gestures, but then:

      • INT1 triggers for all detected classes.

  2. Interrupt behavior

    • INT1 appears to trigger on any MLC state change, not on a specific class.

    • I want INT1 to be asserted only for one Meta Classifier output (e.g., class 0).

  3. UNICO configuration uncertainty

    • It is unclear how to:

      • Properly define “everything else” as negative class

      • Tune features / windows to avoid false wakeups

      • Map only one MLC class to INT1 at hardware level


My questions

  1. Training question (UNICO / MLC)

    • What is the recommended way to train one wake-up gesture while ignoring all other motions?

    • Is it better to:

      • Train one gesture + “no motion”?

      • Train multiple gestures but only use one as wake-up?

    • Which UNICO features are recommended to minimize false positives for wake-up use cases?

  2. Interrupt routing question

    • Is it possible to configure LSM6DSOX so that:

      • INT1 is asserted only for one specific MLC class (Meta Classifier index)?

      • Other classes do not toggle INT1 at all?

    • If yes:

      • Which register(s) should be used?

      • Should MLC_INT1 be programmed manually?

      • Is this configurable directly from UNICO in offline mode?
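
For what it's worth, the datasheet does expose per-tree (not per-class) routing: in the embedded-functions register page, MLC_INT1 (0Dh) has one enable bit per decision tree (INT1_MLC1…INT1_MLC8), and INT1_EMB_FUNC (bit 1 of MD1_CFG, 5Eh) must also be set for embedded-function interrupts to reach INT1. A minimal sketch against a simulated register file follows; replace bus_write/bus_read with your I²C layer and double-check the addresses and bit positions against the LSM6DSOX datasheet.

```c
#include <assert.h>
#include <stdint.h>

/* LSM6DSOX register addresses (datasheet values; verify for your part). */
#define FUNC_CFG_ACCESS 0x01  /* bit 7: embedded-functions page enable   */
#define MD1_CFG         0x5E  /* bit 1: INT1_EMB_FUNC                    */
#define MLC_INT1        0x0D  /* embedded page: INT1_MLC1..8 (bits 0..7) */

/* Simulated register banks; on target these become real I2C transfers. */
uint8_t main_regs[256], emb_regs[256];

uint8_t *bank(void) {
    return (main_regs[FUNC_CFG_ACCESS] & 0x80) ? emb_regs : main_regs;
}
void bus_write(uint8_t reg, uint8_t val) {
    if (reg == FUNC_CFG_ACCESS) { main_regs[reg] = val; return; }
    bank()[reg] = val;
}
uint8_t bus_read(uint8_t reg) {
    return (reg == FUNC_CFG_ACCESS) ? main_regs[reg] : bank()[reg];
}

/* Route only decision tree 1 (MLC1) to INT1; the other trees stay unrouted. */
void route_mlc1_to_int1(void) {
    bus_write(FUNC_CFG_ACCESS, 0x80);             /* enter embedded page */
    bus_write(MLC_INT1, 0x01);                    /* INT1_MLC1 only      */
    bus_write(FUNC_CFG_ACCESS, 0x00);             /* back to main page   */
    bus_write(MD1_CFG, bus_read(MD1_CFG) | 0x02); /* INT1_EMB_FUNC = 1   */
}
```

Note this selects *which tree* drives INT1, not which class; the interrupt still fires on any output change of the routed tree.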

  3. Best-practice question

    • What is the recommended ST approach for using LSM6DSOX MLC as a gesture-based wake-up source when the MCU is fully asleep?

Federica Bossi
ST Employee

Hi @mminhaj23 ,

It is not possible to differentiate the interrupt generation based on the detected class. What you can do instead is differentiate it based on the decision tree.

Do you need to distinguish all gestures individually or only the wake-up gesture from all the others?

  • If you do not need to distinguish the other gestures from each other, then you can train a single tree where you group all non–wake-up gestures and the no-motion condition into a single class.

  • If you do need to distinguish the other gestures from each other, you can achieve the same behavior by configuring two trees:

    1. A first tree that recognizes the wake-up gesture vs all the others (as described above).
    2. A second tree that recognizes all gestures and outputs a specific class for each gesture.
      The first tree does not necessarily need to be trained separately; it can be derived from the second tree by manually replacing the class names so that all non–wake-up gestures map to the same output class.

In both cases, the interrupt will be generated when the wake-up gesture is recognized and again when the class changes after the wake-up gesture is no longer detected. This means you will have 2 interrupts for each recognized wake-up gesture (one at detection and one when it ends).
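
Since two wake-ups per gesture are unavoidable at the hardware level, a common MCU-side pattern is to re-check the tree output immediately after wake and go straight back to sleep if it is not the wake-up class. A hedged sketch below: WAKE_CLASS is an assumed placeholder for whatever output value your UCF assigns to the wake-up gesture, the bus layer is simulated, and MLC0_SRC (70h, embedded-functions page) should be verified against the datasheet.

```c
#include <assert.h>
#include <stdint.h>

/* LSM6DSOX register addresses (datasheet values; verify for your part). */
#define FUNC_CFG_ACCESS 0x01  /* bit 7: embedded-functions page enable */
#define MLC0_SRC        0x70  /* embedded page: tree-1 output class    */

#define WAKE_CLASS 1  /* ASSUMPTION: class value your UCF assigns to the gesture */

/* Simulated register banks; on target these become real I2C transfers. */
uint8_t main_regs[256], emb_regs[256];

uint8_t *bank(void) {
    return (main_regs[FUNC_CFG_ACCESS] & 0x80) ? emb_regs : main_regs;
}
void bus_write(uint8_t reg, uint8_t val) {
    if (reg == FUNC_CFG_ACCESS) { main_regs[reg] = val; return; }
    bank()[reg] = val;
}
uint8_t bus_read(uint8_t reg) {
    return (reg == FUNC_CFG_ACCESS) ? main_regs[reg] : bank()[reg];
}

/* Called right after an INT1 wake-up: stay awake only for the wake class.
 * The end-of-gesture interrupt reports a different class, so the MCU can
 * return to sleep after only a brief I2C transaction. */
int should_stay_awake(void) {
    bus_write(FUNC_CFG_ACCESS, 0x80);  /* enter embedded page */
    uint8_t cls = bus_read(MLC0_SRC);
    bus_write(FUNC_CFG_ACCESS, 0x00);  /* back to main page   */
    return cls == WAKE_CLASS;
}
```

The cost of the spurious second wake-up is then bounded by one short I²C read before re-entering sleep.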

mminhaj23
Associate II

Thank you.

For our application, we only require a single specific gesture to trigger the wake-up and generate a hardware-level interrupt. We do not need to differentiate between the other gestures individually.

Therefore, a one-vs-all configuration is sufficient for us: the wake-up gesture versus all other gestures (including no-motion). When the wake-up gesture is detected, the interrupt pin should go high; otherwise, no interrupt action is required.

mminhaj23
Associate II

Let’s focus on the first scenario.

Suppose my device goes to sleep (or turns off) when there is no motion, and I want to wake it up using one specific gesture — for example, drawing a “Z” gesture in the air.

What exactly do I need to train? Is there a limit on how many non-wake-up motions I can group into the "everything else" class? And if the no-motion condition is included in the training data, will an interrupt be generated while the device is static, causing it to wake up?