2026-02-05 5:04 AM - last edited on 2026-02-05 7:53 AM by Andrew Neil
Hello ST community,
I am working with the LSM6DSOX and using its Machine Learning Core (MLC), configured through the UNICO GUI. My goal is to use the sensor as a low-power wake-up source for my system.
System overview:
MCU: nRF52840 (goes to deep sleep / system OFF)
Sensor: LSM6DSOX
Interface: I²C
Interrupt: INT1 used as hardware wake-up source
The MCU is asleep, so no software filtering is possible before wake-up
I want the MCU to wake up ONLY when one specific gesture occurs (for example a “wave” gesture).
INT1 should assert only for this one gesture
INT1 should NOT trigger for:
static state
random shake
other trained gestures
noise or motion not matching the target gesture
I successfully trained multiple gestures in UNICO GUI using MLC.
I can export and load the UCF correctly.
INT1 triggers when any gesture is detected.
I can read the MLC status (Get_MLC_Status) after the MCU wakes, but this does not help because:
The MCU must already be awake to do software filtering.
I need hardware-level filtering, since INT1 is the wake source.
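The only software fallback I can see is to confirm the class after every wake and immediately return to System OFF on a mismatch, but that still costs one wake-up per false interrupt. A minimal sketch of that check, assuming decision tree 1 (output in MLC0_SRC, 0x70 in the embedded-functions bank) and an example class value of 0x01 that I assigned in UNICO:

```c
#include <stdbool.h>
#include <stdint.h>

/* Post-wake confirmation sketch (my own fallback idea, not ST reference
 * code). After INT1 wakes the MCU, read MLC0_SRC (0x70 in the
 * embedded-functions register bank) to get the class predicted by decision
 * tree 1, and re-enter System OFF unless it matches the wake-up gesture.
 * 0x01 is only an assumed UNICO class label for the "wave" gesture. */
#define WAKE_GESTURE_CLASS 0x01u

/* Pass in the value read from MLC0_SRC over I2C; returns true when the
 * MCU should stay awake, false when it can go straight back to sleep. */
static bool should_stay_awake(uint8_t mlc0_src)
{
    return mlc0_src == WAKE_GESTURE_CLASS;
}
```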
Training only one gesture
Training with a single gesture often produces false positives.
In practice, accuracy improves when training with two or more gestures, but then:
INT1 triggers for all detected classes.
Interrupt behavior
INT1 appears to trigger on any MLC output change, not only when a specific class is detected.
I want INT1 to be asserted only for one Meta Classifier output (e.g., class 0).
UNICO configuration uncertainty
It is unclear how to:
Properly define “everything else” as a negative class
Tune features / windows to avoid false wakeups
Map only one MLC class to INT1 at hardware level
Training question (UNICO / MLC)
What is the recommended way to train one wake-up gesture while ignoring all other motions?
Is it better to:
Train one gesture + “no motion”?
Train multiple gestures but only use one as wake-up?
Which UNICO features are recommended to minimize false positives for wake-up use cases?
Interrupt routing question
Is it possible to configure LSM6DSOX so that:
INT1 is asserted only for one specific MLC class (Meta Classifier index)?
Other classes do not toggle INT1 at all?
If yes:
Which register(s) should be used?
Should MLC_INT1 be programmed manually?
Is this configurable directly from UNICO in offline mode?
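For reference, here is my current register-level understanding, sketched against a mock register map so it can be reasoned about offline. The addresses and bit positions are my reading of the datasheet and AN5259, so please correct me if any of them are wrong:

```c
#include <stdint.h>

/* My understanding of the routing (assumptions, not verified):
 *  - FUNC_CFG_ACCESS (0x01): bit7 (FUNC_CFG_EN) opens the
 *    embedded-functions register bank
 *  - MLC_INT1 (0x0D, embedded bank): bit N routes decision tree N+1
 *    to INT1
 *  - MD1_CFG (0x5E): bit1 (INT1_EMB_FUNC) must also be set to bring
 *    the embedded-function interrupt out on the INT1 pin
 */
#define FUNC_CFG_ACCESS 0x01u
#define FUNC_CFG_EN     0x80u
#define MLC_INT1        0x0Du   /* embedded-functions bank */
#define MD1_CFG         0x5Eu
#define INT1_EMB_FUNC   0x02u
#define INT1_MLC1       0x01u   /* route only decision tree 1 */

/* regs[]/emb_regs[] stand in for the sensor's main and embedded banks;
 * on hardware each assignment would be an I2C register write. */
static void route_mlc1_to_int1(uint8_t regs[256], uint8_t emb_regs[256])
{
    regs[FUNC_CFG_ACCESS] = FUNC_CFG_EN;   /* enter embedded bank      */
    emb_regs[MLC_INT1]    = INT1_MLC1;     /* only tree 1 -> INT1      */
    regs[FUNC_CFG_ACCESS] = 0x00u;         /* back to main bank        */
    regs[MD1_CFG]        |= INT1_EMB_FUNC; /* embedded IRQ -> INT1 pin */
}
```

Even if this routing is correct, as far as I can tell the interrupt would still fire on any class change of tree 1, which is exactly the behavior I am trying to narrow down.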
Best-practice question
What is the recommended ST approach for using LSM6DSOX MLC as a gesture-based wake-up source when the MCU is fully asleep?
2026-02-11 12:47 AM
Hi @mminhaj23 ,
It is not possible to differentiate the interrupt generation based on the detected class. What you can do instead is differentiate it based on the decision tree.
Do you need to distinguish all gestures individually or only the wake-up gesture from all the others?
If you do not need to distinguish the other gestures from each other, then you can train a single tree where you group all non–wake-up gestures and the no-motion condition into one single class.
If you do need to distinguish the other gestures from each other, you can achieve the same behavior by configuring two trees: a first tree trained as wake-up gesture versus everything else, whose interrupt you route to the pin, and a second tree that classifies the remaining gestures.
In both cases, the interrupt will be generated when the wake-up gesture is recognized and again when the class changes after the wake-up gesture is no longer detected. This means you will have 2 interrupts for each recognized wake-up gesture (one at detection and one when it ends).
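To cope with the double interrupt, on each INT1 edge you can read the tree output (MLC0_SRC, 0x70 in the embedded-functions bank) and act only on the transition into the wake-up class. A minimal sketch, assuming class value 0x01 for the wake-up gesture (the actual value is whatever you assign in UNICO):

```c
#include <stdbool.h>
#include <stdint.h>

/* Filter the two interrupts per recognized gesture: one fires when the
 * wake-up class is entered, a second when the class changes away again.
 * Only the entering edge is treated as a wake event. 0x01 is an assumed
 * UNICO class label, not a fixed constant. */
#define WAKE_CLASS 0x01u

typedef struct {
    uint8_t last_class; /* tree output seen at the previous interrupt */
} mlc_irq_state;

/* Call from the INT1 handler with the value just read from MLC0_SRC.
 * Returns true only on the detection edge (entering the wake class);
 * the end-of-gesture interrupt is filtered out. */
static bool on_mlc_interrupt(mlc_irq_state *s, uint8_t mlc0_src)
{
    bool wake = (mlc0_src == WAKE_CLASS) && (s->last_class != WAKE_CLASS);
    s->last_class = mlc0_src;
    return wake;
}
```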
2026-02-11 5:09 AM
Thank you.
For our application, we only require a single specific gesture to trigger the wake-up and generate a hardware-level interrupt. We do not need to differentiate between the other gestures individually.
Therefore, a one-vs-all configuration is sufficient for us: the wake-up gesture versus all other gestures (including no-motion). When the wake-up gesture is detected, the interrupt pin should go high; otherwise, no interrupt action is required.
2026-02-11 5:21 AM
Let’s focus on the first scenario.
Suppose my device goes to sleep (or turns off) when there is no motion, and I want to wake it up using one specific gesture — for example, drawing a “Z” gesture in the air.
What exactly do I need to train? The set of non-wake gestures is essentially unlimited, so how can they all be covered? And if, for example, I include “no motion” in the training, will an interrupt still be generated in the static state and wake the device up?