2025-10-08 12:36 PM
Hi all. I've successfully trained → quantized → deployed one of the audio models from the STM32 AI Model Zoo to an STM32N6570-DK (N6) and can see correct output over the serial port. Now I want to deploy my own model, which was trained on custom features rather than the standard log-mel front-end. I can export it to ONNX with a fixed input size and compatible ops, and I know how to run the quantization and deployment flows from the Model Zoo service.
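For context, the export step itself is straightforward. Here is a minimal sketch of what I'm doing, assuming a PyTorch training pipeline; the placeholder architecture, feature shape, and opset below are illustrative, not my real values:

```python
import torch
import torch.nn as nn

# Stand-in network only -- NOT my real architecture; it's just here so the export call is runnable.
class TinyAudioNet(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(8, n_classes)

    def forward(self, x):
        x = torch.relu(self.conv(x))
        x = self.pool(x).flatten(1)
        return self.fc(x)

model = TinyAudioNet().eval()

# Fixed input: batch=1, 1 channel, frames x feature bins of my custom features.
# 64 x 40 is a placeholder for my actual feature-patch shape.
dummy = torch.zeros(1, 1, 64, 40)

torch.onnx.export(
    model,
    dummy,
    "custom_audio_model.onnx",
    opset_version=13,               # placeholder opset
    input_names=["features"],
    output_names=["logits"],
    # no dynamic_axes on purpose -> fully static input shape for the downstream tooling
)
```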
My questions:
Preprocessing location
The N6 audio example seems to compute only log-mel features on-device. Can the Model Zoo service itself be configured to produce my custom features on the target, or is the intended approach to modify the STM32 application C code and implement my feature pipeline there, ahead of inference?
Using ONNX with custom features
Is there anything specific I should set in the user config for ONNX quantization with a non-mel feature front-end (e.g., any flags to disable the built-in mel-generation assumptions)?
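In case it helps frame that question: my mental model of the quantization step is plain ONNX Runtime static quantization, where the calibration samples are already in the model's input domain (i.e., my pre-computed custom features, not raw audio or mel bins). A rough sketch of that assumption follows; the file layout, shapes, and the QDQ/int8 choices are my guesses, not something I've pulled from the Model Zoo scripts:

```python
import glob

import numpy as np
import onnxruntime
from onnxruntime.quantization import (
    CalibrationDataReader,
    QuantFormat,
    QuantType,
    quantize_static,
)

class CustomFeatureReader(CalibrationDataReader):
    """Feeds pre-computed custom-feature patches (not log-mel) as calibration data."""

    def __init__(self, model_path: str, feature_dir: str):
        sess = onnxruntime.InferenceSession(model_path, providers=["CPUExecutionProvider"])
        self.input_name = sess.get_inputs()[0].name
        # .npy files produced by my own feature pipeline, already shaped like the model input,
        # e.g. (1, 1, 64, 40) -- placeholder shape.
        self.files = iter(sorted(glob.glob(f"{feature_dir}/*.npy")))

    def get_next(self):
        path = next(self.files, None)
        if path is None:
            return None  # signals end of calibration data
        return {self.input_name: np.load(path).astype(np.float32)}

quantize_static(
    "custom_audio_model.onnx",
    "custom_audio_model_int8.onnx",
    CustomFeatureReader("custom_audio_model.onnx", "calib_features"),
    quant_format=QuantFormat.QDQ,     # my understanding is the ST tooling wants QDQ int8 ONNX -- please correct me
    per_channel=True,
    weight_type=QuantType.QInt8,
    activation_type=QuantType.QInt8,
)
```

So concretely: as long as I point the calibration step at features from my own front-end, is there anything else in the user config that still assumes a mel pipeline?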