2026-03-03 12:42 PM - edited 2026-03-04 12:49 PM
Hi, I'm developing a project generated with Cube AI Studio on a Nucleo STM32N6 board, where inference runs on audio data. At the moment I'm only checking whether NN inference and audio streaming can coexist. The neural network apparently works, but even after commenting out the functions that initialize and run the model, I can't hear any audio (I'm using I2S1 in half-duplex master receive mode and I2S3 in half-duplex master transmit mode, both with DMA and circular buffers).
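For reference, the audio path is set up roughly like this (a minimal sketch of what I described, not my exact code: the handle names hi2s1/hi2s3, the buffer size, and the callback bodies are illustrative):

```c
/* Sketch: I2S1 receives into a circular DMA buffer, I2S3 transmits from
 * another one. Both DMA channels are configured as Circular in CubeMX,
 * so each HAL call below is made once and the callbacks fire forever. */
#define AUDIO_BUF_SAMPLES 1024

static int16_t rx_buf[AUDIO_BUF_SAMPLES];
static int16_t tx_buf[AUDIO_BUF_SAMPLES];

void audio_start(void)
{
  HAL_I2S_Receive_DMA(&hi2s1, (uint16_t *)rx_buf, AUDIO_BUF_SAMPLES);
  HAL_I2S_Transmit_DMA(&hi2s3, (uint16_t *)tx_buf, AUDIO_BUF_SAMPLES);
}

void HAL_I2S_RxHalfCpltCallback(I2S_HandleTypeDef *hi2s)
{
  /* First half of rx_buf is ready: process/copy it toward tx_buf here. */
}

void HAL_I2S_RxCpltCallback(I2S_HandleTypeDef *hi2s)
{
  /* Second half of rx_buf is ready. */
}
```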
Apparently the problem involves other peripherals besides I2S: LPUART with DMA shows no messages when the transmit call is inside the while loop of main.c (or inside code such as app_x-cube-ai.c), whereas printf does work (I don't know why).
I don't know what could be done to solve this: even granting privileged access in the RIF to both I2S1 and I2S3, the situation does not change.
I'm uploading the code so you can inspect it.
Update: apparently I2S does transmit audio when I'm not transmitting data in DMA mode. The problem seems to be that DMA does not work while a neural network is running inference, i.e. while the NPU is in use!