
MEMS mic sample-code buffer-size reasoning?

Question asked by steer.andrew on Jul 27, 2016
Latest reply on Jan 18, 2017 by Schroeder.Seton
I'm just getting started with the CCA02M1 expansion board, Nucleo '476RG and X-CUBE-MEMSMIC1 software.

I've got the software compiling in Keil, and the USB audio interface works well.
(44.1 kHz sampling doesn't seem to work properly, though 16 kHz and 48 kHz are fine. The documentation is a bit ambiguous about whether 44.1 kHz is supported, but that's not a big deal at the moment.)

My question concerns the demo software's use of 1-millisecond buffers, which is pretty hardcoded: there are many instances of (samplerate/1000), and variations of it, throughout the codebase.
I'm interested in performing signal analysis on buffers of exactly 256 or 512 samples (around, but not exactly, 10 ms in duration).
Is there any special reason why the buffers have been kept so small, and/or the buffer size isn't more parameterised?
(I don't require USB streaming for this application, although it might be useful for debugging.)

I wonder whether you kept the buffer small primarily to minimise latency, to minimise RAM-needs (for the smaller processors), because of USB limits, or something else.

Would I be better off writing another layer that accumulates your 1 ms buffers until I have (say) 512 samples, or changing the numerous parts of the demo code needed to increase the underlying buffer size?

I can begin to appreciate some of the pros and cons, but would welcome any other thoughts or reasoning I might have overlooked.