2016-07-27 11:51 AM
I'm just getting started with the CCA02M1 expansion board, Nucleo '476RG and X-CUBE-MEMSMIC1 software.
I've got the software compiling in Keil, and the USB audio interface works well. (44.1 kHz sampling doesn't seem to work properly, though 16 and 48 kHz are fine. The documentation is a bit ambiguous regarding support for 44.1 kHz, but that's not a big deal at the moment.)

My question is that the demo software uses 1 millisecond buffers, and this is pretty hardcoded, with many instances of (samplerate/1000) and variations of that throughout the codebase. I'm interested in performing signal analysis on buffers of exactly 256 or 512 samples (which will be around, but not exactly, 10 ms in duration).

Is there any special reason why the buffers have been kept so small, and/or why the buffer size isn't more parameterised? (I don't require USB streaming for this application, although it might be useful for debugging.) I wonder whether you kept the buffer small primarily to minimise latency, to minimise RAM requirements (for the smaller processors), because of USB limits, or something else.

Might I be better off writing another layer to accumulate your 1 ms buffers until I have (say) 512 samples, or changing numerous aspects of the demo code to increase the underlying buffer size? I can begin to appreciate some of the pros and cons, but would welcome any other thoughts or reasoning I might have overlooked.

Thanks. #cca02m1-memsmic1

2017-01-17 04:12 PM
I'm trying to use the same expansion board and
X-CUBE-MEMSMIC1 software to add my own audio DSP, but I can't get the provided mic-recording application code to work. In particular, I can't get Audacity to record sound from the mics: it establishes a connection to them but can't read in any audio. How did you get this to work? How are your configuration jumpers/resistors set?
To answer your question, I suspect only 1 ms of data is taken into the buffer because, in the audio beamforming application, the DSP must perform a short-time Fourier transform (STFT) to isolate voice from noise quickly, and that process becomes time-consuming when too many data points are fed through the algorithm at once. I've noticed that the typical analysis window is normally about 20 ms.