For a while now, I've been wanting to learn the DFSDM peripheral (Digital Filter for Sigma-Delta Modulators; its filter is a cascaded integrator-comb, or CIC, a.k.a. sinc filter) on the STM32L4 and use it for some serious DSP work. This week I finally had time to dig in and get it working. The application (for now) is an ultrasonic translator. The ultrasonic sounds are recorded by an external microphone and digitized at ~400 kHz. In software, I mix this signal with a tunable local oscillator (an NCO), then decimate/LPF by 16x, and shove the result out the codec to headphones. (I am using an STM32L476 Discovery board with some custom hardware.)
The 16x decimation was working with a software-only, FFT-based filter, and I've been listening to bats with the thing. So far so good! But I wanted to use the hardware CIC (DFSDM) to reduce the processor load. I also have some other project ideas for the DFSDM, and this was a good testbed for getting started with it.
The DFSDM chapter in the reference manual is rather crudely written, but after some trial and error I got things working. I am running DFSDM in parallel-input mode with DMA on both sides. On the input side there is a memory-to-memory DMA from my ADC ping-pong input buffer into the DFSDM, and on the output side there is a peripheral-to-memory DMA into my SAI ping-pong output buffer which is driving the codec. (I should note that I am NOT using HAL or Cube or whatever the supplied libraries are called nowadays. In my experience it's much easier to understand what's happening without them. Hopefully this does not scare anyone away.)
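For concreteness, the register setup is along these lines. This is a from-memory, untested fragment using the CMSIS definitions, with field choices as I read them in RM0351 (recent device headers name the instances DFSDM1_Channel0/DFSDM1_Filter0; older ones drop the "1") — a sketch of the approach, not a drop-in config:

```c
/* Untested sketch of a DFSDM parallel-input setup (no HAL). Field
 * choices are my reading of RM0351, not verified against hardware.
 * RCC clock for DFSDM1 is assumed already enabled. */
#include "stm32l4xx.h"

void dfsdm_parallel_init(uint32_t fosr, uint32_t ford, uint32_t dtrbs)
{
    /* Channel 0: data come from CPU/DMA writes to CHDATINR (DATMPX = 10);
     * the mem-to-mem DMA targets CHDATINR. */
    DFSDM1_Channel0->CHCFGR1 = DFSDM_CHCFGR1_DATMPX_1   /* parallel input */
                             | DFSDM_CHCFGR1_CHEN;      /* channel enable */
    DFSDM1_Channel0->CHCFGR2 = dtrbs << DFSDM_CHCFGR2_DTRBS_Pos; /* output shift */

    /* Filter 0: sinc order and oversampling ratio (FOSR field is ratio-1
     * per RM0351), continuous conversions with DMA on the output side. */
    DFSDM1_Filter0->FLTFCR = (ford << DFSDM_FLTFCR_FORD_Pos)
                           | ((fosr - 1u) << DFSDM_FLTFCR_FOSR_Pos);
    DFSDM1_Filter0->FLTCR1 = DFSDM_FLTCR1_RCONT | DFSDM_FLTCR1_RDMAEN
                           | (0u << DFSDM_FLTCR1_RCH_Pos);  /* regular ch 0 */
    DFSDM1_Filter0->FLTCR1 |= DFSDM_FLTCR1_DFEN;

    /* Global enable lives in channel 0's CHCFGR1 on this part. */
    DFSDM1_Channel0->CHCFGR1 |= DFSDM_CHCFGR1_DFSDMEN;
    DFSDM1_Filter0->FLTCR1   |= DFSDM_FLTCR1_RSWSTART;  /* start conversions */
}
```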
I am seeing some unexpected behavior at high oversampling factors (F_OSR) and/or high filter orders (F_ORD). For testing purposes I am driving the DFSDM with the full-scale, int16 sinusoid from my digital oscillator, so I should hear a perfect sine wave in the headphones. (From experience I am fairly adept at telling when the sound is not a pure sinusoid.) What I find is that for all F_OSR above some critical factor, I get clipping and distortion. I can remove the distortion by reducing the amplitude of my driving sinusoid, i.e. by reducing its dynamic range to something less than 16 bits. I made an approximate table showing the distortion threshold for filter orders of 3, 4, and 5, and a range of decimation/oversampling factors:
| F_OSR (decimation factor) | sinc^3 | sinc^4 | sinc^5 |
|---|---|---|---|
| 64 | 15 | 8 | ? (very small) |
The values in the cells are the dynamic ranges, in bits, before clipping/distortion occurs. So e.g. a 12 would mean that the top four bits of the [nominal] 16-bit input must be zero to avoid problems. Obviously, if the number is not 16 then the filter is broken, in the sense that the advertised 16-bit dynamic range is unattainable.
Now, anyone who understands CIC filters is bound to say at this point, hey, you bozo, you forgot to set the output bit-shift and you're clipping on the 24-bit output! (In the DFSDM this field is called DTRBS in the CHyCFGR2 register.) I really wish this were the case, but it's not. I am setting the shift correctly, and in fact have checked by intentionally shifting extra to the right. The output gets quieter, but you can still hear all the same distortion. The distortion goes away when the input amplitude drops below the (approximate) numbers in the table above.
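In case anyone wants to poke at this from the outside: a bit-exact software model of a sinc^N decimator makes for an easy A/B against the hardware. This is my own minimal sketch (names are mine), with int64_t state so the model itself can never overflow:

```c
/* Bit-exact sinc^N (CIC) decimator model: N cascaded integrators at the
 * input rate, decimate by R, N cascaded combs (delay 1) at the output
 * rate. int64_t accumulators so the model never overflows; the raw
 * (unshifted) output has DC gain R^N. */
#include <stdint.h>

#define CIC_MAX_ORDER 5

typedef struct {
    int order;                      /* N (sinc order)          */
    int decim;                      /* R (oversampling, F_OSR) */
    int phase;                      /* samples since last output */
    int64_t integ[CIC_MAX_ORDER];   /* integrator state */
    int64_t comb_z[CIC_MAX_ORDER];  /* comb delay state */
} cic_t;

void cic_init(cic_t *c, int order, int decim)
{
    *c = (cic_t){ .order = order, .decim = decim };
}

/* Push one input sample; returns 1 and writes *out whenever a
 * decimated output sample is produced. */
int cic_push(cic_t *c, int32_t in, int64_t *out)
{
    int64_t acc = in;
    for (int i = 0; i < c->order; i++) {
        c->integ[i] += acc;
        acc = c->integ[i];
    }
    if (++c->phase < c->decim)
        return 0;
    c->phase = 0;
    for (int i = 0; i < c->order; i++) {
        int64_t y = acc - c->comb_z[i];
        c->comb_z[i] = acc;
        acc = y;
    }
    *out = acc;
    return 1;
}
```

Feeding it the same NCO samples that go into the DFSDM and diffing the two output streams would show exactly where (and at what amplitude) the hardware diverges.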
I have double- and triple-checked everything I can think of, and at this point my only remaining theory is
1) unpublished hardware errata
so I am really hoping that someone with intimate knowledge of the DFSDM can suggest an alternative to hypothesis 1!
It would really help if someone reading this is using the DFSDM successfully for
- F_ORD == 3 and F_OSR > 64, or
- F_ORD == 4 and F_OSR > 24, or
- F_ORD == 5 and F_OSR > 16
with ~full-scale 16-bit input, and could reply to say "yeah, it's working for me". Otherwise I am going to keep suspecting an erratum. If true, it's a huge letdown, because most of my imagined applications for the DFSDM need higher F_OSR than this.