
Double Buffering 32-bit Audio (Aliasing problem)

maxborglowe
Associate III

Hi,

I'm developing an audio device which will process incoming audio signals into a distorted output.

Since there is a series of processes that will be executed, I thought it would be useful to use "double buffering" to process the collected audio, while simultaneously fetching new data in the background using DMA.

Below is the essence of what I'm doing. My understanding is that this is a pretty basic way to go about it: Do processing when buffer is half-full, and then again when buffer is full.

 

#define AUDIO_BUFFER_SIZE 128
#define AUDIO_BUFFER_SIZE_HALF (uint32_t)(AUDIO_BUFFER_SIZE * 0.5f)

int32_t audio_rx[AUDIO_BUFFER_SIZE];
int32_t audio_tx[AUDIO_BUFFER_SIZE];

static volatile int32_t *audio_rx_p;
static volatile int32_t *audio_tx_p = &audio_tx[0];

void Audio_Transmit(){

	for(uint8_t n = 0; n < (AUDIO_BUFFER_SIZE_HALF - 1); n += 2){
		//left channel
		audio_tx_p[n] = FLOAT_TO_INT32 * Audio_Process(INT32_TO_FLOAT * audio_rx_p[n]);

		//right channel
		audio_tx_p[n + 1] = FLOAT_TO_INT32 * Audio_Process(INT32_TO_FLOAT * audio_rx_p[n + 1]);
	}
	audio_ready_flag = AUDIO_DATA_FREE;
}

void HAL_SAI_TxHalfCpltCallback(SAI_HandleTypeDef *hi2s){
	//DMA has finished sending the first half - refill it while the second half plays
	audio_rx_p = &audio_rx[0];
	audio_tx_p = &audio_tx[0];

	audio_ready_flag = AUDIO_DATA_READY;
}

void HAL_SAI_TxCpltCallback(SAI_HandleTypeDef *hi2s){
	//DMA has finished sending the second half - refill it while the first half plays
	audio_rx_p = &audio_rx[AUDIO_BUFFER_SIZE_HALF];
	audio_tx_p = &audio_tx[AUDIO_BUFFER_SIZE_HALF];

	audio_ready_flag = AUDIO_DATA_READY;
}

 

 

My issue is that when I set the buffer size larger than 128 elements, the output audio suffers from serious aliasing. What previously sounded like a modern audio interface suddenly turns into a Game Boy Advance.

Any ideas as to what may be the culprit?

My intuition is that if double buffering is done correctly, increasing the buffer size should only add delay between the input and output audio, since each block of data to be processed grows larger (if you've ever used an audio interface for recording, you probably know what I'm talking about).

Thankful for any help I can receive on this issue!

Br,
Max

3 REPLIES
LCE
Principal

Back again! 😉

So you want to have low latency between in and out, for some live audio FX, or whatever.

1) Two's complement - I don't remember whether standard signed integers use that - so check your conversion functions: make your Audio_Process() function do nothing but return its input. That will show you whether your conversion macros work.

2) Check the time that your actual Audio_Process() takes - is the processing actually fast enough?

3) I would set the flags for processing the data in the RX callbacks, and I would set the RX SAI to master.

Haha, yes sir 😅

Some latency is OK, if that's a side-effect of double buffering large buffers. And yes, used for live audio FX.

For your reference, the audio engine is running at 48kHz, 32-bit. Device runs at 160MHz (1.5 DMIPS).

Just to be clear, the audio effects that I've implemented sound great when I utilise a small buffer (say 16 elements or so). But as I increase the buffer size, the aliasing gradually gets worse: it sounds like an 8-bit video game at >384 elements, and turns into violent noise at >1024 elements.

Using CYCCNT to count the number of cycles executed, Audio_Transmit with Audio_Process completely empty (i.e. no effects) takes ~449 cycles. Uncommenting "signal = effects[3]->process(signal)" in the process enables a reverb effect, which instead takes ~7000 cycles to run.

 

#define AUDIO_BUFFER_SIZE 16

float Audio_Process(float input){
	float signal = input;
//	signal = effects[3]->process(signal);
	return signal;
}


void Audio_Transmit(){
	t1 = DWT->CYCCNT;
	for(uint8_t n = 0; n < (AUDIO_BUFFER_SIZE_HALF - 1); n += 2){
		//left channel
		audio_tx_p[n] = FLOAT_TO_INT32 * Audio_Process(INT32_TO_FLOAT * audio_rx_p[n]);

		//right channel
		audio_tx_p[n + 1] = FLOAT_TO_INT32 * Audio_Process(INT32_TO_FLOAT * audio_rx_p[n + 1]);
	}
	audio_ready_flag = AUDIO_DATA_FREE;

	t2 = DWT->CYCCNT - t1;
}

 

 
Just to experiment, I increased the buffer to the point where the aliasing starts occurring (~508 elements), which takes 214700 cycles to process. Maybe I'm looking at this the wrong way, or maybe that's a reading of interest.

I changed the callbacks to RX, but nothing really changes. The RX is set to master mode in the .ioc file, if that's what you mean?

LCE
Principal

Then maybe your audio processing has problems with the buffer size.