2023-06-13 01:06 AM
Hello,
I have an issue with the ADC conversion timing while using the DMA controller.
I have configured only 1 ADC channel at 12-bit resolution, the ADC clock is 24 MHz (50% of SYSCLK), and it shows correct conversion results (verified with a simple voltage divider).
However, when I tried timing it, a conversion takes 6.6 µs with optimization on, while the datasheet specifies 0.4 µs per conversion. I am using the HAL_ADC_Start_DMA() function. My timer starts right before the call and stops in the HAL_ADC_ConvCpltCallback() function. Is this timing the result of abstraction-layer overhead, or am I missing something?
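In outline, the measurement looks like this (sketched from memory; hadc1 and adcValue are placeholders for my Cube-generated names):

#include "main.h"             /* pulls in the HAL headers from the Cube project */

extern ADC_HandleTypeDef hadc1;   /* single channel, 12-bit, ADC clock 24 MHz */
static uint16_t adcValue;         /* one-sample DMA destination */

void MeasureOneConversion(void)
{
    TIM6->CNT = 0;                /* TIM6 free-runs at 48 MHz, prescaler 0 */
    TIM6->CR1 |= TIM_CR1_CEN;     /* start timing */
    HAL_ADC_Start_DMA(&hadc1, (uint32_t *)&adcValue, 1);
}

/* Called by the HAL when the DMA transfer completes */
void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    TIM6->CR1 &= ~TIM_CR1_CEN;    /* stop timing */
    /* TIM6->CNT / 48.0f = elapsed time in microseconds */
}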
Thanks! :)
2023-06-13 06:16 AM
The overhead of the timing method itself is causing the discrepancy.
To time it more accurately, convert a whole batch of samples in circular mode and toggle a pin each time the transfer-complete (TC) flag is set. Measure the time between toggles and average it over a long period, as in the sketch below.
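Something like this (hadc1, the buffer size, and the pin are placeholders; circular DMA and continuous conversion must be enabled in your ADC/DMA init):

#include "main.h"             /* HAL headers from the Cube project (assumed) */

#define NUM_SAMPLES 256u

extern ADC_HandleTypeDef hadc1;
static uint16_t adcBuf[NUM_SAMPLES];

void StartFreeRunning(void)
{
    /* With circular DMA this keeps converting until stopped */
    HAL_ADC_Start_DMA(&hadc1, (uint32_t *)adcBuf, NUM_SAMPLES);
}

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    /* TC fires once per NUM_SAMPLES conversions, so the scope period
       between toggles = NUM_SAMPLES * (time per conversion) */
    HAL_GPIO_TogglePin(GPIOA, GPIO_PIN_5);   /* any spare pin will do */
}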
2023-06-13 06:31 AM
So if I understand correctly, the code:
/* Benchmarking: reset and start TIM6 */
TIM6->CNT = 0;
TIM6->CR1 |= TIM_CR1_CEN;

/* ...CODE TO TIMESTAMP... */

/* Benchmarking: stop TIM6 */
TIM6->CR1 &= ~TIM_CR1_CEN;

/* Elapsed time in microseconds (TIM6 counts at the 48 MHz SYSCLK) */
volatile float elapsedTime = TIM6->CNT / 48.0f;
is what's causing the overhead, not the HAL functions? And I should measure the average time between DMA transfer-complete (TC) flags being set?
2023-06-13 06:38 AM
Everything that runs between your timing events adds overhead. I would expect most of it comes from the HAL here, but since you're timing an event of 0.4 µs, even small delays have a big impact.
You mention HAL_ADC_ConvCpltCallback(), which is only reached after the DMA interrupt has fired and the HAL has processed it, so all of that latency is counted before your timer-stop statement executes.
> And I should measure the average time between DMA transfer-complete (TC) flags being set?
That's how I would do it (if I didn't trust the reference manual). It will have jitter, but the jitter averages to zero over the long term, so the mean gives exactly the time you want.
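For example (names are illustrative; assumes the circular DMA setup above, with TIM6 left free-running at 48 MHz):

#include "main.h"             /* HAL headers from the Cube project (assumed) */

#define NUM_SAMPLES 256u

static volatile uint32_t prevStamp, ticksSum, periods;

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    uint32_t now = TIM6->CNT;
    if (periods++ > 0)
    {
        /* The uint16_t cast keeps the delta correct across 16-bit
           counter wrap, as long as one period is under 65536 ticks */
        ticksSum += (uint16_t)(now - prevStamp);
    }
    prevStamp = now;
}

/* After many periods, the interrupt-latency errors telescope out of the sum:
   average time per conversion in microseconds
   = ((float)ticksSum / (periods - 1)) / NUM_SAMPLES / 48.0f */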
2023-06-13 06:40 AM
Thank you very much for the response, it was very helpful! :)