
ADC via DMA conversion time on Nucleo-C031C6

DDjur.1
Associate II

Hello,

I have an issue with the ADC conversion timing while using the DMA controller.

I have configured only one ADC channel at 12-bit resolution, with the ADC clock at 24 MHz (50% of SYSCLK), and it shows correct conversion results (verified with a simple voltage divider).

However, I tried timing it and it takes 6.6 µs with optimization on. The datasheet shows 0.4 µs per conversion. I am using the HAL_ADC_Start_DMA() function. My timer starts right before it and stops in the HAL_ADC_ConvCpltCallback() function. Is this timing the result of abstraction-layer overhead, or am I missing something?
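Roughly, the measurement looks like this (a simplified sketch; hadc1, the destination variable, and the callback body are placeholders for my actual code):

/* Sketch of the measurement: TIM6 free-running at 48 MHz, prescaler 0 */
extern ADC_HandleTypeDef hadc1;          /* placeholder ADC handle name */
static uint16_t adcVal;                  /* single-sample DMA destination */
static volatile float elapsedTime_us;

void MeasureSingleConversion(void)
{
  TIM6->CNT = 0;
  TIM6->CR1 |= TIM_CR1_CEN;                          /* start timestamp */
  HAL_ADC_Start_DMA(&hadc1, (uint32_t *)&adcVal, 1); /* one conversion via DMA */
}

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
  TIM6->CR1 &= ~TIM_CR1_CEN;                         /* stop timestamp */
  elapsedTime_us = TIM6->CNT / 48.0f;                /* counts at 48 MHz -> microseconds */
}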

Thanks! :)

TDK
Guru

The overhead involved in timing is causing the discrepancy.

To do better at timing, convert a whole bunch of samples in circular mode and toggle a pin when the TC flag is set. Measure the time between TC flags being set and average it over a long period.
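A minimal sketch of that setup, assuming the standard STM32 HAL names, an ADC already configured for circular DMA, and a spare output pin (PA5 is only an example), could look like this:

/* Hedged sketch: circular DMA, toggle a pin on each transfer-complete interrupt.
   Buffer size, handle and pin are assumptions, not taken from the original post. */
#define ADC_BUF_LEN 1024U
static uint16_t adcBuf[ADC_BUF_LEN];
extern ADC_HandleTypeDef hadc1;

void StartContinuousSampling(void)
{
  /* Circular DMA keeps conversions running back to back with no per-sample CPU work */
  HAL_ADC_Start_DMA(&hadc1, (uint32_t *)adcBuf, ADC_BUF_LEN);
}

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
  /* One edge per ADC_BUF_LEN conversions; measure the period on a scope */
  HAL_GPIO_TogglePin(GPIOA, GPIO_PIN_5);
}

The scope period between toggles divided by ADC_BUF_LEN gives the per-conversion time; a constant interrupt latency cancels out of the period, and any jitter averages out over many intervals.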

If you feel a post has answered your question, please click "Accept as Solution".

DDjur.1
Associate II

So if I understand correctly, the code:

/* Benchmarking: reset and start TIM6 counter */
TIM6->CNT = 0;
TIM6->CR1 |= TIM_CR1_CEN;

/* ...CODE TO TIMESTAMP... */

/* Benchmarking: stop TIM6 counter */
TIM6->CR1 &= ~TIM_CR1_CEN;

/* Obtain the elapsed time in microseconds (TIM6 clocked at the 48 MHz SYSCLK) */
volatile float elapsedTime = TIM6->CNT / 48.0; /* 48 MHz sysclk */

is causing overhead, not the HAL functions? And I should measure the average time between DMA transfer complete (TC) flags being set?

TDK
Guru

Everything that runs between your timing events is causing overhead. I would expect most of it to be due to the HAL here, but since you're timing a 0.4 µs event, even small delays will have a big impact.

You mention HAL_ADC_ConvCpltCallback, which is going to be called before your last statements are executed.

> And I should measure the average time between DMA transfer complete (TC) flags being set?

That's how I would do it (if I didn't trust the reference manual). It will have jitter, but the jitter averages out to zero over the long term, so it measures exactly the time you want.
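If a scope isn't handy, a related trick, sketched here with purely illustrative names and under the assumption that the ADC runs in continuous mode with the DMA in normal (one-shot) mode, is to amortize the start-up and interrupt overhead over one large buffer and let the timer span all of it:

/* Hedged sketch: average over many conversions to dilute the fixed overhead */
#define NUM_SAMPLES 1024U                 /* keep small enough that 16-bit TIM6 can't overflow */
static uint16_t adcBuf[NUM_SAMPLES];
static volatile float avgConvTime_us;
extern ADC_HandleTypeDef hadc1;           /* placeholder handle name */

void MeasureAverageConversionTime(void)
{
  TIM6->CNT = 0;
  TIM6->CR1 |= TIM_CR1_CEN;                                   /* start timing */
  HAL_ADC_Start_DMA(&hadc1, (uint32_t *)adcBuf, NUM_SAMPLES); /* NUM_SAMPLES conversions */
}

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
  TIM6->CR1 &= ~TIM_CR1_CEN;                                  /* stop after the last transfer */
  /* TIM6 counts at 48 MHz: counts/48 = microseconds; divide by the sample count */
  avgConvTime_us = (TIM6->CNT / 48.0f) / NUM_SAMPLES;
}

The single start-up and callback overhead is then spread across NUM_SAMPLES conversions, so the per-sample figure approaches the datasheet value.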

If you feel a post has answered your question, please click "Accept as Solution".
DDjur.1
Associate II

Thank you very much for the response, it was very helpful! :)