
Achieving the theoretical ADC Conversion Time (12-bit) on STM32G4

BitCurious
Associate II

Hello,

I am using a NUCLEO-G431KB with IAR EWARM 9.20.2 and trying to achieve a 250 ns ADC conversion time on ADC1 (IN1, Single-Ended mode).

Configuration: 12-bit resolution, 2.5 cycles sampling time, ADC clock at 60 MHz (synchronous/2, SYSCLK 120 MHz, HCLK 60 MHz).


Theoretical conversion time:

T_conversion = T_sampling + T_12-bit = (2.5 + 12.5) ADC clock cycles = 15 cycles / 60 MHz = 250 ns

Measured time: 2590 ns (2.59 µs) using interrupt mode (HAL_ADC_Start_IT).

Below is the relevant part of the code where the ADC is started, and a debug pin is toggled before and after the ADC conversion:

while (1)
{
    HAL_ADC_Start_IT(&hadc1);                                   /* (re)start a conversion on every loop pass */
    HAL_GPIO_WritePin(PA9_GPIO_Port, PA9_Pin, GPIO_PIN_SET);    /* debug pin high: conversion started */
}

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    HAL_GPIO_WritePin(PA9_GPIO_Port, PA9_Pin, GPIO_PIN_RESET);  /* debug pin low: conversion finished */
    dmaBuffer = HAL_ADC_GetValue(&hadc1);                       /* read the 12-bit result */
}

Is the delay caused by the HAL functions? On reviewing the definition of HAL_ADC_Start_IT, it appears to involve a lengthy process. Could this overhead be due to unnecessary steps like calibration or other operations being performed every time it is called? If yes, how can I minimize this overhead and achieve the theoretical minimum conversion time?
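For what it's worth, here is a minimal sketch of one way to take the per-call overhead out of the loop. It assumes hadc1 is the CubeMX-generated handle and that ContinuousConvMode = ENABLE is set in the ADC init; it does not remove the per-conversion interrupt cost discussed in the replies below.

extern ADC_HandleTypeDef hadc1;   /* CubeMX handle, ContinuousConvMode = ENABLE assumed */

/* In main(), after MX_ADC1_Init(): start once. In continuous mode the ADC
 * re-triggers itself in hardware, so HAL_ADC_Start_IT() and its state checks
 * are not executed again for every sample. */
HAL_ADC_Start_IT(&hadc1);

while (1)
{
    /* nothing to restart here; HAL_ADC_ConvCpltCallback() still fires once per conversion */
}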

Thank you for your time!

AScha.3
Chief III

Hi,

just imagine: you have a very fast CPU, running at 120 MHz, ok?

And you think you could let it take an interrupt at 4 MHz (250 ns conversion time), so it has about 120/4 = 30 cycles per conversion, ok?

And you (should) know that interrupt entry on an ARM Cortex-M is very fast, the hardware stacks the registers for you, but it still costs about 13 cycles into the ISR and 13 cycles back out, which leaves about 4 (!!) cycles for everything you do in your program and in the ISR.

Or, in short: a very strange idea (= useless + impossible!).

 

So at this speed, if you really need it, only DMA is your friend.

Use DMA (and a TIM to trigger the ADC at a defined rate, or continuous mode to run it at its max speed) to store the ADC results, then get an interrupt when the DMA is done, or run it in circular mode and use the half/complete callbacks; see the sketch below.
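For example, a minimal sketch along those lines (assuming CubeMX has configured ADC1 in continuous mode with a circular DMA channel linked to hadc1; the buffer size and the ProcessSamples() helper are placeholders, not HAL functions):

#define ADC_BUF_LEN  1024U
static uint16_t adcBuf[ADC_BUF_LEN];

/* In main(), after MX_DMA_Init() and MX_ADC1_Init(): start once, the DMA
 * then stores every conversion result without any CPU involvement. */
HAL_ADC_Start_DMA(&hadc1, (uint32_t *)adcBuf, ADC_BUF_LEN);

/* Process the first half of the buffer while the DMA fills the second... */
void HAL_ADC_ConvHalfCpltCallback(ADC_HandleTypeDef *hadc)
{
    ProcessSamples(&adcBuf[0], ADC_BUF_LEN / 2);              /* placeholder user function */
}

/* ...and the second half while the DMA wraps around to the first. */
void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    ProcessSamples(&adcBuf[ADC_BUF_LEN / 2], ADC_BUF_LEN / 2);
}

With this circular/half-complete scheme, one half of the buffer is always stable for processing while the other half is being written, so no samples are lost as long as the processing finishes within half a buffer period.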

If you feel a post has answered your question, please click "Accept as Solution".

> So at this speed, if you really need it, only DMA is your friend.

I very much agree.
As a side note, the "old" SPL came with properly working ADC/DMA examples, and I have quite a few applications based on this method.

Although this is only one side of the coin.
The other is the processing and output cycle, which needs to at least keep up with the input rate.
And reading the initial post, this sounds suspiciously like a cycle-by-cycle control application.
The Cortex-M architecture was not specifically designed for applications requiring very short cycles and a very high interrupt cadence. Maybe a DSP would be better suited for the OP's purpose.

OK, is DMA supposed to be the quickest mode for the ADC?

So, I'm working on a project where precise sampling of a sine wave signal (currently 1 kHz, can be higher later) is critical. I need to know the exact number of samples per cycle to perform specific calculations in my program.

Initially, I implemented ADC sampling in interrupt mode and measured its performance at various ADC clock frequencies. Here's my code snippet:

 

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
    static uint32_t counter = 0;

    counter++;
    if (counter == 10000)                        /* pin toggled once per 10000 conversions */
    {
        HAL_GPIO_WritePin(GPIOA, GPIO_PIN_9, GPIO_PIN_RESET);
        counter = 0;
    }
    adcBuffer = HAL_ADC_GetValue(&hadc1);        /* read the converted value */
}

 

To validate this, I calculated the expected time for 10000 conversions and measured it with an oscilloscope (for example, at a 60 MHz ADC clock: 10000 x 15 cycles / 60 MHz = 2.5 ms). However, I noticed significant discrepancies between the expected and measured times:

Clock Frequency (ADC)     Expected Time (ms)    Measured Time (ms)
60 MHz (120 MHz/2)        2.5                   15.43
30 MHz (120 MHz/4)        5                     14.67
20 MHz (120 MHz/6)        7.5                   14.97
15 MHz (120 MHz/8)        10                    17.58
7.5 MHz (120 MHz/16)      20                    19.97

Clearly, the results are far from the expected values. I'm considering switching to DMA mode for potentially better performance and consistency. What could be causing the significant delay?

And is DMA generally the recommended method for high-speed ADC sampling in STM32 applications?

> Is DMA supposed to be the quickest mode for the ADC?

You should really read a little about the magic machine you're working with: the SoC (system on chip), CPU + peripherals.

A DMA controller is a hardware sequencer that simply accesses RAM or peripherals as fast as it can...

 

>Is DMA generally the recommended method for high-speed ADC sampling in STM32 applications?

Not only on STM32... on any controller, accessing any peripheral.

 

+

> What could be causing the significant delay?

Your program/interrupt (or whatever) is way too slow... so at a 7.5 MHz ADC clock -> 20 ms, you get the speed you expect.

Or at a lower speed... try it. But for higher speeds, only DMA is useful; see the sketch below.
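As a sketch of the timer-paced approach (untested example; it assumes CubeMX has set ADC1's external trigger to TIM6 TRGO with a circular DMA channel on hadc1, and that the usual MSP/clock init is generated; SAMPLE_RATE_HZ, TIM6_CLK_HZ, adcBuf and StartSampling() are placeholder names): because the timer, not software, starts each conversion, the number of samples per 1 kHz sine period is exactly SAMPLE_RATE_HZ / 1000, independent of interrupt latency.

#include "stm32g4xx_hal.h"

#define SAMPLE_RATE_HZ  100000U          /* 100 kS/s -> 100 samples per 1 kHz period */
#define TIM6_CLK_HZ     60000000U        /* adjust to the real TIM6 kernel clock */
#define ADC_BUF_LEN     1000U

extern ADC_HandleTypeDef hadc1;          /* generated by CubeMX */
static TIM_HandleTypeDef htim6;
static uint16_t adcBuf[ADC_BUF_LEN];

void StartSampling(void)                 /* call once after MX_DMA_Init()/MX_ADC1_Init() */
{
    __HAL_RCC_TIM6_CLK_ENABLE();

    htim6.Instance         = TIM6;
    htim6.Init.Prescaler   = 0;
    htim6.Init.CounterMode = TIM_COUNTERMODE_UP;
    htim6.Init.Period      = (TIM6_CLK_HZ / SAMPLE_RATE_HZ) - 1U;   /* 600 - 1 */
    HAL_TIM_Base_Init(&htim6);

    TIM_MasterConfigTypeDef trgo = {0};
    trgo.MasterOutputTrigger = TIM_TRGO_UPDATE;    /* one ADC trigger per timer update */
    HAL_TIMEx_MasterConfigSynchronization(&htim6, &trgo);

    HAL_ADC_Start_DMA(&hadc1, (uint32_t *)adcBuf, ADC_BUF_LEN);     /* DMA collects results */
    HAL_TIM_Base_Start(&htim6);                                     /* timer paces the ADC */
}

If the signal frequency goes up later, only the sample rate (the timer period) has to change; the rest of the chain stays the same.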

If you feel a post has answered your question, please click "Accept as Solution".