2025-01-08 12:36 AM - edited 2025-01-08 12:40 AM
Hello,
I am using a NUCLEO-G431KB with IAR EWARM 9.20.2 and trying to achieve a 250 ns ADC conversion time on ADC1 (IN1, Single-Ended mode).
Configuration: 12-bit resolution, 2.5 cycles sampling time, ADC clock at 60 MHz (synchronous/2, SYSCLK 120 MHz, HCLK 60 MHz).
Theoretical conversion time:
Tconversion = Tsampling + T12-bit = 2.5 + 12.5 = 15 ADC clock cycles → 15 / 60 MHz = 250 ns
Measured time: 2590 ns (2.59 µs) using interrupt mode (HAL_ADC_Start_IT).
Below is the relevant part of the code where the ADC is started, and a debug pin is toggled before and after the ADC conversion:
while (1)
{
  HAL_ADC_Start_IT(&hadc1);                                   /* start one conversion in interrupt mode */
  HAL_GPIO_WritePin(PA9_GPIO_Port, PA9_Pin, GPIO_PIN_SET);    /* debug pin high: conversion started */
}

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
  HAL_GPIO_WritePin(PA9_GPIO_Port, PA9_Pin, GPIO_PIN_RESET);  /* debug pin low: conversion finished */
  dmaBuffer = HAL_ADC_GetValue(&hadc1);
}
Is the delay caused by the HAL functions? Looking at the definition of HAL_ADC_Start_IT, it appears to involve a fairly long sequence of operations. Could this overhead come from unnecessary steps, such as calibration or other housekeeping, being performed on every call? If so, how can I minimize this overhead and achieve the theoretical minimum conversion time?
Thank you for your time!
2025-01-08 06:14 AM
Hi,
just imagine: you have a very fast CPU running at 120 MHz, ok?
And you want it to take an interrupt at 4 MHz (one per 250 ns conversion) -- so it has about 120/4 = 30 cycles per conversion, ok?
And you (should) know that interrupt entry on an ARM Cortex-M is fast, because the hardware stacks the context for you, but it still costs roughly 13 cycles into the ISR and 13 back out, which leaves about 4 (!!) instructions for everything you actually want to do in your main loop and in the ISR.
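To make the budget explicit (rough numbers, using the figures above):

  cycles per conversion:      120 MHz / 4 MHz  = 30
  exception entry + exit:     ~13 + ~13        = ~26
  cycles left for your code:  30 - 26          = ~4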
In short: a very strange idea at this rate ( = useless + impossible!).
So at this speed, if you really need it, DMA is your only friend.
Use DMA to store the ADC results (with a TIM triggering the ADC at a defined rate, or continuous mode to run it at its maximum speed), then take a single interrupt when the DMA transfer is complete (or run the DMA in circular mode and use the half/full-transfer callbacks).
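For example, a minimal sketch assuming a CubeMX/HAL project on the G431 where ADC1 is already configured with a circular DMA channel and either continuous mode or an external trigger from TIM6 TRGO; adc_buf, ADC_BUF_LEN and adc_sampling_start are just placeholder names:

#include "stm32g4xx_hal.h"

#define ADC_BUF_LEN  1024                      /* placeholder buffer size */

extern ADC_HandleTypeDef hadc1;                /* from the generated init code */
extern TIM_HandleTypeDef htim6;                /* trigger timer, only if not in continuous mode */

static uint16_t adc_buf[ADC_BUF_LEN];          /* DMA writes the conversion results here */

void adc_sampling_start(void)
{
  /* ADC results go straight to adc_buf via DMA; in circular mode this runs forever */
  HAL_ADC_Start_DMA(&hadc1, (uint32_t *)adc_buf, ADC_BUF_LEN);

  /* start the trigger timer (skip this if the ADC runs in continuous mode) */
  HAL_TIM_Base_Start(&htim6);
}

/* first half of adc_buf is filled -> process it while DMA fills the second half */
void HAL_ADC_ConvHalfCpltCallback(ADC_HandleTypeDef *hadc)
{
  /* process adc_buf[0 .. ADC_BUF_LEN/2 - 1] */
}

/* second half is filled -> process it while DMA wraps around to the first half */
void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
  /* process adc_buf[ADC_BUF_LEN/2 .. ADC_BUF_LEN - 1] */
}

This way the ADC converts back-to-back at its hardware rate and the CPU only sees two interrupts per buffer instead of one per sample.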
2025-01-08 06:34 AM
> So at this speed, if you really need it, DMA is your only friend.
I very much agree.
As a side note, the "old" Standard Peripheral Library (SPL) came with properly working ADC/DMA examples, and I have quite a few applications based on this method.
Although this is only one side of the coin; the other is the processing and output path, which needs to at least keep up with the input rate.
And reading the initial post, this sounds suspiciously like a cycle-by-cycle control application.
The Cortex-M architecture was not specifically designed for applications requiring very short cycle times and a very high interrupt cadence. Maybe a DSP would be better suited to the OP's purpose.