
STM32G474 TIM triggered ADC + DMA limited to 50kHz (50ksps)

acsatue
Associate II

Hi all, I am triggering ADC conversions from TIM1 and I can't achieve a sampling rate higher than 50 kHz. I am using an STM32G474RE and the HAL library. Based on AN5496 and AN5497 I should be able to go up to 200 kHz, and my target is 100 kHz.

 

I am testing the STM32G474 (STEVAL-DPSG474) for a power converter application where I need to sample the ADC at 100 kHz (timer TRGO driven). For that purpose I use ADC + DMA sampling without the FMAC, and SPI to transfer the sampled data.

As the ADC stores samples in 16-bit format (12-bit converter), I use 2 bytes per value for the SPI transfer. As a test I send 2 values through SPI: one coming from the ADC (2 variable bytes) and a fixed one for debugging (2 fixed bytes):

 

// main.c

/* USER CODE BEGIN PV */

#define Buflen 1
uint16_t adcValue = 0;
volatile uint16_t adc_dma_result[Buflen];
uint8_t SPI_buffer_tx[4] = {20, 60, 160, 200};

/* Private function prototypes -----------------------------------------------*/
void SystemClock_Config(void);
void MX_GPIO_Init(void);
static void MX_DMA_Init(void);
void MX_RTC_Init(void);
static void MX_ADC1_Init(void);
static void MX_TIM15_Init(void);
static void MX_TIM16_Init(void);
static void MX_TIM17_Init(void);
static void MX_TIM1_Init(void);
static void MX_SPI3_Init(void);

/* USER CODE END PV */

int main(void)
{
  /* Reset of all peripherals, Initializes the Flash interface and the Systick. */
  HAL_Init();

  /* Configure the system clock */
  SystemClock_Config();

  /* Initialize all configured peripherals */
  MX_GPIO_Init();
  MX_DMA_Init();
  MX_RTC_Init();
  MX_ADC1_Init();
  MX_TIM15_Init();
  MX_TIM16_Init();
  MX_TIM17_Init();
  MX_TIM1_Init();
  MX_SPI3_Init();
  /* USER CODE BEGIN 2 */

  /* Perform an ADC automatic self-calibration and enable ADC */
  if (HAL_ADCEx_Calibration_Start(&hadc1, ADC_SINGLE_ENDED) != HAL_OK)
  {
    /* Configuration Error */
    Error_Handler();
  }

  /* Start the ADC DMA */
  if (HAL_ADC_Start_DMA(&hadc1, (uint32_t*)adc_dma_result, Buflen) != HAL_OK)
  {
    /* Configuration Error */
    Error_Handler();
  }

  // start pwm generation
  if (HAL_TIM_PWM_Start(&htim1, TIM_CHANNEL_1) != HAL_OK) {
    Error_Handler();
  }

  /* USER CODE END 2 */

  /* Infinite loop */
  /* USER CODE BEGIN WHILE */
  while (1)
  {
  }
}

 

To debug during development, I send an SPI message with the logged ADC data on every Transfer Complete, using the predefined HAL callback:

 

// stm32g4xx_it.c

/* USER CODE BEGIN 1 */

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc)
{
  if (hadc == &hadc1) {

    adcValue = adc_dma_result[0];

    SPI_buffer_tx[1] = adcValue & 0xFF;         // low byte
    SPI_buffer_tx[0] = (adcValue >> 8) & 0xFF;  // high byte

    HAL_SPI_Transmit(&hspi3, SPI_buffer_tx, sizeof(SPI_buffer_tx), 1000);

    // If continuous conversion mode is DISABLED, restart the ADC + DMA here
    HAL_ADC_Start_DMA(&hadc1, (uint32_t*)adc_dma_result, Buflen);
  }
}

/* USER CODE END 1 */

 

The main clock is HSI + PLL (170 MHz) for the timers, and the ADC is clocked at 56.66 MHz, so a single ADC conversion should complete in well under 10 µs (1/100 kHz).
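For reference, a rough conversion-time estimate (my numbers, assuming the minimum 2.5-cycle sampling time from the reference manual; a longer configured sampling time scales this up):

// STM32G4 ADC at 12-bit resolution: conversion takes 12.5 ADC clock cycles,
// plus the configured sampling time (minimum 2.5 cycles).
// Total = 2.5 + 12.5 = 15 cycles
// At 56.66 MHz ADC clock: 15 / 56.66 MHz ~ 0.26 us per conversion,
// far below the 10 us budget, so the ADC itself is not the bottleneck.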

Clock Config.png

 

To make TIM1 generate TRGO at 100 kHz I configure it with PSC = 0 and ARR = 1699: with a 170 MHz TIM1 clock this gives a 100 kHz update rate (170 MHz / 1700 = 100 kHz), as stated in this support note.
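For reference, a minimal HAL sketch of that timer setup (my reconstruction, not the original code; it assumes CubeMX naming and a 170 MHz timer clock):

// TIM1 update event as TRGO at 100 kHz
TIM_MasterConfigTypeDef sMasterConfig = {0};

htim1.Instance = TIM1;
htim1.Init.Prescaler = 0;                               // PSC = 0
htim1.Init.CounterMode = TIM_COUNTERMODE_UP;
htim1.Init.Period = 1699;                               // ARR: 170 MHz / 1700 = 100 kHz
htim1.Init.ClockDivision = TIM_CLOCKDIVISION_DIV1;
if (HAL_TIM_Base_Init(&htim1) != HAL_OK) {
  Error_Handler();
}

sMasterConfig.MasterOutputTrigger = TIM_TRGO_UPDATE;    // TRGO pulse on every update event
sMasterConfig.MasterSlaveMode = TIM_MASTERSLAVEMODE_DISABLE;
if (HAL_TIMEx_MasterConfigSynchronization(&htim1, &sMasterConfig) != HAL_OK) {
  Error_Handler();
}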

 

What I actually get is 50 kHz sampling (20 µs between SPI transfers). If I increase TIM1 ARR beyond 3399 (the value for 50 kHz), the sampling rate decreases below 50 kHz as expected: I can sample at exactly 20 kHz, 30 kHz, 40 kHz... but below ARR = 3399 I see no increase above 50 kHz. Could it be because AN5497 runs from CCM SRAM instead of flash? Because it uses DMA + FMAC instead of reading the data register over DMA? Or because I use the HAL library, and LL is better optimised for register-level programming?

The next figure shows 50 kHz ADC sampling with SPI transmission (20 µs between SPI sends), sending 4 bytes each time.

50kHz_spi.png

When I run the code (no matter whether it's a Debug or Release build) it samples just once, and at the ADC I can measure a value decaying from its original level. In numbers:

The ADC input is 0 V, which with right alignment corresponds to an ADC reading of around 17000 (12-bit). Over SPI (hex coded) I see around 10-20 samples starting at 17000 and ending at 0. Could it be the ADC SAR sampling capacitor discharging (as if I only sample once)?

 

Thanks in advance for any help or tip!

1 ACCEPTED SOLUTION

acsatue
Associate II

Finally I switched to Simulink Embedded Coder for the application (its generated code is based on the LL library, not HAL), and the problem is solved.

ADC triggered by HRTIM at 100kHz is working using DMA TC interrupts.

The SPI rate is limited by the SPI clock (20 Mbps) and by the size of the data buffer to be transferred (6 values at 2 bytes/value for now).

 

Pros: Fast prototyping

Cons: It is difficult to reduce the non-deterministic transfer time and baud rate of the SPI. Here I show how a data transfer at 20 kbaud (desired 50 µs between data packages) can take anywhere from 12 µs (24% of real time) to 31 µs (62%). I will consider reducing the SPI clock to 10 Mbps and the logging rate to 1 kHz or even 100 Hz, as it is only for external datalogging.

acsatue_0-1702991028101.png

Thank you all for your tips and support!


11 REPLIES
Tinnagit
Senior II

I'm not sure what the root of your problem is, but here are a few thoughts from reviewing your code.

I think you can do it at 200 ksps, but your process has to be redesigned to get there.

You should make sure the ADC sampling and conversion time does not exceed half of your overall sampling period.

The next thing you should pay close attention to is your SPI.

If possible, it should operate in DMA mode too, to reduce the time spent in the ADC Conversion Complete callback so the ADC is ready for the next sample (see the sketch below).

So I don't recommend doing much work in an interrupt callback; it's about time and task management.

It's not hard, so fighto.
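To illustrate the DMA suggestion, a minimal sketch (mine, not Tinnagit's; it assumes DMA is enabled for SPI3 TX in CubeMX and reuses hspi3 and SPI_buffer_tx from the original post):

volatile uint8_t spi_busy = 0;

// Call this from HAL_ADC_ConvCpltCallback instead of the blocking HAL_SPI_Transmit
void send_sample(uint16_t value)
{
  if (!spi_busy) {                            // drop the sample if the previous frame is still going out
    SPI_buffer_tx[0] = (value >> 8) & 0xFF;   // high byte
    SPI_buffer_tx[1] = value & 0xFF;          // low byte
    spi_busy = 1;
    HAL_SPI_Transmit_DMA(&hspi3, SPI_buffer_tx, sizeof(SPI_buffer_tx));
  }
}

void HAL_SPI_TxCpltCallback(SPI_HandleTypeDef *hspi)
{
  if (hspi == &hspi3) {
    spi_busy = 0;                             // ready for the next frame
  }
}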

TDK
Guru

Definitely seems like CPU resources are the bottleneck here. Generally interrupts are going to be limited to tens of kHz. I'm surprised you can get 50 kHz with what you're doing.

Perhaps break the problem down. Use the ADC in polling mode until you're satisfied with its results; pins connected to GND should read close to 0, for example. Once the ADC is working, establish continuous output over SPI using DMA at the desired rate. You can let the two processes run asynchronously if the ADC is sampled faster than the SPI, or you can synchronize them with some amount of effort.
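A polling test along those lines could look like this (a sketch, assuming the ADC trigger is temporarily set back to ADC_SOFTWARE_START):

// One software-triggered conversion: no timer, no DMA, no interrupts
HAL_ADC_Start(&hadc1);
if (HAL_ADC_PollForConversion(&hadc1, 10) == HAL_OK) {  // 10 ms timeout
  uint16_t raw = HAL_ADC_GetValue(&hadc1);              // a pin tied to GND should read near 0
  (void)raw;
}
HAL_ADC_Stop(&hadc1);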


Thank you TDK,

 

I was sending through SPI synchronously, but for large amounts of data that is not possible even at 50 kHz. I now send through SPI at 10 kHz, but the ADC for control still needs to sample at 100 ksps... I am trying to leave the HAL library and use LL instead, since the application note reaches 200 ksps for ADC sampling and reference generation.
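The LL start-up sequence would look roughly like this (a sketch of mine, not taken from the application note; it assumes the ADC, DMA and trigger are already configured):

// Calibrate, enable, then arm the hardware trigger
LL_ADC_StartCalibration(ADC1, LL_ADC_SINGLE_ENDED);
while (LL_ADC_IsCalibrationOnGoing(ADC1)) { }
for (volatile uint32_t i = 0; i < 100; i++) { }   // RM0440 requires a short delay between calibration and enable
LL_ADC_Enable(ADC1);
while (!LL_ADC_IsActiveFlag_ADRDY(ADC1)) { }
LL_ADC_REG_StartConversion(ADC1);                 // with an external trigger configured, this arms TIM1 TRGO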

 

The issue of getting only one measurement and then reading a "discharging capacitor" is probably due to the "Continuous mode: Disabled" configuration, in which I need to restart the ADC after every conversion; in my code I was only restarting the DMA.
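For reference, the usual timer-triggered arrangement needs no restart at all (a sketch of the intended configuration, my assumption rather than the actual init code): each TRGO pulse starts one conversion while circular-mode DMA keeps running.

// ADC init fields for hardware-triggered, restart-free operation
hadc1.Init.ContinuousConvMode = DISABLE;                           // one conversion per trigger
hadc1.Init.ExternalTrigConv = ADC_EXTERNALTRIG_T1_TRGO;            // TIM1 TRGO starts each conversion
hadc1.Init.ExternalTrigConvEdge = ADC_EXTERNALTRIGCONVEDGE_RISING;
hadc1.Init.DMAContinuousRequests = ENABLE;                         // pair with DMA in circular mode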

Thank you Tinnagit,

I didn't know it was recommended to keep the ADC conversion below half of the overall sampling period; I will take that into account. I was counting on the full sampling period (1/100 kHz) to convert the maximum number of ADC channels (16-bit data is 2 bytes per value, so for 16 analog input channels I need to transfer 32 bytes within 1/100 kHz). If I am not wrong, I can reduce the total conversion time by increasing the ADC clock.
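A quick throughput check on that (my arithmetic): 16 channels × 2 bytes = 32 bytes = 256 bits per 10 µs period, i.e. at least 25.6 Mbit/s of raw SPI throughput before any inter-byte gaps, which already exceeds the 20 Mbit/s SPI clock mentioned in the accepted solution.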

I am using DMA and reading the result on every DMA TC (Transfer Complete) interrupt, so I think the sampling limit comes from the HAL library, not from the ADC itself.

To avoid doing the DMA read inside the ADC TC IRQ handler I tried using the DMA interrupt directly, without success.

I'll keep you informed!

Tinnagit
Senior II

So, is it solved?

"I didn't knew It was recommended to not exceed  a half of the overall sampling time for ADC conversion."
No, I mean half of your overall sampling time which it's not the same as overall sampling time for ADC conversion because the overall sampling time for ADC conversion is ADC Sampling Time + ADC Conversion Time which it was set by Tsampling register in ADC register and Tconversion is depend on your ADC resolution but "your overall sampling time" is frequency that you start sampling ADC.

Ex.

Say you set the ADC to 12 bits with 1.5-cycle sampling and a 14 MHz ADC clock.
Then you have 1.5 cycles of sampling plus 12.5 cycles of conversion = 14 cycles total, so one conversion takes 1 µs.
That is the ADC's total conversion time.

But

your overall sampling period is how frequently you start ADC conversions.
For example, if you set the timer to trigger the ADC every 2 µs, you have 1 µs for the conversion and about 1 µs of slack for everything else. That way the trigger can fire every 2 µs, so you can sample every 2 µs with a precise sampling instant.

Switch on optimization.

Avoid Cube/HAL.

While LL may be slimmer, I personally avoid it: it's mostly just renaming registers, so in case of any problem or nonstandard requirement, checking/modifying the code against the RM becomes a lookup nightmare. But the ability to click in CubeMX may be appealing or even helpful in some particular cases. You do you.

JW

 

Thank you wacla,

We chose the STM32G474 for our application for the ease of programming complex high-level control strategies using HAL, or at least LL, in a fast prototyping stage, so we probably underestimated development times. NXP and Texas Instruments achieve 100-200 ksps ADC sampling for power converters using their low-level libraries.

This application note achieves 200 ksps ADC sampling (single input) using LL, so I will follow that path and try to add multiple ADC inputs and SPI data logging, and then focus on the higher-level control strategies. I wish I could switch to register-level (RM-based) development without slowing down the project!

 

And have you switched on compiler optimizations?

JW

My optimization level is None (-O0) and the debug level is Maximum (-g3).