Creating a "Delay by x microseconds" function using TIM6

kj.obara
Associate III

Hi, I'm trying to understand what I'm doing wrong: the following function seems to delay by anywhere between 667 ns and 5.67 µs, even though it's being called as Delay_us(4).

I'm using a Nucleo STM32F446RE board.

void KJO_TIM6_Init(uint16_t arel)
{
	LL_APB1_GRP1_EnableClock(LL_APB1_GRP1_PERIPH_TIM6);
	LL_TIM_InitTypeDef tim6init = {0};
 
	//the TIM6 clock on APB1 is 40MHz so we need to divide it by 40
	//tim6init.ClockDivision = LL_TIM_CLOCKDIVISION_DIV1;
	tim6init.Prescaler = 40;
	tim6init.Autoreload = arel;
	//tim6init.CounterMode = LL_TIM_COUNTERMODE_DOWN;
 
	LL_TIM_Init(TIM6, &tim6init);
	LL_TIM_SetOnePulseMode(TIM6, LL_TIM_ONEPULSEMODE_SINGLE);
	LL_TIM_ClearFlag_UPDATE(TIM6); //LL_TIM_Init generates an update event to latch PSC, which sets the flag
}
 
void KJO_Delay_us(uint16_t delay)
{
	LL_TIM_SetAutoReload(TIM6, delay);
	LL_TIM_SetPrescaler(TIM6, 40); //it's cleared on each reload
	LL_TIM_EnableCounter(TIM6); //one-pulse mode disables the counter
	while(!LL_TIM_IsActiveFlag_UPDATE(TIM6));
	LL_TIM_ClearFlag_UPDATE(TIM6);
}
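
One thing I'm not sure about is the prescaler arithmetic: if the counter clock really is f_CK_PSC / (PSC + 1), as I read the reference manual, then Prescaler = 40 gives 40 MHz / 41 ≈ 976 kHz rather than 1 MHz. A quick sanity check using the LL helper macro (my reading only, not verified on hardware):

	//the hardware divides by PSC + 1, so a 1 MHz tick from a
	//40 MHz timer clock needs PSC = 39, not 40
	uint32_t psc = __LL_TIM_CALC_PSC(40000000U, 1000000U); //evaluates to 39
	LL_TIM_SetPrescaler(TIM6, psc);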

SYSCLK was configured to 160 MHz using code generated by CubeMX, so as I understand it, TIM6 should take the APB1 timer frequency, which is 40 MHz, and prescale it by 40 to effectively give a 1 MHz counter clock, i.e. a 1 µs tick.

void SystemClock_Config(void)
{
  LL_FLASH_SetLatency(LL_FLASH_LATENCY_5);
 
  if(LL_FLASH_GetLatency() != LL_FLASH_LATENCY_5)
  {
    Error_Handler();
  }
  LL_PWR_SetRegulVoltageScaling(LL_PWR_REGU_VOLTAGE_SCALE1);
  LL_PWR_DisableOverDriveMode();
  LL_RCC_HSI_SetCalibTrimming(16);
  LL_RCC_HSI_Enable();
 
   /* Wait till HSI is ready */
  while(LL_RCC_HSI_IsReady() != 1)
  {
 
  }
  LL_RCC_PLL_ConfigDomain_SYS(LL_RCC_PLLSOURCE_HSI, LL_RCC_PLLM_DIV_8, 160, LL_RCC_PLLP_DIV_2);
  LL_RCC_PLL_Enable();
 
   /* Wait till PLL is ready */
  while(LL_RCC_PLL_IsReady() != 1)
  {
 
  }
  LL_RCC_SetAHBPrescaler(LL_RCC_SYSCLK_DIV_1);
  LL_RCC_SetAPB1Prescaler(LL_RCC_APB1_DIV_8);
  LL_RCC_SetAPB2Prescaler(LL_RCC_APB2_DIV_2);
  LL_RCC_SetSysClkSource(LL_RCC_SYS_CLKSOURCE_PLL);
 
   /* Wait till System clock is ready */
  while(LL_RCC_GetSysClkSource() != LL_RCC_SYS_CLKSOURCE_STATUS_PLL)
  {
 
  }
  LL_SetSystemCoreClock(160000000);
 
   /* Update the time base */
  if (HAL_InitTick (TICK_INT_PRIORITY) != HAL_OK)
  {
    Error_Handler();
  }
  LL_RCC_SetTIMPrescaler(LL_RCC_TIM_PRESCALER_TWICE);
}
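
Walking through the clock tree as I understand it (HSI = 16 MHz, and on the F4 the APB timer kernel clocks run at twice PCLK whenever the APB prescaler isn't 1):

	PLL input:  16 MHz HSI / PLLM 8 =   2 MHz
	VCO:        2 MHz x PLLN 160    = 320 MHz
	SYSCLK:     320 MHz / PLLP 2    = 160 MHz
	HCLK:       160 MHz / AHB 1     = 160 MHz
	PCLK1:      160 MHz / APB1 8    =  20 MHz
	TIM6 clock: 20 MHz x 2          =  40 MHz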

My function is used to toggle a GPIO pin that enables an LCD, but according to the logic analyser it never times right.

The pin is configured this way:

  GPIO_InitStruct.Pin = LCD_RS_PIN|LCD_RW_PIN|LCD_EN_PIN;
  GPIO_InitStruct.Mode = LL_GPIO_MODE_OUTPUT;
  GPIO_InitStruct.Speed = LL_GPIO_SPEED_FREQ_MEDIUM;
  GPIO_InitStruct.OutputType = LL_GPIO_OUTPUT_PUSHPULL;
  //GPIO_InitStruct.Pull = LL_GPIO_PULL_DOWN;
  LL_GPIO_Init(GPIOB, &GPIO_InitStruct);
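
For context, the call site looks roughly like this (KJO_LCD_PulseEnable is a simplified stand-in for my actual LCD routine; the pin macros are the ones from the init above):

void KJO_LCD_PulseEnable(void)
{
	LL_GPIO_SetOutputPin(GPIOB, LCD_EN_PIN);   //EN high
	KJO_Delay_us(4);                           //hold EN for ~4 us
	LL_GPIO_ResetOutputPin(GPIOB, LCD_EN_PIN); //EN low
}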

Does this approach even make sense? I've seen other suggestions for creating microsecond delay functions, but here I'm just trying to understand how the timers work. I used to use Atmel AVR chips, which were much simpler than this one, and I'm a bit lost.

Thanks in advance.

11 REPLIES
Nikita91
Lead II

Using a timer for this seems like a waste.

Alternatively, see the sample code in:

https://community.st.com/s/question/0D53W00000BJ7KdSAL/is-haldelay1-quaranteed-to-be-close-to-1ms

Regarding this line:

LL_TIM_EnableCounter(TIM6); //one-pulse mode disables the counter

Since the counter disables itself automatically in one-pulse mode, you don't need to check or clear the UPDATE flag. Just wait until the counter is disabled.
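
Something like this, keeping your names (untested sketch):

void KJO_Delay_us(uint16_t delay)
{
	LL_TIM_SetAutoReload(TIM6, delay);
	LL_TIM_SetPrescaler(TIM6, 40);
	LL_TIM_EnableCounter(TIM6);
	while (LL_TIM_IsEnabledCounter(TIM6)); //in one-pulse mode the hardware clears CEN at the update event
}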