
I know it sounds dumb, but how do you configure SysTick?

RobG
Associate III

I need to run SysTick every microsecond instead of every millisecond. So I called:

SysTick_Config(SystemCoreClock / 1000000);

except that the delay never ended, because the interrupt was now low priority, so I called:

HAL_InitTick(TICK_INT_PRIORITY);

Except this sets SysTick back to milliseconds. And if you reverse the order, SysTick_Config() sets the interrupt back to low priority. So either I am not using the preferred functions (I did get them from an example) or ???

Thanks in advance.

AvaTar
Lead

> I need to run SysTick every microseconds instead of every millisecond.

> So I called:

> SysTick_Config(SystemCoreClock / 1000000);

> except that delay never ended,...

The "dumb" thing is, an interrupt needs already 12 clock cycles for entry, and 12 for the exit, without even doing anything useful. Cranking up the interrupt rate quickly saturated the MCU, i.e. the next interrupt arrives before the fist one is executed. Which probably happened in your case.

For tasks at said frequency, check the available hardware options (e.g. a timer plus other peripherals), which do not load the core.
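
As a rough sketch of that idea, assuming an STM32 with a 32-bit TIM2, the HAL drivers, and a timer kernel clock equal to SystemCoreClock (adjust the prescaler for your clock tree): a free-running 1 MHz counter gives a microsecond timebase with no interrupt at all, you just read it when you need a timestamp.

#include "main.h"   /* pulls in the HAL headers for your device family */

TIM_HandleTypeDef htim2;                 /* illustrative name */

void MicrosTimer_Init(void)
{
  __HAL_RCC_TIM2_CLK_ENABLE();

  htim2.Instance           = TIM2;
  htim2.Init.Prescaler     = (SystemCoreClock / 1000000U) - 1U; /* 1 tick = 1 us */
  htim2.Init.CounterMode   = TIM_COUNTERMODE_UP;
  htim2.Init.Period        = 0xFFFFFFFFU;                       /* free-running, full 32 bits */
  htim2.Init.ClockDivision = TIM_CLOCKDIVISION_DIV1;
  HAL_TIM_Base_Init(&htim2);
  HAL_TIM_Base_Start(&htim2);            /* no interrupt, the counter just runs */
}

static inline uint32_t Micros(void)
{
  return __HAL_TIM_GET_COUNTER(&htim2);  /* wraps after roughly 71 minutes */
}

Elapsed time is then just Micros() - start, with the wrap handled by unsigned arithmetic.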

RobG
Associate III

Thanks for the information AvaTar; I will keep it in mind.

I found a solution:

	HAL_InitTick(TICK_INT_PRIORITY);            /* HAL tick housekeeping (1 ms, high priority) */
	SysTick_Config(SystemCoreClock / 1000000);  /* 1 us reload, but drops the priority to the lowest */
	HAL_NVIC_SetPriority(SysTick_IRQn, 0, 0U);  /* put the priority back to the highest */

What I don't understand is why the Cube does what it does.

__weak HAL_StatusTypeDef HAL_InitTick(uint32_t TickPriority)
{
  /* Configure the SysTick to have interrupt in 1ms time basis*/
  if (HAL_SYSTICK_Config(SystemCoreClock / (1000U / uwTickFreq)) > 0U)
  {
    return HAL_ERROR;
  }
 
  /* Configure the SysTick IRQ priority */
  if (TickPriority < (1UL << __NVIC_PRIO_BITS))
  {
    HAL_NVIC_SetPriority(SysTick_IRQn, TickPriority, 0U);
    uwTickPrio = TickPriority;
  }
  else
  {
    return HAL_ERROR;
  }
 
  /* Return function status */
  return HAL_OK;
}

I can see the init setting a default rate and interrupt priority, but why then does

__STATIC_INLINE uint32_t SysTick_Config(uint32_t ticks)
{
  if ((ticks - 1UL) > SysTick_LOAD_RELOAD_Msk)
  {
    return (1UL);                                                   /* Reload value impossible */
  }
 
  SysTick->LOAD  = (uint32_t)(ticks - 1UL);                         /* set reload register */
  NVIC_SetPriority (SysTick_IRQn, (1UL << __NVIC_PRIO_BITS) - 1UL); /* set Priority for Systick Interrupt */
  SysTick->VAL   = 0UL;                                             /* Load the SysTick Counter Value */
  SysTick->CTRL  = SysTick_CTRL_CLKSOURCE_Msk |
                   SysTick_CTRL_TICKINT_Msk   |
                   SysTick_CTRL_ENABLE_Msk;                         /* Enable SysTick IRQ and SysTick Timer */
  return (0UL);                                                     /* Function successful */
}

reset the priority to the lowest? Perhaps because I am mixing HAL and CMSIS? As I said, forcing the priority after the fact seems to work. Just wondering what the 'proper' approach is.

Thanks Again

AvaTar
Lead

> What I don't understand is why the Cube does what it does.

You are not alone here ... 😉

I would suggest adding a GPIO toggle to the SysTick interrupt and measuring the system load (relative time spent in the interrupt).
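
A rough sketch of that measurement, assuming a spare pin (PA5 here, purely as an example) already configured as an output; a Cube-generated handler may call more than just HAL_IncTick():

void SysTick_Handler(void)
{
  HAL_GPIO_WritePin(GPIOA, GPIO_PIN_5, GPIO_PIN_SET);   /* pin high on entry */
  HAL_IncTick();                                        /* the actual tick work */
  HAL_GPIO_WritePin(GPIOA, GPIO_PIN_5, GPIO_PIN_RESET); /* pin low on exit */
}

The duty cycle of that pin on a scope is the fraction of CPU time spent in the tick interrupt.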

The proper approach might be to NOT interrupt at 1 MHz. If you need a timebase, have a 32-bit TIM counter clocked at 1 MHz, or use DWT_CYCCNT to measure time down to CPU-clock granularity. For toggling an LED or other pulse/signal generation tasks, use the TIM hardware functionality. For more complex stuff, use TIM+DMA to drive a pattern buffer out to GPIO pins.
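
For the DWT_CYCCNT route, a minimal sketch using only the standard CMSIS symbols (Cortex-M3/M4/M7; the cycle counter does not exist on Cortex-M0 parts):

#include "stm32f4xx.h"   /* use the device header for your part */

/* Enable the free-running CPU cycle counter once at startup. */
void CycleCounter_Init(void)
{
  CoreDebug->DEMCR |= CoreDebug_DEMCR_TRCENA_Msk;  /* enable the trace block */
  DWT->CYCCNT       = 0U;
  DWT->CTRL        |= DWT_CTRL_CYCCNTENA_Msk;      /* start counting CPU cycles */
}

/* Elapsed microseconds since 'start' (unsigned arithmetic handles wrap). */
uint32_t Micros_Since(uint32_t start)
{
  return (DWT->CYCCNT - start) / (SystemCoreClock / 1000000U);
}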

For HAL, most of the code has an expectation that SysTick has the highest preemption level so the HAL_Delay() and timeouts function properly in other interrupts/callbacks.

HAL allows you to replace WEAKly defined functions with cleaner ones better suited to your own specific use case.
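
For example, a sketch of such an override for the 1 µs case, placed in your own source file so it replaces the __weak version; it keeps the requested priority instead of the lowest one forced by SysTick_Config(). Note that HAL_Delay() and the HAL timeouts would then effectively count microseconds unless HAL_IncTick()/HAL_GetTick() are adapted as well.

#include "main.h"   /* HAL headers, uwTickPrio, SystemCoreClock */

HAL_StatusTypeDef HAL_InitTick(uint32_t TickPriority)
{
  /* 1 us tick instead of the default 1 ms */
  if (SysTick_Config(SystemCoreClock / 1000000U) > 0U)
  {
    return HAL_ERROR;
  }

  /* SysTick_Config() just forced the lowest priority; restore the requested one */
  if (TickPriority < (1UL << __NVIC_PRIO_BITS))
  {
    HAL_NVIC_SetPriority(SysTick_IRQn, TickPriority, 0U);
    uwTickPrio = TickPriority;
    return HAL_OK;
  }
  return HAL_ERROR;
}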

Piranha
Chief II

By the way, CPU load can be measured even better by putting this code in the idle/sleep routine:

__disable_irq();    /* keep ISRs out of the measured window */
t1 = DWT->CYCCNT;   /* timestamp before sleeping */
__DSB();
__WFI();            /* sleep; wakes on a pending IRQ even with PRIMASK set */
t2 = DWT->CYCCNT;   /* timestamp after wake-up */
__enable_irq();     /* the pending ISR is taken here */
__ISB();

It also needs some additional variables and a basic calculation, but the basic idea is to measure and accumulate the cycles spent in sleep mode and compare them to the total cycles. That way the measured load will include all tasks, interrupts and also the cycles spent on IRQ mode switching.
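
To make that concrete, a minimal sketch of those extra variables and the calculation, assuming the DWT cycle counter is already enabled (names are illustrative, everything runs from the main loop):

#include "stm32f4xx.h"   /* device header: DWT, CMSIS intrinsics */

static volatile uint32_t sleep_cycles;   /* accumulated cycles spent in WFI */

/* Called from the main loop whenever there is nothing to do. */
void Idle_Sleep(void)
{
  uint32_t t1, t2;

  __disable_irq();
  t1 = DWT->CYCCNT;
  __DSB();
  __WFI();                 /* the measured sleep window, as in the snippet above */
  t2 = DWT->CYCCNT;
  __enable_irq();
  __ISB();

  sleep_cycles += t2 - t1;
}

/* Called once per measurement window, e.g. every SystemCoreClock cycles (1 s). */
uint32_t CpuLoad_Percent(uint32_t window_cycles)
{
  uint32_t idle = sleep_cycles;
  sleep_cycles = 0U;
  if (idle >= window_cycles)
  {
    return 0U;                            /* fully idle (or measurement jitter) */
  }
  return (uint32_t)((100ULL * (window_cycles - idle)) / window_cycles);
}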

Can this be done at the end of the main loop for bare-metal code? I am also interested in measuring the CPU load of main and of the interrupts separately. Currently I am thinking about using separate counters to hold the DWT counts. If you have better ideas, please do suggest them.

gbm
Lead III

(deleted)

Without an RTOS, the CPU load measurement and sleep mode should be implemented like this:

https://community.st.com/t5/stm32-mcus-products/compute-cpu-load-for-a-stm32l0-a-gpio-and-a-voltmeter/m-p/140470/highlight/true#M26851

And generally the inefficient and constrained superloop should be replaced by a cooperative task scheduler:

https://community.st.com/t5/stm32-mcus-products/best-practices-how-to-work-with-interrupts-and-low-power-modes/m-p/620693/highlight/true#M230366

 

Do you intend to measure the interrupt time for development purposes? If so, maybe it's better to take a look at SEGGER SystemView or something similar.

Thank you, I will check these out.