
STM32F4: Purpose of the usage of ATOMIC_SET_BIT ATOMIC_CLEAR_BIT macros in the low level drivers related to UART peripheral

Vladislav Yurov
Associate III

Hello, I'm using the LL drivers in my projects, and I found these differences when comparing the newest LL driver version (1.7.13) with my current one:

In stm32f4xx_ll_usart.h/.c, macros with the ATOMIC_ prefix are now used in several functions:

  • LL_USART_EnableDirectionRx is now using ATOMIC_SET_BIT macro instead of SET_BIT
  • LL_USART_DisableDirectionRx is now using ATOMIC_CLEAR_BIT macro instead of CLEAR_BIT
  • LL_USART_EnableDirectionTx is now using ATOMIC_SET_BIT macro instead of SET_BIT
  • LL_USART_DisableDirectionTx is now using ATOMIC_CLEAR_BIT macro instead of CLEAR_BIT
  • LL_USART_SetTransferDirection is now using ATOMIC_MODIFY_REG macro instead of MODIFY_REG
  • LL_USART_EnableIT_IDLE is now using ATOMIC_SET_BIT macro instead of SET_BIT
  • LL_USART_EnableIT_RXNE is now using ATOMIC_SET_BIT macro instead of SET_BIT
  • LL_USART_EnableIT_TC is now using ATOMIC_SET_BIT macro instead of SET_BIT
  • LL_USART_EnableIT_TXE is now using ATOMIC_SET_BIT macro instead of SET_BIT
  • LL_USART_EnableIT_PE is now using ATOMIC_SET_BIT macro instead of SET_BIT
  • LL_USART_EnableIT_ERROR is now using ATOMIC_SET_BIT macro instead of SET_BIT
  • LL_USART_EnableIT_CTS is now using ATOMIC_SET_BIT macro instead of SET_BIT
  • LL_USART_DisableIT_IDLE is now using ATOMIC_CLEAR_BIT macro instead of CLEAR_BIT
  • LL_USART_DisableIT_RXNE is now using ATOMIC_CLEAR_BIT macro instead of CLEAR_BIT
  • LL_USART_DisableIT_TC is now using ATOMIC_CLEAR_BIT macro instead of CLEAR_BIT
  • LL_USART_DisableIT_TXE is now using ATOMIC_CLEAR_BIT macro instead of CLEAR_BIT
  • LL_USART_DisableIT_PE is now using ATOMIC_CLEAR_BIT macro instead of CLEAR_BIT
  • LL_USART_DisableIT_ERROR is now using ATOMIC_CLEAR_BIT macro instead of CLEAR_BIT
  • LL_USART_DisableIT_CTS is now using ATOMIC_CLEAR_BIT macro instead of CLEAR_BIT
  • LL_USART_EnableDMAReq_RX is now using ATOMIC_SET_BIT macro instead of SET_BIT
  • LL_USART_DisableDMAReq_RX is now using ATOMIC_CLEAR_BIT macro instead of CLEAR_BIT
  • LL_USART_EnableDMAReq_TX is now using ATOMIC_SET_BIT macro instead of SET_BIT
  • LL_USART_DisableDMAReq_TX is now using ATOMIC_CLEAR_BIT macro instead of CLEAR_BIT

So the question is: for what reason are the macros with the ATOMIC_ prefix now used? Why only for the UART peripheral? And what may these changes affect?
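For reference, in the recent Cube packages these ATOMIC_ macros are built on the Cortex-M exclusive load/store intrinsics in stm32f4xx_hal_def.h, not on interrupt masking. A paraphrased sketch of the SET variant (the exact definition may differ between versions, check your own copy of the header):

#define ATOMIC_SET_BIT(REG, BIT)                              \
  do {                                                        \
    uint32_t val;                                             \
    do {                                                      \
      val = __LDREXW((__IO uint32_t *)&(REG)) | (BIT);        \
    } while ((__STREXW(val, (__IO uint32_t *)&(REG))) != 0U); \
  } while (0)

The read-modify-write is simply retried until the exclusive store (__STREXW) succeeds.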

Pavel A.
Evangelist III

> Documentation from ARM indicates "memory" access.

ARM's documentation says: "Load Exclusive and Store Exclusive operations must be performed only on Normal memory"

So, @Vladislav Yurov, you can open a bug.

Already opened, here

KnarfB
Principal III

Interesting. And when you used plain SET_BIT/CLEAR_BIT, the ASSERT bailed out?

Correct. I also varied ARR to ensure it interrupts in different spots.

The IRQ code is a little different from what I wrote above.

static volatile uint32_t timx_toggles = 0;  /* counts timer update interrupts */

void TIM1_UP_TIM10_IRQHandler() {
  // clear update flag
  TIM1->SR = ~TIM_SR_UIF;
  ++timx_toggles;
  // toggle bit
  if (timx_toggles % 2) {
    ASSERT(!(GPIOA->ODR & GPIO_PIN_0));
    ATOMIC_SET_BIT(GPIOA->ODR, GPIO_PIN_0);
    ASSERT(GPIOA->ODR & GPIO_PIN_0);
  } else {
    ASSERT(GPIOA->ODR & GPIO_PIN_0);
    ATOMIC_CLEAR_BIT(GPIOA->ODR, GPIO_PIN_0);
    ASSERT(!(GPIOA->ODR & GPIO_PIN_0));
  }
}
 
 
void VerifyStrEx() {
  __HAL_RCC_GPIOA_CLK_ENABLE();
  GPIOA->ODR = 0;
 
  __HAL_RCC_TIM1_CLK_ENABLE();
  TIM1->ARR = 500;
  TIM1->DIER |= TIM_DIER_UIE;
  TIM1->CR1 |= TIM_CR1_CEN;
  NVIC_EnableIRQ(TIM1_UP_TIM10_IRQn);
 
  while (1) {
    ASSERT(!(GPIOA->ODR & GPIO_PIN_1));
    SET_BIT(GPIOA->ODR, GPIO_PIN_1);
    ASSERT(GPIOA->ODR & GPIO_PIN_1);
    CLEAR_BIT(GPIOA->ODR, GPIO_PIN_1);
    --TIM1->ARR;
    if (TIM1->ARR < 100) {
      TIM1->ARR = 500;
    }
  }
}
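To spell out what goes wrong in the plain SET_BIT/CLEAR_BIT case: the main-loop update of PIN_1 in ODR is a separate load, OR and store, so an interrupt landing inside that window has its own ODR change overwritten. Roughly (illustrative sketch, not the actual generated code):

  uint32_t odr = GPIOA->ODR;   // load: PIN_0 is read as 0 here
                               // <-- TIM1 update IRQ preempts and sets PIN_0
  odr |= GPIO_PIN_1;           // modify the stale copy (PIN_0 still 0 in it)
  GPIOA->ODR = odr;            // store: writes PIN_0 back to 0, the IRQ's
                               //        toggle is lost and its next ASSERT fails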

KnarfB
Principal III

Toggling makes sense. From the main loop's point of view, the IRQ handler acts atomically anyway.

Thanks

KnarfB

Thanks.

JW

Harvey White
Senior III

If this is now a multi-instruction read/modify/write sequence, then an interrupt can happen right in the middle of it. When using an RTOS, this can lead to very unwanted behavior. Making it atomic turns off interrupts, so that the read/modify/write sequence that seems to be here cannot be split by an undesired context switch. Possibly it has to do with Azure RTOS, and perhaps with dual-core processors?

It very clearly does not turn off interrupts. Nor is there any indication that this is intended for dual core applications.
Harvey White
Senior III

Then this is not the meaning of "ATOMIC" that I am familiar with. I'm used to something like

"ATOMIC (option) 
(
Code with interrupts turned off
}
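For comparison, that interrupts-off style of ATOMIC would look roughly like this on a Cortex-M using CMSIS intrinsics (a generic sketch, not what the Cube ATOMIC_ macros actually do):

  uint32_t primask = __get_PRIMASK();   // remember whether interrupts were masked
  __disable_irq();                      // enter critical section
  SET_BIT(GPIOA->ODR, GPIO_PIN_0);      // plain read-modify-write, now uninterruptible
  __set_PRIMASK(primask);               // restore the previous state on exit

The Cube macros instead retry with LDREX/STREX and never touch PRIMASK.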
 
 
 

S.Ma
Principal

I agree with you, Harvey. If the reference code must be compatible with all projects, it will of course have to give up some performance for the sake of a shorter debug experience for non-expert embedded coders. In the end, if the code you develop goes into a resale product, the coder owns the whole code anyway, presumably personalized and optimised once the spec is known. Maybe it is bare metal, so the atomic handling can be removed. Maybe the code runs in non-privileged mode and that is fine for the coder.

The key is to start with safe code for new coders. Otherwise the atomicity has to be handled by the coder, and to me that is pushing know-how and a challenge upward, and increasing the jitter as the WCET grows...

No, it is not. Turning interrupts off has a global impact on the core (timing): one task/thread/IRQ takes ownership of the core and the others starve. In contrast, ldrex + strex only affect the current task/thread/IRQ: the strex might fail. Often spin-lock loops are built around that, so if a race condition occurred, the task/thread/IRQ repeats the request until it succeeds. This sounds fair, at least if there are many tasks/threads/IRQs and the chance of a collision is low.
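As an illustration of such a spin lock built around a failing strex (a generic sketch with CMSIS intrinsics, operating on an ordinary RAM flag in Normal memory):

static volatile uint32_t lock_flag = 0;   // 0 = free, 1 = taken

static void spin_lock(volatile uint32_t *l)
{
  uint32_t failed;
  do {
    while (__LDREXW(l) != 0U) { }   // wait until the lock looks free
    failed = __STREXW(1U, l);       // try to claim it; fails if interfered with
  } while (failed != 0U);           // on failure, simply repeat the request
  __DMB();                          // barrier: later accesses see the lock held
}

static void spin_unlock(volatile uint32_t *l)
{
  __DMB();                          // barrier: finish protected accesses first
  *l = 0U;                          // plain store releases the lock
}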

hth

KnarfB