
ATOMIC_SET_BIT() stopping code execution

DavidNaviaux
Senior II

Hi,

I am using the STM32G474 MCU and ran into a problem with my code stopping execution.

In my application, I have a priority-0 (highest priority) interrupt that takes 1.34us to execute and occurs once every 6.66us. In the final code, I will need 3 of these running, one every 2.2us (still leaving about 37% of the MCU bandwidth for other tasks).

When I have 2 of these interrupts every 6.66us, all seems to work fine, but when I add a third, my code stops executing.  After a lot of debugging, I have found that my code is stuck at the following line of code:

 

      ATOMIC_SET_BIT(huart->Instance->CR3, USART_CR3_TXFTIE);

in the HAL_UART_Transmit_IT() function. Normally, when writing low-level interrupt handlers for serial I/O, if I need to change the UART interrupt flags I would disable interrupts, make the change, then enable interrupts again. Done that way, it is very clearly safe on a single-core MCU like the STM32G474. However, when I examined the ST driver code for this function, I found it uses the ATOMIC_SET_BIT() macro, which I am not familiar with.

I have tried switching to DMA serial I/O, but it had the same problem: it worked with 2 of the 1.34us interrupts every 6.66us but not with 3.

I'd like to avoid using this macro for modifying the UART interrupt flags, but it is in the driver file supplied by ST, and my modifications are lost the next time I compile (the file gets regenerated).

Could someone explain what is going on here, and even better, how I can solve my problem?

 

5 REPLIES
TDK
Guru

ATOMIC_SET_BIT uses the LDREX/STREX instructions to set bits atomically. If an interrupt is taken between those two instructions, the STREX fails and the macro has to go back and do the LDREX again. Usually not a problem, but when interrupts are arriving nearly back-to-back, the retry loop may never complete.
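
For reference, the macro is essentially the following retry loop (paraphrased from the HAL headers; the exact form varies by HAL version):

    /* Paraphrase of the HAL definition: read-modify-write with
       exclusive access, retried until the store succeeds.
       __LDREXW/__STREXW are the CMSIS intrinsics for LDREX/STREX. */
    #define ATOMIC_SET_BIT(REG, BIT)                               \
      do {                                                         \
        uint32_t val;                                              \
        do {                                                       \
          val = __LDREXW((__IO uint32_t *)&(REG)) | (BIT);         \
        } while ((__STREXW(val, (__IO uint32_t *)&(REG))) != 0U);  \
      } while (0)

Exception entry and return clear the exclusive monitor, so any interrupt taken between the LDREX and the STREX makes the STREX fail and forces another pass through the inner loop.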

> how I can solve my problem?

Few options:

  • Don't use HAL.
  • Redefine ATOMIC_SET_BIT in some user code that gets compiled after it's defined but before it's used (see the sketch below). Not sure if this is possible.
  • Interrupt less often.
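
Or, since the G474 is single-core, replace the macro call with a plain critical section where you control the code. A minimal sketch (the helper name is made up; note it briefly masks all interrupts, so it adds a few cycles of latency to your priority-0 ISR):

    #include "stm32g4xx.h"

    /* Hypothetical replacement for the ATOMIC_SET_BIT() call: a
       PRIMASK-based critical section around an ordinary
       read-modify-write, which is safe on a single-core MCU. */
    static inline void uart_enable_txftie(USART_TypeDef *uart)
    {
        uint32_t primask = __get_PRIMASK();   /* remember current mask state */
        __disable_irq();                      /* PRIMASK = 1: no interrupts */
        SET_BIT(uart->CR3, USART_CR3_TXFTIE); /* cannot be interrupted now */
        __set_PRIMASK(primask);               /* restore previous state */
    }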
If you feel a post has answered your question, please click "Accept as Solution".
gbm
Lead III

You may solve the problem by not using HAL for the UART. With an interrupt frequency over 100 kHz, the UART HAL would consume too much of your precious processor time.

My STM32 stuff on github - compact USB device stack and more: https://github.com/gbm-ii/gbmUSBdevice

Thank you. I understand that I can write my own UART functions and will do that. However, it also stops USB communications. The 150kHz interrupts regulate the current out of 3 synchronous buck converters, whose PWM duty cycle should be updated every PWM cycle at 150kHz (a 6.67us period). I can reduce the PWM frequency, but that is not ideal.

Do you have any suggestions for what to do about USB? I examined the middleware source code but wasn't able to find anything that looks like it would stop USB.

 

USB should work as long as the interrupt is called regularly. I'm not sure what would be holding it up. Maybe toggle a pin within the USB IRQ handler and monitor it on a scope to ensure there are no gaps of more than 1 ms or so? You could also use a USB analyzer to see whether incoming packets are arriving and are correct.
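
Something like this, assuming the usual CubeMX names (USB_LP_IRQHandler and hpcd_USB_FS; adjust to your project):

    #include "stm32g4xx_hal.h"

    extern PCD_HandleTypeDef hpcd_USB_FS;  /* CubeMX-generated USB handle */

    /* Debug aid: toggle PA0 (configured as a GPIO output) on every USB
       interrupt and watch it on a scope; gaps longer than ~1 ms suggest
       the IRQ is being starved by the higher-priority interrupts. */
    void USB_LP_IRQHandler(void)
    {
        GPIOA->ODR ^= GPIO_ODR_OD0;
        HAL_PCD_IRQHandler(&hpcd_USB_FS);
    }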

 

Another solution could be to combine the three IRQs. The CCR registers are preloaded, so depending on what you're doing, you might be able to achieve the same scheme. I'm guessing you ruled out non-CPU approaches, which would of course be the best solution.

If you feel a post has answered your question, please click "Accept as Solution".

It is critical that when the ADC for a particular driver is sampled, an interrupt occurs as soon as possible to perform the floating-point math that calculates the duty cycle for the next PWM pulse. The math takes a maximum of 1.4us to execute. It is important that the 3 interrupts do not overlap. One of the reasons for designing with this MCU is that the control-loop compensation is handled entirely in firmware, so it is easy to adjust the PWM frequency and the desired output, and to switch between voltage and current regulation as required.

I have found that it all works perfectly when I drop the PWM frequency down to 100kHz. These three power supplies are actually regulating the current in some high-power LEDs at a programmed level (up to 15A).

The problem is that I had never controlled 3 separate DSMPSs from this MCU at the same time, and I had originally designed the 3 power supplies for 200kHz PWM, not realizing the limitations, so the inductors and output capacitors are much smaller than they should be for 100kHz operation. However, I can make the design changes and revise the circuit board, knowing that it will work for now, just with significantly more current ripple in the LEDs.

Inspired by your suggestions, I can do the following:

  1. Write a low-level UART driver that avoids any use of HAL functions (a rough sketch follows this list). It will take me a couple of days to write and test the driver.
  2. Set the PWM frequency as high as it can go while still allowing the USB and UART to function.
  3. Test the current PCB to see if the LED ripple current is acceptable.
  4. If not acceptable, I can redesign the power supplies with new inductors and output capacitance to provide the required filtering.
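
For reference, the driver I have in mind is shaped roughly like this (just a sketch: USART1, the buffer size, and the G4 FIFO bit names are my assumptions, and clock/baud/pin setup is omitted):

    #include "stm32g4xx.h"

    /* Interrupt-driven TX path for USART1 with a simple ring buffer. */
    #define TX_BUF_SIZE 256u
    static volatile uint8_t  tx_buf[TX_BUF_SIZE];
    static volatile uint32_t tx_head, tx_tail;

    void uart_putc(uint8_t c)
    {
        uint32_t next = (tx_head + 1u) % TX_BUF_SIZE;
        while (next == tx_tail) { }             /* wait for buffer space */
        tx_buf[tx_head] = c;
        tx_head = next;
        /* Enable the TX-FIFO-not-full interrupt; protect the CR1
           read-modify-write from the ISR with a short critical section
           (the same race ATOMIC_SET_BIT was guarding against). */
        uint32_t primask = __get_PRIMASK();
        __disable_irq();
        USART1->CR1 |= USART_CR1_TXEIE_TXFNFIE;
        __set_PRIMASK(primask);
    }

    void USART1_IRQHandler(void)
    {
        if (USART1->ISR & USART_ISR_TXE_TXFNF) {  /* room in the TX FIFO */
            if (tx_tail != tx_head) {
                USART1->TDR = tx_buf[tx_tail];
                tx_tail = (tx_tail + 1u) % TX_BUF_SIZE;
            } else {
                USART1->CR1 &= ~USART_CR1_TXEIE_TXFNFIE; /* all sent */
            }
        }
    }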

I should point out that proper DSMPS control requires that these interrupts have the highest priority (0). I have found in previous testing that adjusting the various interrupt priority settings changes the stability of the current regulators and of the USB operation.
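
For example, the priority layout would be something like this (the IRQ choices here are illustrative, not necessarily the ones in my project; my control interrupts come from the ADCs):

    #include "stm32g4xx_hal.h"

    /* Illustrative priority layout: converter control loops pre-empt
       everything; USB and UART run underneath. */
    static void set_irq_priorities(void)
    {
        HAL_NVIC_SetPriority(ADC1_2_IRQn, 0, 0); /* DSMPS control, highest */
        HAL_NVIC_SetPriority(ADC3_IRQn,   0, 0);
        HAL_NVIC_SetPriority(USB_LP_IRQn, 2, 0); /* USB serviced regularly */
        HAL_NVIC_SetPriority(USART1_IRQn, 3, 0); /* serial I/O, lowest */
    }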