STM32H7A3: Trying to change CPU clock dynamically without affecting peripherals

majianjia
Associate II

Hi all,

It is embarrassing when the browser crashes while you are writing a long post...

Here is the original thought:

I am trying to implement some low-power operation in my MCU, but I am not looking for extreme standby modes. I first tried stopping the CPU, but that doesn't save much power because the AXI bus is still running at high frequency. So now I am trying a dynamic CPU clock (possibly together with the AXI memory clock).

From the clock tree, I can see there are two registers that control the clock to the CPU and to most of the peripherals. My goal is to change the CPU clock without affecting the peripheral clocks.

I am using an RTOS, so I can easily tell when it enters the idle thread (reduced CPU clock) or a user thread (high CPU clock). I implemented the code below to switch between the high and low clocks. These functions are called by the RTOS scheduler: when it switches to the idle thread it calls cpu_low_power(), and when it switches to a user thread it calls cpu_high_performance().

/* Full-speed gear: CPU prescaler /1 (200 MHz), AHB prescaler /8 so
   HCLK ends up back at 25 MHz and, in steady state, the peripheral
   clocks are unchanged. SysTick moves to the divided source so its
   tick rate stays the same in both gears. */
static inline void cpu_high_performance(void)
{
    LL_RCC_SetSysPrescaler(LL_RCC_SYSCLK_DIV_1);
    LL_RCC_SetAHBPrescaler(LL_RCC_AHB_DIV_8);
    HAL_SYSTICK_CLKSourceConfig(SYSTICK_CLKSOURCE_HCLK_DIV8);
}
 
/* Low-power gear: AHB prescaler back to /1, then CPU prescaler /8,
   leaving both the CPU and HCLK at 25 MHz. SysTick returns to the
   undivided source, again keeping its tick rate unchanged. */
static inline void cpu_low_power(void)
{
    LL_RCC_SetAHBPrescaler(LL_RCC_AHB_DIV_1);
    LL_RCC_SetSysPrescaler(LL_RCC_SYSCLK_DIV_8);
    HAL_SYSTICK_CLKSourceConfig(SYSTICK_CLKSOURCE_HCLK);
}
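
The post doesn't say which RTOS is used, so the wiring below is only a sketch: on_context_switch and its flag are hypothetical names, standing in for whatever switch hook or trace macro the RTOS actually provides.

/* Hypothetical context-switch hook: called by the scheduler with a
 * flag saying whether the task being switched in is the idle task. */
void on_context_switch(int switching_to_idle)
{
    if (switching_to_idle)
        cpu_low_power();          /* idle thread: drop the CPU to 25 MHz */
    else
        cpu_high_performance();   /* user thread: back to 200 MHz */
}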

After a few trials, the MCU is now successfully running at 200 MHz / 25 MHz.

But I still have many concerns about this method.

  1. Is this register designed to be changed so frequently? The RTOS can switch thousands of times per second.
  2. There is a gap between the two register writes (one has changed while the other hasn't), which introduces a spike in the peripheral clock. I am not too worried about SPI/I2C etc., but UART might be affected.

I did meet some problems during the trials. One was the OCTOSPI kernel clock being too high (200 MHz) while the CPU was at only 25 MHz, which made the whole system stop working (it could not even enter the hard fault handler or accept a firmware download). Another is that interrupt handlers sometimes run at the low 25 MHz clock, but that seems OK; if it turns out not to be, I can also perform the clock change inside the interrupt.

The power saving is quite significant: with the dynamic clock I save 50% of the power consumption compared to running fixed at 200 MHz, while my system load is only 20%.

My questions are: is this method correct, and what other possible issues are there?

Thanks.

6 REPLIES
TDK
Guru

You can change the system clock dynamically as many times as you want and as often as you want. It can take time for the PLL to be ready, but generally this is negligible.

However, if you are clocking the peripherals off of this, there is no way to get around the issue of the clock speed changing. For SPI/I2C this is likely not an issue if you use a slow enough speed; UART, however, is going to have problems.

You can disable UART before the change and re-initialize it after, but of course you will miss characters if they are being sent.
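
A minimal sketch of that sequence, assuming a HAL UART handle named huart1 and the gear functions from the original post:

extern UART_HandleTypeDef huart1;   /* assumed handle name */
 
/* Stop the UART, change the clock gear, then re-init so that
 * HAL_UART_Init() recomputes the baud divider from the clock as it
 * now stands. Characters arriving in this window are lost. */
static void switch_gear_with_uart(void (*set_gear)(void))
{
    HAL_UART_DeInit(&huart1);
    set_gear();                 /* cpu_low_power() or cpu_high_performance() */
    HAL_UART_Init(&huart1);
}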

On this chip, many peripherals have multiple clock sources. You could select a kernel clock source which does not change when you do the CPU clock adjustment, eliminating this issue. For example, use PLL3Q to drive USART1 independently of PLL1P, which drives both the CPU and PCLK1.
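
A sketch of that selection with the HAL; the constant names below come from the H743 headers and the grouping differs slightly on the H7A3 (USART16 vs. USART16910), so check your device's stm32h7xx_hal_rcc_ex.h. HSI is used here only to keep the example self-contained; PLL3Q works the same way via the PLL3 source constant once PLL3 is configured.

/* Route USART1's kernel clock to a fixed source (HSI, 64 MHz) so it
 * no longer follows the CPU/AHB prescaler changes. */
static void usart1_use_fixed_kernel_clock(void)
{
    RCC_PeriphCLKInitTypeDef pclk = {0};
    pclk.PeriphClockSelection  = RCC_PERIPHCLK_USART16;
    pclk.Usart16ClockSelection = RCC_USART16CLKSOURCE_HSI;
    if (HAL_RCCEx_PeriphCLKConfig(&pclk) != HAL_OK)
    {
        Error_Handler();   /* Cube-style error hook, assumed to exist */
    }
}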


If you feel a post has answered your question, please click "Accept as Solution".
S.Ma
Principal

On the STM32F437, which is simpler, peripherals have only one clock source. I think you can create a dynamic clock gear box between the two prescalers as long as the total division seen by the peripherals remains the same. If SysTick is an issue, use a timer instead.

Some reference manuals explain the sequence for moving the clock speed up and down, such as taking care of flash wait states. Ideally all the prescalers would sit in the same 32-bit register so they could be changed at once. I feel this dynamic clock-speed gear box deserves a HAL function... don't you think?
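
On the H7A3 the CPU prescaler (CDCPRE) and the AHB prescaler (HPRE) do in fact live in the same register, RCC CDCFGR1 (RM0455), so something like the sketch below could collapse the two LL calls into one store. This is an unvalidated illustration; it assumes the LL divider constants are already positioned in their CDCFGR1 fields, which is how stm32h7xx_ll_rcc.h lays them out for this family.

/* Change the CPU (CDCPRE) and AHB (HPRE) prescalers with a single
 * 32-bit write, so both dividers switch together and the transient
 * spike between two separate writes disappears. */
static inline void set_prescalers_atomically(uint32_t cdcpre, uint32_t hpre)
{
    uint32_t cfgr = RCC->CDCFGR1;
    cfgr &= ~(RCC_CDCFGR1_CDCPRE | RCC_CDCFGR1_HPRE);
    cfgr |= cdcpre | hpre;
    RCC->CDCFGR1 = cfgr;   /* one store: both fields change at once */
}
 
/* e.g. the low-power gear:
   set_prescalers_atomically(LL_RCC_SYSCLK_DIV_8, LL_RCC_AHB_DIV_1); */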

PMath.4
Senior III

"My problems are, whether this method is correct? What are the other possible issues?"

Your mechanism is clever. The one thing you need to cater for is any I/O happening in the background on the various peripherals (e.g. UART output under DMA or interrupt control). In that case I think you should wait for completion before doing the clock switch, since you can't change both prescalers in a single atomic statement.
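
A sketch of that guard, assuming a HAL handle (huart1 here is just a stand-in for whatever runs under DMA or interrupts):

extern UART_HandleTypeDef huart1;   /* assumed background-I/O handle */
 
/* Only drop the clock once the background transfer has drained. */
static void try_enter_low_power(void)
{
    if (HAL_UART_GetState(&huart1) == HAL_UART_STATE_READY)
        cpu_low_power();
    /* else stay at full speed and retry on the next idle entry */
}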

S.Ma
Principal

Briefly stalling DMA during the change should be fine; DMA transfers can already be delayed by higher-priority requests/channels. That said, a dedicated smart HAL function encoding the reference-manual sequence would bring clock scaling to more people with less support effort...

majianjia
Associate II

I can't believe I really missed that part, as a 10-year ST developer...

Thank you so much for reminding me of those independent clock sources.

I thought all APB/AHB peripherals could only use their bus clock, but I was wrong.

I think that is the solution I need, apart from the regular timers, which don't have a kernel clock selection. The LP timers do, though.
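
For instance (a sketch assuming the H7 HAL; LPTIM1 and the LSE source are just illustrative choices):

/* Run LPTIM1 from LSE (32.768 kHz) so its rate ignores the CPU/AHB
 * gear; the regular TIMx timers offer no such kernel clock choice. */
static void lptim1_use_lse(void)
{
    RCC_PeriphCLKInitTypeDef pclk = {0};
    pclk.PeriphClockSelection = RCC_PERIPHCLK_LPTIM1;
    pclk.Lptim1ClockSelection = RCC_LPTIM1CLKSOURCE_LSE;
    if (HAL_RCCEx_PeriphCLKConfig(&pclk) != HAL_OK)
    {
        Error_Handler();
    }
}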

Also, I don't know what would happen if a peripheral's kernel clock were higher than its bus clock.

This code is called with interrupts disabled, so atomicity at the code level should be OK. It's the hardware side I'm not sure about; as you mentioned, DMA accessing the hardware could be an issue.
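
(Roughly like this, using standard CMSIS intrinsics; the RTOS's own critical-section API would do the same job:)

/* Run a gear change with interrupts masked so no ISR can fire inside
 * the window between the two prescaler writes. */
static void switch_gear_atomically(void (*set_gear)(void))
{
    uint32_t primask = __get_PRIMASK();   /* remember the current mask */
    __disable_irq();
    set_gear();                           /* cpu_low_power() / cpu_high_performance() */
    __set_PRIMASK(primask);               /* restore the previous state */
}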

It has been running for 7 days with the dynamic clocks, receiving GPS data and storing it to the SD card. It looks fine so far.