
STM32H7 64MHz HSI is off by 4%!

TDK
Guru

According to the STM32H745xI/G datasheet, the 64MHz HSI is accurate to +/- 0.3 MHz around room temperature, so +/- 0.5%.

During testing, I noticed about 10-20% of my UART characters were getting dropped. I then looked at the signal on a scope and noticed the frequency is off.

After redirecting the system clock to MCO2 (PC9) and measuring on a scope, I discovered the problem:

The HSI RC (64 MHz) on the STM32H745 chip is off by 4%!! So much for the "factory calibration".

Am I missing something here?

Clock initialization (480MHz):

 /** Initializes the CPU, AHB and APB busses clocks 
  */
 RCC_OscInitStruct.OscillatorType = RCC_OSCILLATORTYPE_HSI;
 RCC_OscInitStruct.HSIState = RCC_HSI_DIV1;
 RCC_OscInitStruct.HSICalibrationValue = RCC_HSICALIBRATION_DEFAULT;
 RCC_OscInitStruct.PLL.PLLState = RCC_PLL_ON;
 RCC_OscInitStruct.PLL.PLLSource = RCC_PLLSOURCE_HSI;
 RCC_OscInitStruct.PLL.PLLM = 32;
 RCC_OscInitStruct.PLL.PLLN = 480;
 RCC_OscInitStruct.PLL.PLLP = 2;
 RCC_OscInitStruct.PLL.PLLQ = 2;
 RCC_OscInitStruct.PLL.PLLR = 2;
 RCC_OscInitStruct.PLL.PLLRGE = RCC_PLL1VCIRANGE_1;
 RCC_OscInitStruct.PLL.PLLVCOSEL = RCC_PLL1VCOWIDE;
 RCC_OscInitStruct.PLL.PLLFRACN = 0;
 if (HAL_RCC_OscConfig(&RCC_OscInitStruct) != HAL_OK) {
  Error_Handler();
 }
 /** Initializes the CPU, AHB and APB busses clocks 
  */
 RCC_ClkInitStruct.ClockType = RCC_CLOCKTYPE_HCLK | RCC_CLOCKTYPE_SYSCLK
   | RCC_CLOCKTYPE_PCLK1 | RCC_CLOCKTYPE_PCLK2 | RCC_CLOCKTYPE_D3PCLK1
   | RCC_CLOCKTYPE_D1PCLK1;
 RCC_ClkInitStruct.SYSCLKSource = RCC_SYSCLKSOURCE_PLLCLK;
 RCC_ClkInitStruct.SYSCLKDivider = RCC_SYSCLK_DIV1;
 RCC_ClkInitStruct.AHBCLKDivider = RCC_HCLK_DIV2;
 RCC_ClkInitStruct.APB3CLKDivider = RCC_APB3_DIV2;
 RCC_ClkInitStruct.APB1CLKDivider = RCC_APB1_DIV2;
 RCC_ClkInitStruct.APB2CLKDivider = RCC_APB2_DIV2;
 RCC_ClkInitStruct.APB4CLKDivider = RCC_APB4_DIV2;
 
 if (HAL_RCC_ClockConfig(&RCC_ClkInitStruct, FLASH_LATENCY_4) != HAL_OK) {
  Error_Handler();
 }

MCO initialization:

HAL_RCC_MCOConfig(RCC_MCO2, RCC_MCO2SOURCE_SYSCLK, RCC_MCODIV_10);

Measured frequency is 45.99 MHz, which is -4.2% off from what it should be: 48 MHz.


If I change to 400MHz, the result is the same. -4.2% from what it should be.

If you feel a post has answered your question, please click "Accept as Solution".
18 REPLIES

In the last post of the related thread linked above, https://community.st.com/s/question/0D50X0000B41tlASQQ/stm32h743-hsi-frequency-waaaayyy-off, it's reported that CubeMX generates the calibration-value-changing code (also with an incorrect parameter) for the 'L452, so this part of the problem (that CubeMX generates that call at all) may have a wider scope than just the 'H743.

I don't have CubeMX, but maybe this is related to some tickbox being ticked inadvertently?

JW

TDK
Guru

There's a bit of misunderstanding/misinformation going on here. The hard-coded calibration value cannot be overwritten. What __HAL_RCC_HSI_CALIBRATIONVALUE_ADJUST and similar macros do is adjust HSITRIM, which indirectly adjusts HSICAL. The original HSICAL can always be restored by writing the default on-reset value of HSITRIM, which, unfortunately, differs depending on chip rev.

ST's current code has the following:

#if defined(RCC_HSICFGR_HSITRIM_6)
#define RCC_HSICALIBRATION_DEFAULT     (0x40U)         /* Default HSI calibration trimming value, for STM32H7 rev.V and above  */
#else
#define RCC_HSICALIBRATION_DEFAULT     (0x20U)         /* Default HSI calibration trimming value, for STM32H7 rev.Y */
#endif

but since RCC_HSICFGR_HSITRIM_6 is always defined, it always evaluates to 0x40. It's not like there are different include files for different chip revisions.

The solution proposed by @DWest.1​ works and is similar to what I did, as long as you call it after HAL_RCC_OscConfig. IMO, HAL_RCC_OscConfig shouldn't be touching HSITRIM at all.

CubeMX lists the calibration value it is using, so you can either adjust it there, if you know your chip rev ahead of time, or adjust it in code. Because the value is listed and there is no option for "default" or "do not change", it can't be selected per chip revision. Consistent with the code, but not super helpful.

Presumably, as the old chip revision becomes less common, this will be less of an issue.
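One way to sidestep the compile-time #if is to pick the trim default at runtime from the silicon revision. The helper below is a sketch only: the revision IDs (0x1003 for rev Y, 0x2003 for rev V) are my reading of ST's revision-ID scheme and should be verified against the reference manual and errata before use.

```c
#include <stdint.h>

/* Pick the on-reset HSITRIM midpoint for a given silicon revision.
 * Rev Y has a 6-bit trim field (midpoint 0x20); rev V and later have
 * a 7-bit field (midpoint 0x40). The 0x1003/0x2003 REV_ID values are
 * assumptions to check against the device documentation. */
static uint32_t default_hsitrim_for_rev(uint32_t rev_id)
{
    return (rev_id >= 0x2003U) ? 0x40U : 0x20U;
}
```

On target, something along the lines of `__HAL_RCC_HSI_CALIBRATIONVALUE_ADJUST(default_hsitrim_for_rev(HAL_GetREVID()));`, called after HAL_RCC_OscConfig, would restore the factory calibration regardless of which revision is fitted.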

If you feel a post has answered your question, please click "Accept as Solution".

but since RCC_HSICFGR_HSITRIM_6 is always defined, it always evaluates to 0x40. It's not like there are different include files for different chip revisions.

The solution proposed by @DWest.1 (Community Member)​ works and is similar to what I did, as long as you call it after HAL_RCC_OscConfig. IMO, HAL_RCC_OscConfig shouldn't be touching HSITRIM at all.

==> This is reported again to our development team.

To give better visibility on the answered topics, please click on Accept as Solution on the reply which solved your issue or answered your question.

Thanks, TDK, for the thorough analysis and explanation.

CA..1
Associate II

I have the same problem on H73x, using CubeMX v6.3.0, and MCU Package 1.9.0.

Cube sets the HSI calibration value to 32 by default, with a maximum of 63, when it should be 64 with a maximum of 127. This miscalibrates the HSI clock, which affects every peripheral.

It's fixed in recent CubeMX versions for the H7 family. Ensure the correct hardware revision on your chip is selected in CubeMX.

https://github.com/STMicroelectronics/STM32CubeH7/blob/master/Drivers/STM32H7xx_HAL_Driver/Inc/stm32h7xx_hal_rcc.h#L7172

If you feel a post has answered your question, please click "Accept as Solution".

Hi again,

It sounds like the stm32h7xx_hal_rcc.h solution checked in for STM32Cube_FW_H7_V1.8.0 expects the developer to know the chip revision prior to compile time?

That sounds awful for a product's lifecycle. I don't expect the calibration default value to change with new chip revisions, but I am concerned that production will get a batch of chips with an unknown revision, load the approved binary, and experience timing failures.

I agree that it is not an ideal solution, but that is the method ST chose to use.

As posted in the thread, there are workarounds you can use to reset the chip back to default. Treat early hardware revisions as special and assume future revisions, if any, will have the same value as the latest revision.

If you feel a post has answered your question, please click "Accept as Solution".
CA..1
Associate II

Is this fixed also for LL libraries? We do not use HAL with our projects.

We encountered this problem while trying to use timer capture to measure an external frequency, and found a 6% error. We then tried to calibrate the HSI dynamically by measuring it against an external quartz (the LSE used for the RTC), and found that the default HSI calibration value is half the one found by auto-calibration; only then did we notice the different calibration range in CubeMX. We still need to test whether the HSI's deviation with temperature is worth calibrating dynamically, in which case our auto-calibration will fix the problem produced by CubeMX; otherwise we will need to overwrite the value generated by Cube in the user code section.