I've got a really bizarre problem with the STM32F746ZG. I can reproduce it on several Nucleo-144 boards and on a custom PCB with the same part (and same package). Wondering if anyone else has encountered this before.
- System Workbench toolchain - version 220.127.116.11701261202
- Windows 7
- STM32CubeMX V4.20.0
The problem is this: at (seemingly) random times during MCU initialization, the UART clock source _changes_ between being set in SystemClock_Config() and the call to MX_USARTx_UART_Init(). When the value changes, it always ends up set to the LSE clock - which I'm not even using in this design, and which is disabled in CubeMX.
I've dug deep into the workings of the HAL drivers trying to track this down. The only other peripheral in my system that needs DCKCFGR2 for configuration is I2C4, which gets configured in the same call to HAL_RCCEx_PeriphCLKConfig() in SystemClock_Config(). I have tried disabling USART1 entirely and using USART3 instead; it fails in exactly the same way. The symptom is that when MX_USARTx_UART_Init() is called, execution gets stuck in UART_CheckIdleState indefinitely. My first real lead came from noticing that UART_GETCLOCKSOURCE() in UART_SetConfig() did not return the clock source I had set.
What causes the problem to flare up is still a mystery - in today's case, I had made no changes to the MCU code at all (not even flashing it) between a working state and when it began to fail in this way again. I had previously spent a day trying to get past this on a Nucleo board, eventually starting from scratch with a new CubeMX file, which appeared to fix it for a time, until now. In hindsight the new project file had no impact.
I do have a workaround. It's ugly, but it's holding for the time being. After SystemClock_Config(), I explicitly clobber the USART1 clock selection bits with 0 (corresponding to PCLK2, the desired setting), then read the register back in a loop until the field sticks. (The compare value, 0x03, is the setting for LSE.) As far as I can tell, the loop never iterates more than once; the single register write after SystemClock_Config() is enough to make it stable. With this fix in place I have not been able to reproduce the failure.
My fix, at the start of main():
volatile uint32_t isUartBroken;

/* Reset of all peripherals, Initializes the Flash interface and the Systick. */
HAL_Init();

/* Configure the system clock */
SystemClock_Config();

do {
    /* Force USART1SEL to 0 (PCLK2) and read it back until it sticks */
    MODIFY_REG(RCC->DCKCFGR2, RCC_DCKCFGR2_USART1SEL, (uint32_t)(0));
    isUartBroken = ((uint32_t)(READ_BIT(RCC->DCKCFGR2, RCC_DCKCFGR2_USART1SEL)));
} while (isUartBroken == 0x00000003);
I'm attaching a screenshot of my clock configuration as well, though I should point out that I hit this problem whether I used HSI or HSE, with several different HSE clock sources across different boards, with the PLL enabled or disabled, and even with the whole system clocked down to 16 MHz (ergo... it doesn't look to me like the clock config is at fault). I'm currently running code from internal flash via the ITCM bus with the ART accelerator enabled to hit my high clock speed, but I saw the same problem when fetching instructions over AXI with all caches disabled as well.
Been using STM32 for years and I've never run into this before, nor can I find a thing online about this error. Would love to understand more about the problem if anyone has come across it.