I think that we have ‘crossed wires’ – I probably didn’t explain myself very well last time. Let’s try again.
I do not have external memory and I am not using the FSMC. Last time, I just commented that if I was using it, because it is clocked from HCLK, then all its settings would be in terms of HCLK cycles.
The OP (phil) was asking about the STM32F101.
However, I believe that all STM32s behave the same in this respect: they actually clock the internal program ROM from SYSCLK, at least when doing instruction fetches over the internal ICode bus. It may be different when reading data (constants) over the internal DCode bus (via the BusMatrix) – the documentation is confusing – see RM0008, section 2.1: it’s all mentioned, but not fully described!
As I said before, when two pieces of manufacturer’s documentation contradict each other, be afraid, be very afraid – perhaps even they don’t understand it!
If program ROM is clocked from SYSCLK, then the datasheet clock tree diagrams are wrong (and the ones shown in RM0008!), and the advice that STOne-32 gave to the OP is wrong.
He said (for the `101, which can only go as fast as 36 MHz) “…the flash wait state should be 1 if HCLK is greater then 24MHz.” [sic], implying that as long as HCLK is less than 24 MHz, 0 wait states are okay. I think that this is wrong advice, but, as I reported last time, on a given device and at ambient temperature you might ‘get away with it’.
Since last time, I have repeated my experiment on my STM3210E-EVAL and got the same results. So, I’ve now tried this on three independent STM32 specimens (each a different variant) and it is always the same – at SYSCLK = 72 MHz, dropping to 0 wait states always crashes, even if HCLK is set to SYSCLK/4 = 18 MHz, which, according to the datasheet diagrams, should not need any wait states.
In answer to your code request: I have done the experiment using ST’s Device_Firmware_Upgrade (DFU) demo project from their USB Library, but really any of their projects would do – as I stated last time, the crash occurs while the CPU is still in system_stm32f10x.c: SetSysClockTo72(), which is part of the standard code supplied under CMSIS.
As you point out, (under CMSIS) SystemInit() and SetSysClock() are used to configure the clocks.
system_stm32f10x.c: SystemInit() is called on reset (from within startup_stm32f10x_**.s). This, in turn, calls system_stm32f10x.c: SetSysClock(), which, because I’ve left SYSCLK_FREQ_72MHz defined, calls SetSysClockTo72().
Essentially, the only alterations that I made to their standard code were in SetSysClockTo72():
If the wait state (aka latency) is set to 0, then regardless of whether HCLK = 72 or 18 MHz (‘DIV1’ or ‘DIV4’), the CPU always crashes after line 994 “RCC->CFGR |= (uint32_t)RCC_CFGR_SW_PLL;” is executed. This is the point where the code switches SYSCLK from the internal HSI (~8 MHz) to the 72 MHz PLL output (derived from HSE).
NB slightly different code is compiled if the CPU is a ‘STM32F10X_CL’, but my alterations and the crash point are the same in either case, as I’ve now proved with my latest test on a `103.
I may still be fooling myself, but anyone can repeat this experiment using just the ST library code. I’d be interested to know their results.
I hope that this helps.