LPTIM stops working when RTC is activated

EC.3
Associate III

Hello,

I started by configuring all four of the U5's LPTIMs to "Count internal clock events", sourcing all of them from LSE and using a different period value for each. The autogenerated code (STM32CubeIDE v1.13.2) worked, meaning each timer toggled an independent GPIO output at the expected rate. The main.c file is attached.

Then I enabled the RTC in the IOC and regenerated the code. Run from the debugger it works, but run standalone the GPIO outputs driven by the timers never toggle. A fifth GPIO output tells me the code is in the do-nothing main loop, not the Error_Handler() loop.

Are there conflicts or constraints between the LPTIMs and the RTC that prevent or limit using them together?

Thanks 

1 ACCEPTED SOLUTION
EC.3
Associate III

Manually added LL_RCC_SetRTCClockSource() as follows and it seems to address the issue without modification to autogen code:

/* Configure the system clock */
SystemClock_Config();

/* USER CODE BEGIN SysInit */
LL_RCC_SetRTCClockSource(LL_RCC_RTC_CLKSOURCE_LSE);

/* USER CODE END SysInit */

This function's comment says: "Once the RTC clock source has been selected, it cannot be changed anymore unless the Backup domain is reset". Since the autogenerated SystemClock_Config() enables LSE for RCC, it makes sense to call this function right after it (it would be nice if the autogenerated code did that itself). This prevents MX_RTC_Init() from trying to (unnecessarily) configure LSE and resetting the Backup domain in the process, which apparently caused the issue I was seeing.


2 REPLIES 2
EC.3
Associate III

So I commented MX_RTC_Init() back in in main.c and systematically commented out portions of code along its path; look for the "NOTE" below (I also attached stm32u5xx_hal_msp.c). Under the debugger, nothing inside the if (LL_RCC_GetRTCClockSource() != LL_RCC_RTC_CLKSOURCE_LSE) { ... } block executes because the condition is not met, but that block apparently does run outside the debugger: removing the section wrapped in the #if 1 / #endif preprocessor conditional inside it lets the app run correctly. Of course, I'm not really using the RTC in this demo/debug app, so I don't know what that change does to the RTC, but it allows the LPTIMs to "work". What's actually going on here, though?

 

void HAL_RTC_MspInit(RTC_HandleTypeDef* hrtc)
{
  if (hrtc->Instance == RTC)
  {
    /* USER CODE BEGIN RTC_MspInit 0 */

    /* USER CODE END RTC_MspInit 0 */
    if (LL_RCC_GetRTCClockSource() != LL_RCC_RTC_CLKSOURCE_LSE)
    {
      FlagStatus pwrclkchanged = RESET;

      /* Update LSE configuration in Backup Domain control register */
      /* Requires to enable write access to Backup Domain if necessary */
      if (LL_AHB3_GRP1_IsEnabledClock(LL_AHB3_GRP1_PERIPH_PWR) != 1U)
      {
        /* Enables the PWR Clock and Enables access to the backup domain */
        LL_AHB3_GRP1_EnableClock(LL_AHB3_GRP1_PERIPH_PWR);
        pwrclkchanged = SET;
      }

#if 1 // NOTE: This is where the problem lies...
      if (LL_PWR_IsEnabledBkUpAccess() != 1U)
      {
        /* Enable write access to Backup domain */
        LL_PWR_EnableBkUpAccess();
        while (LL_PWR_IsEnabledBkUpAccess() == 0U)
        {
        }
      }
      LL_RCC_ForceBackupDomainReset();
      LL_RCC_ReleaseBackupDomainReset();
#endif

      LL_RCC_LSE_SetDriveCapability(LL_RCC_LSEDRIVE_LOW);
      LL_RCC_LSE_Enable();

      /* Wait till LSE is ready */
      while (LL_RCC_LSE_IsReady() != 1)
      {
      }

      LL_RCC_SetRTCClockSource(LL_RCC_RTC_CLKSOURCE_LSE);

      /* Restore clock configuration if changed */
      if (pwrclkchanged == SET)
      {
        LL_APB1_GRP1_DisableClock(LL_AHB3_GRP1_PERIPH_PWR);
      }
    }

    /* Peripheral clock enable */
    __HAL_RCC_RTC_ENABLE();
    __HAL_RCC_RTCAPB_CLK_ENABLE();
    __HAL_RCC_RTCAPB_CLKAM_ENABLE();
    /* USER CODE BEGIN RTC_MspInit 1 */

    /* USER CODE END RTC_MspInit 1 */
  }
}

 
