2025-01-08 07:25 AM
Hi,
My problem: how to configure the DAC so that the CPU core clock and the DAC kernel clock can be asynchronous (e.g. 250MHz and 70MHz)?
Details:
- my hardware is based on the STM32H523CET6
- configuration:
- SYSCLK (HCLK) running at 250MHz (with PLL1 from HSE=TCXO @26MHz)
- TIM7 generates a 48kHz rectangle signal
- DAC1 is triggered by TIM7 trgo
- PLL2 configured for PLL2R=70MHz
- code loop:
- polling TIM7's SR UIF bit (update interrupt flag) until it is set
- clear the UIF bit
- toggle the DAC output between the values 0 and 0xFFF0
- loop again (infinitely)
- results:
- when DAC is clocked by HCLK (ADC,DAC Clock Mux = HCLK): OK :)
=> output is toggling with a rectangle frequency of 24kHz (as expected)
- when DAC is clocked by PLL2 (ADC,DAC Clock Mux = PLL2R): KO :(
=> output is randomly toggling
My feeling is that the transfer of the DAC output value (DHR12L1 register) does not work if the clocks (CPU and DAC) are asynchronous. Can you help me?
(Note: I need to run the ADC with an accurate frequency, so I need to set the mux to PLL2R. And I need the DAC and ADC to work at the same time. So far, the DAC does not satisfy me, as its output only randomly reproduces the signal I program...)
Thanks!
P.S.: I attach the Cube project "Hello" in the two above configurations (ok.tar.xz and ko.tar.xz). But feel free to ask me for any detail!
Solved! Go to Solution.
2025-01-29 02:34 AM
Hi,
To conclude: the DAC CANNOT be clocked by PLL2R (silicon bug).
If, reader, like me, you want a "fast" ADC running at the same time as the DAC, you must set the analog clock switch "ADCDACSEL" to "rcc_hclk" (= SYSCLK); otherwise the DAC is not reliable. And if, like me, you want a specific sampling frequency ("FS") but prefer the ADC and DAC to be synchronized (for simpler signal processing), here are some options:
1) Simplest solution. Configure the ADC in "continuous mode"; the ADC prescaler allows few (one?) possible FS values: with SYSCLK = 250MHz, FS can be set to 250/4/15 = 4.167MHz (the "15" comes from SMP+RES = 2.5+12.5 at 12 bits; the "4" is the ADC prescaler from SYSCLK). Furthermore, if you accept a slight reduction of CPU speed, you may tune PLL1 so that SYSCLK is slightly lower. For instance, to target FS = 4.0MHz, set SYSCLK = 4*15*4 = 240MHz. For the DAC, a timer, for instance TIM7, must be used. Clocked by SYSCLK, it is perfectly synchronized with the ADC (but the phase is not controlled). Even though FS can be tweaked via the sampling duration (SMP = 2.5 by default, but can be set to 6.5, for instance), the flexibility for FS is pretty poor. Hence the next option.
2) Run the ADC in "triggered mode" (still clocked by, e.g., SYSCLK/4, but triggered by another source). Using a low-power timer (LPTIM), and assuming PLL3 is not used for something else (like USB), it is possible to exploit PLL3 and thus get maximum flexibility for FS. If the DAC runs at FS too, the LPTIM trigger can be shared for a perfectly synchronized ADC/DAC pair. If the DAC runs slower (for instance because ADC oversampling is in use), you must use two different LPTIMs, since one timer cannot output two different periods. However, even if the two LPTIMs are clocked by the same PLL3, and hence are synchronized, they might be started with a random delay (depending on software execution time), so the phase between ADC and DAC is not controlled (nor reproducible?). Hence the next option.
3) Still run the ADC in "triggered mode", but use one (or two) general-purpose timer(s) (GP timers); these timers cannot be clocked by PLL2 or PLL3, unfortunately. But even clocked by SYSCLK (the maximum available speed), FS is more flexible than in 1) because the dividing factor is larger, so there are more possibilities; and if we accept losing a little CPU speed, the flexibility is almost complete. Example: FS = 3MHz; 250/3 = 83.33, so the chosen period count for the GP timer is 83; consequently, SYSCLK must be set to 249MHz, sacrificing a little possible CPU speed.
Only one timer is needed if the ADC and DAC run at the same FS; if oversampling is enabled in the ADC, so the DAC runs slower, two different timers are needed (GP timers have several channels, but those channels must run at the same period). A master/slave configuration, in which the second GP timer is clocked by the output of the first one, provides perfect synchronization and a controlled phase between ADC and DAC. Example: use TIM3 for the ADC and TIM4 for the DAC; the ADC works in triggered mode (clocked by SYSCLK/4 but triggered by TIM3_TRGO), with an oversampling ratio of X; TIM3 is configured for FS; optionally, one can tweak the rising and falling edges for complete sampling control (limited by the GP timer granularity, 1/SYSCLK, but better than the SMP register); TIM4 is configured as a slave of TIM3, with a period count equal to X; its rising edge can be adjusted according to the required ADC/DAC phase (with 1/X granularity; if finer granularity is required, you may use the master/slave configuration only for starting TIM4 and thus get 1/SYSCLK granularity); finally, the DAC is configured to be triggered by TIM4 TRGO.
---
I did not find any better trade-off. But, for sure, you may have a better idea for your specific use case ;)
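For reference, the "keep ADCDACSEL on HCLK" workaround looks roughly like this in CubeMX-generated HAL code for the H5 series. This is a sketch from memory, so verify the identifiers (`RCC_PERIPHCLK_ADCDAC`, `RCC_ADCDACCLKSOURCE_HCLK`) against your stm32h5xx_hal_rcc_ex.h:

```c
/* Sketch: route the ADC/DAC kernel clock to HCLK instead of PLL2R.
 * Identifiers follow STM32H5 HAL naming; double-check them in
 * stm32h5xx_hal_rcc_ex.h before use. */
RCC_PeriphCLKInitTypeDef PeriphClkInit = {0};

PeriphClkInit.PeriphClockSelection = RCC_PERIPHCLK_ADCDAC;
PeriphClkInit.AdcDacClockSelection = RCC_ADCDACCLKSOURCE_HCLK;  /* not PLL2R */
if (HAL_RCCEx_PeriphCLKConfig(&PeriphClkInit) != HAL_OK)
{
    Error_Handler();
}
```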
2025-01-08 07:46 AM - edited 2025-01-08 07:48 AM
> output is randomly toggling
Can you show this? Just a bit of jitter?
A better solution would be to handle this with DMA: transfer to the DAC register when TRGO happens.
2025-01-08 08:01 AM
On the oscilloscope:
- when it is OK: perfect rectangle signal, freq is 24kHz
- when it is KO: the rectangle's high or low levels last between 1 and about 20 times the expected duration; the average frequency is about 5kHz
Not simple to give you a picture of the oscilloscope (it is too old to export screenshots). But trust me: this is not "a bit of jitter", unless 2000% is "a bit" for you! :D
Regarding DMA: of course, in my full application I use it. But the behavior is exactly the same, whether DMA- or CPU-managed. As I preferred to provide a simpler example, I gave this code snippet without DMA.
Once again, my feeling is that when the DAC register is changed, by the CPU directly or by the DMA, since the bus master clock differs from the slave's, a clock-domain synchronization would be required. Does my feeling make sense? Am I missing a configuration to get this synchronization?
Note that I did not find anything in the errata sheet...
2025-01-08 11:43 PM - last edited on 2025-01-10 03:08 AM by SofLit
Merged threads treating the same subject.
*** Sorry I closed my previous post (https://community.st.com/t5/stm32-mcus-products/dac-kernel-clock-and-cpu-clock-domains-how-to-configure-them-to/m-p/759757#M270108) by mistake. This one reopens it. ***
Hi,
My problem: how to configure the DAC so that the CPU core clock and the DAC kernel clock can be asynchronous (e.g. 250MHz and 70MHz)?
Details:
- my hardware is based on the STM32H523CET6
- configuration:
- SYSCLK (HCLK) running at 250MHz (with PLL1 from HSE=TCXO @26MHz)
- TIM7 generates a 48kHz rectangle signal
- DAC1 is triggered by TIM7 trgo
- PLL2 configured for PLL2R=70MHz
- code loop (case 1):
- polling TIM7's SR UIF bit (update interrupt flag) until it is set
- clear the UIF bit
- toggle a GPIO pin for probing CPU activity
- toggle the DAC output between the values 0 and 0xFFF0
- loop again (infinitely)
- results:
- when DAC is clocked by HCLK (ADC,DAC Clock Mux = HCLK): OK :)
=> output is toggling with a rectangle frequency of 24kHz (as expected)
- when DAC is clocked by PLL2 (ADC,DAC Clock Mux = PLL2R): KO :(
=> output is randomly toggling
Attached are oscilloscope results:
- caseX_OK.png corresponds to DAC MUX set to HCLK
- caseX_KO.png corresponds to DAC MUX set to PLL2R
Source 1 (yellow) is the DAC OUT.
Source 2 (magenta) is a GPIO "PROBE" that I added for better investigation (to check when DAC register is reached).
Different cases (caseX) are tried (code extracts below):
- case 1: same as before, the DAC OUT is toggled each time
=> this should clarify the "randomly toggling" above
- case 2: the DAC OUT follows a simple 16-step staircase signal ("(cnt & 0xf) << 12"); expected freq = 48k/16 = 3kHz
=> this better illustrates my feeling that the DAC OUT is unsynchronized with the CPU;
reminder: the magenta trace corresponds to CPU activity...
- case 3: the DAC OUT follows the simple 48kHz counter value, giving a triangle frequency of 48k/4096 = 11.719Hz
=> hard to see any problem, even if the signal is a bit noisy
I created this case to raise a warning about measurements: in some cases, we might think everything is OK...
Also attached are the code sources for case 1, for reference (case1_KO_sources.tar.xz and case1_OK_sources.tar.xz).
My feeling is that the transfer of the DAC output value (DHR12L1 register) does not work if the clocks (CPU and DAC) are asynchronous. Can you help me?
Notes:
- I need to run the ADC with an accurate frequency, so I need to set the mux to PLL2R. And I need the DAC and ADC to work at the same time. So far, the DAC does not satisfy me, as its output only randomly reproduces the signal I program...
- in my program I use DMA for changing the DAC output value; this does not improve anything; since the DMA runs at CPU speed, and hence accesses the DAC register the same way the CPU does, this is not surprising.
Thanks!
CASE 1 code:
============
/* USER CODE BEGIN WHILE */
uint32_t cnt = 0;
while (1)
{
/* USER CODE END WHILE */
/* USER CODE BEGIN 3 */
while (!READ_BIT(htim7.Instance->SR, TIM_SR_UIF));
CLEAR_BIT(htim7.Instance->SR, TIM_SR_UIF);
HAL_GPIO_TogglePin(PROBE_GPIO_Port, PROBE_Pin);
cnt++;
if (cnt & 0x1)
hdac1.Instance->DHR12L1 = 0;
else
hdac1.Instance->DHR12L1 = 0xfff0;
}
/* USER CODE END 3 */
CASE 2 code:
============
/* USER CODE BEGIN WHILE */
uint32_t cnt = 0;
while (1)
{
/* USER CODE END WHILE */
/* USER CODE BEGIN 3 */
while (!READ_BIT(htim7.Instance->SR, TIM_SR_UIF));
CLEAR_BIT(htim7.Instance->SR, TIM_SR_UIF);
HAL_GPIO_TogglePin(PROBE_GPIO_Port, PROBE_Pin);
hdac1.Instance->DHR12L1 = (cnt & 0xf) << 12;
cnt++;
}
/* USER CODE END 3 */
CASE 3 code:
============
/* USER CODE BEGIN WHILE */
uint32_t cnt = 0;
while (1)
{
/* USER CODE END WHILE */
/* USER CODE BEGIN 3 */
while (!READ_BIT(htim7.Instance->SR, TIM_SR_UIF));
CLEAR_BIT(htim7.Instance->SR, TIM_SR_UIF);
HAL_GPIO_TogglePin(PROBE_GPIO_Port, PROBE_Pin);
hdac1.Instance->DHR12L1 = (cnt << 4) & 0xfff0;
cnt++;
}
/* USER CODE END 3 */
Edit: code formatting
2025-01-09 04:41 AM
Try different TIM prescaler/ARR values.
Try to generate TRGO from OCxREF rather than Update.
JW
2025-01-09 06:23 AM - last edited on 2025-01-10 02:18 AM by Amel NASRI
I tried these configurations for Prescaler/Period:
- 2604 / 1 (initial)
- 2604 / 2
- 2604 / 3
- 2605 / 1
- 2605 / 2
- 2605 / 3
- 2606 / 1
- 2606 / 2
- 2606 / 3
- 10000 / 100
- 1000 / 1
- 100 / 1
The "signature" of the randomness changes, but the problem remains.
Regarding OCxREF: it is not available on the basic timers TIM6 and TIM7.
*************************************
Anyway, I think you are not on the right track: the problem occurs when the DAC output value register (e.g. DHR12L1) is written; sometimes the change is not applied. This problem is seen _only if_ the DAC is clocked by PLL2R.
To illustrate this better, I configured the DAC with software trigger. The code then becomes:
/* USER CODE BEGIN WHILE */
uint32_t cnt = 0;
while (1)
{
/* USER CODE END WHILE */
/* USER CODE BEGIN 3 */
while (!READ_BIT(htim7.Instance->SR, TIM_SR_UIF));
CLEAR_BIT(htim7.Instance->SR, TIM_SR_UIF);
HAL_GPIO_TogglePin(PROBE_GPIO_Port, PROBE_Pin);
cnt++;
if (cnt & 0x1)
hdac1.Instance->DHR12L1 = 0;
else
hdac1.Instance->DHR12L1 = 0xfff0;
SET_BIT(hdac1.Instance->SWTRIGR, DAC_SWTRIGR_SWTRIG1);
}
/* USER CODE END 3 */
In that case, the timer is only useful for the CPU, which drives the DAC itself.
If the DAC clock source is HCLK, it works well (with, of course, some slight jitter due to software timing), like all the other "OK" cases. However, if the DAC is clocked by PLL2R, I still observe the same issue ("KO" cases). For that reason, I think the problem is definitely not the timer but the DAC. Do you get me?
Edit: code formatting
2025-01-09 06:35 AM
What's the content of DAC registers?
JW
2025-01-09 07:26 AM
2025-01-09 09:52 AM - edited 2025-01-09 10:02 AM
Those look OK to me.
At this point I have no more ideas, sorry.
[EDIT]
One more thing, though. You mentioned DMA, but the code snippets above don't use it, so I'm not quite sure how it is used, exactly.
Regardless of that, could you please, in those "manually filled" examples, insert a delay between detecting the TIM7 update event and writing to the DAC holding register? Something like 1us. Any time-wasting delay would do, perhaps toggling the pin not just once but several times.
[/EDIT]
JW
2025-01-10 01:29 AM - last edited on 2025-01-10 02:16 AM by Amel NASRI
[DMA]
No; the code in this post does NOT include any DMA usage.
I talked about DMA because:
- in my "main" application, I use it (I would say "of course")
- it does not improve anything (that's the main information)
- it is less flexible for tests (like inserting a delay, as you suggested)
- it is a bit more complex and thus harder to reproduce (I prefer to provide simpler code)
[/DMA]
[DELAY]
I inserted a loop of nops, calibrated on the oscilloscope to about 1.2us (more than 1us, for sure). The code is:
while (!READ_BIT(htim7.Instance->SR, TIM_SR_UIF));
CLEAR_BIT(htim7.Instance->SR, TIM_SR_UIF);
HAL_GPIO_TogglePin(PROBE_GPIO_Port, PROBE_Pin);
for (int i = 0; i < 10; i++) __asm__("nop");
HAL_GPIO_TogglePin(PROBE_GPIO_Port, PROBE_Pin);
cnt++;
if (cnt & 0x1)
hdac1.Instance->DHR12L1 = 0;
else
hdac1.Instance->DHR12L1 = 0xfff0;
Result: this does not change anything.
I also tried a loop of 100 nops, giving about 400us of delay: still no improvement.
[/DELAY]
Could you please double-check with the design team? Could it be a silicon bug?
Edit: code formatting