H723ZG Nucleo ADC clock/noise question

Associate III

After assembling the X3 external 25 MHz crystal and its two load capacitors, and generating initialization code for a 550 MHz system clock with CubeMX, my NUCLEO-H723ZG board runs fine, except the ADC: it is very noisy. CubeMX configured the ADC clock to 96.000671 MHz (PLL2P). In the MCU datasheet the maximum ADC clock is 50 MHz (BOOST = 3). Is this okay, or have I found a bug in CubeMX?

I use ADC1 and ADC2 to sample two 150 Hz, 2 Vpp sine-wave signals with a 100 us phase difference between them (blue is delayed). I start the ADCs simultaneously from TIMER1. When HAL_ADC_ConvCpltCallback() is called, I read the ADC1 (yellow) and ADC2 (blue) conversion results. I generate the ADC1 - ADC2 signal with DAC1. Because the ADCs are in 16-bit mode but DAC1 can only handle 12-bit numbers, I right-shift the ADC results by 4 and subtract the 12-bit numbers.

uint32_t difference = (chn1Data >> 4) - (chn2Data >> 4);

I write the value of difference variable into the DAC1. This is the green channel.


  • Yellow: A signal (1V)
  • Blue: B signal (1V)
  • Orange: Comparator output (1V) It compares A input signal to +1.25V.
  • Green: A-B signal on output of 12-bit DAC1 (100mV)

I also tried 64x oversampling, but either it did not help or I configured something incorrectly:

  hadc1.Init.OversamplingMode = ENABLE;
  hadc1.Init.Oversampling.Ratio = 64;                 /* = 1 << 6 */
  hadc1.Init.Oversampling.RightBitShift = ADC_RIGHTBITSHIFT_6;
  hadc1.Init.Oversampling.TriggeredMode = ADC_TRIGGEREDMODE_SINGLE_TRIGGER;
  hadc1.Init.Oversampling.OversamplingStopReset = ADC_REGOVERSAMPLING_RESUMED_MODE;

With these settings I should get back the (integer) average of 64 ADC conversions for every sample point. Am I right?

Does anybody have an idea why the difference signal is so terrible?




Increasing sampling time is generally the easiest way to improve accuracy. You have maybe 50mV of noise in a 3.3V signal, which gives you about 6 bits of precision. Pretty awful.

I expect CubeMX is generating the right ADC clock. There is a /2 prescaler on some chips, probably on this one, but I didn't look into it. Halving the clock would be an informative test. So would attaching your IOC file.

If oversampling didn't have any effect, likely it's not configured correctly.

Lots of notes on improving ADC accuracy here:


I can get CubeMX to generate an invalid (too-high) ADC clock without any warning messages, so this is very likely an issue there. Changing the clock tree manually to keep the ADC clock within the limit should be straightforward.
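As a sketch of what that manual change could look like with HAL (the divider values here are illustrative assumptions, not the values CubeMX generated; verify them against the H723 clock tree and the active ADC prescaler):

```c
/* Sketch only: ADC kernel clock = 25 MHz HSE / M(5) * N(80) / P(8) = 50 MHz.
 * Divider values are illustrative; check them against the H723 clock tree,
 * and note that some silicon revisions add an extra /2 on the ADC input. */
RCC_PeriphCLKInitTypeDef PeriphClkInit = {0};
PeriphClkInit.PeriphClockSelection = RCC_PERIPHCLK_ADC;
PeriphClkInit.PLL2.PLL2M = 5;                      /* 25 MHz / 5 = 5 MHz ref   */
PeriphClkInit.PLL2.PLL2N = 80;                     /* 5 MHz * 80 = 400 MHz VCO */
PeriphClkInit.PLL2.PLL2P = 8;                      /* 400 MHz / 8 = 50 MHz     */
PeriphClkInit.PLL2.PLL2Q = 8;
PeriphClkInit.PLL2.PLL2R = 8;
PeriphClkInit.PLL2.PLL2RGE = RCC_PLL2VCIRANGE_2;   /* 4..8 MHz input range     */
PeriphClkInit.PLL2.PLL2VCOSEL = RCC_PLL2VCOWIDE;
PeriphClkInit.PLL2.PLL2FRACN = 0;
PeriphClkInit.AdcClockSelection = RCC_ADCCLKSOURCE_PLL2;
if (HAL_RCCEx_PeriphCLKConfig(&PeriphClkInit) != HAL_OK)
    Error_Handler();
```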

If you feel a post has answered your question, please click "Accept as Solution".
Associate III

Thanks TDK!

Here is the IOC file.

Thanks also for the ADC app note.

The initialization was generated by CubeMX...



The ADC isn't enabled in the IOC you attached.


But see my edit above, it definitely seems like CubeMX can set a too-high ADC clock without any warning. I could be missing something.

Associate III

Sorry, I probably sent you an old file. But the questions are:

  1. EDIT: What frequency should I set for ADC clock in CubeMX?
  2. Why do I get awful signal from the ADC?

If this is noise on the 3.3 V rail (I do not see 50 mV of noise on +3.3 V), then it comes from the Nucleo board. I use only a two-channel AFG sine-wave generator and the NUCLEO board, nothing else. Currently the ADC measurement is useless. I do hope that I did something wrong and that the ADCs can work correctly.

EDIT: I connected an external +2.5 V VREF to the CN7.6 VREFP pin and disabled the internal VREFBUF. It did not help much.

Thank you for your help.


Associate III

The battery test

I connected a 1.5 V AAA battery to both ADC inputs: ADC1_INP15 (PA3) and ADC2_INP10 (PC0). I used my application, which takes samples at 12800 Hz and starts the ADCs simultaneously from TIMER1. When the EOC (end of conversion) interrupt happens, I write the ADC1 conversion result to DAC1.CHN1 (a 16-bit to 12-bit conversion is needed). When both ADC results are available, I subtract the ADC1 conversion result from the ADC2 conversion result and write the difference code to DAC2.CHN2. This way I can check the ADC1 conversion results and the difference between the ADCs on an oscilloscope. Each conversion is saved in three buffers, which store the last 128 ADC1 and ADC2 conversion results and the difference between the ADCs.

I would expect the difference signal to be around 0 V (+/- 1-2 bits). Don't forget: both ADC inputs are connected to the same 1.5 V battery. But what I get is a 50 mV sine at around 50 kHz! A 50 mV difference between ADCs connected to the same 1.5 V battery, and not even a DC signal? That is unexpected.

In order to make sure this is not a scope issue, I put a breakpoint and halted the program when the buffers were full. Then I copied the last 128 samples from the ADC1, ADC2, and DIFFERENCE buffers out of the CPU memory, pasted the data into an Excel sheet, and created a graph. I repeated this test in both 12-bit and 16-bit ADC modes.

The scope results are correct: this is an AD converter issue. I still hope that I did something wrong during the initialization and somebody can explain what. STM experts?

Thanks for any help!


Associate III

I would appreciate it very much if somebody would answer these questions or explain why the conversion results of the ADC converters are so awful.



Apart from STM32 settings, don't forget about the analog world:

  • Are the ADC inputs properly filtered? A low-pass of, say, 100 R / 10 nF as close as possible to the ADC input pin (depending on your bandwidth needs) might help. (Check the datasheet for the recommended maximum source resistance / minimum capacitance for the S&H.)
  • What about PCB routing? Any noisy / high frequency stuff close to the ADC inputs or the input PCB traces?

Then there's the DAC, the above also applies to the DAC output.

Concerning your 1st picture, if I understand you correctly: the green scope signal shows the DAC output, which is ADC1 - ADC2 + bit shift? Then the green signal is not too far from what I expect: noise + difference due to phase shift (mostly) and imperfect ADC accuracy.

What are the ADC specs in the datasheet?

16 bit... that's just the resolution, any SNR / noise values at sampling rate X ?

Look at the 24bit audio ADCs: their theoretical SNR is about 144dB, realistic is something between 100dB .. 120dB

Does the ADC support hardware oversampling?


Oh, I forgot about the reference voltage! If that one's crappy, so will be the ADC & DAC signals.

So check the same for VREF (PCB, buffering, ...).

Associate III


  • The board is the NUCLEO-144 development kit for the STM32H723 made by ST, so I can't add an RC filter to the inputs; the PCB routing was done by ST.
  • Yes, in the ADC conversion-ready callback function the application converts the 16-bit conversion result to 12 bits and writes it to the corresponding DAC channel. The green signal is A-B (12-bit), and it is not null because there is a 100 us phase shift between the A and B signals.
  • The reference voltage is the Internal +2.5V reference of the MCU. I also tried it with external +2.5V reference voltage. It was a bit better from external reference.
  • I set the sampling time to values between 1.5 and 810.5 cycles. It does not matter; it just increases the conversion time.
  • The spikes on the difference signal are between 50 mV and 100 mV. That is too big for noise; it is something else. For example, the ADC clock = 90.000625 MHz. Do not ask why: STM32CubeMX configured this frequency automatically from the 25 MHz external crystal oscillator. The datasheet specifies a 6.25-50 MHz clock. I tried setting PLL2 to 6.25 MHz, but it did not help.

Thanks for your help.