
12-bit ADC precision issue

Karem
Associate II

Hello,

We are working on an STM32F429.
The application uses ADC1 to sample data on channels 0-7.

ADC1 and channel characteristics are:

- Mode: Independent
- Sampling clock: around 2.8 MHz
- No DMA access
- Sampling delay: 20 clock periods (max value)
- Resolution: 12 bits
- No scan mode
- No continuous conversion
- Single conversion mode
- Channel sampling time: 480 clock cycles (max value)
- Vref+ = Vdda = 3.3 V
- Vss = 0 V


Each channel is connected to a sensor with a voltage range of 0 to 3.30 V.

The signal on each channel is low frequency.

The sampling period for all channels is 1 second.
Channels are sampled sequentially in the order 0, 1, 2, ..., 7.

This works pretty well, with very good precision for our application.
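For context, the acquisition loop is roughly the sketch below (adc_read_single and delay_ms are placeholder names; the real firmware was not posted):

    #include <stdint.h>

    extern uint16_t adc_read_single(uint32_t channel); /* placeholder: blocking single conversion */
    extern void     delay_ms(uint32_t ms);             /* placeholder: millisecond delay */

    static uint16_t samples[8];

    void acquisition_loop(void)
    {
        for (;;) {
            for (uint32_t ch = 0; ch < 8; ++ch) {
                samples[ch] = adc_read_single(ch);     /* channels 0..7, in order */
            }
            delay_ms(1000);                            /* 1 s sampling period */
        }
    }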

 

Now the specific issue we have:

  • When the sensor voltage on channel N is >= 2 V, the voltage measured by the ADC on channel N+1
    is lower than its nominal value by 0.3 V.
  • Only channel N+1 is affected; all the other channels are not. And the offset is always 0.3 V.

Each channel has its own conditioning chain, independent of the others.


We have ruled out the conditioning chain as the source of the problem,
because the input signal on channel N+1 is stable even while the voltage measured by the ADC drops by 0.3 V.

Our hypothesis:

We know that during each sampling phase (channel N, for example), the ADC
capacitors are charged up to the channel voltage: 3.3 V, for example.
At the end of the conversion, the capacitors are not discharged.

At the next sampling phase (channel N+1), the new channel voltage is, say, 1 V.
When the ADC sampling switch closes, charge transfer begins between the ADC
capacitors at 3.3 V (the channel N value) and channel N+1 at 1 V, and continues
until equilibrium is reached or the sampling time expires.

If the sampling time expires before equilibrium, the ADC capacitors hold
a voltage different from the real channel value, which produces an erroneous reading.
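As a quick sanity check on this hypothesis, the residual error of first-order RC settling is V_err(t) = dV * exp(-t/RC). The sketch below plugs in placeholder values (the exact R_ADC and C_ADC are in the datasheet and depend on the part and speed grade):

    /* First-order estimate of the charge-sharing error left after the
       sampling window. R/C values below are placeholders in the range of
       typical STM32F4 datasheet figures, not measured values. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double f_adc   = 2812500.0;        /* ADC clock [Hz] */
        const double t_samp  = 480.0 / f_adc;    /* sampling window: ~170.7 us */
        const double c_adc   = 7e-12;            /* internal sampling cap, ~4-7 pF */
        const double r_total = 6000.0 + 100.0;   /* assumed R_ADC + source resistance */

        double tau   = r_total * c_adc;                   /* ~43 ns time constant */
        double v_err = (3.3 - 1.0) * exp(-t_samp / tau);  /* worst case: 3.3 V -> 1 V step */

        /* For an error below 1 LSB at 12 bits, t must exceed tau * ln(2^12) ~ 8.3 tau. */
        printf("tau = %.1f ns, window = %.2f us, residual = %g V\n",
               tau * 1e9, t_samp * 1e6, v_err);
        return 0;
    }

With these assumed values the sampling window is thousands of time constants long, so a plain RC model settles completely; if charge sharing really is the cause, something must be making the effective settling much slower than this simple model predicts.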

To correct the problem, we have tested two solutions:

Solution 1: Using a dummy channel
We set up an unused ADC channel tied to GND: call it a dummy channel.
Then we sample this dummy channel before sampling any real channel.
This approach produces valid data on each channel and solves the problem (see the sketch after the note below).

 

Note: when the dummy channel is tied to 3.3 V instead, the problem remains.
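A minimal sketch of this workaround with the STM32F4 HAL (ADC_CHANNEL_8 as the grounded dummy input is an assumption for illustration; hadc1 is assumed to be configured elsewhere):

    /* Sample a grounded dummy channel first so the sample-and-hold cap starts
       from a known low charge, then sample the real channel. */
    #include "stm32f4xx_hal.h"

    extern ADC_HandleTypeDef hadc1;      /* configured in the application init code */

    static uint16_t read_channel(uint32_t channel)
    {
        ADC_ChannelConfTypeDef cfg = {0};
        cfg.Channel      = channel;
        cfg.Rank         = 1;
        cfg.SamplingTime = ADC_SAMPLETIME_480CYCLES;
        HAL_ADC_ConfigChannel(&hadc1, &cfg);

        HAL_ADC_Start(&hadc1);
        HAL_ADC_PollForConversion(&hadc1, HAL_MAX_DELAY);
        uint16_t v = (uint16_t)HAL_ADC_GetValue(&hadc1);
        HAL_ADC_Stop(&hadc1);
        return v;
    }

    uint16_t read_with_dummy(uint32_t real_channel)
    {
        (void)read_channel(ADC_CHANNEL_8);   /* dummy conversion, result discarded */
        return read_channel(real_channel);   /* the reading we actually keep */
    }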

 

Solution 2: Reducing the ADC frequency
Reducing the ADC clock from 2.8 MHz to 1.4 MHz also solves the problem.
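For reference, on the F4 the ADC clock is PCLK2 divided by the common prescaler, so halving it is a one-line change at init (the divider values below are assumptions; the one that gives 1.4 MHz depends on the APB2 frequency in the project):

    /* Halve the ADC clock via the common prescaler (STM32F4 HAL sketch). */
    hadc1.Init.ClockPrescaler = ADC_CLOCKPRESCALER_PCLK_DIV8;  /* was ..._DIV4 */
    if (HAL_ADC_Init(&hadc1) != HAL_OK) {
        Error_Handler();  /* project-provided error hook (assumed) */
    }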


We don't have a deep understanding of the problem at stake here. We have two questions:
1 - Has anyone ever had this problem?
2 - Does our hypothesis point to the root cause of the problem?

 

TDK
Super User

This is an issue with high-impedance sources. It takes a while for the sampling capacitor to charge up.

See section 3.2.7, "Effect of the analog signal source resistance", here:

How to optimize the ADC accuracy in the STM32 MCUs - Application note

There might be some secondary effects happening as well, such as charge injection.

Karem
Associate II

Thank you for the reply. The source for each ADC channel comes from an op-amp, which has a very low output impedance, far less than the maximum value stated by the following equation:

[Image: maximum allowed source resistance (R_AIN) equation from the datasheet]

Since the problem occurs when switching from one channel to another, is there any known issue from ST on this specific point?

 

The sample-and-hold capacitor takes time to charge (in theory it never truly reaches the signal voltage if it is a pure RC circuit with no inductance).

You need to wait until the error is lower than the resolution, so for 12 bits that means charging to more than 99.976% of the signal (1 - 2^-12 ≈ 99.976%, i.e. at least ln(2^12) ≈ 8.3 RC time constants).

By sampling a dummy channel there is still charge sharing from the dummy channel into the signal channel, but the error is then always towards zero.

I suspect this may not be the issue, since your sampling time is so long.

But you are running the ADC outside the specified range:
in stm32f437ai.pdf (DS9484 Rev 13), page 160, the maximum sampling rate is given as 2 Msps.


Hi,

the basic "problem" is the way, these converters are working (capacitive SAR ADC

see :  https://www.renesas.com/en/document/apn/r14an0001-operation-sar-adc-based-charge-redistribution-rev100 )

BUT if your source from opamp is low impedance, the initial charge of the 6 pF ADC should do absolutely no harm.

So for your "error effect" : the opamp has a problem...or something with the software/settings is strange.

I would try with a more "normal" sampling time, about 56 , on all used (!) ADC channels, and check then again.

(Your super long 480 setting is ...from my tests not the best, but testing was on other chip, but maybe here similar.)
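A minimal sketch of that change with the F4 HAL (sConfig and hadc1 assumed from the existing init code):

    /* Shorter sampling time on every used channel (STM32F4 HAL sketch). */
    sConfig.SamplingTime = ADC_SAMPLETIME_56CYCLES;  /* instead of ADC_SAMPLETIME_480CYCLES */
    HAL_ADC_ConfigChannel(&hadc1, &sConfig);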


@Karem wrote:

Thank you for the reply. The source for each ADC channel comes from an op-amp, which has a very low output impedance.


I wouldn't be so sure, since a SAR ADC presents a "switching load". The op-amp's output impedance must be low at the very high frequency end, well above 100 MHz. Only high-speed op-amps have this specification in their datasheets, and in most cases the curve rolls up and has no practical meaning at all. The best way to drive a SAR ADC is a high-speed, fast-settling amplifier (with rail-to-rail output, able to run at the low 3.3 V supply, or the ADC's safety is in danger) plus an RC network between the amplifier output and the ADC input:

[Image: recommended RC network between the op-amp output and the ADC input]

The exact ADC clock value is 2,812,500 Hz. We can compute the sampling rate (total conversion time on the F4 is the sampling time plus 12 ADC clock cycles at 12-bit resolution):

SR = f_ADC / (480 + 12) = 2812500 / 492 ≈ 5717 samples/s, or about 5.7 kS/s << 2 Msps

Unless my calculation is wrong, the ADC is being run well within the specified range.

Using a normal sampling time goes against our observations: when the sampling time increases, the problem tends to disappear.

But we've tried changing the sampling time to 56 clock cycles and less, and the problem remains.

Note: the sampling time is always the same on every channel.

Try putting a small capacitor (a few nF) on each ADC input channel; it acts as a local charge reservoir that can top up the sampling capacitor much faster than the op-amp alone can.


This hypothesis is very interesting, as it goes against our initial assumptions about the op-amp.

Our current design doesn't include an RC network at the op-amp output.

From our research based on your suggestion, if an unsuited op-amp is used to drive
the SAR ADC, the op-amp can exhibit ringing, instability, and/or long settling in
response to the impulse from the ADC switch closing.

For those interested in the subject, Analog Devices and Texas Instruments have
good series of video tutorials:

https://www.analog.com/en/resources/media-center/videos/1834672124001.html

https://www.ti.com/video/series/precision-labs/ti-precision-labs-analog-to-digital-converters-adcs.html

Using LTspice, we have simulated the op-amp/ADC interaction to find out whether
there is any ringing or instability. The schematic used is shown below.

[Image: LTspice schematic of the op-amp driving the modeled ADC input stage]

The op-amp used is an MCP6001, and the STM32F4 ADC is modeled by its sampling-switch resistance (R1) and its sampling capacitor (C1).
SW1 models the acquisition phase and SW2 the conversion phase.

The results show that there is no ringing or instability at the op-amp output.
The charging and discharging phases are clean and complete well within the ADC
sampling time.

This demonstrates that the chosen op-amp and design are well suited to driving the ADC.

But we still don't have a clear explanation for our issue. We are thinking of using an ST dev board to sample multiple channels and see whether the problem appears there as well.