2026-02-19 5:44 PM
The method described below corrects the ADC's static non-linearity in software using a LUT. It requires only a pure sine-wave source, which can be generated by any available analog circuitry.
In essence, it corrects the non-linearity introduced by the entire analog front end of the conversion chain: the ADC and its drivers, the signal conditioning / amplification stages, the input protection, etc.
The same linearity correction can be done by applying a linear ramp voltage and generating an error LUT from it. The new method uses a sine wave instead, and consequently has a few advantages over the "classic" approach:
1. The linearity of the sine wave is much easier to verify, down to -140 to -160 dB (0.1 – 0.01 ppm), using a simple twin-T notch filter.
2. The analog front end may include DC blocking capacitors.
3. The calibration test signal can sit inside the working band, so some of the drivers' complex dynamic distortion (roll-off of the open-loop gain AOL over frequency, output-stage "compression" close to the power rails, input offset varying with common-mode voltage) is corrected as well.
Software part
is simple: an FFT produces the spectrum components, then every bin is zeroed except the ones associated with the fundamental frequency (real & imaginary parts). An inverse FFT then re-creates a single tone. It preserves the magnitude and phase of the original signal and has no distortion whatsoever. In essence, this process could be called "sine-wave regression", if I'm not mistaken: in the same way that linear or exponential regression fits a mathematically well-described curve to a noisy / distorted array of data. The next stage is subtracting this sine wave from the input data, leaving only the errors. All that is left is to define a table of "segments" in which each error happened and to build the LUT of errors.
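To make the "sine-wave regression" step concrete, here is a minimal sketch (not the project code; the function and variable names are mine). Computing only the fundamental bin directly is equivalent to running the FFT, zeroing every bin except the fundamental pair, and doing the inverse FFT:

#include <math.h>

/* Fit the pure tone at bin k_fund to x[0..n-1] and return the per-sample
   residual in err[]; this is the "sine-wave regression" described above. */
static void fit_fundamental(const float *x, float *err, int n, int k_fund)
{
    const float w = 6.2831853f * (float)k_fund / (float)n;
    float re = 0.0f, im = 0.0f;

    for (int i = 0; i < n; i++) {               /* single-bin DFT           */
        re += x[i] * cosf(w * (float)i);
        im -= x[i] * sinf(w * (float)i);
    }
    for (int i = 0; i < n; i++) {               /* re-create tone, subtract */
        float fit = (2.0f / (float)n) * (re * cosf(w * (float)i)
                                       - im * sinf(w * (float)i));
        err[i] = x[i] - fit;                    /* distortion + noise left  */
    }
}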
After calibration, each ADC sample is mapped to its segment and the associated error from the LUT is simply subtracted.
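A sketch of the two remaining steps, again with illustrative names only: during calibration the residuals are averaged per input-code segment, and at run time the stored error of the matching segment is subtracted. The +2048 offset and divide-by-4 re-map follow the snippet shown later in this thread:

#include <string.h>

#define NSEG 1024                         /* 10-bit segment table           */
static float lut_err[NSEG];               /* average error per segment      */

static int to_segment(float code)         /* signed 12-bit code -> 0..1023  */
{
    int seg = (int)((code + 2048.0f) / 4.0f);
    if (seg < 0)        seg = 0;
    if (seg > NSEG - 1) seg = NSEG - 1;
    return seg;
}

/* Calibration: accumulate the residuals from the sine-wave fit per segment. */
static void build_lut(const float *x, const float *err, int n)
{
    static float sum[NSEG];
    static int   cnt[NSEG];
    memset(sum, 0, sizeof sum);
    memset(cnt, 0, sizeof cnt);
    for (int i = 0; i < n; i++) {
        int seg = to_segment(x[i]);
        sum[seg] += err[i];
        cnt[seg]++;
    }
    for (int s = 0; s < NSEG; s++)
        lut_err[s] = cnt[s] ? sum[s] / (float)cnt[s] : 0.0f;
}

/* Run time: look up the segment and subtract its error. */
static float correct_sample(float code)
{
    return code - lut_err[to_segment(code)];
}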
The pictures help to understand all the processing steps.
Proof of the concept
was done on a Nucleo-STM32G474RE board. Its 12-bit ADC can oversample and run in differential mode; both features were activated. With OVS = 64 the 2.8 Msps sampling rate drops to 43.75 ksps. The FFT is a 4096-point split-radix, DIT for the forward transform and DIF for the inverse.
The segment table is 10 bits (1024 entries). All software, including the calibration subroutine and the spectrum-analyzer part (shown on the attached LCD), runs on the same G474RE.
An improvement of over 20 dB in the 3rd harmonic was demonstrated. That's more than a factor of ten, bringing the INL from the initial typical ~2 LSB down to less than 0.2 LSB.
Simulating the method in pure software shows that its efficiency depends on the distortion level: about 70 dB of improvement for highly distorted signals (THD -20 dB), decreasing proportionally for very clean signals, with a lower limit around -150 dB using float math. Double precision was not evaluated.
The calibration sine wave was generated with a very selective filter (Q ≈ 2000); its purity was confirmed down to -120 dB by another spectrum analyzer. Tests conducted at two frequencies, 200 Hz and 2 kHz, show similar results.
Stability was verified: the calibrated board (with the LUT stored in flash memory) demonstrated the same low-distortion spectrum when powered on the next day.
Limitations of the demo project.
Overall, the hardware implementation shows lower efficiency than the software modelling (THD level of -102 dB vs. < -122 dB). That may be explained by the complexity of the INL curve of a SAR ADC. In addition, the memory limitation does not allow an FFT larger than 4k, and the high noise level of the internal ADC forced me to activate OVS in order to get the chart's noise floor below -110 dB.
The two INL curves for the 2 kHz and 200 Hz test signals show that OVS plays a bad role here, "smoothing" all the edges at the higher frequency. I'm sure the method can demonstrate better results on a sigma-delta ADC, whose INL curve looks different, without sharp edges.
2026-02-20 1:27 AM - edited 2026-02-20 1:28 AM
Impressive piece of work!
Can we please discuss the figures a bit more?
The first one, what is y, the raw readout numbers? Given 64-times oversampling and the basic ADC 12 bits, you've also employed a division-by-4 (right-shift-by-2) to fit into 16 bits, correct? x is time, I presume, in arbitrary units (index of the sample or so).
On the same first figure, I presume, blue is the measured signal, red is the reconstructed "fundamental", yellow is the diff, correct? If it's so, the diff looks to me disturbingly high, given the INL (from third figure) is in single digits, basically between -1.5..-2.5, can you please comment?
This
> Tests conducted at two frequencies, 200 Hz and 2 kHz, show similar results.
is IMO a very important piece from the conclusion, and can be seen also on Fig.3. However, the x-axis disturbs me there, as I'd expect it to span the same 0..65535 as the y-axis in the first figure. Is this because you've bucketed the correction curve to fit into less memory, or do I misunderstand something?
Also, a slightly disturbing thing is the "DC offset", i.e. that the curves in Fig.3 are not centered around y=0. I would expect that the usual built-in calibration would subtract exactly this. Did you perform that calibration before measurement or not?
Also, a noteworthy thing is, that the curves in Fig.3 appear to be roughly rotationally symmetrical around mid-x-axis (especially the smoother 2kHz one - I understand that it may hide some of the details, as you've said it's maybe effect of the filtering caused by oversampling). I would kind'a-sort'a expect that for differential measurements; or maybe because of the SAR nature of conversion i.e. that the error imposed by the MSB (i.e. first switched capacitor) should cause a step symmetrical around zero in INL exactly at that x-midpoint, or something along these lines.
Can you please perform an experiment, measuring/calculating the INL with ADC set to single-ended (it should be enough to just set ADC as single ended on one of the inputs and leave everything else as is) in the same setup, and comparing the result to the differential one? That may be an eye-opener... or not. :)
Thanks again,
JW
2026-02-20 5:53 AM
@waclawek.jan wrote:
Impressive piece of work!
Can we please discuss the figures a bit more?
The first one, what is y, the raw readout numbers? Given 64-times oversampling and the basic ADC 12 bits, you've also employed a division-by-4 (right-shift-by-2) to fit into 16 bits, correct? x is time, I presume, in arbitrary units (index of the sample or so).
On the same first figure, I presume, blue is the measured signal, red is the reconstructed "fundamental", yellow is the diff, correct? If it's so, the diff looks to me disturbingly high, given the INL (from third figure) is in single digits, basically between -1.5..-2.5, can you please comment?
Thanks.
The first picture is just an illustration: a software-generated set of data (Qt), a sine wave distorted to -20 dBc. It's only there to help in understanding the basic concept of LUT correction, what is subtracted from what and how the difference is supposed to look. The scale is also not correct; the data were charted in LibreOffice Calc (the Excel-spreadsheet analogue in the Unix world).
Regarding the scaling of real data: the OVS result is shifted right by 2 (/4), so oversampling by 64 provides a factor of 16 in the end, and that is exactly 16 bits. The right shift has to be done because of the memory constraint. On another board (H7) it would be better not to shift at all and avoid the rounding error.
The data flow as 16 bits (halfword DMA) into an accumulator buffer, where I perform more averaging on block-sized data, finding the zero crossing first, but that's a different topic. What matters here is that the 16-bit integer is cast to float and divided by 16 before it gets into the FFT processing. Scaling after this point is entirely in the ADC's initial units, 12 bits, so the INL chart is mapped correctly. I know all the scaling is correct: I tested 3.3 Vp-p with a scope at the ADC inputs, and I keep a calibration data set in flash memory, which helps to verify the processing at any moment. That data set is substituted for the real ADC data and charted on the TFT LCD by switching a command over the serial link. A harmonics level of -120 dB was verified, as well as the noise floor, frequency scaling, etc.
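To put the numbers in one place, a minimal sketch of that scaling chain (buf, raw16 and code12 are illustrative names, not the real buffers; I'm assuming the post-OVS halfword is read as a signed value in differential mode):

/* 12-bit ADC, OVS = 64 accumulates 12 + 6 = 18 bits; the hardware >>2
 * shift brings it to 16 bits so it fits the halfword DMA transfers.
 * 64 / 4 = 16, so one original 12-bit LSB equals 16 counts here.      */
int16_t raw16  = buf[i];                  /* post-OVS, post-shift sample */
float   code12 = (float)raw16 / 16.0f;    /* back to 12-bit LSB units    */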
@waclawek.jan wrote:
This
> Tests conducted at two frequencies, 200 Hz and 2 kHz, show similar results.
is IMO a very important piece from the conclusion, and can be seen also on Fig.3. However, the x-axis disturbs me there, as I'd expect it to span the same 0..65535 as the y-axis in the first figure. Is this because you've bucketed the correction curve to fit into less memory, or do I misunderstand something?
Also, a slightly disturbing thing is the "DC offset", i.e. that the curves in Fig.3 are not centered around y=0. I would expect that the usual built-in calibration would subtract exactly this. Did you perform that calibration before measurement or not?
You've missed an important part: the re-map. I played with the software for a while to define an appropriate size for the LUT, not too big (memory limits again, a 4k-entry float table would require 16 kBytes) while keeping the same efficiency in the linearity correction. My research shows that the 70 dB reduction in THD level (which I've mentioned) holds for a 10-bit table (1024 elements, segments).
for (int i = 0; i < FFT_SIZE; i++) {
    //uint tmp1 = (inp_copy[i] + 32768) /scale;     // 16-bit input variant
    uint tmp1 = (inp_copy[i] + 2048) /scale_seg;    // signed 12-bit -> segment index
    if (tmp1 > 1023) tmp1 = 1023;                   // clamp to the 10-bit LUT
    f_r[i] = inp_copy[i] - cor_inp[tmp1];           // subtract the stored error
}

As you can see, the re-map is simple: the signed 12-bit value is converted to unsigned by adding 2048, then scaled down to 10 bits by scale_seg = 4. At this point tmp1 is the number of a segment in the 10-bit LUT.
It is very important to understand that after the re-mapping in the calibration step (which calculates the error table) frequency, phase and magnitude are out of the picture: the LUT is indexed purely by the input code.
The slow-DC-ramp calibration procedure does the same thing, mapping input values to the associated points in the LUT.
You are right about the DC offset, I forgot to correct it. But it does not change anything. The hardware setup has an offset; I can monitor it by printing the raw input data, about 2.5-2.75 pixels, and the FFT output data[0] confirms the same thing more precisely.
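For reference, a sketch of that check, assuming an unscaled forward FFT so that bin 0 is just the sum of the input samples; its real part divided by the FFT length gives the mean, i.e. the DC offset in the same units as the input:

float dc_offset = data[0] / (float)FFT_SIZE;   /* bin 0 real part / N = mean of the block */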
@waclawek.jan wrote:
Also, a noteworthy thing is, that the curves in Fig.3 appear to be roughly rotationally symmetrical around mid-x-axis (especially the smoother 2kHz one - I understand that it may hide some of the details, as you've said it's maybe effect of the filtering caused by oversampling). I would kind'a-sort'a expect that for differential measurements; or maybe because of the SAR nature of conversion i.e. that the error imposed by the MSB (i.e. first switched capacitor) should cause a step symmetrical around zero in INL exactly at that x-midpoint, or something along these lines.
Can you please perform an experiment, measuring/calculating the INL with ADC set to single-ended (it should be enough to just set ADC as single ended on one of the inputs and leave everything else as is) in the same setup, and comparing the result to the differential one? That may be an eye-opener... or not. :)
Thanks again,
JW
DC non-linearity for the G474 and H753 was measured using a TI DAC80501 (its linearity verified down to ±0.5 LSB at 16 bits by the sigma-delta ADC MCP3562).
Here are a couple of pictures (not sure if it's the same board or not, I have 5 pieces flying around).
Single-ended is much noisier. For the DC test OVS = 256, and the whole ramp takes about 2-5 seconds, very slow.
2026-02-20 6:21 AM
Thanks for the detailed explanation. Now the figures do make sense.
> I done this before
I've missed that post.
> SE is much noisier
Oh, those two pictures are an eye opener, indeed!
(and the symmetry I've seen in the INL fig.3 in your first post was then probably just a fluke, no need for my speculations)
To be honest, I never thought of the integrated ADCs as potentially precision devices (albeit at the cost of some extra software). Now I may rethink this. Thanks.
JW
2026-02-20 6:43 AM
Regarding the DC offset, see the comparative pictures, with the chart's left side touching the Y-axis. The first shows the DC present, and the second picture shows a much lower DC level. So the DC offset was "embedded" into the INL table and then got subtracted. The good thing is that both AC and DC errors were corrected.
Precision could be good if the calibration runs periodically.
About stability, my remark that the "ADC works well with data stored the previous day" was half a joke. But the research confirmed that the 12-bit SAR outperformed the audio-grade PCM1803 / 1808 series ADCs from BB (TI) on the first try!
The method is based on a simple calibration hardware setup; the high-Q selective filter costs me <$2, and since the G474RE has a DAC, I'm thinking of running the calibration often, so as not to rely on temperature stability or on aging / drift.
The distortion introduced by the muxes that switch the inputs to the external device and back to the calibration oscillator would also be straightened out!
2026-02-23 6:10 AM
More tests, using the Nucleo-STM32G474RE with an external ADC this time.
I interfaced an MCP3562 delta-sigma 24-bit ADC.
The results are very good: an ADC that has ±4 "typical" (±7 ppm per the data sheet) INL at gain x1 was improved more than 10 times for the tested unit, bringing the linearity to less than 0.4 (!) ppm. This value is not precisely defined, it is just based on the -129 dB level of the 3rd harmonic. It is likely even better, and a low-cost ADC gets a specification suitable for building a 7.5-digit DMM or other high-grade lab test equipment.
*The INL chart has a 16-bit Y-scale; multiply by 16 to convert into ppm (1 LSB of a 16-bit scale is 10^6 / 65536 ≈ 15.3 ppm, rounded to 16).
2026-03-16 3:28 PM
Moving the research to the next level, using a Nucleo-STM32H753ZI this time.
The project allows an unprecedented level of detail when observing the INL of the ADS8354 from TI.
(Compare to the pictures provided in the datasheet.)