
STM32H7 ADC calibration, I don't understand

linuxfan
Associate II

Hi all,

I am experimenting with H7 ADC (after having tried the G4 ADC).

I see in the HAL that the calibration function, HAL_ADCEx_Calibration_Start(), takes one more parameter on the H7 than on the G4: you have to specify either ADC_CALIB_OFFSET or ADC_CALIB_OFFSET_LINEARITY.

I am confused, and reading the datasheet does not help me. I can't understand the difference between the two modes.

Should I use ADC_CALIB_OFFSET or ADC_CALIB_OFFSET_LINEARITY when calling HAL_ADCEx_Calibration_Start()?

Or should I call the calibration *twice*, once for each option?

TY,
linuxfan

8 REPLIES
linuxfan
Associate II

Never mind. After reading AN5354, "Getting started with the STM32H7 Series MCU 16-bit ADC", things are a little clearer.

The linearity calibration (ADC_CALIB_OFFSET_LINEARITY) has to be run once after a reset, while the offset calibration (ADC_CALIB_OFFSET) must be performed once and possibly again later, when conditions change.

 

I am a beginner, and I'm more confused after seeing your answer. Should I call the ADC calibration repeatedly to reduce the error?

I'm working on an STM32H723ZG, using its 16-bit ADC, and I perform the ADC calibration at start-up (the linearity one). I am getting a 1-20 mV error; how can I bring this under 1 mV? Without calibration the error was in the 50-60 mV range. Your insights would be helpful.

As I said in the previous post, the best results are achieved by calling the calibration once for linearity and once for offset. After that, the offset calibration should be repeated when the temperature of the chip changes significantly.

This is what I have understood, but I could be wrong.

That said, and since you describe yourself as a beginner, I can add that getting the error under 1 mV seems difficult. ADCs are complicated and many, many things influence their behaviour. One should take care of the reference voltage, which should be clean and stable; the acquisition time and the impedance of the source; the conversion frequency, the range of the input signal and so on. And noise and temperature changes are not under the user's control...
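For example, the acquisition time is set per channel in the HAL. A minimal sketch (hadc1 and ADC_CHANNEL_3 are placeholders, use your own handle and input; the long sampling time is just an example for a high-impedance source):

  ADC_ChannelConfTypeDef sConfig = {0};
  sConfig.Channel      = ADC_CHANNEL_3;               /* placeholder channel */
  sConfig.Rank         = ADC_REGULAR_RANK_1;
  sConfig.SamplingTime = ADC_SAMPLETIME_387CYCLES_5;  /* longer sampling helps with higher source impedance */
  sConfig.SingleDiff   = ADC_SINGLE_ENDED;
  sConfig.OffsetNumber = ADC_OFFSET_NONE;
  sConfig.Offset       = 0;
  HAL_ADC_ConfigChannel(&hadc1, &sConfig);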

Even with a very stable source (the voltage to read) and a stable temperature, multiple ADC readings will differ. Every application should cope with this in the way most appropriate for that application.

Perhaps the most significant parameter for an ADC is the ENOB (effective number of bits). Suppose you have an ADC with 12 bits of resolution, an ENOB of 10 (which is quite good), and a measurement range from 0 to 3.3 volts.

In theory you would have 4096 different values: 3300 mV divided by 4096 gives about 0.8 mV, i.e. every step in the converted value means 0.8 mV. But if the effective number of bits is only 10, in reality you have only 1024 significant steps in your reading, i.e. 3300 mV divided by 1024, about 3 mV per step instead of 0.8. This means that even if you read, say, 32 mV, the real value could be 32 minus 1.5 mV or 32 plus 1.5 mV. This is "uncertainty".
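Just to make that arithmetic concrete, a tiny standalone calculation with the numbers from the example above (nothing STM32-specific here):

  #include <math.h>
  #include <stdio.h>

  int main(void)
  {
      const float full_scale_mV = 3300.0f;
      const float ideal_step_mV = full_scale_mV / powf(2.0f, 12.0f);  /* 12-bit resolution: ~0.8 mV per step */
      const float enob_step_mV  = full_scale_mV / powf(2.0f, 10.0f);  /* ENOB = 10: ~3.2 mV effective step */
      printf("ideal step = %.2f mV, effective step = %.2f mV (about +/- %.2f mV)\n",
             ideal_step_mV, enob_step_mV, enob_step_mV / 2.0f);
      return 0;
  }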

Beware, I am not a guru: I hope someone else can confirm what I am writing. But it seems to me that reducing the error to 1 mV with a 12-bit ADC is impossible.

 

 

 

Thanks for replying. I'm using the 16-bit ADC with DMA transfer. What you said makes sense; do you think it might be instability in the reference voltage? It should be possible to achieve that precision in 16-bit mode, right?

#define SCALE (3300.0f/65535.0f)    /* mV per ADC count, assuming VREF+ = 3.3 V and 16-bit full scale */

volatile float adc_val = 0.0f;      /* last conversion result, in mV */
uint16_t ADC_ARR[5];                /* DMA destination buffer (only element 0 is used here) */

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc){
    adc_val = ADC_ARR[0] * SCALE;   /* kept as float: a uint16_t would truncate the sub-mV part */
}

HAL_ADCEx_Calibration_Start(&hadc1, ADC_CALIB_OFFSET_LINEARITY, ADC_SINGLE_ENDED);

HAL_ADC_Start_DMA(&hadc1, (uint32_t*)ADC_ARR, 1);

 

As I said before, the fact that your ADC is 16-bit does not mean much. What really counts, if you know it, is the ENOB, which will surely be less than 16. Otherwise you can find tables that state the TOTAL ERROR, in number of LSB. If the total error is, for example, 4 LSB, then you take your SCALE (from your posted fragment), multiply it by the total error, and you will know your precision.
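A rough back-of-the-envelope version of that, where the 4 LSB figure is only an example (check the accuracy table of your own datasheet):

  #define SCALE           (3300.0f / 65535.0f)   /* ~0.05 mV per LSB at 16 bits, VREF+ = 3.3 V */
  #define TOTAL_ERROR_LSB 4.0f                    /* example only, not a real datasheet value */

  float worst_case_mV = TOTAL_ERROR_LSB * SCALE;  /* ~0.2 mV attributable to the ADC itself */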

Remember that the figures declared by the chip manufacturer assume the best conditions, i.e. a very good PCB layout, good filtering and so on.

I think that your problem is not the software (the fragment you posted says very little), but perhaps you are missing the offset calibration.

Moreover, you should try to identify your problem better. Is it non-repeatability of the reading (sometimes you get 200, other times 205)? Or is the value always offset by some semi-fixed amount (you always get values higher/lower than you expect)?

Moreover (2): are you sure your analog value really is what you think it is? Measuring with a precision of 1 mV is not easy; do you have instrumentation precise enough?

 

"I think that your problem is not the software (the fragment you posted says very little), but perhaps you are missing the offset calibration." - I'm performing both calibrations like this:

  HAL_ADCEx_Calibration_Start(&hadc1, ADC_CALIB_OFFSET_LINEARITY, ADC_SINGLE_ENDED);
  HAL_ADCEx_Calibration_Start(&hadc1, ADC_CALIB_OFFSET, ADC_SINGLE_ENDED);

The error is not fixed: for example, if I feed 0 V I might get a 1-2 mV error, but for 2 V it might be 1-20 mV. I tested the error using a Keithley 2450 SourceMeter. All the clocks are in the default configuration; the ADC clock is also derived from HSI, will that affect accuracy? From the docs the ENOB is 12.2, so that means about 0.7 mV per step. Also, are the total unadjusted error and the ENOB the same thing?

[Attached screenshot: sreyas40_1-1758266057581.png]

 

Dear sreyas40,

I am not a guru... I don't think I can help you much more. But I can add one last thing: you should not rely on a single reading - try to take at least 3-4 (or perhaps even 7-8) readings and average them. You will see more consistent results and understand better whether the problem is noise, linearity or offset (or a combination of them).
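Something along these lines, for example (a minimal polling-mode sketch, so it does not match your DMA setup exactly; SCALE is the mV-per-count factor defined in your fragment and the sample count is arbitrary):

  #define N_SAMPLES 8

  /* Blocking average of N_SAMPLES conversions, result in millivolts */
  float ADC_ReadAverage_mV(ADC_HandleTypeDef *hadc)
  {
      uint32_t sum = 0;
      for (int i = 0; i < N_SAMPLES; i++)
      {
          HAL_ADC_Start(hadc);
          HAL_ADC_PollForConversion(hadc, HAL_MAX_DELAY);
          sum += HAL_ADC_GetValue(hadc);
      }
      HAL_ADC_Stop(hadc);
      return (sum / (float)N_SAMPLES) * SCALE;
  }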

About ENOB versus total unadjusted error, I really don't know. I suppose that if the ENOB is 12, the precision would be 3300 mV divided by 4096, or about 0.8 mV. Considering instead 3300 mV divided by 65536 16-bit steps, you get about 0.05 mV per step; an unadjusted error of 10 LSB would then mean 0.5 mV (10 steps of 0.05 mV), which is not too far from 0.8 mV, but it is not the same. As you see, I'm not an expert.

If you manage to solve or understand this, let us all know by posting here what you've discovered. I am not working on ADCs right now, but I surely will in the near future, so I am interested (and many others are too, I think).

 

Thanks, your insights have been really helpful. I will update this thread if I come across a solution.