STM32H7: jumps in ADC transfer curve; both with Self-calibration, and FactoryLoad

lura
Associate II

Dear ST experts,

we're currently struggling with getting acceptable ADC performance with some of our samples of the STM32H753VIT ADC2. Our setup includes:

  • ADC2 works in Independent mode
  • using Scan Conversion Mode, 7 Channels (3, 5, 9, 10, 15, 18, 19)
  • 14-bit resolution, 16x oversampling (each channel), 2-bit right-shift for oversampling
  • ADC2 clock is 37.5 MHz, generated by PLL2P, before the internal division by 2; Asynchronous clock mode divided by 1
  • Continuous Conversion Mode and Discontinuous Conversion Mode are both disabled
  • TIM15 triggers ADC2 every 100 µs to scan all channels
  • DMA Circular Mode
  • VRef = 3.0 Volt, external
  • We set up a minimal STM CubeIDE project to verify the described behaviour
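
For context, the configuration above maps onto the Cube HAL roughly as follows (a sketch with the H7 HAL field/macro names as we understand them; our project uses the CubeMX-generated init plus one HAL_ADC_ConfigChannel() call per channel for ranks 1..7, so take this only as an illustration):

  /* Sketch of the ADC2 setup described above (STM32H7 HAL). */
  hadc2.Instance                        = ADC2;
  hadc2.Init.ClockPrescaler             = ADC_CLOCK_ASYNC_DIV1;      /* PLL2P, async /1    */
  hadc2.Init.Resolution                 = ADC_RESOLUTION_14B;
  hadc2.Init.ScanConvMode               = ADC_SCAN_ENABLE;
  hadc2.Init.ContinuousConvMode         = DISABLE;
  hadc2.Init.DiscontinuousConvMode      = DISABLE;
  hadc2.Init.NbrOfConversion            = 7;
  hadc2.Init.ExternalTrigConv           = ADC_EXTERNALTRIG_T15_TRGO; /* TIM15 every 100 µs */
  hadc2.Init.ExternalTrigConvEdge       = ADC_EXTERNALTRIGCONVEDGE_RISING;
  hadc2.Init.ConversionDataManagement   = ADC_CONVERSIONDATA_DMA_CIRCULAR;
  hadc2.Init.OversamplingMode           = ENABLE;
  hadc2.Init.Oversampling.Ratio         = 16;                        /* 16x oversampling   */
  hadc2.Init.Oversampling.RightBitShift = ADC_RIGHTBITSHIFT_2;       /* 2-bit right shift  */
  if (HAL_ADC_Init(&hadc2) != HAL_OK)
    Error_Handler();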

While more than half of our parts work more or less as advertised, with some parts we reproducibly fail to generate decent calibration values, regardless of whether we use the factory-loaded linear calibration values (by using

HAL_ADCEx_LinearCalibration_FactorLoad(&hadc2);

) or doing a self-calibration with

HAL_ADCEx_Calibration_Start(&hadc2, ADC_CALIB_OFFSET_LINEARITY, ADC_SINGLE_ENDED);

 

For testing, we apply a slow (1 Hz) ramp signal to ADC2 channel 3 while the ADC is running, buffer a large number of results in memory, stop the ADC when the buffer is filled, and finally read out the buffer via UART.

From the recorded ADC values, we cut out one ramp and subtract the linear trend line. With the "bad" STM32H753 samples, we always get rather big jumps in the error curve, like this one:

stm32h7_selfcalib_bad_01.png

Notably, the jumps seem to occur at multiples of 0x2000. So we concluded that the calibration of the bit weights does not work properly on these samples.
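
For reference, "subtract the linear trend line" is nothing more than an ordinary least-squares line fit over the samples of one ramp; a minimal sketch (not our exact analysis code):

#include <stdint.h>
#include <stddef.h>

/* Fit a straight line to one recorded ramp (least squares) and store the
 * per-sample deviation from that line in err[]. */
static void detrend(const uint16_t *samples, size_t n, float *err)
{
    double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
    for (size_t i = 0; i < n; ++i) {
        sx  += (double)i;
        sy  += (double)samples[i];
        sxx += (double)i * (double)i;
        sxy += (double)i * (double)samples[i];
    }
    const double slope  = ((double)n * sxy - sx * sy) / ((double)n * sxx - sx * sx);
    const double offset = (sy - slope * sx) / (double)n;
    for (size_t i = 0; i < n; ++i)
        err[i] = (float)((double)samples[i] - (slope * (double)i + offset));
}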

Our main application is to accurately measure sinusoidal signals around 0x8000, which gets ugly whenever the amplitude is small (because we then work in a small interval around the main discontinuity at 0x8000).

 

When investigating the issue, we read out both the factory calibration values and the linear calibration values resulting from self-calibration, and noticed that all 10-bit parts of the 160-bit linearity calibration factor lie around 0x200 (+/- roughly 0x1B). We conjectured that 0x200 may be a "neutral" value, and manually set all 10-bit parts to 0x200:

  uint32_t lincalbuf[ADC_LINEAR_CALIB_REG_COUNT] = {
		  0x20080200u,
		  0x20080200u,
		  0x20080200u,
		  0x20080200u,
		  0x20080200u,
		  0x00000200u,
  };
  HAL_ADCEx_LinearCalibration_SetValue(&hadc2, lincalbuf);
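
For reference, the layout we inferred (and which the HAL getter/setter appears to use) is three 10-bit fields per 32-bit word, lowest-numbered field in the least significant bits, with only bits 29:0 used; the sixth word carries just the last field. A small helper to build such a buffer could look like this (our own sketch, not an official ST description):

  #include <stdint.h>

  /* Pack 16 ten-bit linearity words into the 6-word buffer layout used by
   * HAL_ADCEx_LinearCalibration_GetValue()/_SetValue(): three fields per
   * 32-bit word in bits 29:0, field 0 in the lowest bits of word 0. */
  static void pack_linear_calib(const uint16_t fields[16],
                                uint32_t buf[ADC_LINEAR_CALIB_REG_COUNT])
  {
    for (int w = 0; w < ADC_LINEAR_CALIB_REG_COUNT; ++w)
      buf[w] = 0u;
    for (int b = 0; b < 16; ++b)
      buf[b / 3] |= (uint32_t)(fields[b] & 0x3FFu) << (10u * (unsigned)(b % 3));
  }

With all 16 fields set to 0x200, this reproduces exactly the buffer shown above.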

A bit surprisingly, this gave us a much more "continuous" error curve with all our parts ("good" and "bad"), e.g.

stm32h7_manualcalib_better.png

Thus we have a few questions:

i) Can someone tell us exactly what the 10-bit parts of the linear calibration factor mean, preferably with some formulae? By experimenting, we concluded that the lower eight 10-bit parts represent the bit weights of the upper 8 ADC bits. Is this correct?

ii) Did we miss something when calibrating the ADC, which may explain the bad behaviour with some parts?

iii) Are there some risks when we use our "manual" calibration values (16 times 0x200) that we may not be aware of?

 

Best Regards,
Lukas Rauber

 


I save the return value in a local variable, and if it isn't equal to HAL_OK, I print a warning to my serial output:

#include <stdarg.h>  /* va_list, va_start, va_end */
#include <stdio.h>   /* vsnprintf */

static ADC_HandleTypeDef * const padc = &hadc2;

static void dump_calibration_values(
		char const * const prefix,
		int rv);
static int uprintf(char const * const fmt, ...);

int main(void)
{
    /* ... basic config and MX_$_Init() calls ... */
    dump_calibration_values("@startup", 0);
    int rv;
    rv = HAL_ADCEx_Calibration_Start(
      &hadc2, ADC_CALIB_OFFSET_LINEARITY, ADC_SINGLE_ENDED);
    dump_calibration_values("ADC_CALIB_OFFSET_LINEARITY", rv);

    HAL_ADC_Start_DMA(padc, (uint32_t *)dmabuf, DMABUFLEN);
    HAL_TIM_Base_Start(&htim15);

    /* ... main loop waiting for buffer to be filled ... */
}

static void dump_calibration_values(
		char const * const prefix,
		int rv)
{
  if (rv != HAL_OK)
	  uprintf("# WARNING! Calibration returned nonzero: %i\n", rv);
  uprintf("# %s\n", prefix);
  uprintf("# Offset (single-ended): 0x%x\n",
		  (unsigned)HAL_ADCEx_Calibration_GetValue(
				  padc,
				  ADC_SINGLE_ENDED));
  uint32_t buf[ADC_LINEAR_CALIB_REG_COUNT];
  rv = HAL_ADCEx_LinearCalibration_GetValue(padc, buf);
  if (rv != HAL_OK)
  	  uprintf("# WARNING! HAL_ADCEx_LinearCalibration_GetValue"
  	  		  " returned nonzero: %i\n",
			  rv);
  uint_fast8_t right_shift = 0;
  uint32_t * p = buf;
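  /* The 160-bit linearity factor is packed three 10-bit fields per 32-bit
   * word (only bits 29:0 are used), so after every third field the shift
   * wraps around and we advance to the next word of the buffer. */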
  for (int b = 0; b < 16; ++b)
  {
    unsigned short const calibval =
    		((*p) >> right_shift)
			& ((1u << 10) - 1);
    uprintf("# LinearCalib[%2d] (rv %i): 0x%hx\n",
		    b, rv, calibval);
    // next one
    if ((right_shift += 10u) >= 30)
      right_shift = 0, ++p;
  }
}

static int uprintf(char const * const fmt, ...)
{
	static char printbuf[128];
    va_list args;
    va_start(args, fmt);
	int count = vsnprintf(
		printbuf,
		sizeof(printbuf),
		fmt,
		args);
	va_end(args);
	if (count < 0)
		return -1;
	/* vsnprintf reports the untruncated length; never transmit more
	 * than the buffer actually holds */
	if ((unsigned)count >= sizeof(printbuf))
		count = sizeof(printbuf) - 1;
	HAL_UART_Transmit(
		&huart4,
		(uint8_t const *)printbuf,
		count,
		HAL_MAX_DELAY);
	return 0;
}

But my serial output never contained the "# WARNING! Calibration returned nonzero..." line:

# @startup
# Offset (single-ended): 0x0
# LinearCalib[ 0] (rv 0): 0x0
# LinearCalib[ 1] (rv 0): 0x0
# LinearCalib[ 2] (rv 0): 0x0
# LinearCalib[ 3] (rv 0): 0x0
# LinearCalib[ 4] (rv 0): 0x0
# LinearCalib[ 5] (rv 0): 0x0
# LinearCalib[ 6] (rv 0): 0x0
# LinearCalib[ 7] (rv 0): 0x0
# LinearCalib[ 8] (rv 0): 0x0
# LinearCalib[ 9] (rv 0): 0x0
# LinearCalib[10] (rv 0): 0x0
# LinearCalib[11] (rv 0): 0x0
# LinearCalib[12] (rv 0): 0x0
# LinearCalib[13] (rv 0): 0x0
# LinearCalib[14] (rv 0): 0x0
# LinearCalib[15] (rv 0): 0x0
# ADC_CALIB_OFFSET_LINEARITY
# Offset (single-ended): 0x408
# LinearCalib[ 0] (rv 0): 0x201
# LinearCalib[ 1] (rv 0): 0x205
# LinearCalib[ 2] (rv 0): 0x206
# LinearCalib[ 3] (rv 0): 0x20e
# LinearCalib[ 4] (rv 0): 0x213
# LinearCalib[ 5] (rv 0): 0x21d
# LinearCalib[ 6] (rv 0): 0x216
# LinearCalib[ 7] (rv 0): 0x204
# LinearCalib[ 8] (rv 0): 0x200
# LinearCalib[ 9] (rv 0): 0x205
# LinearCalib[10] (rv 0): 0x209
# LinearCalib[11] (rv 0): 0x20c
# LinearCalib[12] (rv 0): 0x212
# LinearCalib[13] (rv 0): 0x221
# LinearCalib[14] (rv 0): 0x216
# LinearCalib[15] (rv 0): 0x1ff
0x0802
0x0800
0x080c
0x080f
0x081c
0x0823
0x082b
0x0833
0x0837
0x083b
0x0840
0x0847
... (lots of ADC values) ...

I did not check the register myself; as I read it, the HAL routine does exactly that.

The results you showed were very interesting. Seems like something weird is going on internally to the ADC.

It might be hard to get an answer from ST here and they're the only ones with the knowledge of what it's doing at this level. I always thought it was weird that the ADC required linear calibration factors at all.

I have seen jumps in ADC values (missing codes) happen when VREF+ is not stable or not decoupled properly, but those jumps lead to asymmetric results which do not appear to be happening here.

The onboard 16-bit ADC is nice, but it falls significantly short of the capabilities of a dedicated external ADC.

 

The 3 MOhm series resistance is massive. I don't think it will change anything based on your comparison of "good" vs "bad" chips, but it may be worth seeing if changing that to 10 kOhm makes a difference.

 

Thanks for sharing.


 " I did not check the register myself; as I read it, the HAL routine does exactly that. "

In normal circumstances yes, but in case of failure the HAL is the primary suspect. I still think that the calibration didn't complete successfully and would verify the content of the CR register (the same way the HAL does), or even replace the HAL with your own function to control all steps of the procedure, phases and timings.

ADC control register (ADC_CR)
Address offset: 0x08
Reset value: 0x2000 0000
Devices revision Y
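
A minimal check after calibration could look like this (sketch, CMSIS bit names, using the uprintf() helper from your post):

uint32_t cr = ADC2->CR;
if ((cr & ADC_CR_ADCAL) != 0u)   /* calibration still running or never finished */
    uprintf("# ADCAL still set, CR = 0x%08lX\n", (unsigned long)cr);
if ((cr & ADC_CR_ADEN) != 0u)    /* ADC must stay disabled during calibration   */
    uprintf("# ADC unexpectedly enabled, CR = 0x%08lX\n", (unsigned long)cr);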

Another question is the silicon revision, since the H7 has a few and the main difference AFAIK is the ADC implementation. Is it possible that your good/bad units differ in version/revision?

lura
Associate II

Yeah, ST sadly does not really allow much insight into the ADC hardware. It would be really helpful to get some information about the meaning of the Linear Calibration values, in order to (1st) debug the problem, and (2nd) evaluate our proposed "solution" to set all 10-bit-fields to 0x200, especially regarding future-proofing.

Regarding the 3 MOhm input resistance: this is part of the inverting pre-amplifier setup, in order to map the voltage range we want to measure (about +/- 600 Volt) into our ADC input range of 0 - 3 Volt. I don't see in which regard this might pose a problem... Are your worries regarding input noise, or something else?

Hmm, good point.

My STM32H753 parts are already the (newer [?]) hardware revision "V". The differences that I found are:

  1. revision "V" has two BOOST bits in the CR register instead of one, and
  2. revision "V" features a fixed internal ADC clock divider (by 2).

The BOOST bits are '0b10' in my setup; as I read the datasheets, this should be okay for my ADC clock of 37.5 MHz (before internal "/ 2" division), i.e. 18.75 MHz after the divider...

Following your advice, I tried a manual "bit-banging" self-calibration, as described in the reference manual, and also checked the CR register after calibration.

But sadly, no changes in the measured data. Here is the code and the output for the curious folks:

static ADC_HandleTypeDef * const padc = &hadc2;

static void calibrate_adc(void)
{
	ADC_TypeDef * const adc = padc->Instance;
	/* 1a) Ensure DEEPPWD=0, ADVREGEN=1. */
	uint32_t reg = adc->CR;
	reg &= ~0x3Fu;        // don't set "rs"-bits
	reg &= ~(1ul << 29);  // disable Deep-power-down
	reg |= (1ul << 28);   // enable ADC Voltage regulator
	adc->CR = reg;
	/* 1b) Verify that the ADC voltage regulator startup time has elapsed. */
	while (((reg = adc->ISR) & (1ul << 12)) == 0)
		uprintf("# ADCCalib: waiting for ADC voltage regulator startup."
				" ISR = 0x%08X\n", reg);
	/* 2a) Ensure that ADEN == 0. */
	reg = adc->CR;
	reg &= ~0x3Fu;      // don't set "rs"-bits
	reg	|= (1ul << 1);  // set ADDIS bit
	adc->CR = reg;
	while (((reg = adc->CR) & 0b11u) != 0b00)
		uprintf("# ADCCalib: waiting for ADC to turn off. CR == 0x%08X.\n",
				reg);
	/* 3) Select the input mode for this calibration, and
	 * select if Linearity calibration enable or not. */
	reg = adc->CR;
	reg &= ~0x3Fu;        // don't set "rs"-bits
	reg	&= ~(1ul << 30);  // clear ADCALDIF (single-ended input mode)
	reg	|= (1ul << 16);   // set ADCALLIN: calibrate *with* linearity calibration
	adc->CR = reg;
	/* 4) Set ADCAL = 1. */
	reg = adc->CR;
	reg &= ~0x3Fu;       // don't set "rs"-bits
	reg	|= (1ul << 31);  // set ADCAL bit
	adc->CR = reg;
	/* Wait until ADCAL == 0. */
	while (((reg = adc->CR) & (1ul << 31)) != 0)
		uprintf("# ADCCalib: waiting for ADCAL == 0. CR == 0x%08X.\n", reg);
}

/* In main(): */
  // ...
  calibrate_adc();
  uprintf("# INFO: ADC_CR register reads 0x%08X.\n", (unsigned)padc->Instance->CR);
  dump_calibration_values("ADC_Calib_BitBang", 0);
  uprintf("# INFO: ADC_CR register reads 0x%08X.\n", (unsigned)padc->Instance->CR);
  // ....

 

# @startup
# Offset (single-ended): 0x0
# LinearCalib[ 0] (rv 0): 0x0
# LinearCalib[ 1] (rv 0): 0x0
# LinearCalib[ 2] (rv 0): 0x0
# LinearCalib[ 3] (rv 0): 0x0
# LinearCalib[ 4] (rv 0): 0x0
# LinearCalib[ 5] (rv 0): 0x0
# LinearCalib[ 6] (rv 0): 0x0
# LinearCalib[ 7] (rv 0): 0x0
# LinearCalib[ 8] (rv 0): 0x0
# LinearCalib[ 9] (rv 0): 0x0
# LinearCalib[10] (rv 0): 0x0
# LinearCalib[11] (rv 0): 0x0
# LinearCalib[12] (rv 0): 0x0
# LinearCalib[13] (rv 0): 0x0
# LinearCalib[14] (rv 0): 0x0
# LinearCalib[15] (rv 0): 0x0
# ADCCalib: waiting for ADC to turn off. CR == 0x10000203.
# ADCCalib: waiting for ADCAL == 0. CR == 0x90010200.
# ADCCalib: waiting for ADCAL == 0. CR == 0x90010200.
# INFO: ADC_CR register reads 0x1FC10200.
# ADC_Calib_BitBang
# Offset (single-ended): 0x3ee
# LinearCalib[ 0] (rv 0): 0x203
# LinearCalib[ 1] (rv 0): 0x205
# LinearCalib[ 2] (rv 0): 0x207
# LinearCalib[ 3] (rv 0): 0x20e
# LinearCalib[ 4] (rv 0): 0x211
# LinearCalib[ 5] (rv 0): 0x221
# LinearCalib[ 6] (rv 0): 0x211
# LinearCalib[ 7] (rv 0): 0x202
# LinearCalib[ 8] (rv 0): 0x204
# LinearCalib[ 9] (rv 0): 0x204
# LinearCalib[10] (rv 0): 0x20d
# LinearCalib[11] (rv 0): 0x20d
# LinearCalib[12] (rv 0): 0x216
# LinearCalib[13] (rv 0): 0x21d
# LinearCalib[14] (rv 0): 0x20a
# LinearCalib[15] (rv 0): 0x202
# INFO: ADC_CR register reads 0x10010201.
0x0809
0x0810
0x0820
0x0824
0x0828
0x082d
0x0834
0x083b
0x0841
[... more ADC values...]

The data looks the same as ever:

stm32h753_ADC2_16R5,2uF2_AD8608_bitBangigSelfCalib.png

" My STM32H753 are already the (newer [?]) hardware revision "V". The differences that I found are:

  1. revision "V" hat two instead of one BOOST bits in the CR register, and
  2. revision "V" features a fixed internal ADC clock divider (by 2)."

What I usually do is check a specific reg to make sure:

ZI2:
Reg @ 0x5C001000 (DBGMCU_IDC): 0b0010 0000 0000 0011 0110 0100 0101 0000 (0x20036450)
Bits 31:16 REV_ID[15:0]: Revision
0x1001 = Revision Z
0x1003 = Revision Y <----
0x2001 = Revision X
0x2003 = Revision V
Bits 15:12 Reserved, must be kept at reset value.
Bits 11:0 DEV_ID[11:0]: Device ID
0x450: STM32H742, STM32H743/753 and STM32H750
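
On a running target, the standard HAL wrappers give the same information without hand-decoding the register (sketch, printed here with the uprintf() from the first post):

uint32_t rev = HAL_GetREVID();   /* upper 16 bits of DBGMCU_IDC: 0x1003 = Y, 0x2003 = V, ... */
uint32_t dev = HAL_GetDEVID();   /* lower 12 bits: 0x450 = STM32H742/743/753/750             */
uprintf("# REV_ID = 0x%04lX, DEV_ID = 0x%03lX\n", (unsigned long)rev, (unsigned long)dev);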

 

"  The BOOST bits are '0b10' in my setup; as I read the datasheets, this should be okay for my ADC clock of 37.5 MHz (before internal "/ 2" division), i.e. 18.75 MHz after the divider...  "

Since there is an issue, why not test 0b11?

Have RM0433?

" Note: RM0433
adc_sclk is the system clock or system clock divided by two: when the AHB prescaler is set
to 1 (HPRE[3:0] = 0XXX in RCC_CFGR register), adc_sclk is equal to sys_ck, otherwise
adc_sclk corresponds to sys_ck/2.  "

Frankly, I don't understand what it says, as "/2" and "equal to" are stated in the same sentence.

For myself, I'd get RCC->CFGR printed, and change it up and down till I see something.

May be useful:  float freq = HAL_RCCEx_GetPeriphCLKFreq(RCC_PERIPHCLK_ADC);

Regarding code, 

reg &= ~0x3Fu;        // don't set "rs"-bits
	reg &= ~(1ul << 29);  // disable Deep-power-down
	reg |= (1ul << 28);

I don't know why "u" is appended to the hex value, and why ul instead of UL; could be my C knowledge is rusty.

 

 


@MasterT wrote:

What I usually do is check a specific reg to make sure:

ZI2:
Reg @ 0x5C001000 (DBGMCU_IDC): 0b0010 0000 0000 0011 0110 0100 0101 0000 (0x20036450)
Bits 31:16 REV_ID[15:0]: Revision
0x1001 = Revision Z
0x1003 = Revision Y <----
0x2001 = Revision X
0x2003 = Revision V
Bits 15:12 Reserved, must be kept at reset value.
Bits 11:0 DEV_ID[11:0]: Device ID
0x450: STM32H742, STM32H743/753 and STM32H750

 


Both the device marking and the DBGMCU_IDC register indicate I have Hardware Revision 'V'.

 


@MasterT wrote:

"  The BOOST bits are '0b10' in my setup; as I read the datasheets, this should be okay for my ADC clock of 37.5 MHz (before internal "/ 2" division), i.e. 18.75 MHz after the divider...  "

Since there is an issue, why not test 0b11?


Setting BOOST to '0b11' reduces the height of the jumps a bit, but not much (i.e. the "main" jump at ADC code 0x8000 has height ~20 instead of ~25).
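
For anyone wanting to reproduce this: with the rev-V CMSIS headers (where ADC_CR_BOOST is the two-bit field at bits 9:8), BOOST can be forced with a direct register write while the ADC is disabled (a sketch; as far as we can tell the HAL normally derives BOOST from the clock frequency during HAL_ADC_Init()):

uint32_t cr = ADC2->CR;
cr &= ~0x3Fu;                      /* don't re-trigger the write-1 "rs" bits */
cr &= ~ADC_CR_BOOST;               /* clear the BOOST[1:0] field             */
cr |= (0x3u << ADC_CR_BOOST_Pos);  /* BOOST = 0b11                           */
ADC2->CR = cr;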

 


@MasterT wrote:

Have RM0433?

" Note: RM0433
adc_sclk is the system clock or system clock divided by two: when the AHB prescaler is set
to 1 (HPRE[3:0] = 0XXX in RCC_CFGR register), adc_sclk is equal to sys_ck, otherwise
adc_sclk corresponds to sys_ck/2.  "

Frankly, I don't understand what it says, as "/2" and "equal to" are stated in the same sentence.

For myself, I'd get RCC->CFGR printed, and change it up and down till I see something.

May be useful:  float freq = HAL_RCCEx_GetPeriphCLKFreq(RCC_PERIPHCLK_ADC);


Regarding the clocks: I don't use adc_sclk; I use PLL2P as the ADC clock source ("async clock"), which results in adc_ker_ck_input = 37.5 MHz and f_adc_ker_ck = 18.75 MHz:

lura_0-1743085188955.png
Interestingly, HAL_RCCEx_GetPeriphCLKFreq(RCC_PERIPHCLK_ADC) returns 3750000 (HAL bug?); so it doesn't take into account the fixed "/2" divider in the figure above.

Note that I trigger the ADC via TIM15 every 100 µs; my timing calculation is:
7 channels * 16 oversampling * (8.5 {t_acq} + 7.5 {t_conv}) cycles / (37.5 MHz / 2) = 95.57 µs < 100 µs.
When I set the prescaler PRESC[3:0] to "/2", the sampling rate is halved (i.e. one oversampled sample every 200 µs); thus I conclude that my understanding of f_ADC = f_adc_ker_ck = 18.75 MHz is correct.
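
The same arithmetic as a tiny host-side sanity check (sketch):

#include <stdio.h>

int main(void)
{
    const double f_adc  = 37.5e6 / 2.0;            /* adc_ker_ck after the fixed /2 (rev V) */
    const double cycles = 8.5 + 7.5;               /* t_acq + t_conv per conversion         */
    const double t_scan = 7 * 16 * cycles / f_adc; /* 7 channels, 16x oversampling          */

    printf("scan time = %.2f us\n", t_scan * 1e6); /* ~95.57 us, fits the 100 us trigger    */
    return 0;
}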

I also tested halving the ADC clock and compensating by setting the oversampling to 8 (instead of 16). With this setting the jumps remain, but the noise is worse.

 

 


@MasterT wrote:

Regarding code, 

reg &= ~0x3Fu;        // don't set "rs"-bits
	reg &= ~(1ul << 29);  // disable Deep-power-down
	reg |= (1ul << 28);

I don't know why "u" is appended to the hex value, and why ul instead of UL; could be my C knowledge is rusty.


The "u" (or "U", which does the same) suffix is for "unsigned", and can be ignored here [Programming style: whenever possible, avoid mixing signed and unsigned ints]. "ul" and "UL" suffixes are interchangable in C, and stand for "unsigned long"; which makes sure the operand is at least 32 bits (e.g. on small/old systems where int is only 16 bits).