
Code breaks once compiled with optimisation enabled

zhiyongwoo
Associate III

Hi all,

I have uploaded the important parts of the code here so experts can review where my code causes the compiler to go wrong when optimisation is enabled.

I have tested the code compiled with no optimisation and it works without issue, but it starts to break as soon as I select optimisation -O1.

Basically, my code uses DMA2 Stream2 to send data from memory to the GPIOE->ODR register. I have enabled only one interrupt (triggered by both edge changes on Channel 2) to change the value in the variable and re-enable DMA2 Stream2 for the next edge.

In the main.c while loop, I generate a simulated 10 uS pulse and expect the interrupt and the DMA transfer to happen at the same time.

Without optimisation enabled, the code works as expected.

With optimisation -O1, the interrupt works but there is no DMA2 transfer after the 1st edge change. I suspect the optimisation has broken the code in the interrupt.

Could you please suggest how to write this code so that it works with optimisation enabled?

Appreciate your help. Thank you.

In the main.c

/* Private user code ---------------------------------------------------------*/
/* USER CODE BEGIN 0 */
uint32_t GPIOE_IDR_buff;
uint32_t GPIOE_BSRR_buff;
uint32_t GPIOE_ODR_buff;
uint32_t GPIOE_ODR_DataHI;
uint32_t toggle;
/* USER CODE END 0 */
 
int main(void)
{
  /* USER CODE BEGIN 1 */
  GPIOE_IDR_buff = 0;
  GPIOE_BSRR_buff = (uint32_t)GPIB_D0_Pin << 16U;
  GPIOE_ODR_buff = 0x80FF; //Set PIN15 HI during Testing
  GPIOE_ODR_DataHI = 0x00FF;
  toggle = 0;
  /* USER CODE END 1 */
  
 
  /* MCU Configuration--------------------------------------------------------*/
 
  /* Reset of all peripherals, Initializes the Flash interface and the Systick. */
  HAL_Init();
 
  /* USER CODE BEGIN Init */
  /*****************************************************************************
   *
   * /CS       = TIM1 - CC Chn2 + DMA + Interrupt
   * /RD       = TIM1 - CC Chn3 + DMA
   * /WR       = Digital Input (no pull-up)
   * RS2 - RS0 = Digital Input (no pull-up)
   * D7  - D0  = Digital Output (Open Drain)
   *
   ****************************************************************************/
  /* USER CODE END Init */
 
  /* Configure the system clock */
  SystemClock_Config();
 
  /* USER CODE BEGIN SysInit */
 
  /* USER CODE END SysInit */
 
  /* Initialize all configured peripherals */
  MX_GPIO_Init();
  MX_DMA_Init();
  MX_TIM1_Init();
  MX_TIM12_Init();
  /* USER CODE BEGIN 2 */
 
  /*****************************************************************************
   *
   * Configure IO for GPIB
   * /CS on TIM1_Chn2.
   * - TIM1_Chn2 trigger the TIM1_CC_IRQHandler
   * - TIM1_Chn2 DMA activates to Initialise GPIOE
   *
   * /RD on TIM1_Chn3.
   * - TIM1_Chn3 DMA activates the transfer from GPIOE_ODR_buff to GPIOE_ODR
   * - Enable only valid CS signal
   *
   * /CS on TIM1_Chn4
   * - TIM1_Chn4 DMA activates by software to read GPIOE IDR
   *
   ****************************************************************************/
  /*****************************************************************************
   * Step 1: Configure & Enable DMA
   ****************************************************************************/
  HAL_DMA_Start(htim1.hdma[TIM_DMA_ID_CC2],
                  (uint32_t)&GPIOE_ODR_buff,
                  (uint32_t)&GPIB_D0_GPIO_Port->ODR, 1);
  /*****************************************************************************
   * Step 2: Start Comparators
   *         - Chn2 = CS with DMA event + Interrupt
   *         - Chn3 = RD with DMA event
   *         - Chn4 = GND with software DMA event
   ****************************************************************************/
  __HAL_TIM_ENABLE_IT(&htim1, TIM_IT_CC2);
  TIM_CCxChannelCmd(htim1.Instance, TIM_CHANNEL_2, TIM_CCx_ENABLE);
 
  /*****************************************************************************
   * Step 3: ENABLE TIM1 DMA Request on CHN2,4 (Last)
   ****************************************************************************/
  __HAL_TIM_ENABLE_DMA(&htim1, TIM_DMA_CC2);
 
  /*****************************************************************************
   * Step 4: Start TIM1
   ****************************************************************************/
  htim1.Instance->SR = 0x00;
  __HAL_TIM_ENABLE(&htim1);
 
  /*****************************************************************************
   * TIM12
   * - Use in the delay_uS() to give a precise delay in uS resolution
   ****************************************************************************/
  __HAL_TIM_ENABLE(&htim12);
 
 
  /* USER CODE END 2 */
 
  /* Infinite loop */
  /* USER CODE BEGIN WHILE */
  while (1)
  {
    /* USER CODE END WHILE */
 
    /* USER CODE BEGIN 3 */
 
    //delay 10uS
    delay_us(10);
    dma_syc_sig_GPIO_Port->BSRR = dma_syc_sig_Pin;
 
    //delay 10uS
    delay_us(10);
    dma_syc_sig_GPIO_Port->BSRR = (uint32_t)dma_syc_sig_Pin << 16U;
  }
  /* USER CODE END 3 */
}

In the main.h

extern uint32_t GPIOE_IDR_buff;
extern uint32_t GPIOE_ODR_buff;
extern uint32_t toggle;

In the stm32f4xx_it.c

extern TIM_HandleTypeDef htim1;
 
void TIM1_CC_IRQHandler(void)
{
  /* USER CODE BEGIN TIM1_CC_IRQn 0 */
 
  //htim1.Instance->SR = 0x00;
  if (toggle)
  {
    toggle = 0;
    GPIOE_ODR_buff = 0x80FF;
  }
  else
  {
    toggle = 1;
    GPIOE_ODR_buff = 0x00FF;
  }
 
  //Enable DMA again for the next triggers
  //Copy the value in the buff into DMA Stream memory
  //So changes on buff will not affect the value sent from DMA
  DMA2->LIFCR = DMA_LIFCR_CTCIF2|DMA_LIFCR_CHTIF2|DMA_LIFCR_CTEIF2|DMA_LIFCR_CDMEIF2;     //Clear DMA2S2 flag
  __HAL_DMA_ENABLE(htim1.hdma[TIM_DMA_ID_CC2]);
 
  htim1.Instance->SR = 0x00;
 
  return;
}

3 REPLIES
TDK
Guru

>  GPIOE_ODR_buff = 0x80FF;

>  GPIOE_ODR_buff = 0x00FF;

Setting GPIOx->ODR to either of these values will have the same effect. Only the lower 16 bits of ODR are active. That's not what you want, right?

> With Optimisation, -O1, interrupt works but no DMA2 after the 1st edge change.

What does "no DMA2 after the 1st edge change" mean? Are you saying GPIOE->ODR isn't getting set? How do you know?

> __HAL_DMA_ENABLE(htim1.hdma[TIM_DMA_ID_CC2]);

You need to set NDTR before enabling DMA. It'll be at 0 if the previous transfer is complete.
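For illustration only (a sketch of that suggestion, not code from the thread), re-arming the stream inside TIM1_CC_IRQHandler with an explicit NDTR reload could look like this, assuming the handle is htim1.hdma[TIM_DMA_ID_CC2] as in the original code:

  DMA_HandleTypeDef *hdma = htim1.hdma[TIM_DMA_ID_CC2];

  __HAL_DMA_DISABLE(hdma);                            // stream must be disabled before NDTR can be written
  DMA2->LIFCR = DMA_LIFCR_CTCIF2 | DMA_LIFCR_CHTIF2 |
                DMA_LIFCR_CTEIF2 | DMA_LIFCR_CDMEIF2; // clear DMA2 Stream2 flags
  hdma->Instance->NDTR = 1;                           // reload the transfer count (1 item)
  __HAL_DMA_ENABLE(hdma);                             // re-enable for the next TIM1 CC2 request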


Hi TDK,

> Setting GPIOx->ODR to either of these values will have the same effect. Only the lower 16 bits of ODR are active. That's not what you want, right?

I am using DMA2 Stream 2 to copy the value from the variable GPIOE_ODR_buff to GPIOE->ODR automatically on a DMA2 request from the TIM1 Channel 2 capture event. This gives me a response time of 47 nS, compared to approximately 150 nS when writing directly to GPIOE->ODR.

> What does "no DMA2 after the 1st edge change" mean? Are you saying GPIOE->ODR isn't getting set? How do you know?

I have a mixed-signal scope to monitor the signal changes on GPIOE pin 15. On the first DMA request, pin 15 changes from 0 (its default state) to high when the edge change is detected on the other pin. Then, in the interrupt, GPIOE_ODR_buff is switched to 0x00FF. After that, the DMA no longer works on the next interrupt. So on the scope I only see a single square wave on pin 15, and then it stays low.

I believe DMA2 is still working, but I suspect GPIOE_ODR_buff is not being updated, even though the debug window shows that it is.

As I said before, my code works fine without optimisation. When optimisation is enabled (no matter which level), it starts to break.

> You need to set NDTR before enabling DMA. It'll be at 0 if the previous transfer is complete.

NDTR has already been set before the while loop. Re-enabling the stream (setting the EN bit in DMA_SxCR) automatically restores the value I configured previously.

Qualify GPIOE_ODR_buff as volatile.
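For example (a sketch of that change only), the declarations would become:

  /* main.c */
  volatile uint32_t GPIOE_ODR_buff;

  /* main.h */
  extern volatile uint32_t GPIOE_ODR_buff;

The volatile qualifier tells the compiler the variable is accessed outside the normal program flow (here it is read by the DMA controller and written in the ISR), so accesses to it must not be optimised away or kept in a register.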

But this whole setup is unnecessarily complex. Set up a 2-halfword buffer in RAM with both output values, set DMA as circular with NDTR=2, start it, and remove the whole interrupt.
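A minimal sketch of that approach (the buffer name is illustrative; it assumes the same TIM1 CC2 DMA request and GPIB_D0_GPIO_Port as in the original code, with the stream configured as circular, memory-to-peripheral, half-word data size, memory increment enabled):

  /* Two output patterns; the circular stream alternates between them on every CC2 event */
  static uint16_t gpioe_odr_pattern[2] = { 0x00FF, 0x80FF };

  HAL_DMA_Start(htim1.hdma[TIM_DMA_ID_CC2],
                (uint32_t)gpioe_odr_pattern,
                (uint32_t)&GPIB_D0_GPIO_Port->ODR, 2);

  __HAL_TIM_ENABLE_DMA(&htim1, TIM_DMA_CC2);   /* DMA request on CC2 */
  /* TIM1_CC_IRQHandler and the re-arming code are no longer needed */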

JW