SD card taking time to store data

Rsrma.1
Associate III

Hi everyone,

I'm using the STM32F407VG Discovery board with the STM32Cube_FW_F4_V1.24.2 firmware for my project, with FreeRTOS and FATFS included. I've created 3 tasks: a higher-priority task (running at 100 Hz) whose job is to do some calculation and store data on the SD card, and 2 other tasks (50 Hz, 25 Hz) that run perfectly and finish in 600-700 microseconds.

I'm writing 182 bytes of data (in structure format) to the SD card, but sometimes the write takes much longer than usual: it gets stuck in a function for a long time, although the data is stored correctly. I don't know the reason behind this.

I've found a related issue on GitHub; the link is given below:

https://github.com/STMicroelectronics/STM32CubeF4/issues/2

Below is the code snippet from sd_diskio.c referenced in that issue:

    if(BSP_SD_WriteBlocks_DMA((uint32_t*)buff,
                              (uint32_t) (sector),
                              count) == MSD_OK)
    {
      /* Get the message from the queue */
      event = osMessageGet(SDQueueID, SD_TIMEOUT);
 
      if (event.status == osEventMessage)
      {
        if (event.value.v == WRITE_CPLT_MSG)
        {
          timer = osKernelSysTick() + SD_TIMEOUT;
          /* block until SDIO IP is ready or a timeout occur */
          while(timer > osKernelSysTick())
          {
            if (BSP_SD_GetCardState() == SD_TRANSFER_OK)
            {
              res = RES_OK;
              break;
            }
          }
        }
      }
    }
#if defined(ENABLE_SCRATCH_BUFFER)
  } else
  {
    /* Slow path, fetch each sector a part and memcpy to destination buffer */
    int i;
    uint8_t ret;
#if (ENABLE_SD_DMA_CACHE_MAINTENANCE == 1)   
    /*
    * invalidate the scratch buffer before the next write to get the actual data instead of the cached one
    */
    SCB_InvalidateDCache_by_Addr((uint32_t*)scratch, BLOCKSIZE);
#endif
 
    for (i = 0; i < count; i++) {
      ret = BSP_SD_WriteBlocks_DMA((uint32_t*)scratch, (uint32_t)sector++, 1);
      if (ret == MSD_OK) {
        /* wait for a message from the queue or a timeout */
        event = osMessageGet(SDQueueID, SD_TIMEOUT);
 
        if (event.status == osEventMessage) {
          if (event.value.v == WRITE_CPLT_MSG) {
            memcpy((void *)buff, (void *)scratch, BLOCKSIZE);
            buff += BLOCKSIZE;
          }
        }
      }
      else
      {
        break;
      }
    }

They found the bug inside the for loop, but my code never enters that loop because it's in the else branch. My code goes through the if branch, and inside the while loop BSP_SD_GetCardState() takes a long time to respond, which makes the SD card task take a long time to finish.

    while (timer > osKernelSysTick())
    {
      if (BSP_SD_GetCardState() == SD_TRANSFER_OK)
      {
        res = RES_OK;
        break;
      }
    }
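For illustration, here is a minimal sketch of that wait loop with a yield added, so the task doesn't busy-poll at its high priority while the card is still programming the block internally. This assumes the CMSIS-RTOS v1 wrappers that the generated sd_diskio.c uses; the osDelay(1) is an illustrative addition, not part of the stock driver.

    /* Sketch only: same wait loop, with an illustrative osDelay(1) added so
       lower-priority tasks can run instead of this task busy-polling the card.
       The osDelay(1) is an assumption, not part of the generated sd_diskio.c. */
    timer = osKernelSysTick() + SD_TIMEOUT;
    while (timer > osKernelSysTick())
    {
      if (BSP_SD_GetCardState() == SD_TRANSFER_OK)
      {
        res = RES_OK;
        break;
      }
      osDelay(1);   /* yield for one tick while the card is busy */
    }

Note that this does not shorten the card's programming time; it only frees the CPU for other tasks while waiting.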

I've traced the function calls and their timing using the Percepio Tracealyzer tool. I've attached an Excel sheet that shows the times when the SD card task takes longer to write data to the SD card; between those times it works fine. If you look closely, this generally happens about every 22 seconds.

Can someone suggest why I'm getting this random behaviour and how I can reduce the time taken by the SD card?

@Community member, @Community member - please have a look at this. I think you can give some input here.

8 REPLIES
TDK
Guru

> Can someone suggest why I'm getting this random behaviour and how I can reduce the time taken by the SD card?

SD cards do not guarantee a constant write speed for every operation. Write time can vary, dramatically. I would guess that is what you're running into here, especially since the behavior is so periodic.

If you could get a logic analyzer on the SD card lines, it would confirm or deny if this is the problem.

To change the behavior, you would write data into a local buffer and flush it to the SD card when that buffer has enough data in it. Larger, infrequent transactions will have better performance than smaller, frequent ones.
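For illustration, a minimal sketch of that buffering approach, assuming the FatFs f_write() API already in the project; the buffer size, names, and the fixed 182-byte record size are assumptions for the sketch, not something from this thread:

    #include <stdint.h>
    #include <string.h>
    #include "ff.h"                       /* FatFs: FIL, UINT, f_write() */

    /* Sketch only: stage records in RAM and flush them to the card in large,
       sector-multiple chunks. Names and sizes are illustrative assumptions. */
    #define SECTOR_SIZE   512U
    #define LOG_BUF_SIZE  (8U * SECTOR_SIZE)    /* 4 KB staging buffer in RAM */

    static uint8_t  log_buf[LOG_BUF_SIZE];
    static uint32_t log_len = 0;

    /* Append one fixed-size record; flush whole sectors when the buffer fills. */
    void log_record(FIL *fp, const void *rec, uint32_t rec_len)
    {
      memcpy(&log_buf[log_len], rec, rec_len);
      log_len += rec_len;

      if (log_len + rec_len > LOG_BUF_SIZE)     /* next record would not fit */
      {
        UINT written;
        uint32_t whole = (log_len / SECTOR_SIZE) * SECTOR_SIZE;
        f_write(fp, log_buf, whole, &written);  /* one large, sector-multiple write */
        memmove(log_buf, &log_buf[whole], log_len - whole); /* keep the remainder */
        log_len -= whole;
      }
    }

In a real design the flush would typically run in a lower-priority task fed by a queue, so the 100 Hz task never blocks on the card at all.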

Rsrma.1
Associate III

Generally, protocols like UART, I2C, and SPI have a defined read/write speed, so why doesn't SDIO? Is it a limitation of the SD card, the SDIO hardware, the SDIO firmware, or FATFS? If it doesn't guarantee its timing, how can someone use it in real-time systems, where deadlines have to be met?

I'm using the Percepio Tracealyzer tool to see the time taken to write to the SD card, and I think it's predictable, because the longer writes generally occur about every 22 ms. If we can find the cause, then we should be able to change it.

Can you also give a reason why larger, infrequent transactions have better performance than smaller, frequent ones?

TDK
Guru

> If it doesn't guarantee its timing, how can someone use it in real-time systems, where deadlines have to be met?

Generally, you buffer the data and write it out at a metered pace. eMMC chips will have detailed specifications on how long certain operations take in the worst case. I imagine you can find the same for SD cards with enough persistence. You design your application to respect these requirements.

SD cards have a lot more going on than UART/I2C/SPI protocols. Not sure the comparison makes any sense. The clock speed of transmission is fixed (but adjustable), but there is a busy flag that the SD card asserts and the MCU needs to wait until that flag is de-asserted for certain operations.


The minimum read/write size is 512 bytes; ideally you want to write in some multiple of that. So start by buffering better.

The SDIO/FATFS layer doesn't cache or lazy-write, so you have to take more responsibility for finding efficiencies yourself. The SD cards are managing internal blocks of 128 KB or so.


I was reading one of the issues posted on this forum where you answered: don't f_sync and f_close your file repeatedly. By following that advice, that person got some improvement in speed.

But as far as I know, we should use f_sync frequently so that if power is lost, the data written so far is still saved. Am I right?

Please suggest any other ways to increase efficiency and decrease latency; I'll try to implement those in my code as well.

It commits file system data to the media, but burns flash life.
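For illustration, a minimal sketch of one common compromise, assuming the FatFs f_write()/f_sync() API already in the project; the sync interval and names are assumptions. Syncing every N records means a power loss costs at most the last N records, without paying the sync penalty on every write:

    #include "ff.h"                       /* FatFs: FIL, UINT, f_write(), f_sync() */

    /* Sketch only: sync periodically instead of after every write.
       SYNC_EVERY_N and the names are illustrative assumptions. */
    #define SYNC_EVERY_N  100U            /* at 100 Hz, roughly one f_sync per second */

    static uint32_t writes_since_sync = 0;

    void log_and_maybe_sync(FIL *fp, const void *rec, UINT rec_len)
    {
      UINT written;
      f_write(fp, rec, rec_len, &written);        /* data goes into the file */

      if (++writes_since_sync >= SYNC_EVERY_N)
      {
        f_sync(fp);                               /* commit FAT and directory entry */
        writes_since_sync = 0;
      }
    }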


What do you mean by "burns flash life"?

It causes excessive wear and reduces life span.

It is akin to polishing shoes at every step.
