2021-10-13 11:51 AM
Hi, I have issues using "FATFS R0.12c" over SDIO on my STM32F407. I write to a file over SDIO in the main loop, but I also have "TIM2/TIM3/CAN1 RX0/CAN1 RX1" interrupts which run other logic (for example, TIM2 sends a UART message every 100 ms to update the screen). Writing to the file sometimes works, but eventually it returns "hsd->ErrorCode = 0x20", which means "HAL_SD_ERROR_RX_OVERRUN".
I have tried: reducing interrupt duration; using DMA on SDIO with the highest-priority interrupts; using non-DMA SDIO settings; increasing "hsd.Init.ClockDiv" for slower writes. None of these completely eliminates the error.
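For reference, this is roughly the priority layout I tried (a sketch, not my exact code: the IRQ names are from the STM32F4 HAL headers, and DMA2 Stream3/Stream6 for SDIO RX/TX is the usual CubeMX mapping on the F407, which I'm assuming here):

```c
/* Sketch of the NVIC setup: SDIO and its DMA streams preempt the
 * application interrupts (lower number = higher priority on Cortex-M). */
HAL_NVIC_SetPriority(SDIO_IRQn,         0, 0);  /* highest */
HAL_NVIC_SetPriority(DMA2_Stream3_IRQn, 1, 0);  /* SDIO RX DMA (assumed mapping) */
HAL_NVIC_SetPriority(DMA2_Stream6_IRQn, 1, 0);  /* SDIO TX DMA (assumed mapping) */
HAL_NVIC_SetPriority(TIM2_IRQn,         5, 0);  /* application interrupts lower */
HAL_NVIC_SetPriority(TIM3_IRQn,         5, 0);
HAL_NVIC_SetPriority(CAN1_RX0_IRQn,     6, 0);
HAL_NVIC_SetPriority(CAN1_RX1_IRQn,     6, 0);
```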
Are those interrupts to blame if they occur during SDIO write?
How could I fix HAL_SD_ERROR_RX_OVERRUN?
Or recover from it? (Usually it leaves the file locked with FR_LOCKED and I am not able to write any more.)
2021-10-13 12:13 PM
Are you sure it's not some cascading failure?
I think looking at the top-level error from FatFs is a mistake, as is focusing on the last failure, where it finally dies.
Do you have instrumentation logs from the DISKIO layer showing entry/exit, with parameters and status, for the IO reads/writes leading up to the top-level failure?
Also, as I think I've previously stated, R0.12c is VERY OLD.
2021-10-13 01:47 PM
What cascading failure do you have in mind?
I am looking at the "bsp_driver_sd" layer error "HAL_SD_ERROR_RX_OVERRUN", which occurs here:
It seems to originate from "f_open" when it tries to seek to the end of the file and fails in "get_fat" (but it seems to be random, and sometimes it fails at other places in f_open when HAL_SD_ReadBlocks returns HAL_SD_ERROR_RX_OVERRUN):
f_open -> get_fat -> move_window -> SD_read -> disk_read (BSP_SD_ReadBlocks) -> HAL_SD_ReadBlocks returns HAL_SD_ERROR_RX_OVERRUN, and f_open fails with FR_DISK_ERR because get_fat returns clst = 0xFFFFFFFF.
Can you point me to a more specific place where I should debug further?
2021-10-14 07:47 AM
Any ideas how I could fix this?
2021-10-14 12:43 PM
If I am constantly writing to a file, what write size should I choose? Are 256 or 512 bytes too much? Should I choose smaller?
2021-10-15 12:47 PM
It seems to be better with a bigger write buffer. With a 4 KB write buffer I still sometimes get the error. I increased the buffer to 16 KB and now it has been running without issue for 30 minutes. I wonder if the buffer solved the issue or just mitigated it.
2021-10-22 12:30 PM
>>What cascading failure do you have in mind?
One that occurred earlier; FatFs is now trying to do something else, having ignored the previous error. Instrument the DISKIO code so you can follow the interactions with the hardware, and see what failed first.
>>If I am constantly writing to a file, what write size should I choose? Are 256 or 512 bytes too much? Should I choose smaller?
512 bytes is the typical sector size; doing anything less than this, or anything not aligned on a sector boundary, is going to cause a lot of unnecessary interaction with the media. A single sector is the least efficient access size: there's significant command overhead to fetch the data, and the erase blocks are upward of 128 KB on the NAND flash array underneath. Small writes also cause a lot of block switching among the free erased blocks and, when those are consumed, the erasure of dirty blocks. There's typically not a linear relationship between the sectors the card presents to the file system and the NAND memory used to implement the storage.
2021-10-26 04:19 AM
It seems that the bigger write buffer (16 KB instead of 512 bytes) has solved the issue. Or has it just mitigated it?