2021-01-11 10:21 AM
Hi,
I just ran into a strange problem: multiple writes to an open file fail if the writes happen in chunks of 1023 bytes, but work fine in chunks of the more common 1024 bytes.
Can someone confirm this (and maybe give an explanation :) )? As far as I know, f_write is not supposed to have such a limitation.
I am working on an STM32F7 with FatFs and an SD card configured for 1-bit bus mode (not 4-bit).
Example code:
FATFS m_SDFatFS;
FRESULT l_fatfsResult;

l_fatfsResult = f_mount( &m_SDFatFS, "", 1 );

// FatFs write test
{
    FIL l_fil;                    /* File object */
    uint8_t l_aBuf[1023] = {0};   // make sure your stack is big enough
    UINT l_sizeWritten;

    l_fatfsResult = f_open( &l_fil, "mytest.jpg", FA_CREATE_ALWAYS | FA_READ | FA_WRITE );
    if( l_fatfsResult != FR_OK )
    {
        printf( "Error creating file: %d", l_fatfsResult );
    }

    // First write of 1023 bytes: succeeds
    l_fatfsResult = f_write( &l_fil, l_aBuf, sizeof(l_aBuf), &l_sizeWritten );
    if( l_fatfsResult != FR_OK )
    {
        printf( "Error writing file: %d", l_fatfsResult );
    }
    else
    {
        printf( "Bytes written: %u", l_sizeWritten );
    }

    // Second write of 1023 bytes: fails unless the buffer size is 1024
    l_fatfsResult = f_write( &l_fil, l_aBuf, sizeof(l_aBuf), &l_sizeWritten );
    if( l_fatfsResult != FR_OK )
    {
        printf( "Error writing file: %d", l_fatfsResult );
    }
    else
    {
        printf( "Bytes written: %u", l_sizeWritten );
    }

    l_fatfsResult = f_close( &l_fil );
    if( l_fatfsResult != FR_OK )
    {
        printf( "Error closing file: %d", l_fatfsResult );
    }
}
If I change the buffer size to 1024, everything works; otherwise the second write fails (the first one is OK).
(I have not yet spent time looking deeper into this, since at this point I am not sure whether I will continue with FatFs/SD card.)
Regards
Bram
2021-01-11 12:52 PM
FatFs is capable of doing this; the error is coming from the DISKIO layer as a result of the multiple reads/writes you cause to occur.
You should instrument DISKIO to see what's happening there, and dig to find what specifically is faulting.
The layers below might have a dependency on the location and alignment of the buffer.
With DMA you'd need to be concerned with alignment and coherency. On the F7 with DMA I'd recommend using the DTCM RAM.
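Roughly, a DMA-safe staging buffer might look like the sketch below. This is only an illustration under a few assumptions: that your linker script defines a ".dtcm" section mapped to DTCM RAM (the section name is made up here, adjust it to your script), and that the CMSIS Cortex-M7 cache-maintenance helpers are available via your device header.

/* Sketch of a DMA-safe staging buffer for the SDMMC path on an STM32F7.
 * Assumptions: the linker script provides a ".dtcm" section mapped to
 * DTCM RAM, and the CMSIS SCB cache helpers are available (core_cm7.h
 * pulled in by the device header). */

#include <stdint.h>
#include "stm32f7xx.h"          /* CMSIS device header: SCB cache helpers */

#define SD_BLOCK_SIZE  512U

/* 32-byte alignment matches the Cortex-M7 data-cache line size.
 * Placing the buffer in DTCM sidesteps D-cache coherency entirely,
 * because DTCM is not cached, and on the F7 the DMA can reach it. */
static uint8_t s_sdStagingBuf[SD_BLOCK_SIZE]
    __attribute__((aligned(32), section(".dtcm")));

/* If the buffer has to live in cached SRAM instead of DTCM, clean the
 * D-cache before a DMA write to the card and invalidate it after a DMA
 * read from the card. */
static void sd_cache_clean_before_tx(void *buf, uint32_t len)
{
    SCB_CleanDCache_by_Addr((uint32_t *)buf, (int32_t)len);
}

static void sd_cache_invalidate_after_rx(void *buf, uint32_t len)
{
    SCB_InvalidateDCache_by_Addr((uint32_t *)buf, (int32_t)len);
}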
In polled mode, ST has made some particularly poor choices, handling transfers in the most inefficient way possible. The data transfer also cannot be interrupted: the SDMMC transfer can't stall at the hardware level, and the FIFO only provides limited protection.
You might try decomposing the read/write into single-sector transfers via a holding buffer (of known location and alignment), and see if the problem persists. A rough sketch of that approach follows.
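Something along these lines, as a sketch only: sd_write_single_block() is a placeholder for whatever one-block write your BSP / sd_diskio.c already uses, not a real API, and the include of "ff_gen_drv.h" assumes ST's FatFs glue layer for the DRESULT/BYTE/DWORD/UINT types.

/* Single-sector writes through a holding buffer of known location and
 * alignment. Every transfer handed to the hardware is exactly one
 * 512-byte sector copied from the caller's (possibly unaligned,
 * possibly cached) buffer. */

#include <string.h>
#include <stdint.h>
#include "ff_gen_drv.h"   /* DRESULT, BYTE, DWORD, UINT from the FatFs glue layer */

#define SD_BLOCK_SIZE 512U

/* Static, 32-byte aligned holding buffer: location and alignment are
 * under your control, unlike the buffer FatFs passes in. */
static uint8_t s_holdingBuf[SD_BLOCK_SIZE] __attribute__((aligned(32)));

/* Placeholder for the BSP's one-block write; returns 0 on success. */
extern int sd_write_single_block(const uint8_t *buf, uint32_t sector);

DRESULT SD_write_single_sector(BYTE lun, const BYTE *buff,
                               DWORD sector, UINT count)
{
    (void)lun;

    for (UINT i = 0; i < count; i++)
    {
        /* Stage one sector in the holding buffer ... */
        memcpy(s_holdingBuf, buff + (i * SD_BLOCK_SIZE), SD_BLOCK_SIZE);

        /* ... and write exactly one block from a known-good address. */
        if (sd_write_single_block(s_holdingBuf, sector + i) != 0)
        {
            return RES_ERROR;
        }
    }
    return RES_OK;
}

If the problem disappears with this decomposition, the fault is in how the multi-sector, arbitrarily aligned transfers are handled below DISKIO, not in FatFs itself.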
2021-01-24 11:08 AM
I have the same problem while reading the SD card. I've generated two projects with the latest CubeMX version, one for STM32F4 and another for STM32F7, with the same config (HAL + FatFs). The STM32F4 one is OK, but the SD_read function returns DISK_ERROR on the STM32F7.
The error occurs when the read pointer is around 1024. For example, when the read pointer is at 1023 and the read size is greater than 512, SD_read simply returns DISK_ERROR.
I've searched the forums, but I haven't been able to solve this serious problem yet.
I hope someone can help me find a solution 🙏. Thanks!
Best regards
Reza
2021-01-24 11:57 AM
Hi Reza,
I am continuing with a QSPI flash, so I am not going to investigate the issue I am having with SD cards any further, but it seems you are even worse off than me.
On my system reads work (at least if I stick to using 'nicely' sized buffers).
But reading your story: did you turn on ENABLE_SCRATCH_BUFFER in sd_diskio.c? I did, and it is necessary if you are using DMA (it comes with a performance hit, though, because of the extra copy action). For reference, the relevant defines in my sd_diskio.c look roughly like the snippet below.
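This is from the CubeMX-generated sd_diskio.c; the exact surrounding comments and USER CODE markers vary with the CubeMX/firmware-package version, so treat it as an indication of where to look rather than an exact copy. You enable the options by uncommenting the defines:

/* USER CODE BEGIN enableSDDmaCacheMaintenance */
#define ENABLE_SD_DMA_CACHE_MAINTENANCE  1   /* clean/invalidate the D-cache around DMA transfers (Cortex-M7 only) */
/* USER CODE END enableSDDmaCacheMaintenance */

/* USER CODE BEGIN enableScratchBuffer */
#define ENABLE_SCRATCH_BUFFER                /* copy unaligned FatFs buffers through a 4-byte aligned scratch buffer */
/* USER CODE END enableScratchBuffer */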
Good luck in any case...
Regards
Bram