
The highest number of files in a FATFS drive: STM32L4 Cube code

martinj3456
Associate III

I would like to know whether there is an upper limit on the number of files a FATFS drive can store.

Using an STM32L476 microcontroller, I'm saving an image file (300-400 KB) every 5 seconds with an 8-character filename 'xxxxxxx.JPG', where x is a hex digit. For each image write I use the general sequence f_open, f_lseek, f_write (whole image buffer), f_close and f_utime.
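Roughly, the write sequence looks like this (a minimal sketch of the FatFs calls only; the buffer, length and timestamp values are placeholders, not my actual code):

#include "ff.h"

/* Minimal sketch of the per-image write sequence (FatFs R0.11-era API).
 * image_buf/image_len and the FILINFO timestamp fields are placeholders. */
FRESULT save_image(const char *path, const void *image_buf, UINT image_len)
{
    FIL fil;
    FILINFO fno = {0};
    UINT written;

    FRESULT res = f_open(&fil, path, FA_WRITE | FA_CREATE_ALWAYS);
    if (res != FR_OK) return res;

    res = f_lseek(&fil, 0);                          /* start of the file */
    if (res == FR_OK)
        res = f_write(&fil, image_buf, image_len, &written);
    f_close(&fil);

    if (res == FR_OK && written == image_len) {
        fno.fdate = (WORD)(((2024 - 1980) << 9) | (1 << 5) | 1);  /* example date */
        fno.ftime = (WORD)(12 << 11);                             /* example time */
        res = f_utime(path, &fno);                   /* stamp modification time */
    }
    return res;
}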

Some references hint that the maximum number of files could be 65,536. However, when I save more than ~18,500 (I'm not sure of the exact number, but it is around 18,500), the file system gets corrupted: the MCU cannot write new images until the drive is reformatted, and the contents are lost. When the SD card is inserted into a computer, Windows asks to format the card.

In a separate piece of code, I've tried creating sub-folders, storing say 5,000 images in each folder; however, the file system still gets corrupted when it reaches ~18,500.

This website caught my attention https://blog.paddlefish.net/?page_id=1017

I do not understand the formula behind this. Still, the limit should be more than 32,768 files.
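If it helps, the arithmetic behind such formulas is straightforward: each 8.3 directory entry occupies 32 bytes, a directory grows one cluster at a time, and FAT32 caps a single directory at 65,536 entries. A rough sketch (my own illustration, not taken from the blog):

#include <stdint.h>

#define FAT_DIR_ENTRY_SIZE    32u      /* bytes per 8.3 directory entry    */
#define FAT32_MAX_DIR_ENTRIES 65536u   /* per-directory ceiling for FAT32  */

/* Entries that fit in one directory cluster, e.g. 32768 / 32 = 1024
 * for a 32 KB allocation unit; the directory grows by whole clusters. */
static uint32_t entries_per_cluster(uint32_t cluster_bytes)
{
    return cluster_bytes / FAT_DIR_ENTRY_SIZE;
}

/* Clusters a directory needs to index n short-name files. */
static uint32_t dir_clusters_for(uint32_t n_files, uint32_t cluster_bytes)
{
    uint32_t per = entries_per_cluster(cluster_bytes);
    return (n_files + per - 1) / per;
}

With short names only, ~18,500 entries is nowhere near the 65,536 ceiling, so the directory-entry limit alone should not explain the corruption.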

What could be the reason for this issue, and how can it be solved? Thank you in advance for your suggestions.

NB: 1) I am using a Lexar 633x 16GB card and formatting the card before use as FAT32 (default file system) with the default allocation unit size.

2) In the Cube-generated FATFS code, I'm using the DMA-based SD_WriteBlock functions.

3) I'm using the short file name format via #define _USE_LFN 0

8 REPLIES
Ozone
Lead

A large number of files means long lists to process for the SD card controller, probably messing up the timing, or triggering firmware bugs.

I remember having hit the 64k files per directory limit in a PC/WinNT4.0 based project, making the PC almost inoperable.

Otherwise, I have no specific experience with such a large number of files on SD cards.

Have you tried to organize files in sub-folders, say, one per day ?

The original FAT system had a root-directory file limit, and remember that long file names consume multiple slots. Newer versions just allocate additional clusters, probably up to 2 GB deep. The limits likely aren't realistically reached if implemented/tested properly.

The real issue, as @Ozone suggests, is the unmanageability of such large directories, both in terms of being able to read and hold the whole thing, and in the need to enumerate it to find and add files at the end. Most FatFs implementations don't cache.

From a human perspective, finding things amongst the weeds and having usable names is a problem. When I'm recording hourly log files, I break the directories by YEAR/DAY, and the files have names which will sort chronologically.
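For example, something along these lines (a hedged sketch; the date source and the HHMMSSnn naming scheme are illustrative, not the code I actually use):

#include <stdio.h>
#include "ff.h"

/* Build a YEAR/DAY directory and a chronologically sortable 8.3 name,
 * e.g. "2024/137/14350002.JPG". Date/time arguments are placeholders. */
FRESULT make_image_path(char *path, size_t len,
                        unsigned year, unsigned day_of_year,
                        unsigned hh, unsigned mm, unsigned ss, unsigned seq)
{
    char dir[16];
    FRESULT res;

    snprintf(dir, sizeof dir, "%04u", year);
    res = f_mkdir(dir);                      /* tolerate an existing folder */
    if (res != FR_OK && res != FR_EXIST) return res;

    snprintf(dir, sizeof dir, "%04u/%03u", year, day_of_year);
    res = f_mkdir(dir);
    if (res != FR_OK && res != FR_EXIST) return res;

    snprintf(path, len, "%s/%02u%02u%02u%02u.JPG", dir, hh, mm, ss, seq);
    return FR_OK;
}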

martinj3456
Associate III

Thank you both for your suggestions,

Over the last few days, I have tried storing 3,000 images (the highest number of images my system captures per day) in each folder and found the same drive-corruption issue at ~18,000 images. For both folder and file names, I just used auto-incremented integers, say 1, 2, 3, ...

Maybe there's something more to it than the filename. The JPG images I saved were 240 KB each. However, when I ran a test storing 160 KB .jpg files, the card could store 21,000+ images.

Is there any other factor, say the total number of usable clusters/sectors?

I used Lexar 633 cards of 16GB and 32GB capacity, formatted with the default 4KB and 16KB (for the 16GB card) and the default 8KB and 32KB (for the 32GB card) allocation unit sizes. Same results in all cases.
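One rough cross-check, taking the file counts and sizes above at face value: the 240 KB test fails close to 4 GiB of total data, while 21,000 files of 160 KB is still only about 3.2 GiB, so perhaps the relevant factor is total bytes written rather than file count or cluster size.

#include <stdint.h>
#include <stdio.h>

/* Back-of-the-envelope totals for the two tests above
 * (file counts/sizes as quoted; 1 KB = 1024 bytes assumed). */
int main(void)
{
    uint64_t t1 = 18500ULL * 240 * 1024;   /* ~4.23 GiB when corruption hit   */
    uint64_t t2 = 21000ULL * 160 * 1024;   /* ~3.20 GiB and still writing OK  */
    printf("240 KB x 18500 = %.2f GiB\n", t1 / (1024.0 * 1024.0 * 1024.0));
    printf("160 KB x 21000 = %.2f GiB\n", t2 / (1024.0 * 1024.0 * 1024.0));
    return 0;
}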

Thank you in advance for your suggestions.

Are you using reasonably current FatFs? One of the R0.13 series?

R0.11 definitely has issues


Oh, yes, I am using R0.11.

I'm seeing R0.12c in the most recent CubeL4 driver. Should I try that, or do I need to port R0.13 from elsewhere?

Thank you for your suggestion.

The other, not so insignificant, component is the SD card.

Have you tried different brands ?

SD cards contain a controller with firmware - despite following the same standard, they might behave differently under certain conditions.

Not sure if alternatives like eMMC would make a difference.

martinj3456
Associate III

What I am facing here has been reported by people in other forums:

https://github.com/ARMmbed/mbed-os/pull/5829

https://community.nxp.com/thread/321008

I've followed their tests and stored 512 KB files. As they mentioned, when the file count reached 8,160 (~4 GB), the file system got corrupted, irrespective of SD card size/brand.

In both of these posts, they point to a multiplication overflow of block size x sector number in the higher-level sector-address calculation. Hence, the file system gets corrupted once a write passes sector 8,388,608, i.e. a byte offset of 8,388,608 x 512 = 4 GiB, which no longer fits in 32 bits.
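To illustrate the kind of overflow they describe (variable names here are illustrative, not the actual driver code):

#include <stdint.h>

/* Byte-addressed cards: the driver converts a sector number into a byte
 * address. Done in 32-bit arithmetic, the product wraps at sector
 * 8,388,608, because 8,388,608 * 512 = 4 GiB no longer fits a uint32_t. */
uint32_t byte_addr_32(uint32_t sector)
{
    return sector * 512u;              /* wraps to 0 at sector 0x800000 */
}

uint64_t byte_addr_64(uint32_t sector)
{
    return (uint64_t)sector * 512u;    /* widened before the multiply: OK */
}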

I've found the same in HAL_SD_WriteBlocks_DMA() in stm32l4xx_hal_sd.c

sdmmc_datainitstructure.DataLength  = BlockSize * NumberOfBlocks;

whereas DataLength is declared as 32-bit, NumberOfBlocks is 32-bit, and BlockSize is 512.

However, declaring DataLength as 64-bit alone did not solve my issue. Is there anything else to change in the DMA configuration or the low-level libraries?

Has anyone faced this issue, or does anyone have any suggestions?

>>However, declaring DataLength as 64-bit alone did not solve my issue.

Because you're not transferring 4 GB in one go, NumberOfBlocks in a single transfer is likely less than 128 (64 KB); it will typically be smaller still, as the cluster size is what FatFs manages.

ST had problems for YEARS with its insistence on converting to byte addressing and back-and-forth to block addressing. I think most of that is resolved now. Sub-4GB SD cards use byte addressing, but the file system and block-access functions don't need to. A 32-bit LBA will get you to 2 TB.
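Roughly, the distinction looks like this (a sketch only; field and macro names follow the STM32 HAL convention, so check your driver's actual definitions):

#include <stdint.h>
#include "stm32l4xx_hal.h"

/* SDSC (standard capacity) cards take a byte address on the bus, so the
 * conversion must be widened to 64-bit (or the card kept under 4 GB);
 * SDHC/SDXC cards take the block number directly, so a 32-bit LBA with
 * 512-byte blocks reaches 2 TB without any multiplication at all. */
static uint64_t card_address(SD_HandleTypeDef *hsd, uint32_t block_addr)
{
    if (hsd->SdCard.CardType == CARD_SDSC)
        return (uint64_t)block_addr * 512u;   /* byte addressing */
    else
        return block_addr;                    /* block addressing */
}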

>>Is there anything else to change in DMA configuration or low-level libraries?

Check the CAPACITY computation, and also whether it thinks the card is SD or SDHC/XC
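One way to sanity-check that (a sketch; field names are per the STM32L4 HAL headers, verify against your version):

#include <stdint.h>
#include "stm32l4xx_hal.h"

/* Report the card capacity; widen before multiplying, otherwise the byte
 * capacity of a 16/32 GB card overflows a uint32_t. CardType distinguishes
 * CARD_SDSC from CARD_SDHC_SDXC. */
static uint64_t card_capacity_bytes(SD_HandleTypeDef *hsd)
{
    HAL_SD_CardInfoTypeDef info;

    if (HAL_SD_GetCardInfo(hsd, &info) != HAL_OK)
        return 0;

    return (uint64_t)info.LogBlockNbr * info.LogBlockSize;
}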

>>Has anyone faced this issue, or does anyone have any suggestions?

I'm working with 200 GB and 400 GB cards.
