2015-09-28 04:08 AM
hi, I'm logging data to a USB drive from an STM32 board. As the number of files in the working directory grows, to around 150-200, opening a new file starts taking noticeably longer.
How can this be avoided? I don't want to spend too much time opening a new file. FatFs details: I'm using the open-source FatFs by ELM (ChaN). Cause: every time we open a new file, FatFs scans the pendrive to check whether a file with the same name already exists. So if there are 200 files, it checks all 200 of them against the new file's name. #embedded_system #fat_fs #stm
2015-09-28 05:00 AM
I don't recommend saving too many files in one directory; it WILL decrease the access speed. You should restructure the way you are saving the files, for example with a scheme like this:
<FirstLetter>\<FileName>, e.g. text.txt goes into t\text.txt, doc.doc goes into d\doc.doc, 123.avi goes into 1\123.avi, etc. Or even <FirstLetter>\<SecondLetter>\<FileName> if there are too many files. Or use a date-based directory scheme if you have a working RTC.
2015-09-28 06:10 AM
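A minimal sketch of the first-letter bucketing scheme described above. `bucket_path` is a hypothetical helper, not part of FatFs; with FatFs you would `f_mkdir()` the bucket directory once and then pass the built path to `f_open()`. (FatFs accepts `/` as a path separator.)

```c
#include <ctype.h>
#include <stdio.h>

/* Build "<bucket>/<name>" into out, where the bucket is the lowercased
 * first character of the file name, e.g. "text.txt" -> "t/text.txt".
 * Returns out for convenience. */
static char *bucket_path(const char *name, char *out, size_t out_len)
{
    char bucket = (char)tolower((unsigned char)name[0]);
    snprintf(out, out_len, "%c/%s", bucket, name);
    return out;
}
```

With this scheme each bucket directory stays short, so the name-collision scan FatFs does on `f_open()` walks far fewer entries.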
Is the name you're using unique? Or are you trying to figure out what file names in a sequence already exist?
Pretty much any file system is going to have to navigate the current directory; you could try to come up with a scheme to optimize FatFs's if you think it's non-optimal. At the DiskIO level you could add caching so reads to the media are bigger than a single sector.
2015-09-28 09:52 PM
@qwer.asdf: My files all have the same extension. I could still make new directories with 100 files each, but after some time I'll have 200 folders, and then the same issue will come up again.
@clive1: Yeah, the name for each new file is unique, but the FS always checks whether a file with that name already exists in the directory. Can you explain a bit what kind of caching you are talking about, at the disk I/O level?
2015-09-28 10:54 PM
Well, whatever file system you use is going to have to enumerate the directory. If you use long file names, they eat up significantly more directory slots on a FAT volume, so using 8.3 file names will help significantly there.
Reading single sectors from any media is a highly inefficient approach; most systems get appreciably better performance if you read a couple of KB at a time, and most structures on the media are contiguous in nature. Sub-directories on FAT systems are at least one cluster in size. On flash media the clusters are typically aligned to the erase-block boundaries of the underlying NAND devices, and micro SD cards have very specific formatting requirements.
At the diskio.c level you could implement a scheme that handles reads in larger sizes and caches them. It's going to depend on how much free RAM you can commit to such a scheme, and on the nature of the read requests. You could instrument your current read routines to see what the access patterns are. How fast is your subsystem now? How much data are you writing to the files? How frequently are you creating new files? How much faster does it need to get? How much RAM can you commit to making things faster?
2015-09-28 11:08 PM
I'd probably look at doing 4KB reads on 4KB boundaries, perhaps committing 32KB (8 blocks) to a cache with an LRU replacement scheme. You want writes to update the cached copies and write through to the media, but not require the cache to be written back when a block is discarded.
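A sketch of the cache just described: 4KB blocks (8 x 512-byte sectors), 8 blocks (32KB total), LRU replacement, write-through so evicted blocks never need a write-back. This is an illustration under assumed names, not FatFs code: `media_read`/`media_write` stand in for the real SD/USB driver, and in a real port `cached_read`/`cached_write` would be called from `disk_read()`/`disk_write()` in diskio.c.

```c
#include <stdint.h>
#include <string.h>

#define SECTOR_SIZE     512
#define SECTORS_PER_BLK 8                       /* 4KB cache line   */
#define BLK_SIZE        (SECTOR_SIZE * SECTORS_PER_BLK)
#define NUM_BLKS        8                       /* 32KB cache total */

/* Simulated media for the sketch; replace with the real driver. */
#define MEDIA_SECTORS 256
static uint8_t  media[MEDIA_SECTORS * SECTOR_SIZE];
static unsigned media_reads;                    /* counts physical reads */

static void media_read(uint32_t sector, uint8_t *buf, uint32_t count)
{
    memcpy(buf, &media[sector * SECTOR_SIZE], count * SECTOR_SIZE);
    media_reads++;
}

static void media_write(uint32_t sector, const uint8_t *buf, uint32_t count)
{
    memcpy(&media[sector * SECTOR_SIZE], buf, count * SECTOR_SIZE);
}

typedef struct {
    uint32_t first_sector;  /* first sector held in this 4KB block */
    uint32_t stamp;         /* last-use tick, for LRU; 0 = invalid */
    uint8_t  data[BLK_SIZE];
} cache_blk_t;

static cache_blk_t cache[NUM_BLKS];
static uint32_t    tick;

/* Return the cache block covering 'sector', filling it from the media
 * with one large (4KB) read on a miss. LRU victim needs no write-back
 * because all writes go through to the media immediately. */
static cache_blk_t *find_block(uint32_t sector)
{
    uint32_t first = sector - (sector % SECTORS_PER_BLK); /* 4KB aligned */
    cache_blk_t *victim = &cache[0];

    for (int i = 0; i < NUM_BLKS; i++) {
        if (cache[i].stamp && cache[i].first_sector == first) {
            cache[i].stamp = ++tick;            /* hit: refresh LRU stamp */
            return &cache[i];
        }
        if (cache[i].stamp < victim->stamp)     /* invalid (0) or oldest  */
            victim = &cache[i];
    }
    media_read(first, victim->data, SECTORS_PER_BLK);
    victim->first_sector = first;
    victim->stamp = ++tick;
    return victim;
}

/* Single-sector read served from the cache (call from disk_read). */
static void cached_read(uint32_t sector, uint8_t *buf)
{
    cache_blk_t *b = find_block(sector);
    memcpy(buf, &b->data[(sector % SECTORS_PER_BLK) * SECTOR_SIZE],
           SECTOR_SIZE);
}

/* Write-through: update the cached copy if present, always write media. */
static void cached_write(uint32_t sector, const uint8_t *buf)
{
    uint32_t first = sector - (sector % SECTORS_PER_BLK);
    for (int i = 0; i < NUM_BLKS; i++)
        if (cache[i].stamp && cache[i].first_sector == first)
            memcpy(&cache[i].data[(sector % SECTORS_PER_BLK) * SECTOR_SIZE],
                   buf, SECTOR_SIZE);
    media_write(sector, buf, 1);
}
```

The payoff for the directory-scan problem: FatFs walks directory entries sector by sector, so once the first sector of a directory cluster is touched, the next seven sector reads are cache hits instead of media transactions.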