On Monday 25 July 2005 19:38, Martin Egholm Nielsen wrote:
> >> The situation: The mount times of my 32 megs JFFS2 device suddenly
> >> increased from seconds to (roughly) 9 minutes, after having written
> >> some files to the device.
> >> The explanation of this is, according to David Woodhouse's best guess,
> >> that the garbage collector uses all this time "for building up the
> >> node tree for every inode after mounting".
> >> The problem stems (I've been told) from the fact that I have been
> >> performing "big-file-gymnastics" (11 megs uncompressed - ~3 megs
> >> compressed) on the device. Possibly along with some small-file actions
> >> in between (?)...
> >> Now my question: Can YAFFS (1&2?) be provoked into showing similar
> >> "unfortunate" behaviour, or is it handled in another way?
> >
> > When I originally evaluated JFFS2, before proposing YAFFS, I identified
> > a few areas of concern wrt running JFFS2 on NAND. One of those was how
> > its garbage collection strategy would scale to large files and large
> > NAND partitions - which seems to be the problem you are running into
> > here.
>
> Exactly... However, with the recent CVS version, these 9 minutes reduced
> to ~45 sec - hence "only" twice what it takes to read the entire flash
> device (raw with dd)...
>
> Will YAFFS' worst case scenario be somewhat like the time it takes to
> perform "dd if=/dev/mtd0 of=/dev/null"? (In my situation ~20 secs)

There are some differences between YAFFS1 and YAFFS2 here, so I will split
them apart. To see this in code form, read yaffs_Scan for YAFFS1 scanning
and yaffs_ScanBackwards for YAFFS2 scanning.

The main difference is that YAFFS1 has deleted tag markers while YAFFS2
does not. This makes the scanning different.

YAFFS1 scanning looks more or less like:

  for(all blocks)
    for(all written chunks in block)
      read tags (ie read oob/spare)
      if(!tags.deleted) {
        if(tags say it is an object header)
          read whole chunk to extract file info
        else if it is a data chunk, insert into tree
      }

YAFFS2 scanning looks like:

  for(all written blocks backwards)
    for(all written chunks in block backwards)
      read tags (ie read oob/spare)
      if(tags say it is an object header and we don't yet have file info)
        read whole chunk to extract file info
      else if it is a data chunk, insert into tree

As you can see, in both cases YAFFS only reads the NAND once [well, YAFFS2
also first reads one chunk per block to determine whether the block is
written]. YAFFS only makes one pass. Most chunks will be data chunks, and
only the oob/spare needs to be read for those. The absolute worst case
would be a file system that is full of file headers (ie. thousands of
zero-length files). In that case, the whole NAND would have to be read
**once**.

> > YAFFS does not use compression and has a very clean and simple
> > overwrite and garbage collection model. This makes YAFFS garbage
> > collection and scanning a lot more predictable and cheaper.
>
> I actually could live without the compression - it's the ECC and
> robustness at power failure I'm concerned with...
>
> > During mount, both YAFFS and JFFS2 rebuild trees by scanning. YAFFS
> > "cheats" by using the spare/oob area as a place to store tags and by
> > using fixed-size chunks. This makes YAFFS scanning pretty fast. (Of
> > course it could be made faster still by saving the state as big
> > binary blobs).
>
> Nice! Is YAFFS2 faster at this?

As always, it depends...
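
Just to put some rough numbers on that "it depends": the scan cost is
roughly one oob/spare read per written chunk, plus one full chunk read per
object header. The little C sketch below is only a back-of-the-envelope
model for illustration - the device geometry and per-read timings are
made-up assumptions, not measured YAFFS figures:

  #include <stdio.h>

  int main(void)
  {
      /* Assumed geometry: 32MB NAND, 512-byte chunks (illustrative only) */
      const long   total_chunks  = (32L * 1024 * 1024) / 512;
      const double oob_read_us   = 50.0;   /* assumed cost of one oob/spare read */
      const double chunk_read_us = 400.0;  /* assumed cost of one full chunk read */

      /* Fraction of written chunks that are object headers.
       * Typical case: a few headers per file.
       * Worst case: 1.0 (thousands of zero-length files). */
      const double header_fraction[] = { 0.01, 1.0 };

      for (int i = 0; i < 2; i++) {
          double headers = total_chunks * header_fraction[i];
          /* every chunk costs an oob read; headers also cost a chunk read */
          double us = total_chunks * oob_read_us + headers * chunk_read_us;
          printf("header fraction %.2f -> scan roughly %.1f s\n",
                 header_fraction[i], us / 1e6);
      }
      return 0;
  }

With mostly data chunks the scan time is dominated by the cheap oob reads;
only the pathological header-only case approaches a full read of the
device.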
> > Now clearly a full YAFFS system will take longer to mount than an
> > empty one, but I don't think it will ever get anywhere near as nasty
> > as the case you mention above.
>
> Hopefully not :-)
>
> > BTW: I'm not knocking JFFS2 here, I think it definitely has its place
> > where space is very limited. The transition point is probably around
> > 16MB, depending of course on application needs.
>
> I'm really considering it...

I cannot force you, but I think you will be happy. Most people who have
moved have reported a significant improvement in performance.

-- Charles