[Yaffs] heavy file usage in yaffs
Charles Manning
manningc2@actrix.gen.nz
Tue, 25 Jan 2005 09:46:50 +1300
There is one further speed-up I'd recommend.
The strcat has to continually search for the end of the names string, and the
search gets longer and longer as the string grows.
Somewhat faster would be to change the following:
> sprintf(info, "%s length %ld mode %x\n",name,s.st_size,s.st_mode);
> strcat(names, info);
to
namesp += sprintf(namesp,"%s length %ld mode %x\n",name,s.st_size,s.st_mode);
where namesp is a char * local initialised to names before entering the loop.
This will eliminate the strcat searching and copying.
# disclaimer buffer overruns etc.
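For concreteness, here is a minimal sketch of the bounded, pointer-advancing
append. NAMES_SIZE and append_entry are illustrative names only (not part of
yaffs); the buffer size just matches the malloc in Jacob's function below.

<code>
#include <stdio.h>

#define NAMES_SIZE (15*10000)

/* Append one "name length mode" line at *cursor, never writing past
 * end. Returns 0 once the buffer is full. The cursor advances by the
 * number of characters written, so there is no end-of-string scan. */
static int append_entry(char **cursor, char *end,
                        const char *name, long size, unsigned mode)
{
    int n = snprintf(*cursor, end - *cursor,
                     "%s length %ld mode %x\n", name, size, mode);
    if (n < 0 || n >= end - *cursor)
        return 0;   /* full (or encoding error): stop appending */
    *cursor += n;
    return 1;
}
</code>

In the loop below you would initialise char *namesp = names once, then call
append_entry(&namesp, names + NAMES_SIZE, name, s.st_size, s.st_mode) in
place of the sprintf/strcat pair.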
-- Charles
On Monday 24 January 2005 08:59, Jacob Dall wrote:
> Hello Charles,
>
> I accept your priority, but this slow dirlist issue is very important to
> me. I really need a way faster method of getting the names of the files in
> a folder.
>
> I've come up with a solution - so far, it works for me when listing the
> root folder containing files only (no symlinks, hardlinks or folders). But
> I'm not sure whether or not it'll generally work no matter the layout of
> files and folders.
>
> <code>
> char *yaffs_Dir (const char *path) {
>     char *rest;
>     yaffs_Device *dev = NULL;
>     yaffs_Object *obj;
>     yaffs_Object *entry = NULL;
>     struct list_head *i;
>     char name[1000], str[1000], info[1000];
>     char *names = (char*)malloc(15*10000);
>     int nOk = 0, nNull = 0;
>     int len = strlen(path);
>     struct yaffs_stat s;
>
>     memset(names, 0, 15*10000);
>     obj = yaffsfs_FindRoot(path,&rest);
>     if (obj && obj->variantType == YAFFS_OBJECT_TYPE_DIRECTORY) {
>         list_for_each(i,&obj->variant.directoryVariant.children) {
>             entry = (i) ? list_entry(i, yaffs_Object,siblings) : NULL;
>             if (entry) {
>                 yaffs_GetObjectName(entry,name,YNAME_MAX+1);
>                 sprintf(str, "%s/%s", path, name);
>                 yaffs_lstat(str,&s);
>                 sprintf(info, "%s length %ld mode %x\n",name,s.st_size,s.st_mode);
>                 strcat(names, info);
>                 nOk++;
>             } else {
>                 nNull++;
>             }
>         }
>     }
>     printf("nOk=%d, nNull=%d\n",nOk,nNull);
>     return names;
> }
> </code>
>
> In my system, this function takes approx. 10 secs to complete - that's an
> amazing 60 times faster than using the 'opendir/readdir' implemented in
> yaffs.
>
> Comments are very welcome.
>
> Thanks,
> Jacob
>
> ----- Original Message -----
> From: "Charles Manning" <manningc2@actrix.gen.nz>
> To: "Jacob Dall" <jacob.dall@operamail.com>, "Charles Manning"
> <Charles.Manning@trimble.co.nz>, yaffs@stoneboat.aleph1.co.uk
> Subject: Re: [Yaffs] heavy file usage in yaffs
> Date: Tue, 18 Jan 2005 15:20:57 +1300
>
> > On Tuesday 18 January 2005 04:00, Jacob Dall wrote:
> > > Hello Charles,
> > >
> > > All file names are in 8.3 format, and NO, I'm not using
> > > SHORT_NAMES_IN_RAM.
> > >
> > > I've just recompiled my project defining
> > > CONFIG_YAFFS_SHORT_NAMES_IN_RAM, but unfortunately I notice no change
> > > in the time used to perform the dumpDir().
> > >
> > > The files I'm listing were written before I defined short names in RAM.
> > > In this case, should one expect the operation to take less time?
> > >
> > > The CPU I'm running this test on is comparable to a Pentium I-200MHz.
> >
> > NB This only applies to yaffs_direct, not to Linux.
> >
> > I did some tests using yaffs direct as a user application on a ram
> > emulation under Linux.
> >
> > This too showed the slowdown. I did some profiling with gprof, which
> > pretty quickly pointed to the problem...
> >
> > The way yaffs does the directory searching is to create a linked list
> > of items found so far in the DIR handle. When it does a read_dir, it
> > has to walk the list of children in the directory and check if the
> > entry is in the list of items found so far. This makes the look-up
> > time increase in proportion to the square of the number of items found
> > so far (O(n^2)), ie. each time it looks at more directory entries as
> > well as comparing them to a longer "already found" list.
> >
> > The current implementation could be sped up somewhat by using a balanced
> > binary tree for the "found list". This would reduce the time to
> > O(n log n). I could be motivated to do something about this but it is
> > not a current priority for me.
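> >
> > To make the cost concrete, here is an illustrative sketch of the
> > pattern (hypothetical names, not the actual yaffs source):
> >
> > <code>
> > /* Each readdir() call scans the "already returned" list before
> >  * handing back the next new child, so n calls do 1+2+...+n
> >  * comparisons: O(n^2) overall. */
> > struct found { struct found *next; const void *obj; };
> >
> > static int already_found(const struct found *f, const void *obj)
> > {
> >     for (; f; f = f->next)    /* this scan grows with every call */
> >         if (f->obj == obj)
> >             return 1;
> >     return 0;
> > }
> > </code>
> >
> > With a balanced tree the membership test drops to O(log n), making the
> > whole listing O(n log n).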
> >
> > The other approach is the weasel approach: don't use such large
> > directories, but rather structure your directory tree to use smaller
> > sub-directories.
> >
> > -- Charles
> >
>
> _______________________________________________
> yaffs mailing list
> yaffs@stoneboat.aleph1.co.uk
> http://stoneboat.aleph1.co.uk/cgi-bin/mailman/listinfo/yaffs