[Yaffs] problem with object (file) creation
Nick Bane
nick@cecomputing.co.uk
Tue, 12 Oct 2004 09:47:37 +0100
> A quick follow up...
>
> To summarise what seems to be going on, the problem is occurring because
> there is an inode in the cache for an object that no longer exists in
> YAFFS. When asked to create a new object, YAFFS chooses an objectId (the
> same as an inode number) that is the same as the one in the cache. This
> means that when iget() is called, no callback happens to yaffs_read_inode,
> the new object's info is not associated with the inode in the cache, and
> we get an inconsistency.
>
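> Roughly, the VFS side of this behaves as below (a much-simplified sketch,
> not the real iget() code, and the icache_* helpers are made up purely for
> illustration):
>
>   struct inode *iget_sketch(struct super_block *sb, unsigned long ino)
>   {
>           /* If an inode with this number is still sitting in the icache
>            * (left over from the deleted object), it is returned as-is
>            * and yaffs_read_inode is never called for the new object. */
>           struct inode *inode = icache_lookup(sb, ino);
>           if (inode)
>                   return inode;
>
>           /* Only a genuinely new inode is handed to the filesystem to be
>            * filled in via the read_inode callback. */
>           inode = icache_alloc(sb, ino);
>           sb->s_op->read_inode(inode);
>           return inode;
>   }
>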
> This happens very infrequently because the bucket size is relatively
> large (256), though Michael has made a useful test case below. It occurs
> to me that by changing the bucket size to a smaller power of 2 (say 4),
> it will become easier to force the issue.
>
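> (For reference, the objectId-to-bucket mapping is essentially just a
> modulo, something like the sketch below; with only 4 buckets a freed
> objectId is far more likely to be handed out again while its inode is
> still sitting in the cache.)
>
>   /* Simplified view of the bucket hash; the real bucket count is 256. */
>   static int yaffs_hash_sketch(int n)
>   {
>           return n % 256;   /* drop this to 4 to provoke id reuse quickly */
>   }
>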
> Michael has also provided a patch that would seem to get around the
> problem by aborting an object creation when the inode reference count is
> too high (which means that this problem would have occurred). While this
> hack would seem to work, I think it is, as Michael says, a "dirty hack"
> and would cause extra writes to flash etc.
>
> I would prefer to do one of the following:
> 1) At the time of generating the objectId (inode number) for a new
> object, first check that the object does not relate to an existing inode
> in the cache and don't allocate that number if there is a conflict.
> 2) Don't just rely on the callback to yaffs_read_inode to fill out the
> inode details. Also fill them out for other cases. The problem with this
> is that we then end up with the thing in the cache being revalidated and
> cross-linked to a different object, which seems rather unhealthy to me!
> So I don't think this will work.
> 3) Stop trying to reuse object ids. Instead of always trying to reuse the
> lowest value objectId in any bucket, we can keep allocating upwards and
> wrap around when the objectId space is depleted (18 bits == 0x40000).
> While this would not absolutely guarantee we don't get reuse, it would
> reduce the odds by a significant amount (see the rough sketch after this
> list).
> 4) When we delete an object, keep the objectId for that object "in use"
> until the last iput releases it from the cache.
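>
> Something along these lines for (3), very roughly (untested, and the
> handling of the reserved low objectIds is just a guess):
>
>   /* Sketch of (3): hand out objectIds monotonically and wrap when the
>    * 18-bit id space is exhausted, rather than always reusing the lowest
>    * free id in a bucket.  Reuse is still possible after a wrap, and the
>    * caller still has to skip ids that are actually in use. */
>   #define OBJECT_ID_LIMIT   0x40000   /* 18 bits */
>   #define FIRST_DYNAMIC_ID  0x100     /* assumed: keep clear of reserved ids */
>
>   static unsigned yaffs_next_object_id_sketch(void)
>   {
>           static unsigned next_id = FIRST_DYNAMIC_ID;
>           unsigned id = next_id++;
>
>           if (next_id >= OBJECT_ID_LIMIT)
>                   next_id = FIRST_DYNAMIC_ID;   /* wrap around */
>           return id;
>   }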
>
> While (3) will likely work very well, it does leave a bitter taste in the
> mouth. I'd prefer (4) and (1), in that order. I don't trust (2), so I
> dismiss that immediately.
>
> Comments/thoughts more than welcome.
>
> -- Charles
I entirely agree with Charles.
One (possibly dim) query: why is the reference count not zero anyway once
the object is deleted? Does the VFS hold an additional reference? As you
say in (4), unless it is zero the objectId should still be considered to
be in use.
(3) is just not good enough, especially for TCL, who are using yaffs in
the medical/assistive market. It will also mean that any errors/failures
of a system will leave a "what if it's that yaffs bug again?" uncertainty
lurking.
(1) is ok. Reading/writing NAND is the bottleneck, so a cache search won't
add anything too noticeable, I guess.
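For (1), something along these lines might be enough (just a sketch,
untested; I'm assuming ilookup(), or an equivalent icache lookup, is
available on the kernels we care about):

  /* Before reusing a candidate objectId, check whether the icache still
   * holds an inode with that number.  ilookup() only finds an inode that
   * is already cached; it does not trigger read_inode. */
  static int object_id_still_cached(struct super_block *sb, unsigned long id)
  {
          struct inode *inode = ilookup(sb, id);
          if (inode) {
                  iput(inode);    /* drop the reference ilookup() took */
                  return 1;       /* stale inode still cached: pick another id */
          }
          return 0;
  }
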
Nick