1Gbyte file on a 130Mb drive (fsdb)
Andy Burgess
aab at cichlid.com
Tue Jun 25 16:47:30 AEST 1991
In article <124 at comix.UUCP> jeffl at comix.Santa-Cruz.CA.US (Jeff Liebermann) writes:
>How does one deal with a bogus 1Gigabyte file?
>I have a Xenix 2.3.3 system that has ls magically
>declare a 45Mb accounting file as 1Gbyte huge.
>
>ls declares it to be 1Gb big.
>du agrees.
>df -v gives the correct filesystem size.
>fsck "Possible wrong file size I=140" (no other errors).
>
>To add to the problem, I'm having difficulty doing
>a backup before attacking.
>
>compress bombs due to lack of working diskspace.
>tar, cpio, afio insist on trying to backup 1Gb of something.
>dd at least works and I can fit the 130Mb filesystem on one
>QIC-150 tape (whew). To add to the weirdness, all the
>reports generated by the application (Armor Systems Excalibur
>Acctg) work perfectly as if nothing were wrong.
>
Could be a sparse file. I'm not sure whether Xenix supports sparse files,
but on modern unices you can open a file, seek a billion bytes or
so into it, and write one byte. Only one sector of data is allocated
(plus some indirect-block sectors). Reading the file returns zeros for the
unwritten sectors. ls reports the size as the offset just past the last
byte written. tar, cpio, et al. traditionally do not detect sparse files
and merrily save a gigabyte of zeros (although I would have expected
compress to work eventually).
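The effect is easy to demonstrate on a modern Unix. Here's a minimal sketch in Python (the path and the billion-byte offset are illustrative, not from the original post):

```python
import os

def make_sparse(path, offset):
    """Create a sparse file: seek `offset` bytes in, then write one byte."""
    with open(path, "wb") as f:
        f.seek(offset)
        f.write(b"\0")
    st = os.stat(path)
    # st_size is what ls reports; st_blocks * 512 is roughly what du counts.
    return st.st_size, st.st_blocks * 512

size, allocated = make_sparse("/tmp/sparse_demo", 10**9)
print(size, allocated)
os.remove("/tmp/sparse_demo")
```

On a filesystem that supports holes, size comes back as 1000000001 while only a block or two is actually allocated -- which is why ls and a naive tar see a gigabyte but the disk barely notices.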
>Obviously a job for fsdb...
I don't think you have anything to fix. Well, maybe a bug in the application
that created the file. I don't know about the fsck message, though...
Andy
--
Andy Burgess
Independent Software Consultant and Developer
aab at cichlid.com
uunet!silma!cichlid!aab