Why 'C' does not need BCD...
Dan Messinger
dan at digi-g.UUCP
Wed Dec 5 02:49:52 AEST 1984
In article <247 at desint.UUCP> geoff at desint.UUCP (Geoff Kuenning) writes:
>A seek once out of every 500 transfers (which a well-designed filesystem
>can achieve) does not seriously impact performance.
I find this incredible! Do you have a disk drive for each application?
With many programs trying to access the same drive, I can't imagine how
you could get 500 transfers/seek. How many drives have that many blocks
per cylinder? How many times do you rewrite the same block?
>So? Ever try to swap out 1 MB on a 5-1/4" Winchester? For that matter, ever
>try to swap out 2 MB on an Eagle? We are talking transfer times in SECONDS
>here, compared to seek times in the 20-100ms range.
So what happened to your well-designed filesystem? A 5-1/4" Winchester can
transfer ~1.3 MB/second, and an Eagle ~1.8 MB/second. A second, maybe, but not
secondS.
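Just to put numbers on it, a back-of-the-envelope sketch in C using the rough
rates quoted above (the real figures depend on the particular drive and
controller):

    #include <stdio.h>

    int main(void)
    {
        double winchester = 1.3e6;   /* ~1.3 MB/sec, 5-1/4" Winchester */
        double eagle      = 1.8e6;   /* ~1.8 MB/sec, Eagle             */

        printf("1 MB swap on the Winchester: %.2f sec\n", 1.0e6 / winchester);
        printf("2 MB swap on the Eagle:      %.2f sec\n", 2.0e6 / eagle);
        return 0;
    }

That prints about 0.77 and 1.11 seconds; on the order of a second, not several.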
>>You can do your arithmetic with 32 bits and store data as 24 bits
>>on disk (that's what UN*X used to do on PDP-11's).
>
>This is the "shooting down" of your argument that I mentioned above. As
>soon as you have to take 24-bit numbers, convert them to 32 bits, and
>convert them back to 24 for output, you have lost any possible speed
>advantage that binary might have given you. 6-digit BCD is much faster.
Are you serious? You think that adding a 00 or FF byte to the beginning
of the binary number is going to take LONGER than doing BCD math? Do
you know what BCD looks like in the computer? Have you tried writing BCD
math routines? Are you a programmer?
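In case it isn't clear just how little work that is, here is a sketch in C,
assuming the 24-bit value sits on disk as three bytes, least significant
byte first (the PDP-11 convention):

    #include <stdint.h>

    /* Widen a signed 24-bit value (three bytes, least significant first)
       to a 32-bit int.  The whole "conversion" is a few shifts, an OR,
       and a sign extension, i.e. the "prepend a 00 or FF byte" step. */
    int32_t widen24(const unsigned char b[3])
    {
        int32_t v = (int32_t)b[0]
                  | ((int32_t)b[1] << 8)
                  | ((int32_t)b[2] << 16);
        if (v & 0x800000L)     /* sign bit of the 24-bit value set? */
            v -= 0x1000000L;   /* same effect as prepending an FF byte */
        return v;
    }

    /* Going back to 24 bits for output just stores the low three bytes. */
    void narrow24(int32_t v, unsigned char b[3])
    {
        uint32_t u = (uint32_t)v;     /* keep the shifts well defined */
        b[0] = (unsigned char)(u & 0xFF);
        b[1] = (unsigned char)((u >> 8) & 0xFF);
        b[2] = (unsigned char)((u >> 16) & 0xFF);
    }

A handful of instructions each way, done once per number on input and once
on output.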
BCD versus binary boils down to two issues. First, the well-known
space vs. time trade-off. By using 4-byte binary instead of 3-byte BCD,
the program can run much faster. If there are a lot of numbers involved,
this is no small issue. By storing only 3 bytes on disk, the extra disk
space can be eliminated, with some loss in performance. But I assure
you that converting a 24-bit int to a 32-bit int takes less time than
performing a single BCD add. And how many BCD operations are going to
be performed on that number while it is in memory?
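For comparison, here is a sketch of what ONE add of two of those 3-byte
numbers costs in BCD on a machine with no decimal-arithmetic instructions,
assuming packed BCD with two decimal digits per byte, least significant
byte first (the routine and its packing are just for illustration):

    /* Add two 3-byte (6-digit) packed-BCD numbers.  Every byte needs an
       unpack, two digit adds with decimal carry correction, and a repack.
       Returns 1 if the 6-digit result overflows. */
    int bcd_add(const unsigned char a[3], const unsigned char b[3],
                unsigned char sum[3])
    {
        int carry = 0;
        int i, lo, hi;

        for (i = 0; i < 3; i++) {
            lo = (a[i] & 0x0F) + (b[i] & 0x0F) + carry;
            carry = 0;
            if (lo > 9) { lo -= 10; carry = 1; }

            hi = (a[i] >> 4) + (b[i] >> 4) + carry;
            carry = 0;
            if (hi > 9) { hi -= 10; carry = 1; }

            sum[i] = (unsigned char)((hi << 4) | lo);
        }
        return carry;
    }

A binary add of the widened values is one instruction. Even counting the
widening and narrowing, binary wins by a wide margin.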
Second is portability. A BCD math library would perform more consistently
across a wider variety of computers. As long as a machine can address
8-bit bytes, the BCD math package should work. 32-bit ints may not be
available on some machines.
But BCD being faster than binary? I suggest that you find a local software
engineer and ask him or her to explain it to you. (I'm sure it will be amusing.)
I'm not saying that binary is always the better way. I'm just saying
that your reasons are absolutely ludicrous.
Dan Messinger
ihnp4!umn-cs!digi-g!dan