bigger longs (64 bits)
Esmond Pitt
ejp at bohra.cpg.oz
Mon Feb 12 13:28:22 AEST 1990
In article <11372 at attctc.Dallas.TX.US> markh at attctc.Dallas.TX.US (Mark Harrison) writes:
>
>As Unix tries to get a larger share of the commercial market, we will see
>a need for storing numeric values with 18-digit precision, a la COBOL and
>the IBM mainframe. This can be accomplished in 64 bits, and is probably
>the reason "they" chose 18 digits as their maximum precision.
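A quick check of the quoted arithmetic: a signed 64-bit integer tops out
at 2^63 - 1 = 9,223,372,036,854,775,807, a 19-digit number, so every
18-digit decimal value fits with room to spare. A minimal sketch in C,
assuming a 64-bit integer type along the lines of int64_t (not anything
from the original posting):

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        /* Largest 18-digit decimal value: 10^18 - 1. */
        int64_t max18 = 999999999999999999LL;

        /* INT64_MAX is 2^63 - 1, a 19-digit number, so all
         * 18-digit values fit in a signed 64-bit integer. */
        printf("10^18 - 1 = %" PRId64 "\n", max18);
        printf("INT64_MAX = %" PRId64 "\n", INT64_MAX);
        return 0;
    }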
According to a fellow who had been on the original IBM project in the
fifties, the 18 digits came about from using BCD (4-bit decimal)
representation in two 36-bit words: nine digits per word, eighteen in all.
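A rough illustration of that layout, packing decimal digits at 4 bits
apiece so that nine digits fill 36 bits. The pack_bcd9 helper is
hypothetical, and a 64-bit host word stands in for one 36-bit machine
word:

    #include <stdio.h>
    #include <stdint.h>

    /* Pack nine decimal digits, 4 bits (BCD) each, into the low
     * 36 bits of a 64-bit word. */
    static uint64_t pack_bcd9(const int digits[9])
    {
        uint64_t word = 0;
        int i;
        for (i = 0; i < 9; i++)
            word = (word << 4) | (uint64_t)(digits[i] & 0xF);
        return word;    /* occupies bits 0..35 */
    }

    int main(void)
    {
        int digits[9] = { 1, 2, 3, 4, 5, 6, 7, 8, 9 };
        printf("BCD 123456789 -> 0x%09llx\n",
               (unsigned long long)pack_bcd9(digits));
        return 0;
    }

Two such words give the 18-digit maximum the quoted article mentions.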
--
Esmond Pitt, Computer Power Group
ejp at bohra.cpg.oz