Simply because computers have a limited number of bits to represent a number. Floating-point numbers are stored in a binary format that restricts the number of significant digits, and in many cases the value must be rounded off when its significant digits exceed what can be represented. The accumulated effect of several calculations can be skewed by this storage limitation.
Some software used in the scientific community uses arbitrary-precision BCD math instead. BCD (binary-coded decimal) stores two decimal digits per byte, and the sign and decimal point can also be stored in a coded string that can theoretically be any length, although a practical limit is usually around 512 digits because of the complexity of performing mathematical operations on the data.
Some computer languages, such as CA-Realizer, REXX, and Lisp, support arbitrary-precision BCD math directly. Many others can support it with add-on library functions.
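To see the idea in action, here is a minimal sketch using Python's standard `decimal` module. It is not packed BCD internally, but it works the same way the answer describes: decimal digits carried at whatever precision you ask for, rather than a fixed binary format.

```python
from decimal import Decimal, getcontext

# Ask for 50 significant decimal digits instead of the
# ~16 a hardware double provides.
getcontext().prec = 50

third = Decimal(1) / Decimal(3)
print(third)  # 0. followed by fifty 3s

# Precision is just a context setting, so it can be raised further.
getcontext().prec = 100
print(Decimal(1) / Decimal(3))  # now a hundred 3s
```

The cost is speed: every digit is handled in software, which is why languages reserve this for cases where exactness matters more than performance.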
2007-07-05 07:04:31 · answer #1 · answered by Niklaus Pfirsig 6
Floating-point numbers are limited in precision because of the way they are represented: a fixed amount of storage cannot distinguish between an infinite number of different floating-point values. A standard single-precision float carries about 7 significant decimal digits of accuracy; a double-precision floating-point number carries roughly twice that, about 15-16 digits.
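One way to see that difference, sketched in Python with only the standard library: round-tripping a value through a 32-bit IEEE-754 float (the `'f'` format in `struct`) shows how many digits survive compared with the native double.

```python
import struct

x = 1.0 / 3.0  # Python floats are doubles (~16 digits)

# Pack into a 32-bit float and unpack again, discarding
# whatever precision a single-precision float cannot hold.
single = struct.unpack('f', struct.pack('f', x))[0]

print(f"double: {x:.17f}")       # matches 1/3 to ~16 digits
print(f"single: {single:.17f}")  # matches 1/3 to only ~7 digits
```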
For example, this Perl program prints out 1/3 to 25 decimal places:
printf "%.25f\n", (1.0 / 3.0);
... and when I run it, it prints out this:
0.3333333333333333100000000
That's the closest number to exactly 1/3 that can be represented in the way that floating-point numbers are stored (in this case double-precision, so it is correct to about 16 digits).
The way that floating-point numbers are represented in binary is similar to scientific notation: 3.333333 x 10^-1, with a limited number of digits. Every calculation is, in effect, rounded off at approximately the 7th (for single precision) or 16th (for double precision) significant digit, and the closest value that can be represented in floating-point form is chosen.
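That per-operation rounding is also why errors accumulate across calculations, as the first answer notes. A short Python sketch: 0.1 has no exact binary representation, so summing it ten times does not give exactly 1.0.

```python
total = 0.0
for _ in range(10):
    total += 0.1  # each addition rounds to the nearest double

print(total == 1.0)     # False: the rounding errors have accumulated
print(f"{total:.20f}")  # a hair less than 1.0
```

Each individual step is off by at most one part in ~10^16, but ten of them in a row leave a visible discrepancy, which is exactly the "accumulated effect" described above.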
2007-07-05 13:52:18 · answer #2 · answered by McFate 7