A floating-point number is a number that can have a decimal point. Most computer math uses integer types, which cannot represent decimals at all.
A fixed-point number can also have a decimal point, but that decimal point sits in a fixed position: the format allows a predetermined number of decimal places. In effect it carries a rule saying "put the decimal point four places to the left." If you do math that needs a fifth decimal place, the result is simply cut off at the fourth.
A floating-point number, on the other hand, says "put the decimal point wherever necessary." If a floating-point value has four decimal places and your math needs a fifth, the representation shifts the decimal point over one place, scaling the stored digits to match, and carries on with the fifth decimal intact.
So a floating-point number can put the decimal point anywhere within its memory limits and move it around (hence, it "floats") as needed. A fixed-point number can have a decimal point, but it can't move that decimal point around as needed.
This isn't a 100% accurate technical explanation, but it should get the gist across.
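The scaled-integer idea behind fixed point can be sketched in Python (the helper names here are made up for illustration, not any real library):

```python
# Hypothetical sketch: a fixed-point format with 4 decimal places,
# stored as an ordinary integer scaled by 10**4.
SCALE = 10**4  # fixed: always exactly 4 decimal places

def to_fixed(x):
    # Convert to the scaled-integer representation, truncating
    # anything past the 4th decimal place.
    return int(x * SCALE)

def fixed_to_float(n):
    return n / SCALE

a = to_fixed(1.23456)      # the 5th decimal digit (6) is lost
print(fixed_to_float(a))   # 1.2345

# A float, by contrast, keeps the extra digit by moving its point:
b = 1.23456
print(b)                   # 1.23456
```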
2007-09-03 15:03:45
·
answer #1
·
answered by Caudax 2
·
0⤊
0⤋
Just as people use 10 digits to count in base 10, computers use 2 digits to count in base 2. The higher the number you want to store, the greater the number of digits you need to use.
In a computer, a 32-bit value might typically be used to store an integer value. In this case all 32 bits are used for the integer component, and it can't represent fractional values.
If you want to represent numbers with fractions then you might dedicate some of those bits to storing the fractional component. A 16:16 format, for example, uses 16 bits for the integer part and 16 bits for the fractional part. This is called "fixed-point" because the number of bits you're using for each part is always the same.
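As a rough sketch (these helpers are illustrative, not any particular library's API), a 16:16 value can be packed into 32 bits like this:

```python
# Hypothetical 16:16 fixed point: 16 bits of integer, 16 of fraction.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS           # 1.0 in 16:16 is 0x00010000

def to_16_16(x):
    return int(x * ONE) & 0xFFFFFFFF

def from_16_16(n):
    return n / ONE

# 3.5 -> integer part 3, fractional part 0.5 (= 0x8000 / 0x10000)
v = to_16_16(3.5)
print(hex(v))         # 0x38000
print(from_16_16(v))  # 3.5

def mul_16_16(a, b):
    # Multiplying two scaled values doubles the scale, so the
    # product must be shifted back down by FRAC_BITS.
    return (a * b) >> FRAC_BITS

print(from_16_16(mul_16_16(to_16_16(2.0), to_16_16(0.25))))  # 0.5
```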
Floating point means the position of the decimal point "floats", i.e. it can move around. If you're storing very large numbers then you probably don't need fractional accuracy. Similarly, if you are storing small numbers then you can allocate more bits to the fractional digits.
The exact way in which this is accomplished is basically the same as scientific notation. In science, numbers are typically written like this: 1.234 x 10^17. The "1.234" is the mantissa, written with a single digit before the decimal point. The "10" indicates we're working in base 10. The "17" is the exponent, i.e. the magnitude of the number. Large exponents, like 17, correspond to large numbers. Negative exponents, like -5, correspond to small numbers, i.e. between 0 and 1.
Floating point numbers work exactly the same way. A typical example is to use 1 bit to represent the sign of the entire number, 8 bits for the (signed) exponent and 23 bits for the mantissa, with all values represented in base 2.
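That 1/8/23 layout (the IEEE 754 single-precision format) can be inspected directly with Python's standard struct module; a small sketch:

```python
import struct

def float_bits(x):
    # Reinterpret a 32-bit float's bytes as a 32-bit unsigned int,
    # then slice out the three fields.
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign     = bits >> 31
    exponent = (bits >> 23) & 0xFF   # 8 bits, stored biased by 127
    mantissa = bits & 0x7FFFFF       # 23 bits, implicit leading 1
    return sign, exponent, mantissa

# 1.0 = +1.0 x 2^0, so the exponent field holds 0 + 127 = 127
print(float_bits(1.0))   # (0, 127, 0)
print(float_bits(-2.0))  # (1, 128, 0)
```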
2007-09-03 22:04:46
·
answer #2
·
answered by Mark F 6
·
0⤊
0⤋
Very large or very small numbers tend to have lots of zeros on one side or the other of the decimal point. (Examples: 1,000,000,000,000,000 or 0.000000000006) If you stored the number as a string, the way you would type it, it would take lots of bytes for each number and be very messy.
First, standardize the number to the form 0.999999 x 10**(n). In our examples, that would be 0.100000 x 10**(16) and 0.60000 x 10**(-11). Now all we have to do is store two fairly well-behaved numbers: a real number between -0.999999 and +0.999999, and an exponent between -M and +M. M can be chosen depending on how big a number you want to handle and how many bits you want to use to represent it.
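Python's standard math.frexp performs the binary analog of this normalization, returning a mantissa in [0.5, 1) and a power-of-two exponent:

```python
import math

# math.frexp(x) returns (m, e) with x == m * 2**e and 0.5 <= m < 1,
# the base-2 counterpart of writing x as 0.dddddd x 10**n.
m, e = math.frexp(1_000_000_000_000_000)   # the 10**15 example above
print(m, e)
assert m * 2**e == 1_000_000_000_000_000   # reconstructs exactly
```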
In binary computers, the fractional numbers ("mantissas") are represented in binary, as are the exponents. In a 4-byte floating-point number, one might allocate 1 bit to the sign, 8 bits to the exponent, and 23 bits to the mantissa. For extended precision, 8-byte floating-point numbers might allocate 1 sign bit, 11 exponent bits, and 52 mantissa bits.
Note that the number of bits in the mantissa determines the precision of the number, as in the number of significant digits. The number of bits in the exponent determines the maximum and minimum size.
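A quick Python check of that precision limit: a standard 64-bit float carries 53 significant bits, so consecutive whole numbers stop being distinguishable above 2**53.

```python
import sys

# A 64-bit float stores 52 mantissa bits (53 significant bits), so
# whole numbers above 2**53 can no longer all be represented.
big = float(2**53)              # 9007199254740992.0
print(big == big + 1)           # True: big + 1 rounds back to big
print(sys.float_info.mant_dig)  # 53 significant bits per float
```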
2007-09-03 22:18:06
·
answer #3
·
answered by Computer Guy 7
·
1⤊
0⤋
Floating point numbers are simply a computer data type for numbers that can have, but do not have to have, a fractional component.
Integers are represented by whole numbers, i.e. numbers without a fraction component or numbers to the right of a decimal point.
Floating point numbers are refinable to fractional accuracy; that is, digits to the right of the decimal point are taken into account.
2007-09-03 22:00:25
·
answer #4
·
answered by Amanda H 6
·
0⤊
0⤋
There are 2 kinds of numbers: integers and real numbers. Integers are whole numbers (2, 12, 356). Real numbers may have a fractional part, represented either as a fraction or as a decimal portion. On a computer, real numbers stored with a movable decimal point are called "floating point".
The reason this is significant is that CPUs use different parts of their chip to do integer and floating point arithmetic.
Floating point processing matters more and more because of the graphics, physics, and encryption workloads increasingly found in software.
2007-09-03 21:53:24
·
answer #5
·
answered by Anonymous
·
0⤊
1⤋