Locations in electronic memory circuits are identified by bit vectors, so the most cost-effective size for a memory circuit uses the full range of address values, i.e., some power of 2. Since 1024 is a power of 2 and only slightly exceeds 1000, memory chip makers found it convenient to quote capacities in multiples of 1024 in marketing material, treating the extra 24 bytes of each group as incidental. At the same time, marketing material for products without the same cost-benefit considerations, such as magnetic disks and networking equipment, continued to use strict decimal units.
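As a rough sketch of the arithmetic above (the function name is invented for illustration): an n-bit address can select exactly 2^n locations, so a 1000-byte chip on a 10-bit address bus would leave 24 of its addresses unused.

```python
# An n-bit address can select 2**n distinct memory locations.
def addressable(n_bits: int) -> int:
    return 2 ** n_bits

# A 10-bit bus reaches 1024 locations; a 1000-byte chip
# would waste 24 of those addresses.
print(addressable(10))          # 1024
print(addressable(10) - 1000)   # 24 addresses left unused
```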
2007-04-08 15:34:54
·
answer #1
·
answered by troythom 4
·
1⤊
3⤋
Hi, it is due to the nature of binary arithmetic. Binary is mathematical base 2: numbers composed of a series of zeros and ones. Since zeros and ones can easily be represented by two voltage levels on an electronic device, the binary number system is widely used in digital computing.
Many people think that there are 1000 bytes in a kilobyte; after all, "kilo" means 1000. In most cases this approximation is fine for determining how much space a file takes up or how much disk space you have. But there are really 1024 bytes in a kilobyte, because computers are based on the binary system. That means hard drives and memory are measured in powers of 2. For example:
2^0 = 1
2^1 = 2
2^2 = 4
2^3 = 8
2^4 = 16
2^5 = 32
2^6 = 64
2^7 = 128
2^8 = 256
2^9 = 512
2^10 = 1024
Notice that 2^10 is 1024. Therefore 2^10, or 1024, bytes compose one kilobyte. Furthermore, 1024 kilobytes compose one megabyte, and 1024 megabytes compose one gigabyte. For most practical purposes, you can approximate 1024 as 1000.
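The unit ladder in this answer can be written out as a short Python sketch (the variable names are just for illustration). Note that the "1024 is roughly 1000" approximation drifts as the units grow:

```python
# Each binary unit is 1024 times the previous one.
KB = 2 ** 10      # 1,024 bytes in a kilobyte
MB = 1024 * KB    # 1,048,576 bytes in a megabyte
GB = 1024 * MB    # 1,073,741,824 bytes in a gigabyte

# Ratio of each binary unit to its decimal near-equivalent:
print(KB / 10**3)   # 1.024        (~2.4% larger)
print(MB / 10**6)   # 1.048576     (~4.9% larger)
print(GB / 10**9)   # 1.073741824  (~7.4% larger)
```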
2016-03-17 09:30:53
·
answer #2
·
answered by ? 3
·
0⤊
0⤋
Why are there 1024 bytes in a kilobyte?
And 1024 KB in 1 MB, and 1024 MB in 1 GB etc. I know it has something to do with the fact that in binary, 10000000000 (2^10) is 1024. But so what, any number can be displayed in binary. 1000 in binary is 1111101000. So why did they choose 1024? It makes everything so confusing with people making...
2015-08-10 04:28:37
·
answer #3
·
answered by Anonymous
·
0⤊
0⤋
because 1024 is a power of 2.
Since binary arithmetic is base 2, it is simplest when the quantities involved are powers of 2.
Hard drive makers exploit the difference between decimal and binary units for marketing reasons, to make a drive appear bigger than it is.
However, if you actually have to do computations in binary, it is easiest to use numbers that are simple in binary.
As an analogy, it is probably very easy for you to do computations in base 10, like: how many tens are there in fifty? That's easy: 5. All a computer knows how to do is binary, so it is as easy for a computer to work with 2^10 as it is for you to work with 5 × 10.
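The analogy can be made concrete: for a computer, dividing by 1024 is a single bit shift, while dividing by 1000 requires real division. A minimal Python sketch:

```python
n = 50 * 1024  # 50 binary kilobytes, in bytes

# Dividing by 1024 is just dropping the low 10 bits:
print(n >> 10)      # 50
print(n // 1024)    # 50, same result

# Dividing by 1000 needs actual division; no single shift does it.
print(n // 1000)    # 51, with a remainder of 200
```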
2007-04-08 15:31:54
·
answer #4
·
answered by Amanda H 6
·
0⤊
3⤋
it has to do with the hardware, NOT marketing! addressing of memory locations is done in binary. 4 address lines can address 16 memory locations, 8 lines can address 256 locations, and 10 address lines can address 1024 memory locations. so as you can see, for every address line added, the amount of memory that can be addressed doubles.
actually, a drive sold as 500gb holds 500 × 10^9 bytes, which takes about 39 address bits to cover (2^39 = 549,755,813,888 is the next power of 2 up). your operating system will report it as roughly 465gb, because it counts in binary units of 2^30 bytes per gb. your drive may show a little less still due to it being formatted, which takes up some of the space, and there may be some code on it already.
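The address-line arithmetic in this answer is easy to check (a sketch; the function name is made up for the example):

```python
# Each added address line doubles the number of reachable locations.
def locations(address_lines: int) -> int:
    return 2 ** address_lines

print(locations(4))    # 16
print(locations(8))    # 256
print(locations(10))   # 1024
print(locations(39))   # 549755813888
```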
2007-04-09 02:19:15
·
answer #5
·
answered by justme 7
·
1⤊
1⤋
It is an exact power of 2:
2^10
2007-04-08 15:34:14
·
answer #6
·
answered by Anonymous
·
1⤊
2⤋
You are exactly right when you say that the terminology traditionally used in referring to computer memory and disk storage units of measure is based on the binary (base 2) nature of computers. Of course, the smallest unit of measure is the binary digit, or bit. But computer scientists and engineers needed terms to refer to larger chunks of memory, so they devised terms such as "nybble" (4 bits), "byte" (8 bits), and "word" (which varies depending upon the computer architecture).
Of course, it made sense to use the same base system (base 2) when referring to larger memory spaces, so the term "kilobyte" came to refer to 2^10 (two to the tenth power) bytes, or 1,024 bytes, even though the "kilo" prefix traditionally referred to 10^3 (ten to the third power, or 1,000) in the metric system (e.g., one kilometer = 1,000 meters). Other standard prefixes ("mega," "giga," "tera," etc.) similarly referred to higher-level base 2 memory spaces, rather than their traditional base 10 usages.
However, computer equipment manufacturers (hard drive manufacturers in particular) began to deviate from the accepted base 2 terminology in favor of their base 10 near-equivalents, in the interest of either clearing up consumer confusion or making their devices sound bigger than they actually are. (I personally think it was more the latter than the former.) So for example, a hard drive with 104,857,600 bytes capacity (1024 * 1024 * 100) might have been marketed as a 104 megabyte drive (in base 10 terminology), rather than a 100 megabyte drive (in traditional base 2 terminology).
In January 1999, the International Electrotechnical Commission (IEC) proposed new prefixes for the base 2 terminology, such as "kibi" for 2^10 (two to the tenth power, or 1,024), while "kilo" would continue to refer to 10^3 (ten to the third power, or 1,000). The IEC also defined standard abbreviations for the new prefixes to differentiate them from the base 10 prefixes (e.g., "KiB" denotes the new kibibyte of 1,024 bytes, while "kB" denotes the kilobyte of 1,000 bytes).
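The two conventions described above can be shown side by side; a hedged Python sketch (the `fmt` helper and unit lists are invented for this example), rendering one byte count with SI prefixes (powers of 1000) and IEC prefixes (powers of 1024):

```python
# Render a byte count in either convention: SI uses powers of
# 1000 (kB, MB, GB), IEC uses powers of 1024 (KiB, MiB, GiB).
def fmt(nbytes: int, base: int, units: list[str]) -> str:
    value = float(nbytes)
    for unit in units:
        if value < base:
            return f"{value:.2f} {unit}"
        value /= base
    return f"{value:.2f} {units[-1]}"

SI = ["B", "kB", "MB", "GB"]
IEC = ["B", "KiB", "MiB", "GiB"]

n = 500 * 10**9   # a drive marketed as "500 GB"
print(fmt(n, 1000, SI))    # 500.00 GB
print(fmt(n, 1024, IEC))   # 465.66 GiB
```

This is the gap people notice when a new drive "shrinks" once plugged in: the box uses the SI figure, the operating system the IEC one.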
Call me old-school, but I *still* think in terms of base 2 terminology. I just can't bring myself to use the new IEC terms and abbreviations.
2007-04-08 19:36:37
·
answer #7
·
answered by elness 2
·
1⤊
2⤋
that's confusing, but i never realized it like that :/
2007-04-08 15:35:15
·
answer #8
·
answered by austinblnd 4
·
1⤊
4⤋