
15 answers

Originally, a byte was chosen to be a submultiple of the computer's word size, anywhere from six to nine bits (a character encoding was then fitted into this unit). The popularity of IBM's System/360 architecture starting in the 1960s and the explosion of 8-bit microprocessors in the 1980s made obsolete any meaning other than 8 bits.

2007-03-06 09:22:18 · answer #1 · answered by Barkley Hound 7 · 1 1

Because computers use the base 2 (binary) number system, just like we use base 10
(units, tens, hundreds).
In binary the place values are 1, 2, 4, 8, 16, 32, 64, 128.
That's 8 of them.
One byte ranges from 0 to 255 in the values its binary digits can make before it has to advance into 2 bytes.
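A minimal Python sketch of those place values (the variable names here are my own, just for illustration):

```python
# One byte = 8 bits; each bit is a place value: 1, 2, 4, 8, 16, 32, 64, 128.
place_values = [2 ** i for i in range(8)]
print(place_values)           # [1, 2, 4, 8, 16, 32, 64, 128]

# With all 8 bits set, a byte holds its maximum value, 255:
max_byte = sum(place_values)
print(max_byte)               # 255

# 256 needs a ninth bit, so it no longer fits in a single byte:
print((256).bit_length())     # 9
```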

2007-03-06 09:22:12 · answer #2 · answered by Anonymous · 0 0

First off, hexadecimal is base 16, not base 8; base 8 is called octal.

As computers use binary (base 2), the natural form of numbers for a computer is e.g. 01001, which is not easy for us to read or understand, so we use base 8 or base 16 (octal or hexadecimal). The important thing to note is that both 8 and 16 are powers of 2 (2^3 and 2^4), so an octal number can be directly and easily translated into binary by translating each digit separately.
So 147 in octal is
001 100 111 in binary (almost did that without thinking; I've been in this game for too long!)
Each octal digit translates directly into three binary digits, and each hexadecimal digit translates directly into four binary digits.
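That digit-by-digit translation can be sketched in a few lines of Python (the function name is my own, just for illustration):

```python
# Each octal digit maps to exactly 3 binary digits, because 8 = 2**3.
def octal_to_binary(octal_str):
    # Translate digit by digit, zero-padding each digit to 3 bits.
    return " ".join(format(int(digit, 8), "03b") for digit in octal_str)

print(octal_to_binary("147"))  # 001 100 111
```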
Hope this helps...

2007-03-06 09:35:06 · answer #3 · answered by adr41n 3 · 0 0

A byte was originally used to represent one typed character. With all the punctuation marks, numbers, upper and lower case and so on, they decided they needed at least 128 possibilities, which is 7 bits, plus one bit for parity checking.
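A small sketch of that 7-bits-plus-parity idea, assuming even parity in the top bit (the function name is my own, just for illustration):

```python
# 7-bit ASCII code plus an even-parity bit in the 8th (top) position.
def add_even_parity(code7):
    # The parity bit makes the total count of 1-bits even.
    ones = bin(code7).count("1")
    parity = ones % 2
    return (parity << 7) | code7

a = ord("A")   # 65 = 1000001, two 1-bits, so the parity bit is 0
print(format(add_even_parity(a), "08b"))  # 01000001
c = ord("C")   # 67 = 1000011, three 1-bits, so the parity bit is 1
print(format(add_even_parity(c), "08b"))  # 11000011
```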

2007-03-06 10:55:17 · answer #4 · answered by Nomadd 7 · 0 0

A bit is a binary digit, and a byte is a group of 8 binary digits. That is where the name 8-bit comes from.

2007-03-06 09:24:22 · answer #5 · answered by Doucheball 3 · 0 0

Computers actually read binary (0 or 1). A bit is a binary digit. Eight is a logical grouping; for one thing, binary can easily be converted to octal (base 8) and hexadecimal (base 16).
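A quick Python sketch of why the grouping is convenient: an 8-bit byte splits cleanly into two hexadecimal digits of 4 bits each (the variable names are my own, just for illustration):

```python
# A byte splits cleanly into two hex digits, since 16 = 2**4 (4 bits per digit).
byte = 0b10101111
high_nibble = byte >> 4        # the upper 4 bits: 1010 -> A
low_nibble = byte & 0x0F       # the lower 4 bits: 1111 -> F
print(format(byte, "02X"))     # AF
print(format(high_nibble, "X"), format(low_nibble, "X"))  # A F
```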

2007-03-06 09:26:59 · answer #6 · answered by Anonymous · 0 0

Computers are actually based on the binary system (base 2); hexadecimal is base 16 and octal is base 8. A byte groups 8 binary digits, so it's mentioned as 8 bits.

2007-03-06 15:59:54 · answer #7 · answered by priya 2 · 0 0

The terms "byte" and "word" have changed over the years. Eventually, the industry standardized on the power-of-two sizes that make the most sense in practice.

2007-03-06 09:21:08 · answer #8 · answered by fail r us 3 · 0 1

Because when the US industry started standardizing memory for storage, it was made divisible by 8, not by 10, 2, or 5; otherwise you would have a little left over when saving a document.

2007-03-06 09:21:54 · answer #9 · answered by ironknuckles05 2 · 0 1

It is all down to the hexadecimal stuff that memory is calculated in, I think. Anyway, why is one foot = 12 inches? Computers were mainly made by Americans, who obviously didn't want to make it easy for people, so that they would always have a purpose. As America used imperial measurements, there was no need to keep it all in 10s. It's just a measurement.

2007-03-06 09:23:06 · answer #10 · answered by Anonymous · 0 0
