
2007-02-05 22:08:11 · 3 answers · asked by Anonymous in Computers & Internet Hardware Desktops

3 answers

8 bytes
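This answer is the byte count for a 64-bit value; the arithmetic behind it is simply 8 bits per byte:

```python
# 8 bits make one byte, so a 64-bit value occupies 64 / 8 = 8 bytes.
bits = 64
print(bits // 8)  # 8
```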

2007-02-05 22:15:26 · answer #1 · answered by catty 4 · 0 1

All computer processors are based on binary math because of the transistors that make up the semiconductors inside the chips. In very simple terms, a bit is a single 1 or 0 either stored or processed by a transistor, and processors are referred to by their bit processing ability. 32-bit computing has been the mainstream standard since the introduction of Intel's 386 platform in the mid-1980s. So what does the bit count mean?

The bit rating of a processor determines the largest number it can handle natively. An N-bit register can represent 2^N distinct values, so the largest unsigned number that can be processed in a single operation is 2^N - 1. Thus, a 32-bit processor can handle numbers up to 2^32 - 1, or roughly 4.3 billion; any number greater than this requires multiple operations to process. A 64-bit processor, on the other hand, can handle numbers up to 2^64 - 1, roughly 18.4 quintillion (18,446,744,073,709,551,615). This means a 64-bit processor can handle large-number math much more efficiently.
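The ranges described above can be sketched in a few lines of Python (an illustrative example, not part of the original answer):

```python
# The largest unsigned value an N-bit register can hold is 2**N - 1.
def max_unsigned(bits):
    return 2**bits - 1

print(max_unsigned(32))  # 4294967295            (~4.3 billion)
print(max_unsigned(64))  # 18446744073709551615  (~18.4 quintillion)

# Hardware arithmetic past the register width wraps around (modulo 2**N),
# which is why larger values need extra operations on a narrower CPU:
print((max_unsigned(32) + 1) % 2**32)  # 0
```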

2007-02-06 07:23:28 · answer #2 · answered by Shahid 7 · 0 1

Hi

Check out this website:

http://compreviews.about.com/cs/cpus/a/aapr64bit.htm

regards
Bani

2007-02-06 06:16:39 · answer #3 · answered by Bani 2 · 1 0
