In computer architecture, 64-bit integers, memory addresses, and other data units are those that are 64 bits (8 bytes) wide. Similarly, 64-bit CPU and ALU architectures are those based on registers, address buses, or data buses of that size.
As of 2004, 64-bit CPUs are common in servers, and have recently been introduced to the (previously 32-bit) mainstream personal computer arena in the form of the AMD64/EM64T and 64-bit PowerPC processor architectures.
Although a CPU may be 64-bit internally, its external data bus or address bus may have a different size, either larger or smaller, and the term is often used to describe the size of these buses as well. For instance, many current machines with 32-bit processors use 64-bit buses (e.g. the original Pentium and later CPUs), and may occasionally be referred to as "64-bit" for this reason. The term may also refer to the size of an instruction in the computer's instruction set or to any other item of data (e.g. 64-bit double-precision floating-point quantities are common). Without further qualification, however, a computer architecture described as "64-bit" generally has integer registers that are 64 bits wide and thus directly supports dealing both internally and externally with 64-bit "chunks" of integer data.
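To make the register-width point concrete, here is a minimal C sketch that prints the widths of a few common types. It assumes a typical LP64 platform such as 64-bit Linux with GCC; the exact sizes are implementation-defined, so the comments describe common outcomes rather than guarantees.

```c
#include <stdio.h>

/* Prints the widths of common C types. On a typical LP64 system
 * (64-bit Linux/macOS), long and pointers are 8 bytes; on a 32-bit
 * system, or on 64-bit Windows (LLP64), long is only 4 bytes.
 * double is typically 8 bytes (64-bit IEEE 754) in either case. */
int main(void) {
    printf("int       : %zu bytes\n", sizeof(int));
    printf("long      : %zu bytes\n", sizeof(long));
    printf("long long : %zu bytes\n", sizeof(long long));
    printf("pointer   : %zu bytes\n", sizeof(void *));
    printf("double    : %zu bytes\n", sizeof(double));
    return 0;
}
```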
A change from a 32-bit to a 64-bit architecture is a fundamental alteration, as most operating systems must be extensively modified to take advantage of the new architecture. Other software must also be ported to use the new capabilities; older software is usually supported either through a hardware compatibility mode (in which the new processors support an older 32-bit instruction set as well as the new modes), through software emulation, or through the implementation of a 32-bit processor core within the 64-bit processor die (as with the Itanium processors from Intel, which include an x86 processor core to run 32-bit x86 applications).
One significant exception to this is the AS/400, whose software runs on a virtual ISA called TIMI (Technology Independent Machine Interface), which is translated to native machine code by low-level software before being executed. That low-level software is all that has to be rewritten to move the entire OS and all software to a new platform, as when IBM transitioned the line from the older 32/48-bit "IMPI" instruction set to 64-bit PowerPC (IMPI was nothing like 32-bit PowerPC, so this was an even bigger transition than moving from a 32-bit version of an instruction set to a 64-bit version of the same instruction set). Another significant exception is IBM z/Architecture, which readily handles applications with different addressing expectations (24-, 31-, and 64-bit) running concurrently.
While 64-bit architectures indisputably make working with huge data sets easier in applications such as digital video, scientific computing, and large databases, there has been considerable debate as to whether they or their 32-bit compatibility modes will be faster than comparably priced 32-bit systems for other tasks. On the x86-64 architecture (AMD64 and EM64T), most 32-bit operating systems and applications run smoothly on the 64-bit hardware.
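As a rough illustration of why large data sets benefit, the following C sketch compares the 32-bit and 64-bit address-space limits; the 16 EiB figure is purely theoretical, since real CPUs and operating systems expose far less than the full 64-bit space.

```c
#include <stdint.h>
#include <stdio.h>

/* Back-of-the-envelope address-space comparison: a 32-bit pointer can
 * distinguish 2^32 bytes (4 GiB), while a 64-bit pointer can in
 * principle distinguish 2^64 bytes (about 16 EiB). This, rather than
 * raw instruction speed, is why large databases, video editing, and
 * scientific data sets benefit from 64-bit addressing. */
int main(void) {
    uint64_t bytes32 = (uint64_t)UINT32_MAX + 1;   /* 4 294 967 296 */
    uint64_t gib32   = bytes32 >> 30;              /* = 4 GiB */
    printf("32-bit address space: %llu bytes (%llu GiB)\n",
           (unsigned long long)bytes32, (unsigned long long)gib32);
    printf("64-bit address space: 2^64 bytes (about 16 EiB)\n");
    return 0;
}
```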
Sun's 64-bit Java virtual machines are slower to start up than their 32-bit virtual machines because Sun still assumes that all 64-bit machines are servers and has only implemented the "server" compiler (C2) for 64-bit platforms. The "client" compiler (C1) produces worse code but compiles much faster. So although a Java program on a 64-bit JVM may well perform better over a long run (typical of long-running "server" applications), its start-up time is likely to be much longer. For short-lived applications (such as javac), the increased start-up time can dominate the run time, making the 64-bit JVM slower overall. (Since a 64-bit motherboard can, and usually does, accommodate more memory, the extra memory requirements are not the major problem.)
Speed is not the only factor to consider when comparing 32-bit and 64-bit processors. Workloads such as multi-tasking, stress testing, and clustering (for HPC) may be better suited to a 64-bit architecture given the right deployment, which is why 64-bit clusters have been widely deployed in large organizations such as IBM, Vodafone, HP, and Microsoft.
Manufacturers other than AMD also make 64-bit CPUs, including IBM, HP, Sun, Intel, and Fujitsu, but most of these are aimed at servers.
2006-08-24 20:04:54 · answer #1 · answered by Anonymous · 1⤊ 0⤋
64 bits is an amount of data that occupies space: 64 bits is equal to 8 bytes. Information is built up from bits the way an ocean is built up from drops of water; depending on the kind of information you store, bits accumulate and take up space on the computer until you fill it. It is simply a measure of the space data occupies on a computer.
2006-08-24 20:10:52 · answer #2 · answered by John 1 · 1⤊ 0⤋
64-bit, when used of processors, refers to the width of the internal registers. The biggest benefit is the ability to address more memory and handle larger files with less work. Some software can run faster on a 64-bit processor if it is written to take advantage of the bigger registers.
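A small C sketch of the point about larger files: the hypothetical 6 GiB file size below does not fit in a 32-bit register, but a 64-bit register handles it directly (how many machine instructions the arithmetic actually takes is compiler- and CPU-dependent).

```c
#include <stdint.h>
#include <stdio.h>

/* 64-bit integer arithmetic. On a 64-bit CPU each operation below can
 * use a single register instruction; a 32-bit CPU must synthesize it
 * from several 32-bit instructions (add with carry, etc.), which is
 * where bigger registers can translate into speed for code that
 * actually works with 64-bit values. */
int main(void) {
    uint64_t file_size  = 6ULL * 1024 * 1024 * 1024;  /* 6 GiB: too big for 32 bits */
    uint64_t block_size = 4096;
    uint64_t blocks     = (file_size + block_size - 1) / block_size;
    printf("%llu bytes -> %llu blocks of %llu bytes\n",
           (unsigned long long)file_size,
           (unsigned long long)blocks,
           (unsigned long long)block_size);
    return 0;
}
```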
2006-08-24 20:03:58 · answer #3 · answered by Ken H 4 · 1⤊ 0⤋
It just means that one elementary processor operation can work with 64 bits at once. If a computer can handle 64 bits at a time instead of 32, it is tremendously easier to move the same amount of data, with a corresponding speed benefit: think how much faster you would be if you doubled your own size. Same idea here.
2006-08-24 20:43:26 · answer #4 · answered by Andy T 7 · 1⤊ 0⤋