In the next ten years.
KB Kilobyte
= 1,024 Bytes

MB Megabyte
= 1,048,576 Bytes

GB Gigabyte
= 1,073,741,824 Bytes

TB Terabyte
= 1,024 GB
= 1,048,576 MB
= 1,073,741,824 KB
= 1,099,511,627,776 Bytes
= 8,796,093,022,208 bits

PB Petabyte
= 1,024 TB
= 1,048,576 GB
= 1,073,741,824 MB
= 1,099,511,627,776 KB
= 1,125,899,906,842,624 Bytes
= 9,007,199,254,740,992 bits

EB Exabyte
= 1,024 PB
= 1,048,576 TB
= 1,073,741,824 GB
= 1,099,511,627,776 MB
= 1,125,899,906,842,624 KB
= 1,152,921,504,606,846,976 Bytes
= 9,223,372,036,854,775,808 bits

ZB Zettabyte
= 1,024 EB
= 1,048,576 PB
= 1,073,741,824 TB
= 1,099,511,627,776 GB
= 1,125,899,906,842,624 MB
= 1,152,921,504,606,846,976 KB
= 1,180,591,620,717,411,303,424 Bytes
= 9,444,732,965,739,290,427,392 bits

YB Yottabyte
= 1,024 ZB
= 1,048,576 EB
= 1,073,741,824 PB
= 1,099,511,627,776 TB
= 1,125,899,906,842,624 GB
= 1,152,921,504,606,846,976 MB
= 1,180,591,620,717,411,303,424 KB
= 1,208,925,819,614,629,174,706,176 Bytes
= 9,671,406,556,917,033,397,649,408 bits
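A quick way to check or extend the table above is to compute the powers of 1,024 directly. Here is a minimal Python sketch that does nothing more than reprint the units listed above (1 KB = 1,024 bytes, each step up multiplies by 1,024, and a byte is 8 bits):

# Reprint the binary units above: each step is another factor of 1,024, and 1 byte = 8 bits.
UNITS = ["KB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]

for power, unit in enumerate(UNITS, start=1):
    size_bytes = 1024 ** power        # 2 ** (10 * power)
    size_bits = size_bytes * 8
    print(f"{unit}: {size_bytes:,} bytes = {size_bits:,} bits")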
2006-07-16 15:44:53 · answer #1 · answered by Anonymous · 2⤊ 1⤋
This will not happen in your lifetime (unless you are 3 years old and, over the next 85 years, medicine advances so far that you live multiple lifespans).
IBM has the Blue Gene project to build a petaflop computer, using modules of two CPU cards stacked in groups of 64, 256, 1,024 and so on, until they are all connected together as one unit of 2,000 or more CPUs.
Chip fabrication has hit a wall, for the moment, and until new techniques are developed you will only get small increases in clock speed. Already there are motherboards with four dual-core processors on board, trying to make up for the loss of speed increases with just plain MORE CPUs.
Other comments here about the "byte" are valid. A flop is a Floating Point Operation, and flops per second are a better indication of processing power than hertz (clock speed) alone.
For more "SIZE: information, see the link below,
with some of the info printed here :
______________________________________________
http://plexos.com/256_bit_CPUs_should_be_enough.htm
A look at the increases in computer memory and future limits. Also, an extended system of units: Kilo Mega Giga Tera Peta Exa Zetta Yotta Xona Weka Vunda Uda Treda ... Luma, from Jim Blowers.

22 Feb 2003. Current 32-bit PCs have a memory limit of 4 Gigabytes of RAM, and in a few years PCs will be shipping with this limit. 64-bit computers will be the next step and, dare I say it, the final one? Bill Gates (famously) thought 640 Kilobytes of RAM would be enough for anyone, but he was wrong.

65,536 - 16-bit address bus, e.g. BBC Micro
1,048,576 - old 16-bit (x 16 pages) IBM PCs
4,294,967,296 - 32-bit current PCs, 4 Gigabytes
18,446,744,073,709,551,616 - 64-bit future PCs, 18 Exabytes

The search engine Google currently uses 2 Petabytes of disc space, so with 18 Exabytes you could fit 9,000 Googles into RAM, if you could afford the RAM. Well, if 18 Exabytes is still too little, then try 128-bit computers with 340,000,000,000,000,000,000,000,000,000,000,000,000 bytes (340 Uda-bytes) of RAM, or 170 Zetta-Googles. It's still not that much. A person has about 7 Xona-atoms, so we could only fit about 50 billion people into RAM. Going up to 256 bits gives RAM of 100 Tebilubibytes (10^77 bytes). There are this order of atoms in the universe, so a 256-bit computer could just fit that in. Errr... wait a minute, there wouldn't be enough atoms to build the RAM. 256-bit CPUs should be enough for anyone, then.

Moore's law states that computing power will double every 18 months or so, but the 256-bit limit is one indicator to the contrary. The end of a sequence in computing is approaching. From 8-bit computers it went up quickly to 16-bit, then 32, and now 64 is coming in. They might bother with 128, but there really isn't much point going higher after that. Moore's law definitely doesn't apply to this particular sequence, and I suspect it may fail for other types too.
___________________________________________________
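The excerpt above rests on one piece of arithmetic: an n-bit address bus can address 2^n bytes. Here is a small Python sketch of just that calculation; the bus widths are the ones mentioned in the excerpt, with 20 bits standing in for the "old 16-bit x 16 pages" scheme:

# Maximum byte-addressable memory for a given address-bus width: 2 ** bits bytes.
BUS_WIDTHS = [16, 20, 32, 64, 128, 256]   # 20 bits = the segmented "16 bit x 16 pages" case

for bits in BUS_WIDTHS:
    max_bytes = 2 ** bits
    print(f"{bits:3d}-bit address bus -> {max_bytes:,} bytes")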
Another aspect of the hunt for FAST CPU speed is the "wall" that exists (only momentarily, I am certain) in CPU fabrication, which someone already asked about; here is my brief answer:
____________________________________________________
____________________________________________________
Why have computers been stuck at around 3 gigahertz for years now? And where's Pentium 5, 6, 7, etc.?
Back in the 1990s we went from 100 MHz to about 1.1 GHz. I could easily see the progress computers were making back then. Now, every time I check the latest computers they all top out at around 3.06 GHz. Surely I MUST be missing something?
Nope, you are not missing anything. The manufacturers hit a brick wall in the voltages and SIZE of the chips. If you will remember, the first Pentium chips ran at about 5.00 volts, then they started dropping to 3.52, into the 2-volt range, and now into the 1-volt range. It takes TIME, even at the speed of light, to get a wire or transistor from zero (ground, which can be thought of as a ZERO bit) up to the working voltage, say 1.8 volts. This lag is just the time needed to fill the wire (in a CPU it would be P-channel or N-channel doped silicon material) with electrons, or to drain a conductor of all its electrons. By lowering the PEAK voltage, one can speed up the process. Think of it as climbing a 45-degree hill: if you climb a 5-unit hill, it takes twice as long as climbing a 2.5-unit hill.
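The hill analogy can be put in rough numbers. With a fixed drive current I, the time to swing a capacitance C through a voltage dV is about t = C * dV / I, so halving the voltage swing halves the switching time. A back-of-the-envelope Python sketch follows; the capacitance and current values are made-up, illustrative figures, not measurements of any real chip:

# Rough switching-time estimate: t = C * dV / I, assuming a constant drive current.
def swing_time(capacitance_f, delta_v, drive_current_a):
    """Seconds needed to charge capacitance_f farads through delta_v volts."""
    return capacitance_f * delta_v / drive_current_a

C = 1e-15       # 1 femtofarad of gate/wire capacitance (illustrative only)
I = 100e-6      # 100 microamps of drive current (illustrative only)

for vdd in (5.0, 3.3, 1.8):
    t_ps = swing_time(C, vdd, I) * 1e12
    print(f"{vdd:.1f} V swing -> about {t_ps:.0f} ps")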
The second factor is the size of the wiring, traces and transistors in the CPU. Technology has come a LONG way from the first integrated circuits, which held only a handful of transistors. The first single-chip CPU from Intel was the 4004, a 4-bit processor built for calculators, with roughly 2,300 transistors. I have a photograph of a celebration at Intel on the day they hit the "profit" ratio for manufactured chips that passed inspection - 10%. Then came the 8008, the 8080, the 8086, and then the infamous 8088 used in the first IBM PC. The transistor count went up as CPUs moved, as you pointed out, from 8 bits to 16, 32 and now 64 bits, with many millions of transistors. If you remember the Slot 1, or "chocolate bar", chip that was common a few years ago, that is an example of manufacturers trying to get millions of transistors and CACHE as close together as possible without making a 6-inch-square CPU. The Slot 1 assembly typically had 6 chips on it, using the standard chip size and voltage of the day, and made very powerful use of the L1 and L2 cache.
But the SIZE of the traces has reached a manufacturing limit using "standard" techniques. Now you get dual-core CPUs, and the CACHE (L1 and L2, i.e. level 1 and level 2) is reaching 2 megabytes, in order to speed up the CPU's output, since the raw clock speed and feature size have maxed out for now. I have read of new experimental techniques with single-ATOM channels for CPUs, and of other techniques to make the parts smaller. The other practical limit, which you should easily guess, is the HUGE power requirement of the new CPUs: the average first PC might have had a 100-watt power supply, and now 600-watt supplies are common. This power has to "go" somewhere; it generates tremendous heat in the CPU and must be removed, or the traces and layers will break down. Finding ways to get rid of this heat is a major concern. One guy in Antarctica was able to overclock his 800 MHz CPU into the 2000 MHz range, but not everyone can keep their computer at minus 72 degrees.
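The power side of the story follows the standard first-order rule for switching power, P = C * V^2 * f: lowering the voltage helps quadratically, but higher clock rates and far more transistors push power up anyway. A rough Python sketch with made-up, purely illustrative numbers, not measurements of any real CPU:

# First-order dynamic (switching) power: P = C * V^2 * f.
def dynamic_power(cap_f, vdd, freq_hz):
    """Watts from toggling cap_f farads of effective capacitance at vdd volts, freq_hz times per second."""
    return cap_f * vdd ** 2 * freq_hz

# Illustrative values only.
old_chip = dynamic_power(1e-9, 5.0, 100e6)    # small 1990s-style chip at 5 V, 100 MHz
new_chip = dynamic_power(20e-9, 1.8, 3e9)     # far more transistors at 1.8 V, 3 GHz

print(f"old chip ~{old_chip:.1f} W, new chip ~{new_chip:.0f} W")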
As someone pointed out in an answer above, CPU "clock frequency" is not the only factor in "speed": better-designed cache logic, more cache memory, better chip design, and more efficient software will make a computer faster as well.
A while ago, computer designers discovered Russian computer experts who, lacking the resources of Silicon Valley, had designs and algorithms that did twice the work of the rather bloated "standard" programming of North America. By using better designs and clever programming, with LESS code, CPUs now do much more work than before.
Something simple like cache RAM, put to better use, can radically improve a CPU's output. My explanation of cache, from an earlier answer, is as follows:
______________________________...
Cache is memory, just like any other memory in your computer. The difference between RAM (random access memory) and cache memory is the location, the speed, and how it is used.
It was discovered that by PREDICTING which memory locations "were GOING to be used", the computer could speed up the process of reading the next set of memory locations, by having a "predictor" load that memory into VERY fast memory close to, or inside, the CPU (Central Processing Unit, i.e. the Pentium chip). There are different levels of cache RAM, usually called L1, L2, L3, etc., with the L1 cache built right on the chip itself, or on chips built onto the CPU module at the factory. L2 cache RAM is usually added in slots very close to where the CPU (Pentium) sits on the motherboard. Believe it or not, the chips run so fast that electricity, moving at (nearly) the speed of light, takes TIME to travel across the circuit board, so the L1 cache sits in the heart of the CPU and the L2 add-on cache is placed as close as possible to the core to speed up the transfer.
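You can check that speed-of-light claim with one division: at 3 GHz a clock cycle lasts about a third of a nanosecond, and even light only covers about 10 cm in that time (signals on a real board or chip are slower still). A one-purpose Python sketch:

# Distance light travels in one clock cycle -- an upper bound, since real signals are slower.
SPEED_OF_LIGHT_M_PER_S = 299_792_458

for clock_hz in (100e6, 1e9, 3e9):
    cycle_ns = 1e9 / clock_hz
    distance_cm = SPEED_OF_LIGHT_M_PER_S / clock_hz * 100
    print(f"{clock_hz / 1e9:.1f} GHz: cycle = {cycle_ns:.2f} ns, light travels ~{distance_cm:.0f} cm")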
In early Pentiums (and the AMD, Cyrix, IBM, etc. clones), the L1 cache was controlled by what can be considered an entire computer WITHIN the CPU. This "computer" had a program, with its own RAM and ROM, and it was pre-programmed at the factory. Its job was to watch what the (big) computer was doing and predict what it was going to do NEXT. For example, if you were running a program that used program lines 345, 346, 347, 348, it would predict that you would use lines 349, 350, 351, 352, etc., and load them from the "SLOW" RAM cards in the RAM slots into the blazing-fast cache RAM right inside the CPU, where the CPU could access them immediately. There are different kinds of RAM within the cache "predictor" machinery, such as TAG RAM, dirty RAM, etc., which are used to keep track of which RAM is loaded, used, and changed by the software the CPU is currently running. Tag RAM keeps a list of the RAM locations that were loaded into the L1 cache, and the dirty RAM keeps track of which L1 cache locations have had CHANGES made. When the "predictor" decides that NEW locations of slow RAM are going to be used, it has to put back the old locations' contents: it can ignore values that were NOT CHANGED (saving the time otherwise spent writing the information BACK to the slow RAM), but it must write any CHANGED (dirty) lines back to the slow RAM.
AMD built very clever and powerful "predictor" programs into some of its early chips, and this made such a dramatic difference that they out-performed the much more expensive Pentium chips. The "predictor" component is now extremely important in achieving faster speeds. CPUs typically have 64 KB, 128 KB, 256 KB, or 512 KB of cache built in (K here means 1,024 bytes), and many motherboards have slots near the CPU where the user can add more L2 cache RAM for the predictor logic to use. Cache RAM is much faster, but much more expensive to manufacture, so chips with more cache are usually a lot more expensive. Since CPU clock speeds have hit a limit for the moment, the manufacturers are pushing the on-chip cache (especially L2) to 1 or more MEGABYTES on the high-end chips.
SOOooooo... cache is not "just" fast memory. It is fast memory plus a program, inside the CPU, running constantly to predict which memory is "going" to be used. That program goes out to the slow RAM, grabs the memory contents, and loads them into the L1 cache BEFORE the computer asks for them, to speed up memory transfer. If the predictor GUESSES the next memory locations correctly, the entire computer speeds up greatly...
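To make the predictor idea concrete, here is a toy Python sketch along the lines described above: a tiny cache with a "next address" guesser, tag tracking, and dirty flags that force a write-back only when a line was changed. The class name, size, and the simple sequential heuristic are my own illustration, not how any real CPU's cache is actually built:

# Toy cache with a sequential "predictor": after an access to address A it prefetches A+1,
# guessing that A+1 will be wanted next.  The tag is the slow-RAM address a line holds;
# the dirty flag marks a line that was changed and must be written back before reuse.
class ToyCache:
    def __init__(self, ram, size=4):
        self.ram = ram            # the "slow" backing RAM (a plain Python list)
        self.size = size          # number of cache lines
        self.lines = {}           # tag (address) -> [value, dirty_flag]
        self.hits = self.misses = self.writebacks = 0

    def _evict_if_full(self):
        if len(self.lines) >= self.size:
            victim, (value, dirty) = next(iter(self.lines.items()))  # evict the oldest line
            if dirty:                          # only CHANGED lines go back to slow RAM
                self.ram[victim] = value
                self.writebacks += 1
            del self.lines[victim]

    def _fill(self, addr):
        if addr not in self.lines and 0 <= addr < len(self.ram):
            self._evict_if_full()
            self.lines[addr] = [self.ram[addr], False]

    def read(self, addr):
        if addr in self.lines:
            self.hits += 1
        else:
            self.misses += 1
            self._fill(addr)
        value = self.lines[addr][0]
        self._fill(addr + 1)                   # the "predictor": prefetch the next address
        return value

    def write(self, addr, value):
        self.read(addr)                        # bring the line in (counted as hit or miss)
        if addr not in self.lines:             # it may have been evicted by the prefetch
            self._fill(addr)
        self.lines[addr] = [value, True]       # store the new value and mark the line dirty


ram = list(range(100))
cache = ToyCache(ram)
for a in range(10):                            # a sequential sweep, like lines 345, 346, 347...
    cache.read(a)
cache.write(3, 999)
print(f"hits={cache.hits} misses={cache.misses} writebacks={cache.writebacks}")

On the sequential sweep the guesser is right almost every time, so nearly all the reads are hits; a random access pattern would defeat it, which is exactly why clever predictor designs matter so much.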
______________________________...
There are now motherboards with four dual-core CPUs available, since simply putting MORE of the same old CPUs on one board still increases the total speed.
As you pointed out, clock speeds have almost ground to a halt at the moment, but researchers around the world are VERY busy looking for new breakthroughs. I would hazard a guess, having watched the computer industry since 1978, that you will see blazing speeds once again in a while, when the so-called experimental chip fabrication technologies are brought to the factory floors.
You can easily DOUBLE the speed of just about any computer if you can find a way to keep it at about minus 72 degrees... i.e. getting rid of the heat is not a trivial factor.
There are dozens of articles in the computer and scientific magazines, and you can search the web for articles on experimental techniques, but everyone accepts that, for the moment, speeds are increasing VERY slowly. You might notice that in the COMPUTER magazines everyone is selling LIQUID cooling and going to extreme lengths to cool cases, CPUs and video cards (which really are CPUs!).
The as-yet-undiscovered techniques of the next generation of CPUs will bring lower voltages, lower POWER consumption, smaller fabrication, and ingenious ways to get rid of the heat. Better use of caching, better algorithms in CPU process programming, and dual/quad/multi-processor techniques will also be used.
Until then, you just have to wait!
___________________________________________________
___________________________________________________
This should give you enough information to form an answer to your question... there are many resources on the web discussing CPU limitations and the possible inventions that will solve the problem. Try a dogpile.com search for keywords on the subject!
robin
2006-07-17 00:38:09 · answer #7 · answered by robin_graves 4 · 1⤊ 0⤋