
If I were to overclock, would the CPU get hot fast or what? And how do I overclock? I hear about people doing it a lot.

2007-04-16 12:55:32 · 8 answers · asked by alex_visal 1 in Computers & Internet Hardware Desktops

8 answers

Downside: the possibility of smoking your processor. Upside: absolutely none! Back in the day it had some advantages, but today, with your processor, I doubt you have any software that would benefit from overclocking. If you want optimal performance, buy more RAM.

2007-04-16 13:00:41 · answer #1 · answered by Anonymous · 1 1

Most overclocking can be done within the BIOS. Also check with the motherboard manufacturer, which may offer software utilities that do the tweaking as well. The problem with overclocking is that it pushes the hardware beyond its original design limits; it can therefore be more prone to crashes, and it voids the manufacturer's warranty if something should happen.

Be aware that overclocking modifies the bus speed and/or multiplier, and thus creates more heat and usually increases the voltage and speed across other motherboard components as well as the CPU. Because of this, you should have an ample number of fans, if not water-cooling, and be aware of how sensitive some components such as video cards and memory are to speed and voltage changes. This is why overclocking should be done incrementally, preferably with software that will stress-test the system to find the most stable settings. For example, my Asus motherboard with an Nvidia chipset uses the nForce utility to automatically tweak the system and stress-test it.
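
To make the "small step, then stress-test" idea concrete, here is a purely illustrative Python sketch. Nothing in it touches real hardware: the set_fsb, run_stress_test and read_cpu_temp functions and all the numbers are made-up stand-ins, since the real adjustments are made in the BIOS or a vendor utility.

import random

# Purely illustrative sketch of incremental overclocking with stress tests.
# set_fsb(), run_stress_test() and read_cpu_temp() are simulated stand-ins.

STOCK_FSB_MHZ = 200
MAX_SAFE_TEMP_C = 70  # assumed comfort limit; varies by CPU and cooler

def set_fsb(mhz):
    print(f"(pretend) FSB set to {mhz} MHz")

def read_cpu_temp(mhz):
    # toy model: temperature rises as the FSB is pushed higher
    return 45 + (mhz - STOCK_FSB_MHZ) * 0.5

def run_stress_test(mhz):
    # toy model: the further past stock, the more likely the test fails
    return random.random() > (mhz - STOCK_FSB_MHZ) / 100

def find_stable_fsb(step=5, ceiling=260):
    stable = STOCK_FSB_MHZ
    fsb = STOCK_FSB_MHZ
    while fsb + step <= ceiling:
        fsb += step
        set_fsb(fsb)
        if not run_stress_test(fsb) or read_cpu_temp(fsb) > MAX_SAFE_TEMP_C:
            return stable  # back off to the last known-good setting
        stable = fsb
    return stable

print("Highest stable FSB in this toy model:", find_stable_fsb(), "MHz")
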
I do agree with Dr. House's post about buying more RAM. The slowest part of any computer is the hard drive; by having more memory, less time is spent retrieving information from the hard drive.

2007-04-16 13:09:10 · answer #2 · answered by Elliot K 4 · 1 0

Better do some reading on the basics of overclocking before you do anything. This will help you understand the effect and significance of every adjustment you make, so that you will not be surprised by a BSOD (Blue Screen Of Death). You will later learn that it is really easy to bring your PC back to life (a minute or two, maybe longer for newbies).

You will also learn that you can do it via software or manually in the BIOS. You are less likely to damage anything if you just stay within stock voltages. I've been doing and enjoying it for almost 4 years and I've yet to fry a proc, PSU, mobo or RAM module. Feel free to IM me.

2007-04-16 15:03:25 · answer #3 · answered by Karz 7 · 0 0

I had an AMD once and it came with software that supported overclocking... Check in the search bar. Type "overclocking AMD dual-core 5600" and see what you find.

Downsides: overclocking causes the CPU to heat up more than usual. You need more cooling fans or a water-cooled system installed to keep the CPU's heat down so you don't fry it in a few months.

2007-04-16 13:02:53 · answer #4 · answered by d4d9er 5 · 0 0

First off, go here: http://forums.majorgeeks.com/showthread.php?t=52812
This is the best place I have found to find information on overclocking.

With that being said, if you are willing to take the risk of voiding your warranty, having to buy a new computer, etc., then be careful, go slow, and if a problem arises, back off on your overclocking. If it is only a small boost, just to help run games like G.R.A.W. (Ghost Recon: Advanced Warfighter), then I wouldn't go too high.

As stated above, you would be better off finding more RAM with the lowest latency timings possible, and upgrading your video card.

Hope this helps!

2007-04-16 13:15:53 · answer #5 · answered by shattered_likeness 1 · 1 0

Overclocking is done in the BIOS of your motherboard (usually, press DEL to get into setup when you first start booting).

Advantages - gain some speed and maybe bragging rights (whoopee)

Disadvantages - overheat the processor to the point of uselessness, overheat ram, overheat system board, ruin your computer, make a nice boat anchor (oh, wait, maybe that's an advantage)

I've overclocked before and gained some framerate in a few games, but overall it wasn't worth the potential damage or the cost of overclocking (water-cooled systems, more and bigger case fans, etc.).

Save your money and buy more ram, a better video card, or whatever.

2007-04-16 13:01:06 · answer #6 · answered by davidinark 5 · 1 1

If I were you, I would pay the extra money to get an Intel processor, not an AMD. I would also choose a Core 2 Duo, not a Core 2 Quad, since you will most likely not need the quad core. I would buy the E8400 processor.

2016-10-22 08:45:18 · answer #7 · answered by ? 4 · 0 0

There are several considerations when overclocking. Overclocking boosts the performance of a computer system by increasing clock frequencies, which requires certain precautions. The first consideration is to ensure that the component is supplied with adequate power to operate at the new speed. However, using improper settings or applying excessive voltage can permanently damage a component. Since tight tolerances are required for overclocking, only more expensive motherboards, with the advanced settings that computer enthusiasts are likely to use, have built-in overclocking capabilities. Motherboards with fewer settings, such as those found in OEM systems, lack such features in order to eliminate the possibility of misconfiguration by an inexperienced user and to cut down on support costs and warranty claims for the manufacturer.

All electronic circuits dissipate heat generated by the movement of electrons. As clock frequencies in digital circuits increase, the temperature goes up. Due to the excessive heat produced by overclocked components, an effective cooling system is often necessary to avoid damaging the hardware. In addition, digital circuits slow down at high temperatures due to changes in MOSFET device characteristics. Wire resistance also increases slightly at higher temperatures, contributing to decreased circuit performance.

Because most stock cooling systems are designed for the amount of heat produced during non-overclocked use, overclockers typically turn to more effective cooling solutions, such as powerful fans or heavy-duty heatsinks. Size, shape, and material all influence the ability of a heatsink to dissipate heat. Efficient heatsinks are often made entirely of thermally conductive copper, but these are often expensive.[1] Aluminum is a more widely used material for heatsinks. Cast iron is the least expensive, but should be avoided for its poor thermal conductivity. Many good-quality heatsinks combine two or more materials to maximize thermal conductivity while minimizing cost,[1] such as the Zalman CNPS7700-ALCU CPU cooler, a cheaper copper/aluminium version of the popular all-copper Zalman CNPS7700-CU.

Water cooling, in which a liquid coolant carries waste heat to a radiator (much like an automobile engine's cooling system), provides more effective cooling than heatsink-and-fan combinations when properly implemented, because liquid is denser than air and therefore offers greater thermal transference.

Thermoelectric cooling (TEC) devices are becoming more popular with the onset of high-TDP processors from both Intel and AMD. TEC devices create temperature differences between two plates by running an electric current through them. This method of cooling is extremely effective but very inefficient, which leads to a lot of excess heat. Because of this, it is necessary to supplement TEC devices with a substantial convection-based heatsink or a water-cooling system. Companies like Vigor Gaming offer all-in-one units that combine the advantages of TEC cooling with easy installation. One major drawback of TECs is their large power overhead, sometimes drawing more than 60 W.

Other cooling methods are forced convection and phase-change cooling, the latter being the method used in refrigerators. Submersion, liquid nitrogen, and dry ice are used as cooling methods in extreme cases, such as record-setting attempts or one-off experiments, rather than for cooling an everyday system. The submersion method involves sinking part of the computer system directly into a chilled liquid that is thermally conductive but sufficiently low in electrical conductivity. The advantage of this technique is that no condensation can form on sensitive electronic components. A good submersion liquid is Fluorinert™ made by 3M, which is expensive and requires a permit to purchase. Another option is mineral oil, but if it has impurities such as water or scenting agents it will conduct electricity.

In 2003, Tom's Hardware Guide experimented with a Pentium 4 3.4 GHz HT processor, using liquid nitrogen and forced convection for cooling. They managed to achieve a clock frequency of over 5 GHz, a considerable increase over the original clock speed and much faster than any processor in production at the time.[2] These tests are of interest to enthusiasts as illustrations of the performance achievable when large amounts of heat are removed from a system.

These extreme methods are generally impractical in the long term, as they require refilling reservoirs of coolant or are noisy. Moreover, silicon-based MOSFETs will cease to function ("freeze out") below temperatures of roughly 100 K, so using extremely cold coolants may cause devices to stop working.

An overclocked component operates outside of the manufacturer's recommended operating conditions, and as such may operate incorrectly, leading to system instability. An unstable overclocked system, while it may work fast, can be frustrating to use. Another risk is silent data corruption: errors that initially go undetected. In general, overclockers claim that testing can ensure that an overclocked system is stable and functioning correctly. Although software tools are available for testing hardware stability, it is generally impossible for anyone but the processor manufacturer to thoroughly test the functionality of a processor. A particular "stress test" can verify only the functionality of the specific instruction sequences used, in combination with specific data, and may not detect faults in other operations. For example, an arithmetic operation may produce the correct result but incorrect flags; if the flags are not checked, the error will go undetected. Achieving good fault coverage requires immense engineering effort, and despite all the resources dedicated to validation by manufacturers, mistakes can still be made. To further complicate matters, in process technologies such as silicon on insulator, devices display hysteresis (a circuit's performance is affected by the events of the past), so without carefully targeted tests it is possible for a particular sequence of state changes to work at overclocked speeds in one situation but not another, even if the voltage and temperature are the same. Often, an overclocked system which passes stress tests experiences "inexplicable" instabilities in other programs.[3]

Many overclockers, however, are satisfied with perceived stability; while their system may operate incorrectly, the errors may not be overtly apparent to the user. In overclocking circles, "stress tests" or "torture tests" are used to check for correct operation of a component. These workloads are selected as they put a very high load on the component of interest (e.g. a graphically-intensive application for testing video cards, or a processor-intensive application for testing processors). Popular stress tests include Prime95, Super PI, SiSoftware Sandra, BOINC and Memtest86. The hope is that any functional-correctness issues with the overclocked component will show up during these tests, and if no errors are detected during the test, the component is then deemed "stable". Since fault coverage is important in stability testing, the tests are often run for long periods of time, hours or even days.
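
As a rough sketch of the principle behind such torture tests (real tools like Prime95 use far heavier and better-targeted workloads; this small Python loop only illustrates "compute something deterministic, compare against a reference, and flag any mismatch as silent corruption"):

import time

def workload(n=200_000):
    # deterministic mix of integer and floating-point work
    acc = 0.0
    for i in range(1, n):
        acc += (i * i) % 7 + 1.0 / i
    return acc

def torture_test(duration_s=60):
    reference = workload()           # reference result computed once
    deadline = time.time() + duration_s
    iterations = 0
    while time.time() < deadline:
        if workload() != reference:  # mismatch = functional-correctness error
            return False, iterations
        iterations += 1
    return True, iterations

ok, iters = torture_test(duration_s=10)
print("stable" if ok else "ERROR DETECTED", "after", iters, "iterations")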

Overclockability arises in part due to the economics of the manufacturing processes of CPUs. In most cases, CPUs with different rated clock speeds are manufactured via exactly the same process. The clock speed that the CPU is rated for is the speed at which the CPU has passed the manufacturer's functionality tests when operating in worst-case conditions (for example, the highest allowed temperature and lowest allowed supply voltage). Manufacturers must also leave additional margin for reasons discussed below.

When a manufacturer rates a chip for a certain speed, it must ensure that the chip functions properly at that speed over the entire range of allowed operating conditions. When overclocking a system, the operating conditions are usually tightly controlled, making the manufacturer's margin available as free headroom. Other system components are generally designed with margins for similar reasons; overclocked systems absorb this designed headroom and operate at lower tolerances. Pentium architect Bob Colwell calls overclocking an "uncontrolled experiment in better-than-worst-case system operation".[4]

Some of what appears to be spare margin is actually required for proper operation of a processor throughout its lifetime. As semiconductor devices age, various effects such as hot carrier injection, negative bias temperature instability and electromigration reduce circuit performance. When overclocking a new chip it is possible to take advantage of this margin, but as the chip ages this can result in situations where a processor that has operated correctly at overclocked speeds for years spontaneously fails to operate at those same speeds later. If the overclocker is not actively testing for system stability when these effects become significant, errors encountered are likely to be blamed on sources other than the overclocking.

Many de facto benchmarks are used to evaluate performance. The benchmarks can themselves become a kind of 'sport', in which users compete for the highest scores. As discussed above, stability and functional correctness may be compromised when overclocking, and meaningful benchmark results depend on correct execution of the benchmark. Because of this, benchmark scores may be qualified with stability and correctness notes (e.g. an overclocker may report a score, noting that the benchmark only runs to completion 1 in 5 times, or that signs of incorrect execution such as display corruption are visible while running the benchmark).

Given only benchmark scores it may be difficult to judge the difference overclocking makes to the computing experience. For example, some benchmarks test only one aspect of the system, such as memory bandwidth, without taking into consideration how higher speeds in this aspect will improve system performance as a whole. Apart from demanding applications such as video encoding, high-demand databases and scientific computing, memory bandwidth is typically not a bottleneck, so a great increase in memory bandwidth may be unnoticeable to a user depending on the applications they use. Other benchmarks, such as 3DMark, attempt to replicate game conditions, but because some tests involve non-deterministic physics, such as ragdoll motion, the scene is slightly different each time and small differences in test scores are swamped by the noise floor.


Variance
The extent to which a particular part will overclock is highly variable. Processors from different vendors, production batches, steppings, and individual units will all overclock to varying degrees.

Commercial system builders or component resellers sometimes overclock to sell items at higher profit margins. The retailer makes more money by buying lower-value components, overclocking them, and selling them at prices appropriate for a non-overclocked system at the new speed. In some cases an overclocked component is functionally identical to a non-overclocked one of the new speed; however, if an overclocked system is marketed as a non-overclocked one (it is generally assumed that unless a system is specifically marked as overclocked, it is not), this is considered fraudulent.

Overclocking is sometimes offered as a legitimate service or feature for consumers, in which a manufacturer or retailer tests the overclocking capability of processors, memory, video cards, and other hardware products. Several video card manufacturers now offer factory-overclocked versions of their graphics accelerators, complete with a warranty, which is an attractive option for enthusiasts seeking improved performance without sacrificing common warranty protections. Such factory-overclocked products often demand a marginal price premium over reference-clocked components, but the performance increase and cost savings can sometimes outweigh the price of similar, albeit higher-performance, offerings from the next product tier.

Naturally, manufacturers would prefer that enthusiasts pay additional money for profitable high-end products; there are also concerns that less reliable components and shortened product life spans could hurt brand image. It is speculated that such concerns often motivate manufacturers to implement overclocking-prevention mechanisms such as CPU locking. These measures are sometimes marketed as a consumer-protection benefit, which typically generates a mixed reception from overclocking enthusiasts.


Advantages
The user can, in many cases, purchase a slower, cheaper component and overclock it to the speed of a more expensive one. The Intel Core 2 Duo E6400 (£110 or $218), for example, can be overclocked to speeds of over 3 GHz, with performance comparable to the Core 2 X6800 (£500 or $975).
Faster performance in games, encoding, video editing applications, and system tasks at no additional expense. This means that systems can become more "future proof" in that performance is of such high standard that an upgrade may not be required for some time.
Some systems have "bottlenecks", where a small overclock of one component can help realize the full potential of another component to a greater degree than the limiting hardware is itself overclocked. For instance, many motherboards with AMD Athlon 64 processors limit the speed of four units of RAM to 333 MHz. However, the memory speed is computed by dividing the processor speed (which is a base number times a CPU multiplier; for instance, 1.8 GHz is most likely 9x200 MHz) by a fixed integer, such that at stock speeds the RAM would run at a clock rate near 333 MHz. By manipulating elements of how the processor speed is set (usually lowering the multiplier), one can often overclock the processor a small amount, around 100-200 MHz (about 10%), and gain a RAM clock rate of 400 MHz (a 20% increase), realizing the full potential of the RAM (see the worked example after this list).
Overclocking can be an engaging hobby in itself and supports many dedicated online communities. The PCMark website is one such site, hosting a leaderboard of the most powerful computers benchmarked with the program.
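
A small worked example of the divider arithmetic mentioned in the bottleneck point above. The numbers and the divider formula are only illustrative: the exact dividers depend on the particular Athlon 64 and BIOS.

import math

# Illustrative arithmetic only. The idea: the memory clock is the CPU clock
# divided by an integer that is fixed by the multiplier and the selected
# memory speed, so juggling multiplier and base clock can move the RAM much
# closer to its rated speed than the CPU is overclocked.

def mem_clock(multiplier, base_mhz, ddr_target_mhz):
    cpu_mhz = multiplier * base_mhz
    divider = math.ceil(multiplier * 200 / ddr_target_mhz)  # fixed integer
    return cpu_mhz, round(cpu_mhz / divider, 1)

# Stock: 9 x 200 MHz = 1800 MHz CPU, memory forced to the DDR333 setting
print(mem_clock(9, 200, 166.7))   # (1800, 163.6) -> RAM well short of DDR333

# Lower the multiplier and raise the base: 8 x 250 MHz = 2000 MHz CPU
# (~11% overclock), and the same DDR333 setting now yields a full 200 MHz
# (DDR400) memory clock, about a 20% gain for the RAM.
print(mem_clock(8, 250, 166.7))   # (2000, 200.0)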

Many of the disadvantages of overclocking can be mitigated or reduced in severity by skilled overclockers. However, novice overclockers may make mistakes while overclocking which can introduce avoidable drawbacks, and potentially result in damage to the overclocked components.


General disadvantages
These disadvantages are unavoidable by both novices and veterans.

The lifespan of a processor is negatively affected by higher operation frequencies, increased voltages and heat. However, overclockers argue that with the rapid obsolescence of processors coupled with the long life of solid state microprocessors (10 years or more), the overclocked component will likely be replaced before its eventual failure. Also, since many overclockers are enthusiasts, they often upgrade components more often than the general population, offering further mitigation of this disadvantage.
Increased clock speeds and voltages result in higher power consumption.
While overclocked systems may be tested for stability before usage, stability problems may surface after prolonged usage due to new workloads or untested portions of the processor core. Aging effects previously discussed may also result in stability problems after a long period of time.
High-performance fans used for extra cooling can produce large amounts of noise. Older popular models of fans used by overclockers can produce 50 decibels or more; however, most modern fans overcome this problem with aerodynamically optimized blades and heatsinks designed for smoother airflow and minimal noise (around 20 decibels). Some people do not mind the extra noise, and it is common for overclockers to have computers that are much louder than stock machines. Noise can be reduced by using strategically placed larger fans, which deliver more airflow with less noise than smaller fans, by using alternative cooling methods such as liquid or phase-change cooling, or by lining the chassis with foam insulation. Now that overclocking is of interest to a larger audience, this is less of a concern, as manufacturers have begun researching and producing high-performance fans that are no longer as loud as their predecessors. Similarly, mid- to high-end PC cases now use larger fans (to provide better airflow with less noise) and are designed with cooling and airflow in mind.
Without adequate cooling, the excess heat produced by an overclocked processing unit increases the ambient air temperature of an interior case; consequently, other components may be slightly affected.
Overclocking will not necessarily save money. Non-trivial speed increases often require premium cooling equipment to avoid unacceptably high temperatures, and it can become an expensive pastime: most people who consider themselves overclockers spend significantly more on computer equipment than the average person. However, recent innovations in CPU manufacturing mean that significant gains can be made with certain processors. This is shown clearly in the Intel Core 2 range: the E6300 and E6400 have half the L2 cache of the rest of the Conroe family, and the CPU multipliers are locked to lower values the lower in the range a processor sits (7x for the E6300 and 8x for the E6400). This means that Conroe clock speeds are limited mainly by Intel's manufacturing process, as any chip in the family can be overclocked to nearly X6800 speeds with only a marginal difference in performance.
Overclocking has a real potential to end in component failure ("heat death"). Most warranties do not cover units that fail as a result of overclocking. However, overclocker-friendly motherboards tend to offer safety measures to stop this from happening (e.g. limits on FSB increases), so that in practice only voltage alterations can cause such harm. It could be argued, however, that incremental voltage changes have very little chance of damaging components, as any signs of instability would manifest themselves beforehand.

Disadvantages of overclocking
Increasing the operating frequency of a component increases its thermal output roughly linearly, while an increase in voltage causes a roughly quadratic increase (see the sketch after this list). Overly aggressive voltage settings or improper cooling may cause chip temperatures to rise so quickly that irreversible damage is done to the chip, causing immediate failure or significantly reducing its lifetime.
More common than hardware failure is functional incorrectness. Although the hardware is not permanently damaged, this is inconvenient and can lead to instability and data loss. In rare, extreme cases entire filesystem failure may occur, causing the loss of all data.[5]
With poor placement of fans, turbulence and vortices may be created in the computer case, resulting in reduced cooling effectiveness and increased noise. In addition, improper fan mounting may cause rattling or vibration.
Improper installation of exotic cooling solutions like liquid or phase-change cooling may result in failure of the cooling system, which may result in water damage or damage to the processor due to the sudden loss of cooling.
Products sold specifically for overclocking are sometimes just decoration ("rice"). Novice buyers should be aware of the marketing hype surrounding some products. Examples include heat spreaders and heatsinks designed for chips which do not generate enough heat to benefit from these devices.
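
As a rough sketch of the scaling mentioned in the first point of this list, using the standard dynamic-power approximation P ≈ C·V²·f; the baseline clock and voltage below are invented reference values that only illustrate the relative effect.

# Dynamic power of CMOS logic scales roughly as P ~ C * V^2 * f, so frequency
# contributes linearly and voltage quadratically to the heat to be removed.
# The baseline figures are made-up reference values.

def relative_power(freq_ghz, volts, base_freq=2.0, base_volts=1.3):
    return (freq_ghz / base_freq) * (volts / base_volts) ** 2

print(round(relative_power(2.2, 1.30), 2))  # +10% clock, stock voltage  -> ~1.10x
print(round(relative_power(2.2, 1.45), 2))  # +10% clock, +11.5% voltage -> ~1.37x
print(round(relative_power(2.6, 1.50), 2))  # +30% clock, +15% voltage   -> ~1.73x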

Limitations
The utility of overclocking is limited for a few reasons:

Personal computers are mostly used for tasks which are not computationally demanding, or which are performance-limited by bottlenecks outside of the local machine. For example, web browsing does not require a very fast computer, and the limiting factor will almost certainly be the speed of the internet connection of either the user or the server. Overclocking a processor will also do little to speed up application loading times, as the limiting factor is reading data off the hard drive. Other general office tasks such as word processing and sending email are more dependent on the efficiency of the user than on the speed of the hardware. In these situations any speed increases through overclocking are unlikely to be noticeable.
It is generally accepted that, even for computationally heavy tasks, speed increases of less than ten percent are difficult to discern. For example, when playing video games, it is difficult to notice an increase from 60 to 66 frames per second without the aid of an on-screen frame counter. However, an equivalent increase in frames per second at lower frame rates (such as from 17 to 23 FPS) can mean a significant improvement in gameplay.


Graphics cards can also be overclocked, with utilities such as NVIDIA's Coolbits or the PEG Link Mode on ASUS motherboards. Overclocking a video card usually shows a much better result in gaming than overclocking the processor or memory. As with overclocking a processor, sufficient cooling is a must.

Along with higher clock frequencies come higher temperatures. Coupled with the fact that most video cards ship with coolers designed only for stock clock speeds, this means many graphics cards overheat and burn out when overclocked too far.

Before irreversible damage occurs to the graphics card, in-game distortions known as artifacts become visible and serve as a good warning sign. Two such warning signs are widely recognized: green, flashing, random triangles appearing on the screen in 99% of cases correspond to overheating of the GPU (Graphics Processing Unit) itself, while white, flashing dots appearing randomly (usually in groups) on the screen mean that the card's RAM (memory) is overheating. It is common to run into one of these problems when overclocking graphics cards. Showing both symptoms at the same time usually means either a drastically over-overclocked (overheating) card, or poor-quality components used to produce the card (in which case the card is not overclockable to any meaningful degree).

Some overclockers also make use of a hardware voltage modification, where a potentiometer is added to the video card to give the GPU more voltage and much better overclocking headroom. Voltage mods are very risky and may result in a dead video card, especially if the modification ("voltmod") is applied by an inexperienced individual. It is also worth mentioning that adding physical elements to the video card immediately voids the warranty (even if the component has been designed and manufactured with overclocking in mind and has the appropriate section in its warranty).

Flashing and unlocking are ways to gain performance out of a video card without overclocking it per se.

Flashing refers to using the BIOS of another card, based on the same core and design specs, to "override" the original BIOS, effectively making it a higher-model card; however, flashing can be difficult, and a bad flash is sometimes irreversible. Sometimes stand-alone software to modify the BIOS files can be found (the GeForce 6/7 series are well regarded in this respect). It is not always necessary to acquire a BIOS file from a better model of video card (though the card whose BIOS is used must be compatible, i.e. the same model base, design and/or manufacturing process, revisions, etc.). For example, video cards with 3D accelerators (the vast majority of today's market) have two voltage and speed settings, one for 2D and one for 3D, but were designed to operate with three voltage stages, the third lying somewhere between the other two; it serves as a fallback when the card overheats or as a middle stage when going from 2D to 3D operation. Therefore, it can be wise to set this middle stage before "serious" overclocking, specifically because of this fallback ability: the card can drop down to this speed, reducing its performance by a few (or sometimes a few dozen, depending on the settings) percent and cool down, without dropping out of 3D mode, and afterwards return to the desired full-speed clock and voltage settings.

Some cards also have certain abilities not directly connected with overclocking. For example, NVIDIA's GeForce 6600GT (AGP flavor) features a temperature monitor (used internally by the card), which is invisible to the user in the 'vanilla' version of the card's BIOS. Modifying the BIOS (taking it out, reprogramming the values and flashing it back in) can allow a 'Temperature' tab to become visible in the card driver's advanced menu.

Unlocking refers to enabling extra pipelines and/or pixel shaders. The 6800LE, the 6800GS and 6800 (AGP models only) were some of the first cards to benefit from unlocking. While these models have either 8 or 12 pipes enabled, they share the same 16x6 GPU core as a 6800GT or Ultra, but may not have passed inspection when all their pipelines and shaders were unlocked. Currently cards from both ATI and NVIDIA are being unlocked and there is no reason to believe that this technique will disappear.

Generally, cards in the same 'family' share the same basic design, even though they run at different speeds and may have different features, effectively varying their performance (as observed with the GeForce 6 series of cards, ranging from LE to 'vanilla' to GT to Ultra). This is because creating a completely new design costs more than producing the same card and disabling some features, underclocking it, and offering it as a 'budget' model. Besides that, the manufacturing process is not perfect; some cards come off the line performing worse than others of the same design (or with defects), and can be designated as 'lower cost, slower' versions (i.e. the defective processing pipelines are disabled, the card's speed is reduced, and from an otherwise incapable GeForce 6800 we get a 6800LE).

It is important to remember that while pipeline unlocking sounds very promising, there is absolutely no way of determining whether these 'unlocked' pipelines will operate without errors, or at all (this information is solely at the manufacturer's discretion). In a worst-case scenario, the card may never start up again, resulting in a 'dead' piece of equipment. It is possible to revert to the card's previous settings, but doing so involves manual BIOS flashing using special tools and an identical but original BIOS chip.


DDR SDRAM or double-data-rate synchronous dynamic random access memory is a class of memory integrated circuit used in computers. It achieves greater bandwidth than the preceding single-data-rate SDRAM by transferring data on the rising and falling edges of the clock signal (double pumped). Effectively, it nearly doubles the transfer rate without increasing the frequency of the front side bus. Thus a 100 MHz DDR system has an effective clock rate of 200 MHz when compared to equivalent SDR SDRAM, the “SDR” being a retrospective designation.

In electronic engineering, DDR2 SDRAM or double-data-rate two synchronous dynamic random access memory is a random access memory technology used for high-speed storage of the working data of a computer or other digital electronic device.

Module      Bus clock   Data rate   Peak bandwidth
PC2-3200    200 MHz     DDR2-400    3.200 GB/s
PC2-4200    266 MHz     DDR2-533    4.264 GB/s
PC2-5300    333 MHz     DDR2-667    5.336 GB/s
PC2-6400    400 MHz     DDR2-800    6.400 GB/s
PC2-8500    533 MHz     DDR2-1066   8.500 GB/s (e.g. Corsair)
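
The peak-bandwidth column follows directly from the module names: the DDR2-xxx number is the transfer rate in millions of transfers per second (twice the bus clock), and each transfer moves 8 bytes across the 64-bit module. A quick check of the table's figures, as a sketch:

# Peak bandwidth = transfers per second x 8 bytes (64-bit module width).
# The DDR2-xxx figure is already the double-pumped transfer rate, i.e. twice
# the bus clock shown in the table.

MODULE_WIDTH_BYTES = 8

def peak_bandwidth_gbs(megatransfers_per_s):
    return megatransfers_per_s * MODULE_WIDTH_BYTES / 1000

for name, mts in [("DDR2-400", 400), ("DDR2-533", 533), ("DDR2-667", 667),
                  ("DDR2-800", 800), ("DDR2-1066", 1066)]:
    print(f"{name}: {peak_bandwidth_gbs(mts):.3f} GB/s")
# prints 3.200, 4.264, 5.336, 6.400 and 8.528 GB/s; PC2-8500 is simply
# marketed with the rounded 8.5 GB/s figure.
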
DDR3 SDRAM or double-data-rate three synchronous dynamic random access memory is the name of the new DDR memory standard that is being developed as the successor to DDR2 SDRAM.

The memory comes with a promise of a power consumption reduction of 40% compared to current commercial DDR2 modules, due to DDR3's 90 nm fabrication technology, allowing for lower operating currents and voltages (1.5 V, compared to DDR2's 1.8 V or DDR's 2.5 V). "Dual-gate" transistors will be used to reduce leakage of current.

DDR3's prefetch buffer width is 8 bit, whereas DDR2's is 4 bit, and DDR's is 2 bit.

Theoretically, these modules could transfer data at an effective clock rate of 800-1600 MHz (for a single-clock bandwidth of 400-800 MHz), compared to DDR2's current range of 400-1066 MHz (200-533 MHz) or DDR's range of 200-600 MHz (100-300 MHz). To date, such bandwidth requirements have been seen mainly in the graphics market, where fast transfer of information between framebuffers is required.

DDR3 sticks maintain the 240-pin DIMM interface of DDR2, allowing DDR3 compatible chipsets to host DDR2 modules (though not both types at once). [1]

Prototypes were announced in early 2005, while the DDR3 specification is expected to be publicly available in mid-2007. Supposedly, Intel has preliminarily announced that they expect to be able to offer support for it in mid 2007 with a version of their upcoming Bearlake chipset. AMD's roadmap indicates their own adoption of DDR3 to come in 2008.

2007-04-17 02:22:24 · answer #8 · answered by Joseph G 2 · 1 1
