
They deal with the VGA (graphics) card in a computer.

2006-07-13 02:09:18 · 1 answer · asked by ysaremian 1 in Computers & Internet > Hardware > Other - Hardware

1 answer

The core clock is how fast the GPU (Graphics Processing Unit) runs, and the memory clock is how fast the memory runs; together they are the most important figures when judging the performance of a graphics card. The memory type is usually DDR2 or GDDR3: DDR2 is used in many low-end and mid-range cards, while GDDR3 is faster and more power efficient and shows up in some mid-range cards but is mostly found in high-end cards. The memory interface on graphics cards is usually either 128-bit or 256-bit; it determines how much data can be pushed through the memory at any given time. 256-bit is better and is usually only found on high-end graphics cards. The forthcoming nVidia 8 series is supposed to use GDDR4 memory, which was just developed by Samsung; it is claimed to cut power usage by 40% and increase performance.
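To make the clock and interface numbers concrete, here is a minimal sketch of how theoretical peak memory bandwidth follows from the effective memory clock and the bus width. The specific clock values in the example are assumed, ballpark figures for illustration, not exact card specifications.

```python
# Sketch: theoretical peak memory bandwidth of a graphics card.
# bandwidth (GB/s) = effective memory clock (Hz) * bus width (bytes) / 1e9
# DDR-type memory transfers data twice per clock, so the "effective" clock
# is double the real clock (e.g. a 700 MHz GDDR3 clock -> 1400 MHz effective).

def memory_bandwidth_gbps(effective_clock_mhz: float, bus_width_bits: int) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    bytes_per_transfer = bus_width_bits / 8           # 128-bit bus -> 16 bytes per transfer
    return effective_clock_mhz * 1e6 * bytes_per_transfer / 1e9

# Illustrative, approximate numbers:
print(memory_bandwidth_gbps(1400, 128))   # ~22.4 GB/s on a 128-bit bus
print(memory_bandwidth_gbps(1400, 256))   # ~44.8 GB/s, same clock on a 256-bit bus
```

The same clock on a 256-bit interface doubles the bandwidth, which is why the interface width matters as much as the raw clock numbers.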

A good configuration for a graphics card is as follows:

128-bit or 256-bit interface (you will still find 64-bit, avoid those)
DDR2 or GDDR3 memory (GDDR3 is preferred)
I'm not going to go into specific core and memory clocks, but look at the GeForce 7600GS or 7600GT as examples of what to look for (both use a 128-bit interface, with the 7600GS having DDR2 memory and the 7600GT having GDDR3 memory).

As far as the amount of memory goes, get between 256MB and 512MB. For example, you can get a Radeon X1300 with 512MB of memory for less than a 256MB 7600GT, but the 7600GT will wipe the floor with the X1300 every time. The point is that the core clock, memory clock, and memory interface matter more than the amount of memory: a lot of memory paired with lower-performing components means lower performance and a lot of useless memory.
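As a rough illustration of that point, the sketch below compares approximate memory bandwidth for the two cards mentioned above; the clock and bus-width figures are assumed ballpark values, not exact specifications.

```python
# Sketch: why a 256MB 7600GT beats a 512MB X1300 despite having less memory.
# Clock and bus-width figures below are assumed, approximate values for illustration.

def bandwidth_gbps(effective_clock_mhz: float, bus_width_bits: int) -> float:
    return effective_clock_mhz * 1e6 * (bus_width_bits / 8) / 1e9

cards = {
    "Radeon X1300 (512MB DDR2)":    bandwidth_gbps(500, 128),   # ~8 GB/s
    "GeForce 7600GT (256MB GDDR3)": bandwidth_gbps(1400, 128),  # ~22.4 GB/s
}

for name, bw in cards.items():
    print(f"{name}: ~{bw:.1f} GB/s")

# The card with less memory has roughly 3x the memory bandwidth (plus a faster GPU),
# which is why the extra 256MB on the X1300 does not help it.
```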

2006-07-13 02:27:04 · answer #1 · answered by conradj213 7 · 1 0
