
I purchased an AMD Athlon 64 3500+, and it was delivered today. I haven't installed it on my new motherboard yet, but does anyone know if it runs at the same speed as an Intel Pentium 4 3.5GHz processor? They do label their processors kind of weird!
I have seen games that require 3GHz now, and the side of the box says it runs at 2200+. Please, can anyone help me out here???

2006-07-27 10:16:40 · 6 answers · asked by deano2806 3 in Computers & Internet Hardware Other - Hardware

6 answers

Some of the other answers are brief, to the point, and detailed enough to give you a quick idea of the differences between the AMD and Intel chips.

Here is more information, typical of many similar questions,
which are asked VERY frequently...

hope this info helps :

_________________________________________________
_________________________________________________

AMD Equivalent?

What is the AMD equivalent to the Pentium 4 processor, 3.2 GHz?

What is the fastest AMD processor out there and which is better?



OUCH... this is a much more difficult question than you
probably realize...

Instead of asking what the FASTEST AMD is, you probably mean:
which AMD processor will run THE SOFTWARE I WANT TO USE the fastest?
The fastest AMD overall may not do a given job,
say running an online FPS, as "FAST" as a "SLOWER" AMD chip
that is specifically designed to do that task "QUICKLY"....

The chips are being offered to the public now in dozens of different models, and you have to (HAVE TO) read the benchmark reviews to see the actual results, in terms
of output on real-world software programs and "types" of programs, because the results might surprise you -
the FASTEST and MOST EXPENSIVE chips may not
run as well as "slower" and inexpensive chips, so you MUST
compare the data to find what you are really looking for...

I have answered this question in more detail a few times, and
here is a typical rundown of what is happening in the chip world
...

__________________________________________________
What is faster, AMD, INTEL, Dual, Single, at what SPEED
are they equivalent?


This is a complicated question. Two different Intel chips
rated at 3.0 GHz will give totally different benchmark results
for actual computation speed, and two different AMD CPUs
rated at 3.0 GHz will likewise give totally different benchmarked
computing speeds. On top of that, the Intel and
AMD "SPEEDS" as listed may not be the actual CLOCK SPEEDS
used, so you are comparing a great many variables.

If you have a SPECIFIC AMD chip and a SPECIFIC Intel chip, you can
compare the two by looking at BENCHMARK reviews done
by hundreds of reviewers, to see the actual computing output.

One specific chip at 3.0 GHz might outperform another chip
FROM THE SAME company (Intel or AMD) that is running
at 4.0 GHz on ONE SPECIFIC TASK, and yet fail to
perform at even a 3.0 GHz (equivalent) level on another program task...

So you really have to ask, for ONE software task, which of
two very SPECIFIC chips from Intel and AMD will run better,
as tested in various benchmarks and application tests...
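
Just to make that concrete, here is a tiny Python sketch - the chips and the benchmark scores are completely made up, the only point is that "which chip is fastest" flips depending on which task you measure:

```python
# Toy illustration only: the scores below are invented, not real benchmarks.
# The point is that "which chip is faster" depends on the task you measure.

benchmarks = {
    # task name: {chip: relative score (higher is better)}
    "3D game frame rate": {"Chip A (3.5 GHz listed)": 95,  "Chip B (2.2 GHz listed)": 110},
    "video encoding":     {"Chip A (3.5 GHz listed)": 120, "Chip B (2.2 GHz listed)": 100},
    "office / web":       {"Chip A (3.5 GHz listed)": 100, "Chip B (2.2 GHz listed)": 102},
}

for task, scores in benchmarks.items():
    winner = max(scores, key=scores.get)
    print(f"{task:20s} -> fastest here: {winner}")
```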

AMD and Intel have different internal designs, different CACHE
programs, and different clocking schemes, so you
can't just compare "listed" clock speeds - that is just
too simple, given the complexity of CPU designs... It is like
asking "what is better - a 4-cylinder engine from an earthmover
or a 4-cylinder engine from a subcompact car?" They are
both 4 cylinders, but one is diesel and the other gas, and one
weighs 50 times more than the entire car the other engine is in!

You have to consider hundreds of factors in speed now -
1/ how much cache is there, and what is the cache programming?
2/ what is the CPU architecture in terms of multiprocessing,
hyperthreading, lookup tables, etc., etc.?
3/ what are the "features" built into the CPU targeted at? -
gaming and 3D rendering? networking and multitasking? etc.
4/ what CPU features are incorporated for specialty uses, offering
features and compromises - e.g. a laptop CPU might be slightly
slower than the SAME CPU for a desktop, but use less power,
have extra hibernation and shutdown features, and
run cooler on a lower voltage...

This gives you some Idea that you just cannot compare raw
speed anymore...

A discussion of some of these things, like how important cache can be, appears in other questions about CPUs and speed
that have been asked previously... these might give you a better idea
of the importance of many other factors when trying to compare
different CPUs...

Here are some other related details :

______________________________...
______________________________...
Why have CPU speeds stopped getting faster? Are
DUAL CORE CPUs twice as fast?

As some of the answers state, you can do further research on
the benefits at AMD.com and INTEL.com.
I read a great deal about computer hardware, and
most benchmark tests, running both raw number-crunchers and
"real" programs, showed only an improvement of about 15%
from the use of dual cores.
The problem is that the software PROGRAMs have to be
specifically written from scratch to USE the full potential of
the dual cores, and so far, very few programs have been
totally re-written for this purpose.
There is a "brick wall" of speed ( raw GigaHertz ) at the moment,
and to kick out more power, manufacturers are using larger
cache, and " Dual" cores, and motherboards with 2 or 4 entire
CPU's, sometimes with 2 or 4 DUAL core CPU's all with larger
CACHE, to increase the total computing power of the
computer. This is very costly, and only high-end power users
can afford it. Rumoured single-atom chip traces, replacement of
silver and gold with platinum and iridium, and other experimental
techniques will break the speed barrier, and then the "old"
single CPU chips will come back, and the price will drop.
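
On the dual core point, the usual way to see why a second core buys so little is Amdahl's law (my framing, not something from the reviews above): if only part of a program's work was written to use both cores, the overall speedup is capped. Here is a quick Python sketch with illustrative fractions, not measured ones:

```python
def speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: overall speedup when only part of the work can use extra cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Illustrative only: a program where ~25% of the work can use both cores
# gains roughly 14% on a dual core -- the same ballpark as the ~15% figure above.
print(f"{speedup(0.25, 2):.2f}x")   # ~1.14x
# A program rewritten so 90% of its work runs in parallel does far better:
print(f"{speedup(0.90, 2):.2f}x")   # ~1.82x
```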

______________________________...

The manufacturers hit a brick wall in the voltages and SIZE of the chips. If you remember, the
first Pentium chips ran at about 5.0 volts, then they dropped to 3.52, then into the 2-volt range, and now into the 1-volt
range. It takes TIME, even at the speed of light, to get a wire or transistor from zero (ground, which can be thought of as a ZERO
bit) up to the working voltage, say 1.8 volts. This lag is just the time needed to
fill the wire (in a CPU it would be P-channel or N-channel doped
silicon material) with electrons, or to drain a conductor of all its electrons. By lowering the PEAK voltage, you can speed up the
process. Think of it as climbing a 45-degree hill: if you climb a 5-unit
hill, it takes twice as long as climbing a 2.5-unit hill.
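
If you want the hill analogy as numbers, here is a minimal back-of-the-envelope sketch. It assumes, purely for illustration, that a wire behaves like a capacitor charged by a constant current, so the time to swing it to the working voltage is t = C x V / I; the capacitance and current values below are arbitrary placeholders, only the ratio matters:

```python
def swing_time_ns(capacitance_fF: float, current_uA: float, swing_volts: float) -> float:
    """Time (ns) to charge a node of C femtofarads to swing_volts with a constant
    current in microamps: t = C * V / I (first-order illustration only)."""
    c = capacitance_fF * 1e-15   # farads
    i = current_uA * 1e-6        # amps
    return (c * swing_volts / i) * 1e9

# Arbitrary example values -- only the ratio between the two results matters:
print(swing_time_ns(10, 100, 5.0))   # old 5 V part:  0.5  ns per swing
print(swing_time_ns(10, 100, 1.8))   # 1.8 V part:    0.18 ns per swing
```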

The second factor is the size of the wiring, traces, and transistors in the CPU. Technology has come a LONG way from the first integrated circuits, which had only a few thousand transistors. The first CPU is generally considered to be the Intel
4004, a 4-bit processor for calculators, with roughly 2,300 transistors. I have a photograph of a celebration at Intel the day they hit the "profit" ratio for manufactured chips that passed inspection - 10%. Then came the 8008, the 8080, the 8086, and then the
infamous 8088 used in the first IBM PC. Transistor counts have gone up, and as you pointed out, CPUs have gone from 8 bits to 16, 32, and 64 bits, with many millions of transistors. If you remember the
SLOT 1, or "chocolate bar", chip that was common a few years ago, that is an example of manufacturers trying to get millions of
transistors and CACHE as close together as possible without
making a 6-inch-square CPU. The Slot 1 assembly typically had 6 chips on it, using the standard chip "size" and voltage of the day, and made very powerful use of the L1 and L2 cache.
But the SIZE of the traces has reached a manufacturing limit using "standard" techniques. Now you get "Dual Core" CPUs,
and the CACHE (L1 and L2 - level 1 and level 2) is reaching 2
megabytes, in order to speed up the CPU's output, since the
clock speed and feature size have maxed out. I have read of new experimental techniques with single-ATOM channels for CPUs, and other
techniques to make the parts smaller. The other practical limit, as you can easily guess, is the HUGE power requirement of the new CPUs: the average first PC might have had a 100-watt power supply, and now 600-watt supplies are common. This power has to "go" somewhere, and it generates tremendous heat in the CPU, which must be cooled or the traces and
layers will break down. Finding ways to get rid of this heat is a major concern. One guy in Antarctica was able to overclock his 800 MHz CPU into the 2000 MHz range, but not everyone can keep their computer at minus 72 degrees.
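
To see where all that heat comes from, the standard first-order formula for switching power is P = C x V^2 x f - it grows with the square of the voltage and linearly with the clock. The capacitance figure below is a placeholder, not a real chip spec; the sketch only shows the scaling:

```python
def dynamic_power_watts(switched_capacitance_nF: float, volts: float, freq_GHz: float) -> float:
    """First-order dynamic power estimate: P = C * V^2 * f (illustrative values only)."""
    return (switched_capacitance_nF * 1e-9) * volts ** 2 * (freq_GHz * 1e9)

# Placeholder capacitance; only the relative change between the two lines is meaningful.
print(dynamic_power_watts(20, 1.4, 3.0))   # ~118 W at 1.4 V and 3 GHz
print(dynamic_power_watts(20, 1.4, 4.0))   # ~157 W at 4 GHz -- one reason raw clock hit a wall
```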
As someone pointed out in an answer above, CPU "clock frequency" is not the only factor in "speed"; better-written cache programming, more cache memory, better design,
and more efficient software will make a computer faster as well.
A while ago, computer designers discovered Russian computer experts who, lacking the resources of Silicon Valley, had designs and algorithms that did twice the work of the rather bloated "standard" programming of North America. By using
better designs and clever programming, with LESS code, the
CPUs are now doing much more work than before.
Cache plays a BIG part in the new chips, and has rapidly increased from 128 K, to 256 K, to 512 K, to 1 MB, to 2 MB,
and is described in a previous answer below:

______________________________...

Cache is memory, just like any other memory on your computer.
The difference between RAM (random access memory) and cache
memory is the location, the speed, and how it is used.

It was discovered that by PREDICTING which memory locations
"were GOING to be used", the computer could speed up
the process of reading the next set of memory locations,
by having a "predictor" program load that memory into VERY
fast memory, close to, or inside, the CPU (Central Processing Unit, i.e. the Pentium chip). There are different levels of cache RAM, usually called L1, L2, L3, etc., with
L1 cache being built right on the chip itself, or on chips
built onto the CPU module at the factory. L2 cache RAM
is usually added in slots very close to where the CPU (Pentium) is located on the motherboard. Believe it or not,
the computer chips run so fast that electricity, moving at the speed of light, takes TIME to travel across the circuit board, so the L1 cache is in the heart of the CPU, and the L2 add-on cache is as close as possible to the
core to speed up the transfer.
In early Pentiums (and AMD, Cyrix, IBM, etc. clones), the
L1 cache was controlled by what can be considered an entire computer WITHIN the CPU. This "computer" had a program, with
its own RAM and ROM, and it was pre-programmed at the factory. Its job was to watch what the (big) computer was doing and predict what the computer was going to do NEXT. For example, if you were running a program that used program lines 345, 346, 347, 348, it would predict that you
would use lines 349, 350, 351, 352, etc., and load them from the "SLOW" RAM cards in the RAM slots into the blazing-fast cache RAM, right inside the CPU itself, where the
CPU could access it immediately. There are different kinds of RAM within the cache "predictor" scheme, such
as TAG RAM, dirty RAM, etc., which are used to keep track of which RAM is loaded, used, and changed by the software the CPU is currently running. Tag RAM keeps a list of the RAM locations that were loaded into the L1
cache, and the dirty RAM keeps track of which L1 cache
locations had CHANGES made. When the "predictor" program
decides that NEW locations of slow RAM are going to be used, it has to put back the old RAM locations' contents. It can ignore memory values that WERE NOT CHANGED (saving the time otherwise spent writing the information BACK to the slow RAM), but it must write
any CHANGED (dirty) cache lines back to the slow RAM.
AMD invented very clever and powerful "predictor" programs in some of its early chips, and this made such a dramatic difference that they out-performed the much more expensive Pentium chips. The "predictor" component is now extremely important in achieving faster speeds. CPUs typically have 64 KB (1 KB = 1,024 bytes), 128 KB, 256 KB, or 512 KB of cache built in, and many motherboards have slots near the CPU where the user can add more
L2 cache RAM for use by the predictor section of the CPU.
Cache RAM is typically much faster, but much more expensive, to manufacture, so chips with more cache are usually a lot more expensive. Since CPU clock speeds have hit a limit for the moment, the manufacturers are increasing the on-chip CACHE to 1 or more MEGABYTES on the high-end chips.
SOOooooo... cache is not "just" fast memory. It is also a program, inside the CPU, running constantly to predict what memory is "going" to be used. This program goes out to the slow RAM, grabs the memory contents, and loads it into
the L1 cache BEFORE the computer asks for it, to speed up memory transfer. If the predictor is correct in GUESSING the next memory locations, the entire computer speeds up greatly....
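
To tie all of that together, here is a very simplified Python sketch of a sequential "predictor" like the one described above: it prefetches the next few lines of slow RAM into a tiny fast cache, remembers which cached lines were changed (the "dirty" ones), and writes only those back when they get evicted. Real hardware predictors are far more sophisticated - this is just to make the idea concrete:

```python
# Very simplified model of the "predictor" idea described above.
# Sequential accesses (345, 346, 347, ...) are prefetched into a small fast
# cache; only lines that were modified ("dirty") are written back to slow RAM.

class TinyPrefetchingCache:
    def __init__(self, slow_ram, capacity=8, prefetch_depth=4):
        self.slow_ram = slow_ram          # dict: address -> value (the "slow" memory)
        self.capacity = capacity          # how many lines fit in the fast cache
        self.prefetch_depth = prefetch_depth
        self.lines = {}                   # address -> value (the fast cache, plus its "tag" info)
        self.dirty = set()                # addresses changed since they were loaded
        self.hits = self.misses = 0

    def _evict_if_full(self):
        while len(self.lines) >= self.capacity:
            victim = next(iter(self.lines))      # simplistic choice: oldest loaded line
            if victim in self.dirty:             # only changed lines go back to slow RAM
                self.slow_ram[victim] = self.lines[victim]
                self.dirty.discard(victim)
            del self.lines[victim]

    def _load(self, addr):
        self._evict_if_full()
        self.lines[addr] = self.slow_ram.get(addr, 0)

    def read(self, addr):
        if addr in self.lines:
            self.hits += 1
        else:
            self.misses += 1
            self._load(addr)
        value = self.lines[addr]
        # "predict" that the next few sequential addresses will be wanted next
        for nxt in range(addr + 1, addr + 1 + self.prefetch_depth):
            if nxt not in self.lines:
                self._load(nxt)
        return value

    def write(self, addr, value):
        self.read(addr)                   # make sure the line is in the fast cache
        if addr not in self.lines:        # (it could have been evicted by its own prefetch)
            self._load(addr)
        self.lines[addr] = value
        self.dirty.add(addr)              # remember it must be written back later

# Example: a program walking sequential "lines" 345..360 mostly hits the cache
# after the first miss, because the predictor loaded the next lines in advance.
ram = {n: n * 10 for n in range(1000)}
cache = TinyPrefetchingCache(ram)
for line in range(345, 361):
    cache.read(line)
cache.write(350, 999)
print("hits:", cache.hits, "misses:", cache.misses)
```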

______________________________...
dual core processors?

when a processor is dual core (for example dual core 3.40 Ghz) does that mean that it can run at twice the clock speed?



______________________________...

There is a GREAT deal of confusion and controversy over the
SPEED of Intel and AMD processors, and over the DUAL cores.
Speed benchmarks show that the actual OUTPUT on
programs, games, and number crunching has little to do with
the listed GigaHertz speed of the processor, and that comparing
an AMD CPU's listed speed to an Intel CPU's listed speed is irrelevant.
A SINGLE-core AMD with a lower listed GigaHertz speed
may outperform a DUAL-core AMD with a higher listed speed
in a specific tested application, such as gaming.
You really have to look at comparison and testing reviews
in computer magazines, websites, etc. to keep up with all the
data...! The programming of the cache predictor, the programming of the CPU's internal microcode, and the
programming of software packages can make huge differences.
Typical Benchmarks show that Dual Cores only give a 15%
increase in speed on typical tasks.
After watching changes in computers since 1978, I will boldly
predict that the current brick wall in CPU speed will disappear.


Hope this helps...

robin

2006-07-27 12:29:21 · answer #1 · answered by robin_graves 4 · 4 2

It's not only the clock speed that counts here! AMD processors run very well at lower frequencies, mainly because of their L1 and L2 cache!
It runs at 2.2 GHz, with a 512 KB L2 cache and 128 KB of L1. The HyperTransport system bus runs at 2000 MT/s!
Those games will run perfectly on the 3500+! I've run games demanding 3.4 GHz on a 3000+ with no problems! Nothing to worry about..

2006-07-27 17:28:55 · answer #2 · answered by agent-X 6 · 0 0

From what I know, the AMD Athlon just uses a performance-rating number. So a 3500+ should be roughly equivalent to a 3.5 GHz. Hope that helps. I had to research these just before I bought my PC; I ended up getting an AMD 64 3200+ at the time.

2006-07-27 17:22:04 · answer #3 · answered by ? 2 · 0 0

^^
Kind of interesting, except that an AMD 3500+ is not the same as a 3.5GHz P4.
The AMD 3500+ actually runs at 2.2GHz (roughly; it may perform better or worse than its rating suggests due to the points in the post above).

2006-07-28 06:30:14 · answer #4 · answered by Anonymous · 0 1

Even though it runs at 2.2GHz, it will be roughly the equivalent of an Intel 3.5GHz because of more and faster cache memory and other factors. Ignore any game requirements that say 3+GHz; it'll do fine.
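
Putting those two numbers side by side, here is a tiny sketch of how to read the model number, on the assumption that the "+" rating is meant as a rough Pentium 4 equivalence in MHz while the real clock is lower. Only the 3500+ / 2.2 GHz pairing comes from this thread; check AMD's own spec sheets for any other model:

```python
def rating_to_equivalent_p4_ghz(rating: int) -> float:
    """e.g. 3500 -> marketed as roughly comparable to a ~3.5 GHz Pentium 4."""
    return rating / 1000.0

actual_clock_ghz = {3500: 2.2}   # the 2.2 GHz figure comes from the answers above

model = 3500
print(f"Athlon 64 {model}+: rated like a ~{rating_to_equivalent_p4_ghz(model):.1f} GHz P4, "
      f"actual clock {actual_clock_ghz[model]} GHz")
```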

2006-07-27 17:25:40 · answer #5 · answered by ngdb 2 · 0 0

2.2GHz

2006-07-27 17:21:02 · answer #6 · answered by golddiggalova 3 · 0 0
