Yes to both. Modern applications demand a lot of speed, and a cache is one of the best ways to achieve that without sacrificing reliability.
Disk buffer
(also known as disk cache or cache buffer)
Hard disks have historically often been packaged with embedded computers used for control and interface protocols. Since the late 1980s, nearly all disks sold have these embedded computers and either an ATA, SCSI, or Fibre Channel interface. The embedded computer usually has some small amount of memory which it uses to store the bits going to and coming from the disk platter.
The disk buffer is physically distinct from, and used differently from, the page cache typically kept by the operating system in the computer's main memory. The disk buffer is controlled by the embedded computer in the disk drive, while the page cache is controlled by the computer to which that disk is attached. The disk buffer is usually quite small, 2 to 16 MB, whereas the page cache is generally all unused physical memory, which in a 2006 PC may be as much as 2 GB. And while data in the page cache is reused multiple times, the data in the disk buffer is typically never reused. In this sense, the phrases disk cache and cache buffer are misnomers, and the embedded computer's memory is more appropriately called the disk buffer.
The disk buffer has multiple uses:
Readahead / readbehind
When executing a read from the disk, the disk arm moves the read/write head to (or near) the correct track, and after some settling time the read head begins to pick up bits. Usually, the first sectors to be read are not the ones that have been requested by the operating system. The disk's embedded computer typically saves these unrequested sectors in the disk buffer, in case the operating system requests them later.
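The readahead behavior described above can be sketched in a few lines. This is a hypothetical model (the class and names are made up for illustration, not real firmware code): after a mechanical read, the sectors that happen to pass under the head next are saved in the buffer, so a later request for them avoids a seek.

```python
# Hypothetical sketch of readahead: after serving a requested sector,
# the firmware keeps the sectors that happened to pass under the head.
class ReadaheadBuffer:
    def __init__(self, platter, window=4):
        self.platter = platter      # sector number -> data (the media)
        self.window = window        # unrequested sectors saved per read
        self.buffer = {}            # the disk buffer

    def read(self, sector):
        if sector in self.buffer:         # served from the buffer,
            return self.buffer[sector]    # no mechanical seek needed
        # Simulate the head sweeping past the requested sector and beyond.
        for s in range(sector, sector + self.window + 1):
            if s in self.platter:
                self.buffer[s] = self.platter[s]
        return self.platter[sector]

platter = {n: f"sector-{n}" for n in range(64)}
disk = ReadaheadBuffer(platter)
disk.read(10)                 # mechanical read; sectors 10-14 buffered
assert 12 in disk.buffer      # a later read of 12 skips the seek
```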
Speed matching
The speed of the disk's I/O interface to the computer almost never matches the speed at which the bits are transferred to and from the hard disk platter. The disk buffer is used so that both the I/O interface and the disk read/write head can operate at full speed.
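A bounded producer/consumer queue is a reasonable toy model of this decoupling. In the sketch below (all names are invented for illustration), the "platter side" and the "interface side" each run at their own pace, blocking only when the small buffer is full or empty.

```python
import queue
import threading

# Toy model of speed matching: a small bounded buffer sits between the
# platter side (producer) and the I/O interface side (consumer), so each
# can run at full speed and block only at the buffer boundary.
buf = queue.Queue(maxsize=8)
received = []

def platter_side():
    for sector in range(32):
        buf.put(sector)       # blocks only when the buffer is full
    buf.put(None)             # sentinel: transfer finished

def interface_side():
    while (sector := buf.get()) is not None:
        received.append(sector)

t = threading.Thread(target=platter_side)
t.start()
interface_side()
t.join()
assert received == list(range(32))   # all sectors arrive, in order
```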
Write acceleration
The disk's embedded microcontroller may signal the main computer that a disk write is complete immediately after receiving the write data, before the data are actually written to the platter. This early signal allows the main computer to continue working even though the data has not actually been written yet. This can be somewhat dangerous, because if power is lost before the data are permanently fixed in the magnetic media, the data will be lost from the disk buffer, and the filesystem on the disk may be left in an inconsistent state. On some disks, this vulnerable period between signaling the write complete and fixing the data can be arbitrarily long, as the write can be deferred indefinitely by newly arriving requests. For this reason, the use of write acceleration can be controversial. Consistency can be maintained, however, by using a battery-backed memory system in the disk controller for caching data, although this is typically found only in high-end RAID controllers. Alternatively, the caching can simply be turned off when the integrity of the data is deemed more important than write performance.
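The risk described above can be made concrete with a small sketch. This is an illustrative model only (the class and method names are made up): the drive acknowledges a write as soon as it lands in the volatile buffer, so a power loss before the flush silently discards acknowledged data.

```python
# Hypothetical write-acceleration sketch: the drive acknowledges a write
# as soon as it reaches the volatile buffer; a power loss before flush()
# loses data that the host was already told is safe.
class WriteBackDisk:
    def __init__(self):
        self.platter = {}     # durable magnetic media
        self.buffer = {}      # volatile disk buffer

    def write(self, sector, data):
        self.buffer[sector] = data
        return "complete"     # early acknowledgement: data not yet durable

    def flush(self):
        self.platter.update(self.buffer)   # actually fix data on the media
        self.buffer.clear()

    def power_loss(self):
        self.buffer.clear()   # volatile contents vanish

d = WriteBackDisk()
assert d.write(0, "journal-block") == "complete"
d.power_loss()
assert 0 not in d.platter     # acknowledged to the host, but lost
```

With caching turned off, the equivalent of `flush()` would happen inside every `write()`, trading performance for integrity.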
Command queuing
Newer SATA and most SCSI disks can accept multiple commands while any one command is in operation. These commands are stored by the disk's embedded computer until they are completed. Should a read reference the data at the destination of a queued write, the write's data will be returned. Command queuing is different from write acceleration in that the main computer's operating system is notified when data are actually written onto the magnetic media. The OS can use this information to keep the filesystem consistent through rescheduled writes.
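The read-hits-queued-write rule and the honest completion notification can be sketched as follows (an illustrative model with invented names, not a real command-queuing implementation):

```python
# Sketch of command queuing: a read that targets the destination of a
# queued, not-yet-written write must return the queued data, and the OS
# is notified only when a write actually reaches the media.
class QueuedDisk:
    def __init__(self):
        self.platter = {}
        self.pending = []                   # queued (sector, data) writes

    def queue_write(self, sector, data):
        self.pending.append((sector, data))

    def read(self, sector):
        for s, data in reversed(self.pending):  # newest queued write wins
            if s == sector:
                return data
        return self.platter.get(sector)

    def complete_one(self):
        sector, data = self.pending.pop(0)
        self.platter[sector] = data
        return sector            # OS learns which write is truly durable

d = QueuedDisk()
d.queue_write(5, "new-data")
assert d.read(5) == "new-data"   # served from the queue, not the platter
assert d.complete_one() == 5     # now on the media; OS is notified
```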
Write-through operation is common when operating over unreliable networks (like an Ethernet LAN), because of the enormous complexity of the coherency protocol required between multiple write-back caches when communication is unreliable. For instance, web page caches and client-side network file system caches (like those in NFS or SMB) are typically read-only or write-through specifically to keep the network protocol simple and reliable.
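Write-through is the simpler discipline: every write goes to the backing store synchronously, so the cache can never hold data the server does not. A minimal sketch, with all names invented for illustration:

```python
# Write-through sketch: writes go to the backing store immediately, so
# the cache is never the only copy and no coherency protocol is needed.
class WriteThroughCache:
    def __init__(self, backing):
        self.backing = backing   # e.g. the NFS/SMB server's store
        self.cache = {}

    def write(self, key, value):
        self.backing[key] = value   # synchronous write to the real store
        self.cache[key] = value     # then remember it locally

    def read(self, key):
        if key not in self.cache:              # miss: fetch and keep
            self.cache[key] = self.backing[key]
        return self.cache[key]

server = {}
client = WriteThroughCache(server)
client.write("page", "v1")
assert server["page"] == "v1"   # never out of sync with the backing store
assert client.read("page") == "v1"   # later reads are local hits
```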
The difference between buffers and cache
The terms are not mutually exclusive, and the functions are frequently combined, but there is a difference in intent. A buffer is a temporary storage location where a large block of data is assembled or disassembled. This may be necessary for interacting with a storage device that requires large blocks of data, or when data must be delivered in a different order than that in which it is produced, or merely desirable when small blocks are inefficient. The benefit is present even if the buffered data is written to the buffer once and read from the buffer once.
A cache, on the other hand, hopes that the data will be read from the cache more often than it is written there. Its purpose is to eliminate accesses to the underlying storage, rather than make them more efficient.
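The difference in intent can be shown with a toy buffer (names invented for illustration): each byte is written once and read once, yet the buffer still pays off by turning many small writes into a few large block transfers. A cache, by contrast, only pays off when an entry is read more than once.

```python
# A buffer's benefit needs no reuse: it assembles small writes into the
# large blocks the device wants. 8 one-byte writes become 2 transfers.
class Buffer:
    def __init__(self, block_size, sink):
        self.block_size = block_size
        self.sink = sink          # stands in for the block device
        self.pending = []

    def write(self, byte):
        self.pending.append(byte)
        if len(self.pending) == self.block_size:
            self.sink.append(list(self.pending))  # one large block transfer
            self.pending.clear()

sink = []
b = Buffer(4, sink)
for byte in range(8):
    b.write(byte)
assert sink == [[0, 1, 2, 3], [4, 5, 6, 7]]   # 8 small writes, 2 transfers
```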
2006-10-18 23:03:57 · answer #1 · answered by puppy 3
I'm TYPING IN ALL CAPS, which means I'm VERY COOL! You can use whatever hard drive you want. There is no limit to the size you can get, as they all have the same casing, just different-sized disks (unless you get an SSD, in which case it is different-sized flash memory). An external HDD makes no difference either; it connects through USB 2.0 or sometimes FireWire. No computer has an "HDD size" limit. You are thinking of processors, which some motherboards cannot recognize properly if the motherboard is too old. By the way, upgrade your RAM to 3 GB or more.
2016-12-04 23:59:39 · answer #2 · answered by ? 4
Yes, it does.
The cache is used to hold the results of recent reads from the disk, and also to "pre-fetch" information that is likely to be requested in the near future, for example, the sector or sectors immediately after the one just requested.
2006-10-18 22:55:15 · answer #3 · answered by Chaudhry 2
The more cache the better; there are 16 MB caches available now, too.
2006-10-18 23:53:13 · answer #4 · answered by MicroNap 2
Yes, of course, it is needed. If you want everything to work smoothly, without any problems, the cache is another resource that provides greater flexibility. Go ahead.
2006-10-18 22:53:07 · answer #5 · answered by Anonymous
Two questions for the price of one!
The answer to both is "yes".
Rawlyn.
2006-10-18 22:53:43 · answer #6 · answered by Anonymous
YEAH
2006-10-18 22:53:10 · answer #7 · answered by pirate 1