Tuesday, September 21, 2010

Techniques and Performance of Cache

Cache-Replacement Policy

• Technique for choosing which block to replace.
- when a fully associative cache is full.
- when the target set of a set-associative cache is full.

• A direct-mapped cache has no choice: each block maps to exactly one line.

• Random.
- replace block chosen at random.

• LRU: least-recently used
- replace block not accessed for longest time.

• FIFO: first-in-first-out
- push block onto queue when it is brought into the cache.
- choose block to replace by popping the queue (the oldest block).
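The LRU and FIFO policies above can be sketched for a small fully associative cache. This is an illustrative Python sketch (class and method names are my own, not from any real simulator), tracking only which block addresses are resident:

```python
from collections import OrderedDict, deque

class LRUCache:
    """Fully associative cache with least-recently-used replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # ordered from least to most recently used

    def access(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)   # mark as most recently used
            return "hit"
        if len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used
        self.blocks[block] = True
        return "miss"

class FIFOCache:
    """Fully associative cache with first-in-first-out replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()    # order in which blocks were brought in
        self.present = set()

    def access(self, block):
        if block in self.present:
            return "hit"        # hits do NOT reorder the queue
        if len(self.present) >= self.capacity:
            self.present.discard(self.queue.popleft())  # evict the oldest
        self.queue.append(block)
        self.present.add(block)
        return "miss"
```

On a 2-block cache, the access sequence 1, 2, 1, 3 shows the difference: LRU evicts block 2 (least recently used), while FIFO evicts block 1 (first brought in), even though 1 was just accessed. A random policy would simply evict an arbitrary resident block instead.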

Cache Write Techniques

• When the cache is written to, main memory must (eventually) be updated.

• Write-through
- write to main memory whenever the cache is written to.
- easiest to implement.
- processor must wait for the slower main-memory write.
- potential for unnecessary writes (e.g. repeated writes to the same block).

• Write-back
- main memory written only when a “dirty” block is replaced.
- an extra dirty bit per block, set when the cache block is written to.
- reduces the number of slow main-memory writes.
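The contrast can be sketched for a single cache block, counting how many slow main-memory writes each policy incurs (a minimal illustration; class names are my own):

```python
class WriteThroughBlock:
    """One cache block under write-through: every write also goes to memory."""
    def __init__(self):
        self.data = None
        self.memory_writes = 0   # count of slow main-memory writes

    def write(self, value):
        self.data = value
        self.memory_writes += 1  # processor waits for memory on every write

class WriteBackBlock:
    """One cache block under write-back: memory updated only on replacement."""
    def __init__(self):
        self.data = None
        self.dirty = False       # set when cached copy diverges from memory
        self.memory_writes = 0

    def write(self, value):
        self.data = value
        self.dirty = True        # defer the memory write

    def replace(self, new_value):
        if self.dirty:           # write back only if the block was modified
            self.memory_writes += 1
            self.dirty = False
        self.data = new_value
```

Three writes to the same block cost three memory writes under write-through, but only one (deferred until replacement) under write-back; a clean block is replaced with no memory write at all.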

Cache Impact on System Performance

• Most important parameters in terms of performance:

• Total size of cache
- total number of data bytes cache can hold.
- tag, valid, and other housekeeping bits not included in the total.

• Degree of associativity.

• Data block size.

• Larger caches achieve lower miss rates but have a higher hit (access) cost.

• Example (miss cost fixed at 20 cycles):
- 2 Kbyte cache: miss rate = 15%, hit cost = 2 cycles, miss cost = 20 cycles.
- avg. cost of memory access = (0.85 * 2) + (0.15 * 20) = 4.7 cycles

• 4 Kbyte cache: miss rate = 6.5%, hit cost = 3 cycles, miss cost unchanged.
- avg. cost of memory access = (0.935 * 3) + (0.065 * 20) = 4.105 cycles (improvement).

• 8 Kbyte cache: miss rate = 5.565%, hit cost = 4 cycles, miss cost unchanged.
- avg. cost of memory access = (0.94435 * 4) + (0.05565 * 20) = 4.8904 cycles (worse: the higher hit cost outweighs the lower miss rate).
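All three averages follow the same formula, avg = hit_rate × hit_cost + miss_rate × miss_cost. A small helper (illustrative, not from the original) reproduces the numbers:

```python
def avg_access_cost(miss_rate, hit_cost, miss_cost=20):
    """Average memory-access cost in cycles:
    hit_rate * hit_cost + miss_rate * miss_cost."""
    return (1 - miss_rate) * hit_cost + miss_rate * miss_cost

two_kb   = avg_access_cost(0.15,    hit_cost=2)  # 4.7 cycles
four_kb  = avg_access_cost(0.065,   hit_cost=3)  # 4.105 cycles
eight_kb = avg_access_cost(0.05565, hit_cost=4)  # 4.8904 cycles
```

Plugging in each cache's miss rate and hit cost makes the trade-off explicit: going from 2 to 4 Kbyte helps, but at 8 Kbyte the extra hit cycle costs more than the smaller miss rate saves.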

Cache Performance Trade-Offs

• Improving cache hit rate without increasing size
- Increase line size
- Change set-associativity



information shared by www.irvs.info
