The Performance Of Direct Mapped Caches
Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines, along with a means of determining which main memory block currently occupies a given cache line. The choice of mapping function determines how the cache is organized. Three techniques are used: direct, associative, and set associative.
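The three mapping techniques differ in which cache lines a given memory block is allowed to occupy. A minimal sketch (the cache sizes and associativity below are illustrative assumptions, not taken from the text):

```python
# Illustrative sketch: the set of cache lines a memory block may occupy
# under each mapping technique. Sizes are assumptions for the example.
NUM_LINES = 8   # total cache lines (assumption)
WAYS = 2        # associativity for the set-associative case (assumption)

def candidate_lines(block: int, mapping: str) -> list[int]:
    """Return the cache lines that may hold the given memory block."""
    if mapping == "direct":
        return [block % NUM_LINES]                  # exactly one line
    if mapping == "associative":
        return list(range(NUM_LINES))               # any line in the cache
    if mapping == "set-associative":
        num_sets = NUM_LINES // WAYS
        s = block % num_sets                        # set chosen by block number
        return [s * WAYS + w for w in range(WAYS)]  # any way within that set
    raise ValueError(f"unknown mapping: {mapping}")
```

Direct mapping is thus the degenerate case where each set holds a single line, while a fully associative cache is the case where the whole cache is one set.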
As the title suggests, we discuss a method for improving the performance of direct-mapped caches called "Selective Victim Caching". Direct mapping is the technique that maps each block of main memory into exactly one cache line; such a mapping function is easy to implement using the memory address. In the victim-caching method, a small fully associative cache, the victim cache, is employed to enhance the main direct-mapped cache by storing the "victims" of replacements from the main cache. On every replacement, the block removed from the main cache is added to the victim cache, where it replaces the least recently used block.
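The ease of implementing the direct mapping can be seen from how the address alone selects the line. A sketch, assuming an example block size and line count (not specified in the text):

```python
# How a direct-mapped cache decodes an address. The block size and
# number of lines are assumptions for the example.
BLOCK_SIZE = 64   # bytes per cache block (assumption)
NUM_LINES = 256   # lines in the cache (assumption)

def decompose(address: int) -> tuple[int, int, int]:
    """Split an address into (tag, line index, byte offset)."""
    offset = address % BLOCK_SIZE          # byte within the block
    block_number = address // BLOCK_SIZE   # which main memory block
    index = block_number % NUM_LINES       # the one line the block maps to
    tag = block_number // NUM_LINES        # identifies which block holds the line
    return tag, index, offset
```

Because the index is a fixed slice of the address, the line can be selected with simple wiring; the stored tag is all that is needed to determine which memory block currently occupies it.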
The victim cache is accessed in parallel with the main cache. This improves effective memory access time by reducing the miss rate in the second level of the memory hierarchy.

Selective Victim Caching
This scheme aims to reduce the miss rate of direct-mapped caches. A small fully associative cache, the victim cache, enhances the direct-mapped main cache by storing the cache blocks removed from it during replacements. Unlike simple victim caching, incoming blocks are selectively placed either into the first-level main cache or into the victim cache, based on a prediction scheme that uses their past history of use. Furthermore, interchanges of blocks between the main cache and the victim cache are also performed selectively. This scheme yields a significant decrease in the miss rate and in the number of interchanges between the two caches, for both small and large caches (4 Kbytes to 128 Kbytes). For example, when implemented in an on-chip processor cache, simulations with traces of four programs from SPEC Release 1 showed an average improvement of about 20 percent in the miss rate of a 16-Kbyte cache, and the number of blocks interchanged between the main cache and the victim cache was reduced by approximately 74 percent.

Why Does an Improved Cache Mapping Scheme Have to Be Introduced?
Rapid advances in processor manufacturing technology have created an increasing speed gap between the processor and the underlying memory hierarchy. Two factors contribute to this scenario: (i) the processor's cycle time has been decreasing at a faster rate than memory access time, which
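The interaction between the main cache and the victim cache described above can be sketched as a toy simulator. This is an illustrative model of plain victim caching (every evicted block goes to the victim cache, LRU replacement), not the paper's selective prediction scheme, and all parameters are assumptions:

```python
from collections import OrderedDict

class VictimCacheSim:
    """Toy model: a direct-mapped main cache backed by a small fully
    associative, LRU-replaced victim cache. Illustrative assumptions only."""

    def __init__(self, num_lines: int = 4, victim_entries: int = 2):
        self.num_lines = num_lines
        self.victim_entries = victim_entries
        self.main = [None] * num_lines  # main[i] holds a block number or None
        self.victim = OrderedDict()     # block -> True, in LRU order

    def access(self, block: int) -> str:
        index = block % self.num_lines
        if self.main[index] == block:        # hit in the main cache
            return "main hit"
        if block in self.victim:             # hit in the victim cache:
            self.victim.pop(block)           # interchange with the main line
            evicted = self.main[index]
            self.main[index] = block
            if evicted is not None:
                self._insert_victim(evicted)
            return "victim hit"
        # Miss in both: fetch into main; the displaced block becomes a victim.
        evicted = self.main[index]
        self.main[index] = block
        if evicted is not None:
            self._insert_victim(evicted)
        return "miss"

    def _insert_victim(self, block: int) -> None:
        if len(self.victim) >= self.victim_entries:
            self.victim.popitem(last=False)  # evict the least recently used
        self.victim[block] = True
```

Two blocks that conflict in the direct-mapped cache (e.g. blocks 0 and 4 with four lines) can now alternate without missing every time, since the displaced block is caught by the victim cache; the selective scheme further decides, per block, whether such placements and interchanges are worthwhile.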