Up until now, data from the main memory has been placed in any block of the cache. What if each location in the main memory could only be placed in specific cache blocks? Associating memory locations with specific cache blocks is called cache associativity.
There are three types of associativity:
Fully Associative
Each location in the main memory can go to any block in the cache. This has been the behavior of the cache so far in this lesson.
Direct Mapped
In a direct-mapped cache, every location in the main memory can be placed in only one specific block in the cache. Direct-mapped associativity does not require a replacement policy since there is only one possible cache block for each location in the main memory.
n-Way Set Associative
This cache associativity breaks the cache into sets of n blocks. Each location in the main memory is mapped to a specific set of blocks. This requires a replacement policy, but one that only needs to keep track of the n blocks in each set. A 4-block cache with 2 blocks per set is called 2-way set associative and has a total of 2 sets. Each location in the main memory is mapped to one of these sets of 2 blocks.
Fully associative and direct-mapped caches are both special cases of set-associative caches. A fully associative cache with 32 blocks is a 32-way set-associative cache with one set. A direct-mapped cache with 32 blocks is a 1-way set-associative cache with 32 sets.
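To make the mapping concrete, here is a minimal sketch, separate from the lesson's `Cache()` class, that computes which blocks an address may occupy under each type of associativity. The `candidate_blocks()` helper and its contiguous block-per-set layout are illustrative assumptions, not part of the lesson's code.

```python
def candidate_blocks(address, num_blocks, num_sets):
    """Return the cache block indices where `address` may be placed."""
    blocks_per_set = num_blocks // num_sets   # the "n" in n-way
    set_number = address % num_sets           # which set the address maps to
    start = set_number * blocks_per_set       # assumes contiguous blocks per set
    return list(range(start, start + blocks_per_set))

# 4-block cache, fully associative (1 set of 4 blocks): any block is a candidate
print(candidate_blocks(13, num_blocks=4, num_sets=1))   # [0, 1, 2, 3]

# 4-block cache, 2-way set associative (2 sets of 2 blocks): 13 % 2 = set 1
print(candidate_blocks(13, num_blocks=4, num_sets=2))   # [2, 3]

# 4-block cache, direct mapped (4 sets of 1 block): exactly one candidate, 13 % 4 = 1
print(candidate_blocks(13, num_blocks=4, num_sets=4))   # [1]
```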
Instructions
To implement associativity in the `Cache()` class:
- The `self.sets` variable has been added and is set to `1`. This defines how many sets are in the cache.
- The `.random_policy()` and `.fifo_policy()` methods have been modified to account for associativity (a rough sketch of set-aware policies follows this list).
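As a rough idea of what "set-aware" means here, the sketch below shows one possible way to write such policies. The `CacheSketch` class, its attribute names, and the contiguous block-per-set layout are illustrative assumptions, not the lesson's actual implementation.

```python
import random

class CacheSketch:
    """Illustrative only; not the lesson's Cache() class."""

    def __init__(self, num_blocks=4, num_sets=1):
        self.sets = num_sets
        self.blocks_per_set = num_blocks // num_sets
        self.fifo_counters = [0] * num_sets   # next block to evict, per set

    def random_policy(self, set_number):
        """Pick a random block index within the given set."""
        start = set_number * self.blocks_per_set
        return start + random.randrange(self.blocks_per_set)

    def fifo_policy(self, set_number):
        """Evict blocks within a set in first-in, first-out order."""
        start = set_number * self.blocks_per_set
        index = start + self.fifo_counters[set_number]
        self.fifo_counters[set_number] = (self.fifo_counters[set_number] + 1) % self.blocks_per_set
        return index

cache = CacheSketch(num_blocks=4, num_sets=2)
print([cache.fifo_policy(1) for _ in range(3)])   # [2, 3, 2]: FIFO cycles through set 1
```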
To implement the rest of the `Cache()` class associativity, make these changes inside the `.replace_entry()` method:
- Define a variable `set_number` and set it to the value `address % self.sets`.

The modulus operator returns the proper set number based on the address and `self.sets`. In this case, where `self.sets = 1`, `set_number` will always equal `0`.

Below the `set_number` definition:
- Set `index` to a call to `self.fifo_policy(set_number)` (see the sketch below).
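Putting the two checkpoints together, the relevant part of `.replace_entry()` ends up looking roughly like the sketch below. Only the `set_number` and `index` assignments come from the instructions; the rest of the method is whatever the lesson's starter code already contains.

```python
def replace_entry(self, address):
    # Map the address to its set; with self.sets = 1 every address maps
    # to set 0, so the whole cache behaves as one fully associative set.
    set_number = address % self.sets

    # Ask the set-aware FIFO policy which block in that set to replace.
    index = self.fifo_policy(set_number)

    # ... the rest of the method (provided by the lesson) uses `index`
    # to replace the chosen cache entry.
```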
Run the code. With `self.sets` equal to `1`, the cache is fully associative with a FIFO replacement policy.

With a fully associative cache, where `self.sets = 1`, the execution time was 288.0 nanoseconds. Now, try a direct-mapped cache.
Inside the `.__init__()` method:
- Change the value of `self.sets` to `4`.

With a direct-mapped cache, the execution time increases to 468.0 nanoseconds. Feel free to try `self.sets = 2` and see how the execution time is affected.
There are now 6 configurations for the `Cache()` class: two replacement policies and three types of associativity.