
Different Caches

Fully Associative Caches

This is the basic implementation of a cache, so we omit the details here. Note: the number of offset bits is determined by the block size (e.g., a 16-byte block needs 4 offset bits).

Direct Mapped Caches

For a normal fully associative cache, we break the address down into: [Tag | 31 ~ X bits] [Offset | X-1 ~ 0 bits], which requires checking the tag of every slot on each lookup.

So we design a direct-mapped cache, inspired by the hash table. Now we break the address down into: [Tag | 31 ~ X bits] [Index | X-1 ~ Y bits] [Offset | Y-1 ~ 0 bits]. The Index serves as the hash code for the address, and the Tag serves as an identifier.
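
As a minimal sketch, assuming 2-byte blocks and 8 slots (so 1 offset bit and 3 index bits, matching the 8-bit example later in this section), the three fields can be pulled out with shifts and masks:

```c
#include <stdint.h>

// Assumed geometry: 2-byte blocks and 8 slots,
// so 1 offset bit and 3 index bits; the remaining high bits are the tag.
#define OFFSET_BITS 1
#define INDEX_BITS  3

static inline uint32_t addr_offset(uint32_t addr) {
    return addr & ((1u << OFFSET_BITS) - 1);                  // lowest bits: byte within the block
}
static inline uint32_t addr_index(uint32_t addr) {
    return (addr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1);  // middle bits: which slot
}
static inline uint32_t addr_tag(uint32_t addr) {
    return addr >> (OFFSET_BITS + INDEX_BITS);                // remaining high bits: identifier
}
```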

Here, for a write-back policy cache, every slot contains the following (sketched as a struct below):

  • Block of data
  • Index field (used to select the slot)
  • Tag field of the address as an identifier
  • Valid bit
  • Dirty bit
  • No replacement management bits (none are needed, since each address maps to exactly one slot)
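
A minimal sketch of one such slot in C (the block size and slot count are assumptions carried over from the sketch above; real hardware stores these as raw bits, not a struct):

```c
#include <stdint.h>
#include <stdbool.h>

#define BLOCK_SIZE 2            // bytes per block (assumed: 1 offset bit)
#define NUM_SLOTS  8            // assumed: 3 index bits

// One slot of a write-back, direct-mapped cache.
typedef struct {
    bool     valid;             // does this slot hold real data?
    bool     dirty;             // has the block been written since it was loaded?
    uint32_t tag;               // which memory block currently lives here
    uint8_t  data[BLOCK_SIZE];  // the cached block of data
} cache_slot_t;

// The whole cache is just an array; the Index field selects which element to use.
cache_slot_t cache[NUM_SLOTS];
```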

For example, for the 8-bit address 10010010, we can break it down into:

  • Tag: 1001
  • Index: 001
  • Offset: 0

Then we look up something like cache[Index] and use Offset to locate the available data within the block.
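
Continuing the sketches above, a read of the example address 0b10010010 would go roughly like this (the miss path is stubbed out, since it just fetches the block from memory and retries):

```c
// Read one byte through the direct-mapped cache (sketch only).
uint8_t cache_read(uint32_t addr) {
    uint32_t offset = addr_offset(addr);   // 0    for 0b10010010
    uint32_t index  = addr_index(addr);    // 001  for 0b10010010
    uint32_t tag    = addr_tag(addr);      // 1001 for 0b10010010

    cache_slot_t *slot = &cache[index];
    if (slot->valid && slot->tag == tag) {
        return slot->data[offset];         // hit: the data is already here
    }
    // Miss: write the old block back if dirty, fetch the new block from memory,
    // update tag/valid, then return the requested byte -- omitted here.
    return 0;
}
```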

There is also a worst case for such a design. Since multiple addresses map to the same slot, consider the memory accesses: 00000010, 00010010, 00000010, 00010010, ... Both addresses have the same Index (001) but different Tags (0000 vs. 0001), so they keep evicting each other, and every access misses.

But a fully associative cache would only miss twice on this pattern (the two compulsory misses).

Where direct-mapped outweighs fully associative is its fast mapping: each access needs only one tag comparison instead of one per slot.
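
To make the comparison concrete, here is a tiny, self-contained miss-count experiment for the alternating pattern above (same assumed geometry as before: 1 offset bit, 3 index bits, 8 slots of total capacity for both designs):

```c
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define SLOTS 8   // total capacity for both designs (assumed)

int main(void) {
    // The alternating access pattern from above: 00000010, 00010010, ...
    uint32_t trace[] = {0x02, 0x12, 0x02, 0x12, 0x02, 0x12};
    int n = sizeof trace / sizeof trace[0];

    uint32_t dm_tag[SLOTS]; bool dm_valid[SLOTS] = {false};   // direct-mapped state
    uint32_t fa_tag[SLOTS]; int fa_used = 0;                  // fully associative state

    int dm_miss = 0, fa_miss = 0;
    for (int i = 0; i < n; i++) {
        uint32_t index = (trace[i] >> 1) & 0x7;   // 3 index bits above the 1 offset bit
        uint32_t dm_t  = trace[i] >> 4;           // direct-mapped tag: bits above the index
        uint32_t fa_t  = trace[i] >> 1;           // fully associative tag: bits above the offset

        if (!dm_valid[index] || dm_tag[index] != dm_t) {      // direct-mapped lookup
            dm_miss++; dm_valid[index] = true; dm_tag[index] = dm_t;
        }

        bool hit = false;                                     // fully associative lookup
        for (int j = 0; j < fa_used; j++)
            if (fa_tag[j] == fa_t) { hit = true; break; }
        if (!hit) { fa_miss++; if (fa_used < SLOTS) fa_tag[fa_used++] = fa_t; }
    }

    printf("direct-mapped misses: %d, fully associative misses: %d\n", dm_miss, fa_miss);
    return 0;   // prints 6 and 2 for this trace
}
```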

Set Associative Caches

N-way set-associative: divide the cache into sets, each of which consists of N slots.

  • A memory block maps to a set determined by the Index field and can be placed in any of the N slots of that set.
  • Call \(N\) the associativity.
  • The replacement policy applies within each set (a lookup sketch follows this list).
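
A rough sketch of the N-way lookup path, assuming a simple LRU timestamp per slot (the values of N and NUM_SETS, and the timestamp scheme, are illustrative choices, not a fixed design):

```c
#include <stdint.h>
#include <stdbool.h>

#define N        2              // associativity (assumed: 2-way)
#define NUM_SETS 4              // number of sets (assumed)

typedef struct {
    bool     valid;
    uint32_t tag;
    uint32_t lru;               // larger value = more recently used
} way_t;

way_t sets[NUM_SETS][N];        // NUM_SETS sets, each with N slots (ways)

// Returns true on a hit; on a miss, evicts the LRU (or an empty) way of the selected set.
bool set_assoc_access(uint32_t set_index, uint32_t tag, uint32_t now) {
    way_t *set = sets[set_index];

    for (int w = 0; w < N; w++) {                        // check every way in the set
        if (set[w].valid && set[w].tag == tag) {
            set[w].lru = now;                            // hit: refresh LRU timestamp
            return true;
        }
    }

    int victim = 0;                                      // miss: choose a way to replace
    for (int w = 0; w < N; w++) {
        if (!set[w].valid) { victim = w; break; }        // prefer an empty way
        if (set[w].lru < set[victim].lru) victim = w;    // otherwise evict least recently used
    }
    set[victim].valid = true;
    set[victim].tag   = tag;
    set[victim].lru   = now;
    return false;
}
```

On a hit the matching way's timestamp is refreshed; on a miss only the ways of that one set compete for eviction, which is exactly what "the replacement policy applies within each set" means.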

Actually, from my perspective, a set-associative cache is just an in-between of the former two.

Fully associative requires 0 index bits. Direct-mapped requires the maximum number of index bits. Set-associative requires somewhere in between.
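
Concretely, this is just the standard bit-count arithmetic for a cache with a fixed block size and total number of slots:

\[
\text{offset bits} = \log_2(\text{block size}), \qquad
\text{index bits} = \log_2\!\left(\frac{\#\text{slots}}{N}\right)
\]

With \(N = 1\) this gives the direct-mapped maximum, and with \(N = \#\text{slots}\) it drops to 0 index bits, recovering the fully associative case.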

Here is a screenshot from the CS61C slides (image omitted):

As you can see, it's just the combination of Direct-Mapped and Fully-Associative.