Memory Hierarchy and Cache Locality Quiz

Challenge your understanding of memory hierarchy, cache locality, and their impact on computing performance. This quiz covers concepts such as cache organization, memory access patterns, locality types, and the roles of different memory levels in computer systems.

  1. Memory Hierarchy Levels

    Which of the following correctly orders the memory hierarchy from fastest to slowest?

    1. Cache, Registers, Main Memory, Hard Disk
    2. Hard Disk, Main Memory, Cache, Registers
    3. Main Memory, Cache, Registers, Hard Disk
    4. Registers, Cache, Main Memory, Hard Disk

    Explanation: Registers are the fastest form of memory, followed by cache, main memory (RAM), and then the hard disk, which is far slower; option 4 gives this order. Option 1 swaps registers and cache, but registers are faster than cache. Option 2 lists the hard disk first, even though it is the slowest level. Option 3 places main memory ahead of cache and registers, which is also incorrect.

  2. Spatial Locality Example

    If a program processes array elements stored at consecutive memory addresses, which type of locality is it demonstrating?

    1. Temporal Locality
    2. Spatial Locality
    3. Sequential Recall
    4. Logical Locality

    Explanation: Spatial locality refers to accessing memory locations close together, as in processing arrays sequentially. Sequential recall is not a standard memory locality concept, making it an incorrect option. Temporal locality refers to repeatedly accessing the same locations, not consecutive ones. Logical locality is also incorrect and not a standard term for this property.
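
    As an illustration, the short C loop below (the array size, element type, and the function name sum_array are arbitrary choices) walks consecutive addresses, so each cache line fetched on a miss supplies the next several elements as well.

        /* Sequential traversal: consecutive addresses, strong spatial locality. */
        #include <stddef.h>

        long sum_array(const int *a, size_t n) {
            long sum = 0;
            for (size_t i = 0; i < n; i++)
                sum += a[i];    /* a[i] and a[i+1] are adjacent in memory */
            return sum;
        }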

  3. Temporal Locality in Practice

    When a variable is accessed repeatedly within a short time span, which principle does this illustrate?

    1. Spatial Pattern
    2. Random Access
    3. Virtual Locality
    4. Temporal Locality

    Explanation: Temporal locality means that a recently accessed memory location is likely to be accessed again soon, often seen with variables reused in a loop. Random access does not imply repeated access, so it is not correct. Spatial pattern is vague and nonstandard in this context. Virtual locality is not a defined concept in memory access patterns.
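
    A common concrete case is an accumulator reused on every loop iteration, as in the sketch below; the function name and types are arbitrary, but the variable total is touched repeatedly within a short time span.

        /* 'total' is accessed on every iteration: temporal locality. */
        #include <stddef.h>

        double average(const double *x, size_t n) {
            double total = 0.0;             /* reused on each pass through the loop */
            for (size_t i = 0; i < n; i++)
                total += x[i];
            return n ? total / n : 0.0;
        }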

  4. Cache Miss Types

    What type of cache miss occurs the first time a data block is accessed?

    1. Replacement Miss
    2. Hot Miss
    3. Cold Miss
    4. Conflict Miss

    Explanation: A cold miss, also called a compulsory miss, happens when data is accessed for the first time and is not yet present in the cache. Conflict miss occurs when multiple blocks compete for the same cache location. Hot miss is not a recognized term in this context. Replacement miss is not a standard term for this situation, though replacement refers to cache block eviction.
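
    The distinction can be made concrete with a toy simulation. The sketch below assumes a direct-mapped cache of four lines and an invented block access trace; the first touch of a block counts as a cold (compulsory) miss, and a re-access after eviction counts as a conflict miss.

        #include <stdio.h>
        #include <stdbool.h>

        #define LINES 4                          /* assumed tiny cache: 4 lines */

        int main(void) {
            int cache[LINES];                    /* block held by each line (-1 = empty) */
            bool seen[32] = { false };           /* has this block ever been loaded? */
            int trace[] = { 0, 4, 0, 1, 5, 1 };  /* hypothetical block access sequence */

            for (int i = 0; i < LINES; i++) cache[i] = -1;

            for (size_t i = 0; i < sizeof trace / sizeof trace[0]; i++) {
                int block = trace[i];
                int line  = block % LINES;       /* direct-mapped placement */
                if (cache[line] == block)
                    printf("block %d: hit\n", block);
                else if (!seen[block])
                    printf("block %d: cold (compulsory) miss\n", block);
                else
                    printf("block %d: conflict miss (evicted earlier)\n", block);
                cache[line] = block;
                seen[block] = true;
            }
            return 0;
        }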

  5. Block Size Impact

    How does increasing the cache block size generally affect spatial locality?

    1. It improves spatial locality exploitation.
    2. It decreases access time due to temporal locality.
    3. It eliminates the need for replacement policies.
    4. It reduces memory capacity.

    Explanation: Larger cache blocks allow more contiguous memory to be fetched together, making better use of spatial locality when programs access nearby addresses. Option 2 is wrong because any access-time benefit here comes from block size and spatial access, not temporal locality. Replacement policies are still needed regardless of block size, so option 3 is incorrect. Option 4 is inaccurate because block size does not change total memory capacity.
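
    As a rough worked example (the 64-byte line size is a common but assumed figure), a sequential scan of an int array misses only once per line's worth of elements:

        #include <stdio.h>

        int main(void) {
            int line_bytes = 64;               /* assumed cache line size */
            int elem_bytes = sizeof(int);      /* typically 4 bytes */
            int per_line   = line_bytes / elem_bytes;
            /* Sequential scan: one miss brings in 'per_line' useful elements. */
            printf("~1 miss per %d accesses (miss ratio %.3f)\n",
                   per_line, 1.0 / per_line);
            return 0;
        }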

  6. Direct-Mapped Cache

    In a direct-mapped cache, each main memory block can be mapped to how many cache locations?

    1. Any available cache location
    2. Exactly two cache locations
    3. Exactly one cache location
    4. Multiple locations based on size

    Explanation: A direct-mapped cache allows each memory block to be placed in only one specific cache location based on its address. Two locations or multiple locations are possible in set-associative or fully-associative caches, not direct-mapped. The phrase 'any available cache location' describes fully-associative caches and is incorrect here.
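
    The single candidate location is typically derived from the block number. A minimal sketch, assuming a cache of 256 lines (an arbitrary figure) and modulo placement:

        /* Direct-mapped placement: every block has exactly one possible line. */
        #define NUM_LINES 256                  /* assumed number of cache lines */

        unsigned cache_index(unsigned long block_number) {
            return (unsigned)(block_number % NUM_LINES);   /* the one candidate line */
        }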

  7. Miss Penalty Meaning

    What does the term 'miss penalty' refer to in the context of cache memory?

    1. The number of accesses to cache lines
    2. The cost of manufacturing cache chips
    3. The process of writing data back to cache
    4. The time taken to fetch data from a lower level after a cache miss

    Explanation: Miss penalty is the additional time needed to retrieve data from a lower memory level (such as main memory) after a cache miss. Manufacturing costs are unrelated to the concept of miss penalty. The number of cache accesses and the write-back process do not define miss penalty; they are different cache operations.
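
    Miss penalty feeds directly into the standard average memory access time relation, AMAT = hit time + miss rate × miss penalty. The numbers in the sketch below are illustrative assumptions, not measurements.

        #include <stdio.h>

        int main(void) {
            double hit_time_ns     = 1.0;    /* assumed L1 hit time */
            double miss_rate       = 0.05;   /* assumed 5% miss rate */
            double miss_penalty_ns = 100.0;  /* assumed time to fetch from main memory */
            double amat = hit_time_ns + miss_rate * miss_penalty_ns;
            printf("AMAT = %.1f ns\n", amat);   /* 1 + 0.05 * 100 = 6.0 ns */
            return 0;
        }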

  8. Cache Write Policies

    Which cache policy updates both cache and main memory when a write occurs?

    1. Write-Through
    2. Write-Back
    3. Copy-Forward
    4. Write-First

    Explanation: Write-through updates both the cache and main memory whenever a write occurs, ensuring consistency between them. Write-back only updates main memory when the block is replaced, so it is not correct here. Write-first and copy-forward are not standard cache write policies; they are distractors.
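
    The contrast between the two policies can be sketched as two write paths; the structures and function names below are hypothetical stand-ins rather than a real cache implementation.

        #include <stdbool.h>

        struct line { unsigned long tag; unsigned char data[64]; bool dirty; };

        /* Write-through: update the cache line and main memory on every store. */
        void write_through(struct line *l, int off, unsigned char v,
                           unsigned char *main_mem, unsigned long addr) {
            l->data[off]   = v;
            main_mem[addr] = v;                /* memory always kept consistent */
        }

        /* Write-back: update only the cache now; memory is written at eviction. */
        void write_back(struct line *l, int off, unsigned char v) {
            l->data[off] = v;
            l->dirty = true;                   /* flush when the line is evicted */
        }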

  9. Locality in Matrix Multiplication

    Why does row-wise access improve cache performance in matrix multiplication?

    1. It reduces the size of each matrix element
    2. It increases spatial locality by accessing adjacent data
    3. It boosts temporal locality by reusing distant data
    4. It requires less cache memory overall

    Explanation: Accessing data row-wise usually means touching memory locations that sit next to each other, which takes advantage of spatial locality and leads to improved cache usage. Temporal locality is about accessing the same item again, not neighboring data, so option 3 is incorrect. The access pattern does not change the total cache memory required or the size of a matrix element, making options 1 and 4 incorrect.
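
    Concretely, with C's row-major layout, the i-k-j loop order below keeps the inner loop walking b and c along consecutive addresses; the matrix dimension N is an arbitrary placeholder.

        #define N 512   /* illustrative matrix dimension */

        /* i-k-j order: the inner j loop scans b[k][*] and c[i][*] row-wise,
           i.e. along consecutive addresses, exploiting spatial locality.
           Assumes c[][] starts zeroed. */
        void matmul_ikj(const double a[N][N], const double b[N][N], double c[N][N]) {
            for (int i = 0; i < N; i++)
                for (int k = 0; k < N; k++) {
                    double aik = a[i][k];          /* reused across the inner loop */
                    for (int j = 0; j < N; j++)
                        c[i][j] += aik * b[k][j];  /* unit-stride accesses */
                }
        }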

  10. Virtual Memory and Cache

    How does virtual memory contribute to the memory hierarchy?

    1. It extends the apparent memory size available to programs
    2. It eliminates the need for registers
    3. It stores data only in cache for faster access
    4. It bypasses main memory to access the hard disk directly

    Explanation: Virtual memory enables systems to use disk storage as an extension of RAM, making it appear that there is more memory than physically installed, which is a key function in memory hierarchy. Storing data only in cache is incorrect; virtual memory relies on both RAM and secondary storage. Bypassing main memory is not how virtual memory functions. Eliminating registers is incorrect because registers remain a separate and essential memory component.
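
    One way to picture this is demand paging: a virtual page is either resident in a RAM frame or backed by disk, and a page fault brings it in on first use. The page table, page size, and frame choice below are simplified, hypothetical stand-ins.

        #include <stdio.h>
        #include <stdbool.h>

        #define PAGE_SIZE 4096                     /* assumed page size */
        #define NUM_PAGES 8                        /* toy virtual address space */

        struct pte { bool present; unsigned frame; };   /* simplified page-table entry */

        unsigned long translate(struct pte table[], unsigned long vaddr) {
            unsigned long vpn = vaddr / PAGE_SIZE;
            unsigned long off = vaddr % PAGE_SIZE;
            if (!table[vpn].present) {
                /* Page fault: the OS would fetch this page from disk into a free
                   RAM frame and retry - this is what lets the address space
                   appear larger than physical memory. */
                printf("page fault on page %lu\n", vpn);
                table[vpn].present = true;
                table[vpn].frame   = (unsigned)vpn;    /* placeholder frame choice */
            }
            return (unsigned long)table[vpn].frame * PAGE_SIZE + off;
        }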