Memory Hierarchy and Cache Locality: Arrays, Linked Lists, and Access Patterns Quiz

Test your knowledge of memory hierarchy, cache utilization, and how data structures and access patterns affect real-world performance in ways that Big-O analysis alone does not capture. These questions cover arrays, linked lists, and cache efficiency in computer systems.

  1. Array Storage in Memory

    Which of the following best describes how a typical array is stored in memory?

    1. Each element is placed at the address of its value
    2. Elements are linked using pointers throughout memory
    3. Elements are placed in consecutive memory locations
    4. Elements are stored in random memory locations

    Explanation: Arrays store their elements in consecutive memory locations, which allows for efficient access and cache utilization. The other options are incorrect because elements stored randomly or linked with pointers describes linked lists, not arrays. Placing elements at the address of their value is not how arrays or other common structures are arranged.
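
    To see this concretely, here is a minimal C sketch (names are illustrative) that prints the address of each element of a small int array; on typical platforms the addresses advance by exactly sizeof(int) bytes, confirming consecutive storage:

      #include <stdio.h>

      int main(void) {
          int a[4] = {10, 20, 30, 40};
          for (int i = 0; i < 4; i++) {
              /* Consecutive elements sit exactly sizeof(int) bytes apart. */
              printf("a[%d] is at %p\n", i, (void *)&a[i]);
          }
          return 0;
      }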

  2. Cache Locality Advantage

    Why do arrays often have better cache locality compared to linked lists when accessed sequentially?

    1. Their elements are stored together in memory
    2. They have built-in cache prefetching functions
    3. Arrays use more complicated pointer operations
    4. Linked lists are always stored in registers

    Explanation: Arrays provide better cache locality because their elements are stored contiguously in memory, allowing a single cache line to hold multiple elements during sequential access. Linked lists do not benefit as much since their nodes may be scattered throughout memory. Arrays do not use special prefetching functions or more complicated pointer operations, and linked lists are not typically stored in registers.
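
    As a rough sketch of the two traversal patterns (assuming typical 4-byte ints and 64-byte cache lines; the struct and function names are illustrative), the array scan below touches consecutive addresses while the list scan chases pointers:

      #include <stddef.h>

      struct node { int value; struct node *next; };

      /* Array scan: consecutive addresses, so one 64-byte cache line
         typically serves 16 consecutive 4-byte ints before the next fetch. */
      long long sum_array(const int *a, size_t n) {
          long long s = 0;
          for (size_t i = 0; i < n; i++)
              s += a[i];
          return s;
      }

      /* List scan: each step follows a pointer to wherever that node
         happens to live, which may be a different, cold cache line. */
      long long sum_list(const struct node *head) {
          long long s = 0;
          for (const struct node *p = head; p; p = p->next)
              s += p->value;
          return s;
      }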

  3. Random Access Performance

    When performing random access on a linked list, which characteristic causes it to be slower than an array, even if both have the same number of elements?

    1. Arrays require more computations per access
    2. Arrays must search for every element starting from the first
    3. Linked lists are always larger in size
    4. Linked list nodes may not be stored next to each other in memory

    Explanation: Linked list nodes are often scattered across memory, so reaching an arbitrary element requires chasing pointers from the head, which causes frequent cache misses and slower access. The other options are incorrect: arrays compute an element's address directly rather than searching from the first element, they require no extra computation per access, and although linked lists do carry per-node pointer overhead, total size is not what makes random access slow.
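
    A hedged C sketch of the contrast (illustrative names, assuming a valid index and a long-enough list): the array version is a single address computation, while the list version must follow i pointers:

      #include <stddef.h>

      struct node { int value; struct node *next; };

      /* Array: the i-th element's address is computed directly
         as base + i * sizeof(int); no traversal, no searching. */
      int array_nth(const int *a, size_t i) {
          return a[i];
      }

      /* List: reaching the i-th element means following i pointers,
         and each hop may land on a cache line not yet loaded. */
      int list_nth(const struct node *head, size_t i) {
          const struct node *p = head;
          while (i-- > 0)
              p = p->next;
          return p->value;
      }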

  4. Effect of Sequential Access

    How does sequentially traversing an array impact CPU cache performance compared to making random accesses?

    1. It makes no difference, as both are equally efficient
    2. Sequential traversal skips loading any cache lines
    3. It always takes longer and uses more memory
    4. It is usually faster due to better cache utilization

    Explanation: Sequential access benefits from spatial locality: the CPU loads a cache line once and then reuses it for the next several elements, making traversal faster. The claims that both patterns are equally efficient or that sequential access always takes longer are incorrect, since cache-friendly access patterns are generally faster. The option about skipping cache lines is also wrong; sequential access still loads cache lines, it simply reuses each one more fully.
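
    The following benchmark sketch (illustrative and untuned; timings vary by machine, and the random run also pays for reading the shuffled index array, though the scattered reads of a[] dominate) visits the same 16M-element array once sequentially and once in random order:

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      #define N (1 << 24)   /* 16M ints (~64 MB), larger than typical caches */

      static unsigned long long rng_state = 42;
      static size_t next_index(size_t bound) {      /* tiny xorshift PRNG */
          rng_state ^= rng_state << 13;
          rng_state ^= rng_state >> 7;
          rng_state ^= rng_state << 17;
          return (size_t)(rng_state % bound);
      }

      int main(void) {
          int *a = malloc((size_t)N * sizeof *a);
          size_t *idx = malloc((size_t)N * sizeof *idx);
          for (size_t i = 0; i < N; i++) { a[i] = 1; idx[i] = i; }

          /* Fisher-Yates shuffle: idx becomes a random visiting order. */
          for (size_t i = N - 1; i > 0; i--) {
              size_t j = next_index(i + 1);
              size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
          }

          clock_t t0 = clock();
          long long seq = 0;
          for (size_t i = 0; i < N; i++) seq += a[i];        /* stride-1 scan */
          clock_t t1 = clock();
          long long rnd = 0;
          for (size_t i = 0; i < N; i++) rnd += a[idx[i]];   /* scattered reads */
          clock_t t2 = clock();

          printf("sequential: %.3f s   random order: %.3f s (sums %lld, %lld)\n",
                 (double)(t1 - t0) / CLOCKS_PER_SEC,
                 (double)(t2 - t1) / CLOCKS_PER_SEC, seq, rnd);
          return 0;
      }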

  5. Big-O vs Real-World Performance

    Why might two data structures with the same Big-O time complexity still show different speeds in practice?

    1. One must be written in a faster language
    2. Differences in cache locality and memory access patterns
    3. Big-O notation accounts for cache misses
    4. Only the slowest operation matters in Big-O

    Explanation: Big-O describes growth rates of operation counts, not hardware-level details such as cache behavior or memory access efficiency, so two structures with identical Big-O complexity can still differ significantly in actual speed. Big-O does not account for cache misses, and neither the implementation language nor a "slowest operation" rule explains the difference in this context.
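
    One way to see this is two loops with identical O(R*C) complexity that differ only in traversal order; a hedged C sketch (sizes are illustrative, timings machine-dependent) where row-major order touches memory with stride 1 and column-major with a stride of a full row:

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      #define R 4096
      #define C 4096

      int main(void) {
          /* Flat buffer viewed as an R x C row-major matrix (~64 MB). */
          int *m = malloc((size_t)R * C * sizeof *m);
          for (size_t i = 0; i < (size_t)R * C; i++) m[i] = 1;

          clock_t t0 = clock();
          long long by_rows = 0;
          for (size_t r = 0; r < R; r++)        /* stride 1: cache friendly */
              for (size_t c = 0; c < C; c++)
                  by_rows += m[r * C + c];
          clock_t t1 = clock();

          long long by_cols = 0;
          for (size_t c = 0; c < C; c++)        /* stride C: new line per read */
              for (size_t r = 0; r < R; r++)
                  by_cols += m[r * C + c];
          clock_t t2 = clock();

          printf("row-major: %.3f s   column-major: %.3f s\n",
                 (double)(t1 - t0) / CLOCKS_PER_SEC,
                 (double)(t2 - t1) / CLOCKS_PER_SEC);
          return 0;
      }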

  6. CPU Cache Definition

    What is a CPU cache in relation to main memory?

    1. A small, fast memory located closer to the CPU
    2. A pointer table in the main memory
    3. A process managing program instructions
    4. A secondary large storage like a hard disk

    Explanation: A CPU cache is a small, fast memory designed to hold frequently accessed data near the CPU, reducing the latency compared to accessing main memory. The other options describe either storage devices, unrelated processes, or structures that are not caches.

  7. Linked List Pointer Overhead

    What memory overhead does a singly linked list node typically add over a raw array element?

    1. A pointer to the next node
    2. A distinct cache line for each value
    3. Double the value's memory
    4. An extra CPU core

    Explanation: Each singly linked list node stores a pointer to the next node alongside its value, adding memory overhead and often reducing spatial locality. The other options are incorrect: an extra CPU core is unrelated to memory layout, nodes are not automatically given their own cache line, and "double the value's memory" is not the general rule, since the actual overhead is one pointer (plus any alignment padding) regardless of the value's size.
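
    A quick check on a typical 64-bit platform (sizes are implementation-defined, so treat this as a sketch): sizeof reports 4 bytes for a raw int element versus 16 for a node, since the 8-byte next pointer forces 8-byte alignment and padding:

      #include <stdio.h>

      struct node {
          int value;          /* 4 bytes of payload */
          struct node *next;  /* typically 8 bytes on 64-bit systems */
      };

      int main(void) {
          /* On a typical 64-bit platform this prints 4 and 16: the pointer
             plus alignment padding quadruples the per-element footprint. */
          printf("array element: %zu bytes\n", sizeof(int));
          printf("list node:     %zu bytes\n", sizeof(struct node));
          return 0;
      }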

  8. Spatial Locality Example

    If a program accesses elements at addresses 100, 104, 108, and 112, what property is it demonstrating?

    1. Spatial locality
    2. Temporal locality
    3. Virtual addressing
    4. Data serialization

    Explanation: Accessing data at nearby addresses demonstrates spatial locality, helping the cache perform efficiently. The other options do not directly describe this access pattern: temporal locality refers to reusing the same memory locations over time, virtual addressing refers to memory mapping, and data serialization is unrelated.
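
    A small worked check: 104 - 100 = 4 bytes, the size of a 32-bit int, so these look like consecutive array elements; assuming a typical 64-byte cache line, all four accesses even land in the same line, as this sketch shows:

      #include <stdio.h>

      #define LINE 64   /* assumed cache-line size in bytes */

      int main(void) {
          unsigned long addrs[] = {100, 104, 108, 112};
          for (int i = 0; i < 4; i++) {
              /* Integer division maps a byte address to its cache-line index;
                 all four of these addresses fall in the same 64-byte line. */
              printf("address %lu -> cache line %lu\n", addrs[i], addrs[i] / LINE);
          }
          return 0;
      }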

  9. Impact of Memory Hierarchy

    How does memory hierarchy (registers, cache, RAM) affect running time of simple algorithms?

    1. Hierarchy does not affect algorithm speed at all
    2. Algorithms that use memory closer to the CPU usually run faster
    3. Only RAM speed matters for all algorithms
    4. Registers are slower than cache

    Explanation: Accessing data stored in registers or caches, which are closer and faster than RAM, results in shorter running times for algorithms. Registers are actually the fastest memory, so stating they are slower than cache is incorrect. The claim that hierarchy does not matter or that only RAM speed is important ignores significant performance factors.
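
    A classic way to observe the hierarchy (a rough sketch; exact numbers depend on the machine, and hardware prefetching narrows the gap for sequential scans) is to sum a fixed number of elements while growing the working set from cache-sized to RAM-sized:

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      /* Sum the same total number of elements at each working-set size so the
         runs are comparable; as the set outgrows L1, then L2/L3, the average
         cost per access rises, exposing the hierarchy. */
      int main(void) {
          const size_t total = 1 << 28;                 /* accesses per run */
          for (size_t n = 1 << 10; n <= (1 << 25); n <<= 3) {
              int *a = calloc(n, sizeof *a);
              if (!a) break;                            /* stop if allocation fails */
              clock_t t0 = clock();
              long long sum = 0;
              for (size_t done = 0; done < total; )
                  for (size_t i = 0; i < n && done < total; i++, done++)
                      sum += a[i];
              double s = (double)(clock() - t0) / CLOCKS_PER_SEC;
              printf("%9zu bytes: %.2f ns/access (sum %lld)\n",
                     n * sizeof(int), s / (double)total * 1e9, sum);
              free(a);
          }
          return 0;
      }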

  10. Linked List Sequential Access

    When traversing a singly linked list sequentially, what is a likely impact on CPU cache performance?

    1. Linked lists always use less cache than arrays
    2. Cache misses may occur often due to non-contiguous memory storage
    3. CPU caches are bypassed completely
    4. Cache hits are guaranteed at every step

    Explanation: Because linked list nodes can be scattered in memory, cache misses are more frequent compared to arrays, which are contiguous. It's incorrect that cache hits are always guaranteed, that linked lists use less cache, or that caches are bypassed entirely; these misrepresent how linked lists interact with CPU caches.
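
    To isolate the layout effect, the sketch below (illustrative and untuned) builds two lists from the same pool of nodes, one linked in shuffled order and one in memory order, and times a traversal of each; only the node placement differs:

      #include <stdio.h>
      #include <stdlib.h>
      #include <time.h>

      #define N (1 << 22)   /* 4M nodes */

      struct node { long long value; struct node *next; };

      static unsigned long long rng_state = 42;
      static size_t next_index(size_t bound) {          /* tiny xorshift PRNG */
          rng_state ^= rng_state << 13;
          rng_state ^= rng_state >> 7;
          rng_state ^= rng_state << 17;
          return (size_t)(rng_state % bound);
      }

      int main(void) {
          struct node *pool = malloc((size_t)N * sizeof *pool);
          size_t *order = malloc((size_t)N * sizeof *order);
          for (size_t i = 0; i < N; i++) { pool[i].value = 1; order[i] = i; }

          /* Scattered list: shuffle the linking order so every ->next hop
             jumps to an unrelated spot in the pool, defeating the cache. */
          for (size_t i = N - 1; i > 0; i--) {
              size_t j = next_index(i + 1);
              size_t t = order[i]; order[i] = order[j]; order[j] = t;
          }
          for (size_t i = 0; i + 1 < N; i++)
              pool[order[i]].next = &pool[order[i + 1]];
          pool[order[N - 1]].next = NULL;

          clock_t t0 = clock();
          long long sum1 = 0;
          for (struct node *p = &pool[order[0]]; p; p = p->next) sum1 += p->value;
          double scattered = (double)(clock() - t0) / CLOCKS_PER_SEC;

          /* Contiguous list: same nodes linked in memory order, so traversal
             behaves almost like an array scan. */
          for (size_t i = 0; i + 1 < N; i++) pool[i].next = &pool[i + 1];
          pool[N - 1].next = NULL;
          t0 = clock();
          long long sum2 = 0;
          for (struct node *p = &pool[0]; p; p = p->next) sum2 += p->value;
          double contiguous = (double)(clock() - t0) / CLOCKS_PER_SEC;

          printf("scattered: %.3f s   contiguous: %.3f s (sums %lld, %lld)\n",
                 scattered, contiguous, sum1, sum2);
          return 0;
      }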