Test your knowledge of the memory hierarchy, cache utilization, and how data structures and access patterns affect real-world performance beyond what Big-O analysis predicts. Learn about arrays, linked lists, and cache efficiency in computer systems with these essential concepts.
Which of the following best describes how a typical array is stored in memory?
Explanation: Arrays store their elements in consecutive memory locations, which allows efficient indexed access and good cache utilization. The other options are incorrect: elements stored at scattered locations and connected with pointers describes linked lists, not arrays, and storing each element at an address equal to its value is not how arrays or other common structures are arranged.
Why do arrays often have better cache locality compared to linked lists when accessed sequentially?
Explanation: Arrays provide better cache locality because their elements are stored contiguously in memory, so a single cache line fetched during sequential access holds several elements at once. Linked lists benefit less because their nodes may be scattered throughout memory. Arrays do not rely on special prefetching functions or extra pointer operations, and linked lists are not typically stored in registers.
When performing random access on a linked list, which characteristic causes it to be slower than an array, even if both have the same number of elements?
Explanation: Linked list nodes are often located at random places in memory, resulting in poor cache performance and slower access during random traversal. The other options are not correct because arrays enable direct element access without searching, do not require extra computations per access, and neither structure is inherently larger just based on the number of elements.
How does sequentially traversing an array impact CPU cache performance compared to making random accesses?
Explanation: Sequential access benefits from spatial locality, so the CPU cache can load and reuse cache lines efficiently, making traversal faster. The claim that both methods are equally efficient or that sequential always takes longer is incorrect, as cache-efficient access patterns are generally faster. The last option is also incorrect; sequential access does not skip cache lines.
Why might two data structures with the same Big-O time complexity still show different speeds in practice?
Explanation: Big-O describes asymptotic growth rates, not real-world details like cache behavior or memory access efficiency, so two structures with the same Big-O can differ significantly in actual speed. Big-O does not account for cache misses, and neither the implementation language nor the cost of the slowest single operation explains the difference in this context.
What is a CPU cache in relation to main memory?
Explanation: A CPU cache is a small, fast memory designed to hold frequently accessed data near the CPU, reducing the latency compared to accessing main memory. The other options describe either storage devices, unrelated processes, or structures that are not caches.
What memory overhead does a singly linked list node typically add over a raw array element?
Explanation: Each singly linked list node stores a pointer to the next node, adding memory overhead per element and often reducing spatial locality. The other options are incorrect: nodes do not require an extra CPU core, extra cache lines are not necessarily allocated per node, and the overhead is not a fixed doubling of memory, since it depends on the element size.
If a program accesses elements at addresses 100, 104, 108, and 112, what property is it demonstrating?
Explanation: Accessing data at nearby addresses demonstrates spatial locality, helping the cache perform efficiently. The other options do not directly describe this access pattern: temporal locality refers to reusing the same memory locations over time, virtual addressing refers to memory mapping, and data serialization is unrelated.
How does memory hierarchy (registers, cache, RAM) affect running time of simple algorithms?
Explanation: Accessing data stored in registers or caches, which are closer and faster than RAM, results in shorter running times for algorithms. Registers are actually the fastest memory, so stating they are slower than cache is incorrect. The claim that hierarchy does not matter or that only RAM speed is important ignores significant performance factors.
When traversing a singly linked list sequentially, what is a likely impact on CPU cache performance?
Explanation: Because linked list nodes can be scattered in memory, cache misses are more frequent compared to arrays, which are contiguous. It's incorrect that cache hits are always guaranteed, that linked lists use less cache, or that caches are bypassed entirely; these misrepresent how linked lists interact with CPU caches.