Shared vs Distributed Memory Systems Quiz

Explore core differences, architectures, and performance aspects of shared memory and distributed memory systems with this focused quiz. Enhance your understanding of parallel computing models, memory organization, and communication methods relevant to modern computer science and engineering.

  1. Direct Data Access in Memory Architectures

    Which feature best distinguishes a shared memory system from a distributed memory system in terms of data access by processors?

    1. Data redundancy is required for every computation.
    2. Each processor runs a different operating system.
    3. Processors can directly access all main memory locations.
    4. Communication always happens through the Internet.

    Explanation: In shared memory systems, all processors access a single address space, allowing them to directly read and write any memory location. In contrast, distributed memory systems require message passing as each processor has its own private memory. The statement about different operating systems is incorrect; both architectures can run single or multiple OS instances. Data redundancy is not an inherent requirement, and communication over the Internet is not a necessity for all distributed systems.
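The single-address-space idea can be sketched in a few lines. The quiz names no programming language, so Python threads are used here as an assumption; the point is only that every thread stores directly into the same memory, with no messages exchanged.

```python
import threading

# One shared address space: every thread writes directly into `results`.
# Each thread owns a distinct slot, so no lock is needed for this pattern.
results = [0] * 4

def square(i):
    results[i] = i * i  # direct store into shared main memory

threads = [threading.Thread(target=square, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # [0, 1, 4, 9]
```

In a distributed memory system the same computation would require each process to send its partial result back to a coordinator explicitly.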

  2. Synchronization Approach Example

    In a distributed memory system where four processors each hold part of a dataset, which method is typically used for synchronization and data sharing?

    1. Using a global lock accessed by all processors
    2. Sending messages between processes
    3. Polling the same physical memory location
    4. Relying on shared cache coherence

    Explanation: Distributed memory systems lack a single shared address space, so processors synchronize and share data by explicitly sending messages to each other. Global locks and polling physical memory locations require shared memory, which is not present in distributed systems. Cache coherence mechanisms apply to shared memory architectures, not distributed memory ones.

  3. Scalability Challenge Scenario

    What is a primary scalability challenge unique to shared memory systems as the number of processors increases?

    1. Need for frequent global reboots
    2. Growing contention for access to shared data
    3. Failure of one node halting all computation
    4. Increasing network latency between nodes

    Explanation: In shared memory systems, adding processors increases contention for shared data and memory bandwidth, and this contention is what limits scalability. Network latency is primarily a concern in distributed memory architectures, where processors communicate over an interconnect rather than through shared memory. Single-node failures halting computation and global reboots are not scalability concerns specific to shared memory; they can arise in other architectures for unrelated reasons.
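The contention effect can be illustrated with a small sketch (Python is an assumption, as the quiz names no language): when every update competes for one lock, the threads serialize; accumulating locally and merging once touches the shared state far less often. Both variants compute the same total.

```python
import threading

N_THREADS, N_OPS = 4, 10000
shared = [0]
lock = threading.Lock()

def contended():
    # Every single increment competes for the one lock: with 4 threads
    # that is 40,000 lock acquisitions, fully serializing the updates.
    for _ in range(N_OPS):
        with lock:
            shared[0] += 1

def local_then_merge():
    # Accumulate privately, then take the lock once: only 4 lock
    # acquisitions total, so contention barely grows with thread count.
    local = 0
    for _ in range(N_OPS):
        local += 1
    with lock:
        shared[0] += local

for fn in (contended, local_then_merge):
    shared[0] = 0
    threads = [threading.Thread(target=fn) for _ in range(N_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(fn.__name__, shared[0])  # both print 40000
```

The difference in lock traffic, not the final value, is the scalability lesson: as processor counts grow, designs that reduce accesses to shared data scale better.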

  4. Best-fit Application Scenario

    A scientific simulation requires frequent access to a single large dataset by many threads. Which memory system would generally provide better performance for this workload?

    1. Shared memory system
    2. Sheared memory system
    3. Distributed memory system
    4. Distributive cache system

    Explanation: Shared memory systems are efficient for workloads where multiple threads frequently access or modify common data, as all data resides in a unified address space. Distributed memory systems add overhead due to the need for explicit communication. 'Distributive cache system' and 'Sheared memory system' are not correct terms and do not refer to standard architectures.

  5. Programmer's Perspective: Communication Model

    From a programmer's perspective, what is a main difference when writing parallel code for distributed memory versus shared memory systems?

    1. Distributed memory systems eliminate the risk of data inconsistency.
    2. Shared memory systems require each thread to have its own private data segment.
    3. Explicit message passing must be implemented in distributed memory systems.
    4. Semaphores must be used in all distributed memory programs.

    Explanation: Distributed memory programming requires explicit communication, so the programmer must use message passing to coordinate processes and share data. Semaphores are neither exclusive nor mandatory in distributed memory programs. In shared memory systems, threads may share data or keep it private; there is no requirement that each thread have its own private data segment. Data inconsistency remains possible in distributed systems if messages are not correctly synchronized.