Explore core differences, architectures, and performance aspects of shared memory and distributed memory systems with this focused quiz. Enhance your understanding of parallel computing models, memory organization, and communication methods relevant to modern computer science and engineering.
Which feature best distinguishes a shared memory system from a distributed memory system in terms of data access by processors?
Explanation: In shared memory systems, all processors access a single address space, allowing them to read and write any memory location directly. In contrast, distributed memory systems require message passing, since each processor has its own private memory. The statement about different operating systems is incorrect; both architectures can run a single OS instance or multiple ones. Data redundancy is not an inherent requirement, and communication over the Internet is not a necessity; distributed memory systems typically communicate over a dedicated interconnect.
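As a rough illustration of the single address space (a minimal sketch, assuming a POSIX-threads environment; the array name and thread count are arbitrary), both workers below write straight into one global array and the main thread reads the results back, with no messages exchanged:

```c
/* Minimal shared-memory sketch (assumed setup: POSIX threads; compile with -pthread).
 * Both workers write directly into the same global array; no messages are needed,
 * because every thread sees the single shared address space. */
#include <pthread.h>
#include <stdio.h>

static int shared_data[2];          /* visible to every thread */

static void *worker(void *arg) {
    int id = *(int *)arg;
    shared_data[id] = id * 10;      /* direct write into shared memory */
    return NULL;
}

int main(void) {
    pthread_t t[2];
    int ids[2] = {0, 1};

    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);   /* join also orders the writes before the reads below */

    printf("%d %d\n", shared_data[0], shared_data[1]);  /* main thread reads both directly */
    return 0;
}
```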
In a distributed memory system where four processors each hold part of a dataset, which method is typically used for synchronization and data sharing?
Explanation: Distributed memory systems lack a single shared address space, so processors synchronize and share data by explicitly sending messages to each other. Global locks and polling physical memory locations require shared memory, which is not present in distributed systems. Cache coherence mechanisms apply to shared memory architectures, not distributed memory ones.
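A minimal message-passing sketch of the same idea, assuming an MPI installation and at least two ranks (e.g. compiled with `mpicc` and launched with `mpirun -np 2`): rank 1's value stays private until it is explicitly sent to rank 0.

```c
/* Minimal MPI sketch (assumes an MPI installation; run with at least 2 ranks).
 * Each rank owns its data privately; the only way rank 0 sees rank 1's value
 * is through an explicit message. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, local = 0, remote = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local = rank * 10;              /* each rank's private partial result */

    if (rank == 1) {
        /* explicit send: no other rank can read `local` directly */
        MPI_Send(&local, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else if (rank == 0) {
        MPI_Recv(&remote, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 0 combined: %d\n", local + remote);
    }

    MPI_Finalize();
    return 0;
}
```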
What is a primary scalability challenge unique to shared memory systems as the number of processors increases?
Explanation: In shared memory systems, adding more processors increases contention for shared resources and memory bandwidth, which limits scalability. Network latency is a bigger issue in distributed memory architectures, where processors communicate over an interconnect whose latency is high relative to local memory access. The failure scenario and global reboots are not scalability concerns unique to shared memory; they can occur in other architectures for unrelated reasons.
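The contention effect can be sketched with an ordinary shared counter (assuming POSIX threads; NTHREADS and NITERS are arbitrary demo values): every increment funnels through one lock and one cache line, so adding threads mainly adds waiting rather than speedup.

```c
/* Illustrative contention sketch (assumed setup: POSIX threads; compile with -pthread).
 * All threads compete for the same lock and the same memory location, so the
 * work serializes instead of scaling with the thread count. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 8
#define NITERS   1000000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg) {
    (void)arg;
    for (long i = 0; i < NITERS; i++) {
        pthread_mutex_lock(&lock);   /* every thread contends for this lock */
        counter++;                   /* and for the same cache line */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, bump, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld\n", counter);
    return 0;
}
```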
A scientific simulation requires frequent access to a single large dataset by many threads. Which memory system would generally provide better performance for this workload?
Explanation: Shared memory systems are efficient for workloads where multiple threads frequently access or modify common data, as all data resides in a unified address space. Distributed memory systems add overhead due to the need for explicit communication. 'Distributive cache system' and 'Sheared memory system' are not correct terms and do not refer to standard architectures.
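A minimal sketch of that workload under shared memory, assuming OpenMP support (e.g. `gcc -fopenmp`; the array size is an arbitrary demo value): every thread traverses the same array in place, with nothing copied or sent.

```c
/* Shared-dataset sketch (assumes OpenMP support). All threads read the single
 * shared array directly; no partitioning or explicit communication is required. */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

#define N 10000000

int main(void) {
    double *data = malloc(N * sizeof *data);
    double sum = 0.0;

    for (long i = 0; i < N; i++)
        data[i] = 1.0;

    /* every thread indexes the same shared array in the unified address space */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < N; i++)
        sum += data[i];

    printf("sum = %f\n", sum);
    free(data);
    return 0;
}
```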
From a programmer's perspective, what is a main difference when writing parallel code for distributed memory versus shared memory systems?
Explanation: Distributed memory programming requires explicit communication, so the programmer must use message passing to coordinate and share data. Semaphores are neither exclusive to nor mandatory for distributed memory programs. In shared memory systems, threads can hold both shared and private data; there is no requirement that all data be private. Data inconsistency is still possible in distributed systems if messages are not correctly synchronized.
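To illustrate the explicit coordination the programmer must write under distributed memory (a sketch assuming MPI, run with e.g. `mpirun -np 4`): the partial sums live in separate address spaces, so combining them takes a communication call rather than a plain read of shared data.

```c
/* Sketch of programmer-visible coordination under distributed memory (assumes MPI).
 * Each rank holds its contribution privately; a collective communication call
 * is needed to combine them, in place of simply reading a shared variable. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, partial, total = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    partial = rank + 1;             /* each rank's privately held contribution */

    /* explicit communication replaces "just read the shared data" */
    MPI_Reduce(&partial, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %d\n", total);

    MPI_Finalize();
    return 0;
}
```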