Parallel Programming Paradigms Quiz

Challenge your understanding of parallel programming paradigms, including concepts like shared memory, data parallelism, and task decomposition. This quiz assesses key principles, benefits, and distinctions in modern parallel software development.

  1. Shared Memory vs Distributed Memory

    Which feature is most characteristic of the shared memory parallel programming paradigm, where multiple threads can access and modify common data structures?

    1. Data is stored exclusively on disk drives
    2. Global variables accessible to all threads
    3. Each thread runs in a separate physical machine
    4. Processes communicate only via message passing

    Explanation: Shared memory systems enable multiple threads to access and modify common, global variables, supporting efficient communication within a single address space. Unlike distributed memory, where processes communicate through message passing, shared memory does not require explicit inter-process communication. Storing data exclusively on disk drives is unrelated to memory paradigms, and running each thread on a separate physical machine is more typical of distributed memory setups.
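    To make the correct answer concrete, here is a minimal Python sketch of the shared memory model: threads in one process all see the same global variable, so communication happens through memory rather than messages. The names (`worker`, `counter`) are illustrative only, and a lock is used so the shared update is safe.

    ```python
    import threading

    counter = 0                  # global variable visible to every thread
    lock = threading.Lock()      # protects the shared data during updates

    def worker(n):
        global counter
        for _ in range(n):
            with lock:           # synchronized access within one address space
                counter += 1

    threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)  # 40000: all four threads updated the same variable
    ```

    No messages are exchanged anywhere in this sketch; the threads coordinate purely through the shared `counter` and the lock, which is exactly what distinguishes this paradigm from message-passing distributed memory.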

  2. Data Parallelism Definition

    In a data parallel approach, which of the following best describes how tasks are executed when processing a large array?

    1. Threads communicate continuously using explicit messages
    2. Each element is processed sequentially by a single thread
    3. Different functions are applied to the entire array in parallel
    4. The same operation is applied concurrently to segments of the array

    Explanation: Data parallelism splits a dataset (like an array) and applies the same operation concurrently to each segment using multiple threads or processes. Processing elements sequentially by a single thread does not utilize parallelism. Continuous explicit messaging is more related to distributed systems, not the data parallel pattern. Applying different functions in parallel describes task parallelism, not data parallelism.
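    A minimal Python sketch of the data parallel pattern described above: the array is split into segments and the *same* operation is applied to each segment concurrently. The function names are hypothetical, and `ThreadPoolExecutor` is used for portability (for CPU-bound work in CPython, a process pool would give real speedup).

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def square_chunk(chunk):
        # the same operation, applied to every element of this segment
        return [x * x for x in chunk]

    def parallel_square(data, workers=4):
        # split the array into roughly equal segments, one per worker
        size = (len(data) + workers - 1) // workers
        chunks = [data[i:i + size] for i in range(0, len(data), size)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            # same function, different data segments, run concurrently
            results = pool.map(square_chunk, chunks)
        return [x for chunk in results for x in chunk]

    print(parallel_square(list(range(8))))  # [0, 1, 4, 9, 16, 25, 36, 49]
    ```

    Note that every worker runs `square_chunk`; if each worker ran a *different* function, that would be task parallelism instead.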

  3. Task Decomposition

    What is the primary goal of task decomposition in parallel programming, as seen when dividing a sorting process into independent sub-tasks?

    1. To increase the number of sequential dependencies
    2. To break the problem down into concurrent but independent activities
    3. To reduce the number of threads to a single unit
    4. To minimize the storage requirements of the algorithm

    Explanation: Task decomposition involves splitting a program into small, independent tasks that can be executed concurrently, which can improve efficiency and performance. Minimizing storage requirements is a separate optimization concern and not directly related to task decomposition. Increasing sequential dependencies limits parallelism, and reducing the number of threads to one does not leverage concurrent computation.
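    The sorting example from the question can be sketched as follows: the sort is decomposed into two independent sub-tasks (sort each half), which run concurrently and are then combined. This is an illustrative sketch, not a production sorting routine.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def merge(left, right):
        # combine the two sorted sub-results into one sorted list
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        return out + left[i:] + right[j:]

    def parallel_sort(data):
        mid = len(data) // 2
        with ThreadPoolExecutor(max_workers=2) as pool:
            # two independent sub-tasks: sort each half concurrently
            left = pool.submit(sorted, data[:mid])
            right = pool.submit(sorted, data[mid:])
            return merge(left.result(), right.result())

    print(parallel_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]
    ```

    The key property is that the two halves have no dependency on each other, so neither sub-task waits on the other; only the final merge is sequential.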

  4. Race Conditions

    When multiple threads update a shared counter variable without proper synchronization, what parallel programming issue can occur?

    1. Race condition
    2. Deadlock
    3. Thread starvation
    4. Replication error

    Explanation: A race condition happens when threads access and modify shared data without synchronization, leading to unpredictable outcomes. Deadlock refers to threads waiting indefinitely for resources, while replication error is not a standard concurrency term in this context. Thread starvation involves threads being continually denied access to resources but is not the direct result of unsynchronized updates to shared variables.
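    The hazard in the question can be demonstrated with a short Python sketch. The increment is deliberately split into a separate read and write so the unsynchronized interleaving is visible; with real contention, some updates are typically lost, so the final count usually falls short of the expected total (the exact shortfall varies from run to run).

    ```python
    import threading

    N_THREADS, N_INCR = 8, 50_000
    counter = 0

    def unsafe_increment():
        global counter
        for _ in range(N_INCR):
            tmp = counter        # read the shared value...
            counter = tmp + 1    # ...another thread may have written in between

    threads = [threading.Thread(target=unsafe_increment) for _ in range(N_THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    expected = N_THREADS * N_INCR
    print(counter, "of", expected)  # typically less than expected: updates were lost
    ```

    Wrapping the read-modify-write in a `threading.Lock` (as in the shared-counter sketch for question 1) removes the race and makes the result deterministic.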

  5. Fork-Join Model Purpose

    What is the main purpose of the fork-join parallel programming model, often used to compute sums by dividing arrays and combining results?

    1. To prevent all forms of inter-thread communication
    2. To increase hardware requirements for complex computations
    3. To execute sequential algorithms faster by loop unrolling
    4. To create parallel branches that later merge into a single result

    Explanation: The fork-join model splits (forks) a task into parallel branches which are executed independently and then merged (joined) to produce the final result. Loop unrolling is a separate optimization technique and does not define the fork-join paradigm. Increasing hardware requirements is an effect, not a goal. The model does not prevent inter-thread communication, as joining threads inherently requires some coordination.
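    The array-summing example from the question can be sketched in Python as follows. This uses `ThreadPoolExecutor` to stand in for a fork-join framework (it is not a classic fork-join runtime like Java's `ForkJoinPool`, but the fork/compute/join structure is the same); the function name is illustrative.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def fork_join_sum(data, workers=4):
        # fork: split the array into independent segments
        size = (len(data) + workers - 1) // workers
        chunks = [data[i:i + size] for i in range(0, len(data), size)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            partials = pool.map(sum, chunks)   # each branch sums its own segment
        # join: merge the partial results into a single final result
        return sum(partials)

    print(fork_join_sum(list(range(101))))  # 5050
    ```

    The join step is why the model cannot forbid inter-thread communication entirely: the partial sums must be collected and combined, which requires coordination at the merge point.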