Explore fundamental concepts of mutexes, semaphores, and monitors with these easy quiz questions, designed to reinforce your understanding of process synchronization, critical sections, and concurrency control in operating systems or concurrent programming environments.
Which synchronization primitive allows only one thread to enter a critical section at a time, using locking and unlocking mechanisms?
Explanation: A mutex is designed to enforce mutual exclusion, ensuring that only one thread holds the lock and can access the critical section at a time. Semaphores can admit more than one thread if initialized with a count greater than one. Caches store data and are unrelated to synchronization. Pipes are used for inter-process communication, not for guarding critical sections.
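As a concrete illustration, here is a minimal sketch of mutex-protected access using Python's threading.Lock; the variable names (balance, deposit) are invented for the example:

```python
import threading

balance = 0                  # shared data guarded by the mutex
lock = threading.Lock()      # the mutex

def deposit(amount, times):
    global balance
    for _ in range(times):
        with lock:           # lock: only one thread may be inside at a time
            balance += amount  # critical section
                             # unlock happens automatically on block exit

threads = [threading.Thread(target=deposit, args=(1, 10_000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # 40000: every increment was applied exactly once
```

Without the lock, the read-modify-write of `balance += amount` could interleave between threads and lose updates.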
When initialized to a value greater than one, which type of semaphore allows multiple threads to access the critical section simultaneously?
Explanation: A counting semaphore can be initialized with an integer value, permitting that many threads to access the shared resource at once. A binary semaphore's value can only be zero or one, so it behaves like a mutex. 'Mutex' refers to mutual exclusion and only allows single-thread entry. 'Terminal' is unrelated to synchronization.
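A short sketch of a counting semaphore with Python's threading.Semaphore; the counters and the simulated work are illustrative only:

```python
import threading
import time

sem = threading.Semaphore(3)   # up to three threads may hold it at once
active = 0                     # threads currently inside the guarded section
peak = 0                       # highest concurrency observed
guard = threading.Lock()       # protects the two counters themselves

def worker():
    global active, peak
    with sem:                  # wait/P: blocks while three threads are inside
        with guard:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)       # simulate work so that threads overlap
        with guard:
            active -= 1
                               # signal/V happens when the with-block exits

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak)  # never exceeds 3, even though 10 threads ran
```

Initializing the semaphore with 1 instead of 3 would reduce it to binary, mutex-like behavior.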
What characteristic distinguishes a monitor from other synchronization primitives, making it easier to manage thread coordination within a shared resource?
Explanation: Monitors provide built-in condition variables and mutual exclusion, which help coordinate access and waiting in a structured way. Non-blocking data transfer is not directly related to monitors. Unlimited access would defeat the purpose of synchronization. Monitors can be implemented in software, not just hardware.
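The classic way to show both monitor features together is a bounded buffer. Below is a hedged sketch in monitor style using Python's threading.Condition; the class and names are invented for illustration:

```python
import threading
from collections import deque

class BoundedBuffer:
    """Monitor-style class: one lock guards all methods, and condition
    variables let threads wait inside the monitor for a state change."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.not_full:                      # acquires the monitor lock
            while len(self.items) >= self.capacity:
                self.not_full.wait()             # sleep until space frees up
            self.items.append(item)
            self.not_empty.notify()              # wake a waiting consumer

    def get(self):
        with self.not_empty:
            while not self.items:
                self.not_empty.wait()
            item = self.items.popleft()
            self.not_full.notify()               # wake a waiting producer
            return item

buf = BoundedBuffer(2)
results = []
consumer = threading.Thread(target=lambda: results.extend(buf.get() for _ in range(5)))
producer = threading.Thread(target=lambda: [buf.put(i) for i in range(5)])
consumer.start(); producer.start()
consumer.join(); producer.join()
print(results)  # [0, 1, 2, 3, 4]
```

The `while` loops around `wait()` are the standard monitor discipline: a woken thread must re-check the condition before proceeding.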
If two processes enter a critical section without synchronization, what kind of issue is most likely to occur?
Explanation: A race condition occurs when multiple processes change shared data concurrently, leading to unpredictable results if not synchronized. Deadlock involves processes waiting on each other indefinitely, which is a different issue. Paging relates to memory management, and pipelining is a concept in instruction execution, not synchronization.
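A race condition can be made visible by deliberately widening the window between a read and a write. This is an artificial demonstration, not production code; the sleep exists only to force the interleaving:

```python
import threading
import time

counter = 0  # shared, unsynchronized

def unsafe_increment(times):
    global counter
    for _ in range(times):
        value = counter        # read shared state
        time.sleep(0.001)      # widen the race window between read and write
        counter = value + 1    # write: may overwrite another thread's update

t1 = threading.Thread(target=unsafe_increment, args=(5,))
t2 = threading.Thread(target=unsafe_increment, args=(5,))
t1.start(); t2.start()
t1.join(); t2.join()

print(counter)  # expected 10, but lost updates typically leave it lower
```

Wrapping the read-sleep-write sequence in a lock would make the final value deterministically 10.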
When would you use a binary semaphore instead of a counting semaphore?
Explanation: A binary semaphore only has two states: locked and unlocked, making it ideal for mutual exclusion situations. Priority scheduling is unrelated to binary or counting semaphores specifically. Allowing multiple threads requires a counting semaphore. Memory allocation tracking does not use semaphores directly.
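A binary semaphore used for mutual exclusion can be sketched with Python's threading.BoundedSemaphore, which additionally raises an error if released more times than acquired; the list and tags are illustrative:

```python
import threading

mutex = threading.BoundedSemaphore(1)  # binary: internal value is only 0 or 1
shared = []

def critical(tag):
    mutex.acquire()           # wait/P: value 1 -> 0, or block until it is 1
    try:
        shared.append(tag)    # critical section
    finally:
        mutex.release()       # signal/V: value 0 -> 1 (extra releases raise)

threads = [threading.Thread(target=critical, args=(i,)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(shared))  # 10: every thread entered exactly once
```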
Which primitive requires careful design to avoid deadlock when multiple locks are acquired by different threads simultaneously?
Explanation: Improper use of multiple mutexes can lead to deadlocks if threads wait for each other's locks in a circular manner. Buffers, channels, and arrays are storage and communication mechanisms; they do not by themselves cause deadlocks.
What are the standard names for the two fundamental semaphore operations?
Explanation: Semaphore operations are typically called 'Wait' (or 'P') and 'Signal' (or 'V'). Increment and Decrement refer to the internal counter but are not standard operation names. Lock and Unlock are used for mutexes. Start and Stop do not apply to semaphores.
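In Python's threading module these operations are spelled acquire (Wait/P) and release (Signal/V); the following sketch traces the internal counter:

```python
import threading

sem = threading.Semaphore(2)   # internal counter starts at 2

sem.acquire()                  # Wait / P: counter 2 -> 1
sem.acquire()                  # Wait / P: counter 1 -> 0
got_third = sem.acquire(blocking=False)  # counter is 0: fails instead of blocking
sem.release()                  # Signal / V: counter 0 -> 1, may wake a waiter
got_after_signal = sem.acquire(blocking=False)  # counter is 1 again: succeeds

print(got_third, got_after_signal)  # False True
```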
Within a monitor, how is access to procedures designed to avoid simultaneous entry by multiple threads?
Explanation: Monitors ensure that only one thread executes a procedure at a time, providing built-in mutual exclusion. CPU time prioritization is unrelated to monitor internals. Batch processing and unrestricted parallel execution do not address synchronization.
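In a language without native monitors, the same discipline can be approximated by making every public method body run under one shared lock. A minimal sketch, with an invented Counter class:

```python
import threading

class Counter:
    """Monitor-style object: every method body runs under one shared lock,
    so at most one thread is ever executing inside the object."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self):
        with self._lock:       # entering the procedure takes the monitor lock
            self._value += 1   # body runs in mutual exclusion

    def value(self):
        with self._lock:
            return self._value

c = Counter()
threads = [threading.Thread(target=lambda: [c.increment() for _ in range(1000)])
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(c.value())  # 8000: no increments were lost
```

Languages such as Java make this pattern built-in via `synchronized` methods.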
Which scenario best describes resource starvation in synchronization?
Explanation: Starvation happens when a thread is perpetually denied access due to other threads repeatedly acquiring the resource. Allowing all threads into the critical section defeats the concept of synchronization. Interruptions are not starvation. Releasing and reacquiring a mutex doesn't inherently lead to starvation unless scheduling is unfair.
Why is mutual exclusion essential in concurrent programming with shared resources?
Explanation: Mutual exclusion ensures that only one thread can modify shared data at a time, preventing conflicts and unpredictable behavior. Allocating memory, accelerating execution, and handling user input are not directly related to mutual exclusion's core purpose.