Enhance your understanding of C++ threading concepts with these beginner-friendly questions, covering thread creation, synchronization mechanisms, and basic usage scenarios. This quiz helps you grasp key elements of multithreading in C++ for safer and more efficient concurrent programming.
Which of the following is the correct way to create a new standard thread that runs a function called foo in C++?
Explanation: The correct syntax to create a new thread that runs the function foo is std::thread t(foo);. Options like thread::std(foo); and make_thread(foo) are not part of the C++ standard library. An option such as thread t = foo(); merely calls foo on the current thread and assigns its return value; it never constructs a std::thread.
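For reference, a minimal sketch of the correct form; foo here is just a placeholder function that prints a message:

```cpp
#include <iostream>
#include <thread>

// Stand-in for the function the question refers to.
void foo() {
    std::cout << "foo running on its own thread\n";
}

int main() {
    std::thread t(foo);  // the new thread starts executing foo immediately
    t.join();            // wait for it before main returns
    return 0;
}
```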
In C++, what does calling join() on a thread object do?
Explanation: Calling join() on a thread object makes the calling thread wait until the associated thread finishes execution. It does not start or terminate the thread, and it does not pause it for a specific duration. A thread begins running when it is constructed, and join() never forcibly terminates a thread; it only blocks until that thread completes on its own.
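A small sketch of the blocking behaviour; the 200 ms delay is an arbitrary stand-in for real work:

```cpp
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    std::thread worker([] {
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
        std::cout << "worker done\n";
    });

    worker.join();  // blocks here until the worker finishes
    std::cout << "main continues only after join() returns\n";
    return 0;
}
```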
What is a data race in the context of C++ threading?
Explanation: A data race occurs when two or more threads access the same memory location concurrently, and at least one of these accesses is a write, without appropriate synchronization. It is not simply about thread speed or CPU starvation, nor does it refer to threads that do not share resources. The term specifically addresses unsafe shared access.
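A deliberately incorrect sketch of such a race, for illustration only; shared_value and the loop counts are made up, and because the program has undefined behaviour its output is unpredictable:

```cpp
#include <iostream>
#include <thread>

int shared_value = 0;  // shared and unsynchronized

int main() {
    // Two threads write the same int with no synchronization:
    // a data race, and therefore undefined behaviour.
    std::thread t1([] { for (int i = 0; i < 100000; ++i) ++shared_value; });
    std::thread t2([] { for (int i = 0; i < 100000; ++i) ++shared_value; });
    t1.join();
    t2.join();
    std::cout << shared_value << '\n';  // often less than 200000
}
```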
What is the primary role of a mutex in C++ multithreading?
Explanation: A mutex is used to protect critical sections by allowing only one thread to access a resource or code block at once. It does not make threads run faster, does not block them permanently, and has no role in memory allocation for thread objects. Its function is focused on synchronization and safety.
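A sketch of a mutex guarding a critical section; record and results are illustrative names, and the explicit lock()/unlock() calls are shown only to make the critical section visible (an RAII wrapper, covered in the next question, is preferable in practice):

```cpp
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

std::mutex m;
std::vector<int> results;  // shared resource guarded by m

void record(int value) {
    m.lock();                  // only one thread may be past this point at a time
    results.push_back(value);  // critical section
    m.unlock();
}

int main() {
    std::thread a(record, 1);
    std::thread b(record, 2);
    a.join();
    b.join();
    std::cout << results.size() << " values recorded\n";
}
```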
Why would you use std::lock_guard with a std::mutex in C++?
Explanation: std::lock_guard uses the Resource Acquisition Is Initialization (RAII) idiom to lock the mutex on creation and unlock it when it goes out of scope. It is not used to run threads in sequence, to terminate them, or to copy mutex objects. This mechanism ensures mutexes are released safely even if an exception occurs.
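A sketch of the RAII behaviour, including an exception path; deposit and balance are made-up names:

```cpp
#include <iostream>
#include <mutex>
#include <stdexcept>
#include <thread>

std::mutex m;
int balance = 0;

void deposit(int amount) {
    std::lock_guard<std::mutex> lock(m);  // m is locked here
    if (amount < 0) {
        // Even on this exception path, lock's destructor unlocks m.
        throw std::invalid_argument("negative deposit");
    }
    balance += amount;
}  // lock goes out of scope: m is unlocked automatically

int main() {
    std::thread a(deposit, 50);
    std::thread b(deposit, 25);
    a.join();
    b.join();
    std::cout << "balance: " << balance << '\n';
}
```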
What does std::this_thread::sleep_for(std::chrono::seconds(2)) do in C++?
Explanation: std::this_thread::sleep_for pauses the execution of the current thread for the specified duration, in this case, 2 seconds. It does not terminate the thread, start a new thread, or adjust thread priority. The other options either misinterpret the function or assign it unrelated actions.
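A minimal sketch:

```cpp
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    std::cout << "pausing...\n";
    std::this_thread::sleep_for(std::chrono::seconds(2));  // the current thread sleeps about 2 s
    std::cout << "resumed after roughly two seconds\n";
}
```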
What does it mean when a thread is detached in C++?
Explanation: A detached thread operates independently of the std::thread object that launched it: it can no longer be joined, and its resources are released automatically when it finishes. Detachment does not mean the thread waits for user input, nor does it need a parent thread to join it; keep in mind, though, that all threads, detached or not, end when the program itself exits.
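A sketch of detaching a thread; the short sleep at the end is only there so the detached thread gets a chance to run before the process exits:

```cpp
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    std::thread logger([] {
        std::cout << "background work\n";
    });
    logger.detach();  // the thread now runs independently; it can no longer be joined

    // Give the detached thread a moment to finish before main returns,
    // since all threads end when the program terminates.
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
}
```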
What is the typical use of std::thread::hardware_concurrency() in C++ programs?
Explanation: std::thread::hardware_concurrency() returns an estimate of the number of concurrent threads your system hardware can handle. It does not force single-threaded execution, schedule threads in real time, or report memory size of a thread. This information is typically used for optimal thread pool sizing.
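A sketch of the typical use; the fallback value of 2 is an arbitrary choice for this example, since the function may return 0 when the count cannot be determined:

```cpp
#include <iostream>
#include <thread>

int main() {
    unsigned int n = std::thread::hardware_concurrency();
    if (n == 0) {
        n = 2;  // value not computable on this platform; fall back to a small default
    }
    std::cout << "sizing a thread pool for " << n << " threads\n";
}
```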
What is one advantage of using std::unique_lock over std::lock_guard in C++ thread synchronization?
Explanation: std::unique_lock provides greater flexibility, such as the option to defer locking or to manually unlock and re-lock, which std::lock_guard does not support. It does not inherently reduce memory usage, it has nothing to do with unique pointers, and it is not limited to read-only data. Its key feature is more versatile lock management.
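A sketch of the deferred-locking and manual-unlock flexibility; worker and shared_data are illustrative names:

```cpp
#include <iostream>
#include <mutex>
#include <thread>

std::mutex m;
int shared_data = 0;

void worker() {
    std::unique_lock<std::mutex> lock(m, std::defer_lock);  // constructed without locking

    // ... setup work that does not touch shared_data ...

    lock.lock();    // take the lock only when it is actually needed
    ++shared_data;
    lock.unlock();  // release early, before further unrelated work

    // ... more work without holding the mutex ...
}  // if the lock were still held, the destructor would release it

int main() {
    std::thread a(worker);
    std::thread b(worker);
    a.join();
    b.join();
    std::cout << shared_data << '\n';  // 2
}
```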
Which of the following approaches helps prevent data races when multiple threads increment a shared int counter in C++?
Explanation: Using a std::mutex ensures that only one thread can increment the shared counter at a time, preventing data races. Ignoring the issue is incorrect because ++ on a plain int is a non-atomic read-modify-write, so concurrent increments can interleave and updates can be lost. Simply using a loop or passing the counter by value does not solve the problem either: a loop still performs unsynchronized accesses, and a copy never updates the shared counter at all.
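A sketch of the mutex-protected counter with several threads; the thread and iteration counts are arbitrary:

```cpp
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

int counter = 0;
std::mutex counter_mutex;

void increment(int times) {
    for (int i = 0; i < times; ++i) {
        std::lock_guard<std::mutex> lock(counter_mutex);  // one thread at a time
        ++counter;
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) {
        threads.emplace_back(increment, 100000);
    }
    for (auto& t : threads) {
        t.join();
    }
    std::cout << counter << '\n';  // reliably 400000
}
```

For a single integer counter, std::atomic<int> is another standard option that makes each increment atomic without an explicit mutex.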