Green Threads & User-Level Scheduling Fundamentals Quiz

Explore core concepts of green threads, their benefits, limitations, and the basics of user-level scheduling. This quiz is designed to assess essential knowledge for those interested in lightweight threading models and efficient concurrency in programming.

  1. Green Threads Origin

    Which statement best describes green threads in the context of computer programming?

    1. Green threads refer to hardware-level parallel processing units.
    2. Green threads are threads managed entirely by a user-level library instead of the operating system.
    3. Green threads are a special kind of operating system kernel thread.
    4. Green threads are a type of memory allocation method.

    Explanation: Green threads are scheduled and managed in user space, not by the OS kernel, making them user-level threads. Kernel threads are managed by the operating system, not a user-level library. Green threads are not related to hardware units or memory allocation methods, so those options do not accurately describe them.

  2. Green vs. Kernel Threads

    What is a key difference between green threads and kernel threads?

    1. Green threads always require administrative privileges.
    2. Green threads can only run on a single CPU core at a time.
    3. Green threads are typically implemented in hardware.
    4. Green threads execute machine instructions faster than kernel threads.

    Explanation: Green threads, being managed in user space, are usually limited to one core regardless of the hardware, because the operating system schedules the whole process as a single kernel thread. Kernel threads, on the other hand, can be placed on multiple cores and run in parallel. Green threads do not require administrative privileges, are not inherently faster at executing machine instructions, and are not implemented in hardware, so the other options are incorrect.

  3. Blocking Operation Impact

    If a green thread performs a blocking I/O operation, what typically happens to the other green threads in that process?

    1. The other green threads are scheduled by the kernel to run on another CPU.
    2. All green threads are blocked until the operation finishes.
    3. The blocking operation only affects the current thread, not others.
    4. Green threads automatically switch to non-blocking I/O.

    Explanation: Because the operating system sees only one kernel thread for the whole process, a blocking I/O call made by any green thread stalls every green thread in that process. The kernel does not schedule green threads individually, the blockage is not confined to the current thread, and green threads do not automatically switch to non-blocking I/O.
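
    To make the effect concrete, here is a minimal sketch in Python (chosen only because the quiz names no language), using asyncio coroutines as stand-ins for green threads; the task names ticker and blocking_fetch are invented for the example. One blocking call freezes every cooperative task that shares the single underlying OS thread.

      import asyncio
      import time

      async def ticker():
          # A cooperative task that expects to run every 100 ms.
          for i in range(5):
              print(f"tick {i} at {time.strftime('%X')}")
              await asyncio.sleep(0.1)

      async def blocking_fetch():
          # time.sleep is a blocking call: it never yields to the event loop,
          # so every other task on this loop stalls until it returns.
          print("fetch: blocking for 2 seconds")
          time.sleep(2)
          print("fetch: done")

      async def main():
          await asyncio.gather(ticker(), blocking_fetch())

      asyncio.run(main())
      # After "tick 0" the ticks pause for the full 2 seconds: one blocking
      # call stalled every cooperative task on the single OS thread.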

  4. User-Level Scheduling Definition

    What does user-level scheduling refer to in the context of green threads?

    1. The user-level program controls the scheduling of threads, not the operating system.
    2. The operating system arranges user programs in the background.
    3. Scheduling decisions are based solely on user input from a keyboard.
    4. The user can select which hardware to use for threads.

    Explanation: With user-level scheduling, the green thread library decides when and how threads run, instead of the operating system. The operating system does not schedule green threads individually, users do not choose hardware for green threads directly, and scheduling does not depend solely on keyboard input.
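
    As a rough illustration (Python generators stand in for green threads; the names worker, fewest_steps_first, and run are invented for this sketch), the scheduler below is ordinary user code: the program itself decides which thread resumes next, and no system call is involved.

      def worker(name, steps):
          for i in range(steps):
              print(f"{name} finished step {i}")
              yield                          # hand control back to the user-level scheduler

      def fewest_steps_first(ready):
          # The scheduling policy is plain user code: pick the least-resumed task.
          return min(ready, key=lambda entry: entry[1])

      def run(tasks, policy):
          ready = [[task, 0] for task in tasks]   # [generator, resume count]
          while ready:
              entry = policy(ready)               # user code, not the OS, picks who runs
              try:
                  next(entry[0])
                  entry[1] += 1
              except StopIteration:
                  ready.remove(entry)             # drop finished tasks

      run([worker("A", 2), worker("B", 3)], fewest_steps_first)

    Swapping in a different policy function changes the scheduling behavior without touching the operating system at all, which is the essence of user-level scheduling.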

  5. Cooperative Multitasking Example

    In a green-threaded system using cooperative multitasking, what is required for one thread to allow another to run?

    1. The operating system interrupts the thread automatically.
    2. Threads are switched based on hardware timer events.
    3. The running thread must yield control voluntarily.
    4. Threads do not need to yield; switching is automatic.

    Explanation: In cooperative multitasking, threads must yield control themselves for others to run. The OS does not automatically interrupt or switch green threads, and automatic switching based on timer events requires preemptive scheduling, which is not the case here.
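
    The requirement is easy to see with Python's third-party greenlet package (a real user-level threading library, assumed to be installed for this sketch): control moves only when the running greenlet explicitly calls switch().

      from greenlet import greenlet    # third-party package, assumed available

      def task_a():
          print("A: part 1")
          gr_b.switch()                # voluntary yield: B cannot run before this call
          print("A: part 2")
          gr_b.switch()                # yield again so B can finish

      def task_b():
          print("B: part 1")
          gr_a.switch()                # voluntary yield back to A
          print("B: part 2")           # B then returns and control falls back to main

      gr_a = greenlet(task_a)
      gr_b = greenlet(task_b)
      gr_a.switch()                    # prints: A: part 1, B: part 1, A: part 2, B: part 2

    If task_a spun in a loop without ever calling switch(), task_b would never run; that is the price of cooperative multitasking.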

  6. Advantages of Green Threads

    Which is a commonly cited benefit of using green threads over kernel threads?

    1. Green threads are always more secure.
    2. Green threads are managed by the operating system, improving stability.
    3. Green threads cannot encounter bugs.
    4. Green threads have lower context-switching overhead.

    Explanation: Green threads typically switch contexts faster because the switch happens entirely in user space, without a trap into the kernel. Security is not inherently better, bugs can still occur in green-threaded code, and green threads are not managed by the operating system, so those distractors are incorrect.

  7. Green Thread Portability

    Why are green threads often considered more portable across operating systems?

    1. They require proprietary hardware instructions.
    2. They are directly dependent on specific OS kernel features.
    3. User-level libraries implement green threads, so they do not rely on OS thread support.
    4. Green threads only work on older operating systems.

    Explanation: Green threads can be implemented without OS-specific thread APIs, making them portable. They do not depend on proprietary hardware or specific kernel features, and they are not limited to older operating systems, which makes the other options incorrect.

  8. Green Threads Scalability Limitation

    What is a major limitation of green threads on multi-core systems?

    1. Green threads automatically spread across all available cores.
    2. Green threads require more memory than kernel threads.
    3. Green threads need direct kernel management to scale.
    4. Green threads cannot utilize multiple CPU cores simultaneously.

    Explanation: Green threads typically run on a single core because, to the operating system, the whole process is just one schedulable kernel thread. They do not automatically distribute across cores, usually use less memory than kernel threads, and do not require direct kernel management, making the other options less appropriate.

  9. Green Thread Scheduling Algorithm

    Which scheduling algorithm is often used by green thread libraries for managing threads?

    1. Weighted Fair Queueing for hardware packets
    2. Priority-based kernel scheduling
    3. Least Recently Used (LRU) cache replacement
    4. Round-robin scheduling

    Explanation: Round-robin is a simple and common scheduling method for green thread libraries, allowing each thread a fair chance to run. Priority-based scheduling is more typical at the kernel level, LRU is for cache management, and Weighted Fair Queueing applies to networking, so the other answers do not fit.
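
    A round-robin loop is short enough to sketch directly; the version below (plain Python generators as stand-in green threads, names invented for the example) cycles tasks through a queue so each one gets a turn per pass.

      from collections import deque

      def task(name, steps):
          for i in range(steps):
              print(f"{name} runs slice {i}")
              yield                        # end of this task's turn

      def round_robin(tasks):
          ready = deque(tasks)
          while ready:
              current = ready.popleft()    # take the task at the head of the queue
              try:
                  next(current)            # let it run until its next yield
                  ready.append(current)    # then send it to the back of the line
              except StopIteration:
                  pass                     # finished tasks leave the rotation

      round_robin([task("A", 2), task("B", 2), task("C", 2)])
      # Prints A, B, C, A, B, C: every task gets one turn per cycle.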

  10. User-Level Context Switching

    How does user-level context switching in green threads commonly compare in speed to kernel-level thread switching?

    1. User-level context switching is irrelevant to performance.
    2. User-level context switching requires recompiling the kernel each time.
    3. User-level context switching is generally faster, as it avoids kernel involvement.
    4. User-level context switching is always slower due to extra system calls.

    Explanation: Since user-level context switching does not enter operating system kernel code, it is usually faster than switching between kernel threads. Contrary to the distractors, it requires neither extra system calls nor kernel recompilation, and it very much matters for performance.
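
    The difference can be sensed with a rough micro-benchmark; the sketch below compares resuming a Python generator (a switch that never leaves user space) with handing control back and forth between two OS threads via events. The figures are only indicative, since the thread path also pays for Python-level synchronization, but the gap is typically large.

      import threading
      import time

      N = 50_000

      # User-level switch: resuming a generator stays entirely in user space.
      def green():
          while True:
              yield

      g = green()
      next(g)                              # prime the generator
      t0 = time.perf_counter()
      for _ in range(N):
          next(g)                          # one user-space "context switch"
      user_time = time.perf_counter() - t0

      # Kernel-level handoff: two OS threads ping-pong via events, which
      # forces the kernel to park and wake a thread on every exchange.
      ping, pong = threading.Event(), threading.Event()

      def partner():
          for _ in range(N):
              ping.wait()
              ping.clear()
              pong.set()

      t = threading.Thread(target=partner)
      t.start()
      t0 = time.perf_counter()
      for _ in range(N):
          ping.set()
          pong.wait()
          pong.clear()
      kernel_time = time.perf_counter() - t0
      t.join()

      print(f"generator resume: {user_time / N * 1e6:.2f} microseconds per switch")
      print(f"thread handoff:   {kernel_time / N * 1e6:.2f} microseconds per switch")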