Explore core concepts of multi-core processing and parallelism with this quiz, designed to help you identify key features, benefits, and common terminology in parallel computing and CPU architectures. Perfect for those looking to understand how modern processors handle multiple tasks efficiently.
What is the primary advantage of having multiple cores in a CPU when running several applications at once?
Explanation: Multiple cores in a CPU allow different tasks to be processed at the same time, significantly improving multitasking and overall system responsiveness. Multi-core CPUs often cost more, but higher cost is a drawback, not an advantage. Slower performance is incorrect; multiple cores are designed for speed. Memory usage depends mainly on the running programs, not directly on the number of cores.
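As an illustrative sketch (Python standard library; `busy_task` is a hypothetical stand-in for any CPU-bound work), the snippet below queries the number of logical cores and spreads independent tasks across separate processes so they can run at the same time:

```python
import os
from concurrent.futures import ProcessPoolExecutor

def busy_task(n: int) -> int:
    """A small CPU-bound task: sum the first n integers."""
    return sum(range(n))

if __name__ == "__main__":
    print(f"Logical cores available: {os.cpu_count()}")
    # Each task can run on a different core, so the work overlaps in time.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(busy_task, [10_000, 20_000, 30_000]))
    print(results)
```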
Which of the following best describes parallelism in computing?
Explanation: Parallelism specifically means executing multiple operations or tasks at the same time, often using multiple cores or processors. Running tasks in sequence is called serial or sequential processing. Using less electricity is related to power efficiency, not parallelism. Storing more data is a function of memory, not processing.
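The contrast between sequential and parallel execution can be sketched as follows (Python; `work` is a hypothetical placeholder for any independent task):

```python
from multiprocessing import Pool

def work(x: int) -> int:
    """A stand-in for any independent unit of work (hypothetical)."""
    return x * x

if __name__ == "__main__":
    data = list(range(8))
    # Serial (sequential) processing: one task after another.
    serial = [work(x) for x in data]
    # Parallelism: the same tasks distributed across worker processes.
    with Pool() as pool:
        parallel = pool.map(work, data)
    assert serial == parallel  # same answer, potentially less wall-clock time
```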
If a computer has a quad-core CPU and needs to sort several large lists, how can parallelism optimize this task?
Explanation: Parallelizing the sorting task by distributing lists among multiple cores speeds up overall completion. Sorting lists one by one would ignore the benefits of multiple cores. Writing data twice in memory is unrelated to this scenario. Screen resolution changes have no impact on processing tasks like sorting.
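A minimal sketch of this scenario in Python, assuming four independent lists and a pool of four worker processes:

```python
import random
from concurrent.futures import ProcessPoolExecutor

def sort_list(items):
    """Sort one list; each call is independent of the others."""
    return sorted(items)

if __name__ == "__main__":
    lists = [[random.randint(0, 10_000) for _ in range(5_000)] for _ in range(4)]
    # On a quad-core CPU, each list can be sorted on its own core,
    # so the four sorts overlap instead of running one after another.
    with ProcessPoolExecutor(max_workers=4) as pool:
        sorted_lists = list(pool.map(sort_list, lists))
```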
Which statement correctly differentiates a CPU core from a thread?
Explanation: A core is the physical hardware unit capable of independently executing instructions, while a thread is a software-level sequence of instructions scheduled for execution. Comparing the speed of threads to cores is misleading; performance depends on how threads are scheduled onto the available cores. Cores exist in many types of processors, not just graphics cards. Power consumption varies by design, but threads are software constructs, not physical units.
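The hardware/software distinction can be illustrated in Python, where a program may create many more threads than the machine has cores:

```python
import os
import threading

def task(name: str) -> None:
    # A thread is a schedulable sequence of instructions, not hardware.
    print(f"{name} running in process {os.getpid()}")

if __name__ == "__main__":
    print(f"Hardware: {os.cpu_count()} logical cores")
    # Software: eight threads here, regardless of core count;
    # the OS scheduler multiplexes them onto the available cores.
    threads = [threading.Thread(target=task, args=(f"thread-{i}",)) for i in range(8)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```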
Which of the following is typically shared among all cores in a multi-core processor?
Explanation: Cores in a multi-core processor usually access the same main system memory, allowing efficient data exchange. A clock signal is typically distributed to all cores rather than generated individually for each one. A single operating system generally oversees all cores, not one per core. Likewise, cores draw power from a shared supply rather than separate ones.
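A rough software analogue in Python: threads within one process share the same memory, much as cores share main system memory (the lock is only there to coordinate concurrent writes):

```python
import threading

# Threads in one process share a single address space, mirroring how
# cores in a multi-core CPU share main system memory.
shared = []
lock = threading.Lock()

def append_value(v: int) -> None:
    with lock:  # coordinate access to the shared data
        shared.append(v)

if __name__ == "__main__":
    threads = [threading.Thread(target=append_value, args=(i,)) for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(sorted(shared))  # every thread wrote into the same memory
```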
What does the term 'embarrassingly parallel' describe in computing?
Explanation: An 'embarrassingly parallel' problem can be divided into many independent subtasks that do not rely on each other, simplifying parallel execution. If tasks have strong dependencies, they are not considered embarrassingly parallel. Coding mistakes are unrelated. Operations that require a single core are the opposite of parallel problems.
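A sketch of an embarrassingly parallel workload in Python, here hashing many independent records; no subtask needs any other subtask's result:

```python
import hashlib
from concurrent.futures import ProcessPoolExecutor

def digest(text: str) -> str:
    """Hash one string; no subtask depends on any other subtask."""
    return hashlib.sha256(text.encode()).hexdigest()

if __name__ == "__main__":
    inputs = [f"record-{i}" for i in range(100)]
    # Because the subtasks are fully independent, splitting them across
    # cores needs no coordination -- the hallmark of an embarrassingly
    # parallel problem.
    with ProcessPoolExecutor() as pool:
        digests = list(pool.map(digest, inputs))
```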
Why is load balancing important in parallel processing environments?
Explanation: Load balancing distributes work evenly across all cores or processing units, maximizing resource usage and minimizing delays. Running applications in sequence does not leverage parallelism. Reducing task count isn't always beneficial if it means underutilizing processors. Intentionally increasing temperature has no functional advantage.
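One simple load-balancing tactic, sketched in Python: hand out jobs in small chunks so idle workers keep picking up new work (the job sizes here are made up for illustration):

```python
from concurrent.futures import ProcessPoolExecutor

def process(job_size: int) -> int:
    """Simulated job whose cost grows with job_size (hypothetical)."""
    return sum(range(job_size))

if __name__ == "__main__":
    # Jobs of very different sizes: naive up-front splitting could leave
    # one worker with all the heavy jobs while the others sit idle.
    jobs = [100, 100_000, 100, 100_000, 100, 100_000]
    with ProcessPoolExecutor(max_workers=3) as pool:
        # chunksize=1 hands out work piecemeal, so workers that finish
        # early grab the next job -- a simple form of load balancing.
        results = list(pool.map(process, jobs, chunksize=1))
```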
What is the term for a process where a single instruction operates simultaneously on multiple data elements?
Explanation: SIMD (Single Instruction, Multiple Data) allows one instruction to process several data elements in parallel and is widely used in tasks like graphics processing. MISD, another category in Flynn's taxonomy, is a rare architecture in which different instructions act on the same data. FIMM and RAMD are not standard computing models.
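The idea can be mimicked, purely conceptually, in Python; real SIMD is a hardware feature (e.g. SSE/AVX vector instructions), so this loop only models the "one operation, many data elements" pattern:

```python
# Conceptual illustration only: real SIMD hardware applies one
# instruction (e.g. a vector add) to several data lanes in a single
# step; libraries such as NumPy rely on such instructions internally.
def simd_style_add(a, b):
    """Apply one operation ('add') across many data elements."""
    return [x + y for x, y in zip(a, b)]

lanes_a = [1, 2, 3, 4]
lanes_b = [10, 20, 30, 40]
print(simd_style_add(lanes_a, lanes_b))  # [11, 22, 33, 44]
```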
What is a common cause of bottlenecks in parallel processing when multiple cores try to access shared resources?
Explanation: Bottlenecks commonly occur when several cores compete to access the same memory or data, causing delays and reduced performance. The number of screens is unrelated to CPU resource management. Improving cache efficiency helps decrease, not increase, bottlenecks. Running one application rarely causes contention.
What term describes software written to take advantage of multiple cores by dividing its tasks among them?
Explanation: Parallelized software distributes tasks to be executed concurrently across multiple cores, improving performance. Serialized programs do the opposite by running tasks one after another. Compression relates to reducing data size, not core utilization. Fragmentation deals with data storage, not program execution.
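A minimal sketch of parallelized software in Python: the program divides one large task into chunks, distributes them to worker processes, and combines the partial results:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """Each worker handles one slice of the overall task."""
    return sum(chunk)

def parallel_total(data, workers=4):
    # Parallelized software splits its work into pieces, farms them out
    # to multiple cores, then combines the partial results.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    numbers = list(range(1_000))
    assert parallel_total(numbers) == sum(numbers)
```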