Challenge your understanding of divide and conquer algorithms with this intermediate-level quiz focused on time complexity, recurrence relations, and real-world applications. Sharpen your grasp of efficiency analysis and the fundamental strategies behind recursive algorithm design with divide and conquer techniques.
Given the recurrence T(n) = 2T(n/2) + n, which is commonly seen in the merge sort algorithm, what is the time complexity of this divide and conquer algorithm?
Explanation: The correct answer is O(n log n). Applying the Master Theorem to T(n) = 2T(n/2) + n with a = 2, b = 2, and f(n) = n, the driving function matches n^(log_2 2) = n, so Case 2 of the theorem yields Θ(n log n). The O(n^2) option corresponds to less efficient quadratic sorts such as insertion sort or bubble sort. O(log n) is typical of binary search, and O(n) describes algorithms that sweep the input only once, which does not fit a recurrence with two recursive calls and a linear merge. Only O(n log n) matches the growth rate here.
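For reference, here is a minimal Python sketch of merge sort; the function name merge_sort and the test list are illustrative only. The two recursive calls on halves correspond to the 2T(n/2) term, and the linear-time merge corresponds to the + n term.

```python
def merge_sort(a):
    """Sort a list. Each call makes two recursive calls on halves (2T(n/2))
    and does a linear-time merge (+ n), matching T(n) = 2T(n/2) + n."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # T(n/2)
    right = merge_sort(a[mid:])   # T(n/2)
    # Linear-time merge of the two sorted halves: the "+ n" term.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]
```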
Which statement best describes how divide and conquer reduces the complexity of the Closest Pair of Points problem in computational geometry?
Explanation: The correct answer is that the point set is split into two halves by x-coordinate, each half is solved recursively, and the results are combined by examining only a narrow strip of points around the dividing line; because that combine step takes linear time per level, the whole algorithm runs in O(n log n). Random sampling, as stated in the second option, does not guarantee the optimal pair is found and is not part of the standard divide and conquer approach. Sorting by both axes alone (option three) does not by itself find the closest pair and does not represent the divide and conquer strategy. Exhaustive checking of all pairs (option four) is brute force at O(n^2), not divide and conquer.
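The Python sketch below illustrates that structure, assuming at least two distinct input points given as (x, y) tuples; the name closest_pair and the sample data are illustrative only.

```python
import math

def closest_pair(points):
    """Smallest distance between any two points in the plane.
    Divide by x-coordinate, recurse on each half, then scan a strip
    of width 2d around the dividing line. O(n log n) overall."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def solve(px, py):
        n = len(px)
        if n <= 3:                       # base case: brute force a tiny set
            return min(dist(p, q)
                       for i, p in enumerate(px) for q in px[i + 1:])
        mid = n // 2
        mid_x = px[mid][0]
        left_set = set(px[:mid])         # relies on points being distinct
        left_y = [p for p in py if p in left_set]
        right_y = [p for p in py if p not in left_set]
        d = min(solve(px[:mid], left_y), solve(px[mid:], right_y))
        # Combine step: only points within d of the dividing line can beat d,
        # and each needs comparing against a constant number of y-neighbours.
        strip = [p for p in py if abs(p[0] - mid_x) < d]
        for i, p in enumerate(strip):
            for q in strip[i + 1:i + 8]:
                d = min(d, dist(p, q))
        return d

    px = sorted(points)                       # sorted by x, once
    py = sorted(points, key=lambda p: p[1])   # sorted by y, once
    return solve(px, py)

print(closest_pair([(0, 0), (5, 4), (3, 1), (9, 6), (2, 2)]))  # 1.414...
```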
What is the time complexity of multiplying two n-digit numbers using the Karatsuba divide and conquer algorithm?
Explanation: O(n^1.585) is correct because the Karatsuba algorithm replaces the four half-size multiplications of the schoolbook approach with three per recursion, giving T(n) = 3T(n/2) + O(n) = O(n^(log_2 3)) ≈ O(n^1.585). O(n^3) is cubic and is associated with naive matrix multiplication, not digit multiplication; the schoolbook method for n-digit numbers is O(n^2). O(n log n) is faster than Karatsuba and is associated with FFT-based multiplication algorithms, so it is not the complexity asked for here. O(n) is incorrect because no known algorithm multiplies two n-digit integers in linear time.
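A minimal Python sketch of the idea follows; the function name karatsuba and the base-10 splitting are illustrative choices, not the only way to implement it.

```python
def karatsuba(x, y):
    """Multiply non-negative integers using three recursive multiplications
    instead of four: T(n) = 3T(n/2) + O(n) = O(n^log2(3)) ≈ O(n^1.585)."""
    if x < 10 or y < 10:                 # base case: a single-digit operand
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)   # split each number at digit m
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)                               # low * low
    z2 = karatsuba(high_x, high_y)                             # high * high
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2   # cross terms
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

print(karatsuba(1234, 5678))  # 7006652, same as 1234 * 5678
```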
Why is binary search considered an example of a divide and conquer algorithm when searching in a sorted array?
Explanation: Binary search halves the problem size at each step, which aligns with the divide and conquer paradigm (only one half is recursed into, so no combine step is needed). Sequential comparison, as in the second option, is linear search, not divide and conquer. Re-sorting the array at every step (option three) is inefficient and unnecessary once the array is sorted. Merging halves (option four) is not part of binary search; no merging is involved.
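For concreteness, here is a small Python sketch of iterative binary search on a sorted list; the function name binary_search and the test values are illustrative only.

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent.
    Each comparison discards half of the remaining range, so
    T(n) = T(n/2) + O(1), which is O(log n)."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1          # discard the left half
        else:
            hi = mid - 1          # discard the right half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1
```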
What characteristic must hold for subproblems in a divide and conquer algorithm to guarantee correct results when combining their solutions?
Explanation: Independence of subproblems ensures that the solution to each part can be combined correctly without double counting or missing cases. Equal size is not a requirement, since subproblems can differ in size as needed, which rules out the second option. Brute force (third option) is unnecessary and negates the efficiency of divide and conquer. Recomputing the whole problem at each step (fourth option) is wasteful and contradicts the nature of divide and conquer.
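A tiny Python sketch of finding a maximum by divide and conquer makes the independence property concrete; the function name dc_max is illustrative only.

```python
def dc_max(a, lo=0, hi=None):
    """Maximum of a[lo:hi] by divide and conquer. The two halves share no
    elements, so each recursive result can be combined (here with max)
    without double counting: the independence property in question."""
    if hi is None:
        hi = len(a)
    if hi - lo == 1:                  # base case: a single element
        return a[lo]
    mid = (lo + hi) // 2
    return max(dc_max(a, lo, mid),    # left subproblem, independent
               dc_max(a, mid, hi))    # right subproblem, independent

print(dc_max([4, 8, 1, 9, 3]))  # 9
```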