Divide & Conquer Algorithms: Complexity Insights Quiz

Challenge your understanding of divide and conquer algorithms with this intermediate-level quiz focused on time complexity, recurrence relations, and real-world applications. Improve your grasp of efficiency analysis and the fundamental strategies behind recursive algorithm design.

  1. Master Theorem Application

    Given the recurrence T(n) = 2T(n/2) + n, which is commonly seen in the merge sort algorithm, what is the time complexity of this divide and conquer algorithm?

    1. O(n log n)
    2. O(n^2)
    3. O(n)
    4. O(log n)

    Explanation: The correct answer is O(n log n) because the Master Theorem applied to T(n) = 2T(n/2) + n gives a solution of O(n log n). The O(n^2) option corresponds to less efficient algorithms like simple sorting methods. O(log n) is typical for binary search, and O(n) would be for algorithms that only sweep the input once, which does not fit this recurrence. Only O(n log n) matches the growth rate here.
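To make the recurrence concrete, here is a minimal merge sort sketch (function names and structure are illustrative, not taken from the quiz): the two recursive calls give the 2T(n/2) term, and the linear merge gives the + n term.

```python
def merge_sort(a):
    # Divide: split into halves until size <= 1 (log n levels of recursion).
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])   # T(n/2)
    right = merge_sort(a[mid:])  # T(n/2)
    # Combine: merge two sorted halves in O(n) work per level.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

With log n levels and O(n) merge work per level, the total is O(n log n), matching the Master Theorem result.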

  2. Closest Pair Problem Strategy

    Which statement best describes how divide and conquer reduces the complexity of the Closest Pair of Points problem in computational geometry?

    1. It splits the set into halves, compares pairs across the split, and combines results efficiently.
    2. It samples random pairs throughout the set and checks only the smallest distance found.
    3. It exhaustively checks every possible pair in the set without any partition.
    4. It sorts all points by both axes before dividing, doubling the sorting time.

    Explanation: The correct answer is that the set is split into halves, each half is solved recursively, and cross-split pairs are checked efficiently during the combine step, which enables an O(n log n) solution. Random sampling (option two) does not guarantee the closest pair is found and is not part of the standard divide and conquer approach. Exhaustive checking (option three) is the O(n^2) brute-force method, not divide and conquer. Sorting by both axes and doubling the sorting time (option four) does not describe the strategy; presorting by one axis suffices to drive the recursion.
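A hedged sketch of that strategy is below. This simplified version re-sorts the strip by y inside each call (giving roughly O(n log^2 n) rather than the fully optimized O(n log n), which presorts by y once); the function names are illustrative.

```python
import math

def closest_pair(points):
    """Return the smallest pairwise distance among 2D points."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    def solve(px):  # px: points sorted by x
        n = len(px)
        if n <= 3:  # brute-force tiny cases
            return min(dist(p, q) for i, p in enumerate(px) for q in px[i + 1:])
        mid = n // 2
        mid_x = px[mid][0]
        # Divide: best distance found entirely within either half.
        best = min(solve(px[:mid]), solve(px[mid:]))
        # Combine: only points within `best` of the dividing line can form
        # a closer cross pair; scanned in y order, each point needs only a
        # constant number of comparisons.
        strip = sorted((p for p in px if abs(p[0] - mid_x) < best),
                       key=lambda p: p[1])
        for i, p in enumerate(strip):
            for q in strip[i + 1:i + 8]:
                best = min(best, dist(p, q))
        return best

    return solve(sorted(points))
```

The key efficiency gain is in the combine step: instead of comparing all pairs across the split, only a narrow strip around the dividing line is examined.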

  3. Karatsuba Algorithm Complexity

    What is the time complexity of multiplying two n-digit numbers using the Karatsuba divide and conquer algorithm?

    1. O(n)
    2. O(n log n)
    3. O(n^1.585)
    4. O(n^3)

    Explanation: O(n^1.585) is correct because the Karatsuba algorithm replaces the four recursive multiplications of the schoolbook split with three, giving the recurrence T(n) = 3T(n/2) + O(n) and hence O(n^(log2 3)) ≈ O(n^1.585). O(n^3) is slower even than the schoolbook O(n^2) method; it corresponds to naive matrix multiplication, not integer multiplication. O(n log n) is better than Karatsuba and is approached only by FFT-based algorithms such as Schönhage–Strassen. O(n) is incorrect: no known algorithm multiplies n-digit integers in linear time.
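The three-multiplication trick can be sketched as follows (a minimal illustration, not a production implementation): writing x = a·10^h + b and y = c·10^h + d, the cross term ad + bc is recovered from (a+b)(c+d) − ac − bd, so only three recursive products are needed.

```python
def karatsuba(x, y):
    # Base case: single-digit factors are multiplied directly.
    if x < 10 or y < 10:
        return x * y
    half = max(len(str(x)), len(str(y))) // 2
    p = 10 ** half
    a, b = divmod(x, p)  # high and low halves of x
    c, d = divmod(y, p)  # high and low halves of y
    # Three recursive multiplications instead of four:
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    cross = karatsuba(a + b, c + d) - ac - bd  # equals ad + bc
    return ac * p * p + cross * p + bd
```

Since each level does three subproblems of half the size, the recursion tree has 3^(log2 n) = n^(log2 3) leaves, which is where the exponent 1.585 comes from.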

  4. Binary Search and Divide & Conquer

    Why is binary search considered an example of a divide and conquer algorithm when searching in a sorted array?

    1. It compares the key with every element sequentially from the start.
    2. It sorts the array each time before searching for the key.
    3. It repeatedly divides the array and only searches one half based on the key.
    4. It merges both halves after searching to find the key.

    Explanation: Binary search reduces the problem size by half at each step, which aligns with the divide and conquer paradigm. Sequential comparison (option one) is linear search, not divide and conquer. Re-sorting the array before every search (option two) is inefficient and unnecessary once the array is sorted. Merging halves (option four) is not part of binary search; no combine step exists because only one half is ever searched.
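The halving described above can be sketched in a few lines (an illustrative iterative version):

```python
def binary_search(arr, key):
    """Return the index of key in sorted arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == key:
            return mid
        elif arr[mid] < key:
            lo = mid + 1  # discard the left half
        else:
            hi = mid - 1  # discard the right half
    return -1
```

Each iteration discards half the remaining range, so at most about log2(n) comparisons are needed, giving O(log n).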

  5. Subproblem Independence in Divide & Conquer

    What characteristic must hold for subproblems in a divide and conquer algorithm to guarantee correct results when combining their solutions?

    1. Subproblems must always have the exact same size.
    2. Subproblems must be independent and not overlap in their data.
    3. Subproblems must recompute the entire problem at each step.
    4. Subproblems should use brute force to guarantee accuracy.

    Explanation: Independence of subproblems ensures that solutions to each part can be correctly merged without double counting or missing parts. Equal size (option one) is not a requirement; subproblems may differ in size as needed. Recomputing the entire problem at each step (option three) is wasteful and contradicts the point of dividing. Brute force (option four) is unnecessary and forfeits the efficiency that divide and conquer provides.
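A tiny example of why independence matters (illustrative only): finding the maximum of an array by recursing on two non-overlapping halves. Because the halves share no elements, combining with max() can neither double count nor miss a value.

```python
def dc_max(a, lo=0, hi=None):
    """Maximum of a non-empty list via divide and conquer."""
    if hi is None:
        hi = len(a)
    if hi - lo == 1:  # base case: single element
        return a[lo]
    mid = (lo + hi) // 2
    # The slices [lo, mid) and [mid, hi) are disjoint and cover [lo, hi),
    # so their results combine correctly.
    return max(dc_max(a, lo, mid), dc_max(a, mid, hi))
```

If the subproblems overlapped instead, naive recombination could count shared data twice; problems with overlapping subproblems typically call for dynamic programming rather than plain divide and conquer.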