Explore the efficiency trade-offs between greedy and dynamic programming algorithms with scenarios highlighting optimality, time complexity, and real-world applications. This quiz helps learners distinguish when to use each approach for problem-solving efficiency and correctness.
Which property must a problem have for dynamic programming to guarantee an optimal solution, illustrated by the shortest path problem in a weighted graph?
Explanation: Dynamic programming requires that a problem's optimal solution can be built from optimal solutions to its subproblems, a property known as optimal substructure. The greedy choice property is essential for greedy algorithms, but it alone does not make dynamic programming applicable. Overlapping subproblems are necessary for dynamic programming's efficiency, yet they concern reuse of work rather than solution optimality. Divide and conquer decomposes a problem into independent subproblems, whereas dynamic programming specifically exploits subproblems that overlap.
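To make optimal substructure concrete, here is a minimal Python sketch of shortest paths in a weighted DAG (the graph and its weights are invented for illustration): each vertex's best distance is assembled from the already-optimal distances of its predecessors.

```python
def dag_shortest_paths(order, edges, source):
    """order: vertices in topological order; edges: {u: [(v, w), ...]}."""
    dist = {v: float("inf") for v in order}
    dist[source] = 0
    for u in order:
        for v, w in edges.get(u, []):
            # Relax: extend the best-known path to u by edge (u, v).
            # Optimal substructure: dist[v] is built from optimal dist[u].
            dist[v] = min(dist[v], dist[u] + w)
    return dist

# Hypothetical DAG: s -> a (2), s -> b (5), a -> b (1), b -> t (2)
dists = dag_shortest_paths(["s", "a", "b", "t"],
                           {"s": [("a", 2), ("b", 5)],
                            "a": [("b", 1)],
                            "b": [("t", 2)]}, "s")
# Best path to b combines the optimal path to a: min(5, 2 + 1) = 3.
```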
Which of the following problems can be solved optimally using a greedy algorithm instead of dynamic programming, as shown by repeatedly selecting the activity with the earliest finish time?
Explanation: The greedy approach yields an optimal solution for the activity selection problem by always picking the compatible activity that finishes earliest. The 0/1 Knapsack problem cannot be optimally solved greedily because a locally best item choice can rule out a better overall combination. Edit distance and matrix chain multiplication likewise require comparing many combinations of subproblem solutions, so no single greedy choice is safe; dynamic programming evaluates them systematically.
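The greedy rule described above can be sketched in a few lines of Python (the activity list is a made-up example):

```python
def select_activities(activities):
    """Greedy activity selection: sort by finish time, keep each
    activity that starts at or after the last chosen finish."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:  # compatible with the last chosen one
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
# Picks (1, 4), (5, 7), (8, 11): three mutually compatible activities.
```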
Suppose a dynamic programming solution for the Fibonacci sequence uses an array to store all previous results, while a space-optimized iteration (often loosely called a "greedy" approach in this comparison) computes each term from just the last two; which method typically uses more memory?
Explanation: Dynamic programming with a full table stores every intermediate result, so its memory use grows with the input: an array of n Fibonacci values takes O(n) space. The two-variable iteration, though sometimes labeled greedy, is really space-optimized dynamic programming and needs only constant space. Saying both use the same amount of memory, or none, is incorrect: the table-based version's footprint grows with n while the two-variable version's does not.
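A minimal sketch of the two memory profiles, assuming the standard iterative formulations:

```python
def fib_table(n):
    """Bottom-up DP with a full table: O(n) extra memory."""
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

def fib_two_vars(n):
    """Space-optimized iteration: keeps only the last two values, O(1) memory."""
    prev, cur = 0, 1
    for _ in range(n):
        prev, cur = cur, prev + cur
    return prev
```

Both compute the same values; only the bookkeeping differs, which is exactly the memory trade-off the question targets.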
When solving the coin change problem, a greedy algorithm may not always find the minimum number of coins, while dynamic programming can achieve this; how does their time complexity typically compare for large input sizes?
Explanation: For arbitrary denominations, dynamic programming examines every reachable sub-amount, running in pseudo-polynomial O(n × amount) time, and is guaranteed to find the minimum number of coins. The greedy approach, which repeatedly takes the largest coin that fits, runs in roughly O(n log n) time (dominated by sorting) and is usually faster, but it can return a suboptimal answer for non-canonical coin systems. Claims that greedy is always slower, or that the two always have equal time complexity, do not hold up under standard algorithm analysis.
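A small Python comparison makes the trade-off concrete; the denomination set [1, 3, 4] is a standard non-canonical example where the greedy choice fails:

```python
def greedy_coins(coins, amount):
    """Greedy: repeatedly take the largest coin that fits."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def dp_coins(coins, amount):
    """DP in O(len(coins) * amount): best[a] = fewest coins summing to a."""
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] < INF else None

# For amount 6, greedy takes 4 + 1 + 1 (3 coins),
# while DP finds 3 + 3 (2 coins).
```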
Which scenario best demonstrates when choosing dynamic programming over a greedy approach is necessary due to possible non-optimal greedy solutions, such as calculating optimal binary search tree cost?
Explanation: Dynamic programming is essential when a locally optimal choice at each step does not guarantee the best overall solution. Greedy choices are appropriate only when they provably lead to a global optimum, which is not the case for the optimal binary search tree problem: the best root for a subtree depends on the costs of both resulting subtrees. Graph cycle detection is handled by simple traversal, and minimum spanning trees are a classic case where greedy algorithms (Kruskal's and Prim's) are provably optimal, so neither calls for dynamic programming.
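As a sketch, the optimal BST cost recurrence can be memoized in a few lines of Python (the frequencies are illustrative; a greedy rule such as always rooting at the most frequent key is not optimal in general, which is why every candidate root is tried):

```python
from functools import lru_cache

def optimal_bst_cost(freq):
    """Minimum expected search cost of a BST over sorted keys with
    access frequencies freq[0..n-1]."""
    prefix = [0]
    for f in freq:
        prefix.append(prefix[-1] + f)

    @lru_cache(maxsize=None)
    def cost(i, j):
        if i > j:
            return 0
        total = prefix[j + 1] - prefix[i]  # each key in [i, j] sinks one level
        # Try every key as root; optimal subtrees combine into the optimum.
        return total + min(cost(i, r - 1) + cost(r + 1, j)
                           for r in range(i, j + 1))

    return cost(0, len(freq) - 1)

# Example frequencies (from a common textbook-style instance):
# optimal_bst_cost([34, 8, 50]) evaluates all roots rather than
# committing to a single greedy choice.
```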