Explore the essentials of algorithm efficiency with questions on Big O notation, time complexity, and pseudocode analysis. This quiz helps reinforce key concepts for comparing and optimizing algorithms using efficiency analysis techniques.
Given the following pseudocode: for i = 1 to n do print(i), which best describes the time complexity of this algorithm?
Explanation: The correct answer is O(n) because the loop body executes once for each value from 1 to n, so the running time grows linearly with n. O(log n) would be correct only if the loop updated i multiplicatively (e.g., i = i * 2) rather than incrementally. O(1) describes constant time, which is inaccurate for a loop that iterates n times. O(n^2) would require nested loops rather than a single loop.
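As a sketch, the pseudocode above can be written in Python with an operation counter standing in for print, which makes the linear growth directly measurable (the function name and counter are illustrative, not part of the original pseudocode):

```python
def print_one_to_n(n):
    """Single loop from 1 to n: the body runs exactly n times, so time is O(n)."""
    ops = 0
    for i in range(1, n + 1):
        ops += 1  # stands in for print(i)
    return ops

# The operation count scales linearly with the input size.
assert print_one_to_n(10) == 10
assert print_one_to_n(1000) == 1000
```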
If an algorithm’s running time doubles with every additional input element, what is its Big O time complexity?
Explanation: The correct answer is O(2^n), which indicates exponential growth: the running time doubles with each added input element. O(n^2) describes quadratic growth, which increases far more slowly than exponential growth. O(n) is linear, so adding one element adds only a constant amount of work rather than doubling it. O(log n) represents logarithmic growth and implies much faster performance.
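A hypothetical example of this doubling behavior is enumerating all subsets of a collection: each additional element doubles the number of subsets, so the work is O(2^n). A minimal Python sketch (the function is illustrative):

```python
def count_subsets(items):
    """Counts every subset by branching on the first element.

    Each element either belongs to a subset or not, producing two recursive
    calls per element, so the total work doubles with each added element: O(2^n).
    """
    if not items:
        return 1  # only the empty subset remains
    rest = items[1:]
    # Branch 1: subsets excluding items[0]; branch 2: subsets including it.
    return count_subsets(rest) + count_subsets(rest)

assert count_subsets([1, 2, 3]) == 8      # 2^3 subsets
assert count_subsets(list(range(10))) == 1024  # 2^10 subsets
```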
Which option best describes the additional memory usage of a function that only uses a few fixed-size variables, regardless of input size?
Explanation: The correct answer is O(1), representing constant space for a fixed number of variables. O(n) or O(n^2) would imply storage that scales with input size, which does not apply here. O(log n) is commonly linked to recursive algorithms with stack usage, not constant memory usage.
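A sketch of such a function in Python: no matter how long the input is, it allocates only two fixed-size local variables, so its extra space is O(1) (the function name is illustrative):

```python
def running_sum(values):
    """Sums a sequence using only two fixed-size variables (total and v).

    Memory use does not grow with len(values), so extra space is O(1),
    even though the running time is O(n).
    """
    total = 0
    for v in values:
        total += v
    return total

assert running_sum([1, 2, 3, 4]) == 10
```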
In pseudocode, searching for a value in an unsorted array by checking each element one by one has what worst-case time complexity?
Explanation: O(n) is correct, as in the worst case every element must be checked until the target is found or the end is reached. O(1) would only apply to instant lookup, which is not possible without sorting or indexing. O(n log n) is typical for certain sorts, not linear search. O(n^2) overestimates the complexity for a single linear search.
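The element-by-element check described above is a linear search; a minimal Python version might look like this (returning -1 for "not found" is a common convention, not something the question specifies):

```python
def linear_search(arr, target):
    """Scans arr left to right until target is found.

    Worst case (target absent or in the last position) examines all n
    elements, giving O(n) time.
    """
    for index, value in enumerate(arr):
        if value == target:
            return index
    return -1  # target not present

assert linear_search([4, 2, 7, 1], 7) == 2   # found mid-array
assert linear_search([4, 2, 7, 1], 9) == -1  # worst case: full scan
```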
Consider this pseudocode: for i = 1 to n do for j = 1 to n do print(i+j). What is its overall time complexity?
Explanation: O(n^2) is correct since the two nested loops each run n times, producing n multiplied by n total operations. O(n) applies to a single loop, not two nested ones. O(log n) does not fit nested linear loops. O(n!) applies to algorithms with factorial growth, such as generating all permutations.
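The nested-loop pseudocode can be sketched in Python with a counter in place of print to show the quadratic operation count (the function name and counter are illustrative):

```python
def print_pairs(n):
    """Two nested loops of n iterations each: n * n = O(n^2) operations."""
    ops = 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            ops += 1  # stands in for print(i + j)
    return ops

assert print_pairs(20) == 400  # n = 20 gives 20 * 20 operations
```

Doubling n quadruples the operation count, the hallmark of quadratic growth.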