Explore key concepts of time and space complexity through real-world scenarios and problem-solving examples. This quiz challenges your understanding of Big-O notation, algorithm efficiency, and common misconceptions, helping you master essential analysis skills in computer science.
Given an algorithm that scans through a list of n items once to find the largest number, what is the time complexity in Big-O notation?
Explanation: The correct answer is O(n) because the algorithm examines each item exactly once, leading to a linear growth in execution time relative to input size. O(n^2) describes quadratic growth, usually from nested loops, which is not the case here. O(1) represents constant time, which would not depend on input size at all. O(log n) refers to logarithmic time, typical of algorithms that repeatedly halve the input, such as binary search, which does not apply to this sequential scan.
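For illustration, here is a minimal Python sketch of the single-pass scan described above; the function name find_largest and the sample list are placeholders chosen for this example, not part of the quiz:

```python
def find_largest(items):
    """Scan the list once, tracking the largest value seen so far: O(n) time."""
    largest = items[0]
    for value in items[1:]:  # each remaining item is examined exactly once
        if value > largest:
            largest = value
    return largest

print(find_largest([3, 7, 2, 9, 4]))  # 9
```

Because the loop body does a constant amount of work per element, doubling the list roughly doubles the running time, which is exactly what O(n) expresses.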
What is the space complexity of an algorithm that reverses an array in-place without using any extra array?
Explanation: O(1) is correct because in-place reversal only needs a fixed number of extra variables, regardless of input size. O(n) would apply if a new array of the same size were allocated to hold the reversed copy. O(n^2) and O(log n) are not typically associated with array reversal; O(n^2) is frequently seen in algorithms with nested iterations, and O(log n) in recursive processes with logarithmic depth.
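A small sketch of the two-pointer, in-place reversal; reverse_in_place is an illustrative name, and the only extra storage is the pair of index variables:

```python
def reverse_in_place(arr):
    """Swap elements from both ends toward the middle: O(n) time, O(1) extra space."""
    left, right = 0, len(arr) - 1
    while left < right:
        arr[left], arr[right] = arr[right], arr[left]  # constant-space swap
        left += 1
        right -= 1
    return arr

print(reverse_in_place([1, 2, 3, 4, 5]))  # [5, 4, 3, 2, 1]
```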
If an algorithm compares every pair of elements in an array of n integers, what is its worst-case time complexity?
Explanation: O(n^2) is the correct answer since comparing every pair requires two nested loops over the n elements, producing about n * n / 2 comparisons, which grows quadratically. O(n) signifies only a single pass, which isn't sufficient here. O(2^n) denotes exponential time, typically found in hard combinatorial problems, and O(n log n) arises in more efficient sorting algorithms like mergesort, not in simple pairwise comparison.
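As a sketch of the nested-loop pattern that produces quadratic time, consider counting equal pairs; count_equal_pairs is an invented example task chosen only to show the structure:

```python
def count_equal_pairs(nums):
    """Compare every pair (i, j) with i < j: two nested loops give O(n^2) comparisons."""
    count = 0
    n = len(nums)
    for i in range(n):
        for j in range(i + 1, n):  # inner loop makes the total work roughly n * n / 2
            if nums[i] == nums[j]:
                count += 1
    return count

print(count_equal_pairs([1, 2, 1, 3, 2]))  # 2
```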
Which of the following operations runs in constant time with respect to the input size n: accessing the k-th element in an array?
Explanation: The correct answer is O(1), as direct access to an array element takes the same time regardless of the array's length. O(n) would suggest the time increases linearly, which happens with searching operations, not direct indexing. O(log n) is typical of binary search, and O(n^2) arises from nested iteration; neither applies to constant-time array access.
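A brief illustration of constant-time indexing in Python; the list and index here are arbitrary sample values:

```python
data = [10, 20, 30, 40, 50]

# Indexing computes the element's location directly from k, so the cost
# is the same whether the list holds 5 items or 5 million: O(1).
k = 2
print(data[k])  # 30
```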
In average cases, what is the time complexity of looking up a key in a hash table with n entries, assuming a good hash function?
Explanation: With a good hash function and proper collision handling, hash table lookups are O(1) on average, since access does not depend on n. O(n) might occur in the worst-case if collisions are excessive, but this is rare with a sound design. O(log n) can be seen in tree-based data structures, not typical hash tables. O(n^2) is unrelated, as it applies to algorithms with two levels of iteration over n elements.
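A short illustration using Python's built-in dict, which is implemented as a hash table; the phone_book data is invented for the example:

```python
# With a well-distributed hash function, a key lookup touches only the
# bucket the key hashes to, giving O(1) average-case time no matter how
# many entries the table holds.
phone_book = {"alice": 1234, "bob": 5678, "carol": 9012}

print(phone_book["bob"])      # 5678, average-case O(1)
print("dave" in phone_book)   # False, also O(1) on average
```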