Test your knowledge of applying Big-O notation to arrays and strings with hash maps, focusing on frequency counting, deduplication, the two-sum problem, and time and space complexity analysis. This quiz helps you master performance trade-offs and optimization strategies for common algorithmic tasks.
What is the typical time complexity of counting the frequency of each character in a string of length n using a hash map?
Explanation: Using a hash map to count character frequencies requires a single pass through the string, resulting in an O(n) time complexity. O(log n) and O(1) are incorrect because they underestimate the need to check every character. O(n^2) would only apply if there were a nested loop, which is unnecessary in this scenario.
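To make the single-pass idea concrete, here is a minimal Python sketch (the function name char_frequencies is purely illustrative):

```python
def char_frequencies(s: str) -> dict[str, int]:
    """Count each character in s with one pass over the string: O(n) time."""
    counts: dict[str, int] = {}
    for ch in s:                          # every character is visited exactly once
        counts[ch] = counts.get(ch, 0) + 1
    return counts

print(char_frequencies("banana"))         # {'b': 1, 'a': 3, 'n': 2}
```

The standard library's collections.Counter(s) does the same thing in one call with the same O(n) behaviour.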
After deduplicating an array of n elements using a hash set, what is the space complexity in terms of n?
Explanation: Storing each unique element from the array in a hash set requires up to O(n) additional space. O(1) is unrealistic since the set could grow linearly with the input. O(n^2) is excessive and not needed, while O(log n) does not account for potentially n distinct elements.
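A minimal sketch of hash-set deduplication, using a hypothetical helper named dedupe; the seen set is where the extra O(n) space comes from:

```python
def dedupe(items: list[int]) -> list[int]:
    """Drop duplicates while preserving order; 'seen' may grow to n entries."""
    seen: set[int] = set()
    result: list[int] = []
    for x in items:
        if x not in seen:                 # average O(1) membership test
            seen.add(x)
            result.append(x)
    return result

print(dedupe([3, 1, 3, 2, 1]))            # [3, 1, 2]
```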
When solving the two-sum problem for an array using a hash map, what is the most efficient achievable time complexity?
Explanation: A hash map enables you to check for complements in one pass, yielding O(n) time. O(n^2) arises from a brute-force approach, which is slower. O(log n) and O(n log n) are incorrect as they reflect binary search and sorting complexities, not hash map lookups.
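A one-pass two-sum sketch in Python; the name two_sum and the index-pair return value are illustrative choices, not a fixed API:

```python
def two_sum(nums: list[int], target: int) -> tuple[int, int] | None:
    """Return indices of two values summing to target, or None: one pass, O(n) time."""
    seen: dict[int, int] = {}             # value -> index where it was seen
    for i, x in enumerate(nums):
        if target - x in seen:            # has the complement appeared already?
            return seen[target - x], i
        seen[x] = i
    return None

print(two_sum([2, 7, 11, 15], 9))         # (0, 1)
```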
What is the worst-case space complexity when counting element frequencies in an array of n integers using a hash map?
Explanation: The hash map may need to store up to n distinct keys if all elements are unique, giving O(n) space. O(1) is only true if the set of possible keys is constant-sized, which is not always the case. O(n^2) and O(log n) do not correspond to the storage pattern of this technique.
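A quick sketch of why the worst case is O(n) space: when every element is distinct, the map holds one entry per element, whereas repeated values collapse into very few keys.

```python
from collections import Counter

all_unique = list(range(1_000))           # n distinct integers
repeated = [7] * 1_000                    # n copies of a single value

print(len(Counter(all_unique)))           # 1000 keys -> space grows with n
print(len(Counter(repeated)))             # 1 key    -> near-constant space
```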
What is the primary trade-off when deduplicating an array in-place (without extra space) compared to using a hash set?
Explanation: Removing duplicates in-place often requires nested loops or sorting, increasing time complexity but saving space. Reducing both time and space is generally not possible; there is usually a trade-off. Using extra space would contradict the definition of an in-place algorithm, and in-place sorting does not always outperform other methods.
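The sketch below shows the in-place side of that trade-off: no auxiliary set, but the linear scan that replaces it pushes the running time to O(n^2). The helper name dedupe_in_place is illustrative.

```python
def dedupe_in_place(items: list[int]) -> None:
    """Remove duplicates using O(1) extra space; the nested scan makes it O(n^2) time."""
    write = 0
    for read in range(len(items)):
        duplicate = False
        for k in range(write):            # linear scan of the kept prefix replaces the hash set
            if items[k] == items[read]:
                duplicate = True
                break
        if not duplicate:
            items[write] = items[read]
            write += 1
    del items[write:]                     # trim the leftover tail

values = [3, 1, 3, 2, 1]
dedupe_in_place(values)
print(values)                             # [3, 1, 2]
```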
How do frequent hash collisions affect the average-case time complexity of hash map operations in algorithms like two-sum?
Explanation: If many elements hash to the same location, operations degrade toward O(n) per lookup. Collisions do affect complexity and can slow things down. Constant time is only typical when collisions are rare or well-handled. Performance does not improve with more collisions.
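One way to see the degradation is to force collisions deliberately. The sketch below, with an artificial BadKey class whose hash is constant, is illustrative timing code rather than a rigorous benchmark:

```python
import time

class BadKey:
    """All instances share one hash value, so every insert collides."""
    def __init__(self, value: int) -> None:
        self.value = value
    def __hash__(self) -> int:
        return 42                          # constant hash: worst-case collisions
    def __eq__(self, other: object) -> bool:
        return isinstance(other, BadKey) and self.value == other.value

def build_dict(n: int, make_key) -> float:
    """Time how long it takes to insert n keys produced by make_key."""
    start = time.perf_counter()
    _ = {make_key(i): i for i in range(n)}
    return time.perf_counter() - start

print(build_dict(2_000, BadKey))           # slow: each insert compares against prior colliding keys
print(build_dict(2_000, int))              # fast: well-spread hashes keep inserts near O(1)
```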
When deduplicating a large unsorted array, why might a hash map approach be preferred over sorting and removing duplicates?
Explanation: Hash maps can deduplicate in O(n) average time, while sorting requires O(n log n). Sorting may save space in some languages but not always, and the hash map approach does not rely on nested loops. Sorting is not faster than a well-implemented hash map for this problem.
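For contrast with the hash-based version shown earlier, here is a sketch of the sort-then-scan alternative; it runs in O(n log n) and, unlike the hash approach, does not preserve the original order:

```python
def dedupe_by_sorting(items: list[int]) -> list[int]:
    """Sort first (O(n log n)), then keep a value only when it differs from its neighbour."""
    result: list[int] = []
    for x in sorted(items):               # sorting dominates the running time
        if not result or result[-1] != x:
            result.append(x)
    return result

print(dedupe_by_sorting([3, 1, 3, 2, 1])) # [1, 2, 3] -- original order is lost
```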
For a string containing only lowercase English letters, what is the best possible space complexity of frequency counting?
Explanation: The number of possible keys (26 lowercase letters) is constant, so the space needed is O(1). O(n) applies only when the number of distinct keys can grow with the input. O(log n) and O(n^2) do not reflect the true space for such a limited alphabet.
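Because the alphabet is fixed at 26 letters, the counts fit in a constant-size array. A sketch, assuming the input really does contain only lowercase a-z:

```python
def letter_frequencies(s: str) -> list[int]:
    """Count lowercase letters into a fixed 26-slot array: O(1) space for any length of s."""
    counts = [0] * 26                      # one slot per letter 'a'..'z'
    for ch in s:
        counts[ord(ch) - ord('a')] += 1
    return counts

freq = letter_frequencies("banana")
print(freq[ord('a') - ord('a')])           # 3 occurrences of 'a'
```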
What is an efficient approach to the two-sum problem on a sorted array, and what is its typical time complexity?
Explanation: With a sorted array, two pointers can find a valid pair in O(n) time. A hash map would also be O(n) but is not needed and uses O(n) extra space. Brute force requires O(n^2) steps and repeated binary search takes O(n log n), so both are less efficient than the two-pointer approach.
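A two-pointer sketch for the sorted case; the name two_sum_sorted is illustrative:

```python
def two_sum_sorted(nums: list[int], target: int) -> tuple[int, int] | None:
    """Scan a sorted array from both ends: O(n) time, O(1) extra space."""
    lo, hi = 0, len(nums) - 1
    while lo < hi:
        s = nums[lo] + nums[hi]
        if s == target:
            return lo, hi
        if s < target:
            lo += 1                        # sum too small: move the left pointer right
        else:
            hi -= 1                        # sum too large: move the right pointer left
    return None

print(two_sum_sorted([1, 3, 4, 6, 9], 10)) # (0, 4), since 1 + 9 == 10
```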
In which scenario does using a hash map for frequency counting come with the greatest space cost?
Explanation: If each element is unique, the hash map may require O(n) space, making it more costly. When all elements are the same or there are only two unique values, the space remains small. For tiny inputs, space is not a concern regardless of uniqueness.