Test your understanding of Big-O complexity in practical programming tasks, focusing on both execution time and memory usage. This quiz helps solidify core concepts by applying Big-O to real-world scenarios, making it ideal for coding and algorithm enthusiasts.
If you use a simple bubble sort algorithm to sort an array of 1,000 numbers, what is its typical time complexity in Big-O notation?
Explanation: Bubble sort repeatedly steps through the array and swaps adjacent elements, giving a worst-case and average time complexity of O(n^2). O(n) is incorrect because it would require touching each element only a constant number of times, which bubble sort does not do. O(n log n) is typical for more efficient sorts like merge sort, and O(log n) is far too optimistic. The quadratic cost comes from comparing adjacent pairs over repeated passes through the array.
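For reference, a minimal bubble sort sketch in Python (the function and variable names are illustrative, not part of the quiz) shows where the nested loops, and hence the quadratic cost, come from:

```python
def bubble_sort(arr):
    # Outer pass runs up to n times; inner pass compares adjacent pairs.
    # Roughly n * n comparisons in total, hence O(n^2) time.
    n = len(arr)
    for i in range(n):
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]  # swap out-of-order neighbors
    return arr
```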
What is the space complexity when you create a new array as a copy of an existing array of size n?
Explanation: Copying an array of n elements requires allocating new space for all n items, so the space complexity is O(n). O(1) is incorrect because you need memory for each element, not just a constant amount. O(log n) underestimates and O(n^2) overestimates the actual space usage. Only O(n) correctly matches the memory cost of duplicating every element.
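A quick Python illustration (the variable names are made up for this example) of why the copy costs O(n) space:

```python
source = list(range(1000))  # an existing array of n = 1000 elements
copy = source[:]            # slicing allocates a brand-new list of the same length
# The new list needs one slot per element, so extra memory grows linearly: O(n).
```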
When performing binary search on a sorted array of size n, what is the time complexity for finding an element?
Explanation: Binary search divides the array in half each step, leading to O(log n) time to find an element or determine it's missing. O(n) is for linear search where each element is checked. O(n^2) is much too slow and usually relates to nested loops. O(1) would mean constant time regardless of size, which isn't possible with binary search.
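A small binary search sketch in Python (the helper name binary_search is assumed, not taken from the quiz) makes the halving visible:

```python
def binary_search(sorted_arr, target):
    # Each iteration discards half of the remaining range,
    # so at most about log2(n) iterations run: O(log n) time.
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == target:
            return mid
        elif sorted_arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target is not present
```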
What is the average-case time complexity for searching a key in a well-designed hash table?
Explanation: A hash table with a good hash function can find a key in O(1) average time because it computes the index directly from the key. O(n) and O(log n) describe linear and binary searches. O(n^2) is not relevant here; even in a badly built table where every key collides, a lookup only degrades to O(n). While worst-case search can be slower due to collisions, the average case remains constant time.
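In Python, a dict is a hash table; this toy example (made-up data) shows the direct, average-case O(1) lookup:

```python
ages = {"alice": 30, "bob": 25, "carol": 41}  # dict = hash table under the hood
# The key is hashed straight to its slot; no scan of the other entries is needed.
print(ages["bob"])      # 25, average-case O(1)
print("dave" in ages)   # False, also O(1) on average
```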
If you append a single item to a dynamic array (such as a list) with enough extra capacity, what is the time complexity?
Explanation: Appending an item to a dynamic array with available capacity only takes constant time, O(1), as it simply adds to the end. O(n) would only occur if resizing is needed, but that's not the case specified. O(n log n) and O(log n) do not apply to this simple operation. Constant time addition is a key feature of dynamic arrays.
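A short Python illustration (list contents are arbitrary) of the constant-time append:

```python
items = [1, 2, 3]
# With spare capacity already reserved, append writes to the next free slot
# and bumps the length counter: O(1). Only an occasional resize costs O(n).
items.append(4)
```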
For a recursive function that calls itself n times (like computing factorial), what is the space complexity due to the call stack?
Explanation: Each recursive call adds a new frame to the call stack, resulting in O(n) space usage. O(1) does not account for the growing stack. O(n^2) would apply only if each call also allocated O(n) of extra storage, which is not the case here. O(log n) applies to divide-and-conquer recursion whose depth is logarithmic, not when the depth grows linearly with n.
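A small recursive factorial in Python (a standard textbook form, not quoted from the quiz) shows how the stack depth grows with n:

```python
def factorial(n):
    # Each call waits on factorial(n - 1), so n frames sit on the call
    # stack before the base case is reached: O(n) space.
    if n <= 1:
        return 1
    return n * factorial(n - 1)
```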
If you merge two sorted lists of lengths n and m into one sorted list, what is the time complexity?
Explanation: Merging two sorted lists involves looking at each element from both lists exactly once, leading to O(n + m) time. O(n^2) and O(m^2) are not accurate unless nested operations are performed, which is not the case. O(log n) is too fast for this process, as all elements must be considered.
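A merge sketch in Python (the function name merge_sorted is assumed) shows why every element of both lists is touched exactly once:

```python
def merge_sorted(a, b):
    # Every comparison consumes one element from a or b, so the loop body
    # runs at most len(a) + len(b) times: O(n + m) time.
    result, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            result.append(a[i])
            i += 1
        else:
            result.append(b[j])
            j += 1
    result.extend(a[i:])  # one list is exhausted; copy the remainder of the other
    result.extend(b[j:])
    return result
```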
Which algorithm has a space complexity of O(1), regardless of the size of its input array?
Explanation: Finding the largest number needs just one or a few variables, so it uses constant space, O(1). Creating a reversed copy or building a frequency table requires space proportional to the input size. Insertion sort performed in place also uses only O(1) extra space, but it rearranges the entire input array; finding the maximum is the cleanest example of a constant-space task among the options.
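A constant-space example in Python (the name find_largest is illustrative): no matter how long the input is, only one extra variable is kept:

```python
def find_largest(arr):
    # 'largest' is the only extra storage, regardless of len(arr): O(1) space.
    largest = arr[0]  # assumes a non-empty list
    for value in arr:
        if value > largest:
            largest = value
    return largest
```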
When checking for duplicate pairs in an unsorted list of n elements using nested loops, what is the expected time complexity?
Explanation: Nested loops over n elements each result in O(n^2) operations, as every pair is compared. O(n) is much faster and would apply if hashing or sets are used. O(log n) and O(n log n) do not fit the nested iteration pattern. The quadratic time emerges from comparing every possible pair.
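A nested-loop duplicate check in Python (the name has_duplicate is illustrative) makes the pairwise comparisons explicit:

```python
def has_duplicate(arr):
    # Every pair (i, j) with i < j is compared: about n * (n - 1) / 2
    # comparisons, i.e. O(n^2). A set-based check would reduce this to O(n).
    n = len(arr)
    for i in range(n):
        for j in range(i + 1, n):
            if arr[i] == arr[j]:
                return True
    return False
```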
If you double the size of the input in an O(n) algorithm, like summing array elements, what generally happens to the running time?
Explanation: For O(n) algorithms, the running time grows in direct proportion to input size, so doubling the input roughly doubles the execution time. If the time quadrupled instead, that would indicate O(n^2) behavior. The time does not stay the same or halve, as neither matches the definition of linear complexity. Linear time is proportional to the size of the input.
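A summing sketch in Python (the sizes are made up for illustration) shows the proportional growth:

```python
def sum_elements(arr):
    # One addition per element: O(n) time.
    total = 0
    for value in arr:
        total += value
    return total

# Doubling the input size roughly doubles the work:
# about 1,000,000 additions for the first call, about 2,000,000 for the second.
sum_elements(list(range(1_000_000)))
sum_elements(list(range(2_000_000)))
```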