Big-O in Real Tasks: Time and Space Efficiency Quiz

Test your understanding of Big-O complexity in practical programming tasks, focusing on both execution time and memory usage. This quiz helps solidify core concepts by applying Big-O to real-world scenarios, making it ideal for coding and algorithm enthusiasts.

  1. Sorting Algorithm Time Complexity

    If you use a simple bubble sort algorithm to sort an array of 1,000 numbers, what is its typical time complexity in Big-O notation?

    1. O(n log n)
    2. O(log n)
    3. O(n^2)
    4. O(n)

    Explanation: Bubble sort repeatedly sweeps through the array, swapping adjacent out-of-order elements, which gives it an average- and worst-case time complexity of O(n^2). O(n) applies only to linear algorithms that touch each element a constant number of times. O(n log n) is typical of more efficient sorts such as merge sort, and O(log n) is far too optimistic. The quadratic cost comes from comparing roughly every pair of elements: for 1,000 numbers, that is on the order of a million comparisons.
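
    To make the quadratic pattern concrete, here is a minimal illustrative sketch in Python (the helper name bubble_sort is ours, not part of the quiz):

    ```python
    def bubble_sort(arr):
        """Sort arr in place: worst- and average-case O(n^2) comparisons."""
        n = len(arr)
        for i in range(n):
            # After each pass, the largest remaining element settles at the end.
            for j in range(n - 1 - i):
                if arr[j] > arr[j + 1]:
                    arr[j], arr[j + 1] = arr[j + 1], arr[j]
        return arr

    print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
    ```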

  2. Space Complexity of Copying an Array

    What is the space complexity when you create a new array as a copy of an existing array of size n?

    1. O(1)
    2. O(log n)
    3. O(n^2)
    4. O(n)

    Explanation: Copying an array of n elements requires allocating new space for all n items, so the space complexity is O(n). O(1) is incorrect because memory is needed for every element, not just a constant amount. O(log n) underestimates the actual usage, while O(n^2) overestimates it. Only O(n) matches the memory cost of duplicating each element.
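
    A quick illustrative sketch in Python: slicing allocates a new list of the same length, so the copy costs O(n) extra memory.

    ```python
    original = list(range(1_000))

    # A slice copy allocates fresh storage for all n elements: O(n) space.
    copy = original[:]

    copy[0] = -1
    print(original[0], copy[0])  # 0 -1 -- the copy is independent storage
    ```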

  3. Searching in a Sorted Array

    When performing binary search on a sorted array of size n, what is the time complexity for finding an element?

    1. O(1)
    2. O(log n)
    3. O(n^2)
    4. O(n)

    Explanation: Binary search halves the remaining range at each step, so it takes O(log n) time to find an element or determine that it is absent. O(n) describes linear search, where every element may be checked. O(n^2) is far too slow and usually signals nested loops. O(1) would mean constant time regardless of size, which binary search cannot achieve, since it must narrow the range step by step.
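
    A minimal illustrative binary search in Python; each iteration halves the remaining range, which is where the O(log n) bound comes from:

    ```python
    def binary_search(sorted_arr, target):
        """Return the index of target in sorted_arr, or -1 if absent."""
        lo, hi = 0, len(sorted_arr) - 1
        while lo <= hi:
            mid = (lo + hi) // 2          # midpoint of the current range
            if sorted_arr[mid] == target:
                return mid
            if sorted_arr[mid] < target:
                lo = mid + 1              # discard the lower half
            else:
                hi = mid - 1              # discard the upper half
        return -1

    print(binary_search([2, 3, 5, 8, 13, 21], 8))  # 3
    ```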

  4. Hash Table Lookup Efficiency

    What is the average-case time complexity for searching a key in a well-designed hash table?

    1. O(n^2)
    2. O(log n)
    3. O(n)
    4. O(1)

    Explanation: A hash table with a good hash function finds a key in O(1) average time because it computes the storage index directly from the key. O(n) and O(log n) describe linear and binary search, respectively. O(n^2) does not arise in hash table lookups; even a degenerate table in which every key collides degrades only to O(n). While worst-case search can be slower due to collisions, the average case remains constant time.
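
    In Python, the built-in dict is a hash table, so a plain lookup illustrates the average-case O(1) behavior:

    ```python
    # Python's dict is a hash table: average-case O(1) lookup and membership.
    ages = {"ada": 36, "alan": 41, "grace": 85}

    print("alan" in ages)   # True -- hash the key, jump to its slot, compare
    print(ages.get("ada"))  # 36; no scan over the other keys is needed
    ```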

  5. Appending to a Dynamic Array

    If you append a single item to a dynamic array (such as a list) with enough extra capacity, what is the time complexity?

    1. O(log n)
    2. O(1)
    3. O(n log n)
    4. O(n)

    Explanation: Appending an item to a dynamic array with spare capacity takes constant time, O(1), since it simply writes to the end. O(n) occurs only when the array must be resized, which the question rules out; even counting occasional resizes, appends are amortized O(1). O(n log n) and O(log n) do not apply to this simple operation. Constant-time appends are a key feature of dynamic arrays.
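
    Python's list.append illustrates this; a minimal sketch (the amortized-O(1) behavior, resizes included, comes from CPython's over-allocating growth strategy):

    ```python
    items = []

    # Each append writes into spare capacity in O(1); occasional resizes
    # keep the operation amortized O(1) across many appends.
    for i in range(5):
        items.append(i)

    print(items)  # [0, 1, 2, 3, 4]
    ```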

  6. Space Complexity in Recursion

    For a recursive function that calls itself n times (like computing factorial), what is the space complexity due to the call stack?

    1. O(log n)
    2. O(n^2)
    3. O(n)
    4. O(1)

    Explanation: Each recursive call adds a new frame to the call stack, and with n calls pending at the deepest point the space usage is O(n). O(1) ignores the growing stack. O(n^2) would require each of the n frames to allocate O(n) additional memory, which is not the case here. O(log n) applies to divide-and-conquer recursion whose depth is logarithmic, not to recursion whose depth grows linearly.
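
    A minimal illustrative sketch: every pending call holds a live stack frame until the base case is reached.

    ```python
    def factorial(n):
        """Recursive factorial: n stacked frames at the deepest point, O(n) space."""
        if n <= 1:
            return 1
        return n * factorial(n - 1)  # this frame waits on the one below it

    print(factorial(10))  # 3628800; ten frames were live at the deepest point
    ```

    CPython's default recursion limit (roughly 1,000 frames) is a practical consequence of exactly this linear stack growth.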

  7. Merging Two Sorted Lists

    If you merge two sorted lists of lengths n and m into one sorted list, what is the time complexity?

    1. O(n^2)
    2. O(n + m)
    3. O(m^2)
    4. O(log n)

    Explanation: Merging two sorted lists examines each element from both lists exactly once, giving O(n + m) time. O(n^2) and O(m^2) would imply comparing elements in nested loops, which merging does not do. O(log n) is too fast for this process, since every element must be considered at least once.
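
    A minimal illustrative merge in Python; each comparison consumes one element from one list, so the loop body runs at most n + m times:

    ```python
    def merge(a, b):
        """Merge two sorted lists into one sorted list in O(n + m) time."""
        result = []
        i = j = 0
        while i < len(a) and j < len(b):
            if a[i] <= b[j]:
                result.append(a[i])
                i += 1
            else:
                result.append(b[j])
                j += 1
        result.extend(a[i:])  # at most one of these two tails is non-empty
        result.extend(b[j:])
        return result

    print(merge([1, 4, 9], [2, 3, 10]))  # [1, 2, 3, 4, 9, 10]
    ```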

  8. Memory Use in Constant-Time Algorithms

    Which algorithm has a space complexity of O(1), regardless of the size of its input array?

    1. Creating a reversed copy of an array
    2. Building a frequency table of elements
    3. Sorting the array using insertion sort
    4. Finding the largest number in an array

    Explanation: Finding the largest number needs only one or two tracking variables, so it uses constant extra space, O(1), and leaves the input untouched. Creating a reversed copy or building a frequency table requires space proportional to the input size. In-place insertion sort also uses only O(1) auxiliary space, but it rearranges the input array, making the read-only maximum scan the clearest O(1) answer here.
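
    For illustration, the maximum scan in Python; one tracking variable is all the extra memory it ever needs (the sketch assumes a non-empty list):

    ```python
    def find_largest(arr):
        """Single pass with one variable: O(n) time, O(1) extra space."""
        largest = arr[0]       # assumes arr is non-empty
        for value in arr:      # iterating reuses the input; no copy is made
            if value > largest:
                largest = value
        return largest

    print(find_largest([7, 2, 9, 4]))  # 9
    ```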

  9. Quadratic Time in Real Tasks

    When checking for duplicate pairs in an unsorted list of n elements using nested loops, what is the expected time complexity?

    1. O(n log n)
    2. O(log n)
    3. O(n^2)
    4. O(n)

    Explanation: Nested loops over n elements each yield O(n^2) operations, since every pair is compared. O(n) is achievable only with a different approach, such as tracking seen values in a hash set. O(log n) and O(n log n) do not fit the nested-iteration pattern. The quadratic time emerges from comparing every possible pair, as the sketch below shows.
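
    Both approaches sketched in Python for contrast (the function names are ours, chosen for illustration):

    ```python
    def has_duplicates_quadratic(items):
        """Compare every pair: O(n^2) time, O(1) extra space."""
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicates_linear(items):
        """Track seen values in a hash set: O(n) time, O(n) extra space."""
        seen = set()
        for item in items:
            if item in seen:
                return True
            seen.add(item)
        return False

    print(has_duplicates_quadratic([3, 1, 4, 1]))  # True
    print(has_duplicates_linear([3, 1, 4, 1]))     # True
    ```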

  10. Impact of Input Size on Linear Algorithms

    If you double the size of the input in an O(n) algorithm, like summing array elements, what generally happens to the running time?

    1. It quadruples
    2. It remains the same
    3. It halves
    4. It doubles

    Explanation: For an O(n) algorithm, running time grows in direct proportion to input size, so doubling the input roughly doubles the execution time. If the time quadrupled instead, that would point to O(n^2) behavior. It neither stays the same nor halves, since those do not match the definition of linear complexity. Linear time is, by definition, proportional to the size of the input.
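
    A rough way to see this yourself in Python (timings are noisy, so treat the printed ratio as approximate):

    ```python
    import time

    def timed_sum(n):
        """Time a single O(n) pass that sums n integers."""
        data = list(range(n))
        start = time.perf_counter()
        sum(data)  # one pass over n elements: O(n)
        return time.perf_counter() - start

    t1 = timed_sum(1_000_000)
    t2 = timed_sum(2_000_000)
    print(f"ratio: {t2 / t1:.2f}")  # typically close to 2, matching O(n)
    ```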