Assess your understanding of essential sorting algorithms, asymptotic complexities, and fundamental principles of data structures. Sharpen your problem-solving skills with easy, foundational questions built around programming fundamentals.
Which of the following sorting algorithms is stable under standard implementation?
Explanation: Merge Sort is stable since it preserves the relative order of equal elements. Heap Sort, QuickSort, and Selection Sort are generally not stable in their conventional implementations because they may swap elements in a manner that changes the order of equal elements.
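As a hedged illustration of stability, here is a minimal textbook top-down Merge Sort on (key, label) pairs that compares only the key. Equal keys keep their original left-to-right order because the merge step takes from the left half on ties.

```python
# A minimal sketch of stable merge sort; the key function and the
# (key, label) pairs below are illustrative, not from the quiz itself.
def merge_sort(items, key=lambda x: x):
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid], key)
    right = merge_sort(items[mid:], key)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if key(left[i]) <= key(right[j]):  # <= on ties preserves stability
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])   # leftover halves are already sorted
    merged.extend(right[j:])
    return merged

pairs = [(2, "a"), (1, "b"), (2, "c"), (1, "d")]
print(merge_sort(pairs, key=lambda p: p[0]))
# [(1, 'b'), (1, 'd'), (2, 'a'), (2, 'c')] -- "b" stays before "d"
```

Note how the temporary `merged` list (and the sliced halves) are exactly the O(n) auxiliary storage discussed in the Merge Sort space question below.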
What is the average case time complexity of the QuickSort algorithm?
Explanation: QuickSort averages O(n log n) time, making it efficient in practice. O(n^2) is its worst case (e.g., already-sorted input with a naive first-element pivot), O(1) is impossible for sorting, and O(log n) is less than the time needed just to read n items.
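A minimal sketch of QuickSort with a random pivot (one common way to make the O(n log n) average hold regardless of input order; the list-comprehension partition trades extra memory for clarity):

```python
import random

# Illustrative sketch, not a production sort: random pivot gives
# expected O(n log n) comparisons; O(n^2) needs persistent bad pivots.
def quicksort(arr):
    if len(arr) <= 1:
        return list(arr)
    pivot = random.choice(arr)
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```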
When does Insertion Sort achieve its best case time complexity?
Explanation: Insertion Sort runs fastest when the array is already sorted: each element needs only one comparison, giving O(n) time. Reverse order is the worst case, random input is the average case, and an array of identical elements is itself already sorted, so it is also handled in O(n).
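The best/worst-case gap can be made concrete with a comparison counter (a hedged sketch; the counter instrumentation is added here for illustration):

```python
# Insertion sort instrumented to count comparisons: sorted input costs
# about n-1 comparisons (O(n)); reversed input costs ~n^2/2 (O(n^2)).
def insertion_sort(arr):
    a, comparisons = list(arr), 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1
            if a[j - 1] <= a[j]:   # already in place: stop early
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a, comparisons

print(insertion_sort([1, 2, 3, 4, 5])[1])  # 4 comparisons (best case)
print(insertion_sort([5, 4, 3, 2, 1])[1])  # 10 comparisons (worst case)
```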
What is the space complexity of standard Merge Sort?
Explanation: Standard Merge Sort requires O(n) extra space for the temporary arrays used during merging. O(1) is not achievable with these buffers, O(log n) accounts only for the recursion stack (as in in-place algorithms such as QuickSort), and O(n^2) is far more than merging ever needs.
Which data structure is used to implement Heap Sort?
Explanation: Heap Sort utilizes a Binary Heap to efficiently manage and retrieve the next element. AVL trees are balanced search trees, Stack and Queue are unrelated linear structures.
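The heap-based extraction at the core of Heap Sort can be sketched with Python's binary-heap module (an illustrative min-heap variant; classic in-place Heap Sort instead uses a max-heap inside the array):

```python
import heapq

# heapify builds a binary min-heap in O(n); each heappop removes the
# current minimum in O(log n), so n pops give O(n log n) overall.
def heap_sort(arr):
    heap = list(arr)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heap_sort([3, 1, 4, 1, 5]))  # [1, 1, 3, 4, 5]
```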
Which sorting algorithm does NOT rely on element-to-element comparisons?
Explanation: Counting Sort is not comparison-based; it uses counting and indexing. The other algorithms sort by comparing elements with each other.
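A minimal Counting Sort sketch for small non-negative integers makes the "no comparisons" point concrete: elements are tallied and replayed by index, in O(n + k) time for values in the range 0..k (the parameter `k` here is an assumption of the sketch, not part of the quiz):

```python
# Counting sort: no element is ever compared to another element;
# values are used directly as indices into a count array.
def counting_sort(arr, k):
    counts = [0] * (k + 1)
    for x in arr:
        counts[x] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out

print(counting_sort([4, 2, 2, 8, 3, 3, 1], k=8))  # [1, 2, 2, 3, 3, 4, 8]
```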
What is the Big-O time complexity of binary search on a sorted array?
Explanation: Binary search halves the search interval on each step, resulting in O(log n) time. O(n) applies to linear search, O(n log n) is typical for comparison sorting, and O(1) would mean constant-time access regardless of size, which searching a sorted array cannot guarantee.
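The halving is easy to see in an iterative sketch (a minimal version; Python's standard library offers `bisect` for the same job):

```python
# Each iteration discards half of the remaining interval, so at most
# about log2(n) iterations run before lo and hi cross.
def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1   # target is in the right half
        else:
            hi = mid - 1   # target is in the left half
    return -1              # not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```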
The master theorem is primarily used for solving what type of mathematical expressions?
Explanation: The master theorem addresses recurrence relations common in recursive algorithm analysis. Differential equations, probability, and matrices are beyond its scope.
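As a hedged worked example in the document's own notation: Merge Sort's recurrence T(n) = 2T(n/2) + cn has a = 2 subproblems, each of size n/b = n/2, with f(n) = cn work per level. Since n^(log_b a) = n^(log_2 2) = n grows at the same rate as f(n), the middle case of the master theorem applies and T(n) = Θ(n log n).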
What does Big-Θ (Theta) notation represent in algorithm analysis?
Explanation: Big-Theta denotes a tight bound: the function is bounded both above and below by constant multiples of the same growth rate. It is not the same thing as the average case. O gives an upper bound (commonly quoted for the worst case), Omega gives a lower bound, and a loose upper bound is O but not Θ.
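As a hedged worked example: f(n) = 3n^2 + 5n is Θ(n^2), because for all n ≥ 1 we have 3n^2 ≤ f(n) ≤ 8n^2 (using 5n ≤ 5n^2). The function is sandwiched between constant multiples of n^2 from below (c1 = 3) and above (c2 = 8), which is exactly the tight-bound condition.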
For an algorithm running in O(2ⁿ), what happens to running time when n is doubled?
Explanation: In O(2ⁿ), doubling n squares the running time, since 2^(2n) = (2ⁿ)². Equivalently, increasing n by just 1 already doubles the time, which is why exponential algorithms become impractical so quickly. Merely doubling or tripling the time would be linear-style scaling, and constant growth does not apply.
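Tabulating 2ⁿ for a few values of n (an illustrative sketch) shows both behaviors at once:

```python
# Adding 1 to n doubles 2**n; doubling n squares it: 2**(2*n) == (2**n) ** 2.
for n in [4, 5, 8, 10, 20]:
    print(n, 2 ** n)

assert 2 ** 5 == 2 * 2 ** 4       # n -> n + 1: value doubles
assert 2 ** 8 == (2 ** 4) ** 2    # n -> 2n: value is squared
assert 2 ** 20 == (2 ** 10) ** 2
```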