Algorithm Choice
Which data structure generally offers the most efficient way to detect duplicates in a shopping list if order is not important and the list is very large?
- A. An array, using nested loops for comparison.
- B. A sorted array, using binary search.
- C. A hash set (or dictionary), checking for existing keys.
- D. A linked list, traversing the list for each item.
- E. A binary search tree, inserting each item.
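For reference, a minimal Python sketch of the hash-set approach (the helper name `find_duplicates` is illustrative, not from any particular library):

```python
def find_duplicates(items):
    """Return the set of items that appear more than once.

    Each membership check and insertion on a set is O(1) on average,
    so the whole scan is O(n) time with O(n) extra space.
    """
    seen = set()
    duplicates = set()
    for item in items:
        if item in seen:
            duplicates.add(item)
        else:
            seen.add(item)
    return duplicates

print(find_duplicates(["milk", "eggs", "milk", "bread"]))  # {'milk'}
```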
Nested Loops Efficiency
What is the time complexity of using nested loops to compare each item in a shopping list of 'n' items against every other item to find duplicates?
- A. O(n)
- B. O(log n)
- C. O(n log n)
- D. O(n^2)
- E. O(1)
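To see where the quadratic cost comes from, here is a sketch (the helper name `has_duplicate_nested` is illustrative): the inner loop executes (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons in total, which grows as O(n^2).

```python
def has_duplicate_nested(items):
    """Compare every pair exactly once: n*(n-1)/2 comparisons in total."""
    n = len(items)
    for i in range(n):
        for j in range(i + 1, n):
            if items[i] == items[j]:
                return True
    return False
```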
HashSet Efficiency
What is the average time complexity of adding an element to a hash set (assuming a good hash function) when checking for duplicates in a shopping list?
- A. O(n)
- B. O(log n)
- C. O(n log n)
- D. O(n^2)
- E. O(1)
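One rough way to observe the O(1) average cost is to time bulk insertions at increasing sizes: if each insertion is O(1) on average, total time should grow roughly linearly with n (a sketch only; absolute numbers depend on the machine).

```python
import time

def time_insertions(n):
    """Insert n integers into a set and return the elapsed seconds."""
    start = time.perf_counter()
    s = set()
    for i in range(n):
        s.add(i)
    return time.perf_counter() - start

# With O(1) average insertions, totals scale roughly linearly in n.
for n in (10_000, 100_000, 1_000_000):
    print(n, round(time_insertions(n), 4))
```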
Space Complexity Trade-off
When comparing nested loops and a hash set for duplicate detection, which method typically requires significantly more memory?
- A. Nested loops, because they store all items multiple times.
- B. Nested loops, but only for very large lists.
- C. The hash set, since it must keep its own copy of the items in a hash table, adding O(n) extra space.
- D. Neither, they use approximately the same amount of memory.
- E. It depends on the speed of the processor.
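A rough illustration of the trade-off: `sys.getsizeof` reports only a container's own footprint, not the objects it references, so treat this as a sketch rather than a precise measurement.

```python
import sys

items = [f"item-{i}" for i in range(10_000)]

# Nested loops need no auxiliary structure: O(1) extra space.
# The hash-set approach allocates a table alongside the list: O(n) extra.
seen = set(items)
print("list container bytes:", sys.getsizeof(items))
print("set container bytes: ", sys.getsizeof(seen))
```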
Sorted Array Efficiency
If a shopping list is already sorted, which method is the most efficient for finding duplicates?
- A. Using nested loops for comparison.
- B. A hash set (or dictionary), checking for existing keys.
- C. Linear scan, comparing adjacent elements.
- D. A tree, building a search tree.
- E. Random selection.
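A minimal sketch of the adjacent-elements scan (the helper name `has_duplicate_sorted` is illustrative): in a sorted list, equal items must be neighbors, so a single O(n) pass with O(1) extra space suffices.

```python
def has_duplicate_sorted(sorted_items):
    """Assumes the input is sorted: one O(n) pass, O(1) extra space."""
    for prev, curr in zip(sorted_items, sorted_items[1:]):
        if prev == curr:
            return True
    return False

print(has_duplicate_sorted(["apples", "bread", "bread", "milk"]))  # True
```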
Small List Scenario
For a very small shopping list (e.g., less than 10 items), which duplicate detection method is likely to be the fastest in practice due to lower overhead?
- A. Using a hash set.
- B. Using nested loops.
- C. Sorting the list and using binary search.
- D. Sorting the list and scanning adjacent elements.
- E. Using a complex algorithm.
Code Snippet Efficiency
Consider the following pseudo-code: `for i from 0 to n-1: for j from i+1 to n-1: if list[i] == list[j]: print 'Duplicate'`. What is the dominant factor affecting its efficiency?
- A. The speed of the 'print' statement.
- B. The size of the shopping list, 'n'.
- C. The number of duplicates in the list.
- D. The type of items in the list.
- E. None of the above.
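A quick way to confirm that 'n' dominates: counting comparisons shows the loops perform n(n-1)/2 equality tests whether the list contains no duplicates or nothing but duplicates (a small demonstration sketch; `count_comparisons` is an illustrative helper).

```python
def count_comparisons(lst):
    """Count the equality tests the nested loops would perform."""
    n = len(lst)
    comparisons = 0
    for i in range(n):
        for j in range(i + 1, n):
            comparisons += 1
    return comparisons

print(count_comparisons(list(range(100))))  # 4950 = 100*99/2, no duplicates
print(count_comparisons([1] * 100))         # 4950 again, all duplicates
```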
Choosing the Right Tool
A shopping list is received from an external source, and duplicate detection is a one-time operation. Which of the following is the MOST important factor when choosing an algorithm?
- A. Minimizing memory usage at all costs.
- B. Minimizing code complexity, even if it's slightly slower.
- C. Achieving the absolute fastest execution time, regardless of code complexity.
- D. Reducing the cost of sorting the list before searching it.
- E. Decreasing compilation time.
Hash Function Impact
How can a poorly implemented hash function affect the efficiency of duplicate detection using a hash set?
- A. It will not affect the efficiency.
- B. It can lead to frequent collisions, degrading performance towards O(n) for insertions and lookups.
- C. It can reduce the memory usage of the hash set.
- D. It can make the code unreadable.
- E. It can increase the speed of lookups.
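A sketch of the degenerate case: a key type whose hash is constant forces every entry into the same bucket, so each insertion or lookup must scan the previously stored keys.

```python
class BadKey:
    """Illustrative key whose hash is constant, so every key collides."""
    def __init__(self, value):
        self.value = value
    def __hash__(self):
        return 42  # a poor hash function: one bucket for all keys
    def __eq__(self, other):
        return isinstance(other, BadKey) and self.value == other.value

# Every insertion walks the chain of previously stored keys,
# so building the set costs roughly O(n^2) instead of O(n).
seen = set()
for i in range(1000):
    seen.add(BadKey(i))
```

With a reasonable hash, e.g. `return hash(self.value)`, the same loop runs in O(n) overall.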
Best-Case Scenarios
In which scenario do nested loops perform efficiently?
- A. When comparing images.
- B. When comparing very small datasets.
- C. When order is important.
- D. When the array contains millions of items.
- E. Nested loops will never perform efficiently.