Data Structures & Algorithms: Hashing and Hash Maps Quiz

Challenge your understanding of hashing techniques, hash maps, and their role in solving algorithmic problems such as Two Sum and Subarray Sum. Each question explores critical concepts regarding hash functions, collision resolution, and real-world applications.

  1. Selecting Hash Functions

    Given a set of integer keys whose values are primarily multiples of 10, which property is MOST important for a good hash function to minimize clustering in a hash map of size 10?

    1. Has a time complexity greater than O(1)
    2. Ensures uniform distribution even with regular patterns in input data
    3. Is the same as the identity function
    4. Collapses all even numbers to the same bucket
    5. Maps every key to the same index
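
    A minimal sketch (plain Java; the class name, table size of 10, and the mixing constant are illustrative choices, not part of the question) showing why an identity-style hash collapses keys that are multiples of 10 into one bucket, while mixing the bits first spreads a patterned input across the table:

    ```java
    public class HashSpreadDemo {
        // Naive hash: identity modulo table size.
        static int naiveBucket(int key, int size) {
            return Math.floorMod(key, size);
        }

        // Mixing hash: scramble the bits before taking the modulus
        // (the multiplier is an arbitrary odd constant used only for illustration).
        static int mixedBucket(int key, int size) {
            int h = key * 0x9E3779B1;   // multiply to spread the bits
            h ^= (h >>> 16);            // fold high bits into the low bits
            return Math.floorMod(h, size);
        }

        public static void main(String[] args) {
            int size = 10;
            for (int key = 10; key <= 60; key += 10) {
                System.out.printf("key=%d  naive=%d  mixed=%d%n",
                        key, naiveBucket(key, size), mixedBucket(key, size));
            }
            // The naive bucket is 0 for every key; the mixed buckets do not all collide.
        }
    }
    ```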
  2. Collision Handling Strategies

    Which hash collision resolution technique can lead to primary clustering, where long runs of occupied slots promote longer probe sequences in a hash table?

    1. Quadratic probing
    2. Double hashing
    3. Linear probing
    4. Chaining
    5. Separate hashing
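
    A minimal linear-probing insert sketch (plain Java, fixed-size array, no resizing; names are illustrative) showing how a run of occupied slots forces later keys to walk past the entire run, which is what primary clustering means:

    ```java
    public class LinearProbingDemo {
        static final int SIZE = 11;
        static Integer[] table = new Integer[SIZE];

        // Insert with linear probing: on a collision, step to the next slot.
        static int insert(int key) {
            int index = Math.floorMod(key, SIZE);
            int probes = 1;
            while (table[index] != null) {   // walk the run of occupied slots
                index = (index + 1) % SIZE;
                probes++;
            }
            table[index] = key;
            return probes;
        }

        public static void main(String[] args) {
            // Keys that hash at or next to the same bucket build one long run,
            // so each new insertion needs more probes than the last.
            int[] keys = {11, 22, 33, 44, 12};
            for (int key : keys) {
                System.out.printf("inserted %d after %d probe(s)%n", key, insert(key));
            }
        }
    }
    ```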
  3. HashMap Applications: Two Sum

    In the classic Two Sum problem, why is a hash map preferred over a sorted array and binary search for achieving optimal time complexity?

    1. Hash maps avoid the need to store indices
    2. A hash map offers average O(1) lookup and insertion
    3. Hash maps require all keys to be unique, which is always true for Two Sum
    4. Binary search allows O(1) search on unsorted data
    5. Sorted array provides the fastest insertion
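
    A short sketch of the standard one-pass hash-map approach to Two Sum (plain Java, using HashMap); each lookup and insertion is O(1) on average, so the whole scan is O(n):

    ```java
    import java.util.HashMap;
    import java.util.Map;

    public class TwoSum {
        // Returns indices of two numbers that add up to target, or null if none exist.
        static int[] twoSum(int[] nums, int target) {
            Map<Integer, Integer> seen = new HashMap<>(); // value -> index
            for (int i = 0; i < nums.length; i++) {
                int needed = target - nums[i];
                if (seen.containsKey(needed)) {
                    return new int[] {seen.get(needed), i};
                }
                seen.put(nums[i], i);   // remember this value for later lookups
            }
            return null;
        }

        public static void main(String[] args) {
            int[] result = twoSum(new int[] {2, 7, 11, 15}, 9);
            System.out.println(result[0] + ", " + result[1]);  // 0, 1
        }
    }
    ```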
  4. Designing Custom Hash Functions

    If you are designing a hash function for strings where many keys share the same suffix, which approach would most REDUCE the risk of collisions?

    1. Weigh the prefix more heavily than the suffix in the computation
    2. Use only the ASCII value of the last character
    3. Map strings to their length only
    4. Ignore the middle character of each string
    5. Return a constant hash value for all strings
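
    A sketch of a polynomial string hash (plain Java; the base 31 is a common but arbitrary choice) that weighs every character by position, so strings sharing a suffix still hash differently as long as they differ anywhere else:

    ```java
    public class StringHashDemo {
        // Polynomial hash: every character position contributes to the result.
        static int polyHash(String s) {
            int h = 0;
            for (int i = 0; i < s.length(); i++) {
                h = 31 * h + s.charAt(i);
            }
            return h;
        }

        // A poor hash for this workload: only the last character matters,
        // so every string with the same suffix collides.
        static int lastCharHash(String s) {
            return s.isEmpty() ? 0 : s.charAt(s.length() - 1);
        }

        public static void main(String[] args) {
            String[] keys = {"loading", "parsing", "hashing"};
            for (String key : keys) {
                System.out.printf("%-8s poly=%d  lastChar=%d%n",
                        key, polyHash(key), lastCharHash(key));
            }
        }
    }
    ```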
  5. Subarray Sum Problem with Hash Maps

    How does a hash map facilitate a more efficient solution to finding a contiguous subarray that sums to a target value in an array of integers?

    1. By maintaining the sorted order of all window subarrays
    2. By grouping numbers of similar value into buckets
    3. By always storing every possible subarray in advance
    4. By avoiding duplicate values in all subarrays
    5. By storing prefix sums and their earliest indices for constant-time lookups
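
    A sketch of the prefix-sum plus hash-map technique (plain Java; method and variable names are illustrative): store each running sum with its earliest index, and whenever prefixSum - target has been seen, the elements between those indices sum to target, giving an O(n) scan:

    ```java
    import java.util.HashMap;
    import java.util.Map;

    public class SubarraySum {
        // Returns {start, end} indices (inclusive) of a subarray summing to target, or null.
        static int[] subarrayWithSum(int[] nums, int target) {
            Map<Integer, Integer> firstIndexOfSum = new HashMap<>();
            firstIndexOfSum.put(0, -1);          // the empty prefix has sum 0
            int prefixSum = 0;
            for (int i = 0; i < nums.length; i++) {
                prefixSum += nums[i];
                // If some earlier prefix had sum (prefixSum - target),
                // the elements after it up to index i sum to target.
                Integer start = firstIndexOfSum.get(prefixSum - target);
                if (start != null) {
                    return new int[] {start + 1, i};
                }
                firstIndexOfSum.putIfAbsent(prefixSum, i);  // keep the earliest index
            }
            return null;
        }

        public static void main(String[] args) {
            int[] range = subarrayWithSum(new int[] {3, 4, -7, 1, 3, 3, 1}, 7);
            System.out.println(range[0] + ".." + range[1]);  // 0..1
        }
    }
    ```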
  6. Hash Map Load Factor Impact

    What is the effect of increasing the load factor in an open-addressed hash table where the table is NOT resized?

    1. Collisions will no longer occur
    2. Average lookup and insertion times increase significantly
    3. Keys become permanently immutable
    4. Memory usage for the table decreases
    5. Insertion time stays at O(1) regardless of occupancy
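
    A rough sketch (plain Java) of the classical probe-count approximations for linear probing under uniform-hashing assumptions; the point is only that expected probes per operation blow up as the load factor approaches 1 when the table is never resized:

    ```java
    public class LoadFactorDemo {
        public static void main(String[] args) {
            // Classical approximations for linear probing (uniform hashing assumed):
            //   successful search   ~ 0.5 * (1 + 1/(1 - a))
            //   unsuccessful search ~ 0.5 * (1 + 1/(1 - a)^2)
            double[] loadFactors = {0.5, 0.7, 0.9, 0.95, 0.99};
            for (double a : loadFactors) {
                double hit  = 0.5 * (1 + 1 / (1 - a));
                double miss = 0.5 * (1 + 1 / Math.pow(1 - a, 2));
                System.out.printf("load=%.2f  hit=%.1f probes  miss=%.1f probes%n",
                        a, hit, miss);
            }
            // Without resizing, probe counts grow sharply as the load factor nears 1.
        }
    }
    ```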
  7. Chaining vs. Open Addressing

    When implementing hash map collision handling, which statement accurately describes a difference between chaining and open addressing?

    1. Open addressing reduces memory overhead compared to chaining in all cases
    2. Chaining cannot handle duplicate keys but open addressing can
    3. Chaining never uses pointers; open addressing always uses them
    4. Chaining uses linked lists to store entries; open addressing stores all entries within the array
    5. In open addressing, the hash function is never used
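
    A minimal sketch (plain Java; bucket counts and keys are illustrative) of the structural difference: a chained table keeps a list of entries per bucket, while an open-addressed table keeps every entry directly in the backing array and probes for a free slot:

    ```java
    import java.util.ArrayList;
    import java.util.List;

    public class CollisionSchemes {
        public static void main(String[] args) {
            int size = 5;

            // Chaining: each bucket is a list, so 7 and 12 (both hash to 2) coexist there.
            List<List<Integer>> chained = new ArrayList<>();
            for (int i = 0; i < size; i++) chained.add(new ArrayList<>());
            for (int key : new int[] {7, 12}) chained.get(key % size).add(key);
            System.out.println("chained bucket 2: " + chained.get(2));

            // Open addressing: every entry lives in the array itself;
            // 12 finds slot 2 occupied and probes forward to slot 3.
            Integer[] open = new Integer[size];
            for (int key : new int[] {7, 12}) {
                int index = key % size;
                while (open[index] != null) index = (index + 1) % size;
                open[index] = key;
            }
            System.out.println("open-addressed slot 3: " + open[3]);
        }
    }
    ```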
  8. Hash Map Key Design Pitfalls

    When using user-defined objects as keys in a hash map, what common pitfall can lead to entries becoming 'invisible' or unreachable from the map?

    1. Having fields in the key set to null values
    2. Reusing the hash map in multiple threads
    3. Assigning keys sequential integer values
    4. Modifying fields of the key that affect its hash code after insertion
    5. Using objects with a toString method
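
    A short sketch (plain Java; the Point class is hypothetical) of the mutable-key pitfall: the entry is stored in the bucket chosen by the key's hash code at insertion time, so changing a field that feeds hashCode afterwards makes lookups search the wrong place:

    ```java
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Objects;

    public class MutableKeyPitfall {
        static class Point {
            int x, y;
            Point(int x, int y) { this.x = x; this.y = y; }

            @Override public boolean equals(Object o) {
                return o instanceof Point && ((Point) o).x == x && ((Point) o).y == y;
            }
            @Override public int hashCode() {
                return Objects.hash(x, y);   // depends on mutable fields
            }
        }

        public static void main(String[] args) {
            Map<Point, String> labels = new HashMap<>();
            Point p = new Point(1, 2);
            labels.put(p, "checkpoint");

            p.x = 99;   // hash code changes, but the entry stays in its old bucket

            System.out.println(labels.get(p));          // typically null: wrong bucket searched
            System.out.println(labels.containsKey(p));  // typically false
            System.out.println(labels.size());          // still 1: the entry is unreachable
        }
    }
    ```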
  9. Hash Maps and Duplicate Values

    Which statement best explains how a hash map deals with duplicate values but unique keys?

    1. It allows multiple keys to map to the same value without issue
    2. Duplicate values throw a runtime exception
    3. It sorts the values and discards identical ones
    4. It does not permit any duplicate values at all
    5. Each new value replaces the map entirely
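
    A tiny sketch (plain Java, HashMap) showing that duplicate values are unremarkable: two distinct keys can map to the same value, and re-putting an existing key replaces only that one entry's value:

    ```java
    import java.util.HashMap;
    import java.util.Map;

    public class DuplicateValuesDemo {
        public static void main(String[] args) {
            Map<String, Integer> ages = new HashMap<>();
            ages.put("alice", 30);
            ages.put("bob", 30);      // duplicate value under a different key: fine
            ages.put("alice", 31);    // same key again: only this entry's value is replaced

            System.out.println(ages.get("alice"));  // 31
            System.out.println(ages.get("bob"));    // 30
            System.out.println(ages.size());        // 2
        }
    }
    ```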
  10. Selecting Hash Map Size

    Why is it often recommended to use a prime number as the hash table size when implementing a modular hash function?

    1. So that the modulus operation is faster
    2. It is required for storing numeric keys
    3. To help achieve better key distribution and reduce clustering
    4. Because prime sizes conserve more memory
    5. Prime table sizes eliminate the need for collision handling
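
    A small sketch (plain Java; the key pattern and sizes are illustrative) of why a prime modulus helps with patterned keys: when the keys step by 10, a table of size 10 reuses a single bucket, while a prime size such as 11 sends the same keys to distinct buckets:

    ```java
    public class PrimeSizeDemo {
        public static void main(String[] args) {
            int[] keys = {10, 20, 30, 40, 50, 60};

            // Composite size sharing a factor with the key pattern: every key collides.
            System.out.print("mod 10:");
            for (int key : keys) System.out.print(" " + key % 10);
            System.out.println();

            // Prime size: the same keys spread across distinct buckets.
            System.out.print("mod 11:");
            for (int key : keys) System.out.print(" " + key % 11);
            System.out.println();
        }
    }
    ```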