Meta Data Structures & Algorithms Interview Concepts Quiz

Test your knowledge of key data structures and algorithmic patterns commonly evaluated in Meta (Facebook) interviews. This quiz covers frequent question types, concepts, and trade-off discussions to help optimize your DS & Algo interview preparation.

  1. Simplifying Unix Paths

    Which approach is most suitable for simplifying a Unix-style absolute path (like '/a/./b/../../c/') to its canonical form?

    1. Use a stack to process directory tokens sequentially
    2. Reverse the directory tokens before processing
    3. Count the frequency of each directory token
    4. Sort the directory names alphabetically

    Explanation: A stack is used to simulate navigation through directories, allowing efficient handling of '.' and '..' tokens to move up or stay at the current directory. Sorting or reversing does not capture the order of path traversal required. Counting frequencies is irrelevant as the order and structure of directories matter more than occurrences.
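    The stack-based approach described above can be sketched as follows (the function name `simplify_path` is illustrative):

```python
def simplify_path(path: str) -> str:
    """Simplify a Unix-style absolute path to its canonical form."""
    stack = []
    for token in path.split('/'):
        if token == '' or token == '.':
            continue             # skip empty tokens (from '//') and '.'
        if token == '..':
            if stack:
                stack.pop()      # move up one directory (no-op at root)
        else:
            stack.append(token)  # a regular directory name
    return '/' + '/'.join(stack)
```

    Note that splitting on '/' naturally handles repeated slashes, and joining the surviving tokens guarantees the canonical form with no trailing slash.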

  2. Graph Cloning Technique

    When cloning an undirected connected graph node-by-node, which traversal effectively avoids infinite recursion due to cycles?

    1. Sorting the nodes before cloning
    2. Recursively cloning all nodes without tracking
    3. Depth-first search with a visited map
    4. Breadth-first search without marking visited nodes

    Explanation: Using DFS with a visited map ensures each node is cloned exactly once, preventing infinite recursion from cycles. BFS without marking visited nodes would revisit nodes endlessly in a cyclic graph. Sorting does not prevent repeated traversal, and blindly recursing without any tracking causes stack overflow on cyclic graphs.
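    A minimal sketch of the DFS-with-visited-map technique (class and function names are illustrative):

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.neighbors = []

def clone_graph(node, visited=None):
    """Deep-copy a graph; `visited` maps originals to their clones."""
    if node is None:
        return None
    if visited is None:
        visited = {}
    if node in visited:
        return visited[node]    # reuse the existing clone: breaks cycles
    copy = Node(node.val)
    visited[node] = copy        # register BEFORE recursing into neighbors
    for nb in node.neighbors:
        copy.neighbors.append(clone_graph(nb, visited))
    return copy
```

    The key detail is registering the clone in the map before recursing, so a cycle back to the current node finds the partially built copy instead of recursing forever.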

  3. Merging k Sorted Lists Optimally

    What data structure offers the fastest practical performance for merging k sorted singly-linked lists into a single sorted list?

    1. Min-heap (priority queue)
    2. Binary Search Tree
    3. Simple array concatenation
    4. Divide and conquer without any extra structure

    Explanation: A min-heap efficiently finds the smallest current node among k lists, handling each node only once for optimal time performance. Concatenating arrays ignores sorted order, BST insertion is unnecessary overhead, and divide-and-conquer still requires multiple passes even if it offers similar asymptotic bounds.
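    The heap-based merge can be sketched with Python's `heapq` (the tie-breaker index makes tuples comparable when node values are equal):

```python
import heapq

class ListNode:
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def merge_k_lists(lists):
    """Merge k sorted linked lists in O(N log k) using a min-heap."""
    # Store (value, list index, node); ListNodes themselves aren't comparable.
    heap = [(head.val, i, head) for i, head in enumerate(lists) if head]
    heapq.heapify(heap)
    dummy = tail = ListNode(0)
    while heap:
        _, i, node = heapq.heappop(heap)  # smallest current head
        tail.next = node
        tail = node
        if node.next:
            heapq.heappush(heap, (node.next.val, i, node.next))
    return dummy.next
```

    Each of the N nodes passes through the heap exactly once, and the heap never holds more than k entries.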

  4. LRU Cache Internals

    Which combination of data structures is typically used to implement an efficient LRU (Least Recently Used) cache?

    1. Single array
    2. Hash map and doubly linked list
    3. Stack and min-heap
    4. Queue and binary search tree

    Explanation: A hash map provides fast key lookups, and a doubly linked list maintains access order efficiently for LRU eviction. Stack and min-heap don't facilitate both O(1) operations, queues and BSTs are unnecessary or too slow, and using only an array lacks efficient updates or eviction.
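    A compact sketch using `collections.OrderedDict`, which is itself backed by exactly this combination (a hash map plus a doubly linked list); interviews often ask for the pointer-level version, but the structure is the same:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.cache = OrderedDict()  # hash map + doubly linked list internally

    def get(self, key):
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)        # mark as most recently used
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
```

    Both `get` and `put` run in O(1): the map locates the node, and the linked list reorders or evicts it without scanning.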

  5. Handling Backtracking in Directories

    When resolving 'cd ..' operations in filesystem paths, which stack operation reflects moving up a directory?

    1. Sort the stack elements
    2. Pop the top element from the stack
    3. Clear the entire stack
    4. Push '..' onto the stack

    Explanation: Popping the stack simulates returning to the parent directory, which is needed for '..' tokens. Pushing '..' would incorrectly add it as a directory. Clearing the stack would remove the entire history, and sorting is unrelated to directory hierarchy.

  6. Deep Copying Graphs

    Why is a mapping from original nodes to new nodes essential when cloning a graph with cycles?

    1. To increase the recursion depth
    2. To reduce memory usage
    3. To avoid creating multiple copies of the same node
    4. To sort neighbor lists alphabetically

    Explanation: The mapping ensures each original node maps to exactly one cloned node, preventing duplicates that can result from cycles. Increased recursion depth is a risk to be avoided, sorting neighbors is unrelated, and the mapping may actually increase memory usage slightly.

  7. Advantages of Min-Heap in List Merging

    What is the main advantage of using a min-heap when merging k sorted linked lists over straightforward pairwise merging?

    1. Heap-based merging sorts lists in descending order
    2. Each node is handled only once in the heap-based approach
    3. Min-heap reduces the overall memory used
    4. Pairwise merging always has better time complexity

    Explanation: Min-heap ensures each node is inserted and removed once, while pairwise merging re-examines nodes multiple times. Memory use isn't lower with heaps. Pairwise merging isn't faster, and min-heaps maintain ascending, not descending, order.

  8. Hash Map Usage in LRU Cache

    What is the primary role of the hash map in an LRU cache implementation?

    1. Maintaining the order of element usage
    2. Sorting values before eviction
    3. Providing constant-time access to nodes by key
    4. Compressing data for storage

    Explanation: The hash map enables fast key-based lookups, ensuring O(1) access for put/get. Managing usage order is handled by the doubly linked list, not the map. Sorting and compression are not the map's functions in an LRU cache.

  9. LFU vs. LRU Cache

    Which element is evicted when a Least Frequently Used (LFU) cache reaches capacity?

    1. The most recently accessed element
    2. The element with the lowest access frequency
    3. The oldest element in insertion order
    4. The largest key value

    Explanation: LFU caches evict the item used least frequently, not by recency or key size. LRU caches evict least recently used, and neither policy depends on key numerical values.
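    A deliberately naive sketch of LFU eviction to make the policy concrete; class and field names are illustrative, and eviction here is O(n), whereas production LFU caches use frequency buckets for O(1):

```python
from collections import Counter

class NaiveLFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.freq = Counter()   # access count per key
        self.order = 0
        self.last_used = {}     # recency, to break frequency ties

    def get(self, key):
        if key not in self.data:
            return -1
        self.freq[key] += 1
        self.order += 1
        self.last_used[key] = self.order
        return self.data[key]

    def put(self, key, value):
        if self.capacity == 0:
            return
        if key not in self.data and len(self.data) >= self.capacity:
            # Evict the lowest-frequency key; oldest access breaks ties.
            victim = min(self.data,
                         key=lambda k: (self.freq[k], self.last_used[k]))
            del self.data[victim]
            del self.freq[victim]
            del self.last_used[victim]
        self.data[key] = value
        self.freq[key] += 1     # a put counts as an access here
        self.order += 1
        self.last_used[key] = self.order
```

    Contrast with LRU: after accessing key 1 twice, inserting a new key evicts key 2 (frequency 1), even though key 2 was touched more recently than key 1's first access.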

  10. Recursion vs. Iteration in Graph Cloning

    Why might an iterative solution be preferred over pure recursion for deep cloning large graphs?

    1. To ensure nodes are sorted by value
    2. To prevent stack overflow associated with deep or cyclic graphs
    3. Because recursion is always slower
    4. Since recursion cannot handle neighbor lists

    Explanation: Iterative solutions, which keep an explicit stack or queue on the heap, avoid the call-stack overflow that recursion risks on large or deep graphs. Recursion isn't inherently slower, node sorting is unrelated, and recursion handles neighbor lists fine.
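    An iterative BFS variant of graph cloning, sketched with an explicit queue in place of the call stack (names are illustrative):

```python
from collections import deque

class Node:
    def __init__(self, val):
        self.val = val
        self.neighbors = []

def clone_graph_iterative(start):
    """Clone a graph with BFS; the deque replaces the recursion stack."""
    if start is None:
        return None
    clones = {start: Node(start.val)}  # original -> clone
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nb in node.neighbors:
            if nb not in clones:
                clones[nb] = Node(nb.val)
                queue.append(nb)       # each original enqueued once
            clones[node].neighbors.append(clones[nb])
    return clones[start]
```

    The `clones` map plays the same role as the visited map in the recursive version; only the traversal mechanism changes, so memory growth lands on the heap rather than the limited call stack.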

  11. Heap Push and Pop Operations

    What is the time complexity of inserting or removing a single element in a min-heap of size k while merging k sorted lists?

    1. O(k)
    2. O(n)
    3. O(1)
    4. O(log k)

    Explanation: Heaps have O(log k) insertion and removal times, which is crucial to achieving optimal merge performance. O(1) is too optimistic, O(k) would correspond to a linear scan of all list heads, and O(n) describes the total work across all nodes, not a single heap operation.

  12. Canonical Path Formatting

    What property must a simplified canonical path, like the result of '/home//foo/', always satisfy?

    1. No consecutive or trailing slashes, except possibly for root '/'
    2. Path ends with the last directory contained in '..'
    3. Each token separated by two slashes
    4. All directory names capitalized

    Explanation: Canonical paths must collapse redundant slashes and omit trailing slashes, except for the root '/'. Letter case is unchanged, tokens are separated by single (not double) slashes, and '..' tokens are resolved during simplification rather than appearing in the result.

  13. Stack Overflows in Recursive Solutions

    Why can pure recursion lead to stack overflow when solving graph traversal problems in interviews?

    1. Because recursion allocates function calls on the call stack, which is limited
    2. Because recursive functions cannot process leaf nodes
    3. As recursion inherently runs slower than iteration
    4. Due to data structure serialization limitations

    Explanation: Recursion grows the call stack, and too many nested calls can exceed limits—especially with large graphs or cycles. Data serialization, slower execution, or inability to reach leaves are not causes for stack overflow.

  14. Correct Doubly Linked List Updates

    What often causes errors when updating the order of elements in a doubly linked list for an LRU cache?

    1. Mixing up key-value mapping in the hash map
    2. Failing to increment the access frequency
    3. Incorrectly updating node pointers during insertions or removals
    4. Sorting the nodes after each access

    Explanation: Problems usually come from mishandling prev/next pointers, leading to broken links. Access frequency is more relevant for LFU caches, mapping errors are hash map issues, and sorting is unnecessary in DLLs.
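    The pointer discipline can be isolated in two helpers; using sentinel head/tail nodes (as sketched below with illustrative names) removes the None checks that cause most of these bugs:

```python
class DListNode:
    def __init__(self, key=None, val=None):
        self.key, self.val = key, val
        self.prev = self.next = None

def unlink(node):
    """Remove node from the list: BOTH neighbors' pointers must change."""
    node.prev.next = node.next
    node.next.prev = node.prev

def insert_after(anchor, node):
    """Insert node right after anchor: all four pointers must be set."""
    node.prev = anchor
    node.next = anchor.next
    anchor.next.prev = node
    anchor.next = node
```

    A typical bug is updating `anchor.next` before capturing the old `anchor.next` in `node.next`, which silently drops the rest of the list; ordering the four assignments as above avoids it.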

  15. Merge Complexity Comparison

    For merging k sorted linked lists with N total nodes, what is the worst-case time complexity of the optimal heap-based solution?

    1. O(N k)
    2. O(N^2)
    3. O(N log k)
    4. O(k log N)

    Explanation: Each of the N nodes is pushed onto and popped from the heap once, and each heap operation takes O(log k), giving O(N log k) overall. The other stated complexities correspond to naïve merging strategies or do not apply.

  16. Path Simplification Edge Cases

    How should a sequence of '...' (three dots) in a Unix-style path be treated during simplification?

    1. It should be treated as a regular directory name
    2. It indicates the parent directory twice
    3. It should cause an error
    4. It should be collapsed like '.' or '..'

    Explanation: Only single and double dots have a special meaning; longer sequences like '...' are just directory names. They do not refer to any navigation rule, nor do they indicate errors or require collapsing.