Test your knowledge of key data structures and algorithmic patterns commonly evaluated in Meta (Facebook) interviews. This quiz covers frequent question types, concepts, and trade-off discussions to help optimize your DS & Algo interview preparation.
Which approach is most suitable for simplifying a Unix-style absolute path (like '/a/./b/../../c/') to its canonical form?
Explanation: A stack simulates directory navigation: '.' tokens are ignored (stay in the current directory) and '..' tokens pop the most recent directory (move up one level). Sorting or reversing does not preserve the traversal order the path requires, and counting frequencies is irrelevant because the order and structure of directories matter more than how often names occur.
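A minimal stack-based sketch; the function name simplify_path and the exact tokenization are my own choices, not something the quiz prescribes:

    def simplify_path(path: str) -> str:
        stack = []
        for token in path.split('/'):
            if token in ('', '.'):
                continue              # skip empty tokens and "stay here"
            if token == '..':
                if stack:
                    stack.pop()       # move up one level, but never above root
            else:
                stack.append(token)   # ordinary directory name
        return '/' + '/'.join(stack)

    print(simplify_path('/a/./b/../../c/'))   # '/c'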
When cloning an undirected connected graph node-by-node, which traversal effectively avoids infinite recursion due to cycles?
Explanation: Using DFS with a visited map ensures each node is cloned exactly once, preventing infinite recursion from cycles. BFS without marking visited nodes is just as problematic for cycles. Sorting does not prevent repeated traversal, and blindly recursing without tracking visited nodes loops forever on cyclic graphs and overflows the stack.
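A rough recursive DFS sketch; the Node class shape (a value plus a neighbors list) and the name clone_graph are assumptions for illustration:

    class Node:
        def __init__(self, val, neighbors=None):
            self.val = val
            self.neighbors = neighbors or []

    def clone_graph(node, cloned=None):
        if node is None:
            return None
        if cloned is None:
            cloned = {}                  # original node -> its clone
        if node in cloned:
            return cloned[node]          # already cloned: reuse instead of recursing
        copy = Node(node.val)
        cloned[node] = copy              # register before visiting neighbors
        copy.neighbors = [clone_graph(n, cloned) for n in node.neighbors]
        return copy

Registering the clone in the map before recursing into neighbors is what breaks the cycle.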
What data structure offers the fastest practical performance for merging k sorted singly-linked lists into a single sorted list?
Explanation: A min-heap efficiently finds the smallest current node among the k lists, and each node passes through the heap exactly once, giving optimal time performance. Concatenating arrays ignores the existing sorted order, BST insertion adds unnecessary overhead, and divide-and-conquer still requires multiple passes even though it offers similar asymptotic bounds.
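One way to sketch the heap-based merge with Python's heapq; ListNode and merge_k_lists are illustrative names, and the list index in each tuple is only a tie-breaker so nodes themselves are never compared:

    import heapq

    class ListNode:
        def __init__(self, val=0, next=None):
            self.val = val
            self.next = next

    def merge_k_lists(lists):
        heap = [(node.val, i, node) for i, node in enumerate(lists) if node]
        heapq.heapify(heap)                        # at most k list heads
        dummy = tail = ListNode()
        while heap:
            _, i, node = heapq.heappop(heap)       # smallest current head, O(log k)
            tail.next = node
            tail = node
            if node.next:
                heapq.heappush(heap, (node.next.val, i, node.next))
        return dummy.next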
Which combination of data structures is typically used to implement an efficient LRU (Least Recently Used) cache?
Explanation: A hash map provides fast key lookups, and a doubly linked list maintains access order efficiently for LRU eviction. A stack or min-heap cannot provide both lookup and reordering in O(1), queues and BSTs are unnecessary or too slow, and an array alone lacks efficient reordering and eviction.
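A compact sketch using collections.OrderedDict, which is itself backed by a hash map plus a doubly linked list, so it stands in for the hand-rolled combination; the get/put behavior below follows a common interview convention and is an assumption on my part:

    from collections import OrderedDict

    class LRUCache:
        def __init__(self, capacity: int):
            self.capacity = capacity
            self.data = OrderedDict()

        def get(self, key):
            if key not in self.data:
                return -1
            self.data.move_to_end(key)          # mark as most recently used
            return self.data[key]

        def put(self, key, value):
            if key in self.data:
                self.data.move_to_end(key)
            self.data[key] = value
            if len(self.data) > self.capacity:
                self.data.popitem(last=False)   # evict the least recently used entry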
When resolving 'cd ..' operations in filesystem paths, which stack operation reflects moving up a directory?
Explanation: Popping the stack simulates returning to the parent directory, which is needed for '..' tokens. Pushing '..' would incorrectly add it as a directory. Clearing the stack would remove the entire history, and sorting is unrelated to directory hierarchy.
Why is a mapping from original nodes to new nodes essential when cloning a graph with cycles?
Explanation: The mapping ensures each original node maps to exactly one cloned node, preventing the duplicate clones that cycles would otherwise produce. It does not address recursion depth, sorting neighbors is unrelated, and the map actually adds a small amount of memory overhead rather than saving any.
What is the main advantage of using a min-heap when merging k sorted linked lists over straightforward pairwise merging?
Explanation: A min-heap ensures each node is inserted into and removed from the heap exactly once, while sequential pairwise merging re-examines nodes from earlier merges over and over. Memory use isn't lower with heaps, pairwise merging isn't faster, and a min-heap yields ascending, not descending, order.
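A back-of-the-envelope comparison, assuming k lists of n nodes each; the cost functions below are simplified node-visit estimates, not exact comparison counts:

    import math

    def sequential_pairwise_cost(k, n):
        # Merging lists one by one re-walks the growing result: ~2n + 3n + ... + kn.
        return sum(i * n for i in range(2, k + 1))

    def heap_cost(k, n):
        # Each of the k*n nodes passes through a heap of size at most k once.
        return k * n * max(1, math.ceil(math.log2(k)))

    for k in (4, 16, 64):
        print(k, sequential_pairwise_cost(k, 100), heap_cost(k, 100))

The gap widens as k grows, which is the practical advantage the explanation above describes.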
What is the primary role of the hash map in an LRU cache implementation?
Explanation: The hash map enables fast key-based lookups, ensuring O(1) access for put/get. Managing usage order is handled by the doubly linked list, not the map. Sorting and compression are not the map's functions in an LRU cache.
Which element is evicted when a Least Frequently Used (LFU) cache reaches capacity?
Explanation: LFU caches evict the item used least frequently, not by recency or key size. LRU caches evict least recently used, and neither policy depends on key numerical values.
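A deliberately simple LFU sketch that only illustrates the policy: eviction scans for the minimum frequency (O(n) rather than O(1)), and ties are broken arbitrarily instead of by recency. The class and method names are my own:

    class SimpleLFUCache:
        def __init__(self, capacity: int):
            self.capacity = capacity
            self.values = {}   # key -> value
            self.freq = {}     # key -> access count

        def get(self, key):
            if key not in self.values:
                return -1
            self.freq[key] += 1
            return self.values[key]

        def put(self, key, value):
            if self.capacity <= 0:
                return
            if key not in self.values and len(self.values) >= self.capacity:
                victim = min(self.freq, key=self.freq.get)   # least frequently used key
                del self.values[victim]
                del self.freq[victim]
            self.values[key] = value
            self.freq[key] = self.freq.get(key, 0) + 1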
Why might an iterative solution be preferred over pure recursion for deep cloning large graphs?
Explanation: An iterative solution, using an explicit stack or queue, avoids the stack overflow that recursion risks on large or deep graphs. Recursion isn't inherently slower, node sorting is unrelated, and recursion handles neighbor lists just as well.
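An iterative BFS variant of the cloning sketch above (it reuses the assumed Node class); the explicit queue replaces the call stack, so graph size no longer threatens Python's recursion limit:

    from collections import deque

    def clone_graph_iterative(start):
        if start is None:
            return None
        cloned = {start: Node(start.val)}        # original -> clone, as before
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nbr in node.neighbors:
                if nbr not in cloned:
                    cloned[nbr] = Node(nbr.val)  # first time seen: clone and enqueue
                    queue.append(nbr)
                cloned[node].neighbors.append(cloned[nbr])
        return cloned[start]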
What is the time complexity of inserting or removing a single element in a min-heap of size k while merging k sorted lists?
Explanation: Heap insertion and removal each take O(log k) time, which is crucial to achieving optimal merge performance. O(1) is too optimistic, O(k) would be a linear scan of an unsorted collection, and O(N) describes the total work across all nodes, not a single heap operation.
What property must a simplified canonical path, like the result of '/home//foo/', always satisfy?
Explanation: A canonical path collapses redundant slashes and never ends with a trailing slash unless it is the root '/' itself. Letter case is left unchanged, components are separated by single (never double) slashes, and directories resolved away by '..' navigation do not appear in the output.
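Running the earlier simplify_path sketch on inputs like these shows both properties:

    print(simplify_path('/home//foo/'))   # '/home/foo' -- double slash collapsed, no trailing slash
    print(simplify_path('/../'))          # '/'         -- cannot navigate above the root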
Why can pure recursion lead to stack overflow when solving graph traversal problems in interviews?
Explanation: Recursion grows the call stack, and too many nested calls can exceed the limit, especially with large graphs or cycles. Data serialization, slower execution, and inability to reach leaf nodes are not causes of stack overflow.
What often causes errors when updating the order of elements in a doubly linked list for an LRU cache?
Explanation: Problems usually come from mishandling prev/next pointers, leading to broken links. Access frequency is more relevant for LFU caches, mapping errors are hash map issues, and sorting is unnecessary in DLLs.
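A small sketch of the splices that usually go wrong, assuming sentinel head and tail nodes so prev and next are never None; DListNode, unlink, and insert_after are illustrative names:

    class DListNode:
        def __init__(self, key=None, value=None):
            self.key, self.value = key, value
            self.prev = self.next = None

    def unlink(node):
        # Rewire both neighbors before reusing the node anywhere else.
        node.prev.next = node.next
        node.next.prev = node.prev

    def insert_after(head, node):
        # Splice the node in right after head (e.g., a sentinel marking "most recent").
        node.prev, node.next = head, head.next
        head.next.prev = node
        head.next = node

Moving an entry to the front of an LRU list is just unlink followed by insert_after; forgetting one of the four pointer assignments is the classic bug.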
For merging k sorted linked lists with N total nodes, what is the worst-case time complexity of the optimal heap-based solution?
Explanation: Each of the N nodes is pushed onto and popped from the heap once, and each heap operation takes O(log k) time, giving O(N log k) overall. The other listed complexities either correspond to naive merging or do not apply.
How should a sequence of '...' (three dots) in a Unix-style path be treated during simplification?
Explanation: Only single and double dots have a special meaning; longer sequences like '...' are just directory names. They do not refer to any navigation rule, nor do they indicate errors or require collapsing.
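With the earlier simplify_path sketch, '...' survives as an ordinary directory name:

    print(simplify_path('/a/.../b/../c'))   # '/a/.../c'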