Basic Definition
Which of the following best defines cache invalidation in distributed systems?
- The process of marking cached data as stale when the original data changes
- The act of increasing cache size to avoid data loss
- Duplicating data across multiple servers for redundancy
- Compressing cached data to optimize storage
- The process of encrypting cache data for security
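For context, a minimal sketch of the idea behind the correct answer, using one Python dict as the cache and another (db) as a stand-in for the source of truth; both names and the update_record helper are illustrative.

    cache = {}                    # key -> cached value
    db = {"user:1": "Alice"}      # stand-in for the authoritative data store

    def read(key):
        # Serve from cache when possible, otherwise fall back to the database.
        if key in cache:
            return cache[key]
        cache[key] = db[key]
        return cache[key]

    def update_record(key, value):
        # Writing to the source of truth makes the cached copy stale,
        # so the entry is invalidated (removed) here.
        db[key] = value
        cache.pop(key, None)      # cache invalidation

    read("user:1")                   # fills the cache
    update_record("user:1", "Bob")   # invalidates the now-stale entry
    print(read("user:1"))            # re-reads "Bob" from the database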
Cache Consistency Technique
What is a common technique to maintain cache consistency between nodes in a distributed system?
- Write-through caching
- Read-around caching
- Reverse proxying
- Load balancing
- Sharding with hashing
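A minimal write-through sketch, with in-memory dicts standing in for the cache and the backing store: every write is applied to both, so the cached copy never lags behind the store.

    cache = {}
    db = {}

    def write_through(key, value):
        # Write-through: the write hits the backing store and the cache
        # in the same operation, keeping the two consistent.
        db[key] = value
        cache[key] = value

    def read(key):
        return cache.get(key, db.get(key))

    write_through("product:42", {"price": 10})
    print(read("product:42"))   # always reflects the latest write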
Invalidation Scenario
If a data item is updated on server A, which method ensures all other cache nodes are instantly aware of the change?
- Broadcast invalidation
- Time-to-live (TTL) expiry
- Manual refresh
- Cache warming
- Random eviction
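A rough sketch of broadcast invalidation, with an in-process loop standing in for a real message broker or gossip channel; the CacheNode class and node names are hypothetical.

    class CacheNode:
        def __init__(self, name):
            self.name = name
            self.cache = {}

        def on_invalidate(self, key):
            # Each node drops its local copy as soon as it hears the broadcast.
            self.cache.pop(key, None)

    nodes = [CacheNode("A"), CacheNode("B"), CacheNode("C")]

    def broadcast_invalidate(key):
        # After server A updates the data, it notifies every cache node.
        for node in nodes:
            node.on_invalidate(key)

    nodes[1].cache["user:7"] = "old value"
    broadcast_invalidate("user:7")    # all nodes discard "user:7" at once
    print(nodes[1].cache)             # {}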
Time-to-Live
What is the main disadvantage of using only Time-To-Live (TTL) for cache invalidation?
- Stale data can be served until TTL expires
- It synchronizes all caches instantly
- TTL reduces server load significantly
- TTL requires manual invalidation of all entries
- There is no way to set TTL in distributed caches
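A small TTL-only sketch (the TTLCache class is illustrative) showing the drawback: once cached, a value keeps being served until its expiry time, even if the source changed in the meantime.

    import time

    class TTLCache:
        def __init__(self, ttl_seconds):
            self.ttl = ttl_seconds
            self.store = {}   # key -> (value, expires_at)

        def set(self, key, value):
            self.store[key] = (value, time.monotonic() + self.ttl)

        def get(self, key):
            entry = self.store.get(key)
            if entry is None:
                return None
            value, expires_at = entry
            if time.monotonic() >= expires_at:
                del self.store[key]   # only now is the entry dropped
                return None
            return value              # possibly stale if the source changed earlier

    cache = TTLCache(ttl_seconds=120)
    cache.set("config", "v1")
    # The database may move to "v2" seconds later, but get() keeps
    # returning "v1" until the 120-second TTL runs out.
    print(cache.get("config"))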
Soft vs Hard Invalidation
In a distributed caching system, what distinguishes soft invalidation from hard invalidation?
- Soft invalidation marks data as stale, hard invalidation removes it immediately
- Soft invalidation encrypts data, hard invalidation decrypts it
- Soft invalidation applies only to disk, hard to memory
- Soft invalidation always refreshes data eagerly
- Hard invalidation allows stale reads while soft does not
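A toy illustration of the distinction using a hypothetical Cache class: soft invalidation keeps the entry but flags it as stale (so it can still be served, for instance while a background refresh runs), while hard invalidation removes it outright.

    class Cache:
        def __init__(self):
            self.store = {}   # key -> {"value": ..., "stale": bool}

        def set(self, key, value):
            self.store[key] = {"value": value, "stale": False}

        def soft_invalidate(self, key):
            # Soft: keep the value but mark it stale.
            if key in self.store:
                self.store[key]["stale"] = True

        def hard_invalidate(self, key):
            # Hard: remove the entry; the next read must go to the source.
            self.store.pop(key, None)

    c = Cache()
    c.set("k", "v")
    c.soft_invalidate("k")
    print(c.store["k"])       # {'value': 'v', 'stale': True}
    c.hard_invalidate("k")
    print("k" in c.store)     # False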
Practical TTL Example
Given a memcached setup with TTL set to 120 seconds, what happens if the underlying database value changes after 30 seconds?
- Cache continues serving the old value until TTL expires
- Memcached immediately fetches the new value
- Database write triggers an auto-invalidation
- Cache triggers a write-through update to the database
- All cache nodes instantly flush the value
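A sketch of the scenario using the pymemcache client, assuming a memcached instance is running on localhost:11211; the key and values are made up.

    from pymemcache.client.base import Client

    client = Client(("localhost", 11211))

    # Cache the value with a 120-second TTL.
    client.set("product:42:price", b"10.00", expire=120)

    # ~30 seconds later the database row changes, but memcached has no
    # knowledge of that write: get() keeps returning the cached value
    # until the TTL expires (or the key is explicitly deleted).
    print(client.get("product:42:price"))   # b"10.00" until expiry

    # An explicit delete is what closes the gap before the TTL runs out:
    # client.delete("product:42:price")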
Invalidate-on-Write Technique
Which cache consistency technique is being used if every write operation to the primary data also removes or updates the cache entry?
- Invalidate-on-write
- Write-around cache
- Read-ahead cache
- Push-based notification
- Eventual consistency
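A minimal invalidate-on-write sketch over in-memory dicts: the same code path that writes to the primary store also drops the cached entry, so the next read repopulates it.

    cache = {}
    db = {}

    def write(key, value):
        # Invalidate-on-write: update the primary store and drop (or
        # overwrite) the cached copy in the same operation.
        db[key] = value
        cache.pop(key, None)      # alternatively: cache[key] = value

    def read(key):
        if key not in cache:
            cache[key] = db[key]  # repopulate on the next read (cache-aside)
        return cache[key]

    write("order:9", "pending")
    print(read("order:9"))    # "pending"
    write("order:9", "shipped")
    print(read("order:9"))    # "shipped": the stale entry was removed on write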
Distributed Invalidations
In which scenario would you most likely face challenges with cache invalidation due to network partitions?
- A distributed system with occasional network splits
- A single-node cache server
- A client-side browser cache
- A read-only distributed file system
- A monolithic application with no external dependencies
Code Snippet Analysis
What is the effect of this pseudocode in a cache system?

    if cache.has(key):
        cache.invalidate(key)
- It removes the cached entry if it exists
- It updates the entry to a new value
- It writes the cached entry to disk
- It refreshes the cache from the backend store
- It locks the key to prevent concurrent access
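A runnable version of the snippet, with has() and invalidate() implemented as thin wrappers over a dict; only the two method names come from the pseudocode, and the Cache class itself is hypothetical.

    class Cache:
        def __init__(self):
            self._store = {}

        def set(self, key, value):
            self._store[key] = value

        def has(self, key):
            return key in self._store

        def invalidate(self, key):
            # Removes the entry; it does not refresh, rewrite, or lock it.
            self._store.pop(key, None)

    cache = Cache()
    cache.set("session:1", "data")

    key = "session:1"
    if cache.has(key):
        cache.invalidate(key)

    print(cache.has(key))   # False: the entry was removed because it existed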
Read/Write Patterns
Why is cache invalidation particularly challenging in distributed systems with high write throughput?
- Frequent updates make it hard to keep all cache nodes in sync
- High write throughput reduces the need for cache
- Write traffic always bypasses the cache layer
- Cache nodes automatically merge changes without invalidation
- Cache only affects read operations and not write ones
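A back-of-envelope sketch (all numbers are illustrative) of why this is hard: invalidation traffic scales with the write rate times the number of cache nodes holding a copy, and every delayed or dropped message leaves a node serving stale data.

    # Illustrative numbers only.
    writes_per_second = 5_000
    cache_nodes = 12

    invalidations_per_second = writes_per_second * cache_nodes
    print(f"{invalidations_per_second:,} invalidation messages/s")   # 60,000/s

    # Any message that is delayed or lost leaves that node serving a stale
    # value until its TTL (if any) expires, which is why high write
    # throughput makes it hard to keep all cache nodes in sync.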