Essential Frontend Caching Concepts for GraphQL Quiz

Test your understanding of frontend caching with GraphQL, including normalized cache keys, cache invalidation after mutations, pagination merging, optimistic updates, and TTL strategies. This quiz covers practical basics essential for efficient data management in client-side GraphQL implementations.

  1. Normalized Cache Keys Concept

    What is typically used to form a normalized cache key for a GraphQL object on the frontend, such as for a user with id 7?

    1. combining the object's __typename and its id
    2. using only the query operation name
    3. using only the object's name property
    4. storing the full serialized object as the key

    Explanation: The standard approach is to combine the object's __typename and id, which generates a unique key for each entity. Using only the object's name property is problematic because names can repeat. Storing the full serialized object as the key is inefficient and unnecessary. Using just the query operation name does not uniquely identify entities, making cache management unreliable.
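
The convention above can be sketched as a small helper. This is an illustrative TypeScript sketch, not any particular client's API; the `Entity` shape and the `cacheKeyFor` name are assumptions.

```typescript
// Minimal sketch: every cacheable object is assumed to expose
// __typename and id (illustrative shape, not a specific client's API).
interface Entity {
  __typename: string;
  id: string | number;
}

// Combine the type name and id so keys are unique across entity types.
function cacheKeyFor(entity: Entity): string {
  return `${entity.__typename}:${entity.id}`;
}
```

With this helper, a user with id 7 normalizes to the key `User:7`.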

  2. Cache Invalidation After Mutations

    Why is cache invalidation often necessary after a mutation that updates a specific item, such as a to-do being marked complete?

    1. To avoid generating new cache keys for every mutation
    2. To prevent the UI from displaying outdated data after changes
    3. To increase the size of the cache for future use
    4. To allow users to undo their changes immediately

    Explanation: Cache invalidation ensures the UI reflects the latest data after mutations by updating or removing stale entries. Allowing undos is not directly related to cache invalidation. New cache keys are only needed when new unique objects are created, not for updates. Increasing cache size is unnecessary and goes against efficient caching practices.
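
One way to picture this is a tiny in-memory cache where the mutation result is merged over the stale entry. All names here (`Todo`, `writeTodo`, `applyMutationResult`) are hypothetical, and real clients handle this via their own APIs.

```typescript
// Hedged sketch of invalidation-by-update: a normalized cache keyed by
// "Type:id", with mutation results merged over the stale entry so the
// UI reads fresh data. Shapes and names are illustrative.
type Todo = { __typename: "Todo"; id: string; text: string; complete: boolean };

const todoCache = new Map<string, Todo>();

function writeTodo(todo: Todo): void {
  todoCache.set(`${todo.__typename}:${todo.id}`, todo);
}

// After a mutation marks a to-do complete, merge the server's result
// over the existing entry instead of leaving outdated data in place.
function applyMutationResult(result: Partial<Todo> & { id: string }): void {
  const key = `Todo:${result.id}`;
  const existing = todoCache.get(key);
  if (existing) todoCache.set(key, { ...existing, ...result });
}
```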

  3. Normalized Cache Example

    Given a post object with __typename 'Post' and id '123', what is the correct normalized cache key for it?

    1. Post:123
    2. 123
    3. Post_123_Title
    4. posts_array_index_1

    Explanation: The convention is to concatenate the __typename and id, separated by a colon—hence, Post:123. Post_123_Title is not a standard format and includes unnecessary information. Using array indices like posts_array_index_1 is unreliable, as item order often changes. Just '123' as a key omits necessary type context and risks collisions.

  4. Cache Update After List Mutation

    After a new comment is added via mutation, how should the normalized cache be adjusted on the frontend?

    1. Only update the root query node
    2. Leave the cache unchanged until the next page reload
    3. Insert the new comment object into the relevant normalized entity and associated lists
    4. Clear the entire cache to ensure freshness

    Explanation: After a mutation, the new entity should be normalized and added to the appropriate references, ensuring the UI reflects the change. Clearing the entire cache is generally too drastic and wastes resources. Updating only the root query node ignores normalization and does not propagate changes properly. Leaving the cache unchanged will result in a stale UI experience.
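
The two-step update can be sketched as follows. The entity/list shapes are assumptions for illustration, not the storage layout of any specific client.

```typescript
// Illustrative sketch: after an "add comment" mutation, store the new
// comment as a normalized entity and append its key to the post's
// comment list, so every consumer of that list sees the change.
type CommentEntity = { __typename: "Comment"; id: string; text: string };

const entities = new Map<string, CommentEntity>();
const lists = new Map<string, string[]>(); // e.g. "Post:1.comments" -> entity keys

function addCommentToCache(postId: string, comment: CommentEntity): void {
  const key = `${comment.__typename}:${comment.id}`;
  entities.set(key, comment); // 1. normalize the new entity
  const listKey = `Post:${postId}.comments`;
  const refs = lists.get(listKey) ?? [];
  lists.set(listKey, [...refs, key]); // 2. update the associated list of references
}
```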

  5. Optimistic Updates Usage

    What is the purpose of an optimistic update in a GraphQL frontend when performing an action such as toggling a like on a post?

    1. Temporarily disabling the cache during all updates
    2. Slowing down UI updates until server confirmation arrives
    3. Sending multiple mutation requests for the same action
    4. Immediately reflecting the expected result in the UI before the server responds

    Explanation: Optimistic updates allow users to see immediate feedback by assuming the mutation will succeed, greatly improving user experience. Slowing down updates is the opposite approach and leads to sluggish UIs. Sending multiple mutation requests is unnecessary and could cause duplicate actions. Disabling cache for updates is not related to optimistic updates and undermines performance.
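
The flow can be sketched as an apply-then-rollback pattern. Here `mutate` is a stand-in for a real GraphQL client call, and the names are illustrative.

```typescript
// Hedged sketch of an optimistic like toggle: flip the flag locally at
// once, then roll back if the mutation fails.
type PostState = { liked: boolean };

async function toggleLike(post: PostState, mutate: () => Promise<void>): Promise<void> {
  const previous = post.liked;
  post.liked = !previous; // optimistic: reflect the expected result immediately
  try {
    await mutate(); // server confirmation arrives in the background
  } catch {
    post.liked = previous; // mutation failed: roll the UI state back
  }
}
```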

  6. Pagination Cache Merge Strategy

    When fetching paginated lists (e.g., posts page by page), what is a common cache merge strategy on the frontend?

    1. Store each page in a separate cache without relation
    2. Replace the entire cached list with only the new page
    3. Concatenate the new page of items to the existing cached list
    4. Delete the cache every time new data arrives

    Explanation: Concatenating new pages to the existing cached array ensures the full dataset is accessible as users paginate. Replacing the entire list causes data loss from previous pages. Storing each page separately makes it difficult to present combined results. Deleting the cache defeats the purpose of caching and causes unnecessary network calls.
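
The concatenation policy is essentially a one-line merge function, similar in spirit to the field merge functions some normalized caches let you configure. Deduplication and cursor handling are deliberately omitted; the `mergePages` name is illustrative.

```typescript
// "Concatenate pages" merge policy: append the incoming page to
// whatever is already cached for this list field.
function mergePages<T>(existing: T[] | undefined, incoming: T[]): T[] {
  // On the first page there may be no existing data yet.
  return [...(existing ?? []), ...incoming];
}
```

For example, merging page two `["p3", "p4"]` into a cached `["p1", "p2"]` yields all four posts in order.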

  7. Cache Invalidation Timing

    In what scenario should a cache entry be invalidated on the frontend after a mutation?

    1. Whenever a query completes, regardless of changes made
    2. Immediately after every data read operation
    3. Only at regular time intervals, not based on data changes
    4. When the data associated with a normalized key changes on the server

    Explanation: Cache should be invalidated when underlying data has changed, ensuring the UI stays up to date. Invalidating on every query would undermine cache benefits. Doing so only at regular intervals risks staleness. Clearing cache immediately after every read discards valid, fresh data and causes unnecessary network usage.

  8. TTL (Time to Live) Trade-Offs

    What is a key trade-off of using a short TTL (Time to Live) for cached GraphQL responses on the frontend?

    1. Short TTLs make pagination impossible
    2. A short TTL stops the cache from using normalized keys
    3. The cache will stay up to date but cause more frequent network requests
    4. It ensures infinite cache freshness without any re-fetching

    Explanation: Short TTL values keep data fresh by expiring cache quickly, but this leads to more requests, increasing load and latency. Infinite freshness without re-fetching is impossible with any TTL. TTLs do not interfere with pagination functionality. Short TTL does not affect how cache keys are structured; normalization still applies.
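
The trade-off can be sketched with an expiry check. `now` is passed in explicitly to keep the example deterministic; all names are illustrative.

```typescript
// Sketch of a TTL check for cached responses: each entry records when
// it expires, and expired entries behave like cache misses.
type Entry = { value: string; expiresAt: number };

const responses = new Map<string, Entry>();

function putResponse(key: string, value: string, ttlMs: number, now: number): void {
  responses.set(key, { value, expiresAt: now + ttlMs });
}

// Returns undefined once the entry has expired, which is what forces
// the extra network request: the shorter the TTL, the more misses.
function getResponse(key: string, now: number): string | undefined {
  const entry = responses.get(key);
  if (!entry || now >= entry.expiresAt) return undefined;
  return entry.value;
}
```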

  9. Cache Key Collisions

    What risk is present if two different object types, such as 'User' and 'Post', use only their id for the cache key?

    1. TTL will be ignored by the frontend
    2. Cache reads will always be faster
    3. Different objects may overwrite each other's data in the cache
    4. Pagination will automatically merge those object types

    Explanation: Using only the id for cache keys risks collisions—such as a User with id 3 and a Post with id 3—where entries may be overwritten. Faster reads are not guaranteed and may even be compromised due to collision handling. Pagination logic is unaffected by the presence of such collisions. TTL is a separate feature not directly linked to key naming.
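
The collision is easy to demonstrate with two maps, one keyed by id alone and one by `__typename:id`. The shapes are illustrative only.

```typescript
// With id-only keys, a Post with id "3" silently overwrites a User
// with id "3"; prefixing the key with __typename keeps both entities.
type AnyEntity = { __typename: string; id: string };

const byIdOnly = new Map<string, AnyEntity>();
const byTypeAndId = new Map<string, AnyEntity>();

const user: AnyEntity = { __typename: "User", id: "3" };
const post: AnyEntity = { __typename: "Post", id: "3" };

for (const obj of [user, post]) {
  byIdOnly.set(obj.id, obj); // second write clobbers the first
  byTypeAndId.set(`${obj.__typename}:${obj.id}`, obj); // both survive
}
```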

  10. Limiting Stale Data with TTL

    How does setting a reasonable TTL in the cache configuration help prevent displaying stale data for rapidly changing GraphQL resources?

    1. It forces all mutations to be rejected
    2. It locks cached data indefinitely to avoid network calls
    3. It ensures cached items expire and are re-fetched after a certain period
    4. It reduces the number of normalized entities in the cache

    Explanation: TTL ensures items are automatically invalidated and refreshed, thus limiting the duration data can become stale. Locking data indefinitely is a misuse of cache, increasing the risk of outdated information. Reducing cached entity count does not inherently address staleness. TTL has no impact on accepting or rejecting mutations.