Handling Async Calls in Frontend Tests: Promises & APIs Quiz

Explore essential concepts for testing asynchronous calls, including Promises and API requests, in frontend code. This quiz covers best practices, common scenarios, and key techniques for writing robust async tests.

  1. Properly Waiting for Promises

    When testing a function that returns a Promise, which technique ensures the test waits for the Promise to resolve before making assertions?

    1. Returning the Promise in the test function
    2. Using setInterval to delay assertions
    3. Declaring the test function as synchronous
    4. Omitting a return statement in the test function

    Explanation: Returning the Promise in the test function ensures that the testing framework knows to wait for the Promise to resolve before checking assertions. Declaring the test function as synchronous means the framework cannot track async completion. Using setInterval does not synchronize test execution and can lead to unreliable results. Omitting a return statement in the test function leaves the test framework unaware of any pending asynchronous work, so assertions may run too early.
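
    A minimal sketch of both styles, assuming a Jest-style runner; `fetchUser` is a hypothetical function under test that returns a Promise.

    ```ts
    import { fetchUser } from "./api"; // hypothetical module under test

    test("resolves with user data (returned Promise)", () => {
      // Returning the Promise tells the runner to wait for it to settle
      // before the test is considered finished.
      return fetchUser(1).then((user) => {
        expect(user.id).toBe(1);
      });
    });

    test("resolves with user data (async/await)", async () => {
      // An async test function returns a Promise implicitly, so the
      // runner waits on the awaited call the same way.
      const user = await fetchUser(1);
      expect(user.id).toBe(1);
    });
    ```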

  2. Mocking API Responses

    Which approach allows you to simulate an API response when testing frontend code that makes fetch requests?

    1. Using localStorage to store API data
    2. Adding a try-catch block around fetch
    3. Replacing the fetch function with a stub
    4. Delaying the test using setTimeout

    Explanation: Replacing the fetch function with a stub is a common way to simulate API responses during tests, allowing control over returned data and timing. Delaying tests with setTimeout does not intercept or control network calls. Adding a try-catch can help handle errors but doesn't simulate responses. Using localStorage does not mimic an async API call and is only suitable for persisting client data.
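
    One common way to stub fetch, sketched under the assumption of a Jest-style runner; `loadProfile` and the `/api/profile` URL are hypothetical stand-ins for the code under test.

    ```ts
    import { loadProfile } from "./profile"; // hypothetical code that calls fetch

    test("returns data from the stubbed API", async () => {
      // Replace fetch with a stub that resolves like a minimal Response.
      const fetchStub = jest.fn().mockResolvedValue({
        ok: true,
        json: async () => ({ name: "Ada" }),
      });
      globalThis.fetch = fetchStub as unknown as typeof fetch;

      const data = await loadProfile();

      expect(fetchStub).toHaveBeenCalledWith("/api/profile");
      expect(data.name).toBe("Ada");
    });
    ```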

  3. Handling API Errors in Tests

    During async testing, what is the recommended way to verify that your code correctly handles API errors, such as a rejected Promise?

    1. Remove assertions related to API errors
    2. Assume all API calls succeed in tests
    3. Simulate a rejected Promise and assert error handling logic
    4. Let the test fail silently

    Explanation: Simulating a rejected Promise allows the test to verify that the code under test handles errors as expected, such as showing error messages or updating the UI. Removing assertions skips important checks and reduces reliability. Letting the test fail silently prevents you from catching broken error handling. Assuming all API calls succeed ignores the reality of potential failures and leaves error-handling paths untested.
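
    A sketch assuming Jest; `saveItem` is a hypothetical function that calls the API and propagates failures to its caller.

    ```ts
    import { saveItem } from "./items"; // hypothetical code under test

    test("surfaces API errors to the caller", async () => {
      // Stub fetch so the call rejects, simulating a network failure.
      globalThis.fetch = jest
        .fn()
        .mockRejectedValue(new Error("Network down")) as unknown as typeof fetch;

      // expect(...).rejects asserts that the returned Promise rejects
      // with the expected error.
      await expect(saveItem({ id: 1 })).rejects.toThrow("Network down");
    });
    ```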

  4. Testing Asynchronous UI Updates

    When a UI element updates as a result of an API call, what is a reliable method to assert the UI change in an async frontend test?

    1. Wait for the UI change using async methods before asserting
    2. Ignore async timing and use synchronous assertions
    3. Disable all API calls during tests
    4. Assert immediately after calling the function

    Explanation: Waiting for the UI change using async test methods ensures assertions run only after the API call has completed and the UI has updated. Asserting immediately after a function call may result in tests passing or failing randomly, depending on timing. Ignoring async timing or using only synchronous assertions is not reliable for async processes. Disabling API calls removes the scenario you intend to test.
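
    A sketch assuming React Testing Library with the jest-dom matchers installed; `Profile` is a hypothetical component that fetches data and renders a name.

    ```tsx
    import { render, screen } from "@testing-library/react";
    import { Profile } from "./Profile"; // hypothetical component under test

    test("shows the fetched name once it arrives", async () => {
      render(<Profile />);

      // findByText returns a Promise that resolves when the element
      // appears, so the assertion runs only after the async UI update.
      expect(await screen.findByText("Ada")).toBeInTheDocument();
    });
    ```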

  5. Best Practice for Clean Async Test Setup

    In a frontend test suite making repeated async API calls, what is a best practice for maintaining consistent test results?

    1. Reset or restore mocks and state before each test
    2. Share API mock responses across all tests without resetting
    3. Rely on real API endpoints in every test run
    4. Hard-code async responses within production logic

    Explanation: Resetting or restoring mocks and state before each test provides isolation, preventing one test's side effects from impacting others and ensuring predictable results. Sharing mocks without resetting leads to state leakage and inconsistent outcomes. Hard-coding responses inside production code mixes test logic with application logic, which is discouraged. Using real API endpoints risks flakiness, longer test times, and dependence on external systems.
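
    A sketch of per-test cleanup hooks, assuming Jest:

    ```ts
    beforeEach(() => {
      // Clear recorded calls and queued mock responses from prior tests.
      jest.resetAllMocks();
    });

    afterEach(() => {
      // Restore any spied-on originals (e.g. a patched global fetch).
      jest.restoreAllMocks();
    });
    ```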