Reverse Proxy and Load Balancing Fundamentals Quiz

Assess your understanding of reverse proxy concepts and load balancing mechanisms using widely adopted server tools. This quiz covers basic strategies, popular algorithms, configuration options, and key distinctions in traffic distribution, making it ideal for anyone interested in backend web infrastructure.

  1. Definition of a Reverse Proxy

    Which best describes the core function of a reverse proxy server in web infrastructure?

    1. It stores copies of web pages to speed up load times.
    2. It blocks all incoming traffic from external sources.
    3. It forwards client requests to multiple backend servers and relays their responses to the client.
    4. It manages IP address allocation for devices in a local network.

    Explanation: A reverse proxy acts as an intermediary, forwarding client requests to one or more backend servers and then returning the response. The second option describes a firewall, which is not the same as a reverse proxy. While reverse proxies can cache content, simply storing web pages is more aligned with a caching proxy. Managing IP addresses is the job of a DHCP server, not a reverse proxy.
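The intermediary role described above can be sketched as a toy model, with plain functions standing in for real backend servers (all names here are illustrative, not real hosts):

```python
# Toy model of a reverse proxy: the client talks only to the proxy,
# which forwards the request to one of several backends and relays
# the backend's response back. Backend "handlers" are plain functions
# standing in for real servers.
def backend_a(request):
    return f"A handled {request}"

def backend_b(request):
    return f"B handled {request}"

backends = [backend_a, backend_b]
_counter = 0  # tracks which backend receives the next request

def reverse_proxy(request):
    """Forward the request to the next backend and relay its response."""
    global _counter
    server = backends[_counter % len(backends)]
    _counter += 1
    return server(request)

print(reverse_proxy("GET /"))  # A handled GET /
print(reverse_proxy("GET /"))  # B handled GET /
```

The client never learns which backend produced the response, which is exactly the intermediary behavior the correct answer describes.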

  2. Purpose of Load Balancing

    Which is the main goal of load balancing when distributing web requests?

    1. To permanently block specific IP addresses from accessing the site.
    2. To ensure requests are distributed evenly among available servers.
    3. To encrypt all communication between server and client.
    4. To limit users' bandwidth usage on the network.

    Explanation: Load balancing aims to distribute client requests evenly, preventing any single server from becoming overwhelmed and helping optimize resource usage. The third option refers to secure communication, which is the domain of SSL or TLS. Limiting user bandwidth is unrelated to load balancing. Blocking IP addresses is a security measure, not a balancing technique.

  3. Basic Load Balancing Algorithm

    If a load balancer assigns each new client request to servers in a repeating circular order, which algorithm is it using?

    1. Round Robin
    2. One-at-a-Time
    3. Random Pick
    4. Least Connections

    Explanation: The round robin algorithm sends each incoming request to the next server in a predefined list, cycling through all available servers. Random pick assigns to any server by chance. Least connections sends requests to the server with the fewest active connections. One-at-a-Time is not a standard algorithm and does not describe this behavior.
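Round robin is simple enough to express in a few lines; a minimal sketch using Python's `itertools.cycle` (backend names are placeholders):

```python
from itertools import cycle

# Hypothetical backend pool; names are placeholders, not real hosts.
servers = ["backend-a", "backend-b", "backend-c"]
next_server = cycle(servers)

def assign(request_id):
    """Return the next backend in circular order for each new request."""
    return next(next_server)

# Requests cycle through the pool and wrap around to the start.
order = [assign(i) for i in range(4)]
print(order)  # ['backend-a', 'backend-b', 'backend-c', 'backend-a']
```

Note that round robin ignores how busy each server is; that is what distinguishes it from least connections.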

  4. Static vs. Dynamic Backend Configuration

    What is a key difference between static and dynamic backend server configurations for load balancers?

    1. Static configurations work only with cloud-based servers, not physical machines.
    2. Static configurations require manual updating when servers change, while dynamic configurations can adjust automatically.
    3. Static configurations support only encrypted traffic, while dynamic ones do not.
    4. Static configurations route traffic based on user location, dynamic ones do not.

    Explanation: Static backend lists need to be manually changed if servers are added or removed, but dynamic configurations can discover and adjust to changes automatically. Traffic encryption is unrelated to static or dynamic configuration. Both configurations can work with any server type, and routing based on user location is a separate feature called geo-routing.
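The contrast can be illustrated with a toy registry; the addresses below are hypothetical, and a real dynamic setup would use a service-discovery system rather than in-process calls:

```python
# Static pool: fixed at deploy time; changing it means editing the
# configuration and reloading the load balancer.
STATIC_BACKENDS = ["10.0.0.1:8080", "10.0.0.2:8080"]  # placeholder addresses

class DynamicPool:
    """Toy service registry: backends announce themselves and can be
    removed at runtime, so the balancer's view adjusts without manual edits."""
    def __init__(self):
        self.backends = set()

    def register(self, addr):
        self.backends.add(addr)

    def deregister(self, addr):
        self.backends.discard(addr)

pool = DynamicPool()
pool.register("10.0.0.3:8080")
pool.register("10.0.0.4:8080")
pool.deregister("10.0.0.3:8080")   # server leaves; no config edit needed
print(sorted(pool.backends))       # ['10.0.0.4:8080']
```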

  5. Reverse Proxy Security

    How does a reverse proxy contribute to the security of backend servers?

    1. By allowing only secure FTP traffic.
    2. By hiding the backend servers' IP addresses from clients.
    3. By increasing the servers' CPU speed.
    4. By adding advertisements to web pages.

    Explanation: A reverse proxy shields the backend infrastructure by exposing only its own address to the outside world, helping to prevent direct access. Adding advertisements is unrelated and not a security function. Allowing only secure FTP traffic is not typical of reverse proxies. Increasing server CPU speed is not a service provided by proxies.

  6. Sticky Sessions Concept

    When a load balancer directs a user's subsequent requests to the same backend server, what is this technique called?

    1. Bandwidth Throttling
    2. Data Mirroring
    3. Session Termination
    4. Sticky Sessions

    Explanation: Sticky sessions, or session persistence, ensure that a user's requests are consistently handled by the same server, which is important for applications that maintain session data in memory. Data mirroring involves copying data, not traffic management. Session termination ends a session instead of maintaining it. Bandwidth throttling controls speed but not connection targeting.
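One common way to implement session persistence is to hash a stable client identifier, such as the source IP, so the same client always lands on the same backend. A minimal sketch (pool names and the example IP are illustrative):

```python
import hashlib

servers = ["backend-a", "backend-b", "backend-c"]  # hypothetical pool

def pick_server(client_id: str) -> str:
    """Hash the client identifier so the same client maps to the same
    backend for as long as the pool is unchanged."""
    digest = hashlib.sha256(client_id.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# The same client is routed consistently across requests.
first = pick_server("203.0.113.7")
second = pick_server("203.0.113.7")
print(first == second)  # True
```

Real load balancers often use cookies instead of IP hashing, since many clients can share one IP behind NAT, but the persistence principle is the same.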

  7. Proxy Forwarding Direction

    Which statement accurately describes the difference between a forward proxy and a reverse proxy?

    1. A forward proxy is always used for load balancing.
    2. A forward proxy acts for clients, while a reverse proxy acts for servers.
    3. A forward proxy encrypts data, but a reverse proxy does not.
    4. A reverse proxy provides internet access to internal users.

    Explanation: A forward proxy represents clients and hides client identity from servers, while a reverse proxy represents servers and hides their identity from clients. Internet access for internal users is achieved with a forward proxy, not reverse. Load balancing is often handled by a reverse proxy, not a forward. Both proxy types can encrypt or not, depending on their setup.

  8. Handling Server Failures

    If a backend server becomes unresponsive, what should a properly configured load balancer do?

    1. Redirect all requests to the failed server.
    2. Shut down all backend servers.
    3. Continue sending requests to all servers regardless of their responsiveness.
    4. Stop sending requests to the failed server and distribute requests to remaining servers.

    Explanation: A properly configured load balancer detects unresponsive servers and reroutes requests to those that are still functioning, maintaining service availability. Shutting down all servers is unnecessary and counterproductive. Continuing to send requests to failed servers results in errors. Redirecting all requests to the failed server worsens downtime.
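The failover behavior can be sketched by filtering the pool to healthy servers before distributing requests (server names and health states here are simulated):

```python
servers = ["backend-a", "backend-b", "backend-c"]
# Simulated health state: pretend backend-b has become unresponsive.
healthy = {"backend-a": True, "backend-b": False, "backend-c": True}

def route(request_id: int) -> str:
    """Round-robin over healthy servers only; the failed one is skipped."""
    live = [s for s in servers if healthy[s]]
    if not live:
        raise RuntimeError("no healthy backends available")
    return live[request_id % len(live)]

picks = [route(i) for i in range(4)]
print(picks)  # ['backend-a', 'backend-c', 'backend-a', 'backend-c']
```

The failed server receives no traffic, while the remaining servers absorb its share.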

  9. SSL Offloading in Proxies

    What does SSL offloading mean in the context of a reverse proxy handling HTTPS traffic?

    1. The proxy decrypts incoming requests, then forwards unencrypted requests to backend servers.
    2. The proxy blocks all SSL requests.
    3. The proxy forces all users to use HTTP instead of HTTPS.
    4. The proxy copies all traffic for analytics.

    Explanation: SSL offloading refers to the process where the proxy handles encryption and decryption, reducing the load on backend servers. Blocking SSL would disable secure access, which is not the goal. Forcing HTTP reduces security, so that's not part of SSL offloading. Copying traffic for analytics is unrelated to SSL handling.
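As a concrete illustration, TLS termination is often configured at the proxy itself. A hedged nginx-style sketch (certificate paths and the backend address are placeholders, not a production setup):

```nginx
# Hypothetical sketch: TLS terminates at the proxy, and the decrypted
# request is forwarded to the backend over plain HTTP.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/example.crt;  # placeholder path
    ssl_certificate_key /etc/nginx/certs/example.key;  # placeholder path

    location / {
        proxy_pass http://10.0.0.1:8080;  # backend receives unencrypted HTTP
    }
}
```

The backend servers never perform TLS handshakes, which is the CPU saving that SSL offloading provides.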

  10. Benefits of Health Checks

    Why are health checks important in a load balancing setup with multiple backend servers?

    1. They ensure that every request is always sent to the same server.
    2. They increase the number of available IP addresses.
    3. They prevent the use of secure communication protocols.
    4. They allow the load balancer to detect and avoid routing traffic to unhealthy or offline servers.

    Explanation: Health checks regularly test backend servers, so if one becomes unhealthy, the load balancer can bypass it and maintain service continuity. The first option describes sticky sessions, which are unrelated to health checks. Health checks do not block secure communications or increase IP addresses; their purpose is to improve reliability, not protocol or network management.
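The probing loop can be sketched as a function that keeps only the backends that respond; in a real balancer the probe would be an HTTP request (e.g. to a health endpoint) run on a timer, but here it is simulated:

```python
# Toy health checker: probe each backend with a check function and keep
# only those that respond. Real balancers run such probes periodically,
# e.g. an HTTP GET to a health endpoint every few seconds.
def run_health_checks(backends, probe):
    return {b for b in backends if probe(b)}

# Simulated probe: pretend backend-b has gone offline.
def fake_probe(backend):
    return backend != "backend-b"

alive = run_health_checks(["backend-a", "backend-b", "backend-c"], fake_probe)
print(sorted(alive))  # ['backend-a', 'backend-c']
```

Traffic is then routed only to the `alive` set, which is exactly the avoidance behavior the correct answer describes.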