WebGL Fundamentals: Rendering 2D and 3D Graphics Quiz

Challenge your understanding of WebGL concepts, including shaders, coordinate systems, drawing primitives, and the graphics pipeline. Perfect for learners and professionals seeking to evaluate their foundational skills in rendering 2D and 3D graphics with WebGL.

  1. Vertex Shader Purpose

    In WebGL, what is the primary role of the vertex shader when rendering a spinning 3D cube?

    1. It directly displays the graphics onto the screen.
    2. It applies colors and textures to the rendered surfaces.
    3. It transforms vertex positions from object space to clip space.
    4. It captures user input for animation control.

    Explanation: The vertex shader's main role is to process each vertex by transforming its coordinates from object space to clip space, allowing for proper positioning in 3D space. Applying colors and textures is typically handled by the fragment shader, not the vertex shader. Displaying graphics is managed by the rasterizer and the GPU pipeline as a whole, not directly by the vertex shader. Capturing user input is a task for JavaScript or UI logic, not handled within shaders.
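    The object-space to clip-space transform described above is typically a single matrix multiply inside the vertex shader. A minimal sketch follows; attribute and uniform names such as `a_position` and `u_mvpMatrix` are conventions chosen for this example, not WebGL requirements:

```javascript
// A minimal vertex shader, kept as a JavaScript string so it can be
// passed to gl.shaderSource(). For a spinning cube, u_mvpMatrix would
// combine the rotation (model), camera (view), and projection matrices.
const vertexShaderSource = `
  attribute vec4 a_position;   // vertex position in object space
  uniform mat4 u_mvpMatrix;    // combined model-view-projection matrix

  void main() {
    // Transform from object space to clip space; the GPU then performs
    // the perspective divide and viewport mapping automatically.
    gl_Position = u_mvpMatrix * a_position;
  }
`;
```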

  2. Coordinate System Origin

    When specifying 2D positions for vertices in normalized device coordinates (NDC) in WebGL, what are the limits for the x and y values that are guaranteed to be visible?

    1. Both x and y must be between -1.0 and 1.0.
    2. x must be between -1.0 and 1.0, and y between 0.0 and 1.0.
    3. x must be between -0.5 and 0.5, and y between -1.0 and 1.0.
    4. Both x and y must be between 0.0 and 1.0.

    Explanation: In NDC, the visible area is defined by x and y coordinates ranging from -1.0 to 1.0; values outside this range may be clipped and not displayed. The 0.0 to 1.0 range is commonly used for texture coordinates, not for vertex positions. The range -0.5 to 0.5 is not a standard boundary for visibility in NDC. Using 0.0 to 1.0 just for y is also incorrect, as both axes use -1.0 to 1.0 for visibility.
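    The visibility rule can be captured in a small helper; this is a sketch of the clipping condition itself, not a WebGL API call:

```javascript
// Returns true when a 2D NDC position is guaranteed visible.
// Both axes share the same [-1.0, 1.0] range; the 0..1 range belongs
// to texture coordinates, not vertex positions.
function isVisibleNDC(x, y) {
  return x >= -1.0 && x <= 1.0 && y >= -1.0 && y <= 1.0;
}

isVisibleNDC(0.5, -0.5); // true: inside the canonical square
isVisibleNDC(1.2, 0.0);  // false: x is outside [-1, 1], so it may be clipped
```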

  3. Drawing Primitives

    Which WebGL drawing mode should you use to render a rectangle with two triangles in one draw call?

    1. LINES
    2. TRIANGLE_FAN
    3. TRIANGLES
    4. POINTS

    Explanation: The TRIANGLES mode treats each consecutive set of three vertices as an independent triangle, which is ideal for building a rectangle from two triangles in a single draw call. LINES mode renders individual line segments between pairs of vertices, not filled shapes. POINTS mode displays only a point at each vertex. TRIANGLE_FAN can also cover a convex shape such as a rectangle, but it constructs triangles that all share the first vertex, rather than the independent triangle pairs the question describes.
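    With gl.TRIANGLES, a rectangle takes six vertices: two independent triangles sharing a diagonal edge. A sketch of the vertex data; the buffer upload and draw call are shown as comments since they require a live WebGL context:

```javascript
// Two triangles covering a rectangle in NDC from (-0.5, -0.5) to (0.5, 0.5).
// With gl.TRIANGLES, each consecutive group of three vertices is one triangle.
const rectangleVertices = new Float32Array([
  // Triangle 1
  -0.5, -0.5,
   0.5, -0.5,
   0.5,  0.5,
  // Triangle 2 (shares the diagonal with triangle 1)
  -0.5, -0.5,
   0.5,  0.5,
  -0.5,  0.5,
]);

// With a live context, the draw would look like:
//   gl.bufferData(gl.ARRAY_BUFFER, rectangleVertices, gl.STATIC_DRAW);
//   gl.drawArrays(gl.TRIANGLES, 0, 6);  // 6 vertices -> 2 triangles
```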

  4. Shader Compilation Errors

    If a fragment shader in WebGL fails to compile due to a syntax error, what is the most likely outcome when attempting to render the scene?

    1. No graphics will be displayed, and an error will be reported in the console.
    2. The scene will render, but all objects will appear completely black.
    3. WebGL will automatically correct the error and use a default shader.
    4. Only parts of the scene not using the fragment shader will appear.

    Explanation: A syntax error prevents the fragment shader from compiling, so the shader program cannot link; WebGL then cannot render any graphics, and an error message typically appears in the developer console. Objects won't simply appear black, because rendering halts entirely. WebGL does not autocorrect shader errors or substitute a default shader. Every visible fragment passes through the fragment shader, so no part of the scene can bypass it.
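    This failure mode is why robust WebGL code checks gl.getShaderParameter after compiling and reads gl.getShaderInfoLog for the error text. A sketch of that pattern using only standard WebGL calls (the helper name is ours):

```javascript
// Compiles a shader and surfaces syntax errors instead of silently
// continuing with a broken program. The info log is the same text
// browsers report in the developer console.
function compileShader(gl, type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    const log = gl.getShaderInfoLog(shader);
    gl.deleteShader(shader);
    throw new Error(`Shader failed to compile: ${log}`);
  }
  return shader;
}
```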

  5. Depth Testing Usage

    Why would you enable depth testing in WebGL when rendering overlapping 3D objects, such as a sphere passing in front of a cube?

    1. To increase rendering speed by reducing the number of fragments.
    2. To ensure closer objects correctly obscure those further away based on depth.
    3. To prevent the need for vertex shaders for each object.
    4. To improve the precision of color blending between transparent objects.

    Explanation: Depth testing lets WebGL keep track of which fragments are closest to the viewer, so objects in front can block those behind, providing correct 3D overlap. It does not directly increase rendering speed or reduce the number of fragments; in fact, it adds some processing overhead. Depth testing does not handle color blending or transparency; that's a separate blending feature. It also does not remove the requirement for vertex shaders, as these remain essential in the graphics pipeline.
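    Enabling the depth test is a one-time setup step, and the per-fragment comparison it performs can be sketched in plain JavaScript. The depthTestPasses helper below mirrors the default gl.LESS comparison and is illustrative only:

```javascript
// Setup with a live WebGL context (shown as comments here):
//   gl.enable(gl.DEPTH_TEST);  // turn on per-fragment depth testing
//   gl.depthFunc(gl.LESS);     // default: keep fragments closer to the viewer
//   gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);  // reset each frame

// What the GPU does for each fragment under gl.LESS: the incoming
// fragment survives only if its depth is smaller (closer) than the
// value already stored in the depth buffer at that pixel.
function depthTestPasses(incomingDepth, storedDepth) {
  return incomingDepth < storedDepth;
}

depthTestPasses(0.3, 0.7); // true: the closer sphere fragment hides the cube's
depthTestPasses(0.9, 0.7); // false: the farther fragment is discarded
```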