PyTorch Fundamentals: Tensors and Autograd Quiz

Enhance your foundational understanding of tensors and autograd, the core building blocks PyTorch uses for deep learning and automatic differentiation. This quiz covers tensor creation, operations, and the autograd system to reinforce core concepts and common use cases.

  1. Tensor Creation

    Which function is used to create a tensor with all zeros of size 3x2?

    1. ones((3,2))
    2. empty((3,2))
    3. zeros((3,2))
    4. rand((3,2))

    Explanation: The correct answer is zeros((3,2)), which constructs a tensor filled with zeros with 3 rows and 2 columns. ones((3,2)) would create a tensor of ones, and rand((3,2)) creates a tensor of random float values. empty((3,2)) creates a tensor with uninitialized values that are not guaranteed to be zeros.
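
    For illustration, a minimal sketch (assuming the standard torch package) comparing these constructors:

        import torch

        z = torch.zeros((3, 2))   # 3x2 tensor filled with zeros
        o = torch.ones((3, 2))    # 3x2 tensor filled with ones
        r = torch.rand((3, 2))    # 3x2 tensor of uniform random floats in [0, 1)
        e = torch.empty((3, 2))   # 3x2 tensor of uninitialized (arbitrary) values
        print(z)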

  2. Tensor Data Types

    What argument lets you specify the data type, such as float32 or int64, when creating a new tensor?

    1. mode
    2. dtype
    3. format
    4. type

    Explanation: The argument dtype is used to set the desired data type for a tensor, like float32 or int64. The option type is incorrect because it refers to the object’s type, not the tensor’s internal data type. mode and format are not valid arguments for defining tensor data types, making them unsuitable choices.
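
    A short sketch showing the dtype argument in use:

        import torch

        f = torch.tensor([1, 2, 3], dtype=torch.float32)  # 32-bit floating point
        i = torch.tensor([1, 2, 3], dtype=torch.int64)    # 64-bit integer
        print(f.dtype, i.dtype)  # torch.float32 torch.int64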

  3. Tensor Shape and Reshaping

    Given a tensor of size (4, 2), which method can reshape it to size (2, 4)?

    1. transpose
    2. convert
    3. viewshape
    4. reshape

    Explanation: The reshape method changes the shape of a tensor without altering its element order, so reshape is correct here for converting (4, 2) to (2, 4). viewshape is not a real method (the similarly named view method exists but is not among the options), and convert does not perform reshaping. transpose would also produce a (2, 4) shape, but it swaps the two dimensions and reorders the elements, and it does not generalize to arbitrary reshaping.
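
    A minimal sketch contrasting reshape and transpose on a (4, 2) tensor (torch.arange and torch.equal are used here only for illustration):

        import torch

        x = torch.arange(8).reshape(4, 2)   # shape (4, 2)
        y = x.reshape(2, 4)                 # shape (2, 4), row-major element order preserved
        t = x.transpose(0, 1)               # shape (2, 4), but axes swapped and elements reordered
        print(y.shape, t.shape)             # torch.Size([2, 4]) torch.Size([2, 4])
        print(torch.equal(y, t))            # False: reshape and transpose produce different layouts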

  4. Tensor Operations

    If a tensor a contains [1, 2] and b contains [3, 4], what is the result of a + b?

    1. [5, 8]
    2. [4, 6]
    3. [2, 6]
    4. [1, 3, 2, 4]

    Explanation: Element-wise addition of [1, 2] and [3, 4] results in [4, 6], since each pair of corresponding elements is summed: 1 + 3 = 4 and 2 + 4 = 6. Neither [5, 8] nor [2, 6] matches these sums, and [1, 3, 2, 4] interleaves the two tensors rather than adding them.
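
    A quick sketch of the element-wise addition (assuming the torch package):

        import torch

        a = torch.tensor([1, 2])
        b = torch.tensor([3, 4])
        print(a + b)   # tensor([4, 6]): corresponding elements are summed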

  5. Device Placement

    Which attribute tells you if a tensor is currently stored on the CPU or GPU?

    1. location
    2. device
    3. env
    4. platform

    Explanation: The device attribute specifies where the tensor’s memory is allocated, such as the CPU or a GPU. location is not a recognized tensor attribute, and neither env nor platform provides information about tensor storage within typical tensor frameworks.
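
    A small sketch of checking and moving a tensor's device (the GPU branch runs only if CUDA is available):

        import torch

        x = torch.ones(3)
        print(x.device)          # cpu

        if torch.cuda.is_available():
            y = x.to("cuda")
            print(y.device)      # cuda:0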

  6. Autograd Basics

    What attribute allows a tensor to record operations for automatic differentiation?

    1. gradient_record
    2. track_gradient
    3. requires_grad
    4. enable_autograd

    Explanation: Setting requires_grad to True tells the tensor to track all operations for gradient computation. enable_autograd and gradient_record are not valid attributes, and track_gradient is not a recognized property for automatic differentiation.
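
    A minimal sketch of enabling gradient tracking:

        import torch

        x = torch.tensor([2.0, 3.0], requires_grad=True)  # operations on x are now recorded
        y = (x ** 2).sum()
        print(y.requires_grad)   # True: y is part of the autograd graph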

  7. Gradient Storage

    After calling backward() on a scalar output, where is the computed gradient stored?

    1. In the .grad attribute of each leaf tensor
    2. In a new tensor object returned by backward()
    3. In the .requires_grad attribute
    4. In the .history attribute

    Explanation: Gradients are accumulated in the .grad attribute of leaf tensors that have requires_grad set to True and were involved in the computation. backward() does not return a tensor, so that option is incorrect. .requires_grad is only a flag, and .history is not an attribute used for tracking gradient values.
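
    A short sketch showing where the gradient ends up after backward():

        import torch

        x = torch.tensor([2.0, 3.0], requires_grad=True)
        y = (x ** 2).sum()   # scalar output
        y.backward()         # returns None; gradients accumulate in .grad of leaf tensors
        print(x.grad)        # tensor([4., 6.]), i.e. dy/dx = 2 * x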

  8. Disabling Gradient Tracking

    Which context manager is used to temporarily disable gradient calculation during evaluation?

    1. pause_grad
    2. no_grad
    3. block_grad
    4. stop_grad

    Explanation: Using the no_grad context manager allows computations to run without recording gradients, which improves memory and speed during inference. stop_grad, pause_grad, and block_grad are not valid context managers for this purpose and will lead to errors if used.
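
    A minimal sketch of disabling gradient tracking during evaluation (assuming the torch package):

        import torch

        x = torch.tensor([1.0], requires_grad=True)

        with torch.no_grad():        # no operations inside this block are recorded
            y = x * 2
        print(y.requires_grad)       # False: y has no autograd history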

  9. Detaching Tensors

    What is the purpose of calling detach() on a tensor?

    1. To create a new tensor that shares data but has no autograd history
    2. To delete the tensor from memory
    3. To change the tensor’s data type in place
    4. To save the tensor to disk

    Explanation: The detach() method produces a tensor that is not tracked for gradients, useful for inference or when gradients are not required for certain operations. It does not delete the tensor, as that is managed by garbage collection. Changing the data type requires another method, and saving to disk is achieved with different routines.
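
    A brief sketch of detach():

        import torch

        x = torch.tensor([1.0, 2.0], requires_grad=True)
        y = x * 3
        d = y.detach()           # shares data with y but carries no autograd history
        print(d.requires_grad)   # False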

  10. Tensor Gradients

    If a tensor has requires_grad set to False, what happens when backward() is called on a computation involving that tensor?

    1. Gradients are still stored in the .grad attribute
    2. No gradients are computed for that tensor
    3. An exception is always raised
    4. The tensor’s data becomes zero

    Explanation: When requires_grad is False, no gradients are calculated or stored for that tensor during differentiation, and its .grad attribute remains None. No exception is raised as long as at least one tensor in the computation requires gradients; an error occurs only if none of the inputs do. The tensor's data stays unchanged, making the other options incorrect.
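
    A minimal sketch of mixing tensors with and without gradient tracking (assuming the torch package):

        import torch

        w = torch.tensor([2.0], requires_grad=True)   # gradient will be computed for w
        c = torch.tensor([5.0])                       # requires_grad defaults to False
        y = (w * c).sum()
        y.backward()
        print(w.grad)   # tensor([5.])
        print(c.grad)   # None: no gradient is computed or stored for c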