Enhance your foundational understanding of tensors and autograd functionality, essential elements of PyTorch used for deep learning and automatic differentiation. This quiz covers tensor creation, operations, and the autograd system to reinforce core concepts and common use scenarios.
Which function is used to create a tensor with all zeros of size 3x2?
Explanation: The correct answer is zeros((3,2)), which constructs a tensor filled with zeros with shape 3 rows by 2 columns. ones((3,2)) would create a tensor of ones, and rand((3,2)) creates a tensor of random floats drawn uniformly from [0, 1). empty((3,2)) creates a tensor of uninitialized memory whose values are arbitrary and not guaranteed to be zeros.
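For concreteness, a minimal PyTorch sketch of the four creation functions (assuming the standard torch package):

```python
import torch

z = torch.zeros((3, 2))  # 3x2 tensor filled with zeros
o = torch.ones((3, 2))   # 3x2 tensor filled with ones
r = torch.rand((3, 2))   # 3x2 tensor of uniform random floats in [0, 1)
e = torch.empty((3, 2))  # 3x2 tensor of uninitialized memory; values are arbitrary

print(z)
# tensor([[0., 0.],
#         [0., 0.],
#         [0., 0.]])
```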
What argument lets you specify the data type, such as float32 or int64, when creating a new tensor?
Explanation: The argument dtype sets the desired data type for a tensor, such as float32 or int64. The option type is incorrect because it refers to the object's type rather than the tensor's internal data type. mode and format are not valid tensor-creation arguments.
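A short sketch of passing dtype at creation time:

```python
import torch

f = torch.zeros((3, 2), dtype=torch.float32)  # 32-bit floating point
i = torch.zeros((3, 2), dtype=torch.int64)    # 64-bit integers
print(f.dtype, i.dtype)  # torch.float32 torch.int64
```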
Given a tensor of size (4, 2), which method can reshape it to size (2, 4)?
Explanation: The reshape method changes the shape of a tensor without altering its contents, so reshape is correct here for converting (4, 2) to (2, 4). viewshape is a non-existent method and convert does not perform reshaping. transpose(0, 1) would also produce shape (2, 4), but it swaps dimensions rather than reflowing the elements in row-major order, so it is not a general reshaping tool (see the sketch below).
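The reshape/transpose distinction is easiest to see with concrete values:

```python
import torch

a = torch.arange(8).reshape(4, 2)  # rows: [0,1], [2,3], [4,5], [6,7]
b = a.reshape(2, 4)                # rows: [0,1,2,3], [4,5,6,7] -- row-major reflow
t = a.transpose(0, 1)              # also shape (2, 4), but rows: [0,2,4,6], [1,3,5,7]
print(b.shape, t.shape)            # torch.Size([2, 4]) torch.Size([2, 4])
```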
If a tensor a contains [1, 2] and b contains [3, 4], what is the result of a + b?
Explanation: Element-wise addition of [1, 2] and [3, 4] results in [4, 6], as each pair of corresponding elements is added together. [2, 6] gets only the second element right, [1, 3, 2, 4] interleaves the two tensors rather than adding them, and [5, 8] does not match element-wise addition either.
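A one-line check confirms the element-wise behavior:

```python
import torch

a = torch.tensor([1, 2])
b = torch.tensor([3, 4])
print(a + b)  # tensor([4, 6])
```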
Which attribute tells you if a tensor is currently stored on the CPU or GPU?
Explanation: The device attribute specifies where the tensor's memory is allocated, such as a CPU or a graphics processing unit. location is not a recognized tensor attribute, and neither env nor platform provides information about tensor storage in typical tensor frameworks.
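A brief sketch, guarding the GPU move since CUDA may not be available on every machine:

```python
import torch

t = torch.zeros(3)
print(t.device)  # cpu

if torch.cuda.is_available():  # only move to the GPU if one is present
    t = t.to("cuda")
    print(t.device)  # cuda:0
```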
What attribute allows a tensor to record operations for automatic differentiation?
Explanation: Setting requires_grad to True tells the tensor to track all operations for gradient computation. enable_autograd and gradient_record are not valid attributes, and track_gradient is not a recognized property for automatic differentiation.
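Sketched in PyTorch:

```python
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)
y = (x * 3).sum()  # operations on x are recorded in the autograd graph
print(y.grad_fn)   # <SumBackward0 ...> -- evidence that tracking is active
```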
After calling backward() on a scalar output, where is the computed gradient stored?
Explanation: Gradients are stored in the .grad attribute of tensors that have requires_grad set to True and were involved in the computation. backward() does not return any tensor, so that option is incorrect. .requires_grad is only a flag, and .history is not a typical attribute for tracking gradient values.
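A worked example of the backward/grad flow:

```python
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)
y = (x ** 2).sum()  # scalar output: 1^2 + 2^2 = 5
y.backward()        # returns None; gradients accumulate on the leaf tensors
print(x.grad)       # tensor([2., 4.]) -- d(sum(x^2))/dx = 2x
```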
Which context manager is used to temporarily disable gradient calculation during evaluation?
Explanation: Using the no_grad context manager allows computations to run without recording gradients, which improves memory and speed during inference. stop_grad, pause_grad, and block_grad are not valid context managers for this purpose and will lead to errors if used.
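A minimal illustration:

```python
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)

with torch.no_grad():   # gradient recording is disabled inside this block
    y = (x * 2).sum()

print(y.requires_grad)  # False -- y carries no autograd history
```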
What is the purpose of calling detach() on a tensor?
Explanation: The detach() method returns a new tensor that shares the same underlying data but is detached from the computation graph, so it is not tracked for gradients; this is useful for inference or when gradients are not needed for certain operations. It does not delete the tensor, as memory is managed by garbage collection. Changing the data type requires another method, and saving to disk is handled by separate routines.
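A small sketch of what detach() returns (comparing data_ptr() is just one way to observe the shared storage):

```python
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)
y = x * 2
z = y.detach()  # same values, no autograd history
print(z.requires_grad)               # False
print(z.data_ptr() == y.data_ptr())  # True -- shares the same underlying storage
```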
If a tensor has requires_grad set to False, what happens when backward() is called on a computation involving that tensor?
Explanation: When requires_grad is False, no gradients are calculated or stored for that tensor during differentiation, and its .grad attribute remains None. No exception is raised as long as at least one tensor in the computation requires gradients; if none do, backward() raises a RuntimeError. The tensor's data stays unchanged, making the other options incorrect.
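A final sketch contrasting the two settings:

```python
import torch

a = torch.tensor([1.0, 2.0], requires_grad=False)  # not tracked
b = torch.tensor([3.0, 4.0], requires_grad=True)   # tracked

loss = (a * b).sum()
loss.backward()  # fine: at least one input requires gradients

print(a.grad)    # None -- no gradient was computed for a
print(b.grad)    # tensor([1., 2.]) -- d(sum(a*b))/db = a
```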