Fundamentals of Large Language Model Techniques Quiz

Test your understanding of essential concepts and techniques in Large Language Models, including tokenization, efficient fine-tuning, decoding strategies, temperature settings, and masked language modeling. This quiz is designed for those seeking to grasp the basics of LLMs and their optimization in natural language processing applications.

  1. Tokenization Basics

    What does tokenization accomplish in the context of large language models?

    1. It compresses images for faster processing.
    2. It breaks text into smaller units called tokens such as words or subwords.
    3. It stores entire documents as single data points.
    4. It translates text directly into another language.

    Explanation: Tokenization is the process of splitting text into smaller units (tokens), which may be words, subwords, or characters, making it possible for LLMs to process and understand the input. Translating text, storing documents, or compressing images are unrelated and do not fulfill the specific goal of turning language into model-compatible sequences.
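
    As a rough illustration, here is a minimal Python sketch with a made-up vocabulary (not any particular model's tokenizer) showing how a sentence becomes tokens and then the integer IDs a model actually consumes:

      # Toy tokenizer: split on whitespace and map each token to an integer ID.
      # Real LLM tokenizers use learned subword vocabularies (BPE, WordPiece, etc.).
      vocab = {"<unk>": 0, "large": 1, "language": 2, "models": 3, "process": 4, "tokens": 5}

      def tokenize(text):
          words = text.lower().split()                          # split text into word-level tokens
          ids = [vocab.get(w, vocab["<unk>"]) for w in words]   # map tokens to IDs, unknowns to <unk>
          return words, ids

      tokens, ids = tokenize("Large language models process tokens")
      print(tokens)  # ['large', 'language', 'models', 'process', 'tokens']
      print(ids)     # [1, 2, 3, 4, 5]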

  2. Tokenization Example

    Given the word 'tokenization', how might a typical tokenization method process it?

    1. 'tokenzation'
    2. 'tokenization' as a whole
    3. 'to', 'ken', 'iza', 'tion'
    4. 'token' and 'ization'

    Explanation: Subword tokenization typically splits 'tokenization' into pieces such as 'token' and 'ization', which helps handle rare words while keeping the vocabulary small. Treating the entire word as a single token is possible but less flexible. Arbitrary fragments like 'to', 'ken', 'iza', 'tion' are unlikely under most subword schemes, and 'tokenzation' contains a typo, so it is not a valid tokenization at all.
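
    To see a real subword split, one option is the Hugging Face transformers library (a sketch assuming it is installed; the exact pieces depend on the tokenizer's learned vocabulary):

      from transformers import AutoTokenizer

      # WordPiece-style tokenizer; the '##' prefix marks a continuation of the previous subword.
      tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
      print(tokenizer.tokenize("tokenization"))   # e.g. ['token', '##ization']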

  3. LoRA Overview

    In the context of LLM fine-tuning, what does LoRA (Low-Rank Adaptation) primarily achieve?

    1. Removes layers to reduce accuracy
    2. Increases the model size by adding duplicated layers
    3. Adds trainable parameters to existing layers without increasing overall model size
    4. Changes raw text into tokens

    Explanation: LoRA freezes the original model weights and injects small low-rank matrices into selected layers; only these new matrices are trained, so the model's behavior can be adapted without meaningfully growing the model or retraining all of its parameters. It does not duplicate layers or remove them (which would hurt accuracy), and it has nothing to do with converting text into tokens.
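
    A minimal PyTorch sketch of the idea (illustrative only, with made-up rank and scaling choices rather than any library's exact implementation): the original weights are frozen and only two small low-rank matrices are trained.

      import torch
      import torch.nn as nn

      class LoRALinear(nn.Module):
          """A frozen linear layer plus a trainable low-rank update B @ A."""
          def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
              super().__init__()
              self.base = base
              for p in self.base.parameters():
                  p.requires_grad = False              # freeze the original weights
              self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
              self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
              self.scale = alpha / rank

          def forward(self, x):
              # Original output plus the scaled low-rank correction
              return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

      layer = LoRALinear(nn.Linear(768, 768))
      out = layer(torch.randn(2, 768))                 # same output shape as the base layer
      trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
      print(out.shape, trainable)                      # only the small A and B matrices are trainable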

  4. QLoRA Function

    What is the main advantage of using QLoRA over standard LoRA?

    1. It removes the need for tokenization.
    2. It increases computation time by making the model larger.
    3. It translates model outputs to new languages.
    4. It further reduces memory usage by quantizing model weights to lower bits.

    Explanation: QLoRA builds on LoRA by quantizing the frozen base model weights to a lower bit width, typically 4-bit, which substantially reduces memory usage during fine-tuning while the small LoRA adapters are still trained in higher precision. It does not make the model larger or slower, and it is unrelated to translation or tokenization.
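
    The memory saving comes from storing the frozen base weights in a low-bit format. Below is a toy NumPy sketch of symmetric 4-bit quantization, a deliberate simplification of what QLoRA does (QLoRA uses the more elaborate NF4 scheme and keeps the LoRA adapters in higher precision):

      import numpy as np

      def quantize_4bit(w):
          """Map float weights to integers in [-7, 7] plus one float scale factor."""
          scale = np.abs(w).max() / 7.0
          q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)  # two 4-bit values per byte in practice
          return q, scale

      def dequantize(q, scale):
          return q.astype(np.float32) * scale

      w = np.random.randn(4, 4).astype(np.float32)
      q, scale = quantize_4bit(w)
      print(np.abs(w - dequantize(q, scale)).max())   # small reconstruction error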

  5. Beam Search Concept

    Which statement best describes beam search in text generation for LLMs?

    1. It ignores all but the least probable words.
    2. It randomly selects next words without ranking.
    3. It always picks only the highest-probability word at every step.
    4. It keeps multiple top candidate sequences at each step to find the most likely output.

    Explanation: Beam search maintains several hypotheses at each step, allowing the model to explore multiple promising sequences rather than just the single best local choice. Greedy decoding picks only the top word, making it less flexible. Random or lowest-probability selection does not characterize beam search.
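
    A toy Python sketch of the mechanism (the "model" here is a hypothetical fixed next-token distribution, not a real LLM): at every step all extensions of the current beams are scored and only the best few are kept.

      import math

      def beam_search(next_logprobs, beam_width=2, steps=3):
          """Keep the top `beam_width` partial sequences at every step."""
          beams = [([], 0.0)]                       # (sequence so far, cumulative log-probability)
          for _ in range(steps):
              candidates = []
              for seq, score in beams:
                  for tok, lp in next_logprobs(seq).items():
                      candidates.append((seq + [tok], score + lp))
              # Prune: keep only the highest-scoring hypotheses
              beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
          return beams

      def toy_model(prefix):
          # Same distribution regardless of prefix, purely for illustration
          return {t: math.log(p) for t, p in {"the": 0.5, "a": 0.3, "cat": 0.2}.items()}

      for seq, score in beam_search(toy_model):
          print(seq, round(score, 2))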

  6. Greedy vs. Beam Search

    How does greedy decoding differ from beam search during LLM output generation?

    1. Greedy decoding chooses only the highest-scoring word at each step.
    2. Greedy decoding uses temperature to control randomness.
    3. Greedy decoding sorts outputs alphabetically.
    4. Greedy decoding maintains several possible word sequences in parallel.

    Explanation: Greedy decoding selects the most probable next token at each step, resulting in a single path. Only beam search keeps multiple paths. Sorting alphabetically or adjusting temperature are not defining characteristics of greedy decoding.
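
    For contrast, a self-contained greedy decoding sketch (again with a hypothetical fixed next-token distribution standing in for a real model) commits to one token per step and never keeps alternatives:

      def greedy_decode(next_probs, steps=3):
          """Pick the single most probable token at every step."""
          seq = []
          for _ in range(steps):
              probs = next_probs(seq)                 # {token: probability} for this prefix
              seq.append(max(probs, key=probs.get))   # keep only the local best; no parallel paths
          return seq

      print(greedy_decode(lambda prefix: {"the": 0.5, "a": 0.3, "cat": 0.2}))
      # ['the', 'the', 'the']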

  7. Temperature Parameter

    What effect does increasing the temperature parameter in LLM text generation have?

    1. It always chooses the most probable token, giving deterministic results.
    2. It increases the diversity of possible outputs by making less likely tokens more probable.
    3. It translates text into multiple languages automatically.
    4. It decreases randomness, making text more repetitive.

    Explanation: Temperature divides the logits before the softmax, so a higher temperature flattens the probability distribution, giving less likely tokens a better chance of being sampled and producing more varied outputs. A lower temperature does the opposite, sharpening the distribution and making generation more deterministic. Automatic translation is unrelated, and raising the temperature adds randomness rather than reducing it.
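
    A small NumPy sketch (with made-up logits for three candidate tokens) makes the flattening visible: dividing the logits by the temperature before the softmax spreads probability toward the less likely tokens as the temperature rises.

      import numpy as np

      def softmax_with_temperature(logits, temperature):
          z = np.array(logits) / temperature
          z = z - z.max()                       # subtract the max for numerical stability
          p = np.exp(z)
          return p / p.sum()

      logits = [2.0, 1.0, 0.1]                  # made-up scores for three candidate tokens
      for t in (0.5, 1.0, 2.0):
          print(t, np.round(softmax_with_temperature(logits, t), 3))
      # Higher temperature -> flatter distribution; lower temperature -> sharper distribution.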

  8. Low Temperature Behavior

    If the temperature parameter is set very close to zero during text generation, what is most likely to happen?

    1. The model outputs are always random and surprising.
    2. The model runs multiple decoding paths simultaneously.
    3. The model splits all words into individual letters.
    4. The model outputs become highly predictable and repetitive.

    Explanation: As the temperature approaches zero, sampling collapses toward always picking the single most probable token (effectively greedy decoding), so outputs become highly predictable and often repetitive. Maximum randomness comes from higher temperatures, not lower ones. Running multiple decoding paths describes beam search, and splitting words into pieces describes tokenization; neither is controlled by temperature.
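
    A quick sampling experiment (same made-up logits as above, kept self-contained) shows the collapse toward the top token at low temperature:

      import numpy as np

      rng = np.random.default_rng(0)
      logits = np.array([2.0, 1.0, 0.1])             # made-up scores for tokens A, B, C

      def pick_frequencies(temperature, n=1000):
          z = logits / temperature
          p = np.exp(z - z.max())
          p /= p.sum()
          draws = rng.choice(len(logits), size=n, p=p)
          return np.bincount(draws, minlength=len(logits)) / n

      print("T=0.1:", pick_frequencies(0.1))   # almost always token A: predictable, repetitive
      print("T=2.0:", pick_frequencies(2.0))   # choices spread across tokens: varied output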

  9. Masked Language Modeling Purpose

    What is the purpose of masked language modeling (MLM) during LLM pretraining?

    1. To help models learn context by predicting missing tokens in a sentence.
    2. To generate random text without context.
    3. To translate masked text into different formats.
    4. To reduce the number of model parameters.

    Explanation: MLM builds contextual understanding by hiding some input tokens and training the model to predict them from the surrounding text, which strengthens its grasp of semantics and sentence structure. Translation and parameter reduction are not goals of MLM, and generating purely random text is unrelated.
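
    A minimal Python sketch of how MLM training examples are built (the 15% masking rate and the sample sentence are illustrative choices, not a fixed rule): some tokens are hidden, and the hidden originals become the prediction targets.

      import random

      def make_mlm_example(tokens, mask_token="[MASK]", mask_prob=0.15, seed=1):
          """Randomly hide tokens; the hidden originals become the labels to predict."""
          rng = random.Random(seed)
          inputs, labels = [], []
          for tok in tokens:
              if rng.random() < mask_prob:
                  inputs.append(mask_token)   # the model sees the mask...
                  labels.append(tok)          # ...and is trained to recover the original token
              else:
                  inputs.append(tok)
                  labels.append(None)         # unmasked positions contribute no loss
          return inputs, labels

      inputs, labels = make_mlm_example("the cat sat on the mat because it was tired".split())
      print(inputs)
      print(labels)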

  10. Masked vs. Unmasked Input

    Why does masking certain words in MLM improve LLM language understanding?

    1. It ensures only common words are used in training.
    2. It compresses sentence length for efficiency.
    3. It forces the model to use context clues to infer missing information.
    4. It reduces training time by skipping sentences.

    Explanation: By masking words, the model must analyze the remaining context to correctly predict them, strengthening its grasp of language patterns. Skipping sentences, focusing only on common words, or shortening input do not contribute to contextual learning in the same targeted way.
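
    To watch a pretrained MLM model use context clues this way, one option is the Hugging Face fill-mask pipeline (a sketch assuming the transformers library is installed; running it downloads a model):

      from transformers import pipeline

      # BERT-style models were pretrained with MLM, so they can fill a masked position from context.
      unmasker = pipeline("fill-mask", model="bert-base-uncased")
      for pred in unmasker("The cat sat on the [MASK]."):
          print(pred["token_str"], round(pred["score"], 3))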