LLM Basics & Interview Prep: Essential Concepts Quiz — Questions & Answers

This quiz contains 10 questions. Below is a complete reference of all questions, answer choices, and correct answers. You can use this section to review after taking the interactive quiz above.

  1. Question 1: Identifying Generative AI

    Which of the following best describes a key difference between generative AI and traditional discriminative AI?

    • A. Generative AI creates new content, while discriminative AI classifies existing data.
    • B. Generative AI sorts data into categories, while discriminative AI invents data.
    • C. Generative AI uses only labeled data, while discriminative AI uses unlabeled data.
    • D. Generative AI can only translate text, discriminative AI can only summarize.
    • E. Generative AI deletes irrelevant information, discriminative AI stores it.

    Correct answer: A. Generative AI creates new content, while discriminative AI classifies existing data.

  2. Question 2: Tokens in Language Models

    In the context of large language models, what is a 'token'?

    • A. A mathematical equation performed by the model
    • B. A unit of text, such as a word or part of a word, used as input or output
    • C. A binary code required for model authentication
    • D. An image used for training the model
    • E. A predefined answer stored in the database

    Correct answer: B. A unit of text, such as a word or part of a word, used as input or output
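To make the idea concrete, here is a toy sketch of subword tokenization. The tiny vocabulary and the greedy longest-match rule are illustrative assumptions; real tokenizers such as BPE or WordPiece learn their vocabularies from large corpora.

```python
def toy_tokenize(word, vocab):
    """Greedy longest-match subword split over a hand-made vocabulary.
    Illustration only: real tokenizers (BPE, WordPiece) learn merges
    from data and process whole documents, not single words."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest remaining piece first; fall back to one character.
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in vocab or j == i + 1:
                tokens.append(piece)
                i = j
                break
    return tokens

vocab = {"token", "ization", "un", "believ", "able"}
print(toy_tokenize("tokenization", vocab))   # ['token', 'ization']
print(toy_tokenize("unbelievable", vocab))   # ['un', 'believ', 'able']
```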

  3. Question 3: Understanding Prompt Engineering

    What is one main goal of prompt engineering in large language models?

    • A. Formatting input in a way that guides the model toward producing the desired output
    • B. Coding new neural networks from scratch
    • C. Training the model with audio samples
    • D. Compressing the model's size
    • E. Deleting incorrect responses automatically

    Correct answer: A. Formatting input in a way that guides the model toward producing the desired output
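In practice, prompt engineering often amounts to assembling instructions and few-shot examples into a single input string. The template below is an illustrative assumption, not a prescribed format.

```python
def build_prompt(examples, query):
    """Assemble a few-shot sentiment prompt. The instruction wording
    and the Review/Sentiment labels are illustrative choices only."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    # End with an unanswered slot the model is guided to fill in.
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_prompt(
    [("Great film!", "positive"), ("Waste of time.", "negative")],
    "Loved every minute.",
)
print(prompt)
```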

  4. Question 4: Retrieval Augmented Generation (RAG)

    How does Retrieval Augmented Generation (RAG) improve large language model outputs?

    • A. By randomly generating answers based on training data
    • B. By retrieving relevant information from external data sources to support responses
    • C. By compressing the text outputs
    • D. By removing all ambiguous words from answers
    • E. By prioritizing speed over accuracy

    Correct answer: B. By retrieving relevant information from external data sources to support responses
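The retrieve-then-generate flow can be sketched with a toy keyword retriever. Production RAG systems rank documents by embedding similarity over a vector index, so treat the word-overlap scoring here as a stand-in.

```python
def retrieve(query, docs, k=1):
    """Toy retriever: rank documents by word overlap with the query.
    Stand-in for the embedding similarity search used in real RAG."""
    q = set(query.lower().split())
    return sorted(docs,
                  key=lambda d: len(q & set(d.lower().split())),
                  reverse=True)[:k]

docs = [
    "The Eiffel Tower is in Paris.",
    "Python was created by Guido van Rossum.",
]
context = retrieve("who created the python language", docs)[0]
# The retrieved passage is prepended to the question before calling the LLM.
prompt = f"Context: {context}\nQuestion: Who created the Python language?\nAnswer:"
print(prompt)
```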

  5. Question 5: Purpose of Chunking

    Why do we use chunking strategies in preparing data for large language models?

    • A. To split data into meaningful segments that fit within model context limits and improve retrieval
    • B. To translate data into multiple languages automatically
    • C. To increase the randomness in outputs
    • D. To remove all small words from the dataset
    • E. To convert text to images for better understanding

    Correct answer: A. To split data into meaningful segments that fit within model context limits and improve retrieval
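One common chunking strategy is a fixed-size sliding window with overlap, sketched below. The chunk sizes are arbitrary, and sentence- or heading-aware splitting are equally valid alternatives.

```python
def chunk_words(words, size=50, overlap=10):
    """Split a word list into overlapping fixed-size chunks. The
    overlap preserves context that a hard boundary would cut off."""
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, len(words), step)
            if words[i:i + size]]

words = "the quick brown fox jumps over the lazy dog again and again".split()
chunks = chunk_words(words, size=5, overlap=2)
print(chunks)
```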

  6. Question 6: Vector Embeddings Explained

    What is a vector embedding in the context of large language models?

    • A. A fixed-length numeric representation of text capturing its meaning
    • B. A picture used to summarize a sentence
    • C. A database error code
    • D. A direct copy of the original document
    • E. An automatically generated graph of outputs

    Correct answer: A. A fixed-length numeric representation of text capturing its meaning
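Cosine similarity is what makes "capturing meaning" operational: texts with related meanings get vectors pointing in similar directions. The 4-dimensional vectors below are made-up toy values; real embeddings have hundreds or thousands of dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Made-up 4-dimensional "embeddings", for illustration only.
cat = [0.9, 0.1, 0.0, 0.3]
kitten = [0.8, 0.2, 0.1, 0.3]
car = [0.1, 0.9, 0.8, 0.0]

print(cosine(cat, kitten))  # high: related meanings
print(cosine(cat, car))     # low: unrelated meanings
```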

  7. Question 7: Vector Database Basics

    What is one key difference between a vector database and a traditional database?

    • A. Vector databases store data as high-dimensional vectors for similarity search, while traditional databases use structured tables
    • B. Vector databases only store images, traditional databases only store text
    • C. Vector databases are less secure than traditional databases
    • D. Traditional databases operate offline, vector databases require the internet
    • E. Vector databases cannot be queried by users

    Correct answer: A. Vector databases store data as high-dimensional vectors for similarity search, while traditional databases use structured tables
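The difference shows up in a minimal in-memory sketch: instead of exact key lookup, a vector store answers "which stored vectors are closest to this one?". Real vector databases replace the brute-force scan below with approximate indexes such as HNSW or IVF.

```python
import math

class ToyVectorStore:
    """Brute-force nearest-neighbour store; illustration only."""

    def __init__(self):
        self.items = []  # list of (id, vector) pairs

    def add(self, item_id, vector):
        self.items.append((item_id, vector))

    def query(self, vector, k=1):
        # Rank every stored vector by Euclidean distance to the query.
        ranked = sorted(self.items, key=lambda iv: math.dist(vector, iv[1]))
        return [item_id for item_id, _ in ranked[:k]]

store = ToyVectorStore()
store.add("doc_cats", [0.9, 0.1])
store.add("doc_cars", [0.1, 0.9])
print(store.query([0.8, 0.2]))  # nearest stored item: 'doc_cats'
```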

  8. Question 8: Temperature Parameter Meaning

    In large language models, what does the temperature parameter control?

    • A. The randomness or creativity of the generated output
    • B. The server's operating temperature
    • C. The number of tokens per answer
    • D. The model's training speed
    • E. The amount of memory allocated

    Correct answer: A. The randomness or creativity of the generated output
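Mechanically, temperature divides the model's logits before the softmax, which is easy to verify numerically: low temperature concentrates probability on the top token, high temperature spreads it out. The logits below are made-up values.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax. Lower temperature
    -> sharper (more deterministic); higher -> flatter (more random)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.5))  # top token dominates
print(softmax_with_temperature(logits, 2.0))  # probabilities flatten
```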

  9. Question 9: Stopping Criteria in LLMs

    Which of the following is an example of a stopping criterion for a large language model?

    • A. Setting a specific stop sequence, such as '\n\n', to indicate when the model should halt generation
    • B. Instructing the model to use uppercase only
    • C. Ordering results alphabetically
    • D. Requesting images instead of text outputs
    • E. Forcing the model to repeat the same sentence

    Correct answer: A. Setting a specific stop sequence, such as '\n\n', to indicate when the model should halt generation
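Client-side, a stop sequence simply truncates the generated text at its first occurrence (API-side implementations stop decoding as soon as the sequence is produced). A minimal sketch:

```python
def apply_stop_sequence(text, stop="\n\n"):
    """Cut generated text at the first occurrence of the stop sequence;
    return it unchanged if the sequence never appears."""
    index = text.find(stop)
    return text if index == -1 else text[:index]

generated = "First paragraph of the answer.\n\nUnwanted second paragraph."
print(apply_stop_sequence(generated))  # First paragraph of the answer.
```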

  10. Question 10: Purpose of Fine-Tuning

    Why might someone fine-tune a large language model on their own data?

    • A. To adapt the model so it performs better on specific tasks or domains relevant to the user
    • B. To increase the number of hidden layers in the model
    • C. To make the model train faster on new datasets only
    • D. To transform text into audio output
    • E. To disable all randomization in answers

    Correct answer: A. To adapt the model so it performs better on specific tasks or domains relevant to the user
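Fine-tuning starts from task-specific training pairs. Many pipelines accept JSONL files of prompt/completion (or chat-message) records; the exact field names vary by provider, so treat the keys below as illustrative assumptions.

```python
import json

# Illustrative training records; field names vary by provider.
examples = [
    {"prompt": "Summarize: The meeting covered Q3 revenue and hiring.",
     "completion": "Q3 revenue and hiring update."},
    {"prompt": "Summarize: The team shipped the new login flow.",
     "completion": "New login flow shipped."},
]

# One JSON object per line is the usual JSONL layout for training files.
jsonl = "\n".join(json.dumps(example) for example in examples)
print(jsonl)
```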