Identifying Generative AI
Which of the following best describes a key difference between generative AI and traditional discriminative AI?
- A. Generative AI creates new content, while discriminative AI classifies existing data.
- B. Generative AI sorts data into categories, while discriminative AI invents data.
- C. Generative AI uses only labeled data, while discriminative AI uses unlabeled data.
- D. Generative AI can only translate text, discriminative AI can only summarize.
- E. Generative AI deletes irrelevant information, discriminative AI stores it.
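
For reference, a minimal sketch of the contrast, assuming a toy keyword classifier (discriminative) and a toy bigram text generator (generative); both models and the tiny corpus are made up for illustration and stand in for real trained models.

```python
import random
from collections import defaultdict

# Discriminative: map an existing input to a label.
def classify_sentiment(text: str) -> str:
    positive = {"great", "good", "love"}
    return "positive" if any(w in positive for w in text.lower().split()) else "negative"

# Generative: produce new text from a learned distribution (toy bigram model).
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def generate(start: str, length: int = 6) -> str:
    word, out = start, [start]
    for _ in range(length):
        word = random.choice(bigrams.get(word, corpus))
        out.append(word)
    return " ".join(out)

print(classify_sentiment("I love this movie"))  # classifies existing data
print(generate("the"))                          # creates new content
```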
Tokens in Language Models
In the context of large language models, what is a 'token'?
- A. A mathematical equation performed by the model
- B. A unit of text, such as a word or part of a word, used as input or output
- C. A binary code required for model authentication
- D. An image used for training the model
- E. A predefined answer stored in the database
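
A minimal sketch of subword tokenization with a toy, hand-picked vocabulary; the vocabulary and the greedy longest-match rule are illustrative assumptions, whereas real models use learned vocabularies (e.g., byte-pair encoding) with tens of thousands of pieces.

```python
# Toy subword vocabulary; real tokenizers learn these pieces from data.
VOCAB = ["token", "ization", "un", "believ", "able", " ", "s"]

def tokenize(text: str) -> list[str]:
    """Greedy longest-match split of text into known subword pieces."""
    tokens, i = [], 0
    while i < len(text):
        match = next(
            (p for p in sorted(VOCAB, key=len, reverse=True) if text.startswith(p, i)),
            text[i],  # fall back to a single character
        )
        tokens.append(match)
        i += len(match)
    return tokens

print(tokenize("unbelievable tokenizations"))
# ['un', 'believ', 'able', ' ', 'token', 'ization', 's']
```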
Understanding Prompt Engineering
What is one main goal of prompt engineering in large language models?
- A. Formatting input in a way that guides the model toward producing the desired output
- B. Coding new neural networks from scratch
- C. Training the model with audio samples
- D. Compressing the model's size
- E. Deleting incorrect responses automatically
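
A minimal sketch of a prompt template that steers the model toward a desired output format; the template wording and the `call_llm` helper are hypothetical placeholders for whichever client you actually use.

```python
# A prompt template that constrains role, task, and output format.
PROMPT_TEMPLATE = """You are a concise technical assistant.

Summarize the text below in exactly three bullet points.
Respond with the bullet points only, no preamble.

Text:
{document}
"""

def build_prompt(document: str) -> str:
    return PROMPT_TEMPLATE.format(document=document)

prompt = build_prompt("Large language models generate text one token at a time...")
print(prompt)
# The formatted prompt would then be sent to the model, e.g.:
# response = call_llm(prompt)   # call_llm is a hypothetical client function
```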
Retrieval Augmented Generation (RAG)
How does Retrieval Augmented Generation (RAG) improve large language model outputs?
- A. By randomly generating answers based on training data
- B. By retrieving relevant information from external data sources to support responses
- C. By compressing the text outputs
- D. By removing all ambiguous words from answers
- E. By prioritizing speed over accuracy
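
A minimal sketch of the RAG flow: embed the query, retrieve the most similar passages from a small in-memory store, and prepend them to the prompt. The hash-based `embed` function is a stand-in for a real embedding model and is purely illustrative.

```python
import math

def embed(text: str, dim: int = 8) -> list[float]:
    """Stand-in embedding: hash words into a fixed-length vector (illustrative only)."""
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))  # vectors are already normalized

documents = [
    "RAG retrieves passages from an external knowledge base.",
    "Temperature controls sampling randomness.",
    "Chunking splits long documents into smaller segments.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

query = "How does retrieval augmented generation find supporting facts?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```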
Purpose of Chunking
Why do we use chunking strategies in preparing data for large language models?
- A. To split long documents into meaningful segments that fit the model's context limits and can be retrieved efficiently
- B. To translate data into multiple languages automatically
- C. To increase the randomness in outputs
- D. To remove all small words from the dataset
- E. To convert text to images for better understanding
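
A minimal sketch of fixed-size chunking with overlap; the chunk size and overlap values are arbitrary assumptions, and real pipelines often split on sentence or section boundaries instead.

```python
def chunk_words(text: str, chunk_size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into overlapping word-window chunks."""
    words = text.split()
    step = chunk_size - overlap
    return [
        " ".join(words[start:start + chunk_size])
        for start in range(0, max(len(words) - overlap, 1), step)
    ]

long_text = "word " * 120  # stand-in for a long document
for i, chunk in enumerate(chunk_words(long_text)):
    print(i, len(chunk.split()), "words")   # 50, 50, 40 words with the defaults
```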
Vector Embeddings Explained
What is a vector embedding in the context of large language models?
- A. A fixed-length numeric representation of text capturing its meaning
- B. A picture used to summarize a sentence
- C. A database error code
- D. A direct copy of the original document
- E. An automatically generated graph of outputs
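
A minimal sketch showing that an embedding is a fixed-length list of numbers and that similar texts end up with nearby vectors; the tiny hand-written vectors below are made up for illustration, not produced by a real model.

```python
import math

# Hand-written 4-dimensional "embeddings" (illustrative values only).
embeddings = {
    "a cat sat on the mat":    [0.9, 0.1, 0.0, 0.2],
    "a kitten rests on a rug": [0.8, 0.2, 0.1, 0.3],
    "stock prices fell today": [0.0, 0.9, 0.7, 0.1],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

base = embeddings["a cat sat on the mat"]
for text, vec in embeddings.items():
    print(f"{cosine_similarity(base, vec):.2f}  {text}")
# The cat/kitten sentences score close to 1.0; the unrelated one scores much lower.
```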
Vector Database Basics
What is one key difference between a vector database and a traditional database?
- A. Vector databases store data as high-dimensional vectors for similarity search, while traditional databases use structured tables
- B. Vector databases only store images, traditional databases only store text
- C. Vector databases are less secure than traditional databases
- D. Traditional databases operate offline, vector databases require the internet
- E. Vector databases cannot be queried by users
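
A minimal sketch of the core operation a vector database adds over a traditional table lookup: similarity search over high-dimensional vectors. The brute-force scan below is illustrative only; real systems such as FAISS, pgvector, or Milvus use approximate nearest-neighbor indexes.

```python
import math

class TinyVectorStore:
    """Brute-force in-memory vector store (illustrative; real systems use ANN indexes)."""

    def __init__(self):
        self.rows = []  # list of (id, vector, payload)

    def insert(self, row_id, vector, payload):
        self.rows.append((row_id, vector, payload))

    def search(self, query, k=2):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb)

        ranked = sorted(self.rows, key=lambda r: cosine(query, r[1]), reverse=True)
        return [(r[0], r[2]) for r in ranked[:k]]

store = TinyVectorStore()
store.insert(1, [0.9, 0.1, 0.0], {"text": "refund policy"})
store.insert(2, [0.1, 0.9, 0.2], {"text": "shipping times"})
store.insert(3, [0.8, 0.2, 0.1], {"text": "returns and refunds"})

# Unlike a SQL equality or LIKE query, the question is "which rows are most similar?"
print(store.search(query=[0.85, 0.15, 0.05], k=2))
```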
Temperature Parameter Meaning
In large language models, what does the temperature parameter control?
- A. The randomness or creativity of the generated output
- B. The server's operating temperature
- C. The number of tokens per answer
- D. The model's training speed
- E. The amount of memory allocated
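
A minimal sketch of how temperature rescales the model's logits before sampling: lower values sharpen the distribution toward the most likely token, higher values flatten it. The logits and token names are made-up illustrative numbers.

```python
import math

def softmax_with_temperature(logits, temperature):
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

tokens = ["cat", "dog", "pizza"]
logits = [2.0, 1.5, 0.2]                  # illustrative next-token scores

for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(f"T={t}: " + ", ".join(f"{tok}={p:.2f}" for tok, p in zip(tokens, probs)))
# Low T concentrates probability on "cat"; high T spreads it more evenly.
```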
Stopping Criteria in LLMs
Which of the following is an example of a stopping criterion for a large language model?
- A. Setting a specific stop sequence, such as '\n', to indicate when the model should halt generation
- B. Instructing the model to use uppercase only
- C. Ordering results alphabetically
- D. Requesting images instead of text outputs
- E. Forcing the model to repeat the same sentence
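
A minimal sketch of stop-sequence handling on the client side: the generated text is truncated at the first occurrence of any configured stop string. The sample output and stop sequences are illustrative assumptions.

```python
def apply_stop_sequences(generated: str, stop_sequences: list[str]) -> str:
    """Truncate model output at the earliest occurrence of any stop sequence."""
    cut = min((generated.find(s) for s in stop_sequences if s in generated),
              default=len(generated))
    return generated[:cut]

raw_output = "Answer: 42\n\nUser: what about 43?"
print(repr(apply_stop_sequences(raw_output, ["\n\n", "User:"])))
# 'Answer: 42' -- generation is cut at the first stop sequence
```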
Purpose of Fine-Tuning
Why might someone fine-tune a large language model on their own data?
- A. To adapt the model so it performs better on specific tasks or domains relevant to the user
- B. To increase the number of hidden layers in the model
- C. To make the model train faster on new datasets only
- D. To transform text into audio output
- E. To disable all randomization in answers
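
A minimal sketch of what preparing fine-tuning data often looks like: domain-specific prompt/response pairs written to JSONL for a supervised fine-tuning job. The field names and examples are assumptions; match them to whichever training framework or API you actually use.

```python
import json

# Domain-specific examples the base model handles poorly (illustrative content).
training_examples = [
    {
        "prompt": "Classify this support ticket: 'My invoice total is wrong.'",
        "response": "category: billing",
    },
    {
        "prompt": "Classify this support ticket: 'The app crashes on login.'",
        "response": "category: bug",
    },
]

# Write one JSON object per line (JSONL), a common format for fine-tuning jobs.
with open("finetune_data.jsonl", "w", encoding="utf-8") as f:
    for example in training_examples:
        f.write(json.dumps(example) + "\n")

print(f"Wrote {len(training_examples)} examples to finetune_data.jsonl")
```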