Understanding LLM Basics
What does the term 'Large Language Model' (LLM) refer to in the context of artificial intelligence?
- A computer program for storing large files
- A deep learning model trained on vast amounts of text data
- A programming language for mathematical calculations
- A database system for managing images
- An operating system for mobile devices
Architecture Difference
Which of the following best describes a key difference between transformers and earlier recurrent neural networks (RNNs)?
- Transformers process entire sequences in parallel, unlike RNNs that process sequentially
- RNNs can train on larger datasets than transformers
- Transformers use only one layer, while RNNs use multiple layers
- RNNs are faster than transformers on graphics processing units (GPUs)
- Transformers require handwritten grammar rules
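The correct option above can be illustrated with a minimal sketch (toy sizes and made-up weights, not a real model): an RNN must loop over time steps because each hidden state depends on the previous one, while self-attention computes scores for every token pair in a single matrix product.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 4, 8
x = rng.normal(size=(seq_len, d))  # a toy sequence of 4 token vectors

# RNN-style: each hidden state depends on the previous one,
# so this loop cannot be parallelized across time steps.
W = rng.normal(size=(d, d)) * 0.1
h = np.zeros(d)
for t in range(seq_len):
    h = np.tanh(x[t] @ W + h)

# Transformer-style self-attention: scores for ALL token pairs
# come from one matrix product, so the whole sequence is
# processed in parallel.
scores = x @ x.T / np.sqrt(d)                                # (seq_len, seq_len)
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
out = weights @ x                                            # (seq_len, d)

print(h.shape, out.shape)  # one final state vs one output per token
```

Note that the attention step produces an output for every position at once, which is what lets transformers exploit GPU parallelism during training.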
Word Representation in LLMs
In large language models, which technique replaced simple numerical lookup tables for representing words, capturing relationships such as similarity in meaning?
- Word embeddings using multi-dimensional vectors
- Shuffle arrays
- One-hot encoding
- Random tokenization
- Numbered word lists
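The idea behind the correct option can be sketched as follows (hand-made 3-dimensional vectors for illustration only; real embeddings are learned and have hundreds of dimensions): words with similar meanings end up close together in the vector space, which cosine similarity makes measurable.

```python
import numpy as np

# Toy, hand-made embeddings (illustrative values, not from a real model):
# each word is a multi-dimensional vector.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, 0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Similar meanings -> higher cosine similarity.
print(cosine(embeddings["king"], embeddings["queen"]))  # high
print(cosine(embeddings["king"], embeddings["apple"]))  # much lower
```

A numbered word list or one-hot encoding cannot express this: every pair of distinct words would be equally dissimilar.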
Applications of LLMs
Which of the following is a practical application of large language models, as mentioned in the context?
- Sorting digital photos by color
- Generating original copy and answering questions from knowledge bases
- Compressing video files
- Monitoring hardware temperature
- Designing circuit boards
Key Importance of LLMs
Why are large language models considered important in modern AI, based on the context provided?
- They can only be used for translating languages
- They allow one model to perform a variety of tasks using human language
- They require less electricity than other models
- They are primarily used to play music
- They need no input data to work