Machine Learning Fundamentals Quiz

  1. SVM Kernels

    Which of the following is NOT a typical kernel used in Support Vector Machines?

    A. Linear kernel
    B. Polynomial kernel
    C. Radial basis function (RBF) kernel
    D. Sigmoid kernel
    E. Tangential kernel
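For reference, the four standard kernels (options A–D) can be written as plain functions of two vectors. This is a minimal pure-Python sketch; the `gamma`, `coef0`, and `degree` values are illustrative defaults, not prescribed by any library:

```python
import math

# The four standard SVM kernels as plain functions of two vectors.
# gamma, coef0, and degree are illustrative hyperparameter choices.

def dot(x, z):
    return sum(a * b for a, b in zip(x, z))

def linear(x, z):
    return dot(x, z)

def polynomial(x, z, degree=3, coef0=1.0):
    return (dot(x, z) + coef0) ** degree

def rbf(x, z, gamma=0.5):
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * sq_dist)

def sigmoid(x, z, gamma=0.5, coef0=0.0):
    return math.tanh(gamma * dot(x, z) + coef0)

x, z = [1.0, 2.0], [2.0, 0.5]
print(linear(x, z))      # 3.0
print(polynomial(x, z))  # (3 + 1) ** 3 = 64.0
print(rbf(x, z))         # exp(-0.5 * 3.25)
print(sigmoid(x, z))     # tanh(1.5)
```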
  2. Machine Learning Motivation

    What is the primary reason for the increasing adoption of Machine Learning?

    A. To replace all human jobs with computers.
    B. To solve real-world problems by learning from data instead of hard-coded rules.
    C. To make computers think like humans.
    D. To build more complex algorithms.
    E. Because everyone else is doing it.
  3. Classification vs. Regression

    Which task is suitable for classification?

    A. Predicting stock prices.
    B. Determining the temperature for tomorrow.
    C. Categorizing emails as spam or non-spam.
    D. Forecasting sales figures.
    E. Estimating the height of a building.
  4. Bias in Machine Learning

    What does bias in data indicate in machine learning?

    A. A perfectly balanced dataset.
    B. A complete lack of errors.
    C. Inconsistency in the data.
    D. High precision in predictions.
    E. The data is up to date.
  5. Cross-Validation

    What is the main purpose of cross-validation in machine learning?

    A. To increase the size of the training data.
    B. To reduce the training time of the model.
    C. To improve the model's performance on the training data.
    D. To assess how the results of a statistical analysis will generalize to an independent data set.
    E. To validate the code syntax.
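To make the idea concrete, here is a minimal pure-Python k-fold splitter: each sample is held out in exactly one fold, so averaging scores over the folds estimates performance on independent data:

```python
# Minimal k-fold split: each sample is held out exactly once, so the
# average held-out score estimates generalization to unseen data.

def k_fold_indices(n_samples, k):
    """Yield (train_indices, test_indices) for each of k folds."""
    indices = list(range(n_samples))
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

for train, test in k_fold_indices(10, 5):
    print(test, "held out;", len(train), "samples used for training")
```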
  6. Support Vectors

    What are support vectors in SVM?

    A. All data points used for training the model.
    B. The vectors that define the axes of the data.
    C. The data points closest to the separating hyperplane, which determine the margin.
    D. Features that are most relevant in determining the output.
    E. The lines used to graph the data.
  7. PCA Purpose

    What is the most common use for Principal Component Analysis (PCA)?

    A. Increasing the number of dimensions in a dataset.
    B. Improving the accuracy of regression models.
    C. Dimension reduction.
    D. Enhancing the visualization of high-dimensional data.
    E. Data encryption.
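As an illustration, PCA can be sketched in a few lines of NumPy: center the data, then project it onto the top-k eigenvectors of its covariance matrix. This is a sketch of the idea, not a production implementation:

```python
import numpy as np

# PCA sketch: project centered data onto the top-k eigenvectors of its
# covariance matrix, reducing dimensionality while keeping most variance.

def pca_reduce(X, k):
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
    top_k = eigvecs[:, np.argsort(eigvals)[::-1][:k]]  # k largest components
    return X_centered @ top_k

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))    # 100 samples, 5 features
X_reduced = pca_reduce(X, 2)     # reduced to 2 dimensions
print(X_reduced.shape)           # (100, 2)
```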
  8. Naive Bayes Assumption

    What is the 'naive' assumption in a Naive Bayes classifier?

    A. That the data is normally distributed.
    B. That all features are equally important.
    C. That all attributes are independent of each other.
    D. That there is no noise in the data.
    E. That the data does not have any missing values.
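The independence assumption can be seen directly in code: given the class, the likelihood factorizes into a product of per-feature probabilities. The word probabilities and priors below are made-up numbers for illustration:

```python
# The 'naive' assumption: given the class, features are independent, so the
# class-conditional likelihood factorizes into a product of per-feature terms.
# All probabilities below are invented spam/ham statistics for illustration.

p_word_given_class = {
    "spam": {"free": 0.8, "meeting": 0.1},
    "ham":  {"free": 0.1, "meeting": 0.7},
}
p_class = {"spam": 0.4, "ham": 0.6}

def score(email_words, cls):
    """Unnormalized posterior: P(class) * product of P(word | class)."""
    s = p_class[cls]
    for w in email_words:
        s *= p_word_given_class[cls][w]
    return s

email = ["free", "meeting"]
scores = {c: score(email, c) for c in p_class}
prediction = max(scores, key=scores.get)
print(scores, "->", prediction)  # ham wins: 0.042 vs 0.032
```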
  9. Unsupervised Learning

    Which of the following tasks is an example of unsupervised learning?

    A. Predicting housing prices based on features like size and location.
    B. Classifying images of cats and dogs.
    C. Grouping customers into segments based on their purchasing behavior.
    D. Identifying the gender of a person based on height and weight.
    E. Sorting books by title.
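A tiny one-dimensional k-means sketch shows what such unlabeled grouping looks like; the spend values and initial centers are invented for illustration:

```python
# Unsupervised example: group customers by annual spend with a tiny 1-D
# k-means. No labels are given; the algorithm discovers the segments itself.

def kmeans_1d(values, centers, iters=10):
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            nearest = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

spend = [120, 150, 130, 900, 950, 880]   # two obvious customer segments
centers, clusters = kmeans_1d(spend, centers=[100, 1000])
print(centers)   # roughly [133.3, 910.0]
print(clusters)  # low spenders vs. high spenders
```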
  10. Supervised Learning Example

    Which of the following is an example of supervised learning?

    A. Clustering similar articles together.
    B. Reducing the dimensionality of a dataset.
    C. Training a model to predict whether an email is spam based on labeled data.
    D. Discovering hidden patterns in customer transactions.
    E. Generating a new sequence of text based on existing text.
  11. F1 Score Calculation

    The F1 score is calculated using which two metrics?

    A. Accuracy and Specificity.
    B. Sensitivity and Specificity.
    C. Precision and Recall.
    D. Error and Variance.
    E. Bias and Variance.
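The formula F1 = 2 * precision * recall / (precision + recall) (the harmonic mean of the two) is easy to verify from raw counts:

```python
# F1 is the harmonic mean of precision and recall, computed here from
# raw true-positive / false-positive / false-negative counts.

def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 8 true positives, 2 false positives, 4 false negatives:
# precision = 0.8, recall = 8/12, so F1 = 16/22
print(f1_score(tp=8, fp=2, fn=4))
```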
  12. Precision Definition

    What does precision measure in machine learning?

    A. The proportion of actual positives that are correctly identified.
    B. The proportion of predicted positives that are actually positive.
    C. The overall correctness of the model.
    D. The number of false negatives.
    E. The number of data points.
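A short sketch contrasting precision (of the predicted positives, how many are correct) with recall (of the actual positives, how many are found), using a made-up label vector:

```python
# Precision = TP / (TP + FP): of everything flagged positive, how much
# really was positive? Recall = TP / (TP + FN): of the actual positives,
# how many did the model find?

def precision_recall(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return tp / (tp + fp), tp / (tp + fn)

y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0]   # 2 TP, 1 FP, 2 FN
precision, recall = precision_recall(y_true, y_pred)
print(precision, recall)      # 2/3 and 0.5
```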
  13. Tackling Overfitting

    What is a common technique to tackle overfitting in machine learning models?

    A. Adding more features to the dataset.
    B. Using a more complex model.
    C. Resampling the data and using k-fold cross-validation.
    D. Removing all the outliers from the data.
    E. Decreasing the learning rate.
  14. Ensemble Learning

    What is the primary goal of ensemble learning?

    A. To simplify the model and reduce training time.
    B. To create more powerful models by combining multiple machine learning models.
    C. To remove bias from the dataset.
    D. To increase the variance of the model.
    E. To reduce the need for data preprocessing.
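A minimal illustration of the idea: three imperfect toy rules vote, and the majority decision is correct even on inputs where individual members err:

```python
# Ensemble sketch: three weak classifiers vote, and the majority decision
# is often more accurate than any single member. The classifiers here are
# toy threshold rules on a single number (ground truth: x >= 10).

def majority_vote(classifiers, x):
    votes = [clf(x) for clf in classifiers]
    return max(set(votes), key=votes.count)

clfs = [
    lambda x: x >= 8,    # too eager: wrongly flags 8 and 9
    lambda x: x >= 12,   # too strict: misses 10 and 11
    lambda x: x >= 10,   # spot on
]

for x in [5, 9, 11, 15]:
    print(x, majority_vote(clfs, x), x >= 10)  # vote matches truth
```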
  15. Loss vs Cost Function

    What is the key difference between a Loss Function and a Cost Function?

    A. A loss function is used for regression; a cost function for classification.
    B. A cost function is used for a single data point; a loss function for multiple points.
    C. A loss function is computed for a single data point; a cost function aggregates loss over the entire training data.
    D. They are the same thing.
    E. A loss function is used during training; a cost function during testing.
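The distinction in code, using squared error as the loss (one of several common choices):

```python
# Squared-error loss for one example vs. the cost (mean loss) over the set.

def loss(y_true, y_pred):
    """Loss: error of a single prediction."""
    return (y_true - y_pred) ** 2

def cost(ys_true, ys_pred):
    """Cost: loss aggregated (here, averaged) over the whole training set."""
    return sum(loss(t, p) for t, p in zip(ys_true, ys_pred)) / len(ys_true)

ys_true = [3.0, 5.0, 7.0]
ys_pred = [2.5, 5.0, 8.0]
print([loss(t, p) for t, p in zip(ys_true, ys_pred)])  # [0.25, 0.0, 1.0]
print(cost(ys_true, ys_pred))                          # about 0.4167
```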