Master the Metrics: Accuracy, Precision, Recall & F1-Score Quiz

  1. Identifying Accuracy

    If a model correctly predicts 90 out of 100 total cases, which metric describes this overall correctness?

    1. A. Recall
    2. B. Precision
    3. C. Accuracy
    4. D. Specificity
    5. E. F2-Score
  2. Understanding Precision

    In spam detection, if a model labels 30 emails as spam and 24 of them are actually spam, what metric measures the proportion of spam predictions that are correct?

    1. A. Recall
    2. B. Specificity
    3. C. Sensitivity
    4. D. Precision
    5. E. Precession
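    As a worked check for this scenario, precision is the fraction of positive predictions that are correct. A minimal Python sketch (the helper name `precision` is ours, not from any library):

    ```python
    def precision(tp: int, fp: int) -> float:
        # Precision = TP / (TP + FP): of everything the model flagged
        # as spam, what fraction really was spam?
        return tp / (tp + fp)

    # 30 emails labeled spam, 24 actually spam -> 6 false positives
    print(precision(tp=24, fp=6))  # 0.8
    ```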
  3. Defining Recall

    A disease detection test correctly identifies 80 out of 100 actual positive cases; which metric quantifies this ability to find all positives?

    1. A. Recall
    2. B. Predilection
    3. C. Accuracy
    4. D. Precision
    5. E. Fall-out
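    The ratio in this question can be computed the same way: recall is the fraction of actual positives the model finds. A minimal sketch (the helper name `recall` is ours):

    ```python
    def recall(tp: int, fn: int) -> float:
        # Recall = TP / (TP + FN): of all actual positive cases,
        # what fraction did the test detect?
        return tp / (tp + fn)

    # 80 of 100 actual positives detected -> 20 false negatives
    print(recall(tp=80, fn=20))  # 0.8
    ```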
  4. Explaining F1-Score

    Which metric combines both precision and recall into a single value using their harmonic mean?

    1. A. F1-Score
    2. B. F2-Score
    3. C. G-Mean
    4. D. ROC
    5. E. F0.5-Score
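    The harmonic mean referred to here can be sketched directly; note how a low value in either input drags the result down, unlike an arithmetic mean (the helper name `f1_score` is ours):

    ```python
    def f1_score(precision: float, recall: float) -> float:
        # Harmonic mean of precision and recall: if either is low,
        # F1 is low, so both must be reasonably high for a good score.
        return 2 * precision * recall / (precision + recall)

    # Perfect recall cannot compensate for mediocre precision:
    print(f1_score(0.5, 1.0))  # ~0.667, not the arithmetic mean 0.75
    ```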
  5. Precision in Context

    If a model predicts 50 items as positive, but only 20 are actually positive, which metric would decrease due to these many false positives?

    1. A. Specificity
    2. B. Precision
    3. C. Recall
    4. D. F1-Score
    5. E. Accuracy
  6. Recall Example

    In face recognition, if there are 40 real faces and the system finds 35 of them, what does the ratio 35/40 represent?

    1. A. Recall
    2. B. Prevalence
    3. C. Specificity
    4. D. Precision
    5. E. Error Rate
  7. False Positives Effect

    An increase in false positives in a binary classification problem will most directly cause which metric to decrease?

    1. A. Recall
    2. B. Precision
    3. C. Sensitivity
    4. D. Accuracy
    5. E. Preclusion
  8. Accuracy Calculation

    Out of 150 predictions, a classifier got 120 correct and 30 wrong; what is the accuracy?

    1. A. 80%
    2. B. 70%
    3. C. 60%
    4. D. 90%
    5. E. 50%
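    The arithmetic for this question is a straight ratio of correct predictions to total predictions. A minimal sketch (the helper name `accuracy` is ours):

    ```python
    def accuracy(correct: int, total: int) -> float:
        # Accuracy = correct predictions / all predictions
        return correct / total

    # 120 correct out of 150 predictions
    print(accuracy(120, 150))  # 0.8, i.e. 80%
    ```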
  9. F1-Score Usefulness

    Why would you use F1-Score instead of just accuracy for a very imbalanced dataset?

    1. A. Because F1-score penalizes both false positives and false negatives
    2. B. Because F1-score only measures true negatives
    3. C. F1-score is always 1 in imbalanced datasets
    4. D. F1-score ignores recall completely
    5. E. F1-score equals accuracy in all cases
  10. Precision vs Recall Focus

    In a system where missing a positive instance is very costly (e.g., medical diagnosis), which metric should be prioritized?

    1. A. Precision
    2. B. Recall
    3. C. Specificity
    4. D. F2-Score
    5. E. Selectivity