Evaluation Metrics and Threshold Tuning Essentials Quiz

Test your understanding of how to choose and interpret evaluation metrics for classification and regression, with a special focus on threshold tuning for imbalanced data. This quiz will help you grasp key concepts essential for model evaluation and improvement.

  1. Classification vs Regression Metrics

    Which metric is most appropriate for evaluating a regression model predicting house prices?

    1. Recall
    2. Mean Squared Error
    3. Precision
    4. Accuracy

    Explanation: Mean Squared Error (MSE) is commonly used to assess how well regression models predict continuous values. Accuracy, precision, and recall are suited for classification problems where outputs are categories rather than numeric values. Applying these classification metrics to regression would not provide meaningful information about numerical prediction errors.
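    As a minimal illustration, MSE can be computed directly from predicted and actual prices. The numbers below are made up for the example:

```python
# Illustrative house prices (made-up values, same currency units).
actual    = [200_000, 350_000, 275_000]
predicted = [210_000, 340_000, 275_000]

# MSE: the average of the squared differences between actual and predicted.
mse = sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)
```

    Squaring the differences penalizes large errors much more heavily than small ones, which is why MSE is sensitive to outliers.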

  2. Imbalanced Classification

    When working with a highly imbalanced dataset, which metric is generally more informative than accuracy for classification performance?

    1. F1 Score
    2. Sum of Squared Errors
    3. Mean Squared Error
    4. R-squared

    Explanation: The F1 Score combines precision and recall, offering a balanced way of assessing model performance on imbalanced datasets. Mean Squared Error and Sum of Squared Errors are regression metrics, and R-squared is also for regression tasks. These metrics do not provide insight into how well a model distinguishes minority class instances in a classification setting.
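    A small pure-Python sketch, using toy labels, of how precision and recall combine into F1:

```python
# Imbalanced toy data: 1 = positive (rare class), 0 = negative.
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

precision = tp / (tp + fp)
recall    = tp / (tp + fn)
# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
```

    Note that a classifier predicting all zeros here would score 80% accuracy but an F1 of zero for the positive class.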

  3. Threshold Adjustment

    Why might adjusting the classification threshold from 0.5 to a lower value be helpful in detecting rare events?

    1. It improves the mean squared error
    2. It increases R-squared value
    3. It increases recall for the positive class
    4. It lowers the feature importance

    Explanation: Reducing the threshold below 0.5 allows the model to classify more instances as positive, which can increase recall for rare (positive) events. The mean squared error and R-squared are regression concepts, not directly affected by threshold changes in classification. Feature importance is unrelated to threshold decisions.
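    The effect can be seen with made-up predicted probabilities: lowering the threshold lets borderline positives through, raising recall.

```python
# Toy probabilities for the positive (rare) class; 1 = rare event.
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
probs  = [0.9, 0.4, 0.3, 0.6, 0.2, 0.1, 0.1, 0.05]

def recall_at(threshold):
    preds = [1 if p >= threshold else 0 for p in probs]
    tp = sum(t == 1 and y == 1 for t, y in zip(y_true, preds))
    fn = sum(t == 1 and y == 0 for t, y in zip(y_true, preds))
    return tp / (tp + fn)

low  = recall_at(0.25)  # the 0.4 and 0.3 positives are now caught too
high = recall_at(0.5)   # only the 0.9 positive clears the default cutoff
```

    The trade-off is that more negatives also get classified as positive, so precision typically falls as recall rises.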

  4. Precision vs Recall

    When avoiding false negatives is critical, as in medical diagnosis, which metric should be prioritized?

    1. Recall
    2. Precision
    3. Mean Absolute Error
    4. Root Mean Squared Error

    Explanation: Recall measures the model's ability to identify all actual positives, making it critical when false negatives are costly. Precision emphasizes minimizing false positives, which may be less important in these cases. Root Mean Squared Error and Mean Absolute Error evaluate regression models, not classification models.
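    A toy screening example (made-up labels) makes the distinction concrete: a model can have perfect precision while still missing half the sick patients.

```python
# 1 = disease present, 0 = healthy.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 0, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

precision = tp / (tp + fp)  # 1.0: no false alarms at all
recall    = tp / (tp + fn)  # 0.5: half the sick patients were missed
```

    In a diagnostic setting, the two missed patients (false negatives) are the costly errors, which is exactly what recall penalizes.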

  5. ROC Curve Interpretation

    What does a point in the top-left corner of an ROC curve represent?

    1. No predictive power
    2. High false positive rate
    3. High true positive rate and low false positive rate
    4. Low true positive rate

    Explanation: A point in the upper left means the model achieves a high rate of correctly detected positives while minimizing false alarms. High false positive rate would be to the upper right; low true positive rate is near the bottom; and random guessing (no predictive power) is represented by a diagonal line from bottom left to top right.
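    The two axes come straight from the confusion matrix. With illustrative counts at one threshold:

```python
# Confusion-matrix counts at a single threshold (illustrative numbers).
tp, fn = 90, 10   # positives: 90 caught, 10 missed
fp, tn = 5, 95    # negatives: 5 false alarms, 95 correct rejections

tpr = tp / (tp + fn)  # true positive rate: y-axis of the ROC plot
fpr = fp / (fp + tn)  # false positive rate: x-axis of the ROC plot
# (fpr, tpr) = (0.05, 0.9): a point near the top-left corner
```

    Sweeping the threshold from 1 down to 0 traces the full curve from (0, 0) to (1, 1).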

  6. Binary vs Multi-class Evaluation

    Which metric can be used for both binary and multi-class classification problems?

    1. Mean Squared Error
    2. F1 Score
    3. R-squared
    4. Root Error

    Explanation: The F1 Score is adaptable to both binary and multi-class classification tasks by using appropriate averaging strategies. Mean Squared Error and R-squared are regression-based and do not evaluate categorical outputs. 'Root Error' is not a standard metric in either setting.
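    One common strategy is macro-averaging: compute F1 per class, then take the unweighted mean. A pure-Python sketch on three toy classes:

```python
# Three-class toy example; macro F1 averages the per-class F1 scores.
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

def f1_for(cls):
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

macro_f1 = sum(f1_for(c) for c in (0, 1, 2)) / 3
```

    Other averaging strategies exist (e.g. weighting each class's F1 by its support), which matters when the classes are imbalanced.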

  7. Interpreting Regression Metrics

    If a regression model has a Mean Absolute Error (MAE) of 2, what does this value mean?

    1. The classification model has an accuracy of 2%.
    2. The model explained 2% of the variance.
    3. On average, predictions are within 2 units of the actual values.
    4. On average, predictions are off by 2%.

    Explanation: MAE indicates the average absolute difference between predicted and actual values, so a value of 2 means predictions differ by 2 units on average. Accuracy and explained variance are classification and regression metrics respectively, but not what MAE measures. The notion of 'off by 2%' confuses error magnitude with percentage error.
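    With made-up numbers, the "units, not percent" point is easy to see:

```python
# Toy regression targets in arbitrary units (e.g. thousands of dollars).
actual    = [10, 12, 9, 15]
predicted = [12, 12, 7, 14]

# MAE: the average absolute difference, in the same units as the target.
mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
# mae = 1.25 means predictions are off by 1.25 units on average, not 1.25%
```

    A percentage-style error would instead be something like MAPE, which divides each absolute error by the actual value.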

  8. Threshold Selection Strategy

    When might you want to manually select a classification threshold rather than using the default 0.5?

    1. When the model is overfitting
    2. When classes are highly imbalanced
    3. When using mean squared error
    4. When features are highly correlated

    Explanation: In imbalanced scenarios, the default threshold of 0.5 may not yield optimal sensitivity or specificity, so manual tuning helps improve results for minority classes. Mean squared error applies to regression, not threshold selection. Overfitting and feature correlation relate to model complexity and design, not directly to threshold choice.
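    One simple tuning strategy, sketched below with toy probabilities, is to sweep candidate thresholds and keep the one that maximizes a metric you care about (F1 here; precision or recall targets work the same way):

```python
# Toy imbalanced data: 3 positives out of 10, with predicted probabilities.
y_true = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]
probs  = [0.8, 0.45, 0.35, 0.55, 0.3, 0.2, 0.15, 0.1, 0.05, 0.05]

def f1_at(threshold):
    preds = [1 if p >= threshold else 0 for p in probs]
    tp = sum(t == 1 and y == 1 for t, y in zip(y_true, preds))
    fp = sum(t == 0 and y == 1 for t, y in zip(y_true, preds))
    fn = sum(t == 1 and y == 0 for t, y in zip(y_true, preds))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Pick the candidate threshold with the best F1 on held-out data.
best = max([0.3, 0.4, 0.5, 0.6], key=f1_at)
```

    To avoid an optimistic bias, the threshold should be chosen on a validation set, not on the test set used for the final report.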

  9. Metric for Skewed Data

    In a fraud detection dataset where fraud is rare, why can accuracy be misleading?

    1. Because accuracy measures regression errors
    2. Because accuracy is independent of predictions
    3. Because accuracy always equals recall
    4. Because the model can simply predict the majority class and achieve high accuracy

    Explanation: With rare events, predicting only the majority class leads to high accuracy, but fails to identify minority class cases—making accuracy less informative. Accuracy does not always equal recall, nor does it measure regression errors. Accuracy also depends on predictions and how they match true labels.
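    A quick sketch with a made-up 1% fraud rate shows the failure mode: a model that never predicts fraud still looks excellent by accuracy alone.

```python
# 990 legitimate transactions, 10 fraudulent; the "model" always says 0.
y_true = [0] * 990 + [1] * 10
y_pred = [0] * 1000

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

fraud_total  = sum(y_true)
fraud_caught = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
recall = fraud_caught / fraud_total
# accuracy is 0.99, yet recall for the fraud class is 0.0
```

    This is exactly why precision, recall, and F1 on the minority class are the metrics to watch in fraud detection.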

  10. Regression Metrics Purpose

    What does the R-squared metric indicate when evaluating regression models?

    1. The rate of false positives
    2. The number of predicted classes
    3. The proportion of correctly classified examples
    4. The proportion of variance in the target explained by the model

    Explanation: R-squared quantifies how much of the variability in the dependent variable is accounted for by the regression model. It does not relate to classification accuracy, false positives, or the number of classes, which are distinct aspects of classification models.
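    The definition can be written out directly: R-squared compares the model's squared error to the squared error of always predicting the mean. Toy values:

```python
# Toy regression data (made-up values).
actual    = [3.0, 5.0, 7.0, 9.0]
predicted = [2.5, 5.5, 7.0, 9.0]

mean_y = sum(actual) / len(actual)
ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))  # residual SS
ss_tot = sum((a - mean_y) ** 2 for a in actual)                # total SS

# R^2 = 1 - SS_res / SS_tot: the fraction of variance the model explains.
r2 = 1 - ss_res / ss_tot
```

    An R-squared of 1 means the model explains all the variance; 0 means it does no better than predicting the mean, and it can go negative for models that do worse than that baseline.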