Feature Importance Fundamentals: Random Forests vs. XGBoost Quiz

Explore essential concepts of feature importance in Random Forest and XGBoost models. This quiz covers key terms, interpretation, and typical applications, helping you understand how both algorithms compute and use feature importance to give better insight into your models.

  1. Understanding Feature Importance Scores

    What does a high feature importance score indicate in a Random Forest model trained to predict house prices?

    1. The feature is categorical only
    2. The feature was not used in tree splits
    3. The feature has missing values
    4. The feature strongly contributes to accurate predictions

    Explanation: A high feature importance score signifies that the feature plays a significant role in making accurate predictions. Features with missing values may still receive high or low scores depending on their utility. Feature importance is not determined solely by whether a feature is categorical. If a feature was not used in any splits, its importance score would be low or zero.
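
    For a concrete illustration, here is a minimal sketch, assuming scikit-learn and a synthetic stand-in for house-price data (feature names such as sqft and age are made up for illustration). It fits a Random Forest regressor and prints each feature's importance score, so the most influential predictor stands out:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      n = 1000
      # Synthetic "house price" data: square footage drives price, the noise column does not.
      sqft = rng.uniform(50, 400, n)
      age = rng.uniform(0, 100, n)
      noise = rng.normal(size=n)
      price = 3000 * sqft - 500 * age + rng.normal(scale=10_000, size=n)

      X = np.column_stack([sqft, age, noise])
      feature_names = ["sqft", "age", "noise"]

      rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, price)
      for name, score in zip(feature_names, rf.feature_importances_):
          print(f"{name}: {score:.3f}")   # sqft should receive the highest score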

  2. Default Feature Importance Calculation in Random Forest

    Which method is commonly used by Random Forests to calculate default feature importance scores?

    1. Mean decrease in impurity (Gini importance)
    2. Linear regression coefficient
    3. Root mean square error
    4. Correlation coefficient

    Explanation: Random Forest models typically use mean decrease in impurity, often called Gini importance, to measure feature importance. This sums how much splits on each feature reduce node impurity (uncertainty), averaged across all trees in the forest. Root mean square error and the correlation coefficient are general model evaluation metrics, not feature importance measures in this context. Linear regression coefficients belong to linear models, not tree ensembles.
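
    In scikit-learn, the feature_importances_ attribute is this impurity-based measure. As a hedged, self-contained sketch on synthetic data, the forest-level score should closely match the average of the per-tree impurity-based importances:

      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.ensemble import RandomForestRegressor

      X, y = make_regression(n_samples=500, n_features=4, random_state=0)
      rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

      # The forest-level importance is (approximately) the mean of the per-tree
      # impurity-based importances, each of which is normalized to sum to 1.
      per_tree = np.array([tree.feature_importances_ for tree in rf.estimators_])
      print(np.round(per_tree.mean(axis=0), 4))
      print(np.round(rf.feature_importances_, 4))   # should closely match the line above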

  3. Permutation Importance in Practice

    If the accuracy of a Random Forest model drops significantly after shuffling the values of a variable, what does this suggest about the variable’s importance?

    1. The variable is very important to the model
    2. The variable must be removed
    3. The variable causes overfitting
    4. The variable is always zero

    Explanation: A significant accuracy drop after shuffling a variable’s values suggests the feature is important: the model relies on it to make accurate predictions. Overfitting is not directly indicated by this procedure, nor does the drop imply the variable is always zero. Removing an important feature is not warranted; on the contrary, the result indicates the variable is worth keeping.
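
    A minimal sketch of this procedure, assuming scikit-learn's permutation_importance helper and a held-out split (the dataset here is synthetic): each feature is shuffled several times and the resulting drop in accuracy is recorded.

      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.inspection import permutation_importance
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=2000, n_features=8,
                                 n_informative=3, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

      # Shuffle each feature several times on held-out data and measure the accuracy drop.
      result = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
      for i, drop in enumerate(result.importances_mean):
          print(f"feature {i}: mean accuracy drop = {drop:.3f}")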

  4. Handling of Correlated Features

    What effect does the presence of highly correlated features have on feature importance in tree-based models such as Random Forest and XGBoost?

    1. Their individual importance scores may be reduced due to split sharing
    2. Their importance scores always increase
    3. They are completely ignored by the model
    4. The model assigns zero importance to all but one

    Explanation: When features are highly correlated, tree-based models may split importance between them, reducing each feature's individual score. Their importance scores do not always increase; instead, the contribution is distributed. The model neither ignores correlated features entirely nor assigns zero importance to all but one; rather, the importance attributed to any single feature can be diluted.
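
    The dilution effect can be demonstrated with a hedged sketch: duplicate an informative feature and the impurity-based importance that would have gone to one column is typically shared between the two copies. Exact numbers vary with the data and random seed.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      n = 2000
      x1 = rng.normal(size=n)        # informative feature
      x1_dup = x1.copy()             # perfectly correlated duplicate
      x2 = rng.normal(size=n)        # irrelevant feature
      y = 5 * x1 + rng.normal(size=n)

      # Without the duplicate, x1 takes nearly all of the importance.
      rf_a = RandomForestRegressor(n_estimators=200, random_state=0)
      rf_a.fit(np.column_stack([x1, x2]), y)
      print(rf_a.feature_importances_)

      # With the duplicate present, the same credit is typically shared between the copies.
      rf_b = RandomForestRegressor(n_estimators=200, random_state=0)
      rf_b.fit(np.column_stack([x1, x1_dup, x2]), y)
      print(rf_b.feature_importances_)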

  5. Gain-Based Importance in XGBoost

    In XGBoost, which metric used for feature importance represents the average improvement in accuracy from splits involving a feature?

    1. Weight
    2. Frequency
    3. Gain
    4. Permutation

    Explanation: “Gain” measures the average improvement brought by splits involving a specific feature (in XGBoost, the average reduction in the training loss attributable to those splits). “Weight” and “frequency” count how many times a feature is used in splits, not how good those splits are. “Permutation” refers to a model-agnostic method applied outside the algorithm, not an internal XGBoost metric.
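
    A minimal sketch, assuming the xgboost Python package and synthetic data: the same fitted booster can report importance under different importance_type settings, with "gain" giving the average split improvement and "weight" simply counting splits.

      import numpy as np
      import xgboost as xgb

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 5))
      y = 3 * X[:, 0] + X[:, 1] + rng.normal(size=1000)

      model = xgb.XGBRegressor(n_estimators=100, random_state=0).fit(X, y)
      booster = model.get_booster()

      # "gain": average improvement from the splits that use each feature.
      print(booster.get_score(importance_type="gain"))
      # "weight": how many times each feature is used in a split.
      print(booster.get_score(importance_type="weight"))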

  6. Interpretation of Low Feature Importance

    A variable receives a very low importance ranking in a Random Forest model predicting customer churn. What is a valid interpretation?

    1. The variable contributes little to the model's predictive power
    2. The variable causes data leakage
    3. The variable should always be deleted
    4. The variable is always the target variable

    Explanation: A low importance score implies the feature does not notably influence the model’s predictions. The variable is not the target variable, as target variables are not used as features. Nothing here indicates data leakage, and low importance alone does not automatically justify deleting a feature: it may still prove useful after further investigation, for example through interactions with other features or once correlated predictors are removed.

  7. Feature Importance Visualization

    Which type of plot is commonly used to display feature importance rankings from Random Forest or XGBoost models?

    1. Heatmap
    2. Scatter plot
    3. Pie chart
    4. Bar plot

    Explanation: Bar plots are widely used to visualize feature importance, as they clearly show the relative importance of each feature. Pie charts do not effectively represent ranking or relative importance in this context. Scatter plots and heatmaps serve different purposes, such as showing relationships or correlations.
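
    One common approach, sketched here with matplotlib and a small synthetic Random Forest: sort the scores and draw a horizontal bar plot so the most important feature appears at the top. XGBoost also ships a similar helper, xgboost.plot_importance, which produces the same kind of chart.

      import matplotlib.pyplot as plt
      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.ensemble import RandomForestRegressor

      X, y = make_regression(n_samples=500, n_features=8, n_informative=3, random_state=0)
      rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

      importances = rf.feature_importances_
      names = np.array([f"feature_{i}" for i in range(len(importances))])

      order = np.argsort(importances)          # ascending, so the largest bar ends up on top
      plt.barh(names[order], importances[order])
      plt.xlabel("Importance")
      plt.title("Random Forest feature importance")
      plt.tight_layout()
      plt.show()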

  8. Bias in Default Importance Metrics

    Which issue is commonly associated with default feature importance measures in Random Forest and XGBoost?

    1. They guarantee unbiased rankings
    2. They can favor variables with more categories or unique values
    3. They always ignore categorical features
    4. They only work with time series data

    Explanation: Default importance measures can be biased toward features with many unique values or categories, because such features offer many candidate split points and can achieve spurious impurity reductions even when they carry little real signal. Categorical features are not ignored by default, these measures are not limited to time series data, and, importantly, they do not guarantee unbiased rankings.
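
    A hedged illustration of this bias: append a purely random, high-cardinality column (a unique ID) and compare its impurity-based importance against its permutation importance on held-out data. The impurity-based score is often surprisingly high, while the permutation score stays near zero; exact values depend on the data and seed.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.inspection import permutation_importance
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=2000, n_features=4, n_informative=2,
                                 random_state=0)
      random_id = np.arange(len(y)).reshape(-1, 1)   # high-cardinality, pure noise
      X = np.hstack([X, random_id])

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

      print(clf.feature_importances_[-1])            # impurity-based score of the ID column
      perm = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
      print(perm.importances_mean[-1])               # permutation score of the same column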

  9. Role of Feature Importance in Feature Selection

    How can feature importance scores assist during feature selection for a classification model?

    1. By eliminating all numerical features only
    2. By ensuring all features are retained
    3. By ranking models based on training speed
    4. By identifying and prioritizing the most helpful features

    Explanation: Feature importance helps rank the most useful predictors, supporting decisions about which features to keep or remove. Eliminating all numerical features is not typical or effective. Ranking models based on training speed is unrelated to feature importance. Retaining all features without evaluation is the opposite of purposeful feature selection.
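
    One common pattern, sketched here with scikit-learn's SelectFromModel (the median threshold is an arbitrary choice for illustration): keep only the features whose importance exceeds the threshold, then train on the reduced set.

      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.feature_selection import SelectFromModel

      X, y = make_classification(n_samples=2000, n_features=20,
                                 n_informative=5, random_state=0)

      selector = SelectFromModel(
          RandomForestClassifier(n_estimators=200, random_state=0),
          threshold="median",        # keep features above the median importance
      )
      X_reduced = selector.fit_transform(X, y)
      print(X.shape, "->", X_reduced.shape)
      print("kept feature indices:", selector.get_support(indices=True))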

  10. Comparing Random Forest and XGBoost Importance Usage

    Which statement accurately describes a similarity in how feature importance is used in Random Forest and XGBoost models?

    1. Both use linear regression weights for importance scoring
    2. Both require manual calculation of importances
    3. Both only consider the first feature listed in the data
    4. Both help interpret which features impact model predictions

    Explanation: Feature importance in both algorithms helps users understand which features influence predictions. Neither model restricts consideration to the first feature listed in the data. Both compute importance scores automatically during training, so no manual calculation is required. And neither uses linear regression weights; their importance calculations are based on how features are used in tree splits.
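
    As a closing sketch (assuming both scikit-learn and xgboost are installed, with synthetic data), the workflow for retrieving importances is nearly identical in the two libraries, which is what makes them directly comparable for interpretation:

      import numpy as np
      import xgboost as xgb
      from sklearn.datasets import make_regression
      from sklearn.ensemble import RandomForestRegressor

      X, y = make_regression(n_samples=1000, n_features=6, n_informative=3,
                             random_state=0)

      rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
      xgbm = xgb.XGBRegressor(n_estimators=200, random_state=0).fit(X, y)

      # Both expose importances computed automatically during training.
      print("Random Forest:", np.round(rf.feature_importances_, 3))
      print("XGBoost:     ", np.round(xgbm.feature_importances_, 3))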