Scikit-learn Essentials Quiz: Pipelines, Preprocessing, and Models

Assess your foundational knowledge of Scikit-learn concepts, including the use of pipelines, data preprocessing techniques, and selecting appropriate machine learning models. This quiz is designed for those seeking to strengthen their practical skills in machine learning workflows using Scikit-learn's core features.

  1. Purpose of Data Splitting

    Why is it important to split a dataset into training and testing sets before fitting a machine learning model?

    1. To prevent missing values
    2. To evaluate model performance on unseen data
    3. To avoid overfitting by removing features
    4. To reduce the size of the dataset

    Explanation: Splitting the dataset allows you to assess the model's performance on data it has not seen before, giving a realistic measure of generalization. Reducing dataset size and preventing missing values are not the primary goals of splitting data. Removing features is unrelated to the purpose of splitting; feature selection is a different step.
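    For reference, the split described above is typically done with `train_test_split`; a minimal sketch (the toy data here is illustrative):

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split

    # Toy data: 100 samples, 3 features, binary target (values are illustrative)
    X = np.arange(300).reshape(100, 3)
    y = np.array([0, 1] * 50)

    # Hold out 25% of the rows for evaluation; random_state makes the split reproducible
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=42
    )

    print(X_train.shape, X_test.shape)  # (75, 3) (25, 3)
    ```

    The model is then fitted only on `X_train`/`y_train` and scored on the held-out `X_test`/`y_test`.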

  2. Function of StandardScaler

    What does the StandardScaler preprocessing technique do when applied to a feature matrix X?

    1. It removes all outliers from the data
    2. It converts all values to their ranks
    3. It transforms features to the range [0,1]
    4. It scales features to have zero mean and unit variance

    Explanation: StandardScaler standardizes each feature by removing the mean and scaling to unit variance, which helps many algorithms perform better. Ranking values changes the order but not the scale, scaling to [0,1] is done by a different scaler (MinMaxScaler), and StandardScaler does not specifically remove outliers.
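    A quick sketch of this standardization on a tiny illustrative matrix:

    ```python
    import numpy as np
    from sklearn.preprocessing import StandardScaler

    # Two features on different scales (values are illustrative)
    X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])

    scaler = StandardScaler()
    X_scaled = scaler.fit_transform(X)

    # After scaling, each column has mean ~0 and unit variance
    print(X_scaled.mean(axis=0))  # ~[0. 0.]
    print(X_scaled.std(axis=0))   # ~[1. 1.]
    ```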

  3. Pipeline Construction

    What is the primary benefit of using a pipeline in a machine learning workflow?

    1. To visualize data distributions
    2. To randomly shuffle the data
    3. To chain preprocessing and modeling steps for reproducibility
    4. To increase the number of features

    Explanation: Pipelines allow you to encapsulate multiple steps such as preprocessing and modeling, ensuring the workflow is repeatable and easier to manage. Visualizing distributions is separate from pipeline construction, and increasing features or shuffling data are unrelated to the main purpose.
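    For instance, a minimal pipeline chaining a scaler and a classifier (the synthetic data is illustrative):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = make_classification(n_samples=100, n_features=4, random_state=0)

    pipe = Pipeline([
        ("scale", StandardScaler()),      # preprocessing step
        ("model", LogisticRegression()),  # final estimator
    ])

    pipe.fit(X, y)           # fits the scaler, then the model on scaled data
    preds = pipe.predict(X)  # applies the same scaling before predicting
    ```

    Because both steps live in one object, the whole workflow can be fitted, saved, and reused as a single unit.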

  4. Imputation Choices

    If you choose SimpleImputer with the strategy 'mean', what happens to missing values in a numeric column?

    1. They are left untouched
    2. They are filled with the mode
    3. They are replaced with the column mean
    4. They are replaced with zeros

    Explanation: The 'mean' strategy fills missing values using the mean of the column. Filling with zeros corresponds to strategy='constant', filling with the most frequent value (mode) corresponds to strategy='most_frequent', and leaving missing values untouched is not an imputation strategy at all.
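    A small sketch showing the 'mean' strategy in action (the column values are illustrative):

    ```python
    import numpy as np
    from sklearn.impute import SimpleImputer

    # One numeric column with a missing value
    X = np.array([[1.0], [np.nan], [3.0]])

    imputer = SimpleImputer(strategy="mean")
    X_filled = imputer.fit_transform(X)

    # The mean of the observed values [1, 3] is 2, so NaN becomes 2
    print(X_filled.ravel())  # [1. 2. 3.]
    ```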

  5. Handling Categorical Data

    What is usually the most appropriate preprocessing method for converting categorical text features into numerical form for machine learning?

    1. Using StandardScaler
    2. Dropping categorical columns
    3. Applying SimpleImputer with mean strategy
    4. Using OneHotEncoder

    Explanation: OneHotEncoder transforms categorical variables into binary vectors, suitable for most models. StandardScaler is intended for numerical data, SimpleImputer with mean strategy cannot be applied to text, and dropping the column removes useful information instead of encoding it.
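    A minimal sketch of one-hot encoding a single text column (the category values are illustrative; `.toarray()` converts the default sparse output to a dense array):

    ```python
    import numpy as np
    from sklearn.preprocessing import OneHotEncoder

    # One categorical text feature
    X = np.array([["red"], ["green"], ["blue"], ["green"]])

    encoder = OneHotEncoder()  # returns a sparse matrix by default
    X_encoded = encoder.fit_transform(X).toarray()

    # Categories are sorted alphabetically: blue, green, red
    print(encoder.categories_)  # [array(['blue', 'green', 'red'], dtype=...)]
    print(X_encoded[0])         # [0. 0. 1.]  ("red" maps to the third column)
    ```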

  6. Model Selection for Classification

    Which estimator is primarily used for binary classification tasks?

    1. LinearRegression
    2. LogisticRegression
    3. KMeans
    4. PCA

    Explanation: LogisticRegression is designed for classification tasks, especially binary ones. KMeans is a clustering algorithm, LinearRegression handles regression tasks, and PCA is a dimensionality reduction method, not a classifier.
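    For reference, a minimal binary classification sketch with LogisticRegression (the synthetic data is illustrative):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Synthetic binary classification problem
    X, y = make_classification(n_samples=200, n_features=5, random_state=0)

    clf = LogisticRegression()
    clf.fit(X, y)

    # predict_proba returns one probability per class (two columns here)
    probs = clf.predict_proba(X[:3])
    print(probs.shape)  # (3, 2)
    ```

    Despite the word "Regression" in its name, LogisticRegression is a classifier: it models class probabilities, not a continuous target.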

  7. Applying Transformations in Pipelines

    When using a pipeline, what ensures that the same transformations learned from training data are applied to new data during prediction?

    1. The pipeline stores fitted transformers for consistent use
    2. Transformers are automatically bypassed during prediction
    3. Each step is randomly re-fitted on test data
    4. Transformers are only applied during initial fitting

    Explanation: Pipelines keep track of all fitted transformers from the training phase so that exactly the same transformations are applied during prediction. Random re-fitting or skipping transformers can cause inconsistencies, and limiting transformations to initial fitting defeats their purpose.
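    This can be verified directly: after fitting, a pipeline's transformer retains the statistics learned from the training data (the numbers below are illustrative):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X_train = np.array([[0.0], [10.0], [20.0], [30.0]])
    y_train = np.array([0, 0, 1, 1])

    pipe = Pipeline([("scale", StandardScaler()), ("model", LogisticRegression())])
    pipe.fit(X_train, y_train)

    # The scaler's mean was learned from the training data (mean of 0,10,20,30)...
    print(pipe.named_steps["scale"].mean_)  # [15.]

    # ...and predict() reuses it: new data is transformed with the stored
    # training statistics, never re-fitted on the test data.
    pipe.predict(np.array([[5.0]]))
    ```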

  8. Feature Scaling Impact

    Which type of algorithm is most affected if feature scaling is not performed?

    1. Random guess generators
    2. Naive Bayes classifiers
    3. Distance-based algorithms such as K-nearest neighbors
    4. Tree-based algorithms such as decision trees

    Explanation: Distance-based methods rely heavily on feature scales because distance computations are sensitive to magnitude. Tree-based and Naive Bayes algorithms are generally scale-invariant, and random guessing does not involve features at all.
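    A sketch of this effect with a 1-nearest-neighbor classifier, where an informative small-scale feature is drowned out by a large-scale noise feature (the data is contrived for illustration):

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.preprocessing import StandardScaler

    # Feature 0 separates the classes; feature 1 is noise on a huge scale
    X = np.array([[0.0, 0.0], [0.1, 900.0], [5.0, 100.0], [5.1, 800.0]])
    y = np.array([0, 0, 1, 1])
    x_new = np.array([[0.2, 150.0]])  # clearly class 0 judging by feature 0

    # Without scaling, distances are dominated by feature 1, so the nearest
    # neighbor is the class-1 point [5.0, 100.0]
    pred_raw = KNeighborsClassifier(n_neighbors=1).fit(X, y).predict(x_new)

    # After standardization, both features contribute comparably
    scaler = StandardScaler().fit(X)
    pred_scaled = (
        KNeighborsClassifier(n_neighbors=1)
        .fit(scaler.transform(X), y)
        .predict(scaler.transform(x_new))
    )

    print(pred_raw, pred_scaled)  # [1] [0]
    ```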

  9. Target Variable in Supervised Learning

    In supervised learning, what is the term for the variable that the model tries to predict?

    1. Pipeline step
    2. Scaler parameter
    3. Target variable
    4. Feature matrix

    Explanation: The target variable is what the model aims to predict. The feature matrix contains the input variables, pipeline step refers to an element of pipelines, and scaler parameter is a value used in preprocessing, not prediction.
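    The distinction shows up directly in scikit-learn's API: supervised estimators take both the feature matrix and the target in `fit` (a tiny illustrative example):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # X is the feature matrix (inputs); y is the target variable (what we predict)
    X = np.array([[1.0], [2.0], [3.0], [4.0]])
    y = np.array([0, 0, 1, 1])

    model = LogisticRegression().fit(X, y)  # supervised: fit requires X and y
    print(model.predict(np.array([[4.0]])))  # predicts a value of the target
    ```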

  10. Model Evaluation Metric Selection

    Which metric is most suitable for evaluating the accuracy of a classification model on balanced data?

    1. Mean squared error
    2. Accuracy score
    3. Silhouette score
    4. Adjusted R-squared

    Explanation: Accuracy score measures the proportion of correct predictions and is standard for balanced classification tasks. Mean squared error is for regression, adjusted R-squared is also used in regression, and silhouette score is relevant for clustering, not classification.
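    A minimal sketch computing this proportion with `accuracy_score` (the label vectors are illustrative):

    ```python
    from sklearn.metrics import accuracy_score

    y_true = [0, 1, 1, 0, 1, 0]
    y_pred = [0, 1, 0, 0, 1, 1]

    # 4 of the 6 predictions match the true labels
    acc = accuracy_score(y_true, y_pred)
    print(acc)  # ~0.667
    ```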