Image Segmentation: Thresholding, Contours, and Deep Learning Quiz

Explore fundamental concepts of image segmentation with questions on thresholding, contour detection, and deep learning techniques. This quiz helps you review essential principles, methods, and terminology in digital image analysis and segmentation.

  1. Basic Definition

    Which of the following best describes image segmentation in computer vision?

    1. Dividing an image into regions based on pixel characteristics
    2. Compressing an image to reduce its file size
    3. Increasing the brightness of an entire image
    4. Stitching multiple images into a panorama

    Explanation: Image segmentation means partitioning an image into regions that share similar properties, like color or brightness. Increasing brightness affects all pixels equally, not segmentation. Stitching is about joining images, not dividing them. Compression reduces file size and is unrelated to identifying regions within an image.
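
    For a concrete picture of "regions based on pixel characteristics," here is a minimal sketch that partitions an image by clustering pixel colors with k-means. OpenCV and NumPy are assumed to be installed; the filename "scene.png" and the choice of k = 4 are illustrative, not prescribed by the quiz.

    ```python
    import cv2
    import numpy as np

    img = cv2.imread("scene.png")                    # BGR image (assumed to exist)
    pixels = img.reshape(-1, 3).astype(np.float32)   # one row per pixel

    # Cluster pixels by color into k regions.
    k = 4
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 3,
                                    cv2.KMEANS_RANDOM_CENTERS)

    # Paint each pixel with its cluster center to visualize the regions.
    segmented = centers[labels.flatten()].astype(np.uint8).reshape(img.shape)
    cv2.imwrite("segmented.png", segmented)
    ```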

  2. Thresholding Concept

    In thresholding-based image segmentation, what happens to pixels with intensity values above the chosen threshold?

    1. They are assigned to the foreground class
    2. They are blurred to reduce noise
    3. They are merged with neighboring pixels
    4. They become completely transparent

    Explanation: In thresholding, pixels above the threshold are typically marked as foreground, making them distinct from the background. Transparency is not a direct outcome of thresholding. Blurring is a separate preprocessing technique. Merging with neighbors is not specifically part of thresholding.
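
    A minimal sketch, assuming OpenCV is installed; the filename and the threshold of 127 are illustrative choices:

    ```python
    import cv2

    gray = cv2.imread("coins.png", cv2.IMREAD_GRAYSCALE)

    # Pixels above 127 become 255 (foreground); the rest become 0 (background).
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    cv2.imwrite("binary.png", binary)
    ```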

  3. Global vs. Adaptive Thresholding

    What is the main difference between global and adaptive thresholding techniques?

    1. Adaptive thresholding uses varying thresholds for different regions
    2. Adaptive methods always produce binary images
    3. Global thresholding automatically selects the threshold for each pixel
    4. Global thresholding can only process grayscale images

    Explanation: Adaptive thresholding applies different thresholds based on local neighborhoods, handling lighting variations better. Global methods use a consistent threshold throughout. Global thresholding is not limited to grayscale images alone, and adaptive methods don't necessarily guarantee binary output. Automatic local threshold selection is characteristic of adaptive, not global, approaches.
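
    The two approaches can be contrasted in a short sketch. The filename and all parameter values below are illustrative assumptions; Otsu's method is just one common way to pick a global threshold automatically.

    ```python
    import cv2

    gray = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)

    # Global: one threshold (here chosen by Otsu's method) for the whole image.
    _, global_bin = cv2.threshold(gray, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Adaptive: each pixel is compared to the mean of its 11x11 neighborhood
    # minus a constant, so the effective threshold varies across the image.
    adaptive_bin = cv2.adaptiveThreshold(gray, 255,
                                         cv2.ADAPTIVE_THRESH_MEAN_C,
                                         cv2.THRESH_BINARY, 11, 2)
    ```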

  4. Contour Detection Purpose

    What is the primary purpose of contour detection in image analysis?

    1. To colorize grayscale images
    2. To resize an image without distortion
    3. To reduce the image file size
    4. To identify the boundaries of objects within an image

    Explanation: Contour detection is mainly used to find and outline the shapes or edges of objects. Colorization changes grayscale to color, resizing alters dimensions, and file size reduction does not inherently involve locating boundaries.
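
    A minimal contour-detection sketch, assuming OpenCV 4 (whose findContours returns two values) and an illustrative input file:

    ```python
    import cv2

    gray = cv2.imread("shapes.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

    # findContours traces the boundary curves of the white regions.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)

    # Draw every detected boundary in green on a color copy.
    canvas = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    cv2.drawContours(canvas, contours, -1, (0, 255, 0), 2)
    cv2.imwrite("contours.png", canvas)
    ```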

  5. Binary Image Creation

    After applying a simple threshold to a grayscale image, what does the resulting binary image contain?

    1. Randomized color values
    2. A set of multiple channels per pixel
    3. Pixels with values of only 0 and 255
    4. A sharpened version of the original image

    Explanation: Thresholding produces a binary image where pixel values are usually set to either 0 (black) or 255 (white). Sharpening is unrelated to thresholding. Color is not introduced, and multi-channel data typically refers to color images, not simple binary images.
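
    This is easy to verify directly, under the same illustrative assumptions as the earlier sketch:

    ```python
    import cv2
    import numpy as np

    gray = cv2.imread("coins.png", cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

    # For a typical image with both classes present, this prints [  0 255].
    print(np.unique(binary))
    ```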

  6. Deep Learning Role

    How do deep learning models contribute to modern image segmentation tasks?

    1. They learn to classify each pixel into a category using large datasets
    2. They ignore spatial relationships between pixels
    3. They randomly assign colors to image regions
    4. They only find the brightest spots in an image

    Explanation: Deep learning models, especially convolutional ones, can assign pixel-level labels by learning from annotated datasets. Finding bright spots is feature detection, not full segmentation. Random assignment and ignoring spatial relations would not lead to meaningful segmentation results.
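
    A hedged sketch of pixel-level classification with a pretrained model from torchvision; the FCN model, the weights="DEFAULT" argument (recent torchvision versions), and "street.jpg" are all illustrative assumptions.

    ```python
    import torch
    from torchvision import models, transforms
    from PIL import Image

    model = models.segmentation.fcn_resnet50(weights="DEFAULT").eval()

    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    img = Image.open("street.jpg").convert("RGB")
    batch = preprocess(img).unsqueeze(0)        # shape: [1, 3, H, W]

    with torch.no_grad():
        logits = model(batch)["out"]            # shape: [1, num_classes, H, W]

    # Each pixel gets the class with the highest score.
    labels = logits.argmax(dim=1).squeeze(0)    # shape: [H, W]
    ```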

  7. Edge vs. Contour

    What distinguishes a contour from an edge in image processing?

    1. There is no difference; the terms are fully interchangeable
    2. A contour is a continuous curve connecting points along a boundary, while an edge often refers to a local intensity change
    3. Contours can only be detected in binary images while edges require color images
    4. Edges are always colored, but contours are monochrome

    Explanation: Contours represent full shapes or object outlines, while edges typically indicate areas of rapid intensity change and might not form closed curves. Color is not a defining property of either. Both contours and edges can be detected in both binary and grayscale images, and the terms are not fully interchangeable.
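
    The distinction shows up directly in code. In this sketch (filename and thresholds are illustrative), Canny produces a map of edge pixels that need not connect, while findContours returns ordered point sequences tracing each region boundary:

    ```python
    import cv2

    gray = cv2.imread("shapes.png", cv2.IMREAD_GRAYSCALE)

    # Edges: local intensity changes, returned as an image of edge pixels
    # that may be broken or unconnected.
    edges = cv2.Canny(gray, 100, 200)

    # Contours: continuous curves along region boundaries of a binary
    # image, each returned as an ordered array of points.
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    ```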

  8. Semantic Segmentation Output

    What type of output does semantic segmentation provide when applied to a street scene image?

    1. Each pixel is assigned a label, such as road, car, or pedestrian
    2. Only one representative object is detected per category
    3. The image is converted to grayscale
    4. The brightest pixel in the image is highlighted

    Explanation: Semantic segmentation classifies every pixel into a category, such as 'car' or 'road.' Highlighting a single bright pixel, converting images to grayscale, or detecting only one instance per category do not match the pixel-wise, detailed labeling provided by semantic segmentation.
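
    A tiny, self-contained illustration of what such a label map looks like; the array below is synthetic, standing in for a real model's output, and the class IDs are arbitrary:

    ```python
    import numpy as np

    ROAD, CAR, PEDESTRIAN = 0, 1, 2
    label_map = np.full((4, 6), ROAD)   # every pixel holds a class ID
    label_map[1:3, 2:4] = CAR           # a 2x2 block of "car" pixels
    label_map[0, 5] = PEDESTRIAN        # a single "pedestrian" pixel

    classes, counts = np.unique(label_map, return_counts=True)
    print(dict(zip(classes.tolist(), counts.tolist())))   # {0: 19, 1: 4, 2: 1}
    ```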

  9. Thresholding Limitation

    Why might simple thresholding fail on images with uneven lighting conditions?

    1. It modifies the edges to be smoother
    2. It always produces colored output
    3. It uses a single threshold value that cannot adapt to varying brightness
    4. It requires input images to be perfectly sharp

    Explanation: Simple thresholding does not account for variations in lighting, so some regions may be misclassified as background or foreground. It does not produce color images, nor does it require perfect sharpness or modify edge smoothness directly.
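
    The failure is easy to reproduce on a synthetic image whose background brightens from left to right (all values below are illustrative):

    ```python
    import cv2
    import numpy as np

    # Dark "ink" marks on a background that brightens left to right.
    h, w = 100, 400
    gradient = np.tile(np.linspace(60, 220, w, dtype=np.uint8), (h, 1))
    gradient[40:60, ::20] = 30

    # A single global value misclassifies one side of the image: here the
    # whole dim left half falls below 127 and is lumped in with the ink.
    _, global_bin = cv2.threshold(gradient, 127, 255, cv2.THRESH_BINARY)

    # A locally computed threshold tracks the lighting change instead.
    adaptive_bin = cv2.adaptiveThreshold(gradient, 255,
                                         cv2.ADAPTIVE_THRESH_MEAN_C,
                                         cv2.THRESH_BINARY, 15, 5)
    ```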

  10. Instance vs. Semantic Segmentation

    Which statement correctly distinguishes instance segmentation from semantic segmentation?

    1. Semantic segmentation always produces vector graphics
    2. Instance segmentation is only possible on grayscale images
    3. Semantic segmentation detects only the background regions of an image
    4. Instance segmentation separates individual objects within the same category, while semantic segmentation labels all similar objects together

    Explanation: Instance segmentation identifies separate objects within a category, such as different cars, whereas semantic segmentation classifies all objects of the same type with one label. Semantic segmentation does not focus solely on backgrounds, nor does it always convert images to vectors or restrict itself to grayscale images.
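
    A hedged sketch of the difference using torchvision's Mask R-CNN (the model choice, weights argument, and input file are illustrative assumptions): instead of one label map, the model returns a separate mask per detected object, so two cars yield two masks.

    ```python
    import torch
    from torchvision import models
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    model = models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

    img = Image.open("street.jpg").convert("RGB")
    with torch.no_grad():
        out = model([to_tensor(img)])[0]

    # One mask, label, and score per detected instance, unlike semantic
    # segmentation's single per-pixel label map.
    print(out["masks"].shape)              # [num_instances, 1, H, W]
    print(out["labels"], out["scores"])
    ```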