Fundamentals of Probabilistic Reasoning: Bayes Nets and Markov Models Quiz

Explore foundational concepts of probabilistic reasoning with a focus on Bayesian Networks and Markov Models. This quiz covers key terms, structural principles, and real-world examples relevant to inference and uncertainty in artificial intelligence.

  1. Bayesian Network Structure

    What is the primary structure of a Bayesian Network used for representing probabilistic relationships among variables?

    1. Undirected cyclic graph
    2. Linear sequence of nodes
    3. Hierarchical tree
    4. Directed acyclic graph

    Explanation: A Bayesian Network is structured as a directed acyclic graph (DAG), where nodes represent variables and directed edges represent probabilistic dependencies. Undirected graphs, which may contain cycles, are characteristic of Markov networks (Markov random fields), not Bayes nets. A linear sequence of nodes cannot capture branching dependencies, and a hierarchical tree is more restrictive than Bayes nets require.
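
    To make the DAG idea concrete, here is a minimal Python sketch (not part of the quiz) of how a small Bayes net's structure and conditional probability tables could be represented; the node names (Rain, Sprinkler, WetGrass) and all probabilities are assumptions chosen only for illustration.

      # Hypothetical three-node DAG: Rain -> WetGrass <- Sprinkler.
      # Each node lists its parents and a conditional probability table (CPT)
      # keyed by the parents' values; the stored number is P(node = True | parents).
      bayes_net = {
          "Rain":      {"parents": [], "cpt": {(): 0.2}},
          "Sprinkler": {"parents": [], "cpt": {(): 0.1}},
          "WetGrass":  {"parents": ["Rain", "Sprinkler"],
                        "cpt": {(True, True): 0.99, (True, False): 0.9,
                                (False, True): 0.8, (False, False): 0.0}},
      }

      def probability(node, value, assignment):
          """P(node = value | the node's parents, as given in assignment)."""
          info = bayes_net[node]
          key = tuple(assignment[p] for p in info["parents"])
          p_true = info["cpt"][key]
          return p_true if value else 1.0 - p_true

      # The joint probability of a full assignment factorises along the DAG.
      example = {"Rain": True, "Sprinkler": False, "WetGrass": True}
      joint = 1.0
      for node, value in example.items():
          joint *= probability(node, value, example)
      print(round(joint, 3))  # 0.2 * 0.9 * 0.9 = 0.162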

  2. Conditional Independence

    In a Bayes Net modeling three variables A, B, and C where A points to B and B points to C, which statement about independence is generally true?

    1. C and A are always dependent
    2. A and B are independent given C
    3. B and C are always independent
    4. A and C are conditionally independent given B

    Explanation: In this chain, A and C are conditionally independent given B: once B is observed, the path from A to C is blocked. B and C are directly connected, so they are generally dependent, and ‘always independent’ is inaccurate. A and B are likewise directly connected, so they are not independent given C. Finally, C and A are not always dependent, because conditioning on B renders them independent.
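
    The chain case can also be checked numerically. Below is a minimal sketch, with made-up probabilities, showing that in A -> B -> C the conditional P(C | A, B) does not change with A once B is fixed, which is exactly the conditional independence claimed above.

      # Hypothetical chain A -> B -> C; all numbers are arbitrary illustrations.
      p_a = {True: 0.3, False: 0.7}
      p_b_given_a = {True: {True: 0.8, False: 0.2}, False: {True: 0.1, False: 0.9}}
      p_c_given_b = {True: {True: 0.6, False: 0.4}, False: {True: 0.05, False: 0.95}}

      def joint(a, b, c):
          return p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]

      def p_c_given_ab(a, b):
          """P(C = True | A = a, B = b), computed from the joint distribution."""
          num = joint(a, b, True)
          den = sum(joint(a, b, c) for c in (True, False))
          return num / den

      # For each value of B, the result is the same for both values of A,
      # i.e. A and C are independent given B.
      for b in (True, False):
          print(b, [round(p_c_given_ab(a, b), 3) for a in (True, False)])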

  3. Markov Property

    Which phrase best describes the Markov property in the context of Markov Models, such as Markov Chains?

    1. All states are interconnected
    2. Future and past states jointly determine the next state
    3. The initial state predicts all future states
    4. Future states depend only on the current state

    Explanation: The Markov property asserts that the future is independent of the past given the present, meaning future states depend only on the current state. A Markov model need not allow transitions between every pair of states, so "all states are interconnected" is incorrect. The next state does not depend jointly on past and present states, nor does the initial state alone determine all future states.
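
    Written symbolically (standard notation introduced here for reference, not part of the quiz), with X_t denoting the state at time t, the Markov property states:

      P(X_{t+1} | X_t, X_{t-1}, ..., X_0) = P(X_{t+1} | X_t)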

  4. Hidden Markov Model Application

    If weather is a hidden variable and observed data is whether someone carries an umbrella, what kind of probabilistic model is suitable for this scenario?

    1. Hidden Markov Model
    2. Decision Tree
    3. Logistic Regression
    4. Naive Bayes Classifier

    Explanation: A Hidden Markov Model (HMM) is well-suited for problems where the underlying state (the weather) is hidden and we observe related signals (carrying an umbrella). Logistic Regression and Decision Trees do not naturally model sequential hidden states. A Naive Bayes Classifier ignores dependencies between sequential states, so it is less appropriate here.
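
    As a rough sketch of how such a model is used (the state names, probabilities, and the helper function below are assumptions made for illustration, not a library API), a single filtering update combines the transition model for the hidden weather with the likelihood of the observed umbrella:

      # Hypothetical umbrella HMM: hidden state is the weather, observation is
      # whether an umbrella is carried. All numbers are made up.
      states = ("rainy", "sunny")
      transition = {"rainy": {"rainy": 0.7, "sunny": 0.3},
                    "sunny": {"rainy": 0.3, "sunny": 0.7}}
      emission = {"rainy": {"umbrella": 0.9, "no_umbrella": 0.1},
                  "sunny": {"umbrella": 0.2, "no_umbrella": 0.8}}
      belief = {"rainy": 0.5, "sunny": 0.5}   # prior over today's weather

      def forward_step(belief, observation):
          """One filtering step: predict with the transition model,
          weight by the observation likelihood, then normalise."""
          predicted = {s: sum(belief[p] * transition[p][s] for p in states)
                       for s in states}
          unnorm = {s: predicted[s] * emission[s][observation] for s in states}
          total = sum(unnorm.values())
          return {s: unnorm[s] / total for s in states}

      for obs in ("umbrella", "umbrella", "no_umbrella"):
          belief = forward_step(belief, obs)
          print(obs, {s: round(p, 3) for s, p in belief.items()})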

  5. Inference in Bayes Nets

    What is the main goal when performing inference in a Bayesian Network?

    1. Building an undirected graphical model
    2. Maximizing the number of nodes
    3. Calculating conditional probabilities given observed evidence
    4. Ensuring all nodes are observed

    Explanation: Inference in a Bayesian Network is primarily about calculating conditional probabilities given certain observations. Building an undirected model is not the focus of Bayes nets. Increasing node numbers is not an objective, and it is common for many nodes to be unobserved in real applications.
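
    A minimal sketch of such an inference, using a tiny sprinkler-style net with invented probabilities: the posterior P(Rain | WetGrass = True) is obtained by summing the joint distribution over the unobserved variable and normalising.

      from itertools import product

      # Hypothetical net: Rain -> WetGrass <- Sprinkler; all numbers are illustrative.
      p_rain = 0.2
      p_sprinkler = 0.1
      p_wet = {(True, True): 0.99, (True, False): 0.9,
               (False, True): 0.8, (False, False): 0.0}

      def joint(rain, sprinkler, wet):
          pr = p_rain if rain else 1 - p_rain
          ps = p_sprinkler if sprinkler else 1 - p_sprinkler
          pw = p_wet[(rain, sprinkler)] if wet else 1 - p_wet[(rain, sprinkler)]
          return pr * ps * pw

      # Inference by enumeration: P(Rain = True | WetGrass = True).
      num = sum(joint(True, s, True) for s in (True, False))
      den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
      print(round(num / den, 3))  # about 0.74 with these made-up numbers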

  6. Example of a Markov Chain

    Which of the following situations can best be modeled by a Markov Chain?

    1. Tracking multiple people in a crowded area simultaneously
    2. Relating age and height in a population
    3. Predicting whether a person is awake or asleep during the night based only on their state in the previous hour
    4. Classifying images into categories

    Explanation: Markov Chains are suited for systems where the next state depends only on the current state, as in modeling sleep transitions hour by hour. Relating age and height is not a sequential state problem. Image classification and tracking multiple people require more complex models than a simple Markov Chain.
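
    A minimal simulation of the sleep example, with invented transition probabilities, shows how the next state is sampled using only the current state:

      import random

      # Hypothetical hour-by-hour sleep chain; the numbers are made up.
      transition = {"awake":  {"awake": 0.4, "asleep": 0.6},
                    "asleep": {"awake": 0.1, "asleep": 0.9}}

      def next_state(current, rng):
          """Sample the next state from the row of the current state only."""
          r = rng.random()
          cumulative = 0.0
          for state, p in transition[current].items():
              cumulative += p
              if r < cumulative:
                  return state
          return state  # guard against floating-point round-off

      rng = random.Random(0)
      state = "awake"
      trajectory = [state]
      for _ in range(8):
          state = next_state(state, rng)
          trajectory.append(state)
      print(" -> ".join(trajectory))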

  7. Bayes' Rule in Networks

    What does Bayes' Rule enable you to compute in the context of Bayesian Networks?

    1. Posterior probabilities from observed data
    2. The minimum spanning tree
    3. The shortest path between nodes
    4. Maximum likelihood estimates

    Explanation: Bayes' Rule is used for updating and calculating posterior probabilities in light of new evidence. Shortest paths and spanning trees are concepts from graph algorithms, not from probabilistic inference. Maximum likelihood estimation is a broader statistical concept and not the specific result of applying Bayes' Rule.
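
    A small worked application of the rule, with purely illustrative numbers for a disease-test scenario:

      # Bayes' Rule: P(H | E) = P(E | H) * P(H) / P(E).
      p_disease = 0.01              # prior P(H)
      p_pos_given_disease = 0.95    # likelihood P(E | H)
      p_pos_given_healthy = 0.05    # false-positive rate P(E | not H)

      # Total probability of the evidence, P(E).
      p_pos = (p_pos_given_disease * p_disease
               + p_pos_given_healthy * (1 - p_disease))

      posterior = p_pos_given_disease * p_disease / p_pos
      print(round(posterior, 3))    # about 0.161 after one positive test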

  8. Transition Matrix in Markov Models

    What information is stored in the transition matrix of a Markov Model?

    1. Posterior probabilities given evidence
    2. Initial observations only
    3. The meaning of each state variable
    4. Probabilities of moving between each pair of states

    Explanation: A transition matrix contains probabilities for transitioning from one state to another in a Markov Model. It does not define what each state means (state variable definitions). Initial observations are separate from the transition matrix, and posterior probabilities are computed afterward using the model.
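
    A sketch of a two-state transition matrix and how a distribution over states evolves step by step (the state names and probabilities are assumptions made for illustration):

      # Rows are the "from" state, columns the "to" state; each row sums to 1.
      states = ["sunny", "rainy"]
      T = [[0.8, 0.2],    # from sunny: P(next is sunny), P(next is rainy)
           [0.4, 0.6]]    # from rainy: P(next is sunny), P(next is rainy)

      def step(distribution, T):
          """Propagate a distribution over states one step forward."""
          n = len(distribution)
          return [sum(distribution[i] * T[i][j] for i in range(n)) for j in range(n)]

      dist = [1.0, 0.0]   # start in 'sunny' with certainty
      for day in range(1, 4):
          dist = step(dist, T)
          print(day, [round(p, 3) for p in dist])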

  9. Naive Bayes vs. Bayes Net

    How does a Naive Bayes classifier differ from a general Bayesian Network?

    1. It models time-dependent processes
    2. It assumes all features are conditionally independent given the class
    3. It represents every dependency explicitly in the network
    4. It uses undirected edges

    Explanation: Naive Bayes classifiers make the simplifying assumption that all features are conditionally independent given the class label. They do not use undirected edges; undirected edges characterize Markov networks (Markov random fields), not Bayes nets. Naive Bayes does not model time dependence, and it does not explicitly represent all possible dependencies among variables.
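
    The independence assumption translates directly into a product of per-feature terms, as in this minimal sketch (the word probabilities and class names are invented for illustration):

      # Naive Bayes: P(class | words) is proportional to
      # P(class) * product over words of P(word | class).
      priors = {"spam": 0.4, "ham": 0.6}
      p_word_given_class = {
          "spam": {"offer": 0.30, "meeting": 0.02},
          "ham":  {"offer": 0.03, "meeting": 0.20},
      }

      def posterior(words):
          """Score each class, then normalise over the classes."""
          scores = {}
          for c, prior in priors.items():
              score = prior
              for w in words:
                  score *= p_word_given_class[c][w]   # independence given the class
              scores[c] = score
          total = sum(scores.values())
          return {c: s / total for c, s in scores.items()}

      print({c: round(p, 3) for c, p in posterior(["offer"]).items()})
      print({c: round(p, 3) for c, p in posterior(["offer", "meeting"]).items()})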

  10. Typical Use Case for a Bayesian Network

    Which is a typical real-world application of a Bayesian Network?

    1. Diagnosing diseases based on observed symptoms
    2. Predicting the strength of a chemical bond
    3. Compressing digital images using vectors
    4. Sorting numbers in a list efficiently

    Explanation: Bayesian Networks are commonly used for diagnosis, such as inferring disease likelihoods from observed symptoms. Image compression and sorting are unrelated to probabilistic graphical models. Predicting chemical bond strength relies on other modeling techniques, and it is not a standard use case for Bayes nets.
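
    A minimal diagnosis-style sketch (the disease and symptom names and all probabilities are invented for illustration) computes the posterior probability of the disease once both symptoms are observed:

      # Hypothetical net: Disease -> Fever, Disease -> Cough.
      p_disease = 0.05
      p_fever = {True: 0.80, False: 0.10}   # P(fever | disease present / absent)
      p_cough = {True: 0.70, False: 0.20}   # P(cough | disease present / absent)

      def joint(disease, fever, cough):
          pd = p_disease if disease else 1 - p_disease
          pf = p_fever[disease] if fever else 1 - p_fever[disease]
          pc = p_cough[disease] if cough else 1 - p_cough[disease]
          return pd * pf * pc

      # P(disease | fever and cough both observed)
      num = joint(True, True, True)
      den = sum(joint(d, True, True) for d in (True, False))
      print(round(num / den, 3))  # about 0.596 with these made-up numbers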