Inference in Markov Random Fields
To compute marginal probabilities in a Markov Random Field (MRF), the most direct method is to sum over all possible configurations of the remaining (hidden) variables. This process is often called brute-force enumeration because it evaluates the unnormalized joint, that is, the product of all factor values, for every combination of variable assignments, sums those products for each value of the variable of interest, and finally normalizes by the partition function Z. For example, to obtain the marginal probability of variable A, you sum the factor products over all values of the other variables (B and C) while keeping A fixed at a particular value, then divide by the total over all assignments. This approach is simple to implement for small graphs, but it quickly becomes computationally expensive: with n variables that each take k states, there are k^n configurations to enumerate.
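Written out for the three-variable example below, with pairwise factors phi1 on (A, B), phi2 on (B, C), and phi3 on (C, A), the computation the code performs is

$$
P(A = a) = \frac{1}{Z} \sum_{b} \sum_{c} \phi_1(a, b)\, \phi_2(b, c)\, \phi_3(c, a),
\qquad
Z = \sum_{a, b, c} \phi_1(a, b)\, \phi_2(b, c)\, \phi_3(c, a).
$$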
import numpy as np

# Possible values for each variable (binary, for simplicity)
values = [0, 1]

# Define factor tables for the MRF with variables A, B, C
# phi1: factor on (A, B)
phi1 = np.array([[0.9, 0.1],   # phi1[A=0, B=0], phi1[A=0, B=1]
                 [0.2, 0.8]])  # phi1[A=1, B=0], phi1[A=1, B=1]

# phi2: factor on (B, C)
phi2 = np.array([[0.3, 0.7],   # phi2[B=0, C=0], phi2[B=0, C=1]
                 [0.6, 0.4]])  # phi2[B=1, C=0], phi2[B=1, C=1]

# phi3: factor on (C, A)
phi3 = np.array([[0.5, 0.5],   # phi3[C=0, A=0], phi3[C=0, A=1]
                 [0.4, 0.6]])  # phi3[C=1, A=0], phi3[C=1, A=1]

def joint_prob(a, b, c):
    # Multiply the values from each factor table for the given assignment
    return phi1[a, b] * phi2[b, c] * phi3[c, a]

# Compute the unnormalized marginal for A
unnormalized_marginal_A = np.zeros(2)
for a in values:
    total = 0.0
    for b in values:
        for c in values:
            total += joint_prob(a, b, c)
    unnormalized_marginal_A[a] = total

# Normalize to get probabilities
marginal_A = unnormalized_marginal_A / np.sum(unnormalized_marginal_A)

print("Marginal probability P(A):", marginal_A)
This code computes the marginal probability of variable A in a simple MRF with three binary variables: A, B, and C. It defines one factor table per edge of the undirected graph, and the function joint_prob(a, b, c) multiplies the relevant factor entries for a particular assignment. To find the marginal for A, the code loops over all values of B and C for each value of A, sums the resulting unnormalized joint values, and then normalizes the two totals so they sum to one, which is equivalent to dividing by the partition function Z. This brute-force approach guarantees that every possible configuration is taken into account when computing the marginal.
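The same enumeration can also be expressed more compactly by materializing the full unnormalized joint table and summing out axes. The sketch below is one possible vectorized variant, not part of the original listing; it reuses the factor tables defined above, builds the 2x2x2 table of factor products with np.einsum, and obtains the marginals of all three variables from axis sums.

import numpy as np

# Reuse the pairwise factor tables defined above:
# phi1 on (A, B), phi2 on (B, C), phi3 on (C, A).
phi1 = np.array([[0.9, 0.1], [0.2, 0.8]])
phi2 = np.array([[0.3, 0.7], [0.6, 0.4]])
phi3 = np.array([[0.5, 0.5], [0.4, 0.6]])

# Unnormalized joint table: joint[a, b, c] = phi1[a, b] * phi2[b, c] * phi3[c, a]
joint = np.einsum('ab,bc,ca->abc', phi1, phi2, phi3)

Z = joint.sum()  # partition function: sum over all 2**3 configurations

# Marginals: sum out the other two axes, then normalize by Z
marginal_A = joint.sum(axis=(1, 2)) / Z
marginal_B = joint.sum(axis=(0, 2)) / Z
marginal_C = joint.sum(axis=(0, 1)) / Z

print("P(A):", marginal_A)
print("P(B):", marginal_B)
print("P(C):", marginal_C)

The einsum version gives the same P(A) as the nested loops; it is still exponential in the number of variables, since the joint table it builds has one entry per configuration.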