pystruct.models.GraphCRF(n_states=None, n_features=None, inference_method=None, class_weight=None, directed=False)

Pairwise CRF on a general graph.
Pairwise potentials are the same for all edges and are symmetric by default (directed=False). This leads to n_states * n_features parameters for unary potentials. If directed=True, there are n_states * n_states parameters for pairwise potentials; if directed=False, there are only n_states * (n_states + 1) / 2 (for a symmetric matrix).
Examples, i.e. X, are given as an iterable of n_examples. An example, x, is represented as a tuple (features, edges), where features is a numpy array of shape (n_nodes, n_attributes) and edges is an array of shape (n_edges, 2), representing the graph.
Labels, Y, are given as an iterable of n_examples. Each label, y, in Y is given by a numpy array of shape (n_nodes,).
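As a concrete illustration of this input format (the shapes and values below are just an example, not taken from the library):

```python
import numpy as np

# One example x = (features, edges): a 4-node graph with 2 features per node.
features = np.random.randn(4, 2)            # shape (n_nodes, n_attributes)
edges = np.array([[0, 1], [1, 2], [2, 3]])  # shape (n_edges, 2), here a chain
x = (features, edges)

# The matching label y assigns one state to each node.
y = np.array([0, 1, 1, 0])                  # shape (n_nodes,)

# X and Y are iterables over such examples:
X, Y = [x], [y]
```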
There are n_states * n_features parameters for unary potentials. For the edge potential parameters, there are n_states * n_states entries, i.e.

            state_1  state_2  state_3
    state_1        1        2        3
    state_2        4        5        6
    state_3        7        8        9
The fitted parameters of this model will be returned as an array with the first n_states * n_features elements representing the unary potentials parameters, followed by the edge potential parameters.
Say we have two states, A and B, and two features, 1 and 2. The unary potential parameters will be returned as [A1, A2, B1, B2].
If directed=True, the edge potential parameters will be returned as n_states * n_states parameters. The rows are senders and the columns are receivers, i.e. the edge potential state_2 -> state_1 is [2, 1]; 4 in the above matrix.
The above edge potential parameters example would be returned as [1, 2, 3, 4, 5, 6, 7, 8, 9] (see numpy.ravel).
If edges are undirected, the edge potential parameter matrix is assumed to be symmetric and only the lower triangle is returned, i.e. [1, 4, 5, 7, 8, 9].
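Both layouts can be reproduced with plain numpy; the matrix below is the hypothetical 3x3 example from the table above:

```python
import numpy as np

# Edge potential matrix from the example table
# (rows = senders, columns = receivers).
pw = np.array([[1, 2, 3],
               [4, 5, 6],
               [7, 8, 9]])

# directed=True: all n_states * n_states entries in row-major order.
directed_params = pw.ravel()                 # [1 2 3 4 5 6 7 8 9]

# directed=False: the matrix is assumed symmetric, so only the
# lower triangle is stored, row by row.
rows, cols = np.tril_indices(pw.shape[0])
undirected_params = pw[rows, cols]           # [1 4 5 7 8 9]
```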
Parameters:
    n_states : int, default=None
    n_features : int, default=None
    inference_method : string or None, default=None
    class_weight : None or array-like
    directed : boolean, default=False


Methods

    batch_inference(X, w[, relaxed])
    batch_joint_feature(X, Y[, Y_true])
    batch_loss(Y, Y_hat)
    batch_loss_augmented_inference(X, Y, w[, ...])
    continuous_loss(y, y_hat)
    inference(x, w[, relaxed, return_energy]): Inference for x using parameters w.
    initialize(X, Y)
    joint_feature(x, y): Feature vector associated with instance (x, y).
    loss(y, y_hat)
    loss_augmented_inference(x, y, w[, relaxed, ...]): Loss-augmented inference for x relative to y using parameters w.
    max_loss(y)
__init__(n_states=None, n_features=None, inference_method=None, class_weight=None, directed=False)

inference(x, w, relaxed=False, return_energy=False)

Inference for x using parameters w.
Finds (approximately) argmax_y np.dot(w, joint_feature(x, y)) using self.inference_method.
Parameters:
    x : tuple
    w : ndarray, shape=(size_joint_feature,)
    relaxed : bool, default=False
    return_energy : bool, default=False

Returns:
    y_pred : ndarray or tuple
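For a small graph, this argmax can be found by brute force; the sketch below enumerates all labelings of a 3-node chain and scores them with hand-picked unary and pairwise weights (all values here are assumptions for illustration, not pystruct's solver):

```python
import numpy as np
from itertools import product

features = np.array([[2.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
edges = [(0, 1), (1, 2)]
unary_w = np.array([[1.0, 0.0],      # weight vector for state 0
                    [0.0, 1.0]])     # weight vector for state 1
pairwise_w = np.array([[0.5, 0.0],   # reward equal neighboring states
                       [0.0, 0.5]])

def score(y):
    s = sum(unary_w[y[v]] @ features[v] for v in range(len(y)))
    s += sum(pairwise_w[y[a], y[b]] for a, b in edges)
    return s

# Exhaustive search over all 2**3 labelings.
y_pred = max(product(range(2), repeat=3), key=score)  # -> (0, 1, 1)
```

Real inference methods (e.g. max-product or LP relaxations) avoid this exponential enumeration, which is what the inference_method parameter selects.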

joint_feature(x, y)

Feature vector associated with instance (x, y).
Feature representation joint_feature, such that the energy of the configuration (x, y) and a weight vector w is given by np.dot(w, joint_feature(x, y)).
Parameters:
    x : tuple
    y : ndarray or tuple

Returns:
    p : ndarray, shape (size_joint_feature,)
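A simplified numpy sketch of the directed case can make the layout concrete (this mirrors the description above but is not pystruct's implementation; it ignores the symmetric storage and relaxed labelings):

```python
import numpy as np

def toy_joint_feature(x, y, n_states):
    # Unary block: per-state sums of node features, flattened,
    # followed by the pairwise block: counts of each
    # (sender state, receiver state) pair over the edges.
    features, edges = x
    unary = np.zeros((n_states, features.shape[1]))
    for node, state in enumerate(y):
        unary[state] += features[node]
    pairwise = np.zeros((n_states, n_states))
    for a, b in edges:
        pairwise[y[a], y[b]] += 1
    return np.hstack([unary.ravel(), pairwise.ravel()])

features = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
edges = np.array([[0, 1], [1, 2]])
y = np.array([0, 1, 1])
jf = toy_joint_feature((features, edges), y, n_states=2)
# jf = [1. 0. 1. 2. 0. 1. 0. 1.]; the energy is then np.dot(w, jf).
```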

loss_augmented_inference(x, y, w, relaxed=False, return_energy=False)

Loss-augmented inference for x relative to y using parameters w.
Finds (approximately) argmax_y_hat np.dot(w, joint_feature(x, y_hat)) + loss(y, y_hat) using self.inference_method.
Parameters:
    x : tuple
    y : ndarray, shape (n_nodes,)
    w : ndarray, shape=(size_joint_feature,)
    relaxed : bool, default=False
    return_energy : bool, default=False

Returns:
    y_pred : ndarray or tuple
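The added loss term can be illustrated with a toy per-node score table and the Hamming loss (all names and numbers below are assumptions for illustration, not the library's internals):

```python
import numpy as np
from itertools import product

# node_scores[v, s]: score of assigning state s to node v (toy values).
node_scores = np.array([[2.0, 0.0],
                        [0.9, 1.0],
                        [0.0, 0.1]])
y_true = np.array([0, 0, 0])

def augmented_score(y_hat):
    # plain score plus the Hamming loss relative to y_true
    plain = sum(node_scores[v, s] for v, s in enumerate(y_hat))
    loss = np.sum(np.array(y_hat) != y_true)
    return plain + loss

# The loss term pushes the maximizer toward labelings that both score
# well and disagree with y_true, as required for max-margin training.
y_aug = max(product(range(2), repeat=3), key=augmented_score)  # -> (0, 1, 1)
```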
