pystruct.models.BinaryClf(n_features=None)

Formulate a standard linear binary SVM in the CRF framework.
Inputs x are simply feature arrays, labels y are -1 or 1.
Parameters:
    n_features : int or None, default=None
Notes
No bias / intercept is learned. It is recommended to add a constant one feature to the data.
It is also highly recommended to use n_jobs=1 in the learner when using this model. Trying to parallelize the trivial inference will slow the inference down a lot!
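The constant-one-feature trick mentioned above can be sketched with NumPy. The feature matrix X here is made-up illustrative data, not pystruct API; the weight learned for the appended column then acts as the bias b:

```python
import numpy as np

# Illustrative 2-sample, 2-feature data (not pystruct API).
X = np.array([[0.5, -1.2],
              [1.3,  0.7]])

# Append a column of ones; since BinaryClf learns no intercept,
# the weight learned for this constant feature plays the role of b.
X_with_bias = np.hstack([X, np.ones((X.shape[0], 1))])

print(X_with_bias.shape)  # (2, 3)
```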
Methods

batch_inference(X, w)
batch_joint_feature(X, Y)
batch_loss(Y, Y_hat)
batch_loss_augmented_inference(X, Y, w[, ...])
continuous_loss(y, y_hat)
inference(x, w[, relaxed])                      Inference for x using parameters w.
initialize(X, Y)
joint_feature(x, y)                             Compute joint feature vector of x and y.
loss(y, y_hat)
loss_augmented_inference(x, y, w[, relaxed])    Loss-augmented inference for x and y using parameters w.
max_loss(y)
inference(x, w, relaxed=None)

Inference for x using parameters w.
Finds argmax_y np.dot(w, joint_feature(x, y)), i.e. the best possible prediction.
For a binary SVM, this is just sign(np.dot(w, x) + b).
Parameters:
    x : ndarray, shape (n_features,)
    w : ndarray, shape (size_joint_feature,)
    relaxed : ignored

Returns:
    y_pred : int
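A minimal NumPy sketch of the formula above, assuming b = 0 (no intercept is learned, per the Notes). The helper name inference_sketch is hypothetical; it mirrors the documented formula, not the library's implementation:

```python
import numpy as np

def inference_sketch(x, w):
    """Sketch of BinaryClf.inference: y_pred = sign(np.dot(w, x)).

    Assumes b = 0 (no intercept learned); returns a label in {-1, 1}.
    """
    return 1 if np.dot(w, x) >= 0 else -1

w = np.array([2.0, -1.0])
print(inference_sketch(np.array([1.0, 0.5]), w))   # 1
print(inference_sketch(np.array([-1.0, 0.5]), w))  # -1
```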
joint_feature(x, y)

Compute joint feature vector of x and y.
Feature representation joint_feature, such that the energy of the configuration (x, y) and a weight vector w is given by np.dot(w, joint_feature(x, y)).
Parameters:
    x : ndarray, shape (n_features,)
    y : int

Returns:
    p : ndarray, shape (size_joint_feature,)
loss_augmented_inference(x, y, w, relaxed=None)

Loss-augmented inference for x and y using parameters w.
Maximizes over y_hat: np.dot(joint_feature(x, y_hat), w) + loss(y, y_hat), which is just sign(np.dot(x, w) + b - y).
Parameters:
    x : ndarray, shape (n_features,)
    y : int
    w : ndarray, shape (size_joint_feature,)

Returns:
    y_hat : int
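A sketch of the closed form above, assuming b = 0 and 0-1 loss; with psi(x, y) = y * x / 2 (the same assumed scaling as before), maximizing np.dot(w, psi(x, y_hat)) + loss(y, y_hat) over y_hat in {-1, 1} reduces to sign(np.dot(w, x) - y):

```python
import numpy as np

def loss_augmented_inference_sketch(x, y, w):
    """Sketch of loss-augmented inference for the binary case.

    Picks the most violating label: the y_hat in {-1, 1} maximizing
    score + 0-1 loss, which here reduces to sign(np.dot(w, x) - y).
    Assumes b = 0; illustrative only, not the library's implementation.
    """
    return 1 if np.dot(w, x) - y >= 0 else -1

w = np.array([1.0, 0.0])
x = np.array([0.5, 3.0])                              # np.dot(w, x) = 0.5
print(loss_augmented_inference_sketch(x, 1, w))       # -1 (most violating)
print(loss_augmented_inference_sketch(x, -1, w))      # 1
```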