pystruct.learners.SubgradientSSVM(model, max_iter=100, C=1.0, verbose=0, momentum=0.0, learning_rate='auto', n_jobs=1, show_loss_every=0, decay_exponent=1, break_on_no_constraints=True, logger=None, batch_size=None, decay_t0=10, averaging=None, shuffle=False)

Structured SVM solver using subgradient descent.
Implements a margin-rescaled structured SVM with an l1 slack penalty. By default, a constant learning rate is used. It is also possible to use the adaptive learning rate found by AdaGrad.
This class implements online subgradient descent. If n_jobs != 1, small batches of size n_jobs are used to exploit parallel inference. If inference is fast, use n_jobs=1.
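The per-example update this solver performs can be sketched on a toy model. The following is a minimal, self-contained illustration (plain NumPy, with a hypothetical block-wise joint feature and 0/1 loss; it is not pystruct's API) of one margin-rescaled subgradient step: loss-augmented inference, then a step along the subgradient of the regularized objective.

```python
import numpy as np

# Toy "structured" model: plain multiclass classification, where the
# joint feature psi(x, y) places x in the block belonging to class y and
# the loss is 0/1. Names and parameterization are illustrative only.

def joint_feature(x, y, n_classes):
    """Joint feature map psi(x, y): x copied into class y's block."""
    psi = np.zeros(n_classes * len(x))
    psi[y * len(x):(y + 1) * len(x)] = x
    return psi

def loss_augmented_inference(w, x, y_true, n_classes):
    """argmax_y [loss(y_true, y) + w . psi(x, y)], by enumeration."""
    scores = [(0.0 if y == y_true else 1.0) + w @ joint_feature(x, y, n_classes)
              for y in range(n_classes)]
    return int(np.argmax(scores))

def subgradient_step(w, x, y, n_classes, learning_rate, C):
    """One online update for min_w 0.5 * ||w||^2 + C * sum_i hinge_i."""
    y_hat = loss_augmented_inference(w, x, y, n_classes)
    # Subgradient of the regularized, margin-rescaled per-example objective.
    grad = w + C * (joint_feature(x, y_hat, n_classes)
                    - joint_feature(x, y, n_classes))
    return w - learning_rate * grad

def predict(w, x, n_classes):
    """Plain (non-augmented) inference: argmax_y w . psi(x, y)."""
    return int(np.argmax([w @ joint_feature(x, y, n_classes)
                          for y in range(n_classes)]))
```

With this parameterization the update reduces to the structured perceptron update plus weight shrinkage from the regularizer; looping it over the dataset (optionally shuffled, optionally in small batches) is the online scheme described above.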
Parameters:
model : StructuredModel
max_iter : int, default=100
C : float, default=1.0
verbose : int, default=0
momentum : float, default=0.0
learning_rate : float or 'auto', default='auto'
n_jobs : int, default=1
show_loss_every : int, default=0
decay_exponent : float, default=1
decay_t0 : float, default=10
break_on_no_constraints : bool, default=True
logger : logger object, default=None
batch_size : int, default=None
averaging : string, default=None
shuffle : bool, default=False
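One plausible reading of how learning_rate, decay_exponent, and decay_t0 combine (an assumption based on the parameter names, not a verbatim quote of pystruct's code) is a polynomially decaying step size:

```python
def effective_learning_rate(base_lr, t, decay_exponent=1.0, decay_t0=10.0):
    """Assumed schedule: base_lr / (t + decay_t0) ** decay_exponent.

    decay_exponent=0 gives a constant rate; larger exponents decay
    faster; decay_t0 keeps the rate finite at t=0.
    """
    return base_lr / (t + decay_t0) ** decay_exponent
```

Under this reading, a decay_exponent of 0 recovers the constant-rate behavior mentioned in the class description.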


Attributes:
w : ndarray, shape=(model.size_joint_feature,)
loss_curve_ : list of float
objective_curve_ : list of float
timestamps_ : list of int

References
Ratliff, Bagnell, Zinkevich: (Online) Subgradient Methods for Structured Prediction, AISTATS 2007
Shalev-Shwartz, Singer, Srebro, Cotter: Pegasos: Primal Estimated sub-GrAdient SOlver for SVM, Mathematical Programming 2011
Methods
fit(X, Y[, constraints, warm_start, initialize]) : Learn parameters using subgradient descent.
get_params([deep]) : Get parameters for this estimator.
predict(X) : Predict output on examples in X.
score(X, Y) : Compute score as 1 - loss over the whole data set.
set_params(**params) : Set the parameters of this estimator.
__init__(model, max_iter=100, C=1.0, verbose=0, momentum=0.0, learning_rate='auto', n_jobs=1, show_loss_every=0, decay_exponent=1, break_on_no_constraints=True, logger=None, batch_size=None, decay_t0=10, averaging=None, shuffle=False)

fit(X, Y, constraints=None, warm_start=False, initialize=True)

Learn parameters using subgradient descent.
Parameters:
X : iterable
Y : iterable
constraints : None
warm_start : boolean, default=False
initialize : boolean, default=True


get_params(deep=True)

Get parameters for this estimator.

Parameters:
deep : boolean, optional

Returns:
params : mapping of string to any

predict(X)

Predict output on examples in X.

Parameters:
X : iterable

Returns:
Y_pred : list

score(X, Y)

Compute score as 1 - loss over the whole data set.
Returns the average accuracy (in terms of model.loss) over X and Y.

Parameters:
X : iterable
Y : iterable

Returns:
score : float
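The score computation can be sketched as follows. This is a hypothetical re-implementation that assumes the per-example loss is normalized to [0, 1] (here a normalized Hamming loss stands in for model.loss); it is not pystruct's exact code.

```python
def hamming_loss(y_true, y_pred):
    """Fraction of label positions that disagree (in [0, 1])."""
    return sum(a != b for a, b in zip(y_true, y_pred)) / len(y_true)

def score(Y_true, Y_pred, loss=hamming_loss):
    """Average accuracy: 1 - mean per-example loss."""
    losses = [loss(y, y_hat) for y, y_hat in zip(Y_true, Y_pred)]
    return 1.0 - sum(losses) / len(losses)
```

For example, predicting one of two labels wrong on one of two examples gives a score of 0.75.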

set_params(**params)

Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter>, so that it's possible to update each component of a nested object.

Returns:
self
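The <component>__<parameter> routing can be illustrated with a stand-alone sketch (hypothetical Inner/Outer classes, one level of nesting; the real behavior comes from scikit-learn's BaseEstimator):

```python
# Minimal sketch of how "<component>__<parameter>" keys are routed to
# nested components. Classes and attribute names are hypothetical.
class Inner:
    def __init__(self):
        self.alpha = 1.0

class Outer:
    def __init__(self):
        self.inner = Inner()
        self.C = 1.0

    def set_params(self, **params):
        for key, value in params.items():
            name, _, sub = key.partition("__")
            if sub:  # nested key: delegate to the named component
                setattr(getattr(self, name), sub, value)
            else:    # plain key: set on this estimator directly
                setattr(self, name, value)
        return self
```

So `est.set_params(C=10.0, inner__alpha=0.5)` updates both the top-level parameter and the nested component in one call.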
