Reduction of multiclass classification to binary classification. Performs the reduction using the one-against-all strategy. For a multiclass classification problem with k classes, train k models (one per class). Each example is scored against all k models, and the model with the highest score is picked to label the example.
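The strategy itself can be sketched in plain R outside Spark; one_vs_rest_fit and one_vs_rest_predict below are hypothetical helper names, and glm is used as an arbitrary per-class binary scorer:

```r
# Sketch of one-against-all: fit k binary logistic models, one per class
one_vs_rest_fit <- function(data, label, features) {
  classes <- unique(data[[label]])
  lapply(setNames(classes, classes), function(cl) {
    d <- data
    d$.y <- as.integer(d[[label]] == cl)  # 1 for this class, 0 for the rest
    glm(reformulate(features, ".y"), data = d, family = binomial())
  })
}

# Score each row against all k models; the highest-scoring model labels the row
one_vs_rest_predict <- function(models, newdata) {
  scores <- sapply(models, predict, newdata = newdata, type = "response")
  colnames(scores)[max.col(scores)]
}
```

For example, one_vs_rest_predict(one_vs_rest_fit(iris, "Species", names(iris)[1:4]), iris) labels each row with the class whose binary model scores it highest.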
ml_one_vs_rest(
  x,
  formula = NULL,
  classifier = NULL,
  features_col = "features",
  label_col = "label",
  prediction_col = "prediction",
  uid = random_string("one_vs_rest_"),
  ...
)
x: A spark_connection, ml_pipeline, or tbl_spark.

formula: Used when x is a tbl_spark. An R formula as a character string or a formula object. This is equivalent to specifying a response and features.

classifier: Object of class ml_estimator; the base binary classifier used for the reduction.

features_col: Features column name, as a length-one character vector. The column should be a single vector column of numeric values. Usually this column is output by ft_r_formula.

label_col: Label column name. The column should be a numeric column. Usually this column is output by ft_r_formula.

prediction_col: Prediction column name.

uid: A character string used to uniquely identify the ML estimator.

...: Optional arguments; see Details.
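A usage sketch, assuming a local Spark connection and the built-in iris data; the choice of ml_logistic_regression as the base classifier is illustrative, not required:

```r
library(sparklyr)

sc <- spark_connect(master = "local")
iris_tbl <- sdf_copy_to(sc, iris, name = "iris", overwrite = TRUE)

# Fit one-vs-rest with a binary logistic regression as the base classifier
ovr_model <- ml_one_vs_rest(
  iris_tbl,
  formula = Species ~ .,
  classifier = ml_logistic_regression(sc)
)

# Score the training data; the predicted_label column holds the class labels
ml_predict(ovr_model, iris_tbl)

spark_disconnect(sc)
```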
The object returned depends on the class of x.

spark_connection: When x is a spark_connection, the function returns an instance of a ml_estimator object. The object contains a pointer to a Spark Predictor object and can be used to compose Pipeline objects.

ml_pipeline: When x is a ml_pipeline, the function returns a ml_pipeline with the predictor appended to the pipeline.

tbl_spark: When x is a tbl_spark, a predictor is constructed then immediately fit with the input tbl_spark, returning a prediction model.
tbl_spark, with formula specified: When formula is specified, the input tbl_spark is first transformed using an RFormula transformer before being fit by the predictor. The object returned in this case is a ml_model, which is a wrapper of a ml_pipeline_model.
When x is a tbl_spark and formula (alternatively, response and features) is specified, the function returns a ml_model object wrapping a ml_pipeline_model which contains data pre-processing transformers, the ML predictor, and, for classification models, a post-processing transformer that converts predictions into class labels. For classification, an optional argument predicted_label_col (defaults to "predicted_label") can be used to specify the name of the predicted label column. In addition to the fitted ml_pipeline_model, ml_model objects also contain a ml_pipeline object where the ML predictor stage is an estimator ready to be fit against data. This is utilized by ml_save with type = "pipeline" to facilitate model refresh workflows.
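To illustrate the pipeline composition path described above, a sketch assuming sc is an active spark_connection and iris_tbl is a tbl_spark holding the iris data:

```r
library(sparklyr)

# Build a pipeline: the one-vs-rest stage is an unfitted estimator here
pipeline <- ml_pipeline(sc) %>%
  ft_r_formula(Species ~ .) %>%
  ml_one_vs_rest(classifier = ml_logistic_regression(sc))

# Fitting the pipeline against data yields a ml_pipeline_model
pipeline_model <- ml_fit(pipeline, iris_tbl)
```

Keeping the unfitted pipeline around is what enables the model refresh workflow mentioned above: the same estimator stages can be re-fit against new data.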
See https://spark.apache.org/docs/latest/ml-classification-regression.html for more information on the set of supervised learning algorithms.
Other ml algorithms: