Spark ML -- Decision Trees
Perform classification and regression using decision trees.
ml_decision_tree_classifier(x, formula = NULL, max_depth = 5L, max_bins = 32L,
  min_instances_per_node = 1L, min_info_gain = 0, impurity = "gini",
  seed = NULL, thresholds = NULL, cache_node_ids = FALSE,
  checkpoint_interval = 10L, max_memory_in_mb = 256L,
  features_col = "features", label_col = "label",
  prediction_col = "prediction", probability_col = "probability",
  raw_prediction_col = "rawPrediction",
  uid = random_string("decision_tree_classifier_"), ...)

ml_decision_tree(x, formula = NULL,
  type = c("auto", "regression", "classification"),
  features_col = "features", label_col = "label",
  prediction_col = "prediction", variance_col = NULL,
  probability_col = "probability", raw_prediction_col = "rawPrediction",
  checkpoint_interval = 10L, impurity = "auto", max_bins = 32L,
  max_depth = 5L, min_info_gain = 0, min_instances_per_node = 1L,
  seed = NULL, thresholds = NULL, cache_node_ids = FALSE,
  max_memory_in_mb = 256L, uid = random_string("decision_tree_"),
  response = NULL, features = NULL, ...)

ml_decision_tree_regressor(x, formula = NULL, max_depth = 5L, max_bins = 32L,
  min_instances_per_node = 1L, min_info_gain = 0, impurity = "variance",
  seed = NULL, cache_node_ids = FALSE, checkpoint_interval = 10L,
  max_memory_in_mb = 256L, variance_col = NULL, features_col = "features",
  label_col = "label", prediction_col = "prediction",
  uid = random_string("decision_tree_regressor_"), ...)
Maximum depth of the tree (>= 0); that is, the maximum number of edges on any path from the root to a leaf. A depth of 0 corresponds to a single leaf node.
The maximum number of bins used for discretizing continuous features and for choosing how to split on features at each node. More bins give higher granularity.
Minimum number of instances each child must have after a split.
Minimum information gain for a split to be considered at a tree node. Should be >= 0, defaults to 0.
Criterion used for information gain calculation. Supported: "entropy" and "gini" (default) for classification, and "variance" (default) for regression. For ml_decision_tree, the default "auto" selects the impurity based on the model type.
Seed for random numbers.
Thresholds in multi-class classification used to adjust the probability of predicting each class. The array must have length equal to the number of classes, with values > 0, except that at most one value may be 0. The class with the largest value of p/t is predicted, where p is the original probability of that class and t is the class's threshold.
Set checkpoint interval (>= 1) or disable checkpoint (-1). E.g. 10 means that the cache will get checkpointed every 10 iterations, defaults to 10.
Maximum memory in MB allocated to histogram aggregation. If too small, then 1 node will be split per iteration, and its aggregates may exceed this size. Defaults to 256.
Features column name, as a length-one character vector. The column should be a single vector column of numeric values. Usually this column is output by ft_r_formula.
Label column name. The column should be a numeric column. Usually this column is output by ft_r_formula.
Prediction column name.
Column name for predicted class conditional probabilities.
Raw prediction (a.k.a. confidence) column name.
A character string used to uniquely identify the ML estimator.
Optional arguments; currently unused.
The type of model to fit.
(Optional) Column name for the biased sample variance of prediction.
(Deprecated) The name of the response column (as a length-one character vector).
(Deprecated) The name of features (terms) to use for the model fit.
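As a plain-R sketch of the thresholds rule described above (the class with the largest p/t is predicted), using made-up probability and threshold values:

```r
# Hypothetical class probabilities from a fitted model, and
# user-supplied thresholds, one per class (illustrative values).
p <- c(0.50, 0.30, 0.20)   # predicted class probabilities
t <- c(0.60, 0.20, 0.20)   # thresholds

# The predicted class maximizes p / t rather than p alone:
# here p / t is c(0.833..., 1.5, 1.0), so class 2 wins even
# though class 1 has the highest raw probability.
which.max(p / t)
#> [1] 2
```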
The object returned depends on the class of x:

- spark_connection: When x is a spark_connection, the function returns an instance of a ml_predictor object. The object contains a pointer to a Spark Predictor object and can be used to compose Pipeline objects.
- ml_pipeline: When x is a ml_pipeline, the function returns a ml_pipeline with the predictor appended to the pipeline.
- tbl_spark: When x is a tbl_spark, a predictor is constructed then immediately fit with the input tbl_spark, returning a prediction model.
- tbl_spark, with formula specified: When formula is specified, the input tbl_spark is first transformed using a RFormula transformer before being fit by the predictor. The object returned in this case is a ml_model, which is a wrapper of a ml_pipeline_model.
ml_decision_tree is a wrapper around ml_decision_tree_regressor.tbl_spark and ml_decision_tree_classifier.tbl_spark and calls the appropriate method based on model type.
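A minimal usage sketch, assuming sparklyr and a local Spark installation are available (the dataset and hyperparameter values are illustrative, not prescriptive):

```r
library(sparklyr)

# Connect to a local Spark instance and copy a data frame into it.
sc <- spark_connect(master = "local")
iris_tbl <- sdf_copy_to(sc, iris, overwrite = TRUE)

# Fit directly on a tbl_spark via the formula interface; this returns
# an ml_model wrapping the fitted pipeline model.
model <- ml_decision_tree_classifier(iris_tbl, Species ~ ., max_depth = 3)

# Score the data; prediction and probability columns are appended.
predictions <- ml_predict(model, iris_tbl)

spark_disconnect(sc)
```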
See http://spark.apache.org/docs/latest/ml-classification-regression.html for more information on the set of supervised learning algorithms.
Other ml algorithms: