Spark ML – Survival Regression

R/ml_regression_aft_survival_regression.R

ml_aft_survival_regression

Description

Fit a parametric survival regression model, the accelerated failure time (AFT) model (see Accelerated failure time model (Wikipedia)), based on the Weibull distribution of the survival time.
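
For readers familiar with the survival package, the fitted model is conceptually comparable to a Weibull accelerated failure time fit from survreg(). The following local-R sketch is illustrative only and is not part of the sparklyr API; it uses the ovarian data that also appears in the Examples below.

library(survival)

# Local, non-Spark analogue: a Weibull AFT fit. As in the Spark AFT model,
# coefficients are reported on the log survival-time scale.
local_fit <- survreg(
  Surv(futime, fustat) ~ ecog.ps + rx + age + resid.ds,
  data = ovarian,
  dist = "weibull"
)
summary(local_fit)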

Usage

 
ml_aft_survival_regression( 
  x, 
  formula = NULL, 
  censor_col = "censor", 
  quantile_probabilities = c(0.01, 0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95, 0.99), 
  fit_intercept = TRUE, 
  max_iter = 100L, 
  tol = 1e-06, 
  aggregation_depth = 2, 
  quantiles_col = NULL, 
  features_col = "features", 
  label_col = "label", 
  prediction_col = "prediction", 
  uid = random_string("aft_survival_regression_"), 
  ... 
) 
 
ml_survival_regression( 
  x, 
  formula = NULL, 
  censor_col = "censor", 
  quantile_probabilities = c(0.01, 0.05, 0.1, 0.25, 0.5, 0.75, 0.9, 0.95, 0.99), 
  fit_intercept = TRUE, 
  max_iter = 100L, 
  tol = 1e-06, 
  aggregation_depth = 2, 
  quantiles_col = NULL, 
  features_col = "features", 
  label_col = "label", 
  prediction_col = "prediction", 
  uid = random_string("aft_survival_regression_"), 
  response = NULL, 
  features = NULL, 
  ... 
) 

Arguments

Arguments Description
x A spark_connection, ml_pipeline, or a tbl_spark.
formula Used when x is a tbl_spark. R formula as a character string or a formula. This is used to transform the input dataframe before fitting, see ft_r_formula for details.
censor_col Censor column name. The value of this column should be 0 or 1: a value of 1 means the event has occurred (uncensored); a value of 0 means the observation is censored.
quantile_probabilities Quantile probabilities array. Values of the quantile probabilities array should be in the range (0, 1) and the array should be non-empty.
fit_intercept Boolean; should the model be fit with an intercept term?
max_iter The maximum number of iterations to use.
tol Convergence tolerance for iterative algorithms.
aggregation_depth (Spark 2.1.0+) Suggested depth for treeAggregate (>= 2).
quantiles_col Quantiles column name. If set, this column will contain the quantiles corresponding to quantile_probabilities (see the sketch after this table).
features_col Features column name, as a length-one character vector. The column should be a single vector column of numeric values. Usually this column is output by ft_r_formula.
label_col Label column name. The column should be a numeric column. Usually this column is output by ft_r_formula.
prediction_col Prediction column name.
uid A character string used to uniquely identify the ML estimator.
... Optional arguments; see Details.
response (Deprecated) The name of the response column (as a length-one character vector).
features (Deprecated) The name of features (terms) to use for the model fit.
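
As a sketch of how censor_col, quantile_probabilities, and quantiles_col work together (this reuses the ovarian_tbl created in the Examples below; whether the quantiles column is carried through by ml_predict() may depend on the sparklyr and Spark versions):

# fustat is the 0/1 censoring indicator: 1 = death observed (uncensored), 0 = censored.
fit_q <- ovarian_tbl %>%
  ml_aft_survival_regression(
    futime ~ ecog_ps + rx + age + resid_ds,
    censor_col = "fustat",
    quantile_probabilities = c(0.25, 0.5, 0.75),
    quantiles_col = "quantiles"  # request per-row predicted quantiles
  )

# The prediction output should gain a 'quantiles' array column alongside 'prediction'.
ml_predict(fit_q, ovarian_tbl)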

Details

When x is a tbl_spark and formula (alternatively, response and features) is specified, the function returns an ml_model object wrapping an ml_pipeline_model which contains data pre-processing transformers, the ML predictor, and, for classification models, a post-processing transformer that converts predictions into class labels. For classification, an optional argument predicted_label_col (defaults to "predicted_label") can be used to specify the name of the predicted label column. In addition to the fitted ml_pipeline_model, ml_model objects also contain an ml_pipeline object where the ML predictor stage is an estimator ready to be fit against data. This is utilized by ml_save with type = "pipeline" to facilitate model refresh workflows. ml_survival_regression() is an alias for ml_aft_survival_regression() for backwards compatibility.
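
A minimal sketch of the model refresh workflow described above, reusing the sur_reg model fitted in the Examples below; the "aft_pipeline" path is hypothetical:

# Persist the un-fit pipeline (estimator stage included) rather than the
# fitted pipeline model, so it can be re-fit later against refreshed data.
ml_save(sur_reg, "aft_pipeline", type = "pipeline", overwrite = TRUE)

# Reload the pipeline and re-fit it against new or updated data.
reloaded <- ml_load(sc, "aft_pipeline")
refreshed <- ml_fit(reloaded, ovarian_tbl)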

Value

The object returned depends on the class of x.

  • spark_connection: When x is a spark_connection, the function returns an instance of an ml_estimator object. The object contains a pointer to a Spark Predictor object and can be used to compose Pipeline objects (see the sketch after this list).

  • ml_pipeline: When x is an ml_pipeline, the function returns an ml_pipeline with the predictor appended to the pipeline.

  • tbl_spark: When x is a tbl_spark, a predictor is constructed then immediately fit with the input tbl_spark, returning a prediction model.

  • tbl_spark, with formula specified: When formula is specified, the input tbl_spark is first transformed using an RFormula transformer before being fit by the predictor. The object returned in this case is an ml_model, which is a wrapper around an ml_pipeline_model.
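
For the spark_connection and ml_pipeline cases, a brief sketch of composing the estimator into a pipeline; column names follow the ovarian example below.

# Bare estimator constructed from the connection; nothing is fit yet.
aft_estimator <- ml_aft_survival_regression(sc, censor_col = "fustat")

# The same estimator appended to a pipeline, with ft_r_formula handling the
# features/label pre-processing.
aft_pipeline <- ml_pipeline(sc) %>%
  ft_r_formula(futime ~ ecog_ps + rx + age + resid_ds) %>%
  ml_aft_survival_regression(censor_col = "fustat")

aft_model <- ml_fit(aft_pipeline, ovarian_tbl)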

Examples

 
 
library(survival) 
library(sparklyr) 
 
sc <- spark_connect(master = "local") 
ovarian_tbl <- sdf_copy_to(sc, ovarian, name = "ovarian_tbl", overwrite = TRUE) 
 
partitions <- ovarian_tbl %>% 
  sdf_random_split(training = 0.7, test = 0.3, seed = 1111) 
 
ovarian_training <- partitions$training 
ovarian_test <- partitions$test 
 
sur_reg <- ovarian_training %>% 
  ml_aft_survival_regression(futime ~ ecog_ps + rx + age + resid_ds, censor_col = "fustat") 
 
pred <- ml_predict(sur_reg, ovarian_test) 
pred 
#> # Source: spark<?> [?? x 7]
#>   futime fustat   age resid_ds    rx ecog_ps prediction
#>    <dbl>  <dbl> <dbl>    <dbl> <dbl>   <dbl>      <dbl>
#> 1    115      1  74.5        2     1       1       191.
#> 2    353      1  63.2        1     2       2      1369.
#> 3    377      0  58.3        1     2       1      1751.
#> 4    475      1  59.9        2     2       2       818.
#> 5    744      0  50.1        1     2       1      2792.
#> 6    770      0  57.1        2     2       1       928.

See Also

See https://spark.apache.org/docs/latest/ml-classification-regression.html for more information on the set of supervised learning algorithms. Other ml algorithms: ml_decision_tree_classifier(), ml_gbt_classifier(), ml_generalized_linear_regression(), ml_isotonic_regression(), ml_linear_regression(), ml_linear_svc(), ml_logistic_regression(), ml_multilayer_perceptron_classifier(), ml_naive_bayes(), ml_one_vs_rest(), ml_random_forest_classifier()