Feature Transformation – StandardScaler (Estimator)

R/ml_feature_standard_scaler.R

ft_standard_scaler

Description

Standardizes features by removing the mean and scaling to unit variance, using column summary statistics computed on the samples in the training set. The “unit std” is the corrected sample standard deviation, i.e. the square root of the unbiased sample variance.
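The corrected sample standard deviation follows the same convention as base R's sd(), which also uses the unbiased (n - 1 denominator) variance; a minimal base-R illustration:

x <- c(1, 2, 3, 4, 5)
sqrt(sum((x - mean(x))^2) / (length(x) - 1)) # corrected sample sd
sd(x)                                        # identical: sd() also divides by n - 1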

Usage

 
ft_standard_scaler( 
  x, 
  input_col = NULL, 
  output_col = NULL, 
  with_mean = FALSE, 
  with_std = TRUE, 
  uid = random_string("standard_scaler_"), 
  ... 
) 

Arguments

Argument Description
x A spark_connection, ml_pipeline, or a tbl_spark.
input_col The name of the input column.
output_col The name of the output column.
with_mean Whether to center the data with the mean before scaling. Centering produces dense output, so take care when applying to sparse input (see the sketch after this table). Default: FALSE
with_std Whether to scale the data to unit standard deviation. Default: TRUE
uid A character string used to uniquely identify the feature transformer.
... Optional arguments; currently unused.
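As a base-R sketch of what each flag combination computes per column (Spark applies the same arithmetic element-wise to each assembled feature vector; the variable x here is purely illustrative):

x <- c(1, 2, 3, 4, 5)
(x - mean(x)) / sd(x) # with_mean = TRUE,  with_std = TRUE
x / sd(x)             # with_mean = FALSE, with_std = TRUE (default; zeros stay zero, preserving sparsity)
x - mean(x)           # with_mean = TRUE,  with_std = FALSE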

Details

In the case where x is a tbl_spark, the estimator fits against x to obtain a transformer, which is then immediately used to transform x, returning a tbl_spark.
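The fit-then-transform steps can also be spelled out explicitly with ml_fit() and ml_transform(). A minimal sketch, assuming an open connection sc and a hypothetical tbl_spark named iris_assembled that already contains a "features_temp" vector column (e.g. produced by ft_vector_assembler()):

scaler <- ft_standard_scaler(
  sc,
  input_col = "features_temp",
  output_col = "features",
  with_mean = TRUE
)
scaler_model <- ml_fit(scaler, iris_assembled) # estimator -> fitted transformer
ml_transform(scaler_model, iris_assembled)     # apply it; returns a tbl_spark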

Value

The object returned depends on the class of x.

  • spark_connection: When x is a spark_connection, the function returns an ml_transformer, an ml_estimator, or one of their subclasses. The object contains a pointer to a Spark Transformer or Estimator object and can be used to compose Pipeline objects.

  • ml_pipeline: When x is an ml_pipeline, the function returns an ml_pipeline with the transformer or estimator appended to the pipeline (see the sketch after this list).

  • tbl_spark: When x is a tbl_spark, a transformer is constructed and immediately applied to the input tbl_spark, returning a tbl_spark.
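A minimal sketch of the ml_pipeline case, borrowing the connection sc, the iris_tbl data, and the features vector from the Examples section below:

pipeline <- ml_pipeline(sc) %>%
  ft_vector_assembler(input_cols = features, output_col = "features_temp") %>%
  ft_standard_scaler(
    input_col = "features_temp",
    output_col = "features",
    with_mean = TRUE
  )
pipeline_model <- ml_fit(pipeline, iris_tbl) # fits every stage that requires fitting
ml_transform(pipeline_model, iris_tbl)       # returns a tbl_spark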

Examples

library(sparklyr)
 
sc <- spark_connect(master = "local") 
iris_tbl <- sdf_copy_to(sc, iris, name = "iris_tbl", overwrite = TRUE) 
 
features <- c("Sepal_Length", "Sepal_Width", "Petal_Length", "Petal_Width") 
 
iris_tbl %>% 
  ft_vector_assembler( 
    input_col = features, 
    output_col = "features_temp" 
  ) %>% 
  ft_standard_scaler( 
    input_col = "features_temp", 
    output_col = "features", 
    with_mean = TRUE 
  ) 
#> # Source: spark<?> [?? x 7]
#>    Sepal_L…¹ Sepal…² Petal…³ Petal…⁴ Species featu…⁵ featu…⁶
#>        <dbl>   <dbl>   <dbl>   <dbl> <chr>   <list>  <list> 
#>  1       5.1     3.5     1.4     0.2 setosa  <dbl>   <dbl>  
#>  2       4.9     3       1.4     0.2 setosa  <dbl>   <dbl>  
#>  3       4.7     3.2     1.3     0.2 setosa  <dbl>   <dbl>  
#>  4       4.6     3.1     1.5     0.2 setosa  <dbl>   <dbl>  
#>  5       5       3.6     1.4     0.2 setosa  <dbl>   <dbl>  
#>  6       5.4     3.9     1.7     0.4 setosa  <dbl>   <dbl>  
#>  7       4.6     3.4     1.4     0.3 setosa  <dbl>   <dbl>  
#>  8       5       3.4     1.5     0.2 setosa  <dbl>   <dbl>  
#>  9       4.4     2.9     1.4     0.2 setosa  <dbl>   <dbl>  
#> 10       4.9     3.1     1.5     0.1 setosa  <dbl>   <dbl>  
#> # … with more rows, and abbreviated variable names
#> #   ¹​Sepal_Length, ²​Sepal_Width, ³​Petal_Length,
#> #   ⁴​Petal_Width, ⁵​features_temp, ⁶​features
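To inspect the statistics the estimator learned, fit it explicitly and query the underlying Spark StandardScalerModel through sparklyr's invoke() API. A hedged sketch building on the objects above (the mean and std accessors belong to Spark's StandardScalerModel; the exact chain below is an assumption about how they surface through spark_jobj()):

assembled <- iris_tbl %>%
  ft_vector_assembler(input_cols = features, output_col = "features_temp")
scaler_model <- ml_fit(
  ft_standard_scaler(sc, input_col = "features_temp", output_col = "features", with_mean = TRUE),
  assembled
)
# Per-column means and standard deviations seen during fitting (assumed accessor chain):
spark_jobj(scaler_model) %>% invoke("mean") %>% invoke("toArray")
spark_jobj(scaler_model) %>% invoke("std") %>% invoke("toArray")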

See Also

See https://spark.apache.org/docs/latest/ml-features.html for more information on the set of transformations available for DataFrame columns in Spark. Other feature transformers: ft_binarizer(), ft_bucketizer(), ft_chisq_selector(), ft_count_vectorizer(), ft_dct(), ft_elementwise_product(), ft_feature_hasher(), ft_hashing_tf(), ft_idf(), ft_imputer(), ft_index_to_string(), ft_interaction(), ft_lsh, ft_max_abs_scaler(), ft_min_max_scaler(), ft_ngram(), ft_normalizer(), ft_one_hot_encoder_estimator(), ft_one_hot_encoder(), ft_pca(), ft_polynomial_expansion(), ft_quantile_discretizer(), ft_r_formula(), ft_regex_tokenizer(), ft_robust_scaler(), ft_sql_transformer(), ft_stop_words_remover(), ft_string_indexer(), ft_tokenizer(), ft_vector_assembler(), ft_vector_indexer(), ft_vector_slicer(), ft_word2vec()