Feature Transformation -- Bucketizer

Similar to R's cut() function, this transforms a numeric column into a column of discrete buckets, with the bucket boundaries specified through the splits parameter.

ft_bucketizer(x, input.col, output.col, splits, ...)

Arguments

x

An object (usually a spark_tbl) coercible to a Spark DataFrame.

input.col

The name of the input column.

output.col

The name of the output column.

splits

A numeric vector of cutpoints, indicating the bucket boundaries. The values must be strictly increasing, and n + 1 split points define n buckets; a value x falls into a bucket when it lies in [lower, upper), except for the last bucket, which also includes its upper boundary. Use -Inf and Inf as the outer boundaries to cover all possible values; otherwise, values outside the splits range will raise an error.

...

Optional arguments; currently unused.
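Examples

A minimal sketch of typical usage, assuming a local Spark installation and the mtcars dataset; the column name hp_bucket is illustrative. With the splits below, hp_bucket contains 0, 1, or 2 for the three horsepower ranges.

```r
library(sparklyr)
library(dplyr)

# Connect to a local Spark instance
sc <- spark_connect(master = "local")

# Copy mtcars into Spark
mtcars_tbl <- copy_to(sc, mtcars)

# Bucket horsepower into three ranges; -Inf/Inf ensure
# every value falls into some bucket
mtcars_tbl %>%
  ft_bucketizer(
    input.col  = "hp",
    output.col = "hp_bucket",
    splits     = c(-Inf, 100, 200, Inf)
  )

spark_disconnect(sc)
```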

See also

See http://spark.apache.org/docs/latest/ml-features for more information on the set of transformations available for DataFrame columns in Spark.

Other feature transformation routines: ft_binarizer, ft_count_vectorizer, ft_discrete_cosine_transform, ft_elementwise_product, ft_index_to_string, ft_one_hot_encoder, ft_quantile_discretizer, ft_regex_tokenizer, ft_stop_words_remover, ft_string_indexer, ft_tokenizer, ft_vector_assembler, sdf_mutate