# Spark ML -- K-Means Clustering

K-means clustering with support for the k-means|| initialization method proposed by Bahmani et al.

```
ml_kmeans(x, formula = NULL, k = 2L, max_iter = 20L, tol = 1e-04,
  init_steps = 2L, init_mode = "k-means||", seed = NULL,
  features_col = "features", prediction_col = "prediction",
  uid = random_string("kmeans_"), ...)
```

## Arguments

- `x`: A `spark_connection`, `ml_pipeline`, or a `tbl_spark`.

- `formula`: Used when `x` is a `tbl_spark`. R formula as a character string or a formula. This is used to transform the input dataframe before fitting.

- `k`: The number of clusters to create.

- `max_iter`: The maximum number of iterations to use.

- `tol`: Param for the convergence tolerance for iterative algorithms.

- `init_steps`: Number of steps for the k-means|| initialization mode. This is an advanced setting; the default of 2 is almost always enough. Must be > 0. Default: 2.

- `init_mode`: Initialization algorithm. This can be either "random" to choose random points as initial cluster centers, or "k-means||" to use a parallel variant of k-means++ (Bahmani et al., Scalable K-Means++, VLDB 2012). Default: "k-means||".

- `seed`: A random seed. Set this value if you need your results to be reproducible across repeated calls.

- `features_col`: Features column name, as a length-one character vector. The column should be a single vector column of numeric values, usually produced by a feature transformer such as `ft_r_formula`.

- `prediction_col`: Prediction column name.

- `uid`: A character string used to uniquely identify the ML estimator.

- `...`: Optional arguments; currently unused.

## Value

The object returned depends on the class of `x`.

`spark_connection`
: When `x` is a `spark_connection`, the function returns an instance of a `ml_estimator` object. The object contains a pointer to a Spark `Estimator` object and can be used to compose `Pipeline` objects.

`ml_pipeline`
: When `x` is a `ml_pipeline`, the function returns a `ml_pipeline` with the clustering estimator appended to the pipeline.

`tbl_spark`
: When `x` is a `tbl_spark`, an estimator is constructed then immediately fit with the input `tbl_spark`, returning a clustering model.

`tbl_spark`, with `formula` or `features` specified
: When `formula` is specified, the input `tbl_spark` is first transformed using a `RFormula` transformer before being fit by the estimator. The object returned in this case is a `ml_model` which is a wrapper of a `ml_pipeline_model`. This signature does not apply to `ml_lda()`.
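
## Examples

A usage sketch for the `tbl_spark` signature with a `formula`. It assumes a local Spark installation and the sparklyr package; the dataset, table name, and chosen features are illustrative (when `iris` is copied to Spark, dots in column names are replaced with underscores):

```r
library(sparklyr)

# Connect to a local Spark instance (assumes Spark is installed locally)
sc <- spark_connect(master = "local")

# Copy the iris dataset into Spark
iris_tbl <- sdf_copy_to(sc, iris, name = "iris_tbl", overwrite = TRUE)

# Fit k-means with 3 clusters on two features via the formula interface;
# a seed is set so repeated calls give reproducible centers
kmeans_model <- ml_kmeans(iris_tbl, ~ Petal_Length + Petal_Width,
                          k = 3, seed = 1234)

# Inspect the fitted cluster centers
kmeans_model$centers

# Assign each row to a cluster (adds a "prediction" column)
predicted <- ml_predict(kmeans_model, iris_tbl)

spark_disconnect(sc)
```

Because the formula is specified, the returned object is a `ml_model` wrapping a `ml_pipeline_model`, as described in the Value section above.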

## See also

See http://spark.apache.org/docs/latest/ml-clustering.html for more information on the set of clustering algorithms.

Other ml clustering algorithms: `ml_bisecting_kmeans`, `ml_gaussian_mixture`, `ml_lda`