Copy an object into Spark, returning an R object that wraps the
copied object (typically a Spark DataFrame).
sdf_copy_to(sc, x, name, memory, repartition, overwrite, ...)
sdf_import(x, sc, name, memory, repartition, overwrite, ...)
sc - The associated Spark connection.
x - An R object from which a Spark DataFrame can be generated.
name - The name to assign to the copied table in Spark.
memory - Boolean; should the table be cached into memory?
repartition - The number of partitions to use when distributing the
table across the Spark cluster. The default (0) can be used to avoid
partitioning.
overwrite - Boolean; overwrite a pre-existing table with this name
if one already exists?
... - Optional arguments, passed to implementing methods.
sdf_copy_to is an S3 generic that, by default, dispatches to
sdf_import. Package authors who would like to implement
sdf_copy_to for a custom object type can accomplish this by
implementing the associated method on sdf_import.
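As a sketch of that extension point, the method below registers an
sdf_import implementation for a hypothetical class "my_tbl" (the class
name and the coercion via as.data.frame are illustrative assumptions,
not part of sparklyr itself):

    # Hypothetical: allow sdf_copy_to(sc, x, ...) to work when x has
    # class "my_tbl" by coercing it to a data.frame and delegating to
    # sdf_import(), which performs the actual copy into Spark.
    sdf_import.my_tbl <- function(x, sc, name, memory, repartition,
                                  overwrite, ...) {
      sdf_import(as.data.frame(x), sc, name, memory, repartition,
                 overwrite, ...)
    }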
Other Spark data frames:
sc <- spark_connect(master = "spark://HOST:PORT")
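A fuller usage sketch, assuming a reachable Spark master at
spark://HOST:PORT (substitute your own cluster address):

    library(sparklyr)

    sc <- spark_connect(master = "spark://HOST:PORT")

    # Copy the built-in mtcars data frame into Spark as a table named
    # "mtcars", caching it in memory and replacing any existing table.
    mtcars_tbl <- sdf_copy_to(sc, mtcars, name = "mtcars",
                              memory = TRUE, repartition = 0,
                              overwrite = TRUE)

    spark_disconnect(sc)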