Read a tabular data file into a Spark DataFrame.
spark_read_csv(sc, name, path, header = TRUE, columns = NULL,
infer_schema = TRUE, delimiter = ",", quote = "\"", escape = "\\",
charset = "UTF-8", null_value = NULL, options = list(),
repartition = 0, memory = TRUE, overwrite = TRUE)
- sc: A spark_connection.
- name: The name to assign to the newly generated table.
- path: The path to the file. Needs to be accessible from the cluster.
  Supports the "hdfs://", "s3n://" and "file://" protocols.
- header: Boolean; should the first row of data be used as a header?
- columns: A named vector specifying column types.
- infer_schema: Boolean; should column types be automatically inferred?
  Requires one extra pass over the data. Defaults to TRUE.
- delimiter: The character used to delimit each column. Defaults to ','.
- quote: The character used as a quote. Defaults to '"'.
- escape: The character used to escape other characters. Defaults to '\'.
- charset: The character set. Defaults to "UTF-8".
- null_value: The character to use for null, or missing, values.
  Defaults to NULL.
- options: A list of strings with additional options.
- repartition: The number of partitions used to distribute the
  generated table. Use 0 (the default) to avoid partitioning.
- memory: Boolean; should the data be loaded eagerly into memory? (That
  is, should the table be cached?)
- overwrite: Boolean; overwrite the table with the given name if it
  already exists?
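A minimal usage sketch tying these arguments together; the local
connection settings and the path "file:///tmp/flights.csv" are
illustrative placeholders, not prescribed values.

library(sparklyr)

sc <- spark_connect(master = "local")  # placeholder connection

# Read a comma-delimited file with a header row, letting Spark
# infer the column types
flights <- spark_read_csv(sc, name = "flights",
  path = "file:///tmp/flights.csv",    # hypothetical path
  header = TRUE, infer_schema = TRUE, delimiter = ",")

# The result is a remote table reference usable with dplyr verbs
head(flights)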
You can read data from HDFS (hdfs://), S3 (s3n://), as well as the
local file system (file://).
If you are reading from a secure S3 bucket be sure that the
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables are
both defined.
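A hedged sketch of the secure-bucket case, assuming the credentials
can be exported from R before the Spark connection is created; the
key values and bucket path are placeholders.

# Make the credentials visible before connecting to Spark
Sys.setenv(
  AWS_ACCESS_KEY_ID = "<access-key-id>",         # placeholder
  AWS_SECRET_ACCESS_KEY = "<secret-access-key>"  # placeholder
)

sc <- spark_connect(master = "local")

logs <- spark_read_csv(sc, name = "logs",
  path = "s3n://my-bucket/logs.csv")             # hypothetical bucket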
When header is FALSE, the column names are generated with a V prefix;
e.g. V1, V2, ....
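A short sketch of the headerless case; the paths and column names are
hypothetical. Passing a named vector of types through columns (with
infer_schema = FALSE) replaces the V-prefixed defaults.

# Without a header row, columns arrive as V1, V2, ...
raw <- spark_read_csv(sc, name = "raw",
  path = "file:///tmp/no_header.csv", header = FALSE)
colnames(raw)  # "V1" "V2" ...

# Supplying names and types explicitly avoids the generated names
typed <- spark_read_csv(sc, name = "typed",
  path = "file:///tmp/no_header.csv", header = FALSE,
  infer_schema = FALSE,
  columns = c(id = "integer", value = "double"))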
Other Spark serialization routines: