Read a CSV file into a Spark DataFrame

Read a tabular data file into a Spark DataFrame.

Usage

spark_read_csv(sc, name = NULL, path = name, header = TRUE,
  columns = NULL, infer_schema = is.null(columns), delimiter = ",",
  quote = "\"", escape = "\\", charset = "UTF-8",
  null_value = NULL, options = list(), repartition = 0,
  memory = TRUE, overwrite = TRUE, ...)



Arguments

sc: A spark_connection.

name: The name to assign to the newly generated table.

path: The path to the file. Needs to be accessible from the cluster. Supports the "hdfs://", "s3a://" and "file://" protocols.

header: Boolean; should the first row of data be used as a header? Defaults to TRUE.

columns: A vector of column names or a named vector of column types.

infer_schema: Boolean; should column types be automatically inferred? Requires one extra pass over the data. Defaults to is.null(columns).

delimiter: The character used to delimit each column. Defaults to ','.

quote: The character used as a quote. Defaults to '"'.

escape: The character used to escape other characters. Defaults to '\'.

charset: The character set. Defaults to "UTF-8".

null_value: The character to use for null, or missing, values. Defaults to NULL.

options: A list of strings with additional options.

repartition: The number of partitions used to distribute the generated table. Use 0 (the default) to avoid partitioning.

memory: Boolean; should the data be loaded eagerly into memory? (That is, should the table be cached?)

overwrite: Boolean; overwrite the table with the given name if it already exists?

...: Optional arguments; currently unused.
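A minimal sketch pulling these arguments together, assuming a local connection; the file path, column names, and column types are placeholders:

library(sparklyr)

sc <- spark_connect(master = "local")

# Let Spark infer the schema (one extra pass over the data)
flights_tbl <- spark_read_csv(sc, name = "flights",
                              path = "file:///tmp/flights.csv")

# Or supply a named vector of column types and skip the inference pass
flights_tbl <- spark_read_csv(
  sc,
  name    = "flights",
  path    = "file:///tmp/flights.csv",
  columns = c(carrier = "character", dep_delay = "integer",
              distance = "double"),
  infer_schema = FALSE
)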


Details

You can read data from HDFS (hdfs://), S3 (s3a://), as well as the local file system (file://).
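The storage backend is carried entirely by the path argument, so the same call shape works in each case. A sketch with placeholder paths, assuming an existing connection sc:

# Local file system
local_tbl <- spark_read_csv(sc, name = "local_data",
                            path = "file:///data/input.csv")

# HDFS
hdfs_tbl <- spark_read_csv(sc, name = "hdfs_data",
                           path = "hdfs:///user/me/input.csv")

# Secure S3 via s3a (requires the credential configuration described below)
s3_tbl <- spark_read_csv(sc, name = "s3_data",
                         path = "s3a://my-bucket/input.csv")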

If you are reading from a secure S3 bucket, be sure to set spark.hadoop.fs.s3a.access.key and spark.hadoop.fs.s3a.secret.key in your spark-defaults.conf, or use any of the methods outlined in the aws-sdk documentation under "Working with AWS credentials". To work with the newer s3a:// protocol, also set the values for spark.hadoop.fs.s3a.impl and spark.hadoop.fs.s3a.endpoint. In addition, to support v4 of the S3 API, be sure to pass the driver options for the config key spark.driver.extraJavaOptions. For instructions on how to configure s3n://, check the Hadoop documentation under "s3n authentication properties".
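One way to supply these settings from sparklyr is through spark_config() before connecting; the credential values and endpoint below are placeholders:

library(sparklyr)

conf <- spark_config()

# Placeholder credentials; prefer an aws-sdk credential method in production
conf$spark.hadoop.fs.s3a.access.key <- "YOUR_ACCESS_KEY"
conf$spark.hadoop.fs.s3a.secret.key <- "YOUR_SECRET_KEY"

# Required for the newer s3a:// protocol
conf$spark.hadoop.fs.s3a.impl     <- "org.apache.hadoop.fs.s3a.S3AFileSystem"
conf$spark.hadoop.fs.s3a.endpoint <- "s3.us-east-1.amazonaws.com"

# Driver option enabling v4 of the S3 API
conf$spark.driver.extraJavaOptions <- "-Dcom.amazonaws.services.s3.enableV4=true"

sc <- spark_connect(master = "local", config = conf)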

When header is FALSE, the column names are generated with a V prefix; e.g. V1, V2, ....
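A short sketch of reading a headerless file, assuming an existing connection sc and a placeholder path:

raw_tbl <- spark_read_csv(
  sc,
  name   = "raw_rows",
  path   = "file:///tmp/no_header.csv",
  header = FALSE
)

head(raw_tbl)  # columns print as V1, V2, V3, ...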
