Read a Parquet (https://parquet.apache.org/) file into a Spark DataFrame
spark_read_parquet(sc, name, path, options = list(), repartition = 0,
memory = TRUE, overwrite = TRUE)
- sc: A spark_connection.
- name: The name to assign to the newly generated table.
- path: The path to the file. Needs to be accessible from the cluster.
Supports the "hdfs://", "s3n://" and "file://" protocols.
- options: A list of strings with additional options. See http://spark.apache.org/docs/latest/sql-programming-guide.html#configuration.
- repartition: The number of partitions used to distribute the
generated table. Use 0 (the default) to avoid partitioning.
- memory: Boolean; should the data be loaded eagerly into memory? (That
is, should the table be cached?)
- overwrite: Boolean; overwrite the table with the given name if it
already exists?
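For example, a minimal sketch of a typical call, assuming a local Spark connection; the file path and table name below are hypothetical:

library(sparklyr)

# Connect to a local Spark instance; a real cluster would use its
# master URL instead of "local".
sc <- spark_connect(master = "local")

# Read the file into a Spark DataFrame registered as the table "flights".
flights_tbl <- spark_read_parquet(
  sc,
  name = "flights",
  path = "file:///tmp/flights.parquet",
  repartition = 0,  # 0 (the default): do not repartition
  memory = TRUE     # cache the table eagerly
)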
You can read data from HDFS (hdfs://), S3 (s3n://), as well as
the local file system (file://).
If you are reading from a secure S3 bucket be sure that the
AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables are both defined.
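One way to do this is to set both variables from R before connecting; a sketch, where the bucket name, key values, and table name are placeholders:

Sys.setenv(
  AWS_ACCESS_KEY_ID = "<access-key-id>",
  AWS_SECRET_ACCESS_KEY = "<secret-access-key>"
)

sc <- spark_connect(master = "local")

# Read directly from the (hypothetical) S3 bucket
logs_tbl <- spark_read_parquet(
  sc,
  name = "logs",
  path = "s3n://my-bucket/logs.parquet"
)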
Other Spark serialization routines: