These routines allow you to manage your connections to Spark.
Usage

```r
spark_connect(
  master,
  spark_home = Sys.getenv("SPARK_HOME"),
  method = c("shell", "livy", "databricks", "test", "qubole"),
  app_name = "sparklyr",
  version = NULL,
  config = spark_config(),
  extensions = sparklyr::registered_extensions(),
  packages = NULL,
  scala_version = NULL,
  ...
)

spark_connection_is_open(sc)

spark_disconnect(sc, ...)

spark_disconnect_all()

spark_submit(
  master,
  file,
  spark_home = Sys.getenv("SPARK_HOME"),
  app_name = "sparklyr",
  version = NULL,
  config = spark_config(),
  extensions = sparklyr::registered_extensions(),
  scala_version = NULL,
  ...
)
```
Arguments

master: Spark cluster url to connect to. Use "local" to connect to a local instance of Spark installed via spark_install().
spark_home: The path to a Spark installation. Defaults to the path provided by the SPARK_HOME environment variable.
method: The method used to connect to Spark. The default connection method is "shell", which connects through spark-submit; use "livy" for remote connections over HTTP, or "databricks" when connecting to a Databricks cluster.
app_name: The application name to be used while running in the Spark cluster.
version: The version of Spark to use. Required for "local" Spark connections, optional otherwise.
config: Custom configuration for the generated Spark connection. See spark_config() for details; a combined example appears after this list.
extensions: Extension R packages to enable for this connection. By default, all packages enabled through sparklyr::register_extension() are passed here.
packages: A list of Spark packages to load, for example "delta" or "kafka" (see the sketch after this list).
scala_version: Load the sparklyr jar file built with the specified version of Scala. This currently only matters for Spark 2.4, where sparklyr assumes by default that Spark 2.4 on the current host is built with Scala 2.11; `scala_version = '2.12'` is therefore needed when connecting to Spark 2.4 built with Scala 2.12.
...: Optional arguments; currently unused.

sc: A spark_connection.
file: Path to R source file to submit for batch execution.
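Putting these arguments together, here is a minimal sketch of a local connection with a custom configuration and an extra package; the Spark version, the driver-memory value, and the "kafka" package are illustrative choices, not requirements:

```r
library(sparklyr)

# Customize the connection via spark_config(); the driver-memory
# value below is just an example setting
conf <- spark_config()
conf$`sparklyr.shell.driver-memory` <- "2G"

# version is required for "local" connections; "2.4.3" is an
# illustrative choice, as is the "kafka" package
sc <- spark_connect(
  master   = "local",
  version  = "2.4.3",
  config   = conf,
  packages = "kafka"
)

spark_disconnect(sc)
```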
method = "livy", jars are downloaded from GitHub but the path
to a local
sparklyr JAR can also be specified through the
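As a sketch of the Livy path described above: the endpoint URL and the JAR location are placeholders, and the connection assumes a Livy server is already running and reachable.

```r
library(sparklyr)

# Optionally point Livy at a locally staged sparklyr JAR instead of
# downloading it from GitHub; the path below is hypothetical
conf <- spark_config()
conf$livy.jars <- "hdfs:///jars/sparklyr-2.4-2.11.jar"

# For Livy connections, master is the URL of the Livy server
sc <- spark_connect(
  master = "http://livy-host:8998",
  method = "livy",
  config = conf
)

spark_disconnect(sc)
```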
Examples

```r
sc <- spark_connect(master = "spark://HOST:PORT")
connection_is_open(sc)
#> TRUE
spark_disconnect(sc)
```
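spark_submit() takes the same connection arguments but runs an R script as a batch job instead of opening an interactive connection; analysis.R below is a hypothetical script path:

```r
# Submit an R script for batch execution on the cluster
spark_submit(
  master = "spark://HOST:PORT",
  file   = "analysis.R"
)
```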