Image Specifics

This page provides details about features specific to one or more images.

Apache Spark™

Specific Docker Image Options

  • -p 4040:4040 - The jupyter/pyspark-notebook and jupyter/all-spark-notebook images open SparkUI (Spark Monitoring and Instrumentation UI) on the default port 4040; this option maps port 4040 inside the Docker container to port 4040 on the host machine. Note that every new Spark context is assigned the next incrementing port (i.e. 4040, 4041, 4042, etc.), so it might be necessary to open multiple ports. For example: docker run -d -p 8888:8888 -p 4040:4040 -p 4041:4041 jupyter/pyspark-notebook. Alternatively, the UI port can be pinned from the notebook side, as sketched below.
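
If you prefer to publish a single port, the UI port can be fixed per session with the standard spark.ui.port property. A minimal sketch in Python (the choice of 4040 here is only an example):

from pyspark.sql import SparkSession

# Pin the Spark UI to a fixed port so that only this port needs to be published.
# 4040 is just an example value; if the port is taken, Spark retries the next ones.
spark = (
    SparkSession.builder.master("local")
    .config("spark.ui.port", "4040")
    .getOrCreate()
)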

IPython low-level output capture and forwarding

Spark images (pyspark-notebook and all-spark-notebook) are configured to disable IPython's low-level output capture and forwarding system-wide. The rationale behind this choice is that Spark logs can be verbose, especially at startup when Ivy is used to load additional jars. Those logs are still available, but only in the container's logs.

If you want them to appear in the notebook, you can override the configuration in a user-level IPython kernel profile. To do that, uncomment the following line in your ~/.ipython/profile_default/ipython_kernel_config.py and restart the kernel.

c.IPKernelApp.capture_fd_output = True

If you have no IPython profile yet, you can create a fresh one by running the following command.

ipython profile create
# [ProfileCreate] Generating default config file: '/home/jovyan/.ipython/profile_default/ipython_config.py'
# [ProfileCreate] Generating default config file: '/home/jovyan/.ipython/profile_default/ipython_kernel_config.py'

Build an Image with a Different Version of Spark

You can build a pyspark-notebook image (and also the downstream all-spark-notebook image) with a different version of Spark by overriding the default values of the following arguments at build time.

  • The Spark distribution is defined by the combination of the Spark and Hadoop versions and verified by the package checksum; see Download Apache Spark and the archive repo for more information.

    • spark_version: The Spark version to install (3.0.0).

    • hadoop_version: The Hadoop version (3.2).

    • spark_checksum: The package checksum (BFE4540...).

  • Spark can run with different OpenJDK versions.

    • openjdk_version: The version of the OpenJDK (JRE headless) distribution (11); see Ubuntu packages.

For example, here is how to build a pyspark-notebook image with Spark 2.4.7, Hadoop 2.7, and OpenJDK 8.

# From the root of the project
# Build the image with different arguments
docker build --rm --force-rm \
    -t jupyter/pyspark-notebook:spark-2.4.7 ./pyspark-notebook \
    --build-arg spark_version=2.4.7 \
    --build-arg hadoop_version=2.7 \
    --build-arg spark_checksum=0F5455672045F6110B030CE343C049855B7BA86C0ECB5E39A075FF9D093C7F648DA55DED12E72FFE65D84C32DCD5418A6D764F2D6295A3F894A4286CC80EF478 \
    --build-arg openjdk_version=8

# Check the newly built image
docker run -it --rm jupyter/pyspark-notebook:spark-2.4.7 pyspark --version

# Welcome to
#       ____              __
#      / __/__  ___ _____/ /__
#     _\ \/ _ \/ _ `/ __/  '_/
#    /___/ .__/\_,_/_/ /_/\_\   version 2.4.7
#       /_/
#
# Using Scala version 2.11.12, OpenJDK 64-Bit Server VM, 1.8.0_275

Usage Examples

The jupyter/pyspark-notebook and jupyter/all-spark-notebook images support the use of Apache Spark in Python, R, and Scala notebooks. The following sections provide some examples of how to get started using them.

Using Spark Local Mode

Spark local mode is useful for experimentation on small data when you do not have a Spark cluster available.

Local Mode in Python

In a Python notebook.

from pyspark.sql import SparkSession

# Spark session & context
spark = SparkSession.builder.master('local').getOrCreate()
sc = spark.sparkContext

# Sum of the first 100 whole numbers
rdd = sc.parallelize(range(100 + 1))
rdd.sum()
# 5050

Local Mode in R

In an R notebook with SparkR.

library(SparkR)

# Spark session & context
sc <- sparkR.session("local")

# Sum of the first 100 whole numbers
sdf <- createDataFrame(list(1:100))
dapplyCollect(sdf,
              function(x)
              { x <- sum(x)}
             )
# 5050

In an R notebook with sparklyr.

library(sparklyr)

# Spark configuration
conf <- spark_config()
# Set the catalog implementation in-memory
conf$spark.sql.catalogImplementation <- "in-memory"

# Spark session & context
sc <- spark_connect(master = "local", config = conf)

# Sum of the first 100 whole numbers
sdf_len(sc, 100, repartition = 1) %>%
    spark_apply(function(e) sum(e))
# 5050

Local Mode in Scala

The Spylon kernel instantiates a SparkContext for you in the variable sc after you configure Spark options in a %%init_spark magic cell.

%%init_spark
# Configure Spark to use a local master
launcher.master = "local"

// Sum of the first 100 whole numbers
val rdd = sc.parallelize(0 to 100)
rdd.sum()
// 5050

Connecting to a Spark Cluster in Standalone Mode

Connecting to a Spark cluster in standalone mode requires the following steps:

  1. Verify that the Docker image (check the Dockerfile) and the Spark cluster being deployed run the same version of Spark.

  2. Deploy Spark in Standalone Mode.

  3. Run the Docker container with --net=host in a location that is network addressable by all of your Spark workers. (This is a Spark networking requirement.)

Note: The following examples use the Spark master URL spark://master:7077, which should be replaced by the URL of your Spark master.

Standalone Mode in Python

The same Python version needs to be used in the notebook (where the driver is located) and on the Spark workers. The Python version used on the driver and worker sides can be adjusted by setting the environment variables PYSPARK_PYTHON and/or PYSPARK_DRIVER_PYTHON; see Spark Configuration for more information. A sketch of setting these variables is given after the example below.

from pyspark.sql import SparkSession

# Spark session & context
spark = SparkSession.builder.master('spark://master:7077').getOrCreate()
sc = spark.sparkContext

# Sum of the first 100 whole numbers
rdd = sc.parallelize(range(100 + 1))
rdd.sum()
# 5050
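
If the versions differ, one option is to export these variables in the driver environment before the session is created. A minimal sketch, assuming the interpreter lives at /usr/bin/python3 on both the driver and the workers (adjust the path to your setup):

import os

from pyspark.sql import SparkSession

# Use the same interpreter for the driver and the workers.
# The path below is an assumption about your environment.
os.environ["PYSPARK_PYTHON"] = "/usr/bin/python3"
os.environ["PYSPARK_DRIVER_PYTHON"] = "/usr/bin/python3"

spark = SparkSession.builder.master('spark://master:7077').getOrCreate()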

Standalone Mode in R

In an R notebook with SparkR.

library(SparkR)

# Spark session & context
sc <- sparkR.session("spark://master:7077")

# Sum of the first 100 whole numbers
sdf <- createDataFrame(list(1:100))
dapplyCollect(sdf,
              function(x)
              { x <- sum(x)}
             )
# 5050

In an R notebook with sparklyr.

library(sparklyr)

# Spark configuration
conf <- spark_config()
# Set the catalog implementation in-memory
conf$spark.sql.catalogImplementation <- "in-memory"

# Spark session & context
sc <- spark_connect(master = "spark://master:7077", config = conf)

# Sum of the first 100 whole numbers
sdf_len(sc, 100, repartition = 1) %>%
    spark_apply(function(e) sum(e))
# 5050

Standalone Mode in Scala

Spylon kernel instantiates a SparkContext for you in variable sc after you configure Spark options in a %%init_spark magic cell.

%%init_spark
# Configure Spark to connect to the standalone master
launcher.master = "spark://master:7077"

// Sum of the first 100 whole numbers
val rdd = sc.parallelize(0 to 100)
rdd.sum()
// 5050

Define Spark Dependencies

Spark dependencies can be declared via the spark.jars.packages property (see Spark Configuration for more information).

They can be defined as a comma-separated list of Maven coordinates when the Spark session is created.

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("elasticsearch")
    .config(
        "spark.jars.packages",
        "org.elasticsearch:elasticsearch-spark-30_2.12:7.13.0"
    )
    .getOrCreate()
)

Dependencies can also be defined in spark-defaults.conf. However, this has to be done as root, so it should only be considered when building custom images.

USER root
RUN echo "spark.jars.packages org.elasticsearch:elasticsearch-spark-30_2.12:7.13.0" >> "${SPARK_HOME}/conf/spark-defaults.conf"
USER ${NB_UID}

JARs will be downloaded dynamically when the Spark session is created and are stored by default in ${HOME}/.ivy2/jars (this location can be changed by setting spark.jars.ivy).
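
For example, a minimal sketch of changing the download location at session creation (the directory /home/jovyan/work/.ivy2 is only an illustrative choice):

from pyspark.sql import SparkSession

# Store the Ivy cache (and the downloaded jars) under a custom directory
# instead of the default ${HOME}/.ivy2. The path is only an example.
spark = (
    SparkSession.builder.appName("elasticsearch")
    .config(
        "spark.jars.packages",
        "org.elasticsearch:elasticsearch-spark-30_2.12:7.13.0"
    )
    .config("spark.jars.ivy", "/home/jovyan/work/.ivy2")
    .getOrCreate()
)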

Note: This example is given for Elasticsearch.

TensorFlow

The jupyter/tensorflow-notebook image supports the use of TensorFlow in single-machine or distributed mode.

Single Machine Mode

import tensorflow as tf

hello = tf.Variable('Hello World!')

# Graph-mode (TensorFlow 1.x style) execution: open a session,
# initialize the variables, then evaluate the tensor.
sess = tf.Session()
init = tf.global_variables_initializer()

sess.run(init)
sess.run(hello)

Distributed Mode

import tensorflow as tf

hello = tf.Variable('Hello Distributed World!')

# Start a single-process, in-process cluster and attach a session to it.
server = tf.train.Server.create_local_server()
sess = tf.Session(server.target)
init = tf.global_variables_initializer()

sess.run(init)
sess.run(hello)