Customizing your environment for z/OS Spark
Before you can use z/OS® Spark, you must customize your environment for it and its dependent products. Complete this task after you install Open Data Analytics for z/OS but before your first use of it.
Before you begin
Follow the instructions in Installing IBM Open Data Analytics for z/OS to install Open Data Analytics for z/OS on your system.
About this task
The default program location for z/OS Spark is /usr/lpp/IBM/izoda/spark/.
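If you want to confirm that the product files are in place before you begin customization, a quick check from an OMVS or SSH shell might look like the following. This is a minimal sketch: the path is the default location named above and may differ at your site.

```shell
# Sketch: confirm the default z/OS Spark install directory exists.
# /usr/lpp/IBM/izoda/spark is the default location; adjust SPARK_DIR
# if your site installed the product elsewhere.
SPARK_DIR=/usr/lpp/IBM/izoda/spark
if [ -d "$SPARK_DIR" ]; then
  echo "z/OS Spark directory found: $SPARK_DIR"
  ls "$SPARK_DIR"
else
  echo "z/OS Spark directory not found: $SPARK_DIR" >&2
fi
```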
Procedure
Complete the following tasks to customize your environment for z/OS Spark. You can use the z/OS Spark configuration workflow, as described in Using the Spark configuration workflow, or you can follow these individual steps. You need access to a TSO session and an OMVS session (preferably through a PuTTY terminal):
- Verifying the Java and bash environments
- Verifying configuration requirements for z/OS UNIX System Services
- Setting up a user ID for use with z/OS Spark
- Verifying the env command path
- Customizing the Apache Spark directory structure
- Configuring networking for Apache Spark
- Configuring z/OS Spark client authentication
- Configuring IBM Java
- Creating jobs to start and stop Spark processes
- Setting up started tasks to start and stop Spark processes
- Configuring memory and CPU options
- Configuring z/OS workload management for Apache Spark
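As a quick preflight for the first few tasks in the list above (verifying the Java and bash environments and the env command path), you might run checks like the following from a shell session. This is a sketch, not an official verification script; the checks only report status, and the exact versions required depend on your z/OS Spark release.

```shell
# Sketch: preflight checks for the environment-verification tasks above.
# Spark's scripts are written for bash and locate interpreters through
# /usr/bin/env, and z/OS Spark requires IBM 64-bit Java.

# bash must be available on the PATH.
if command -v bash >/dev/null 2>&1; then
  echo "bash found at: $(command -v bash)"
else
  echo "bash not found on PATH" >&2
fi

# Spark's scripts expect the env command at /usr/bin/env.
if [ -x /usr/bin/env ]; then
  echo "env command OK: /usr/bin/env"
else
  echo "WARNING: /usr/bin/env is missing or not executable" >&2
fi

# Java must be available on the PATH; report the version string if so.
if command -v java >/dev/null 2>&1; then
  java -version 2>&1 | head -n 1
else
  echo "java not found on PATH; check JAVA_HOME and PATH" >&2
fi
```

If any check reports a problem, resolve it as part of the corresponding verification task before continuing with the rest of the procedure.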
Results
You have customized your environment for z/OS Spark.
What to do next
Continue with Customizing the Data Service server.