Restricting the ability to start or stop the Spark cluster
Complete this task to restrict the ability to start or stop the Spark cluster components (such as the Master and Worker) via the shell scripts located in the $SPARK_HOME/sbin directory.
About this task
Apache Spark provides Bash shell scripts for starting the individual components, such as the Master, the Worker, or the History server. However, when resources for the Spark cluster are carefully allocated via WLM, extra cluster components can disrupt the expected port, storage, and zIIP allocations.
Setting the Spark “sbin” directory contents to be unreadable and unexecutable
By default, the contents of the $SPARK_HOME/sbin directory have permissions of “755” (“rwxr-xr-x”). To restrict users from starting their own Spark cluster, change the permissions of the directory and its contents to “700” (“rwx------”). This prevents users from using the scripts within the directory to start Spark cluster components, from seeing the contents of the $SPARK_HOME/sbin directory, and from copying the scripts to their own directories (where they could set their own permissions and bypass the execution restrictions). Alternatively, you can use permission value “750” (“rwxr-x---”) in the commands that follow to allow user IDs within the Spark group to start or stop the Spark cluster components (and potentially start their own cluster, as they might when testing a new spark-defaults.conf setting, for example).
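As a quick illustration of the two permission values, the sketch below applies them to a scratch directory and inspects the result. The /tmp path is only an example; apply the real change to $SPARK_HOME/sbin as shown in the commands that follow.

```shell
# Illustration only: apply each mode to a scratch directory and inspect it.
# Do this against the real $SPARK_HOME/sbin, not /tmp, in practice.
mkdir -p /tmp/sbin-demo
chmod 700 /tmp/sbin-demo          # rwx------ : owner only
ls -ld /tmp/sbin-demo
chmod 750 /tmp/sbin-demo          # rwxr-x--- : owner plus group
ls -ld /tmp/sbin-demo
```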
The following commands alter the permissions of sbin. For example, if the file system containing Spark is SYS1.SPARK.ZFS and the mount point is /usr/lpp/IBM/izoda/spark/, issue the commands from TSO OMVS or a PuTTY session, logged in as the owner of the SPARK_HOME directory.

To restrict the permissions:

tsocmd "mount filesystem('SYS1.SPARK.ZFS') type(zfs) mode(rdwr) mountpoint('/usr/lpp/IBM/izoda/spark')"
cd $SPARK_HOME
chmod -R 700 ./sbin
tsocmd "mount filesystem('SYS1.SPARK.ZFS') type(zfs) mode(read) mountpoint('/usr/lpp/IBM/izoda/spark')"
To restore the original permissions:

tsocmd "mount filesystem('SYS1.SPARK.ZFS') type(zfs) mode(rdwr) mountpoint('/usr/lpp/IBM/izoda/spark')"
cd $SPARK_HOME
chmod -R 755 ./sbin
tsocmd "mount filesystem('SYS1.SPARK.ZFS') type(zfs) mode(read) mountpoint('/usr/lpp/IBM/izoda/spark')"
To restrict the permissions while still allowing a specific script (in this example, spark-configuration-checker.sh) to be executed:

tsocmd "mount filesystem('SYS1.SPARK.ZFS') type(zfs) mode(rdwr) mountpoint('/usr/lpp/IBM/izoda/spark')"
cd $SPARK_HOME
chmod -R 700 ./sbin
chmod 711 ./sbin
chmod 755 ./sbin/spark-configuration-checker.sh
tsocmd "mount filesystem('SYS1.SPARK.ZFS') type(zfs) mode(read) mountpoint('/usr/lpp/IBM/izoda/spark')"
Some of these commands use the -R (recursive) flag; others do not. You can reuse the "755" form for any other shell scripts in that directory for which you want to allow execution. Note that some scripts invoke other scripts in the ./sbin directory; those scripts need their permissions changed as well.
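One way to find out which scripts call into other ./sbin scripts is to search their text for such references. The helper below is a sketch; the function name is ours, not part of Spark.

```shell
# Sketch: list shell scripts in a directory that reference other scripts
# in an sbin directory (e.g. start-master.sh invoking sbin/spark-daemon.sh).
# The function name is illustrative, not part of Spark.
list_dependent_scripts() {
  grep -l 'sbin/' "$1"/*.sh 2>/dev/null
}
# Example: list_dependent_scripts "$SPARK_HOME/sbin"
```

Any script this reports invokes another ./sbin script, so the invoked script needs execute permission too.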
Using standard z/OS MVS command authorization
Most installations have tight controls over which operators are allowed to use the START, STOP, and CANCEL commands. If you used the supplied SAZKSAMP examples to create started-task JCL for the Spark Master and Worker, you may want to update the RACF controls over the START and STOP of those started tasks to further secure the Spark cluster environment. Refer to “MVS Commands, RACF Access Authorities, and Resource Names” in z/OS MVS System Commands. For example, you might need to define new resources such as MVS.START.STC.AZKMSTR and MVS.STOP.STC.AZKMSTR to control the Master. (Similar names would exist for the Worker; for example, MVS.START.STC.AZKWRKR.)
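As a sketch of what such RACF definitions might look like, the commands below protect the Master's START and STOP commands and permit them to a group. The group name SPARKGRP is an assumption for illustration; your installation's naming conventions and required access levels may differ, so consult your security administrator and the referenced documentation.

```
RDEFINE OPERCMDS MVS.START.STC.AZKMSTR UACC(NONE)
RDEFINE OPERCMDS MVS.STOP.STC.AZKMSTR  UACC(NONE)
PERMIT MVS.START.STC.AZKMSTR CLASS(OPERCMDS) ID(SPARKGRP) ACCESS(UPDATE)
PERMIT MVS.STOP.STC.AZKMSTR  CLASS(OPERCMDS) ID(SPARKGRP) ACCESS(UPDATE)
SETROPTS RACLIST(OPERCMDS) REFRESH
```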