Assigning job names to Spark processes

Assigning unique job names to Spark processes helps you identify the purpose of each process, correlate a process with an application, and group processes into a WLM service class.

You can use Spark properties and the _BPX_JOBNAME environment variable to assign job names to executors, drivers, and other Spark processes. If you use started tasks to start Spark processes, you can instead use the job names that you set up for the started tasks.

Note: The user ID of the Spark worker daemon requires READ access to the BPX.JOBNAME profile in the FACILITY class to change the job names of the executors and drivers.
Note: A specification that yields a job name longer than 8 characters raises an exception.
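
For example, a minimal sketch of both approaches might look like the following. The job name values (SPKWORK, SPKEXEC) are illustrative, and the use of the spark.executorEnv property to pass _BPX_JOBNAME to executors is an assumption; verify the configuration properties that are documented for your Spark level before relying on it.

   # In $SPARK_CONF_DIR/spark-env.sh (sketch; job name is an example):
   # Processes started from this environment (for example, the worker
   # daemon) run under this job name; it must be 8 characters or fewer.
   export _BPX_JOBNAME=SPKWORK

   # In $SPARK_CONF_DIR/spark-defaults.conf (sketch):
   # Pass _BPX_JOBNAME to each executor so that it runs under its own
   # job name. The worker daemon's user ID needs READ access to the
   # BPX.JOBNAME profile in the FACILITY class, as noted above.
   spark.executorEnv._BPX_JOBNAME   SPKEXEC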