sc not defined in Jupyter notebook

Troubleshooting


Problem

I have a notebook in which I add:
sparkSession = SparkSession(sc).builder.getOrCreate()
I get:
NameError: name 'sc' is not defined
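
For reference, the idiomatic way to obtain a session in PySpark does not require the pre-defined sc variable at all (a minimal sketch; the variable names spark and sc here are illustrative):

from pyspark.sql import SparkSession

# builder.getOrCreate() reuses the kernel's existing Spark context if one
# is running, or starts a new one; it does not reference sc directly.
spark = SparkSession.builder.getOrCreate()

# The underlying SparkContext is then available as an attribute:
sc = spark.sparkContext

In this case, however, the NameError indicates that the notebook kernel never created the sc variable it normally injects at startup.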

Diagnosing The Problem

The user had imported a custom Hive JDBC JAR, which interfered with the Spark kernel startup, so the sc (SparkContext) variable that the kernel normally injects was never defined.

Resolving The Problem

Navigate to the Admin Dashboard. Under Scripts, select the script for adding/removing custom JDBC drivers and JARs, and remove the Hive JDBC JAR. Restart the notebook environment; Insert-to-Code for Spark DataFrames should then work as expected.
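
After the restart, a quick way to confirm the kernel injected the Spark context (a minimal check; output values will vary by environment):

# If sc is defined, these print the Spark version and context type
# instead of raising NameError.
print(sc.version)
print(type(sc))  # <class 'pyspark.context.SparkContext'>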

Document Location

Worldwide

[{"Business Unit":{"code":"BU059","label":"IBM Software w\/o TPS"},"Product":{"code":"SSHGWL","label":"IBM Watson Studio Local"},"Component":"","Platform":[{"code":"PF025","label":"Platform Independent"}],"Version":"All Versions","Edition":"","Line of Business":{"code":"LOB10","label":"Data and AI"}}]

Document Information

Modified date:
30 December 2019

UID

ibm10878939