Jupyter kernels and notebook environments (Watson Studio)
Jupyter notebooks run in kernels in Jupyter notebook environments. If a notebook uses Spark APIs, its kernel runs in a Spark environment.
The number of Jupyter kernels started in an environment depends on the environment type:
- CPU or GPU environments
When you open a notebook in edit mode, exactly one interactive session connects to a Jupyter kernel for the notebook language and the environment runtime that you select. The runtime is started per user, not per notebook. This means that if you open a second notebook with the same environment template, a second kernel is started in the same runtime, and the two kernels share the runtime's resources. If you want to avoid sharing runtime resources, associate each notebook with its own environment template.
Important: Stopping a notebook kernel doesn't stop the environment runtime in which the kernel was started, because other notebook kernels might still be active in that runtime. Stop an environment runtime only if you are sure that no kernels are active.
- Spark environments
When you open a notebook in edit mode in a Spark environment, a dedicated Spark cluster is started, even if another notebook was opened in the same Spark environment template. Each notebook kernel has its own Spark driver and set of Spark executors. No resources are shared.
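The difference in resource sharing between the two environment types can be sketched with a small model. All class and function names here are illustrative, not Watson Studio APIs; the sketch only mirrors the mapping of notebooks to runtimes described above.

```python
# Toy model of how notebook kernels map to environment runtimes
# (illustrative only; these names are not Watson Studio APIs).

class Runtime:
    """A running environment instance that can host one or more kernels."""
    def __init__(self, template):
        self.template = template
        self.kernels = []

def open_notebook(notebook, template, runtimes, dedicated=False):
    """Start a kernel for `notebook`.

    CPU/GPU environments (dedicated=False): reuse the user's existing
    runtime for the same environment template, so kernels share resources.
    Spark environments (dedicated=True): always start a new cluster.
    """
    if not dedicated:
        for rt in runtimes:
            if rt.template == template:
                rt.kernels.append(notebook)   # second kernel, same runtime
                return rt
    rt = Runtime(template)
    rt.kernels.append(notebook)
    runtimes.append(rt)
    return rt

runtimes = []
# Two notebooks with the same CPU/GPU template share one runtime.
a = open_notebook("nb1", "cpu-small", runtimes)
b = open_notebook("nb2", "cpu-small", runtimes)
assert a is b and len(runtimes) == 1

# Two notebooks with the same Spark template each get a dedicated cluster.
c = open_notebook("nb3", "spark-default", runtimes, dedicated=True)
d = open_notebook("nb4", "spark-default", runtimes, dedicated=True)
assert c is not d and len(runtimes) == 3
```

The assertions at the end show the two behaviors side by side: the CPU/GPU notebooks end up with two kernels in one shared runtime, while each Spark notebook gets its own dedicated cluster.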
If necessary, you can restart or reconnect to a kernel. When you restart a kernel, the kernel is stopped and then started again in the same session, but all execution results are lost. When you reconnect to a kernel after losing the connection, the notebook is connected to the same kernel session, and all previously saved execution results are available.
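The restart-versus-reconnect behavior can be sketched as a toy kernel session. This is illustrative only and not the real Jupyter kernel protocol; the point is simply which operation preserves execution results.

```python
# Toy kernel session illustrating restart vs. reconnect semantics
# (illustrative only; not a real Jupyter API).

class KernelSession:
    def __init__(self):
        self.namespace = {}      # variables defined by executed cells
        self.connected = True

    def execute(self, name, value):
        self.namespace[name] = value

    def restart(self):
        # Restart: same session, but all execution results are lost.
        self.namespace = {}
        self.connected = True

    def disconnect(self):
        self.connected = False

    def reconnect(self):
        # Reconnect: previously saved results are still available.
        self.connected = True

session = KernelSession()
session.execute("x", 42)

session.disconnect()
session.reconnect()
assert session.namespace == {"x": 42}   # results survive a reconnect

session.restart()
assert session.namespace == {}          # results are lost on a restart
```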
If you accidentally close the browser window while the notebook is still running, or if the system logs you out during a long-running job, the kernel remains active. When you reopen the same notebook, it is connected to the same kernel and all output cells are retained. However, execution progress can be restored only for notebooks that run in a local kernel. If your notebook runs in a Spark or Hadoop cluster, any notebook changes that were not saved before you left the notebook or closed the browser window are lost, and execution progress is not restored when the notebook page is reopened.
Parent topic: Creating notebooks