Coding and running a notebook (Watson Studio)

After you create a notebook, you’re ready to start writing and running code to analyze data.

A notebook runs in a Jupyter kernel in the environment that you specified at the time you created the notebook. If the environment is a standard default environment and you select this environment for more than one notebook, multiple notebook kernels are started in the same runtime. If the environment is a Spark environment, the kernel is started in a dedicated Spark runtime.

Before you start coding, become familiar with the notebook interface and how to code in Markdown to annotate your code.

To open a notebook in edit mode, click the edit icon. If the notebook is locked, you might be able to unlock and edit it.

When you open a new notebook in edit mode, the Jupyter service considers the notebook untrusted by default. When you run an untrusted notebook, content that is deemed untrusted is not executed. Untrusted content includes any HTML or JavaScript in Markdown cells, and in any output cells that you did not generate yourself.

To tell the service to trust your notebook content and execute all cells:

  1. Click Not Trusted in the upper right corner of the notebook.
  2. Click Trust to execute all cells.
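The Trust button uses Jupyter's standard notebook-signing mechanism. As a minimal sketch of how that mechanism works under the hood (assuming the `nbformat` package, which ships with Jupyter, is available; the in-memory notebook and the isolated data directory are illustrative, not part of Watson Studio):

```python
import tempfile

from nbformat.sign import NotebookNotary
from nbformat.v4 import new_code_cell, new_notebook

# Build a minimal notebook in memory as a stand-in for a real .ipynb file.
nb = new_notebook(cells=[new_code_cell("print('hello')")])

# A notary with an isolated data directory; by default Jupyter keeps its
# signature database in its own data dir (for example ~/.local/share/jupyter).
notary = NotebookNotary(data_dir=tempfile.mkdtemp())

print(notary.check_signature(nb))  # unsigned, so the notebook is untrusted
notary.sign(nb)                    # the equivalent of clicking Trust
print(notary.check_signature(nb))  # the notebook is now trusted
```

Because trust is recorded per user in the signature database, another user who opens the same notebook still sees it as untrusted until they trust it themselves.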

To develop analytic applications in a notebook, follow these general steps:

  1. Import preinstalled libraries, or add your own libraries to your environment.
  2. Load and access data. See Load and access data. Alternatively, you can access project assets programmatically.
  3. Prepare and analyze the data with the appropriate methods.
  4. Collaborate with other project members. You can add comments to notebooks by clicking the comment icon.
  5. If necessary, schedule the notebook to run at a regular time. See Schedule a notebook job.
  6. When you’re not actively working on the notebook, click File > Stop Kernel to stop the notebook kernel and free up resources.

    If you accidentally close the notebook browser window while the notebook is still running, or if the system logs you out because your job runs for a long time, the kernel remains active. When you reopen the same notebook, it connects to the same kernel and all output cells are retained. However, execution progress can be restored only for notebooks that run in a local kernel. If your notebook runs in a Spark or Hadoop cluster, all changes that you did not save before leaving the notebook or closing the browser window are lost.
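Steps 1 through 3 can be sketched in a single notebook cell. This example assumes the `pandas` library, which is preinstalled in most Watson Studio environments, and uses an illustrative in-memory data set rather than a real project asset:

```python
# Step 1: import a preinstalled library.
import pandas as pd

# Step 2: load data. Here an inline dictionary stands in for a real project
# asset; in a notebook you would typically read a file instead.
df = pd.DataFrame({
    "city": ["Oslo", "Cairo", "Lima"],
    "precipitation_mm": [763.0, 25.0, 13.0],
})

# Step 3: prepare and analyze the data with an appropriate method.
df["above_average"] = df["precipitation_mm"] > df["precipitation_mm"].mean()
print(df.describe())
```

In a notebook, each of these steps would usually live in its own cell so that you can rerun the analysis without reloading the data.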

Watch this video to see how to code and run a notebook to visualize and analyze precipitation data from the UNData portal.

This video provides a visual method as an alternative to following the written steps in this documentation.
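A precipitation analysis like the one in the video typically ends with a chart. The following sketch assumes the `matplotlib` library and uses hypothetical monthly values; a real notebook would load such figures from a data source like the UNData portal:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; not needed inside a notebook
import matplotlib.pyplot as plt

# Hypothetical monthly precipitation values in millimetres.
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
precipitation_mm = [49, 36, 47, 41, 53, 65]

fig, ax = plt.subplots()
ax.bar(months, precipitation_mm)
ax.set_xlabel("Month")
ax.set_ylabel("Precipitation (mm)")
ax.set_title("Monthly precipitation (illustrative values)")
fig.savefig("precipitation.png")  # in a notebook, the chart renders inline
```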

Learn more