Data load support

The Insert to code function is available for project data assets in Jupyter notebooks: click the Find and Add Data icon in the notebook sidebar and select an asset. The asset can be data from a file or a data source connection.

By clicking in an empty code cell in your notebook and then clicking the Insert to code link below an asset name, you can choose to:

  • Insert the data source access credentials. This option is available for all data assets that are added to a project. With the credentials, you can write your own code to access the asset and load the data into the data structures of your choice in your notebook.
  • Generate code that is added to the notebook cell. The inserted code serves as a quick start for working with a data set or connection. For production systems, carefully review the inserted code to decide whether to write your own code that better meets your needs and performance requirements.

    When you run the code cell, the data is accessed and loaded into the data structure that you selected. This option is not available for all data sources.
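For a CSV asset under a Python runtime, for example, the generated code typically ends in a pandas `read_csv` call. The sketch below is illustrative only, not the exact generated code: the in-memory buffer and the variable name `df_data_1` are stand-ins for the file handle and variable name that the real inserted code builds from your project credentials.

```python
import io

import pandas as pd

# Stand-in for the file handle that the generated code would obtain
# from project storage; an in-memory CSV keeps the sketch runnable.
body = io.StringIO("id,name,score\n1,Alice,90\n2,Bob,85\n")

# The inserted code typically ends with a line like this, loading the
# asset into a pandasDataFrame (the variable name is hypothetical).
df_data_1 = pd.read_csv(body)
print(df_data_1.shape)  # (2, 3)
```

Reviewing and adapting a snippet like this (for example, selecting columns or setting dtypes in `read_csv`) is usually the first step after inserting the generated code.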

The following tables show which data sources (file types and database connections) support generating code that loads data into a given data structure in a notebook. The Insert to code options for generating code vary depending on the data source, the notebook coding language, and the notebook runtime compute.

Supported file types

| Data source | Notebook coding language | Compute engine type | Available support to load data |
| --- | --- | --- | --- |
| CSV files | Python | Anaconda Python distribution | Load data into pandasDataFrame |
| | | With Spark | Load data into pandasDataFrame and sparkSessionDataFrame |
| | | With Hadoop | Load data into pandasDataFrame and sparkSessionDataFrame |
| | R | Anaconda R distribution | Load data into R data frame |
| | | With Spark | Load data into R data frame and sparkSessionDataFrame |
| | | With Hadoop | Load data into R data frame and sparkSessionDataFrame |
| | Scala | With Spark | Load data into sparkSessionDataFrame |
| | | With Hadoop | Load data into sparkSessionDataFrame |
| JSON files | Python | Anaconda Python distribution | Load data into pandasDataFrame |
| | | With Spark | Load data into pandasDataFrame and sparkSessionDataFrame |
| | | With Hadoop | Load data into pandasDataFrame and sparkSessionDataFrame |
| | R | Anaconda R distribution | Load data into R data frame |
| | | With Spark | Load data into R data frame and sparkSessionDataFrame |
| | | With Hadoop | Load data into R data frame and sparkSessionDataFrame |
| | Scala | With Spark | No data load support |
| | | With Hadoop | Load data into sparkSessionDataFrame |
| .xlsx and .xls files | Python | Anaconda Python distribution | Load data into pandasDataFrame |
| | | With Spark | Load data into pandasDataFrame |
| | R | Anaconda R distribution | No data load support |
| | | With Spark | No data load support |
| | Scala | With Spark | No data load support |
| Octet-stream file types | Python | Anaconda Python distribution | No data load support |
| | | With Spark | No data load support |
| | R | Anaconda R distribution | Load data into R data frame |
| | | With Spark | Load data into rDataObject |
| | Scala | With Spark | No data load support |
| Binary files | Python | Anaconda Python distribution | No data load support |
| | | With Spark | No data load support |
| | | With Hadoop | No data load support |
| | R | Anaconda R distribution | No data load support |
| | | With Spark | No data load support |
| | | With Hadoop | Load data into rDataObject |
| | Scala | With Spark | No data load support |
| | | With Hadoop | Load data into sparkSessionDataFrame |
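For JSON files under a plain Python runtime, the generated code follows the same pattern with pandas `read_json`. A minimal, runnable sketch, using an in-memory buffer as a stand-in for the project asset:

```python
import io

import pandas as pd

# In-memory JSON records standing in for a project JSON asset.
buf = io.StringIO('[{"id": 1, "city": "Austin"}, {"id": 2, "city": "Boston"}]')

# Load the JSON asset into a pandasDataFrame.
df = pd.read_json(buf)
print(list(df.columns))  # ['id', 'city']
```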

 

Supported database connections

| Data source | Notebook coding language | Compute engine type | Available support to load data |
| --- | --- | --- | --- |
| Db2 Warehouse on Cloud | Python | Anaconda Python distribution | Load data into pandasDataFrame |
| | | With Spark | Load data into pandasDataFrame and sparkSessionDataFrame |
| | | With Hadoop | Load data into pandasDataFrame, ibmdbpy, sparkSessionDataFrame, and sqlContext |
| | R | Anaconda R distribution | Load data into R data frame |
| | | With Spark | Load data into R data frame and sparkSessionDataFrame |
| | | With Hadoop | Load data into R data frame, ibmdbr, sparkSessionDataFrame, and sqlContext |
| | Scala | With Spark | Load data into sparkSessionDataFrame |
| | | With Hadoop | Load data into sparkSessionDataFrame and sqlContext |
| Amazon Simple Storage Service (S3); Amazon Simple Storage Service (S3) with an IAM access policy | Python | With Hadoop | Load data into pandasStreamingBody and sparkSessionSetup |
| | R | With Hadoop | Load data into rTextConnection and sparkSessionSetup |
| | Scala | With Hadoop | No data load support |
| IBM Db2 on Cloud; IBM Db2 Database; IBM Db2 z/OS; Microsoft SQL Server; Netezza (PureData System for Analytics); IBM Informix; Generic JDBC; IBM Databases for PostgreSQL; Oracle Database; Data Virtualization | Python | Anaconda Python distribution | Load data into pandasDataFrame |
| | | With Spark | Load data into pandasDataFrame and sparkSessionDataFrame |
| | R | Anaconda R distribution | Load data into R data frame |
| | | With Spark | Load data into R data frame and sparkSessionDataFrame |
| | Scala | With Spark | Load data into sparkSessionDataFrame |
| IBM Db2 Hosted; Apache HDFS; IBM Db2 Big SQL | Python | Anaconda Python distribution | Load data into pandasDataFrame |
| | | With Spark | Load data into pandasDataFrame and sparkSessionDataFrame |
| | R | Anaconda R distribution | Load data into R data frame |
| | | With Spark | Load data into R data frame and sparkSessionDataFrame |
| | Scala | With Spark | No data load support |
| Cognos Analytics | Python | Anaconda Python distribution | Load data into pandasDataFrame (*) |
| | | With Spark | No data load support |
| | R | Anaconda R distribution | Load data into R data frame (*) |
| | | With Spark | No data load support |
| | Scala | With Spark | No data load support |
| Microsoft Azure Cosmos DB | Python | Anaconda Python distribution | Load data into pandasDataFrame |
| | | With Spark | Load data into pandasDataFrame |
| | R | Anaconda R distribution | No data load support |
| | | With Spark | No data load support |
| | Scala | With Spark | No data load support |
| HTTP; Apache Cassandra; Amazon RDS for PostgreSQL | Python | Anaconda Python distribution | Load data into pandasDataFrame |
| | | With Spark | Load data into pandasDataFrame |
| | R | Anaconda R distribution | Load data into R data frame |
| | | With Spark | Load data into R data frame |
| | Scala | With Spark | Load data into sparkSessionDataFrame |
| Amazon RDS for MySQL | Python | Anaconda Python distribution | Load data into pandasDataFrame |
| | | With Spark | Load data into pandasDataFrame and sparkSessionDataFrame |
| | R | Anaconda R distribution | Load data into R data frame |
| | | With Spark | Load data into R data frame and sparkSessionDataFrame |
| | Scala | With Spark | Load data into sparkSessionDataFrame |
| IBM Cloud Object Storage | Python | Anaconda Python distribution | Load data into pandasDataFrame |
| | | With Spark | Load data into pandasDataFrame |
| | R | Anaconda R distribution | Load data into R data frame |
| | | With Spark | Load data into R data frame |
| | Scala | With Spark | No data load support |
| Mounted storage volumes | Python | Anaconda Python distribution | Load data into pandasDataFrame |
| | | With Spark | No data load support |
| | R | Anaconda R distribution | Load data into R data frame |
| | | With Spark | No data load support |
| | Scala | With Spark | No data load support |

(*) For Cognos Analytics, in the generated code: edit the path parameter in the last line of code, and remove the comment tagging.

To read data, see Reading data from a data source.
To search for data, see Searching for data objects.
To write data, see Writing data to a data source.
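For the SQL database connections above, the "Load data into pandasDataFrame" option generates code that runs a query through the connection's driver and reads the result into pandas. The sketch below illustrates the idea with `pandas.read_sql`; SQLite and the `sales` table are stand-ins for the database connection, credentials, and table that the generated code would use.

```python
import sqlite3

import pandas as pd

# SQLite stands in for any of the database connections listed above;
# the generated code would instead use the connection's own driver
# with the credentials inserted from the project.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("east", 100.0), ("west", 250.0)],
)

# Load the query result into a pandasDataFrame.
df = pd.read_sql("SELECT * FROM sales", conn)
print(len(df))  # 2
```

Pushing filters and column selection into the SQL query, rather than loading the whole table and filtering in pandas, is usually the first performance adjustment to make to such generated code.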